| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
614,040 | https://en.wikipedia.org/wiki/Intransitivity | In mathematics, intransitivity (sometimes called nontransitivity) is a property of binary relations that are not transitive relations. That is, we can find three values a, b, and c where the transitive condition does not hold.
Antitransitivity is a stronger property which describes a relation where, for any three values, the transitivity condition never holds.
Be warned: some authors use the term intransitivity to refer to antitransitivity.
Intransitivity
A relation is transitive if, whenever it relates some A to some B, and that B to some C, it also relates that A to that C. A relation is intransitive if it is not transitive. Assuming the relation is named R, it is intransitive if:
¬(∀ a, b, c: a R b ∧ b R c ⟹ a R c).
This statement is equivalent to
∃ a, b, c: a R b ∧ b R c ∧ ¬(a R c).
For example, the inequality relation, ≠, is intransitive. This can be demonstrated by replacing R with ≠ and choosing a = 1, b = 2, and c = 1. We have 1 ≠ 2 and 2 ≠ 1, but it is not true that 1 ≠ 1.
Notice that, for a relation to be intransitive, the transitivity condition only has to fail for some a, b, and c. It can still hold for others. For example, it holds when a = 1, b = 2, and c = 3: then 1 ≠ 2 and 2 ≠ 3, and it is true that 1 ≠ 3.
For a more complicated example of intransitivity, consider the relation R on the integers such that a R b if and only if a is a multiple of b or a divisor of b. This relation is intransitive since, for example, 2 R 6 (2 is a divisor of 6) and 6 R 3 (6 is a multiple of 3), but 2 is neither a multiple nor a divisor of 3. This does not imply that the relation is antitransitive (see below); for example, 2 R 6, 6 R 12, and 2 R 12 as well.
An example in biology comes from the food chain. Wolves feed on deer, and deer feed on grass, but wolves do not feed on grass. Thus, the relation among life forms is intransitive, in this sense.
Antitransitivity
Antitransitivity for a relation says that the transitive condition does not hold for any three values.
In the food-chain example above, the feed on relation is not transitive, but it still contains some transitivity: for instance, humans feed on rabbits, rabbits feed on carrots, and humans also feed on carrots.
A relation is antitransitive if this never occurs at all. The formal definition is:
∀ a, b, c: a R b ∧ b R c ⟹ ¬(a R c).
For example, the relation R on the integers, such that a R b if and only if a + b is odd, is antitransitive. If a R b and b R c, then either a and c are both odd and b is even, or vice-versa. In either case, a + c is even, so a R c does not hold.
A second example of an antitransitive relation: the defeated relation in knockout tournaments. If player A defeated player B and player B defeated player C, then A cannot have played C, and therefore A has not defeated C.
By transposition, each of the following formulas is equivalent to antitransitivity of R:
∀ a, b, c: a R b ∧ a R c ⟹ ¬(b R c)
∀ a, b, c: a R c ∧ b R c ⟹ ¬(a R b)
Properties
An antitransitive relation is always irreflexive.
An antitransitive relation on a set of ≥4 elements is never connex. On a 3-element set, a cyclic relation (a R b, b R c, c R a) has both properties.
An irreflexive and left- (or right-) unique relation is always anti-transitive. An example of the former is the mother relation. If A is the mother of B, and B the mother of C, then A cannot be the mother of C.
If a relation R is antitransitive, so is each subset of R.
Cycles
The term intransitivity is often used when speaking of scenarios in which a relation describes the relative preferences between pairs of options, and weighing several options produces a "loop" of preference:
A is preferred to B
B is preferred to C
C is preferred to A
Rock, paper, scissors; intransitive dice; and Penney's game are examples. Real combative relations of competing species, strategies of individual animals, and fights of remote-controlled vehicles in BattleBots shows ("robot Darwinism") can be cyclic as well.
Assuming no option is preferred to itself, i.e. the relation is irreflexive, a preference relation with a loop is not transitive. For if it were, each option in the loop would be preferred to each option, including itself. This can be illustrated for this example of a loop among A, B, and C. Assume the relation is transitive. Then, since A is preferred to B and B is preferred to C, also A is preferred to C. But then, since C is preferred to A, also A is preferred to A.
Therefore such a preference loop (or cycle) is known as an intransitivity.
Notice that a cycle is neither necessary nor sufficient for a binary relation to be not transitive. For example, an equivalence relation possesses cycles but is transitive. Now, consider the relation "is an enemy of" and suppose that the relation is symmetric and satisfies the condition that for any country, any enemy of an enemy of the country is not itself an enemy of the country. This is an example of an antitransitive relation that does not have any cycles. In particular, by virtue of being antitransitive the relation is not transitive.
The game of rock, paper, scissors is an example. The relation over rock, paper, and scissors is "defeats", and the standard rules of the game are such that rock defeats scissors, scissors defeats paper, and paper defeats rock. Furthermore, it is also true that scissors does not defeat rock, paper does not defeat scissors, and rock does not defeat paper. Finally, it is also true that no option defeats itself. This information can be depicted in a table:

| defeats | rock | scissors | paper |
|---|---|---|---|
| rock | 0 | 1 | 0 |
| scissors | 0 | 0 | 1 |
| paper | 1 | 0 | 0 |
The first argument of the relation is a row and the second one is a column; a one indicates that the relation holds, a zero that it does not. Now, notice that the following statement is true for any elements x, y, and z drawn (with replacement) from the set {rock, scissors, paper}: if x defeats y, and y defeats z, then x does not defeat z. Hence the relation is antitransitive.
Thus, a cycle is neither necessary nor sufficient for a binary relation to be antitransitive.
Occurrences in preferences
Intransitivity can occur under majority rule, in probabilistic outcomes of game theory, and in the Condorcet voting method in which ranking several candidates can produce a loop of preference when the weights are compared (see voting paradox).
Intransitive dice demonstrate that the relation "die X rolls a higher number than die Y more than half the time" need not be transitive.
In psychology, intransitivity often occurs in a person's system of values (or preferences, or tastes), potentially leading to unresolvable conflicts.
Analogously, in economics intransitivity can occur in a consumer's preferences. This may lead to consumer behaviour that does not conform to perfect economic rationality. Economists and philosophers have questioned whether violations of transitivity must necessarily lead to 'irrational behaviour' (see Anand (1993)).
Likelihood
It has been suggested that Condorcet voting tends to eliminate "intransitive loops" when large numbers of voters participate because the overall assessment criteria for voters balance out. For instance, voters may prefer candidates on several different units of measure, such as by order of social consciousness or by order of most fiscally conservative.
In such cases intransitivity reduces to a broader equation of numbers of people and the weights of their units of measure in assessing candidates.
Such as:
30% favor 60/40 weighting between social consciousness and fiscal conservatism
50% favor 50/50 weighting between social consciousness and fiscal conservatism
20% favor a 40/60 weighting between social consciousness and fiscal conservatism
While each voter may not assess the units of measure identically, the trend then becomes a single vector on which the consensus agrees as a preferred balance of candidate criteria.
References
Further reading
Bar-Hillel, M., & Margalit, A. (1988). How vicious are cycles of intransitive choice? Theory and Decision, 24(2), 119-145.
Properties of binary relations | Intransitivity | [
"Mathematics"
] | 1,717 | [
"Properties of binary relations",
"Mathematical relations",
"Binary relations"
] |
614,085 | https://en.wikipedia.org/wiki/Good%20manufacturing%20practice | Current good manufacturing practices (cGMP) are those conforming to the guidelines recommended by relevant agencies. Those agencies control the authorization and licensing of the manufacture and sale of food and beverages, cosmetics, pharmaceutical products, dietary supplements, and medical devices. These guidelines provide minimum requirements that a manufacturer must meet to assure that their products are consistently high in quality, from batch to batch, for their intended use.
The rules that govern each industry may differ significantly; however, the main purpose of GMP is always to prevent harm from occurring to the end user. Additional tenets include ensuring the end product is free from contamination, that it is consistent in its manufacture, that its manufacture has been well documented, that personnel are well trained, and that the product has been checked for quality more than just at the end phase. GMP is typically ensured through the effective use of a quality management system (QMS).
Good manufacturing practice, along with good agricultural practice, good laboratory practice and good clinical practice, are overseen by regulatory agencies in the United Kingdom, United States, Canada, various European countries, China, India and other countries.
High-level details
Good manufacturing practice guidelines provide guidance for manufacturing, testing, and quality assurance in order to ensure that a manufactured product is safe for human consumption or use. Many countries have legislated that manufacturers follow GMP procedures and create their own GMP guidelines that correspond with their legislation.
All guidelines follow a few basic principles:
Manufacturing facilities must maintain a clean and hygienic manufacturing area.
Manufacturing facilities must maintain controlled environmental conditions in order to prevent cross-contamination from adulterants and allergens that may render the product unsafe for human consumption or use.
Manufacturing processes must be clearly defined and controlled. All critical processes are validated to ensure consistency and compliance with specifications.
Manufacturing processes must be controlled, and any changes to the process must be evaluated. Changes that affect the quality of the drug are validated as necessary.
Instructions and procedures must be written in clear and unambiguous language using good documentation practices.
Operators must be trained to carry out and document procedures.
Records must be made, manually or electronically, during manufacture that demonstrate that all the steps required by the defined procedures and instructions were in fact taken and that the quantity and quality of the food or drug was as expected. Deviations must be investigated and documented.
Records of manufacture (including distribution) that enable the complete history of a batch to be traced must be retained in a comprehensible and accessible form.
Any distribution of products must minimize any risk to their quality.
A system must be in place for recalling any batch from sale or supply.
Complaints about marketed products must be examined, the causes of quality defects must be investigated, and appropriate measures must be taken with respect to the defective products and to prevent recurrence.
Good manufacturing practice is recommended with the goal of safeguarding the health of consumers and patients as well as producing quality products. In the United States, a food or drug may be deemed "adulterated" if it has passed all of the specifications tests but is found to be manufactured in a facility or condition which violates or does not comply with current good manufacturing guideline.
GMP standards are not prescriptive instructions on how to manufacture products. They are a series of performance-based requirements that must be met during manufacturing. When a company is setting up its quality program and manufacturing process, there may be many ways it can fulfill GMP requirements. It is the company's responsibility to determine the most effective and efficient quality process that meets both business and regulatory needs.
Regulatory agencies have recently begun to look at more fundamental quality metrics of manufacturers than just compliance with basic GMP regulations. US-FDA has found that manufacturers who have implemented quality metrics programs gain a deeper insight into employee behaviors that impact product quality.
In its Guidance for Industry "Data Integrity and Compliance With Drug CGMP" US-FDA states “it is the role of management with executive responsibility to create a quality culture where employees understand that data integrity is an organizational core value and employees are encouraged to identify and promptly report data integrity issues.” Australia's Therapeutic Goods Administration has said that recent data integrity failures have raised questions about the role of quality culture in driving behaviors. In addition, non-governmental organizations such as the International Society for Pharmaceutical Engineering (ISPE) and the Parenteral Drug Association (PDA) have developed information and resources to help pharmaceutical companies better understand why quality culture is important and how to assess the current situation within a site or organization.
Guideline versions
GMP is enforced in the United States by the U.S. Food and Drug Administration (FDA), under Title 21 CFR. The regulations use the phrase "current good manufacturing practices" (CGMP) to describe these guidelines. Courts may theoretically hold that a product is adulterated even if there is no specific regulatory requirement that was violated as long as the process was not performed according to industry standards. However, since June 2007, a different set of CGMP requirements have applied to all manufacturers of dietary supplements, with additional supporting guidance issued in 2010. Additionally, in the U.S., medical device manufacturers must follow what are called "quality system regulations" which are deliberately harmonized with ISO requirements, not necessarily CGMPs.
The World Health Organization (WHO) version of GMP is used by pharmaceutical regulators and the pharmaceutical industry in over 100 countries worldwide, primarily in the developing world. The European Union's GMP (EU GMP) enforces similar requirements to WHO GMP, as does the FDA's version in the US. Similar GMPs are used in other countries, with Australia, Canada, Japan, Saudi Arabia, Singapore, the Philippines, Vietnam and others having highly developed/sophisticated GMP requirements. In the United Kingdom, the Medicines Act (1968) covers most aspects of GMP in what is commonly referred to as "The Orange Guide", named for the color of its cover; it is officially known as Rules and Guidance for Pharmaceutical Manufacturers and Distributors.
Since the 1999 publication of Good Manufacturing Practice for Active Pharmaceutical Ingredients, by the International Conference on Harmonization (ICH), GMPs now apply in those countries and trade groupings that are signatories to ICH (the EU, Japan and the U.S.), and apply in other countries (e.g., Australia, Canada, Singapore) which adopt ICH guidelines for the manufacture and testing of active raw materials.
Enforcement
Within the European Union GMP inspections are performed by National Regulatory Agencies. GMP inspections are performed in Canada by the Health Products and Food Branch Inspectorate; in the United Kingdom by the Medicines and Healthcare products Regulatory Agency (MHRA); in the Republic of Korea (South Korea) by the Ministry of Food and Drug Safety (MFDS); in Australia by the Therapeutic Goods Administration (TGA); in Bangladesh by the Directorate General of Drug Administration (DGDA); in South Africa by the Medicines Control Council (MCC); in Brazil by the National Health Surveillance Agency (ANVISA); in India by state Food and Drugs Administrations (FDA), reporting to the Central Drugs Standard Control Organization; in Pakistan by the Drug Regulatory Authority of Pakistan; in Nigeria by NAFDAC; and by similar national organizations worldwide. Each of the inspectorates carries out routine GMP inspections to ensure that drug products are produced safely and correctly. Additionally, many countries perform pre-approval inspections (PAI) for GMP compliance prior to the approval of a new drug for marketing.
CGMP inspections
Regulatory agencies (including the FDA in the U.S. and regulatory agencies in many European nations) are authorized to conduct unannounced inspections, though some are scheduled. FDA routine domestic inspections are usually unannounced, but must be conducted according to 704(a) of the Food, Drug and Cosmetic Act (21 USCS § 374), which requires that they are performed at a "reasonable time". Courts have held that any time the firm is open for business is a reasonable time for an inspection.
Other good practices
Other good-practice systems, along the same lines as GMP, exist:
Good agricultural practice (GAP), for farming and ranching
Good clinical practice (GCP), for hospitals and clinicians conducting clinical studies on new drugs in humans
Good distribution practice (GDP) deals with the guidelines for the proper distribution of medicinal products for human use.
Good laboratory practice (GLP), for laboratories conducting non-clinical studies (toxicology and pharmacology studies in animals)
Good pharmacovigilance practice (GVP), for the safety of produced drugs
Good regulatory practice (GRP), for the management of regulatory commitments, procedures and documentation
Collectively, these and other good-practice requirements are referred to as "GxP" requirements, all of which follow similar philosophies. Other examples include good guidance practice and good tissue practice.
See also
Best practice
Corrective and preventive action (CAPA)
EudraLex
Food safety
Good automated manufacturing practice (GAMP) in the pharmaceutical industry
Site Master File
Washdown
References
External links
Pharmaceutical Inspection Cooperation Scheme: GMP Guides
World Health Organization GMP Guidelines
European Union GMP Guidelines
US CFR Title 21 parts 210 (GMP, general), 211 (GMP, finished pharmaceuticals), 212 (GMP, positron emission tomography drugs), 225 (GMP, medicated feeds), 226 (GMP, type A medicated articles).
Report on Optimizing and Leaning GMP Batch Record Design
Food safety
Pharmaceutical industry
Pharmaceuticals policy
Good practice
Life sciences industry | Good manufacturing practice | [
"Chemistry",
"Biology"
] | 1,966 | [
"Pharmaceutical industry",
"Pharmacology",
"Life sciences industry"
] |
614,094 | https://en.wikipedia.org/wiki/Replay%20attack | A replay attack (also known as a repeat attack or playback attack) is a form of network attack in which valid data transmission is maliciously or fraudulently repeated or delayed. This is carried out either by the originator or by an adversary who intercepts the data and re-transmits it, possibly as part of a spoofing attack by IP packet substitution. This is one of the lower-tier versions of a man-in-the-middle attack. Replay attacks are usually passive in nature.
Another way of describing such an attack is:
"an attack on a security protocol using a replay of messages from a different context into the intended (or original and expected) context, thereby fooling the honest participant(s) into thinking they have successfully completed the protocol run."
Example
Suppose Alice wants to prove her identity to Bob. Bob requests her password as proof of identity, which Alice dutifully provides (possibly after some transformation like hashing, or even salting, the password); meanwhile, Eve is eavesdropping on the conversation and keeps the password (or the hash). After the interchange is over, Eve (acting as Alice) connects to Bob; when asked for proof of identity, Eve sends Alice's password (or hash) read from the last session which Bob accepts, thus granting Eve access.
Prevention and countermeasures
Replay attacks can be prevented by tagging each encrypted component with a session ID and a component number. This combination of solutions does not use anything that is interdependent on one another; because there is no interdependency, there are fewer vulnerabilities. This works because a unique, random session ID is created for each run of the program, so a previous run becomes difficult to replicate. In this case, an attacker would be unable to perform the replay because on a new run the session ID would have changed.
Session IDs, also known as session tokens, are one mechanism that can be used to help avoid replay attacks. The way of generating a session ID works as follows.
Bob sends a one-time token to Alice, which Alice uses to transform the password and send the result to Bob. For example, she would use the token to compute a hash of the session token combined with the password, and send that result.
On his side Bob performs the same computation with the session token.
If and only if both Alice’s and Bob’s values match, the login is successful.
Now suppose an attacker Eve has captured this value and tries to use it on another session. Bob would send a different session token, and when Eve replies with her captured value it will be different from Bob's computation, so he will know it is not Alice.
Session tokens should be chosen by a random process (usually, pseudorandom processes are used). Otherwise, Eve may be able to pose as Bob, presenting some predicted future token, and convince Alice to use that token in her transformation. Eve can then replay her reply at a later time (when the previously predicted token is actually presented by Bob), and Bob will accept the authentication.
One-time passwords are similar to session tokens in that the password expires after it has been used or after a very short amount of time. They can be used to authenticate individual transactions in addition to sessions. These can also be used during the authentication process to help establish trust between the two parties that are communicating with each other.
Bob can also send nonces but should then include a message authentication code (MAC), which Alice should check.
Timestamping is another way of preventing a replay attack. Synchronization should be achieved using a secure protocol. For example, Bob periodically broadcasts the time on his clock together with a MAC. When Alice wants to send Bob a message, she includes her best estimate of the time on his clock in her message, which is also authenticated. Bob only accepts messages for which the timestamp is within a reasonable tolerance. Timestamps are also implemented during mutual authentication, when both Bob and Alice authenticate each other with unique session IDs, in order to prevent replay attacks. The advantages of this scheme are that Bob does not need to generate (pseudo-) random numbers and that Alice doesn't need to ask Bob for a random number. In networks that are unidirectional or near unidirectional, it can be an advantage. The trade-off is that replay attacks, if performed quickly enough, i.e. within that 'reasonable' limit, can succeed.
Kerberos protocol prevention
The Kerberos authentication protocol includes some countermeasures. In the classic case of a replay attack, a message is captured by an adversary and then replayed at a later date in order to produce an effect. For example, if a banking scheme were to be vulnerable to this attack, a message which results in the transfer of funds could be replayed over and over to transfer more funds than originally intended. However, the Kerberos protocol, as implemented in Microsoft Windows Active Directory, includes the use of a scheme involving time stamps to severely limit the effectiveness of replay attacks. Messages which are past the "time to live (TTL)" are considered old and are discarded.
There have been improvements proposed, including the use of a triple password scheme. These three passwords are used with the authentication server and the ticket-granting server (TGS). These servers use the passwords to encrypt messages with secret keys shared between the different servers; the encryption provided by these three keys helps prevent replay attacks.
Secure routing in ad hoc networks
Wireless ad hoc networks are also susceptible to replay attacks. In this case, the authentication system can be improved and made stronger by extending the AODV protocol. This method of improving the security of Ad Hoc networks increases the security of the network with a small amount of overhead. If there were to be extensive overhead then the network would run the risk of becoming slower and its performance would decrease. By keeping a relatively low overhead, the network can maintain better performance while still improving the security.
Challenge-Handshake Authentication Protocol
Authentication and sign-on by clients using Point-to-Point Protocol (PPP) are susceptible to replay attacks when using Password Authentication Protocol (PAP) to validate their identity, because the authenticating client sends its username and password in "normal text", and the authenticating server then sends its acknowledgment in response. An intercepting client is therefore free to read transmitted data and impersonate each of the client and server to the other, as well as to store client credentials for later impersonation to the server. Challenge-Handshake Authentication Protocol (CHAP) secures against this sort of replay attack during the authentication phase by instead using a "challenge" message from the authenticator, to which the client responds with a hash-computed value based on a shared secret (e.g. the client's password); the authenticator compares this with its own calculation from the challenge and shared secret to authenticate the client. By relying on a shared secret that has not itself been transmitted, as well as other features such as authenticator-controlled repetition of challenges and changing identifier and challenge values, CHAP provides limited protection against replay attacks.
Real world examples of replay attack susceptibility
There are several real-world examples of how replay attacks have been used and how the issues were detected and fixed in order to prevent further attacks.
Remote keyless-entry system for vehicles
Many vehicles on the road use a remote keyless system, or key fob, for the convenience of the user. Modern systems are hardened against simple replay attacks but are vulnerable to buffered replay attacks. This attack is performed by placing a device that can receive and transmit radio waves within range of the target vehicle. The transmitter will attempt to jam any RF vehicle unlock signal while receiving it and placing it in a buffer for later use. Upon further attempts to unlock the vehicle, the transmitter will jam the new signal, buffer it, and playback an old one, creating a rolling buffer that is one step ahead of the vehicle. At a later time, the attacker may use this buffered code to unlock the vehicle.
Text-dependent speaker verification
Various devices use speaker recognition to verify the identity of a speaker. In text-dependent systems, an attacker can record the target individual’s speech that was correctly verified by the system, then play the recording again to be verified by the system. A counter-measure was devised using spectral bitmaps from the stored speech of verified users. Replayed speech has a different pattern in this scenario and will then be rejected by the system.
Replay attacks on IoT devices
In the realm of smart home environments, Internet of Things (IoT) devices are increasingly vulnerable to replay attacks, where an adversary intercepts and replays legitimate communication signals between an IoT device and its companion app. These attacks can compromise a wide array of devices, including smart plugs, security cameras, and even household appliances.
A recent study demonstrated that a substantial portion of consumer IoT devices are prone to replay attacks. Researchers found that 75% of tested devices supporting local connectivity were vulnerable to such attacks. These vulnerabilities allow attackers to mimic legitimate commands, potentially enabling unauthorized actions such as turning on a smart kettle, unlocking doors, or manipulating security systems. Such breaches pose significant safety, security, and privacy risks, as malicious actors can gain control over critical home systems.
In popular culture
In the folk tale Ali Baba and the Forty Thieves, the thieves' captain used the passphrase "Open, Sesame" to open the door to their loot depot. This was overheard by Ali Baba, who later reused the passphrase to get access and collect as much of the loot as he could carry.
See also
Denial-of-service attack
Digest access authentication
Man-in-the-middle attack
Pre-play attack
Relay attack
Session replay
Telephone tapping
References
Cryptographic attacks | Replay attack | [
"Technology"
] | 2,067 | [
"Cryptographic attacks",
"Computer security exploits"
] |
614,129 | https://en.wikipedia.org/wiki/PTS-DOS | PTS-DOS (aka PTS/DOS) is a disk operating system, a DOS clone, developed in Russia by PhysTechSoft and Paragon Technology Systems.
History and versions
PhysTechSoft was formed in 1991 in Moscow, Russia by graduates and members of MIPT, informally known as PhysTech. At the end of 1993, PhysTechSoft released the first commercially available PTS-DOS as PTS-DOS v6.4. The version numbering followed MS-DOS version numbers, as Microsoft released MS-DOS 6.2 in November 1993.
In 1995, some programmers left PhysTechSoft and founded Paragon Technology Systems. They took source code with them and released their own version named PTS/DOS 6.51CD as well as S/DOS 1.0 ("Source DOS"), a stripped down open-source version. According to official PhysTechSoft announcements, these programmers violated both copyright laws and Russian military laws, as PTS-DOS was developed in close relationship with Russia's military and thus may be subject to military secrets law.
PhysTechSoft continued development on their own, released PTS-DOS v6.6, and presented PTS-DOS v6.65 at the CeBIT exhibition in 1997. The next version from PhysTechSoft, formally PTS/DOS Extended Version 6.70, was labeled PTS-DOS 2000 and is still being distributed as the last 16-bit PTS-DOS system.
Paragon continued their PTS-DOS line and released Paragon DOS Pro 2000 (also known and labeled in some places as PTS/DOS Pro 2000). According to Paragon, this was the last version and all development since then ceased. Moreover, this release contained bundled source code of older PTS-DOS v6.51.
Later, PhysTechSoft continued developing PTS-DOS and finally released PTS-DOS 32, formally known as PTS-DOS v7.0, which added support for the FAT32 file system.
PTS-DOS is certified by the Russian Ministry of Defense.
Commands
The following commands are supported by PTS-DOS 2000 Pro.
APPEND
ASK
ASSIGN
ATTR
BEEP
BREAK
CALL
CD
CHDIR
CHKDSK
CHOICE
CLS
COMMAND
COPY
CTTY
DATE
DEBUG
DEL
DIR
DISKCOPY
DISP
ECHO
ECHONLF
ERASE
EXE2BIN
EXIT
FDISK
FIND
FOR
FORMAT
GOTO
HISTORY
IF
JOIN
KEYB
LABEL
LOADFIX
MD
MEM
MKDIR
MKZOMBIE
MODE
MORE
NLSFUNC
PATH
PAUSE
PRINT
PROMPT
RD
RDZOMBIE
REM
REN
RENAME
REPLACE
RMDIR
SET
SETDRV
SETVER
SHARE
SHIFT
SORT
SUBST
SYS
TIME
TREE
TYPE
UNINSTALL
VER
VERIFY
VOL
Exclusive commands
UNINSTALL
This command is specific to PTS/DOS 2000. Paragon's description is as follows:
Purpose: Restores the booting of a system installed before PTS-DOS on the disk and restores its boot sector.
Syntax: UNINSTALL filename [drive:]
Hardware requirements
Intel 80286 CPU or better
512 KB RAM or more
See also
Comparison of DOS operating systems
АДОС, unrelated to Russian MS-DOS
Russian MS-DOS
References
External links
Unofficial PTS-DOS FAQ
Paragon GmbH homepage
DOS variants
Assembly language software
Disk operating systems
1993 software | PTS-DOS | [
"Technology"
] | 677 | [
"Operating system stubs",
"Computing stubs"
] |
614,147 | https://en.wikipedia.org/wiki/Knuth%E2%80%93Bendix%20completion%20algorithm | The Knuth–Bendix completion algorithm (named after Donald Knuth and Peter Bendix) is a semi-decision algorithm for transforming a set of equations (over terms) into a confluent term rewriting system. When the algorithm succeeds, it effectively solves the word problem for the specified algebra.
Buchberger's algorithm for computing Gröbner bases is a very similar algorithm. Although developed independently, it may also be seen as the instantiation of Knuth–Bendix algorithm in the theory of polynomial rings.
Introduction
For a set E of equations, its deductive closure (↔E*) is the set of all equations that can be derived by applying equations from E in any order.
Formally, E is considered a binary relation, (→E) is its rewrite closure, and (↔E*) is the equivalence closure of (→E).
For a set R of rewrite rules, its deductive closure (→R* ∘ ←R*) is the set of all equations that can be confirmed by applying rules from R left-to-right to both sides until they are literally equal.
Formally, R is again viewed as a binary relation, (→R) is its rewrite closure, (←R) is its converse, and (→R* ∘ ←R*) is the relation composition of their reflexive transitive closures (→R* and ←R*).
For example, if E = {1⋅x = x, x−1⋅x = 1, (x⋅y)⋅z = x⋅(y⋅z)} are the group axioms, the derivation chain
a−1⋅(a⋅b) ↔E (a−1⋅a)⋅b ↔E 1⋅b ↔E b
demonstrates that a−1⋅(a⋅b) ↔E* b is a member of E's deductive closure.
If R = {1⋅x → x, x−1⋅x → 1, (x⋅y)⋅z → x⋅(y⋅z)} is a "rewrite rule" version of E, the derivation chain
(a−1⋅a)⋅b →R 1⋅b →R b
demonstrates that (a−1⋅a)⋅b (→R* ∘ ←R*) b is a member of R's deductive closure.
However, there is no way to derive a−1⋅(a⋅b) (→R* ∘ ←R*) b similar to above, since a right-to-left application of the rule (x⋅y)⋅z → x⋅(y⋅z) is not allowed.
The Knuth–Bendix algorithm takes a set E of equations between terms, and a reduction ordering (>) on the set of all terms, and attempts to construct a confluent and terminating term rewriting system R that has the same deductive closure as E.
While proving consequences from E often requires human intuition, proving consequences from R does not.
For more details, see Confluence (abstract rewriting)#Motivating examples, which gives an example proof from group theory, performed both using E and using R.
Rules
Given a set E of equations between terms, the following inference rules can be used to transform it into an equivalent convergent term rewrite system (if possible) (sect. 8.1, p. 293):
Deduce: add an equation s = t to E, if (s, t) is a critical pair of R.
Orient: remove an equation s = t from E and add the rule s → t to R, if s > t.
Delete: remove a trivial equation s = s from E.
Simplify: replace an equation s = t in E by s = u, if t →R u.
Compose: replace a rule s → t in R by s → u, if t →R u.
Collapse: replace a rule t → s in R by the equation u = s in E, if t →R u using a rule l → r with (t → s) ▻ (l → r).
They are based on a user-given reduction ordering (>) on the set of all terms; it is lifted to a well-founded ordering (▻) on the set of rewrite rules by defining (s → t) ▻ (l → r) if
s ▷ l in the encompassment ordering, or
s and l are literally similar and t > r.
Example
The following example run, obtained from the E theorem prover, computes a completion of the (additive) group axioms as in Knuth, Bendix (1970).
It starts with the three initial equations for the group (neutral element 0, inverse elements, associativity), using f(X,Y) for X+Y, and i(X) for −X.
The 10 starred equations turn out to constitute the resulting convergent rewrite system.
"pm" is short for "paramodulation", implementing deduce. Critical pair computation is an instance of paramodulation for equational unit clauses.
"rw" is rewriting, implementing compose, collapse, and simplify.
Orienting of equations is done implicitly and not recorded.
See also Word problem (mathematics) for another presentation of this example.
String rewriting systems in group theory
An important case in computational group theory are string rewriting systems which can be used to give canonical labels to elements or cosets of a finitely presented group as products of the generators. This special case is the focus of this section.
Motivation in group theory
The critical pair lemma states that a term rewriting system is locally confluent (or weakly confluent) if and only if all its critical pairs are convergent. Furthermore, we have Newman's lemma which states that if an (abstract) rewriting system is strongly normalizing and weakly confluent, then the rewriting system is confluent. So, if we can add rules to the term rewriting system in order to force all critical pairs to be convergent while maintaining the strong normalizing property, then this will force the resultant rewriting system to be confluent.
Consider a finitely presented monoid where X is a finite set of generators and R is a set of defining relations on X. Let X* be the set of all words in X (i.e. the free monoid generated by X). Since the relations R generate an equivalence relation on X*, one can consider elements of M to be the equivalence classes of X* under R. For each class {w1, w2, ... } it is desirable to choose a standard representative wk. This representative is called the canonical or normal form for each word wk in the class. If there is a computable method to determine for each wk its normal form wi then the word problem is easily solved. A confluent rewriting system allows one to do precisely this.
Although the choice of a canonical form can theoretically be made in an arbitrary fashion this approach is generally not computable. (Consider that an equivalence relation on a language can produce an infinite number of infinite classes.) If the language is well-ordered then the order < gives a consistent method for defining minimal representatives, however computing these representatives may still not be possible. In particular, if a rewriting system is used to calculate minimal representatives then the order < should also have the property:
A < B → XAY < XBY for all words A,B,X,Y
This property is called translation invariance. An order that is both translation-invariant and a well-order is called a reduction order.
From the presentation of the monoid it is possible to define a rewriting system given by the relations R. If A = B is in R then either A < B, in which case B → A is a rule in the rewriting system, or A > B and A → B. Since < is a reduction order, a given word W can be reduced W > W_1 > ... > W_n, where W_n is irreducible under the rewriting system. However, depending on the rules that are applied at each step Wi → Wi+1, it is possible to end up with two different irreducible reductions Wn ≠ W'm of W. However, if the rewriting system given by the relations is converted to a confluent rewriting system via the Knuth–Bendix algorithm, then all reductions are guaranteed to produce the same irreducible word, namely the normal form for that word.
Description of the algorithm for finitely presented monoids
Suppose we are given a presentation ⟨ X ∣ R ⟩, where X is a set of generators and R is a set of relations giving the rewriting system. Suppose further that we have a reduction ordering < among the words generated by X (e.g., shortlex order). For each relation Pi = Qi in R, suppose Qi < Pi. Thus we begin with the set of reductions Pi → Qi.
First, if any relation Pi = Qi can be reduced, replace Pi and Qi with the reductions.
Next, we add more reductions (that is, rewriting rules) to eliminate possible exceptions of confluence. Suppose that two left-hand sides Pi and Pj overlap.
Case 1: either the prefix of Pi equals the suffix of Pj, or vice versa. In the former case, we can write Pi = BC and Pj = AB; in the latter case, Pi = AB and Pj = BC.
Case 2: either Pi is completely contained in (surrounded by) Pj, or vice versa. In the former case, we can write Pi = B and Pj = ABC; in the latter case, Pi = ABC and Pj = B.
In either case, reduce the word ABC using Pi first, then using Pj first. Call the results r1 and r2, respectively. If r1 ≠ r2, then we have an instance where confluence could fail. Hence, add the reduction max(r1, r2) → min(r1, r2) to R.
After adding a rule to R, remove any rules in R that might have reducible left sides (after checking if such rules have critical pairs with other rules).
Repeat the procedure until all overlapping left sides have been checked.
Examples
A terminating example
Consider the monoid:
⟨ x, y ∣ x³ = y³ = (xy)³ = 1 ⟩.
We use the shortlex order. This is an infinite monoid but nevertheless, the Knuth–Bendix algorithm is able to solve the word problem.
Our beginning three reductions are therefore
(1) x³ → 1,
(2) y³ → 1,
(3) (xy)³ → 1.
A suffix of x³ (namely x) is a prefix of (xy)³, so consider the overlap word x³yxyxy. Reducing it using (1), we get yxyxy. Reducing it using (3), we get x². Hence, we get yxyxy = x², giving the reduction rule
(4) yxyxy → x².
Similarly, using the overlap word xyxyxy³ and reducing using (2) and (3), we get xyxyx = y². Hence the reduction
(5) xyxyx → y².
Both of these rules obsolete (3), so we remove it.
Next, consider the word yxyxy³, obtained by overlapping (4) and (2). Reducing it the two ways gives yxyx = x²y², so we add the rule
(6) yxyx → x²y².
Considering the word xyxyx³, obtained by overlapping (5) and (1), we get y²x² = xyxy, so we add the rule
(7) y²x² → xyxy.
These obsolete rules (4) and (5), so we remove them.
Now, we are left with the rewriting system
(1) x³ → 1,
(2) y³ → 1,
(6) yxyx → x²y²,
(7) y²x² → xyxy.
Checking the overlaps of these rules, we find no potential failures of confluence. Therefore, we have a confluent rewriting system, and the algorithm terminates successfully.
A non-terminating example
The order of the generators may crucially affect whether the Knuth–Bendix completion terminates. As an example, consider the free Abelian group, given by a monoid presentation on two generators and their formal inverses.
The Knuth–Bendix completion with respect to a lexicographic order finishes with a convergent system; however, considering the length-lexicographic order, it does not finish, for there are no finite convergent systems compatible with this latter order.
Generalizations
If Knuth–Bendix does not succeed, it will either run forever and produce successive approximations to an infinite complete system, or fail when it encounters an unorientable equation (i.e. an equation that it cannot turn into a rewrite rule). An enhanced version will not fail on unorientable equations and produces a ground confluent system, providing a semi-algorithm for the word problem.
The notion of logged rewriting discussed in the paper by Heyworth and Wensley listed below allows some recording or logging of the rewriting process as it proceeds. This is useful for computing identities among relations for presentations of groups.
References
C. Sims. 'Computations with finitely presented groups.' Cambridge, 1994.
Anne Heyworth and C.D. Wensley. "Logged rewriting and identities among relators." Groups St. Andrews 2001 in Oxford. Vol. I,'' 256–276, London Math. Soc. Lecture Note Ser., 304, Cambridge Univ. Press, Cambridge, 2003.
External links
Knuth-Bendix Completion Visualizer
Computational group theory
Donald Knuth
Combinatorics on words
Rewriting systems | Knuth–Bendix completion algorithm | [
"Mathematics"
] | 2,244 | [
"Combinatorics on words",
"Combinatorics"
] |
614,192 | https://en.wikipedia.org/wiki/Paschen%27s%20law | Paschen's law is an equation that gives the breakdown voltage, that is, the voltage necessary to start a discharge or electric arc, between two electrodes in a gas as a function of pressure and gap length. It is named after Friedrich Paschen who discovered it empirically in 1889.
Paschen studied the breakdown voltage of various gases between parallel metal plates as the gas pressure and gap distance were varied:
With a constant gap length, the voltage necessary to arc across the gap decreased as the pressure was reduced and then increased gradually, exceeding its original value.
With a constant pressure, the voltage needed to cause an arc reduced as the gap size was reduced but only to a point. As the gap was reduced further, the voltage required to cause an arc began to rise and again exceeded its original value.
For a given gas, the voltage is a function only of the product of the pressure and gap length. The curve he found of voltage versus the pressure-gap length product (right) is called Paschen's curve. He found an equation that fit these curves, which is now called Paschen's law.
At higher pressures and gap lengths, the breakdown voltage is approximately proportional to the product of pressure and gap length, and the term Paschen's law is sometimes used to refer to this simpler relation. However, this is only roughly true, over a limited range of the curve.
Paschen curve
Early vacuum experimenters found a rather surprising behavior. An arc would sometimes take place in a long irregular path rather than at the minimal distance between the electrodes. For example, in air, at a pressure of one atmosphere, the distance for minimal breakdown voltage is about 7.5 μm. The voltage required to arc this distance is 327 V, which is insufficient to ignite the arcs for gaps that are either wider or narrower. For a 3.5 μm gap, the required voltage is 533 V, nearly twice as much. If 500 V were applied, it would not be sufficient to arc at the 2.85 μm distance, but would arc at a 7.5 μm distance.
Paschen found that breakdown voltage was described by the equation
V_B = B·p·d / (ln(A·p·d) − ln(ln(1 + 1/γse)))
where V_B is the breakdown voltage in volts, p is the pressure in pascals, d is the gap distance in meters, γse is the secondary-electron-emission coefficient (the number of secondary electrons produced per incident positive ion), A is the saturation ionization in the gas at a particular E/p (electric field/pressure), and B is related to the excitation and ionization energies.
The constants A and B interpolate the first Townsend coefficient α. They are determined experimentally and found to be roughly constant over a restricted range of E/p for any given gas. For example, for air with E/p in the range of 450 to 7500 V/(kPa·cm), A = 112.50 (kPa·cm)−1 and B = 2737.50 V/(kPa·cm).
The graph of this equation is the Paschen curve. By differentiating it with respect to p·d and setting the derivative to zero, the minimal voltage can be found. This yields
p·d = (e/A)·ln(1 + 1/γse)
and predicts the occurrence of a minimal breakdown voltage for p·d = 7.5×10−6 m·atm. This is 327 V in air at standard atmospheric pressure at a distance of 7.5 μm.
The composition of the gas determines both the minimal arc voltage and the distance at which it occurs. For argon, the minimal arc voltage is 137 V at a larger 12 μm. For sulfur dioxide, the minimal arc voltage is 457 V at only 4.4 μm.
Long gaps
For air at standard conditions for temperature and pressure (STP), the voltage needed to arc a 1-metre gap is about 3.4 MV. The intensity of the electric field for this gap is therefore 3.4 MV/m.
The electric field needed to arc across the minimal-voltage gap is much greater than what is necessary to arc a gap of one metre. At large gaps (or large pd) Paschen's Law is known to fail. The Meek Criteria for breakdown is usually used for large gaps.
It takes into account non-uniformity in the electric field and formation of streamers due to the build up of charge within the gap that can occur over long distances. For a 7.5 μm gap the arc voltage is 327 V, which is 43 MV/m. This is about 13 times greater than the field strength for the 1-metre gap. The phenomenon is well verified experimentally and is referred to as the Paschen minimum.
The equation loses accuracy for gaps under about 10 μm in air at one atmosphere and incorrectly predicts an infinite arc voltage at a gap of about 2.7 μm. Breakdown voltage can also differ from the Paschen curve prediction for very small electrode gaps, when field emission from the cathode surface becomes important.
Physical mechanism
The mean free path of a molecule in a gas is the average distance between its collision with other molecules. This is inversely proportional to the pressure of the gas, given constant temperature. In air at STP the mean free path of molecules is about 96 nm. Since electrons are much smaller, their average distance between colliding with molecules is about 5.6 times longer, or about 0.5 μm. This is a substantial fraction of the 7.5 μm spacing between the electrodes for minimal arc voltage. If the electron is in an electric field of 43 MV/m, it will be accelerated and acquire 21.5 eV of energy in 0.5 μm of travel in the direction of the field. The first ionization energy needed to dislodge an electron from nitrogen molecule is about 15.6 eV. The accelerated electron will acquire more than enough energy to ionize a nitrogen molecule. This liberated electron will in turn be accelerated, which will lead to another collision. A chain reaction then leads to avalanche breakdown, and an arc takes place from the cascade of released electrons.
More collisions will take place in the electron path between the electrodes in a higher-pressure gas. When the pressure–gap product is high, an electron will collide with many different gas molecules as it travels from the cathode to the anode. Each of the collisions randomizes the electron direction, so the electron is not always being accelerated by the electric field—sometimes it travels back towards the cathode and is decelerated by the field.
Collisions reduce the electron's energy and make it more difficult for it to ionize a molecule. Energy losses from a greater number of collisions require larger voltages for the electrons to accumulate sufficient energy to ionize many gas molecules, which is required to produce an avalanche breakdown.
On the left side of the Paschen minimum, the product is small. The electron mean free path can become long compared to the gap between the electrodes. In this case, the electrons might gain large amounts of energy, but have fewer ionizing collisions. A greater voltage is therefore required to assure ionization of enough gas molecules to start an avalanche.
Derivation
Basics
To calculate the breakdown voltage, a homogeneous electrical field is assumed. This is the case in a parallel-plate capacitor setup. The electrodes may have the distance d. The cathode is located at the point x = 0.
To get impact ionization, the electron energy must become greater than the ionization energy of the gas atoms between the plates. Per length of path a number α of ionizations will occur. α is known as the first Townsend coefficient, as it was introduced by Townsend.
The increase of the electron current Γe can be described for the assumed setup as
Γe(x = d) = Γe(x = 0)·e^(αd).    (1)
(So the number of free electrons at the anode is equal to the number of free electrons at the cathode multiplied by the factor due to impact ionization. The larger α and/or d, the more free electrons are created.)
The number of created electrons is
ΔΓe(d) = Γe(x = d) − Γe(x = 0) = Γe(x = 0)·(e^(αd) − 1).    (2)
Neglecting possible multiple ionizations of the same atom, the number of created ions is the same as the number of created electrons:
Γi(x = 0) = Γe(x = 0)·(e^(αd) − 1),    (3)
where Γi is the ion current. To keep the discharge going on, free electrons must be created at the cathode surface. This is possible because the ions hitting the cathode release secondary electrons at the impact. (For very large applied voltages also field electron emission can occur.) Without field emission, we can write
Γe(x = 0) = γ·Γi(x = 0),    (4)
where γ is the mean number of generated secondary electrons per ion. This is also known as the second Townsend coefficient. Assuming that Γe(x = 0) > 0, one gets the relation between the Townsend coefficients by putting (4) into (3) and transforming:
α·d = ln(1 + 1/γ).    (5)
Impact ionization
What is the amount of α? The number of ionizations depends upon the probability that an electron hits a gas molecule. This probability P is the relation of the cross-sectional area of a collision between electron and ion, σ, in relation to the overall area A that is available for the electron to fly through:
P = N·σ/A = x/λ.    (6)
As expressed by the second part of the equation, it is also possible to express the probability as the relation of the path x traveled by the electron to the mean free path λ (the distance at which another collision occurs).
N is the number of molecules which electrons can hit. It can be calculated using the equation of state of the ideal gas
p·V = N·kB·T    (7)
(p: pressure, V: volume, kB: Boltzmann constant, T: temperature).
Geometrically, the collision cross-section is σ = π·(re + rI)², where re and rI are the radii of the electron and the ion. As the radius of an electron can be neglected compared to the radius of an ion, it simplifies to σ = π·rI². Using this relation, putting (7) into (6) and transforming to λ, one gets
λ = kB·T/(p·σ).    (8)
The alteration of the current Γ of not-yet-collided electrons at every point x in the path can be expressed as
dΓ(x) = −Γ(x)·dx/λ.    (9)
This differential equation can easily be solved:
Γ(x) = Γ(x = 0)·e^(−x/λ).    (10)
The probability that λ > x (that there was not yet a collision at the point x) is
P(λ > x) = Γ(x)/Γ(x = 0) = e^(−x/λ).    (11)
According to its definition, α is the number of ionizations per length of path and thus the relation of the probability that there was no collision in the mean free path of the ions, and the mean free path of the electrons:
α = P(λ > λI)/λe = (1/λe)·e^(−λI/λe).    (12)
It was hereby considered that the energy W that a charged particle can gain between collisions depends on the electric field strength E and the charge q:
W = q·E·x, so that the distance over which the ionization energy EI is gained is λI = EI/(q·E).    (13)
Breakdown voltage
For the parallel-plate capacitor we have E = U/d, where U is the applied voltage. As a single ionization was assumed, q is the elementary charge e. We can now put (13) and (8) into (12) and get
α = (p·σ/(kB·T))·exp(−(EI·σ/(e·kB·T))·(p·d/U)).    (14)
Putting this into (5) and transforming to U, we get the Paschen law for the breakdown voltage, which was first investigated by Paschen and whose formula was first derived by Townsend:
U = B·p·d / (ln(A·p·d) − ln(ln(1 + 1/γ)))
with
A = σ/(kB·T) and B = σ·EI/(e·kB·T).
Plasma ignition
Plasma ignition in the definition of Townsend (Townsend discharge) is a self-sustaining discharge, independent of an external source of free electrons. This means that electrons from the cathode can reach the anode in the distance d and ionize at least one atom on their way. So according to the definition of α, this relation must be fulfilled:
α·d ≥ 1.
If α·d = 1 is used instead of (5), one gets for the breakdown voltage
U = B·p·d / ln(A·p·d).
Conclusions, validity
Paschen's law requires that:
There are already free electrons at the cathode (Γe(x = 0) > 0) which can be accelerated to trigger impact ionization. Such so-called seed electrons can be created by ionization by natural radioactivity or cosmic rays.
The creation of further free electrons is only achieved by impact ionization. Thus Paschen's law is not valid if there are external electron sources. This can, for example, be a light source creating secondary electrons by the photoelectric effect. This has to be considered in experiments.
Each ionized atom leads to only one free electron. However, multiple ionizations always occur in practice.
Free electrons at the cathode surface are created by the impacting ions. The problem is that the number of electrons created this way strongly depends on the material of the cathode, its surface (roughness, impurities) and the environmental conditions (temperature, humidity etc.). The experimental, reproducible determination of the factor γ is therefore nearly impossible.
The electrical field is homogeneous.
Effects with different gases
Different gases will have different mean free paths for molecules and electrons. This is because different molecules have ionization cross sections, that is, different effective diameters. Noble gases like helium and argon are monatomic, which makes them harder to ionize and tend to have smaller effective diameters. This gives them greater mean free paths.
Ionization potentials differ between molecules, as well as the speed that they recapture electrons after they have been knocked out of orbit. All three effects change the number of collisions needed to cause an exponential growth in free electrons. These free electrons are necessary to cause an arc.
See also
Atmospheric pressure
Breakdown voltage
Dielectric strength
Townsend discharge
References
External links
Electrical breakdown limits for MEMS
High Voltage Experimenter's Handbook
Paschen's law calculator
Breakdown Voltage vs. Pressure (archived in the Internet Archive, 16 April 2023)
Electrical Breakdown of Low Pressure Gases
Electrical Discharges
Pressure Dependence of Plasma Structure in Microwave Gas Breakdown at 110 GHz
Electrical discharge in gases
Electrochemistry
Electrostatics
Electrical breakdown
Eponymous laws of physics
Plasma physics equations | Paschen's law | [
"Physics",
"Chemistry"
] | 2,643 | [
"Physical phenomena",
"Electrical discharge in gases",
"Equations of physics",
"Plasma phenomena",
"Electrochemistry",
"Electrical phenomena",
"Plasma physics equations",
"Electrical breakdown",
"Ions",
"Matter"
] |
614,197 | https://en.wikipedia.org/wiki/Outline%20of%20classical%20architecture | The following outline is provided as an overview of and topical guide to classical architecture:
Classical architecture – architecture of classical antiquity, that is, ancient Greek architecture and the architecture of ancient Rome. It also refers to the style or styles of architecture influenced by those. For example, most of the styles originating in post-Renaissance Europe can be described as classical architecture. This broad use of the term is employed by Sir John Summerson in The Classical Language of Architecture.
What type of thing is classical architecture?
Classical architecture can be described as all of the following:
Architecture – both the process and product of planning, designing and construction. Architectural works, in the material form of buildings, are often perceived as cultural and political symbols and as works of art. Historical civilizations are often identified with their surviving architectural achievements.
Architectural style – classification of architecture in terms of the use of form, techniques, materials, time period, region and other stylistic influences.
Art – aesthetic expression for presentation or performance, and the work produced from this activity. The word "art" is therefore both a verb and a noun, as is the term "classical architecture".
One of the arts – as an art form, classical architecture is an outlet of human expression, that is usually influenced by culture and which in turn helps to change culture. Classical architecture is a physical manifestation of the internal human creative impulse.
A branch of the visual arts – visual arts is a class of art forms, including painting, sculpture, photography, architecture and others, that focus on the creation of works which are primarily visual in nature.
Form of classicism – high regard in the arts for classical antiquity, as setting standards for taste which the classicists seek to emulate.
Classicism in architecture – places emphasis on symmetry, proportion, geometry and the regularity of parts as they are demonstrated in the architecture of Classical antiquity and in particular, the architecture of Ancient Rome, of which many examples remained.
Classical architectural structures
Ancient Greek architectural structures
Ancient Greek architecture – architecture produced by the Greek-speaking people (Hellenic people) whose culture flourished on the Greek mainland and Peloponnesus, the Aegean Islands, and in colonies in Asia Minor and Italy for a period from about 900 BC until the 1st century AD, with the earliest remaining architectural works dating from around 600 BC. Ancient Greek architecture is best known from its temples, and the Parthenon is a prime example.
Acropolis
Acropolis of Athens
Agora
Ancient Agora of Athens
Ancient Greek temple – List of Ancient Greek temples
Adyton
Cella
Opisthodomos
Peristasis
Pronaos
Pteron
Types of temple
Amphiprostyle
Antae temple
Metroon
Naiskos
Peripteros
Pseudodipteral
Pseudoperipteros
Ancient Greek theatre – List of ancient Greek theatres
Parodos
Skene
Bouleuterion
Greek baths
Greek gardens
Gymnasium
Conisterium
Xystus
Heroön
Hippodrome
Mausoleum
Monopteros
Neorion
Palaestra
Peribolos
Propylaea
Prostyle
Prytaneion
Pteron
Rostral column
Stadium
Stoa – List of stoae
Tholos
Ancient Roman architectural structures
Ancient Roman architecture – the Roman architectural revolution, also known as the concrete revolution, was the widespread use in Roman architecture of the previously little-used architectural forms of the arch, vault, and dome. A crucial factor in this development that saw a trend to monumental architecture was the invention of Roman concrete (also called opus caementicium).
Public architecture
Amphitheatre – List of Roman amphitheatres
Aqueduct – List of aqueducts in the city of Rome, List of aqueducts in the Roman Empire, and List of Roman aqueducts by date
Basilica
Bridge – List of Roman bridges
Canal – List of Roman canals
Castellum
Circus – List of Roman circuses
Cistern – List of Roman cisterns
Dams and reservoirs – List of Roman dams and reservoirs
Defensive wall
Dome – List of Roman domes
Forum
Hippodrome
Horreum
Hypaethral
Insula
Monument – List of ancient monuments in Rome, List of monuments of the Roman Forum
Nymphaeum
Obelisk – List of obelisks in Rome
Odeon
Roman lighthouse
Roman watermill
Rostra
Temple – List of Ancient Roman temples
Antae temple
Mithraeum
Tetrapylon
Theatre – List of Roman theatres
Cavea
Scaenae frons
Thermae – List of Roman public baths
Sphaeristerium
Tholos
Triumphal arch – List of Roman triumphal arches
Victory column
Rostral column
Private architecture
Domus
Atrium
Cavaedium
Coenaculum
Cubiculum
Exedra
Fauces
Impluvium
Oecus
Peristylium
Taberna
Tablinum
Triclinium
Vestibulum
Roman gardens
Villa
Villa rustica
Architectural styles
Architectural style
Byzantine architecture – initially, the early Byzantine architecture was stylistically and structurally indistinguishable from earlier Roman architecture; the ancient ways of building lived on, but relatively soon the architecture developed into a distinct Byzantine style.
Pre-Romanesque architecture –
Romanesque architecture – Romanesque architecture is the first pan-European architectural style since Imperial Roman architecture. Combining features of ancient Roman and Byzantine buildings and other local traditions, Romanesque architecture is known by its massive quality.
Gothic architecture – Gothic architecture (with which classical architecture is often contrasted) can incorporate classical elements and details, but does not to the same degree reflect a conscious effort to draw upon the architectural traditions of antiquity.
Renaissance architecture – is a conscious revival and development of certain elements of ancient Greek and Roman architecture. The Renaissance style places emphasis on symmetry, proportion, geometry and the regularity of parts, as they are demonstrated in the architecture of classical antiquity and in particular ancient Roman architecture, of which many examples remained. The classical architecture of the Renaissance from the outset represents a highly specific interpretation of the classical ideas.
Palladian architecture – European style of architecture derived from the designs of the Italian Renaissance architect Andrea Palladio (1508–1580). Palladio's work was strongly based on the symmetry, perspective and values of the formal classical temple architecture of the Ancient Greeks and Romans.
Baroque architecture – Baroque and Rococo architecture are styles which, although classical at root, display an architectural language very much in their own right. Baroque architects took the basic elements of Renaissance architecture and made them higher, grander, more decorated, and more dramatic.
Georgian architecture – set of architectural styles current between 1720 and 1840. In the mainstream of Georgian style were both Palladian architecture and its whimsical alternatives, Gothic and Chinoiserie, which were the English-speaking world's equivalent of European Rococo.
Neoclassical architecture – architectural style produced by the neoclassical movement that began in the mid-18th century, manifested both in its details as a reaction against the Rococo style of naturalistic ornament, and in its architectural formulas as an outgrowth of some classicizing features of Late Baroque. In its purest form it is a style principally derived from the architecture of Classical Greece and the architecture of the Italian architect Andrea Palladio.
Empire style – sometimes considered the second phase of Neoclassicism, is an early-19th-century design movement in architecture, furniture, other decorative arts, and the visual arts followed in Europe and America until around 1830, although in the U.S. it continued in popularity in conservative regions outside the major metropolitan centers well past the mid-19th century.
Biedermeier architecture – neoclassical architecture in Central Europe between 1815 and 1848.
Resort architecture (Bäderarchitektur) – a specific neoclassical style that emerged at the end of the 18th century in German seaside resorts and is still widely used in the region today.
Federal architecture – classicizing architecture built in the United States between c. 1780 and 1830, and particularly from 1785 to 1815. This style shares its name with its era, the Federal Period.
Regency architecture – buildings built in Britain during the period in the early 19th century when George IV was Prince Regent, and also to later buildings following the same style. The style corresponds to the Biedermeier style in the German-speaking lands, Federal style in the United States and to the French Empire style.
Greek Revival architecture – architectural movement of the late 18th and early 19th centuries, predominantly in Northern Europe and the United States. A product of Hellenism, it may be looked upon as the last phase in the development of Neoclassical architecture.
Beaux-Arts architecture –
Nordic Classicism – style of architecture that briefly blossomed in the Nordic countries (Sweden, Denmark, Norway and Finland) between 1910 and 1930.
New Classical Architecture – architectural movement to revive and embrace classical architecture as a legitimate form of architecture for the 20th and 21st centuries. Beginning first with Postmodern architecture's criticism of modernist architectural movements like International Style, New Classical architecture seeks to be an alternative to the ongoing dominance of Modern architecture.
Architectural elements
Building elements
Acroterion – ornament mounted at the apex of the pediment of a building
Aedicula – small inset shrine
Aegis
Amphiprostyle
Anathyrosis
Anta
Antefix
Apollarium
Apse
Arch
Architrave
Archivolt
Arris
Atlas – male figure support
Bracket
Bucranium
Capital
Caryatid – female figure support
Cippus
Coffer
Colonnade – long sequence of columns, joined by their entablature
Column
Corbel
Cornerstone
Cornice
Crepidoma
Crocket
Cryptoporticus
Cupola
Decastyle
Diaulos
Diocletian (thermal) window
Dome – List of Roman domes
Eisodos
Entablature – superstructure resting on the column capitals
Epistyle – see Architrave
Euthynteria
Finial
Frieze
Geison
Gutta
Hypocaust
Hypostyle
Hypotrachelium
Imbrex and tegula – interlocking roof tiles used in ancient Greek and Roman architecture
Intercolumniation
Keystone
Metope
Modillion
Mosaic
Oculus
Ornament
Orthostates
Pediment
Peristyle
Pilae stacks
Pilaster – flat surface raised from the wall to resemble a column
Plinth
Portico
Portico types: tetrastyle, hexastyle, octastyle, decastyle
Post and lintel
Pronaos
Prostyle
Puteal
Quoin – masonry blocks in a wall's corner
Roof – List of Greco-Roman roofs
Rustication
Scamilli impares
Semi-dome
Sima
Sphinx
Spiral stairs – List of ancient spiral stairs
Spur
Stoa – covered walkway or portico
Stylobate
Suspensura
Tambour
Term
Triglyph
Tympanum
Taenia
Velarium
Vitruvian opening
Volute
Vomitorium
Building materials
Aggregate
Ceramic
Lime mortar
Marble
Roman brick
Roman concrete
Spolia
Terracotta
Classical orders
Classical orders
Aeolic order – an early order of Classical architecture
Greek orders
Doric order
Ionic order
Corinthian order
Roman orders
Composite order
Tuscan order
Types of buildings and structures
Amphitheatre
Bathhouse
Greek
Roman
Bouleuterion
Nymphaeum
Odeon
Stoa
Temple
Greek
Roman
Theater
Greek
Roman
Tholos
Treasury
Villa
Watermill – List of ancient watermills
Classical architecture organizations
The Institute of Classical Architecture and Art
Classical architecture publications
De architectura – treatise on architecture written by the Roman architect Vitruvius and dedicated to his patron, the emperor Caesar Augustus, as a guide for building projects. The work is one of the most important sources of modern knowledge of Roman building methods, planning, and design.
De re aedificatoria – classic architectural treatise written by Leon Battista Alberti between 1443 and 1452. Although largely dependent on Vitruvius' De architectura, it was the first theoretical book on the subject written in the Italian Renaissance and in 1485 became the first printed book on architecture.
The Five Orders of Architecture (1562) by Giacomo Barozzi da Vignola.
I quattro libri dell'architettura (1570) – a treatise on architecture by the architect Andrea Palladio.
The Classical Language of Architecture, a 1965 compilation of six BBC radio lectures given in 1963 by Sir John Summerson.
Persons influential in classical architecture
John Summerson – one of the leading British architectural historians of the 20th century.
John Travlos – Greek architectural historian, author.
See also
Architectural glossary
Index of architecture articles
Table of years in architecture
Timeline of architecture
References
"Greek Temple." The Macmillan Visual Dictionary. Unabridged Compact ed. 1995.
The Elements of Classical Architecture (Classical America Series in Art and Architecture). Gromort Georges (Author), Richard Sammons (Introductory Essay), W. W. Norton & Co. (June 20, 2001);
External links
Illustrated Glossary of Classical Architecture
Classical architecture – article from Encyclopædia Britannica online
Architecture, Classical
Architecture, Classical
Classical architecture
Classical studies
Architectural elements
Architectural history
Design history | Outline of classical architecture | [
"Technology",
"Engineering"
] | 2,606 | [
"Architectural history",
"Building engineering",
"Design history",
"Architectural elements",
"Architecture lists",
"Design",
"Components",
"Architecture"
] |
614,294 | https://en.wikipedia.org/wiki/Spacewatch | The Spacewatch Project is an astronomical survey that specializes in the study of minor planets, including various types of asteroids and comets at University of Arizona telescopes on Kitt Peak near Tucson, Arizona. The Spacewatch Project has been active longer than any other similar currently active programs.
Spacewatch was founded in 1980 by Tom Gehrels and Robert S. McMillan, and is currently led by astronomer Melissa Brucker at the University of Arizona. Spacewatch uses several telescopes on Kitt Peak for follow-up observations of near-Earth objects.
The Spacewatch Project uses three telescopes of apertures 0.9-m, 1.8-m, and 2.3-m. These telescopes are located on Kitt Peak, and the first two are dedicated to the purpose of locating Near-Earth Objects (NEOs).
The 36 inch (0.9 meter) telescope on Kitt Peak has been in use by Spacewatch since 1984, and the 72 inch (1.8 meter) Spacewatch telescope since 2000. The 36 inch telescope has continued in use and been further upgraded; in particular, both telescopes use electronic detectors.
Spacewatch's 1.8-meter telescope is the largest in the world used exclusively for asteroids and comets. It can find asteroids and comets anywhere from near-Earth space to regions beyond the orbit of Neptune, and can perform astrometry on the fainter of the objects that are already known. The telescope is pointed and tracked on stars with a real-time video-rate camera at folded prime focus.
Spacewatch was the first program to use CCDs to survey the sky for comets and asteroids. Later detector upgrades permitted faster coverage of the sky than the pre-2002 system.
Each year, Spacewatch observes approximately 35 radar targets, 50 near-Earth objects, and 100 potential spacecraft rendezvous destinations. From 2013 to 2016, Spacewatch observed half of all NEOs and potentially hazardous asteroids (PHAs) observed by anyone in that time. To date, Spacewatch has discovered over 179,000 minor planets numbered by the Minor Planet Center.
History
The 1.8 meter Spacewatch telescope and its building on Kitt Peak were dedicated on June 7, 1997, for the purpose of finding previously unknown asteroids and comets. Since January 1, 2003, Spacewatch has made roughly 2,400 separate-night detections of Near-Earth Objects.
There was an upgrade to the 0.9 meter which was funded by NASA and the Kirsch Foundation.
The Spacewatch Project is the longest-running of all present programs of astrometry of solar system objects.
Spacewatch in Action
Spacewatch conducted a survey that was proposed on May 12, 2006, and accepted on November 13, 2006. This survey used data taken over 34 months by the University of Arizona's Spacewatch Project based at Steward Observatory, Kitt Peak. Spacewatch revisited the same sky area every three to seven nights in order to track cohorts of main-belt asteroids. The survey discovered one new large Kuiper Belt Object (KBO) and detected six others, demonstrating that new sweeps of the sky are productive even over previously examined regions, simply because of the complexities of running large surveys over many nights under variable conditions.
Notable discoveries
Callirrhoe
5145 Pholus
9965 GNU
9885 Linux
9882 Stallman
9793 Torvalds
20000 Varuna
60558 Echeclus
, target of JAXA's Hayabusa2 extended mission.
65803 Didymos, target of the DART mission
(136617) 1994 CC
C/1992 J1
125P/Spacewatch
174567 Varda
The project rediscovered 719 Albert, a long-lost asteroid.
See also
Planetary Data System (PDS)
Spaceguard
List of near-Earth object observation projects
References
Planetary science
Astronomical discoveries by institution
Near-Earth object tracking | Spacewatch | [
"Astronomy"
] | 776 | [
"Planetary science",
"Astronomical sub-disciplines"
] |
614,297 | https://en.wikipedia.org/wiki/Solvay%20process | The Solvay process or ammonia–soda process is the major industrial process for the production of sodium carbonate (soda ash, Na2CO3). The ammonia–soda process was developed into its modern form by the Belgian chemist Ernest Solvay during the 1860s. The ingredients for this are readily available and inexpensive: salt brine (from inland sources or from the sea) and limestone (from quarries). The worldwide production of soda ash in 2005 was estimated at 42 million tonnes, which is more than six kilograms () per year for each person on Earth. Solvay-based chemical plants now produce roughly three-quarters of this supply, with the remaining being mined from natural deposits. This method superseded the Leblanc process.
History
The name "soda ash" is based on the principal historical method of obtaining alkali, which was by using water to extract it from the ashes of certain plants. Wood fires yielded potash and its predominant ingredient potassium carbonate (K2CO3), whereas the ashes from these special plants yielded "soda ash" and its predominant ingredient sodium carbonate (Na2CO3). The word "soda" (from the Middle Latin) originally referred to certain plants that grow in salt-rich environments; it was discovered that the ashes of these plants yielded the useful alkali soda ash. The cultivation of such plants reached a particularly high state of development in the 18th century in Spain, where the plants are named barrilla (or "barilla" in English). The ashes of kelp also yield soda ash and were the basis of an enormous 18th-century industry in Scotland. Alkali was also mined from dry lakebeds in Egypt.
By the late 18th century these sources were insufficient to meet Europe's burgeoning demand for alkali for soap, textile, and glass industries. In 1791, the French physician Nicolas Leblanc developed a method to manufacture soda ash using salt, limestone, sulfuric acid, and coal. Although the Leblanc process came to dominate alkali production in the early 19th century, the expense of its inputs and its polluting byproducts (including hydrogen chloride gas) made it apparent that it was far from an ideal solution.
It has been reported that in 1811 French physicist Augustin Jean Fresnel discovered that sodium bicarbonate precipitates when carbon dioxide is bubbled through ammonia-containing brines – which is the chemical reaction central to the Solvay process. The discovery wasn't published. As has been noted by Desmond Reilly, "The story of the evolution of the ammonium–soda process is an interesting example of the way in which a discovery can be made and then laid aside and not applied for a considerable time afterwards." Serious consideration of this reaction as the basis of an industrial process dates from the British patent issued in 1834 to H. G. Dyar and J. Hemming. There were several attempts to reduce this reaction to industrial practice, with varying success.
In 1861, Belgian industrial chemist Ernest Solvay turned his attention to the problem; he was apparently largely unaware of the extensive earlier work. His solution was a gas absorption tower in which carbon dioxide bubbled up through a descending flow of brine. This, together with efficient recovery and recycling of the ammonia, proved effective. By 1864 Solvay and his brother Alfred had acquired financial backing and constructed a plant in Couillet, today a suburb of the Belgian town of Charleroi. The new process proved more economical and less polluting than the Leblanc method, and its use spread. In 1874, the Solvays expanded their facilities with a new, larger plant at Nancy, France.
In the same year, Ludwig Mond visited Solvay in Belgium and acquired rights to use the new technology. He and John Brunner formed the firm of Brunner, Mond & Co., and built a Solvay plant at Winnington, near Northwich, Cheshire, England. The facility began operating in 1874. Mond was instrumental in making the Solvay process a commercial success. He made several refinements between 1873 and 1880 that removed byproducts that could slow or halt the process.
In 1884, the Solvay brothers licensed Americans William B. Cogswell and Rowland Hazard to produce soda ash in the US, and formed a joint venture (Solvay Process Company) to build and operate a plant in Solvay, New York.
By the 1890s, Solvay-process plants produced the majority of the world's soda ash.
In 1938 large deposits of the mineral trona were discovered near the Green River in Wyoming, from which sodium carbonate can be extracted more cheaply than it can be produced by the Solvay process. The original Solvay New York plant closed in 1986, replaced in the US by a factory in Green River. Throughout the rest of the world, the Solvay process remains the major source of soda ash.
Chemistry
The Solvay process results in soda ash (predominantly sodium carbonate (Na2CO3)) from brine (as a source of sodium chloride (NaCl)) and from limestone (as a source of calcium carbonate (CaCO3)). The overall process is:
2NaCl + CaCO3 -> Na2CO3 + CaCl2
The actual implementation of this global, overall reaction is intricate. A simplified description can be given using the four different, interacting chemical reactions illustrated in the figure. In the first step in the process, carbon dioxide (CO2) passes through a concentrated aqueous solution of sodium chloride (table salt, NaCl) and ammonia (NH3).
NaCl + CO2 + NH3 + H2O -> NaHCO3 + NH4Cl ---(I)
In industrial practice, the reaction is carried out by passing concentrated brine (salt water) through two towers. In the first, ammonia bubbles up through the brine and is absorbed by it. In the second, carbon dioxide bubbles up through the ammoniated brine, and sodium bicarbonate (baking soda) precipitates out of the solution. Note that, in a basic solution, NaHCO3 is less water-soluble than sodium chloride. The ammonia (NH3) buffers the solution at a basic (high) pH; without the ammonia, a hydrochloric acid byproduct would render the solution acidic, and arrest the precipitation. Here, NH3 along with ammoniacal brine acts as a mother liquor.
The necessary ammonia "catalyst" for reaction (I) is reclaimed in a later step, and relatively little ammonia is consumed. The carbon dioxide required for reaction (I) is produced by heating ("calcination") of the limestone at 950–1100 °C, and by calcination of the sodium bicarbonate (see below). The calcium carbonate (CaCO3) in the limestone is partially converted to quicklime (calcium oxide (CaO)) and carbon dioxide:
CaCO3 -> CO2 + CaO ---(II)
The sodium bicarbonate (NaHCO3) that precipitates out in reaction (I) is filtered out from the hot ammonium chloride (NH4Cl) solution, and the solution is then reacted with the quicklime (calcium oxide (CaO)) left over from heating the limestone in step (II).
2 NH4Cl + CaO -> 2 NH3 + CaCl2 + H2O ---(III)
CaO makes a strong basic solution. The ammonia from reaction (III) is recycled back to the initial brine solution of reaction (I).
The sodium bicarbonate (NaHCO3) precipitate from reaction (I) is then converted to the final product, sodium carbonate (washing soda: Na2CO3), by calcination (160–230 °C), producing water and carbon dioxide as byproducts:
2 NaHCO3 -> Na2CO3 + H2O + CO2 ---(IV)
The carbon dioxide from step (IV) is recovered for re-use in step (I). When properly designed and operated, a Solvay plant can reclaim almost all its ammonia, and consumes only small amounts of additional ammonia to make up for losses. The only major inputs to the Solvay process are salt, limestone and thermal energy, and its only major byproduct is calcium chloride, which is sometimes sold as road salt.
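To make the overall stoichiometry concrete, here is a minimal Python sketch of the ideal mass balance implied by the overall reaction 2 NaCl + CaCO3 -> Na2CO3 + CaCl2, assuming complete conversion, which real plants only approach; the molar masses are standard values.

```python
# Standard molar masses in g/mol.
MOLAR_MASS = {"NaCl": 58.44, "CaCO3": 100.09, "Na2CO3": 105.99, "CaCl2": 110.98}

def solvay_mass_balance(tonnes_soda_ash=1.0):
    """Ideal inputs and byproduct for 2 NaCl + CaCO3 -> Na2CO3 + CaCl2,
    in tonnes, assuming complete conversion of salt and limestone."""
    mol = tonnes_soda_ash * 1e6 / MOLAR_MASS["Na2CO3"]  # mol of Na2CO3 made
    return {
        "NaCl in":   2 * mol * MOLAR_MASS["NaCl"] / 1e6,
        "CaCO3 in":      mol * MOLAR_MASS["CaCO3"] / 1e6,
        "CaCl2 out":     mol * MOLAR_MASS["CaCl2"] / 1e6,
    }

print(solvay_mass_balance())
# About 1.10 t of salt and 0.94 t of limestone per tonne of soda ash,
# with roughly 1.05 t of calcium chloride byproduct.
```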
After the invention of the Haber process and other new ammonia-producing processes in the 1910s and 1920s, the price of ammonia dropped and there was less need to reclaim it. Thus, in the modified Solvay process developed by the Chinese chemist Hou Debang in the 1930s, the first few steps are the same as in the Solvay process, but the CaCl2 is supplanted by ammonium chloride (NH4Cl). Instead of treating the remaining solution with lime, carbon dioxide and ammonia are pumped into the solution, then sodium chloride is added until the solution saturates at 40 °C. Next, the solution is cooled to 10 °C. Ammonium chloride precipitates and is removed by filtration, and the solution is recycled to produce more sodium carbonate. Hou's process eliminates the production of calcium chloride. The byproduct ammonium chloride can be refined, used as a fertilizer, and may have greater commercial value than CaCl2, thus reducing the extent of waste beds.
Additional details of the industrial implementation of this process are available in the report prepared for the European Soda Ash Producer's Association.
Byproducts and wastes
The principal byproduct of the Solvay process is calcium chloride (CaCl2) in aqueous solution. The process has other waste and byproducts as well. Not all of the limestone that is calcined is converted to quicklime and carbon dioxide (in reaction II); the residual calcium carbonate and other components of the limestone become wastes. In addition, the salt brine used by the process is usually purified to remove magnesium and calcium ions, typically to form carbonates (MgCO3, CaCO3); otherwise, these impurities would lead to scale in the various reaction vessels and towers. These carbonates are additional waste products.
In inland plants, such as that in Solvay, New York, the byproducts have been deposited in "waste beds"; the weight of material deposited in these waste beds exceeded that of the soda ash produced by about 50%. These waste beds have led to water pollution, principally by calcium and chloride. The waste beds in Solvay, New York substantially increased the salinity in nearby Onondaga Lake, which used to be among the most polluted lakes in the U.S. and is a Superfund pollution site. As such waste beds age, they do begin to support plant communities, which have been the subject of several scientific studies.
At seaside locations, such as those at Saurashtra, Gujarat, India, the CaCl2 solution may be discharged directly into the sea, apparently without substantial environmental harm (although small amounts of heavy metals in it may be a problem). The major concern is that the discharge location falls within the Marine National Park of the Gulf of Kutch, which serves as habitat for coral reefs, seagrass, and seaweed communities. At Osborne, South Australia, a settling pond is now used to remove 99% of the CaCl2, as the former discharge was silting up the shipping channel. At Rosignano Solvay in Tuscany, Italy, the limestone waste produced by the Solvay factory has changed the landscape, producing the "Spiagge Bianche" ("White Beaches"). A report published in 1999 by the United Nations Environment Programme (UNEP) listed Spiagge Bianche among the priority pollution hot spots in the coastal areas of the Mediterranean Sea.
Carbon sequestration and the Solvay process
Variations in the Solvay process have been proposed for carbon sequestration. One idea is to react carbon dioxide, produced perhaps by the combustion of coal, to form solid carbonates (such as sodium bicarbonate) that could be permanently stored, thus avoiding carbon dioxide emission into the atmosphere. The Solvay process could be modified to give the overall reaction:
2 NaCl + CaCO3 + CO2 + H2O -> 2 NaHCO3 + CaCl2
Variations in the Solvay process have been proposed to convert carbon dioxide emissions into sodium carbonates, but carbon sequestration by calcium or magnesium carbonates appears more promising. However, the amount of carbon dioxide that can be sequestered in this way (compared to the total amount of carbon dioxide emitted by humanity) is very low. This is primarily because capturing carbon dioxide from controlled, concentrated emission sources such as coal-fired power plants is far more feasible than capturing it from non-concentrated, small-scale sources such as small fires, vehicle exhaust, and human respiration. Moreover, a variation on the Solvay process would most probably add an additional energy-consuming step, which would increase carbon dioxide emissions unless carbon-neutral energy sources like hydropower, nuclear energy, wind or solar power are used.
See also
Chloralkali process
Hou's process, a production method similar to the Solvay process but ammonia is not recycled
References
Further reading
The minimum energy required to calcine limestone is about per tonne.
External links
European Soda Ash Producer's Association (ESAPA)
Timeline of US plant at Solvay, New York
Salt and the Chemical Revolution
Process flow diagram of Solvay process
Ammonia
Chemical processes
Belgian inventions | Solvay process | [
"Chemistry"
] | 2,816 | [
"Chemical process engineering",
"Chemical processes",
"nan"
] |
614,302 | https://en.wikipedia.org/wiki/Hydrocarbon%20exploration | Hydrocarbon exploration (or oil and gas exploration) is the search by petroleum geologists and geophysicists for deposits of hydrocarbons, particularly petroleum and natural gas, in the Earth's crust using petroleum geology.
Exploration methods
Visible surface features such as oil seeps, natural gas seeps, and pockmarks (underwater craters caused by escaping gas) provide basic evidence of hydrocarbon generation (be it shallow or deep in the Earth). However, most exploration depends on highly sophisticated technology to detect and determine the extent of these deposits using exploration geophysics. Areas thought to contain hydrocarbons are initially subjected to a gravity survey, magnetic survey, passive seismic or regional seismic reflection surveys to detect large-scale features of the sub-surface geology. Features of interest (known as leads) are subjected to more detailed seismic surveys, which work on the principle of the time it takes for reflected sound waves to travel through matter (rock) of varying densities, and which use the process of depth conversion to create a profile of the substructure. Finally, when a prospect has been identified and evaluated and passes the oil company's selection criteria, an exploration well is drilled in an attempt to conclusively determine the presence or absence of oil or gas. Offshore, the risk can be reduced by using electromagnetic methods.
Oil exploration is an expensive, high-risk operation. Offshore and remote area exploration is generally only undertaken by very large corporations or national governments. Typical shallow shelf oil wells (e.g. North Sea) cost US$10 – 30 million, while deep water wells can cost up to US$100 million plus. Hundreds of smaller companies search for onshore hydrocarbon deposits worldwide, with some wells costing as little as US$100,000.
Elements of a petroleum prospect
A prospect is a potential trap which geologists believe may contain hydrocarbons. A significant amount of geological, structural and seismic investigation must first be completed to redefine the potential hydrocarbon drill location from a lead to a prospect. Four geological factors have to be present for a prospect to work and if any of them fail neither oil nor gas will be present.
Source rock When organic-rich rock such as oil shale or coal is subjected to high pressure and temperature over an extended period of time, hydrocarbons form.
Migration The hydrocarbons are expelled from source rock by three density-related mechanisms: the newly matured hydrocarbons are less dense than their precursors, which causes over-pressure; the hydrocarbons are lighter, and so migrate upwards due to buoyancy, and the fluids expand as further burial causes increased heating. Most hydrocarbons migrate to the surface as oil seeps, but some will get trapped.
Reservoir The hydrocarbons are contained in a reservoir rock. This is commonly a porous sandstone or limestone. The oil collects in the pores within the rock although open fractures within non-porous rocks (e.g. fractured granite) may also store hydrocarbons. The reservoir must also be permeable so that the hydrocarbons will flow to surface during production.
Trap The hydrocarbons are buoyant and have to be trapped within a structural (e.g. Anticline, fault block) or stratigraphic trap. The hydrocarbon trap has to be covered by an impermeable rock known as a seal or cap-rock in order to prevent hydrocarbons escaping to the surface.
Exploration risk
Hydrocarbon exploration is a high risk investment and risk assessment is paramount for successful project portfolio management. Exploration risk is a difficult concept and is usually defined by assigning confidence to the presence of the imperative geological factors, as discussed above. This confidence is based on data and/or models and is usually mapped on Common Risk Segment Maps (CRS Maps). High confidence in the presence of imperative geological factors is usually coloured green and low confidence coloured red. Therefore, these maps are also called Traffic Light Maps, while the full procedure is often referred to as Play Fairway Analysis (PFA). The aim of such procedures is to force the geologist to objectively assess all different geological factors. Furthermore, it results in simple maps that can be understood by non-geologists and managers to base exploration decisions on.
Terms used in petroleum evaluation
Bright spot On a seismic section, coda that have high amplitudes due to a formation containing hydrocarbons.
Chance of success An estimate of the chance of all the elements (see above) within a prospect working, described as a probability.
Dry hole A boring that does not contain commercial hydrocarbons. See also Dry-hole clause
Flat spot Possibly an oil-water, gas-water or gas-oil contact on a seismic section; flat due to gravity.
Full Waveform Inversion A supercomputer technique recently used in conjunction with seismic sensors to explore for petroleum deposits offshore.
Hydrocarbon in place Amount of hydrocarbon likely to be contained in the prospect. This is calculated using the volumetric equation GRV x N/G x Porosity x Sh / FVF (see the worked sketch after this list).
Gross rock volume (GRV) Amount of rock in the trap above the hydrocarbon water contact
Net sand Part of the GRV that has the lithological capacity to be a productive zone, i.e., excluding shale contamination.
Net reserve Part of net sand that has the minimum reservoir qualities; i.e. minimum porosity and permeability values.
Net/gross ratio (N/G) Proportion of the GRV formed by the reservoir rock (range is 0 to 1)
Porosity Percentage of the net reservoir rock occupied by pores (typically 5-35%)
Hydrocarbon saturation (Sh) Some of the pore space is filled with water - this must be discounted
Formation volume factor (FVF) Oil shrinks and gas expands when brought to the surface. The FVF converts volumes at reservoir conditions (high pressure and high temperature) to storage and sale conditions
Lead Potential accumulation is currently poorly defined and requires more data acquisition and/or evaluation in order to be classified as a prospect.
Play An area in which hydrocarbon accumulations or prospects of a given type occur. For example, the shale gas plays in North America include the Barnett, Eagle Ford, Fayetteville, Haynesville, Marcellus, and Woodford, among many others.
Prospect A lead which has been more fully evaluated.
Recoverable hydrocarbons Amount of hydrocarbon likely to be recovered during production. This is typically 10-50% in an oil field and 50-80% in a gas field.
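As referenced in the "Hydrocarbon in place" entry above, the volumetric equation is simple enough to sketch in a few lines of Python; every input value below is hypothetical and chosen only to illustrate the calculation.

```python
def hydrocarbons_in_place(grv_m3, net_to_gross, porosity, sh, fvf):
    """Volumetric estimate GRV x N/G x porosity x Sh / FVF,
    returned in stock-tank cubic metres."""
    return grv_m3 * net_to_gross * porosity * sh / fvf

# Hypothetical prospect: a 2 km^2 closure with a 50 m gross column.
grv = 2_000_000.0 * 50.0                     # gross rock volume, m^3
oiip = hydrocarbons_in_place(grv, net_to_gross=0.7, porosity=0.22,
                             sh=0.75, fvf=1.3)
recoverable = oiip * 0.35                    # 35% recovery, within the 10-50% oil range
print(f"{oiip:.3g} m^3 in place, {recoverable:.3g} m^3 recoverable")
```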
Licensing
Petroleum resources are typically owned by the government of the host country. In the United States, most onshore (land) oil and gas rights (OGM) are owned by private individuals, in which case oil companies must negotiate terms for a lease of these rights with the individual who owns the OGM. Sometimes this is not the same person who owns the land surface. In most nations the government issues licences to explore, develop and produce its oil and gas resources, which are typically administered by the oil ministry. There are several different types of licence. Oil companies often operate in joint ventures to spread the risk; one of the companies in the partnership is designated the operator who actually supervises the work.
Tax and Royalty Companies would pay a royalty on any oil produced, together with a profits tax (which can have expenditure offset against it). In some cases there are also various bonuses and ground rents (license fees) payable to the government - for example a signature bonus payable at the start of the licence. Licences are awarded in competitive bid rounds on the basis of either the size of the work programme (number of wells, seismic etc.) or size of the signature bonus.
Production sharing agreement (PSA) A PSA is more complex than a tax-and-royalty system. The companies bid on the percentage of the production that the host government receives (this may be variable with the oil price). There is often also participation by the government-owned national oil company (NOC), and various bonuses to be paid. Development expenditure is offset against production revenue.
Service contract This is when an oil company acts as a contractor for the host government, being paid to produce the hydrocarbons.
Reserves and resources
Resources are hydrocarbons which may or may not be produced in the future. A resource number may be assigned to an undrilled prospect or an unappraised discovery. Appraisal by drilling additional delineation wells or acquiring extra seismic data will confirm the size of the field and lead to project sanction. At this point the relevant government body gives the oil company a production licence which enables the field to be developed. This is also the point at which oil reserves and gas reserves can be formally booked.
Oil and gas reserves
Oil and gas reserves are defined as volumes that will be commercially recovered in the future. Reserves are separated into three categories: proved, probable, and possible. To be included in any reserves category, all commercial aspects must have been addressed, which includes government consent. Technical issues alone separate proved from unproved categories. All reserve estimates involve some degree of uncertainty.
Proved reserves are the highest valued category. Proved reserves have a "reasonable certainty" of being recovered, which means a high degree of confidence that the volumes will be recovered. Some industry specialists refer to this as P90, i.e., having a 90% certainty of being produced. The SEC provides a more detailed definition:
Proved oil and gas reserves are those quantities of oil and gas, which, by analysis of geoscience and engineering data, can be estimated with reasonable certainty to be economically producible—from a given date forward, from known reservoirs, and under existing economic conditions, operating methods, and government regulations—prior to the time at which contracts providing the right to operate expire, unless evidence indicates that renewal is reasonably certain, regardless of whether deterministic or probabilistic methods are used for the estimation. The project to extract the hydrocarbons must have commenced or the operator must be reasonably certain that it will commence the project within a reasonable time.
Probable reserves are volumes defined as "less likely to be recovered than proved, but more certain to be recovered than Possible Reserves". Some industry specialists refer to this as P50, i.e., having a 50% certainty of being produced.
Possible reserves are reserves which analysis of geological and engineering data suggests are less likely to be recoverable than probable reserves. Some industry specialists refer to this as P10, i.e., having a 10% certainty of being produced.
The term 1P is frequently used to denote proved reserves; 2P is the sum of proved and probable reserves; and 3P the sum of proved, probable, and possible reserves. The best estimate of recovery from committed projects is generally considered to be the 2P sum of proved and probable reserves. Note that these volumes only refer to currently justified projects or those projects already in development.
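Using the informal P90/P50/P10 reading above, a Monte Carlo sweep over uncertain volumetric inputs is one common way such percentiles are produced. The sketch below uses only the Python standard library; all input ranges are hypothetical, chosen purely to illustrate the method.

```python
import random

random.seed(1)  # reproducible illustration

def one_trial():
    """One random draw of recoverable volume in m^3; the uniform input
    ranges are hypothetical, chosen only to illustrate the method."""
    grv = random.uniform(0.8e8, 1.2e8)   # gross rock volume, m^3
    ntg = random.uniform(0.5, 0.8)       # net-to-gross ratio
    phi = random.uniform(0.15, 0.25)     # porosity
    sh  = random.uniform(0.60, 0.85)     # hydrocarbon saturation
    rf  = random.uniform(0.10, 0.50)     # recovery factor (oil-field range)
    return grv * ntg * phi * sh / 1.3 * rf   # FVF held at 1.3

draws = sorted(one_trial() for _ in range(100_000))
# P90 is the volume exceeded in 90% of trials, i.e. the 10th percentile.
p90, p50, p10 = (draws[int(q * len(draws))] for q in (0.10, 0.50, 0.90))
print(f"P90 ~ {p90:.3g}, P50 ~ {p50:.3g}, P10 ~ {p10:.3g} m^3")
```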
Reserve booking
Oil and gas reserves are the main asset of an oil company. Booking is the process by which they are added to the balance sheet.
In the United States, booking is done according to a set of rules developed by the Society of Petroleum Engineers (SPE). The reserves of any company listed on the New York Stock Exchange have to be stated to the U.S. Securities and Exchange Commission. Reported reserves may be audited by outside geologists, although this is not a legal requirement.
In Russia, companies report their reserves to the State Commission on Mineral Reserves (GKZ).
See also
Abiogenic petroleum origin
Decline curve analysis
Drill baby drill
Energy development
Future energy development
Giant oil and gas fields
Hubbert peak
NORM
Petroleum
Petroleum exploration in the Arctic
Petroleum licensing
Renewable energy
Seismic source
Site survey
Upward continuation
Wildcatter
Extraction of petroleum
References
External links
Oilfield Glossary
Exploration Geology Forums
Exploration
Petroleum geology
Natural gas
Oil exploration
Fossil fuels | Hydrocarbon exploration | [
"Chemistry"
] | 2,418 | [
"Organic compounds",
"Hydrocarbons",
"Petroleum geology",
"Petroleum"
] |
614,397 | https://en.wikipedia.org/wiki/Lake%20breakout | Lake breakout is the collapse of a lake, usually of high-altitude. High-altitude lakes tend to form in volcanic craters – where they are called crater lakes – or in valleys dammed as the result of earthquakes or glacial or volcanic deposition. Lake breakouts are most common a few weeks or months after a volcanic eruption as a river becomes blocked by volcanic debris.
Process
The walls of such lakes can be unstable and may be breached after fresh earthquakes or because of erosion. As water rushes outwards, the initial channel is cut wider and deeper, further increasing the flow. This may cause the lake's rim to collapse abruptly. The usual result is for huge amounts of water to be displaced, incorporating a great deal of sediment that increases the flow's volume by as much as two to four times, or even more. This produces violent floods and lahars with devastating effects for any settlements in their path.
Historical events
The larger a temporary lake is, the more extreme the likely breakout will be. One of the largest known to have occurred in recent geological history was the collapse, around 15,000 years ago, of the ancient Lake Bonneville, which was filled with meltwater from the last ice age and covered large areas of Utah, Idaho and Nevada. Erosion broke through the lake shore's lowest point, the Red Rock Pass in Idaho, releasing as much as 1,000 cubic miles (4,000 km3) of water within a period estimated to have lasted only two weeks. The energies released by the outburst were capable not just of stripping surface features, but of gouging out bedrock to a depth of many feet. It drastically reshaped the landscape downstream, carving out many of the features of the Snake River and its surrounding area. Glacial Lake Missoula also had lake breakouts, due to the repeated failure of ice dams (see Missoula Floods). These breakouts caused extensive erosion throughout eastern Washington.
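For a sense of scale, a back-of-the-envelope Python calculation of the average discharge implied by the figures above (4,000 km3 drained in roughly two weeks):

```python
# Average discharge implied by draining 4,000 km^3 in two weeks.
volume_m3 = 4_000 * 1e9           # 4,000 km^3 expressed in m^3
seconds = 14 * 24 * 3600          # two weeks in seconds
print(f"{volume_m3 / seconds:,.0f} m^3/s")  # roughly 3.3 million m^3/s
```

That is on the order of fifteen times the average discharge of the Amazon, which helps explain the bedrock erosion described above.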
It has been suggested that gigantic breakouts from underground lakes may have been responsible for carving some of the canyons of Mars.
Mitigation
Although there is little that can be done about many lake breakouts, some have been prevented (or at least delayed) by human intervention. The 1980 eruption of Mount St. Helens blocked nearby Castle Creek, forming a lake which geologists feared would produce a sudden lahar. The United States Army Corps of Engineers excavated an outlet channel which prevented the lake from overtopping its new debris dam.
References
Flood
Volcanoes
Seismology | Lake breakout | [
"Environmental_science"
] | 499 | [
"Hydrology",
"Flood"
] |
614,411 | https://en.wikipedia.org/wiki/High%20Times | High Times is an American monthly magazine (and cannabis brand) that advocates the legalization of cannabis as well as other counterculture ideas. The magazine was founded in 1974 by Tom Forcade. The magazine had its own book publishing division, High Times Books, and its own record label, High Times Records.
From 1974 to 2016, High Times was published by Trans-High Corporation (THC). Hightimes Holding Corp. acquired THC and the magazine in 2017.
Overview
High Times covers a wide range of topics, including politics, activism, drugs, education, sex, music, and film, as well as photography.
Like Playboy, each issue of High Times contains a centerfold photo; however, instead of a nude woman, High Times typically features a cannabis plant. (The magazine, however, often featured women—occasionally crowned as "Ms. High Times"—on the cover to help newsstand sales.) In addition, the magazine "published writers like Hunter S. Thompson, William S. Burroughs, Charles Bukowski, Allen Ginsberg, and Truman Capote."
Publication history
Origins
Forçade's previous attempt—via the Underground Press Syndicate/Alternative Press Syndicate—to reach a wide counterculture audience of underground papers had failed, even though he had the support of several noteworthy writers, photographers, and artists. Through High Times, Forçade was able to get his message to the masses without relying on mainstream media. Forçade was quoted as saying, "Those cavemen must've been stoned, no pun intended."
High Times was originally meant to be a joke: a single-issue lampoon of Playboy, substituting marijuana for sex. Brainstorming for the first issue's contents was conducted by a group that included Forcade, Rex Weiner, Ed Dwyer, Robert Singer, A. J. Weberman, Dana Beal, Ed Rosenthal, the underground cartoonist Yossarian a.k.a. Alan Shenker, and Cindy Ornsteen a.k.a. Anastasia Sirocco.
The first issue, 50 pages in total, with the tagline, "The Magazine of High Society," appeared in the summer of 1974. Advertising for the first issue had been pre-sold at that year's National Fashion and Boutique Show. "High Times #1 made its debut at the June 1974 show and was an instant success, selling out its first run of 10,000 copies and getting reprinted twice."
The magazine's first editor was Ed Dwyer, who had earlier written the text of the Woodstock music festival program booklet (as well as the Woodstock film program booklet). The magazine was initially distributed by Homestead Book Company and Big Rapids Distribution.
High Times was initially funded by drug money from the sale of illegal marijuana, but the magazine found an audience, becoming a monthly publication with a growing circulation, and the staff quickly grew to 40 people. Marijuana hydroponics growers were a large part of the magazine's advertiser base.
Financial struggles and legal battles
High Times founder Forçade committed suicide in November 1978. He bequeathed trusts to benefit High Times and the National Organization for the Reform of Marijuana Laws (NORML). (Forçade had been a supporter of NORML since the organization's founding in 1970.)
Following Forçade's death, the magazine was controlled by "mostly by Forçade’s relatives" and lawyer Michael John Kennedy.
Under the editorship of Larry Sloman (from 1979 to 1984), the magazine consistently struggled against marijuana prohibition laws, and fought to keep itself alive and publishing in an anti-cannabis atmosphere. Reflecting the time period, High Times began to feature positive coverage of cocaine as a recreational drug.
The magazine's former associate publisher, Rick Cusick, said the only way High Times managed to stay in business and never miss a publication date for over four decades was, "Really, really good lawyers, even though everybody knew I was talking about just one—Michael Kennedy." Kennedy served as the general counsel and chairman of the board for High Times for over 40 years until his death in 2016, when his wife and board member, Eleanora Kennedy, took the reins.
Mainstream success and the Hager era
In 1987, High Times was audited by the Audit Bureau of Circulation as reaching 500,000 copies an issue, rivaling Rolling Stone and National Lampoon.
In 1988, Steven Hager was hired as the magazine's editor. He changed the focus from the promotion of hard drugs (e.g., cocaine and heroin), and instead concentrated on advocating personal cultivation of cannabis. Hager became the first editor to publish and promote the work of hemp activist Jack Herer.
In 1988, under Hager's leadership, the magazine created the Cannabis Cup, a cannabis awards ceremony held every Thanksgiving in Amsterdam that later expanded to a number of U.S. cities. He also formed the High Times Freedom Fighters, the first hemp legalization group. The High Times Freedom Fighters were famous for dressing up in Colonial outfits and organizing hemp rallies across the United States. One rally, the Boston Freedom Rally, quickly became the largest marijuana-related political event in the country, drawing an audience of over 30,000 to the Boston Common in 2013.
The magazine advocated for the widespread use of hemp in the 1990s, publishing a quarterly magazine called Hemp Times and operating a retail location in Manhattan called Planet Hemp.
In 1991, the magazine began featuring celebrities on the cover of the magazine. Over the years, these included Cypress Hill, The Black Crowes, Ziggy Marley, Beavis and Butt-Head, Milla Jovovich, Ice Cube, Wu-Tang Clan, George Carlin, Ozzy Osbourne, Kevin Smith, Frances McDormand, Pauly Shore, Sacha Baron Cohen, Willie Nelson, Woody Harrelson, and Snoop Dogg.
In 1997, the magazine and Hager founded the Counterculture Hall of Fame, with inductions held annually on Thanksgiving as part of the Amsterdam Cannabis Cup event.
In the late 1980s Mike Edison began writing "Shoot the Tube," a featured column about television and politics for High Times. In 1998 Edison was named the magazine's publisher, and later took control of the editorial side of the magazine as well. As editor and publisher, he caused a furor among staffers by putting Black Sabbath singer Ozzy Osbourne on the cover, and then leaking to the New York Post's Page Six gossip column that thousands of dollars of pot had gone missing from the photo shoot. After taking the magazine to new heights in sales and advertising, Edison was instrumental in producing High Times' first feature film, High Times' Potluck. Edison left High Times in 2001.
In 2000, the magazine established the Stony Awards to recognize and celebrate notable stoner films and television episodes about cannabis. Six High Times Stony Awards ceremonies were held in New York City beginning in 2000, before the Stonys moved to Los Angeles in 2007. Award winners received a bong-shaped trophy. Starting in 2002, the Stonys presented the Thomas King Forçade Award for "stony achievement" in film.
Later developments
In 2003, Steven Hager was fired, and High Times' board of directors shifted the magazine's focus from marijuana to more literary content, hiring John Buffalo Mailer as executive editor. As a result, the magazine "lost a third of the circulation in nine months." Mailer left the magazine within a year—a succession of editors followed, including David Bienenstock, Rick Cusick, and Steve Bloom.
In 2004, High Times returned to its roots, releasing the CD High Volume: The Stoner Rock Collection. Hager was rehired, first as the creative director, and then in 2006, back to the position of editor-in-chief, but by 2009 he had returned to the role of creative director.
In November 2009, High Times celebrated its 35th anniversary.
In the period 2010–2013, the magazine put out a standalone publication that advocated for medical marijuana.
Hager was again let go by the magazine in 2013, eventually successfully suing High Times for defrauding him of his ownership shares in the company. Hager subsequently released a 20-part series on YouTube, titled The Strategic Meeting, showing the internal machinations inside the company. The video series asserts that Michael Kennedy stole the company from the rightful employees and subverted the original mission for his own private gain.
In October 2014, the magazine celebrated its 40th anniversary with a party attended by celebrities such as Susan Sarandon. In 2014, the High Times website was read by 500,000 to five million users each month.
Relocation to L.A., sale
In January 2017, the magazine announced it would be permanently relocating from New York to Los Angeles. This followed the legalization of marijuana in several West Coast states, including California.
In the summer of 2017, High Times was sold to a group of investors led by Adam Levin of Oreva Capital for an amount estimated from $42 million to $70 million.
High Times acquired cannabis media company Green Rush Daily, Inc. on April 5, 2018. The deal was valued at $6.9 million. Green Rush Daily founder Scott McGovern joined the magazine as senior executive vice president.
Columns
"Almost Infamous" by Bobby Black (2004–2016)—lifestyle and entertainment
"Ask Ed: Your Marijuana Questions Answered" by Ed Rosenthal (1980s–1990s)
"Brain Damage Report" by Paul Krassner (late 1970s–2000s)
"Cannabis Column" by Jon Gettman
"Chef Ra's Psychedelic Kitchen" by Chef Ra (1988–2003)
"Sex Pot" by Hyapatia Lee (from 2013)
"The Stoned Gamer" by Alana Evans (from 2014)—gaming
"Toasted Tweets" by Jessica Delfino (2016)—weekly cannabis-themed Twitter round-up
"The Stone Cold Cop List" by Jon Cappetta (2020) - monthly collection of newly released products
Comics
By 1976, High Times was publishing comics in its pages, by the likes of underground comix creators such as Gilbert Shelton ("The Fabulous Furry Freak Brothers"), Kim Deitch, Josh Alan and Drew Friedman, Bill Griffith ("Zippy the Pinhead"), Paul Kirchner ("Dope Rider"), Milton Knight ("Zoe"), Spain Rodriguez ("Trashman"), Dave Sheridan, Frank Thorne, and Skip Williamson ("Snappy Sammy Smoot"). Later, artists like Bob Fingerman and Mary Wilshire contributed comics to High Times as well.
Notable contributors and staff members
Andrew Weil was a regular contributor to High Times from 1975 to 1983. For a time, William Levy served as the magazine's European editor.
In 1976, Bruce Eisner became a contributing editor for the magazine. Chip Berlet was the magazine's Washington, D.C. bureau chief in the Seventies. Jeff Goldberg was an editor in 1978–1979.
Kyle Kushman is a former cultivation reporter for High Times and has been a contributing writer for over 20 years.
Bobby Black had a long association with High Times, from 1994 to 2015, including being a senior editor and columnist. His involvement at High Times included production director and associate art director; writing the monthly lifestyle and entertainment column "Almost Infamous"; writing feature articles and interviews; creator and producer of the magazine's annual Miss High Times beauty pageant; producer and host of the annual High Times Doobie Awards for music; lead reporter, judge, and competition coordinator for the Cannabis Cup and the High Times Medical Cannabis Cup; and A&R, producer, liner notes and art director for High Volume: The Stoner Rock Collection CD (High Times Records).
At age 19, Zena Tsarfin started as an intern for the magazine. She later returned to High Times, serving as the magazine's managing editor until 2001 and then again from March 2006 to January 2007. From 2014 to 2016, Tsarfin was High Times' director of digital media.
Danny Danko is the magazine's former Senior Cultivation Editor.
The careers of a number of writers/editors from the comics industry overlapped with High Times, including Tsarfin, Josh Alan Friedman (High Times managing editor, 1983), Lou Stathis (High Times editor, late 1980s), Ann Nocenti (High Times editor, 2004), and most significantly, John Holmstrom, who began to work for the magazine as managing editor in 1987, was soon promoted to executive editor, and in 1991 was promoted to publisher and president. In 1996 he stepped aside to launch and oversee the High Times website, and left the magazine for good in 2000.
Andrew James Parker, a.k.a. Chewberto420, is a cannabis photographer and author, based out of the Western United States (predominantly Huntington Beach, California and Pagosa Springs, Colorado), who has made contributions to the magazine since 2016. Parker is known for his images based in macro photography. He discovered naturally occurring purple hash through experimentation with anthocyanins within cannabis.
Book publishing
See also
Cannabis Cup
High Times' Potluck
Counterculture Hall of Fame
Stony Awards
High Times Medical Cannabis Cup
Notes
Further reading
External links
Lifestyle magazines published in the United States
Cannabis magazines
Cannabis media in the United States
Cannabis activism
Cannabis law in the United States
Counterculture
Drug control law
Monthly magazines published in the United States
Magazines established in 1974
Magazines published in New York City
1974 in cannabis
1974 establishments in New York (state) | High Times | [
"Chemistry"
] | 2,752 | [
"Drug control law",
"Regulation of chemicals"
] |
614,515 | https://en.wikipedia.org/wiki/Ship%20model%20basin | A ship model basin is a basin or tank used to carry out hydrodynamic tests with ship models, for the purpose of designing a new (full sized) ship, or refining the design of a ship to improve the ship's performance at sea. It can also refer to the organization (often a company) that owns and operates such a facility.
The organization operating a model basin acts as a contractor to the relevant shipyards, and provides hydrodynamic model tests and numerical calculations to support the design and development of ships and offshore structures.
History
The eminent English engineer William Froude published a series of influential papers on ship designs for maximising stability in the 1860s. The Institution of Naval Architects eventually commissioned him to identify the most efficient hull shape. He validated his theoretical models with extensive empirical testing, using scale models for the different hull dimensions. He established a formula (now known as the Froude number) by which the results of small-scale tests could be used to predict the behaviour of full-sized hulls. He built a sequence of 3, 6 and 12 foot scale models and used them in towing trials to establish resistance and scaling laws. His experiments were later vindicated in full-scale trials conducted by the Admiralty, and as a result the first ship model basin was built, at public expense, at his home in Torquay. Here he was able to combine mathematical expertise with practical experimentation to such good effect that his methods are still followed today.
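Froude's scaling law can be stated compactly; the notation below is the modern convention, not Froude's own. With speed $V$, waterline length $L$ and gravitational acceleration $g$, the Froude number is dimensionless, and running a model at the same Froude number as the full-sized ship fixes the required towing speed:

```latex
\mathrm{Fn} = \frac{V}{\sqrt{g\,L}},
\qquad
\mathrm{Fn}_{\text{model}} = \mathrm{Fn}_{\text{ship}}
\;\Longrightarrow\;
V_{\text{model}} = V_{\text{ship}} \sqrt{\frac{L_{\text{model}}}{L_{\text{ship}}}}
```

A 1:25 scale model, for example, is towed at one fifth of the ship's speed, since $\sqrt{1/25} = 1/5$; the wave pattern it generates is then geometrically similar to the full-scale one.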
Inspired by Froude's successful work, shipbuilding company William Denny and Brothers completed the world's first commercial example of a ship model basin in 1883. The facility was used to test models of a variety of vessels and explored various propulsion methods, including propellers, paddles and vane wheels. Experiments were carried out on models of the Denny-Brown stabilisers and the Denny hovercraft to gauge their feasibility. Tank staff also carried out research and experiments for other companies: Belfast-based Harland & Wolff decided to fit a bulbous bow on the liner Canberra after successful model tests in the Denny Tank.
Test facilities
The hydrodynamic test facilities present at a model basin site typically include at least a towing tank, a cavitation tunnel, and workshops. Some ship model basins have further facilities such as a maneuvering and seakeeping basin or an ice tank.
Towing tank
A towing tank is a basin, several metres wide and hundreds of metres long, equipped with a towing carriage that runs on two rails on either side. The towing carriage can either tow the model or follow the self-propelled model, and is equipped with computers and devices to record or control variables such as speed, propeller thrust and torque, and rudder angle. The towing tank serves for resistance and propulsion tests with towed and self-propelled ship models, to determine how much power the engine will have to provide to achieve the speed laid down in the contract between shipyard and ship owner.

The towing tank also serves to determine the maneuvering behaviour at model scale. For this, the self-propelled model performs a series of zig-zag maneuvers at different rudder angle amplitudes. Post-processing of the test data by means of system identification yields a numerical model for simulating any other maneuver, such as the Dieudonné spiral test or turning circles. Additionally, a towing tank can be equipped with a PMM (planar motion mechanism) or a CPMC (computerized planar motion carriage) to measure the hydrodynamic forces and moments on ships or submerged objects under the influence of oblique inflow and enforced motions.

The towing tank can also be equipped with a wave generator to carry out seakeeping tests, either by simulating natural (irregular) waves or by exposing the model to a wave packet that yields a set of statistics known as response amplitude operators (RAOs). These determine the ship's likely real-life seagoing behavior when operating in seas with varying wave amplitudes and frequencies (these parameters being known as sea states). Modern seakeeping test facilities can determine these RAO statistics, with the aid of appropriate computer hardware and software, in a single test.
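Extrapolating a measured model resistance to full scale is conventionally done with Froude's hypothesis combined with the ITTC-1957 model–ship correlation line. The following Python sketch illustrates the bare procedure; all input values are made-up examples rather than data from any particular test, and a real prediction would add further corrections (form factor, correlation allowance, appendages, air resistance).

```python
import math

# Illustrative tank/sea properties (assumptions, not measured values)
RHO_MODEL = 998.0     # fresh water density in the tank, kg/m^3
RHO_SHIP = 1025.0     # sea water density, kg/m^3
NU_MODEL = 1.0e-6     # kinematic viscosity of tank water, m^2/s
NU_SHIP = 1.19e-6     # kinematic viscosity of sea water, m^2/s

def ittc57_cf(reynolds: float) -> float:
    """ITTC-1957 model-ship correlation line for the friction coefficient."""
    return 0.075 / (math.log10(reynolds) - 2.0) ** 2

def extrapolate(rt_model: float, v_model: float, l_model: float,
                s_model: float, scale: float) -> float:
    """Predict full-scale resistance (N) from one model measurement.

    rt_model: measured model resistance, N
    v_model:  carriage speed, m/s
    l_model:  model waterline length, m
    s_model:  model wetted surface, m^2
    scale:    linear scale factor lambda = L_ship / L_model
    """
    # Froude similarity: speed scales with sqrt(lambda), areas with lambda^2
    v_ship = v_model * math.sqrt(scale)
    l_ship = l_model * scale
    s_ship = s_model * scale ** 2

    # Total resistance coefficient of the model
    ct_model = rt_model / (0.5 * RHO_MODEL * s_model * v_model ** 2)

    # Froude's hypothesis: subtract the Reynolds-dependent friction part;
    # the residuary coefficient is taken to be the same at both scales
    cr = ct_model - ittc57_cf(v_model * l_model / NU_MODEL)

    ct_ship = ittc57_cf(v_ship * l_ship / NU_SHIP) + cr
    return 0.5 * RHO_SHIP * s_ship * v_ship ** 2 * ct_ship

# Example: a 1:25 model, 4 m long, 3.5 m^2 wetted surface,
# towed at 1.5 m/s with a measured resistance of 30 N
print(f"{extrapolate(30.0, 1.5, 4.0, 3.5, 25.0) / 1000:.0f} kN")
```

For the illustrative inputs above the prediction comes out near 376 kN for the 100 m full-scale ship at 7.5 m/s, a plausible order of magnitude for such a vessel.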
Cavitation tunnel
A cavitation tunnel is used to investigate propellers. This is a vertical water circuit with large diameter pipes. At the top, it carries the measuring facilities. A parallel inflow is established. With or without a ship model, the propeller, attached to a dynamometer, is brought into the inflow, and its thrust and torque are measured at different ratios of propeller speed (number of revolutions) to inflow velocity. A stroboscope synchronized with the propeller speed serves to visualize cavitation, making the cavitation pattern appear stationary. This makes it possible to observe whether the propeller would be damaged by cavitation. To ensure similarity to the full-scale propeller, the pressure is lowered, and the gas content of the water is controlled.
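The thrust and torque measured in such a test are conventionally reduced to dimensionless coefficients. Writing $V_A$ for the inflow (advance) speed, $n$ for the rotation rate in revolutions per second, $D$ for the propeller diameter, $T$ for thrust, $Q$ for torque, $\rho$ for water density, $p$ for static pressure and $p_v$ for the vapour pressure of water, the standard definitions are:

```latex
J = \frac{V_A}{n D}, \qquad
K_T = \frac{T}{\rho\, n^2 D^4}, \qquad
K_Q = \frac{Q}{\rho\, n^2 D^5}, \qquad
\eta_0 = \frac{J}{2\pi}\,\frac{K_T}{K_Q}, \qquad
\sigma = \frac{p - p_v}{\tfrac{1}{2}\rho V_A^2}
```

Lowering the tunnel pressure, as described above, serves to match the cavitation number $\sigma$ between model and full scale, so that the cavitation observed in the tunnel is representative of the full-scale propeller.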
Workshops
Ship model basins manufacture their ship models from wood or paraffin with a computerized milling machine. Some of them also manufacture their model propellers. Equipping the ship models with all drives and gauges and manufacturing equipment for non-standard model tests are the main tasks of the workshops.
Maneuvering and seakeeping basin
This is a test facility that is wide enough to investigate arbitrary angles between the waves and the ship model, and to perform maneuvers like turning circles, for which the towing tank is too narrow. However, some important maneuvers like the spiral test require even more space, and therefore still have to be simulated numerically after system identification.
Ice tank
An ice tank is used to develop ice-breaking vessels; it fulfills a similar purpose for them as the towing tank does for open-water vessels. Resistance and required engine power, as well as maneuvering behaviour, are determined as functions of the ice thickness. Ice forces on offshore structures can also be determined. Ice layers are frozen with a special procedure to scale the ice crystals down to model scale.
Software
Additionally, these companies or authorities have CFD software and the experience to simulate the complicated flow around ships and their rudders and propellers numerically. The current state of the art does not yet allow CFD calculations to replace model tests entirely; one reason, though not the only one, is that mesh generation is still expensive. The lines design of some ships is also carried out by the specialists of the ship model basin, either from the beginning or by optimizing an initial design obtained from the shipyard. The same applies to the design of propellers.
Examples
The ship model basins worldwide are organized in the ITTC (International Towing Tank Conference) to standardize their model test procedures.
Some of the most significant ship model basins are:
Denny Tank in Dumbarton, Scotland. The Denny tank was the first commercial ship model basin.
Current Meter Rating Trolley, CMC Division, CWPRS Pune, India
SINTEF Ocean, towing tank, ocean basin, cavitation tunnel in Trondheim, Norway
High-speed towing tank at the Wolfson Unit MTIA, specialists in high-performance power and sail craft
David Taylor Model Basin at the Carderock Division of the Naval Surface Warfare Center, and the Davidson Laboratory at Stevens Institute of Technology, in the United States
High Speed Towing Tank facility at the Naval Science and Technological Laboratory in Visakhapatnam (Vizag), India
The Institute for Ocean Technology in St. John's, Canada
FORCE Technology in Lyngby, Denmark
SSPA in Gothenburg, Sweden
Laboratory of Naval and Oceanic Engineering of the Institute for Technological Research of São Paulo in São Paulo, Brazil
Maritime Research Institute Netherlands (MARIN) in Wageningen, the Netherlands
CNR-INSEAN in Rome, Italy
University of Naples Federico II in Naples, Italy
SVA Potsdam in Potsdam, Germany
HSVA in Hamburg, Germany
"Bassin d'essai des carènes" in Val de Reuil, France
CEHIPAR in Madrid, Spain
CTO S.A. in Gdańsk, Poland
FloWaveTT in Edinburgh, Scotland
Krylov State Research Centre (Крыловский государственный научный центр) in Saint Petersburg, Russia
National Maritime Research Institute (NMRI) in Tokyo, Japan
China Ship Scientific Research Center (CSSRC) in Wuxi, China
in Berlin, Germany
References
External links
"Putting More Speed And Power In Our Navy" , June 1943, Popular Science large and well illustrated article on towing basins
Ship design
Physical modeling | Ship model basin | [
"Physics"
] | 1,722 | [
"nan"
] |
614,534 | https://en.wikipedia.org/wiki/Birth%20order | Birth order refers to the order a child is born in their family; first-born and second-born are examples. Birth order is often believed to have a profound and lasting effect on psychological development. This assertion has been repeatedly challenged. Recent research has consistently found that earlier born children score slightly higher on average on measures of intelligence, but has found zero, or almost zero, robust effect of birth order on personality. Nevertheless, the notion that birth-order significantly influences personality continues to have a strong presence in pop psychology and popular culture.
Theory
Alfred Adler (1870–1937), an Austrian psychiatrist, and a contemporary of Sigmund Freud and Carl Jung, was one of the first theorists to suggest that birth order influences personality. He argued that birth order can leave an indelible impression on an individual's style of life, which is one's habitual way of dealing with the tasks of friendship, love, and work. According to Adler, firstborns are "dethroned" when a second child comes along, and this loss of perceived privilege and primacy may have a lasting influence on them. Middle children may feel ignored or overlooked, causing them to develop the so-called middle child syndrome. Younger and only children may be pampered and spoiled, which was suggested to affect their later personalities. All of this assumes what Adler believed to be a typical family situation, e.g., a nuclear family living apart from the extended family, without the children being orphaned, with average spacing between births, without twins and other multiples, and with surviving children not having severe physical, intellectual, or psychiatric disabilities.
Since Adler's time, the influence of birth order on the development of personality has become a controversial issue in psychology. Among the general public, it is widely believed that personality is strongly influenced by birth order, but many psychologists dispute this. One modern theory of personality states that the Big Five personality traits of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism represent most of the important elements of personality that can be measured. Contemporary empirical research shows that birth order does not influence the Big Five personality traits.
In his 1996 book Born to Rebel, Frank Sulloway suggested that birth order had powerful effects on the Big Five personality traits. He argued that firstborns were much more conscientious and socially dominant, less agreeable, and less open to new ideas compared to laterborns. However, critics such as Fred Townsend, Toni Falbo, and Judith Rich Harris argue against Sulloway's theories. A full issue of Politics and the Life Sciences, dated September 2000 but not published until 2004 due to legal threats from Sulloway, contains carefully and rigorously researched criticisms of Sulloway's theories and data. Subsequent large independent multi-cohort studies have revealed approximately zero effect of birth order on personality.
In their book Sibling Relationships: Their Nature and Significance across the Lifespan, Michael E. Lamb and Brian Sutton-Smith argue that as individuals continually adjust to competing demands of socialization agents and biological tendencies, any effects of birth order may be eliminated, reinforced, or altered by later experiences.
Personality
Claims about birth order effects on personality have received much attention in scientific research, with the National Academy of Sciences in the USA concluding that effects are zero or near zero. Such research is a challenge because of the difficulty of controlling all the variables that are statistically related to birth order. Family size, and a number of social and demographic variables are associated with birth order and serve as potential confounds. For example, large families are generally lower in socioeconomic status than small families. Hence third-born children are not only third in birth order, but they are also more likely to come from larger, poorer families than firstborn children. If third-born children have a particular trait, it may be due to birth order, or it may be due to family size, or to any number of other variables. Consequently, there are a large number of published studies on birth order that are confounded.
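The confounding argument in the preceding paragraph can be made concrete with a toy simulation. In the sketch below, birth order has no direct effect at all, yet third-borns score lower than firstborns simply because they can only come from families of three or more children, and larger families are assigned lower mean scores. Every number in it is an illustrative assumption, not an empirical estimate.

```python
import random

random.seed(1)

children = []
for _ in range(100_000):                   # simulate 100,000 families
    size = random.choice([1, 2, 3, 4, 5])  # family sizes, uniformly drawn
    family_effect = -2.0 * size            # larger family -> lower mean score
    for order in range(1, size + 1):       # birth order itself does nothing
        score = 100 + family_effect + random.gauss(0, 10)
        children.append((order, score))

# Firstborns exist in every family; third-borns only in families of 3+,
# so a spurious "birth order effect" appears in the raw averages.
for order in (1, 3):
    scores = [s for o, s in children if o == order]
    print(f"birth order {order}: mean score {sum(scores) / len(scores):.1f}")
```

Comparing children within families of the same size makes the artifact vanish, which is exactly the kind of adjustment the literature reviews described below attempt.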
Literature reviews that have examined many studies and attempted to control for confounding variables tend to find minimal effects for birth order. Ernst and Angst reviewed all of the research published between 1946 and 1980. They also did their own study on a representative sample of 6,315 young men from Switzerland. They found no substantial effects of birth order and concluded that birth order research was a "waste of time." More recent research analyzed data from a national sample of 9,664 subjects on the Big Five personality traits of extraversion, neuroticism, agreeableness, conscientiousness, and openness to experience. Contrary to Sulloway's predictions, they found no significant correlation between birth order and self-reported personality. There was, however, some tendency for people to perceive birth order effects when they were aware of the birth order of an individual.
Smaller studies have partially supported Sulloway's claims. Paulhus and colleagues reported that first borns scored higher on conservatism, conscientiousness and achievement orientation, and later borns higher on rebelliousness, openness, and agreeableness. The authors argued that the effect emerges most clearly from studies within families. Results are weak at best, when individuals from different families are compared. The reason is that genetic effects are stronger than birth order effects. Recent studies also support the claim that only children are not markedly different from their peers with siblings. Scientists have found that they share many characteristics with firstborn children including being conscientious as well as parent-oriented.
In her review of the research, Judith Rich Harris suggests that birth order effects may exist within the context of the family of origin, but that they are not enduring aspects of personality. When people are with their parents and siblings, firstborns behave differently from laterborns, even during adulthood. However, most people don't spend their adult lives in their childhood home. Harris provides evidence that the patterns of behavior acquired in the childhood home don't affect the way people behave outside the home, even during childhood. Harris concludes that birth order effects keep turning up because people keep looking for them, and keep analyzing and reanalyzing their data until they find them.
Intelligence
In a meta-analysis, Polit and Falbo (1988) found that firstborns, only children, and children with one sibling all score higher on tests of verbal ability than later-borns and children with multiple siblings.
Robert Zajonc argued for a "confluence" model in which the lack of siblings experienced by firstborns exposes them to the more intellectual adult family environment. This predicts similar increases in IQ for siblings whose next-oldest sibling is at least five years senior; such children are considered to be "functional firstborns". The theory further predicts that firstborns will be more intelligent than only children, because only children do not benefit from the "tutor effect" (i.e. teaching younger siblings).
Several studies have found that firstborns have slightly higher IQs than later borns. Such data is, however, commonly confounded with family size, which is in turn correlated with IQ confounds such as social status. Likewise, an analysis of data from the National Child Development Study has been used in support of an alternate admixture hypothesis, which asserts that the apparent birth-order effect on intelligence is wholly an artifact of family size, i.e. an instance of selection pressure acting against intelligence under modern conditions.
The claim that firstborns have higher IQ scores to begin with, has, however, also been disputed outright. Data from the National Longitudinal Survey of Youth show no relationship between birth order and intelligence.
Sexual orientation
The fraternal birth order effect is the name given to the theory that the more older brothers a man has, the greater the probability is that he will have a homosexual orientation. The fraternal birth order effect is said to be the strongest known predictor of sexual orientation, with each older brother increasing a man's odds of being gay by approximately 33%. (One of the largest studies to date, however, suggests a smaller effect, of 15% higher odds.) Even so, the fraternal birth order effect only accounts for a maximum of one seventh of the prevalence of homosexuality in men. There seems to be no effect on sexual orientation in women, and no effect of the number of older sisters.
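Because the effect is quoted as a change in odds rather than in probability, a worked example may help; the 2% baseline used here is purely illustrative and is not a figure taken from the studies cited. Odds and probability are related by $\text{odds} = p/(1-p)$, so a 33% increase in odds works out as:

```latex
p_0 = 0.02 \;\Rightarrow\; \text{odds}_0 = \frac{0.02}{0.98} \approx 0.0204,
\qquad
\text{odds}_1 = 1.33 \times \text{odds}_0 \approx 0.0272,
\qquad
p_1 = \frac{\text{odds}_1}{1 + \text{odds}_1} \approx 0.026
```

That is, one older brother would raise an illustrative 2.0% baseline probability to only about 2.6%, consistent with the statement that the effect accounts for at most a modest share of the overall prevalence.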
In Homosexuality, Birth Order, and Evolution: Toward an Equilibrium Reproductive Economics of Homosexuality, Edward M. Miller suggests that the birth order effect on homosexuality may be a by-product of an evolved mechanism that shifts personality away from heterosexuality in laterborn sons. According to Miller, this would have the consequence of reducing the probability of these sons engaging in unproductive competition with each other. Evolution may have favored biological mechanisms prompting human parents to exert affirmative pressure toward heterosexual behavior in earlier-born children: As more children in a family survive infancy and early childhood, the continued existence of the parents' gene line becomes more assured (cf. the pressure on newly-wed European aristocrats, especially young brides, to produce "an heir and a spare"), and the benefits of encouraging heterosexuality weigh less strongly against the risk of psychological damage that a strongly heteronormative environment poses to a child predisposed toward homosexuality.
More recently, this birth order effect on sexuality in males has been attributed to a very specific biological occurrence. As the mother gives birth to more sons, she is thought to develop an immunity to certain male-specific antigens. This immunity then leads to an effect in the brain that has to do with sexual preference. Yet this biological effect is seen only in right-handed males. Among males who are not right-handed, the number of older brothers has been found to have no predictive value for the sexuality of a younger brother. This has led researchers to consider whether the genes for sexuality and handedness are somehow related.
Not all studies, including some with large, nationally representative samples, have been able to replicate the fraternal birth order effect. Some did not find any statistically significant difference in the sibling composition of gay and straight men; this includes the National Longitudinal Study of Adolescent to Adult Health, the largest U.S. study with relevant data on the subject. Furthermore, at least one study, on the familial correlates of joining a same-sex union or marriage in a sample of two million people in Denmark, found that the only sibling correlate of joining a same-sex union among men was having older sisters, not older brothers.
Traditional naming of children according to their birth order
In some of the world's cultures, birth order is so important that each child within the family is named according to the order in which the child was born. For example, in the Aboriginal Australian Barngarla language, there are nine male birth order names and nine female birth order names, as follows:
Male: Biri (1st), Warri (2nd), Gooni (3rd), Mooni (4th), Mari (5th), Yari (6th), Mili (7th), Wanggooyoo (8th) and Ngalai (9th).
Female: Gardanya (1st), Wayooroo (2nd), Goonda (3rd), Moonaga (4th), Maroogoo (5th), Yaranda (6th), Milaga (7th), Wanggoordoo (8th) and Ngalaga (9th).
To determine the suitable name for the newborn child, one first finds out the number of the newborn within the family, and only then chooses the male/female name, according to the gender of the newborn. So, for example, if a baby girl is born after three boys, her name would be Moonaga (4th born, female) as she is the fourth child within the family.
In some modern-day Western cultures, it is common for parents to name a child after themselves. This tradition dates back to the 17th century and is most prevalent between fathers and sons, where the son receives the same first name, middle name, and surname, with "Jr.", "II", "III", "IV", etc. attached after the family surname. This practice started as a symbol of status for upper-class citizens, but is now more commonly a family tradition, not necessarily implying that the bearers are of a higher status than their peers, siblings or other family members.
The tradition of a father naming his son after himself or a male relative from an earlier generation (grandfather, great-grandfather) is referred to as 'patronymic', while the tradition of a mother naming her daughter after herself or a female relative from an earlier generation (grandmother, great-grandmother) is referred to as 'matronymic'.
See also
Adlerian
The Birth Order Book
Family
Firstborn (Judaism)
Individual psychology
Only child
Primogeniture
Sibling rivalry
Sladdbarn
References
External links
"Development of the Firstborn Personality Scale". Self-report scale developed empirically to predict first born status. Includes open-access dataset.
Birth order and intelligence
The Independent article
USA Today article on CEOs
Investigating the effects birth order has on personality, self-esteem, satisfaction with life and age
Human development
Psychological theories
Sibling | Birth order | [
"Biology"
] | 2,752 | [
"Behavioural sciences",
"Behavior",
"Human development"
] |
614,697 | https://en.wikipedia.org/wiki/Bcl-2 | Bcl-2, encoded in humans by the BCL2 gene, is the founding member of the Bcl-2 family of regulator proteins. BCL2 blocks programmed cell death (apoptosis) while other BCL2 family members can either inhibit or induce it. It was the first apoptosis regulator identified in any organism.
Bcl-2 derives its name from B-cell lymphoma 2, as it is the second member of a range of proteins initially described in chromosomal translocations involving chromosomes 14 and 18 in follicular lymphomas. Orthologs (such as Bcl2 in mice) have been identified in numerous mammals for which complete genome data are available.
Like BCL3, BCL5, BCL6, BCL7A, BCL9, and BCL10, it has clinical significance in lymphoma.
Isoforms
The two isoforms of Bcl-2, Isoform 1 and Isoform 2, exhibit a similar fold. However, differences in the ability of these isoforms to bind the BAD and BAK proteins, as well as in the structural topology and electrostatic potential of the binding groove, suggest that the two isoforms differ in antiapoptotic activity.
Function
BCL-2 is localized to the outer membrane of mitochondria, where it plays an important role in promoting cellular survival and inhibiting the actions of pro-apoptotic proteins. The pro-apoptotic proteins in the BCL-2 family, including Bax and Bak, normally act on the mitochondrial membrane to promote permeabilization and the release of cytochrome c and ROS, which are important signals in the apoptosis cascade. These pro-apoptotic proteins are in turn activated by BH3-only proteins, and are inhibited by the function of BCL-2 and its relative BCL-Xl.
There are additional non-canonical roles of BCL-2 that are being explored. BCL-2 is known to regulate mitochondrial dynamics, and is involved in the regulation of mitochondrial fusion and fission. Additionally, in pancreatic beta-cells, BCL-2 and BCL-Xl are known to be involved in controlling metabolic activity and insulin secretion; inhibition of BCL-2/Xl increases metabolic activity but also ROS production, suggesting a protective metabolic effect in conditions of high demand.
Role in disease
Damage to the Bcl-2 gene has been identified as a cause of a number of cancers, including melanoma, breast, prostate, chronic lymphocytic leukemia, and lung cancer, and a possible cause of schizophrenia and autoimmunity. It is also a cause of resistance to cancer treatments.
Cancer
Cancer can be seen as a disturbance in the homeostatic balance between cell growth and cell death. Over-expression of anti-apoptotic genes, and under-expression of pro-apoptotic genes, can result in the lack of cell death that is characteristic of cancer. An example can be seen in lymphomas. The over-expression of the anti-apoptotic Bcl-2 protein in lymphocytes alone does not cause cancer. But simultaneous over-expression of Bcl-2 and the proto-oncogene myc may produce aggressive B-cell malignancies including lymphoma. In follicular lymphoma, a chromosomal translocation commonly occurs between the fourteenth and the eighteenth chromosomes – t(14;18) – which places the Bcl-2 gene from chromosome 18 next to the immunoglobulin heavy chain locus on chromosome 14. This fusion gene is deregulated, leading to the transcription of excessively high levels of Bcl-2. This decreases the propensity of these cells for apoptosis. Bcl-2 expression is frequent in small cell lung cancer, accounting for 76% of cases in one study.
Auto-immune diseases
Apoptosis plays an active role in regulating the immune system. When it is functional, it can cause immune unresponsiveness to self-antigens via both central and peripheral tolerance. When apoptosis is defective, it may contribute to the etiology of autoimmune diseases. The autoimmune disease type 1 diabetes can be caused by defective apoptosis, which leads to aberrant T cell AICD and defective peripheral tolerance. Because dendritic cells are the immune system's most important antigen-presenting cells, their activity must be tightly regulated by mechanisms such as apoptosis. Researchers have found that mice whose dendritic cells are Bim -/-, and thus unable to induce effective apoptosis, develop autoimmune diseases more often than mice with normal dendritic cells. Other studies have shown that dendritic cell lifespan may be partly controlled by a timer dependent on anti-apoptotic Bcl-2.
Other
Apoptosis plays an important role in regulating a variety of diseases. For example, schizophrenia is a psychiatric disorder in which an abnormal ratio of pro- and anti-apoptotic factors may contribute towards pathogenesis. Some evidence suggests that this may result from abnormal expression of Bcl-2 and increased expression of caspase-3.
Diagnostic use
Antibodies to Bcl-2 can be used with immunohistochemistry to identify cells containing the antigen. In healthy tissue, these antibodies react with B-cells in the mantle zone, as well as some T-cells. However, positive cells increase considerably in follicular lymphoma, as well as many other forms of cancer. In some cases, the presence or absence of Bcl-2 staining in biopsies may be significant for the patient's prognosis or likelihood of relapse.
Targeted therapies
Targeted and selective Bcl-2 inhibitors that have been in development or are currently in the clinic include:
Oblimersen
An antisense oligonucleotide drug, oblimersen (G3139), was developed by Genta Incorporated to target Bcl-2. An antisense DNA or RNA strand is non-coding and complementary to the coding strand (which is the template for producing RNA or protein, respectively). An antisense drug is a short sequence of modified DNA that hybridises with and inactivates mRNA, preventing the protein from being formed.
Human lymphoma cell proliferation (with t(14;18) translocation) could be inhibited by antisense oligonucleotide targeted at the start codon region of Bcl-2 mRNA. In vitro studies led to the identification of Genasense, which is complementary to the first 6 codons of Bcl-2 mRNA.
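The hybridisation principle is simple to illustrate: an antisense oligonucleotide is the reverse complement of its target mRNA region. The sketch below computes a reverse complement in Python; the input sequence is a hypothetical placeholder, not the actual Bcl-2 start-codon region targeted by oblimersen.

```python
# Complement pairs for RNA bases (A-U, G-C)
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(mrna: str) -> str:
    """Return the reverse complement of an RNA sequence.

    Both input and output are written 5'->3'; reversing is needed because
    the two strands of a duplex run antiparallel.
    """
    return "".join(COMPLEMENT[base] for base in reversed(mrna))

# Hypothetical 18-base fragment (6 codons) beginning at the start codon AUG
print(antisense("AUGGCGCACGCUGGGAGA"))
```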
The drug showed successful results in Phase I/II trials for lymphoma, and a large Phase III trial was launched in 2004. As of 2016, however, the drug had not been approved, and its developer had gone out of business.
ABT-737 and navitoclax (ABT-263)
In the mid-2000s, Abbott Laboratories developed a novel inhibitor of Bcl-2, Bcl-xL and Bcl-w, known as ABT-737. This compound is part of a group of BH3 mimetic small molecule inhibitors (SMI) that target these Bcl-2 family proteins, but not A1 or Mcl-1. ABT-737 is superior to previous BCL-2 inhibitors given its higher affinity for Bcl-2, Bcl-xL and Bcl-w. In vitro studies showed that primary cells from patients with B-cell malignancies are sensitive to ABT-737.
In animal models, it improves survival, causes tumor regression and cures a high percentage of mice. In preclinical studies utilizing patient xenografts, ABT-737 showed efficacy for treating lymphoma and other blood cancers. Because of its unfavorable pharmacologic properties ABT-737 is not appropriate for clinical trials, while its orally bioavailable derivative navitoclax (ABT-263) has similar activity on small cell lung cancer (SCLC) cell lines and has entered clinical trials. While clinical responses with navitoclax were promising, mechanistic dose-limiting thrombocytopenia was observed in patients under treatment due to Bcl-xL inhibition in platelets.
Venetoclax (ABT-199)
Due to the dose-limiting thrombocytopenia of navitoclax resulting from Bcl-xL inhibition, Abbvie successfully developed the highly selective inhibitor venetoclax (ABT-199), which inhibits Bcl-2, but not Bcl-xL or Bcl-w. Clinical trials studied the effects of venetoclax, a BH3-mimetic drug designed to block the function of the Bcl-2 protein, on patients with chronic lymphocytic leukemia (CLL). Good responses were reported, and thrombocytopenia was no longer observed. A phase 3 trial started in December 2015.
It was approved by the US FDA in April 2016 as a second-line treatment for CLL associated with 17-p deletion. This was the first FDA approval of a BCL-2 inhibitor. In June 2018, the FDA broadened the approval for anyone with CLL or small lymphocytic lymphoma, with or without 17p deletion, still as a second-line treatment.
Sonrotoclax (BGB-11417)
Venetoclax drug resistance has been noted with the G101V mutation in BCL-2 observed in relapsing patients. Sonrotoclax shows greater tumor growth inhibition in hematologic tumor models than venetoclax and inhibits venetoclax-resistant BCL-2 variants. Sonrotoclax is under clinical investigation as a monotherapy and in combination with other anticancer agents.
Lisaftoclax
Interactions
Bcl-2 has been shown to interact with:
BAK1,
BCAP31,
BCL2-like 1,
BCL2L11,
BECN1,
BID,
BMF,
BNIP2,
BNIP3,
BNIPL,
BAD
BAX,
BIK,
C-Raf,
CAPN2,
CASP8,
Cdk1,
HRK,
IRS1,
Myc,
NR4A1,
Noxa,
PPP2CA,
PSEN1,
RAD9A,
RRAS,
RTN4,
SMN1,
SOD1, and
TP53BP2.
See also
Apoptosome
Bcl-2 homologous antagonist killer (BAK)
Bcl-2-associated X protein (BAX)
BH3 interacting domain death agonist (BID)
Caspases
Noxa
Microphthalmia-associated transcription factor
Protein mimetic
p53 upregulated modulator of apoptosis (PUMA)
Senolytics
References
External links
The Bcl-2 Family Database
The Bcl-2 Family at celldeath.de
Integral membrane proteins
Peripheral membrane proteins
Oncogenes
Apoptosis
Programmed cell death | Bcl-2 | [
"Chemistry",
"Biology"
] | 2,365 | [
"Senescence",
"Programmed cell death",
"Apoptosis",
"Signal transduction"
] |
614,700 | https://en.wikipedia.org/wiki/Chandelier | A chandelier () is an ornamental lighting device, typically with spreading branched supports for multiple lights, designed to be hung from the ceiling. Chandeliers are often ornate, and they were originally designed to hold candles, but now incandescent light bulbs are commonly used, as well as fluorescent lamps and LEDs.
A wide variety of materials ranging from wood and earthenware to silver and gold can be used to make chandeliers. Brass is one of the most popular with Dutch or Flemish brass chandeliers being the best-known, but glass is the material most commonly associated with chandeliers. True glass chandeliers were first developed in Italy, England, France, and Bohemia in the 18th century. Classic glass and crystal chandeliers have arrays of hanging "crystal" prisms to illuminate a room with refracted light. Contemporary chandeliers may assume a more minimalist design, and they may illuminate a room with direct light from the lamps or are equipped with translucent glass shades covering each lamp. Chandeliers produced nowadays can assume a wide variety of styles that span modernized and traditional designs or a combination of both.
Although chandeliers are sometimes called candelabras, the two can be distinguished: candelabras are designed to stand on tables or the floor, while chandeliers hang from the ceiling. Chandeliers are also distinct from pendant lights: they usually consist of multiple lamps hung in branched frames, whereas pendant lights hang from a single cord and contain only one or two lamps with few decorative elements. Due to their size, chandeliers are often installed in large hallways and staircases, living rooms, lounges, and dining rooms, often as the focus of the room. Small chandeliers can be installed in smaller spaces such as bedrooms or small living spaces, while large chandeliers are typically installed in the grand rooms of buildings such as halls and lobbies, or in religious buildings such as churches, synagogues or mosques.
Etymology
The word chandelier was first recorded in English in its modern sense in 1736, borrowed from the French word for a candleholder. It may have been derived from chandelle meaning "tallow candle", or chandelabre in Old French and candēlābrum in Latin, and ultimately from candēla meaning "candle". In earlier periods, the term "candlestick" (chandelier in France) could refer to a candelabra, a hanging branched light, or a wall light or sconce. In English, "hanging candlesticks" or "branches" were the usual terms for lighting devices hanging from the ceiling until chandelier came into use in the 18th century.
In France, chandelier still means a candleholder, and what is called a chandelier in English is a lustre in French, a term first used in that sense in the late 17th century. The French lustre, from the Italian lustro, can also be used in English to mean a chandelier hung with crystals, or the glass pendants used to decorate such a chandelier. The use of words for indoor lighting objects can be confusing, and a number of terms like lustres, branches, chandeliers and candelabras were used interchangeably at various times, which can make the early appearance of these words misleading. Girandole was also once used to refer to all candelabra as well as chandeliers, although girandole now usually means an ornate branched candleholder that may be mounted on a wall, often with a mirror. Chandeliers may sometimes be called suspended lights, although not all suspended lights are chandeliers.
History
Precursors
Hanging lighting devices, some described as chandeliers, have been known since ancient times, and circular ceramic lamps with multiple points for wicks or candles were used in the Roman period. The Roman terms for these, however, could refer to candlesticks, floor lamps, candelabra, or chandeliers. By the 4th century, terms such as pharicanthari were in use, and such devices were often mentioned as presents of the popes.
In the Byzantine period, flat circular metallic structures suspended with chains that can hold oil lamps known as polycandela (singular polycandelon) were commonly used throughout the eastern Mediterranean. First developed in late antiquity, polycandela were used in churches and synagogues, and took the shape of a bronze or iron frame holding a varying number of globular or conical glass beakers provided with a wick and filled with oil. They may be hung between columns, over the altar or tombs of saints. Polycandela were also commonly used to furnish households up until the 8th century.
Hanging lamps were commonly found in mosques in Islamic countries, while sanctuary lamps were found in churches. In Spain which had significant Moorish influence, hanging farol lanterns made of pierced brass and bronze as well as glass were produced. A type of Spanish silver lampadario with an elongated central reservoir for oil may have developed into a form of chandelier that has a central baluster and branching arms.
The early forms of hanging lighting devices in religious buildings could be of considerable size. Huge hanging lamps in the Hagia Sophia were described by Paul the Silentiary in 563: "And beneath each chain he has caused to be fitted silver discs, hanging circle-wise in the air, round the space in the center of the church. Thus these discs, pendant from their lofty courses, form a coronet above the heads of men. They have been pierced too by the weapon of the skillful workman, in order that they may receive shafts of fire-wrought glass and hold light on high for men at night." In the late 8th century, Pope Adrian I was said to have presented St. Peter's Basilica with a chandelier that could hold 1,370 candles, while his successor Pope Leo III presented a golden corona decorated with jewels to the Basilica of St. Andrew. The Venerable Bede mentioned that it was customary to have two hanging lighting devices called phari in a major English church, one in the nave and one in the choir; these may have been large bronze hoops with lamps hung over the figure of a cross.
Early chandeliers
In the medieval period, circular crown-shaped hanging devices made of iron, called coronas (couronne de lumière in France and corona de luz in Spain), were used in religious buildings in many European countries from the 9th century. Larger Romanesque and Gothic-style circular wheel chandeliers were also recorded in Germany, France, and the Netherlands in the 11th and 12th centuries. Four Romanesque wheel chandeliers survive in Germany, including the Azelin and Hezilo chandeliers in Hildesheim Cathedral, and the Barbarossa Chandelier in Aachen Cathedral. These large structures may be considered the first true chandeliers. These chandeliers have prickets (vertical spikes for holding candles) and cups for oil and wicks. A hammered iron corona with floral decoration was recorded in St Paul's Cathedral in London in the 13th century. The iron chandeliers may have had polychrome paint as well as jewel and enamelwork decorations.
Wooden cross-beam chandeliers were the early form of chandelier used in a domestic setting, and they were found in the households of the wealthy in the medieval period. The wooden cross beams were attached to a vertical wooden pillar, and on each of the four arms a candle could be placed. Some that could hold two candles in each arm were called "double candlesticks". While simple in design compared to later chandeliers, such wooden chandeliers were still found in the court of Charles VI of France in the 15th century, and a double candlestick was listed in the inventory of the estate of Henry VIII of England in the 16th century. In the medieval period, chandeliers might also be lighting devices that could be moved to different rooms. In later periods, the wood used in chandeliers might be carved and gilded.
By the late Gothic period, more complex forms of chandeliers appeared. Chandeliers with many branches radiating out from a central stem, sometimes in tiers, were made by the 15th century, and these might be adorned with statuettes and foliated decorations. Chandeliers became popular decorative features in palaces and homes of the nobility, clergy and merchants, and their high cost made them symbols of luxury and status. A diverse range of materials was also employed in the making of chandeliers. In Germany, chandeliers made of deer antlers and wooden sculpted figures, called lusterweibchen, are known to have been made since the 14th century. Ivory chandeliers in the palace of the king of Mutapa were depicted in a 17th-century description by Olfert Dapper. Porcelain introduced to Europe was also used to make chandeliers in the 18th century.
Brass chandelier
Many different metallic materials have been used to make chandeliers, including iron, pewter, bronze, or more prestigiously silver and even gold. Brass, however, has the warm appearance of gold while being considerably cheaper, and it is easy to work with; it therefore became a popular choice for making chandeliers. Brass or brass-like latten has been used to make chandeliers since the medieval period, and many were made with brass-type alloy from Dinant (now in Belgium; brass ware from the town was known as dinanderie) until the mid-15th century. The metal chandeliers may have a central support with curved or S-shaped arms attached, and at the end of each arm is a drip-pan and nozzle for holding a candle; by the 15th century, candle nozzles were used instead of prickets to hold the candles, since candle production techniques allowed for the production of identically sized candles. Many such brass chandeliers can be seen depicted in Dutch and Flemish paintings from the 15th to 17th centuries. These Dutch and Flemish chandeliers may be decorated with stylized floral embellishments as well as Gothic symbols, emblems and religious figures. Large numbers of brass chandeliers existed, but most of the early brass chandeliers did not survive destruction during the Reformation.
The Dutch brass chandeliers have distinctive features – a large brass sphere at the end of a central ball stem, and six curved low-swooping arms. The globe helps to keep the chandelier upright and reflect the light from candles, and the arms are curved downward to bring the candles to the level of the sphere to allow for maximum reflection. The arms of early brass chandeliers may also have drooped lower through use over time as the brass used in the earlier period was softer due to lower zinc content. Many Dutch chandeliers were topped by a double-headed eagle by the 16th century. The features of Dutch brass chandeliers were widely copied in other countries, and this form is arguably the most successful and long-lasting of all types of chandeliers. Dutch brass chandeliers were popular across Europe, particularly in England, as well as in the United States. Variations of the Dutch brass chandelier were produced, for example there may be multiple tiers of the arms, the sphere may become elongated, or the arms may emerge from the globe itself. By the early 18th century, ornate cast ormolu forms with long, curved arms and many candles were in the homes of many in the growing merchant class.
Glass and crystal chandeliers
Chandeliers began to be decorated with carved rock crystal (quartz) of Italian origin in the 16th century, a highly expensive material. The rock crystal pieces were hung from a metal frame as pendants or drops. The metal frame of French chandeliers might have a central stem onto which arms were attached; later, some formed a cage or "birdcage" without a central stem. Few, however, could afford these rock crystal chandeliers, as they were costly to produce. In the 17th century, multi-faceted crystals that could reflect the light from the candles were used to decorate chandeliers, which were called chandeliers de cristal in France. The chandeliers produced in France in the 17th century were in the French Baroque style, and rococo in the 18th century. French rock crystal chandeliers found their finest expression under Louis XIV, as exemplified by chandeliers at the Palace of Versailles.
Rock crystal began to be replaced by cut glass in the late 17th century, and examples of chandeliers made with rock crystal as well as Bohemian glass can be found in the Palace of Versailles. Crystal chandeliers in the early period were literally made of crystal, but what are called crystal chandeliers now are almost always made of cut glass. Glass, although not crystalline in structure, continued to be called crystal after much clearer cut glass that resembled crystal was produced from the late 17th century. Quartz is nevertheless still more reflective than the best glass, and perfectly clear lead glass was not produced until 1816. Although France is believed to have produced lead glass in the late 17th century, France used imported glass for its chandeliers until the late 18th century, when high-quality glass was produced in the country.
The origin of the glass chandelier is unclear, but some scholars believe that the first glass chandelier was made in 1673 in Orléans, France, where a simple iron rod was encased in multi-coloured glass with glass arms attached. By the turn of the 18th century, glass chandeliers were produced in France, England, Bohemia, and Venice. In Britain, lead glass was developed by George Ravenscroft in 1675, which allowed for the production of cheaper lead crystal that resembled rock crystal without the crizzling defects of other glass. It is also relatively soft compared to soda glass, allowing it to be cut or faceted without shattering. Lead glass also rings when struck, unlike soda glass, which has no resonance. The clearness and light-scattering properties of lead glass made it a popular addition to the form, and conventionally, lead glass may be the only glass that can be described as crystal. The first mention of a glass chandelier in an advertisement appeared in 1727 (as schandelier) in London.
The design of the first English true glass chandeliers was influenced by Dutch and Flemish brass chandeliers. These English chandeliers were made largely of glass, with the metal parts limited to the central stem and receiver plates and bowls. The metallic part might be silvered or silver-plated, and the silver plating inside the glass stem can create the illusion that the chandelier is made entirely of glass. A glass bowl at the bottom disguises the metal disc onto which the glass arms are attached. The early glass chandeliers were molded and uncut, often with solid rope-twist arms. Later, cuts to the arms were introduced to provide sparkle, and additional ornaments were added. Cut glass pendant drops were hung from the frame, initially only in small numbers, but in increasingly large numbers by 1770. By the 1800s, the decorative ornaments became so abundant that the underlying structure of the chandelier was obscured. The early chandeliers might follow a rococo style, and later a neo-classical style. A notable early producer of glass chandeliers was William Parker; Parker replaced the Dutch-influenced ball stem with a vase-shaped stem, as seen in the chandeliers in the Bath Assembly Rooms, which were the first datable neo-classical style chandeliers as well as the first chandeliers signed by their maker. Other designers of neo-classical chandeliers were Robert and James Adam. Neoclassical motifs in cast metal or carved and gilded wood were common elements in these chandeliers. Chandeliers made in this style also drew heavily on the aesthetic of ancient Greece and Rome, incorporating clean lines, classical proportions and mythological creatures.
Bohemia, in the present-day Czech Republic, has been producing glass for centuries. Bohemian glass contains potash, which gives it a clear colorless appearance that became renowned in Europe in the 18th century. Production of crystal chandeliers appeared in Bohemia and Germany in the early 18th century, with designs that followed what was popular in England and France, and many early chandeliers were copies of designs from London. Bohemia soon developed its own styles of chandeliers, the best-known of which is the Maria-Theresa, named after the Empress of Austria. This type of chandelier does not have a central baluster; its distinctive feature is the curved flat metal arms placed between sections of molded glass joined together with glass rosettes. Some Bohemian chandeliers used wood instead of metal as the central stem due to the abundance of wood and wood carvers in the area. The Bohemian style was largely successful across Europe, and its biggest draw was the chance to obtain spectacular light refraction due to the facets and bevels of crystal prisms. Glass chandeliers became the dominant form of chandelier from about 1750 until at least 1900, and the Czech Republic remains a major producer of glass chandeliers today.
Venice has long been a center of glass production, particularly on the island of Murano. The Venetians created a form of soda–lime glass, clear like crystal, by adding manganese dioxide; they called it cristallo. This glass was typically used to make mirrors, but around 1700, Italian glass factories in Murano started creating new kinds of artistic chandeliers. Since Murano glass is hard and brittle, it is not suitable for cutting or faceting; however, it is lighter, softer and more malleable when heated, and Venetian glassmakers relied upon these properties of their glass to create elaborate forms of chandelier. Typical features of a Murano chandelier are the intricate arabesques of leaves, flowers and fruits, often enriched by colored glass, made possible by the specific type of glass used in Murano. Great skill and time were required to twist and shape a chandelier precisely.
The ornate type of Murano chandelier is called ciocca (literally "bouquet of flowers") for its characteristic decorations of glazed polychrome flowers. The most sumptuous consisted of a metal frame covered with small elements in blown glass, transparent or colored, with decorations of flowers, fruits and leaves, while simpler models had arms made from single pieces of glass. Their shape was inspired by an original architectural concept: the space on the inside is left almost empty, since the decorations are spread all around the central support, distanced from it by the length of the arms. Huge Murano chandeliers were often used for interior lighting in theaters and rooms in important palaces. Despite periods of decline and revival, the designs of Murano glass chandeliers have stayed relatively constant through time, and modern productions of these chandeliers may still be stylistically nearly identical to those made in the 18th or 19th centuries. By the late 19th century, hollow glass arms were produced instead of solid ones to accommodate gas lines or electrical wiring.
Chandeliers were also produced in other countries in the 18th century, including Russia and Sweden. Russian and Scandinavian chandeliers are similar in design, with a metal frame that is lighter and more decorative, gilded or finished with brass, and hung with small slender glass drops. Russian chandeliers may be accented with coloured glass.
19th century
The 19th century was a period of great changes and development; the industrial revolution and the growth of wealth from industry greatly increased the market for chandeliers, and new methods of lighting and better production techniques emerged. Other countries such as the United States also started producing chandeliers; the first American chandelier is believed to date from 1804. New styles and more complex and elaborate chandeliers also appeared, and production of chandeliers reached a peak in the 19th century. France, which had only started producing significant amounts of high-quality glass in the late 18th century, became renowned as a producer of the finest quality chandeliers. One of the best-known French manufacturers, Baccarat, started making chandeliers in 1824. In England, Perry & Co. produced a large quantity of chandeliers, while F. & C. Osler was known for producing spectacular chandeliers, a great proportion of which went to India, the richest market for chandeliers at that time. In 1843, Osler opened a branch in Calcutta to start production of chandeliers in India.
In England, the imposition of the Glass Excise Act on all glass products in 1811 led to a new style of chandelier being created. Chandelier makers, in order to avoid paying the tax, reused broken glass pieces cut into crystal icicles and strung together, hung from circular frames in the form of a tent or canopy above a hoop, with a bag below and/or tiered sheets that resembled waterfalls. A large number of crystals are used to make such chandeliers, and many may contain over 1,000 pieces of crystal. The central stem is hidden by the crystals. These forms of Regency-era chandeliers were popular all over Europe. In France, chandeliers of similar designs are described as Empire style. After the Glass Excise Act was repealed, chandeliers with glass arms became popular again, but they became larger, bolder and more heavily decorated. The largest English-made chandelier in the world (by Hancock Rixon & Dunt and probably F. & C. Osler) is in the Dolmabahçe Palace in Istanbul; it has 750 lamps and weighs 4.5 tons.
In the 19th century, a variety of new methods for producing light that were brighter, cleaner or more convenient than candles began to be used. These included colza oil (Argand lamp), kerosene/paraffin, and gas. Due to its brightness, gas was initially only used for public lighting; later it also appeared in homes. As gas lighting caught on, branched ceiling fixtures called gasoliers (a portmanteau of gas and chandelier) were produced, and many candle chandeliers were converted. Gasoliers may have only slight variations in decoration from chandeliers, but their arms were hollow to carry the gas to the burners. Examples of gasoliers were the extravagant chandeliers in the Royal Pavilion in Brighton, first installed in 1821. While popular, gas lighting was considered too bright and harsh on the eyes, and lacking the pleasing quality of candlelight. Shades surrounding the gas lights were then added to reduce the glare. Gas lighting was eventually replaced by electric light bulbs in the early 20th century.
Electric lighting began to be introduced widely in the late 19th century. For a time, some chandeliers used both gas and electricity, with gas nozzles pointing upward while the light bulbs hung downward. As distribution of electricity widened and supplies became dependable, electric-only chandeliers became standard. Another portmanteau word, electrolier, was coined for these, but nowadays they are most commonly still called chandeliers even though no candles are used. Electrifying glass chandeliers required wiring, large areas of metal and light bulbs, but the results were often not aesthetically pleasing. A large number of light bulbs close together can also produce too much glare. Shades for the bulbs of these electroliers were therefore often added.
Modern chandeliers
At the turn of the 20th century, the chandelier still enjoyed the status it had in the previous century. Of the many lighting fixtures made that conformed to the popular contemporary styles of Art Nouveau, Art Deco and Modernism, few could properly be described as chandeliers. The popularity of chandeliers declined in the 20th century: a vast array of lighting choices became available, and chandeliers often did not fit the aesthetics of modern architecture and interior design. Light fittings of avant-garde form and material, however, started to be made around 1940. A wide variety of chandeliers of modern design appeared, ranging from the minimalist to the highly extravagant. Towards the end of the 20th century, the popularity of chandeliers revived. A number of glass artists who produced chandeliers, such as Dale Chihuly, emerged. Chandeliers were often used as decorative focal points for rooms, although some do not actually provide illumination.
Older styles of chandeliers continued to be produced in the 20th and 21st centuries, and older styles of chandeliers may also be revived, such as the Art Deco-style of chandeliers.
Incandescent light bulbs became the most common source of lighting for modern chandeliers in the 20th century, and a variety of electrical lights such as fluorescent, halogen and LED lamps are also used. Many antique chandeliers not designed for electrical wiring have also been adapted for electricity. Modern chandeliers produced in older styles, and antique chandeliers wired for electricity, usually use imitation candles, with incandescent or LED light bulbs shaped like candle flames. These light bulbs may be dimmable to adjust the brightness. Some may use bulbs containing a shimmering gas discharge that mimics candle flame.
Chandeliers around the world
The biggest chandeliers in the world are now found in Islamic countries. The chandelier in the prayer hall of the Sultan Qaboos Grand Mosque in Muscat, Oman was the biggest when it was installed in 2001. It is high, has a diameter of , and weighs over eight tonnes (8,000 kg). It is lit by over 1,122 halogen lamps and contains 600,000 pieces of crystal. The biggest chandelier in the Sheikh Zayed Grand Mosque in Abu Dhabi, with a diameter of 10 m, height of 15.5 m, weight of nearly 12 tonnes and lit with 15,500 LED lights, became the world's largest chandelier when it was installed in 2007. In 2010, a chandelier of modern design was built in the foyer of an office building in Doha, Qatar. This chandelier has a height of , width of , length of , and weight of 39,683 pounds (18 tonnes). It has 165,000 LED lights and 2,300 optical crystals and is considered the biggest interactive LED chandelier in the world. In 2022, a chandelier in height, in length and in width and weighing 16 tonnes was unveiled at the Assima Mall in Kuwait. In Egypt, the largest and heaviest chandelier in the world, weighing 24,300 kg (53,572 lb) with a diameter of 22 m (72.2 ft) in four levels, made by Asfour Crystal, was installed in the Grand Mosque of the Islamic Cultural Center in Cairo.
Glossary of terms
Adam style A Neoclassical style, light, airy and elegant chandelier – usually English.
Arm The light-bearing part of a chandelier also sometimes known as a branch.
Arm plate The metal or wooden block placed on the stem, into which the arms slot.
Bag A bag of crystal drops formed by strings hanging from a circular frame and looped back into the center underneath, associated especially with early American crystal and Regency style crystal chandeliers.
Baluster A turned wood or molded stem forming the axis of a chandelier, with alternating narrow and bulbous parts of varying widths.
Bead A glass drop with a hole drilled in it.
Bobèche A dish fitted just below the candle nozzle, designed to catch drips of wax. Also known as a drip pan.
Branch Another name for the light-bearing part of a chandelier, also known as an arm.
Cage An arrangement where the central stem supporting arms and decorations is replaced by a metal structure leaving the center clear for candles and further embellishments. Also called a "bird cage".
Candlebeam A cross made from two wooden beams with one or more cups and prickets at each end for securing candles.
Candle nozzle The small cup into which the end of the candle is slotted.
Canopy An inverted shallow dish at the top of a chandelier from which festoons of beads are often suspended, lending a flourish to the top of the fitting.
Corona Another term for crown-style chandelier.
Crown A circular chandelier reminiscent of a crown, usually of gilded metal or brass, and often with upstanding decorative elements.
Crystal Essentially a traditional marketing term for lead glass with a chemical content that gives it special qualities of clarity, resonance and softness, making it especially suitable for use in cut glass. Some chandeliers, as at the Palace of Versailles, are actually made of cut rock crystal (clear quartz), which cut glass essentially imitates.
Drip pan The dish fitted just below the candle nozzle, designed to catch drips of wax. Also known as a bobèche.
Drop A small piece of glass usually cut into one of many shapes and drilled at one end so that it can be hung from the chandelier as a pendant with a brass pin. A chain drop is drilled at both ends so that a series can be hung together to form a necklace or festoon.
Dutch Also known as Flemish, a style of brass chandelier with a bulbous baluster and arms curving down around a low hung ball.
Festoon An arrangement of glass drops or beads draped and hung across or down a glass chandelier, or sometimes a piece of solid glass shaped into a swag. Also known as a garland.
Finial The final flourish at the very bottom of the stem. Some Venetian glass chandeliers have little finials hanging from glass rings on the arms.
Hoop A circular metal support for arms, usually on a Regency-style or other chandelier with glass pieces. Also known as a ring.
Montgolfière chandelier A chandelier with a rounded bottom, like an inverted hot air balloon, named after the Montgolfier brothers, the early French balloonists.
Molded The process by which a glass piece is shaped by being pressed or blown into a mold.
Neoclassical style chandelier A glass chandelier featuring many delicate arms, spires and strings of ovals, rhomboids or octagons.
Panikadilo Gothic candelabrum chandelier hung from centers of Greek Orthodox cathedrals' domes.
Pendeloque Specific pear- and drop-shaped versions of drops.
Prism A straight, many-sided drop.
Regency style chandelier A larger chandelier with a multitude of drops. Above a hoop rise strings of beads that diminish in size and attach at the top to form a canopy. A bag, with concentric rings of pointed glass, forms a waterfall beneath. The stem is usually completely hidden.
Soda glass A type of glass used typically in Venetian glass chandeliers. Soda glass remains "plastic" for longer when heated, and can therefore be shaped into elegant curving leaves and flowers. It refracts light poorly and is normally fire-polished.
Spire A tall spike of glass, round in section or flat-sided.
Stem The central shaft of a chandelier, to which arms and decorative elements may be attached, made from wood, metal or glass.
Tent A tent-shaped structure on the upper part of a glass chandelier where necklaces of drops attach at the top to a canopy and at the bottom to a larger ring.
Venetian Glass from the island of Murano, Venice, but usually used to describe any chandelier in the Venetian style.
Waterfall or wedding cake Concentric rings of icicle drops suspended beneath the hoop or plate.
See also
Candelabra
Ceiling rose
Girandole
Ljuskrona
J. & L. Lobmeyr, the first company to make an electric chandelier
Light fixture
Sconce
Wheel chandelier
References
Sources
Katz, Cheryl and Jeffrey. Chandeliers. Rockport Publishers, 2001.
Parissien, Steven. Regency Style. Phaidon, 1992.
Light fixtures
Glass art
Ceilings
Chandeliers | Chandelier | [
"Engineering"
] | 6,543 | [
"Structural engineering",
"Ceilings"
] |
614,723 | https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Plasma%20Physics | The Max Planck Institute for Plasma Physics (IPP) is a physics institute investigating the physical foundations of a fusion power plant.
The IPP is an institute of the Max Planck Society, part of the European Atomic Energy Community, and an associated member of the Helmholtz Association.
The IPP has two sites: Garching near Munich (founded 1960) and Greifswald (founded 1994), both in Germany.
It owns several large devices, namely the experimental tokamak ASDEX Upgrade (in operation since 1991), the experimental stellarator Wendelstein 7-X (in operation since 2016), a tandem accelerator, and a high-heat-flux test facility (GLADIS).
Furthermore, it cooperates closely with the ITER, DEMO and JET projects.
The International Helmholtz Graduate School for Plasma Physics partners with the Technical University of Munich (TUM) and the University of Greifswald. Associated partners are the Leibniz Institute for Plasma Science and Technology (INP) in Greifswald and the Leibniz Supercomputing Centre (LRZ) in Garching.
External links
References
Fusion power
Plasma physics facilities
Physics research institutes
Plasma Physics
University of Greifswald
Garching bei München
Max Planck | Max Planck Institute for Plasma Physics | [
"Physics",
"Chemistry"
] | 252 | [
"Plasma physics",
"Fusion power",
"Plasma physics stubs",
"Plasma physics facilities",
"Nuclear fusion"
] |
614,750 | https://en.wikipedia.org/wiki/Trastuzumab | Trastuzumab, sold under the brand name Herceptin among others, is a monoclonal antibody used to treat breast cancer and stomach cancer. It is specifically used for cancer that is HER2 receptor positive. It may be used by itself or together with other chemotherapy medication. Trastuzumab is given by slow injection into a vein and injection just under the skin.
Common side effects include fever, infection, cough, headache, trouble sleeping, and rash. Other severe side effects include heart failure, allergic reactions, and lung disease. Use during pregnancy may harm the baby. Trastuzumab works by binding to the HER2 receptor and slowing down cell replication.
Trastuzumab was approved for medical use in the United States in September 1998, and in the European Union in August 2000. It is on the World Health Organization's List of Essential Medicines.
Medical uses
The safety and efficacy of trastuzumab-containing combination therapies (with chemotherapy, hormone blockers, or lapatinib) have been evaluated in randomized trials for the treatment of metastatic breast cancer. The overall hazard ratios (HR) for overall survival and progression-free survival were 0.82 and 0.61, respectively. It was difficult to accurately ascertain the true impact of trastuzumab on survival, as in three of the seven trials over half of the patients in the control arm were allowed to cross over and receive trastuzumab after their cancer began to progress. Thus, this analysis likely underestimates the true survival benefit associated with trastuzumab treatment in this population.
In early-stage HER2-positive breast cancer, trastuzumab-containing regimens improved overall survival (HR = 0.66) and disease-free survival (HR = 0.60). Increased risk of heart failure (relative risk, RR = 5.11) and decline in left ventricular ejection fraction (RR = 1.83) were seen in these trials as well. Two trials involving shorter-term treatment with trastuzumab did not differ in efficacy from longer trials, but produced less cardiac toxicity.
The original studies of trastuzumab showed that it improved overall survival in late-stage (metastatic) HER2-positive breast cancer from 20.3 to 25.1 months. In early-stage HER2-positive breast cancer, it reduces the risk of cancer returning after surgery. The absolute reduction in the risk of cancer returning within three years was 9.5%, and the absolute reduction in the risk of death within three years was 3%. However, it increases serious heart problems by an absolute risk of 2.1%, though the problems may resolve if treatment is stopped.
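These absolute-risk figures map directly onto the "number needed to treat" (NNT) and "number needed to harm" (NNH). The following minimal Python sketch illustrates that arithmetic only; the function name is invented here, and the percentages are simply the trial figures quoted above:

```python
def patients_per_event(absolute_risk_change: float) -> float:
    """NNT (or NNH) = 1 / absolute risk change."""
    return 1.0 / absolute_risk_change

arr_recurrence = 0.095  # 9.5% absolute reduction in 3-year recurrence
arr_death = 0.03        # 3% absolute reduction in 3-year mortality
ari_cardiac = 0.021     # 2.1% absolute increase in serious heart problems

print(f"NNT to prevent one recurrence: {patients_per_event(arr_recurrence):.1f}")  # ~10.5
print(f"NNT to prevent one death:      {patients_per_event(arr_death):.1f}")       # ~33.3
print(f"NNH for one cardiac event:     {patients_per_event(ari_cardiac):.1f}")     # ~47.6
```

On these numbers, roughly ten to eleven patients must be treated to prevent one recurrence, while one additional serious cardiac event is expected for roughly every 48 patients treated.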
Trastuzumab has had a "major impact in the treatment of HER2-positive metastatic breast cancer." The combination of trastuzumab with chemotherapy has been shown to increase both survival and response rate, in comparison to trastuzumab alone.
It is possible to determine the "erbB2 status" of a tumor, which can be used to predict the efficacy of treatment with trastuzumab. If a tumor is determined to be overexpressing the erbB2 oncogene and the patient has no significant pre-existing heart disease, then the patient is eligible for treatment with trastuzumab. Surprisingly, although trastuzumab has great affinity for HER2 and high doses can be administered (because of its low toxicity), 70% of HER2+ patients do not respond to treatment. In fact, resistance to the treatment develops rapidly in virtually all patients. One mechanism of resistance involves failure to downregulate p27 (Kip1), together with suppression of p27 translocation to the nucleus in breast cancer cells, enabling cdk2 to induce cell proliferation.
In May 2021, the FDA approved pembrolizumab in combination with trastuzumab, fluoropyrimidine- and platinum-containing chemotherapy for the first-line treatment of people with locally advanced unresectable or metastatic HER2 positive gastric or gastroesophageal junction (GEJ) adenocarcinoma.
Duration of treatment
The optimal duration of add-on trastuzumab treatment after surgery for early breast cancer is unknown. One year of treatment is generally accepted based on clinical trial evidence that demonstrated the superiority of one-year treatment over none. However, a small Finnish trial also showed similar improvement with nine weeks of treatment over no therapy. Because of the lack of direct head-to-head comparison in clinical trials, it is unknown whether a shorter duration of treatment may be just as effective (with fewer side effects) than the accepted practice of treatment for one year. Debate about treatment duration has become a relevant issue for many public health policy makers because administering trastuzumab for a year is very expensive. Consequently, some countries with a taxpayer-funded public health system, such as New Zealand, chose to fund limited adjuvant therapy. However, subsequently New Zealand has revised its policy and now funds trastuzumab treatment for up to 12 months.
Adverse effects
Some of the common side effects of trastuzumab are flu-like symptoms (such as fever, chills and mild pain), nausea and diarrhea.
One of the more serious complications of trastuzumab is its effect on the heart, although this is rare. In 2–7% of cases, trastuzumab is associated with cardiac dysfunction, which includes congestive heart failure. As a result, regular cardiac screening with either a MUGA scan or echocardiography is commonly undertaken during the trastuzumab treatment period. The decline in ejection fraction appears to be reversible.
Trastuzumab downregulates neuregulin-1 (NRG-1), which is essential for the activation of cell survival pathways in cardiomyocytes and the maintenance of cardiac function. NRG-1 activates the MAPK pathway and the PI3K/AKT pathway as well as focal adhesion kinases (FAK). These are all significant for the function and structure of cardiomyocytes. Trastuzumab can therefore lead to cardiac dysfunction.
Trastuzumab may harm a developing fetus.
Mechanism of action
The HER2 gene (also known as HER2/neu and ErbB2 gene) is amplified in 20–30% of early-stage breast cancers. Trastuzumab is a monoclonal antibody targeting HER2, inducing an immune-mediated response that causes internalization and recycling of HER2. It may also upregulate cell cycle inhibitors such as p21Waf1 and p27Kip1.
The HER2 pathway promotes cell growth and division when it is functioning normally; however, when it is overexpressed, cell growth accelerates beyond its normal limits. In some types of cancer, the pathway is exploited to promote rapid cell growth and proliferation and hence tumor formation. The EGF pathway includes the receptors HER1 (EGFR), HER2, HER3, and HER4; the binding of ligands (e.g. EGF etc.) to HER receptors is required to activate the pathway. The pathway initiates the MAP kinase pathway as well as the PI3 kinase/AKT pathway, which in turn activates the NF-κB pathway. In cancer cells the HER2 protein can be expressed up to 100 times more than in normal cells (2 million versus 20,000 per cell).
The HER receptors are proteins that are embedded in the cell membrane and communicate molecular signals from outside the cell (molecules called EGFs) to inside the cell, and turn genes on and off. The HER (human epidermal growth factor receptor) protein binds to human epidermal growth factor and stimulates cell proliferation. In some cancers, notably certain types of breast cancer, HER2 is over-expressed and causes cancer cells to reproduce uncontrollably.
HER2 is localized at the cell surface, and carries signals from outside the cell to the inside. Signaling compounds called mitogens (specifically EGF in this case) arrive at the cell membrane and bind to the extracellular domain of the HER family of receptors. Those bound proteins then link (dimerize), activating the receptor. HER2 sends a signal from its intracellular domain, activating several different biochemical pathways. These include the PI3K/Akt pathway and the MAPK pathway. Signals on these pathways promote cell proliferation and the growth of blood vessels to nourish the tumor (angiogenesis). ERBB2 is the preferred dimerization partner for the other family members, and signaling by ERBB2 heterodimers is stronger and longer-acting than that of heterodimers between other ERBB members. It has been reported that trastuzumab induces the formation of circular dorsal ruffles (CDRs), leading to surface redistribution of ERBB2 and EGFR within the CDRs, and that ERBB2-dependent MAPK phosphorylation and EGFR/ERBB1 expression are both required for CDR formation. CDR formation requires activation both of N-WASP, a protein regulator of actin polymerization, mediated by ERK1/2, and of the actin-depolymerizing protein cofilin, mediated by EGFR/ERBB1. This latter event may be inhibited by the negative cell-motility regulator p140Cap, whose overexpression has been reported to lead to cofilin deactivation and inhibition of CDR formation.
Normal cell division—mitosis—has checkpoints that keep cell division under control. Some of the proteins that control this cycle are called cdk2 (CDKs). Overexpression of HER2 sidesteps these checkpoints, causing cells to proliferate in an uncontrolled fashion.
Trastuzumab binds to domain IV of the extracellular segment of the HER2/neu receptor. Monoclonal antibodies that bind to this region have been shown to reverse the phenotype of HER2/neu expressing tumor cells. Cells treated with trastuzumab undergo arrest during the G1 phase of the cell cycle so there is reduced proliferation. It has been suggested that trastuzumab does not alter HER-2 expression, but downregulates activation of AKT. In addition, trastuzumab suppresses angiogenesis both by induction of antiangiogenic factors and repression of proangiogenic factors. It is thought that a contribution to the unregulated growth observed in cancer could be due to proteolytic cleavage of HER2/neu that results in the release of the extracellular domain. One of the most relevant proteins that trastuzumab activates is the tumor suppressor p27 (kip1), also known as CDKN1B. Trastuzumab has been shown to inhibit HER2/neu ectodomain cleavage in breast cancer cells.
Experiments in laboratory animals indicate that antibodies, including trastuzumab, when bound to a cell, induce immune cells to kill that cell, and that such antibody-dependent cell-mediated cytotoxicity is another important mechanism of action.
Predicting response
Trastuzumab inhibits the effects of overexpression of HER2. If the breast cancer does not overexpress HER2, trastuzumab will have no beneficial effect (and may cause harm). Doctors use laboratory tests to discover whether HER2 is overexpressed. In the routine clinical laboratory, the most commonly employed methods for this are immunohistochemistry (IHC) and either silver, chromogenic or fluorescent in situ hybridisation (SISH/CISH/FISH). HER2 amplification can be detected by virtual karyotyping of formalin-fixed paraffin embedded tumor. Virtual karyotyping has the added advantage of assessing copy number changes throughout the genome, in addition to detecting HER-2 amplification (but not overexpression). Numerous PCR-based methodologies have also been described in the literature. It is also possible to estimate HER2 copy number from microarray data.
There are two FDA-approved commercial kits available for HER2 IHC; Dako HercepTest and Ventana Pathway.
Fluorescent in situ hybridization (FISH) is viewed as being the "gold standard" technique in identifying patients who would benefit from trastuzumab, but it is expensive and requires fluorescence microscopy and an image capture system. The main expense involved with CISH is in the purchase of FDA-approved kits, and as it is not a fluorescent technique it does not require specialist microscopy, and slides may be kept permanently. Comparative studies of CISH and FISH have shown that these two techniques show excellent correlation. The lack of a separate chromosome 17 probe on the same section is an issue with regards to acceptance of CISH. As of June 2011, Roche has obtained FDA approval for the INFORM HER2 Dual ISH DNA Probe cocktail developed by Ventana Medical Systems. The DDISH (dual-chromogen/dual-hapten in-situ hybridization) cocktail uses both HER2 and chromosome 17 hybridization probes for chromogenic visualization on the same tissue section. The detection can be achieved by using a combination of ultraView SISH (silver in-situ hybridization) and ultraView Red ISH for deposition of distinct chromogenic precipitates at the site of DNP- or DIG-labeled probes.
Resistance
One of the challenges in treating breast cancer patients with Herceptin is understanding Herceptin resistance. In the last decade, several assays have been performed to investigate the mechanism of Herceptin resistance, with and without supplementary drugs. Recently, this information has been collected and compiled in the form of a database, HerceptinR.
History
The drug was first discovered by scientists including Axel Ullrich and H. Michael Shepard at Genentech, Inc. in South San Francisco, CA. Earlier discoveries about the neu oncogene by Robert Weinberg's lab, and of a monoclonal antibody recognizing the oncogenic receptor by Mark Greene's lab, also contributed to the establishment of HER2-targeted therapies. Dr. Dennis Slamon subsequently worked on trastuzumab's development. A book about Dr. Slamon's work was made into a television film called Living Proof, which premiered in 2008. Genentech developed trastuzumab jointly with UCLA, beginning the first clinical trial with 15 women in 1992. By 1996, clinical trials had expanded to over 900 women, but due to pressure from advocates based on early success, Genentech worked with the FDA to begin a lottery system allowing 100 women each quarter access to the medication outside the trials. Herceptin was fast-tracked by the FDA and gained approval in September 1998.
Biocon Ltd and its partner Mylan obtained regulatory approval to sell a biosimilar in 2014, but Roche contested the legality of the approval; that litigation ended in 2016, and Biocon and Mylan each introduced their own branded biosimilars.
Society and culture
Economics
Trastuzumab costs about for a full course of treatment.
Australia has negotiated a lower price of A$50,000 per course of treatment.
Since October 2006, trastuzumab has been made available for Australian women and men with early-stage breast cancer via the Pharmaceutical Benefits Scheme. This is estimated to cost the country over A$470 million for 4–5 years supply of the drug.
Roche has agreed with Emcure in India to make an affordable version of this cancer drug available to the Indian market.
Roche has changed the brand name of the drug and has re-introduced an affordable version of the same in the Indian market. The new drug, named Herclon, would cost approximately Rs. 75,000 (about US$1,200) in the Indian market.
On 16 September 2014, Genentech notified hospitals in the United States that, as of October, trastuzumab could only be purchased through their selected specialty drugs distributors not through the usual general line wholesalers. By being forced to purchase through specialty pharmacies, hospitals lost rebates from the big wholesalers and the ability to negotiate cost-minus discounts with their wholesalers.
Biosimilars
By 2014, around 20 companies, particularly from emerging markets, were developing biosimilar versions of trastuzumab, after Roche/Genentech's patents expired in 2014 in Europe and in 2019 in the United States.
In January 2015, BIOCAD announced the first trastuzumab biosimilar approved by the Ministry of Health of the Russian Federation. Iran also approved its own version of the monoclonal antibody in January 2016, as AryoTrust, and announced its readiness to export the drug to other countries in the Middle-East and Central Asia when trade sanctions were lifted.
In 2016, the investigational biosimilar MYL-1401O showed comparable efficacy and safety to the Herceptin branded trastuzumab. Trastuzumab-dkst (Ogivri, Mylan GmbH) was approved in the United States in December 2017, to "treat people with breast cancer or gastric or gastroesophageal junction adenocarcinoma whose tumors overexpress the HER-2 gene." Ogivri was authorized for medical use in the European Union in December 2018.
In November 2017, the European Commission authorized Ontruzant, a biosimilar from Samsung Bioepis Co., Ltd, for the treatment of early breast cancer, metastatic breast cancer and metastatic gastric cancer. Ontruzant is the first trastuzumab biosimilar to receive regulatory approval in the European Union.
Herzuma was authorized for medical use in the European Union in February 2018. Herzuma, a trastuzumab biosimilar, was approved in the United States in December 2018. The approval was based on comparisons of extensive structural and functional product characterization, animal data, human pharmacokinetic, clinical immunogenicity, and other clinical data demonstrating that Herzuma is biosimilar to US Herceptin. Herzuma has been approved as a biosimilar, not as an interchangeable product.
Kanjinti was authorized for medical use in the European Union in May 2018.
Trazimera was authorized for medical use in the European Union in July 2018.
Ogivri was approved for medical use in Canada in May 2019.
Trazimera was approved for medical use in Canada in August 2019.
Herzuma was approved for medical use in Canada in September 2019.
Kanjinti was approved for medical use in Canada in February 2020.
Zercepac was authorized for medical use in the European Union in July 2020.
Trastucip and Tuzucip were approved for medical use in Australia in July 2022.
In September 2023, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Herwenda, intended for the treatment of HER2-positive breast and gastric cancer. The applicant for this medicinal product is Sandoz GmbH. Herwenda was authorized for medical use in the European Union in November 2023.
Trastuzumab-strf (Hercessi) was approved for medical use in the United States in April 2024.
In July 2024, the CHMP adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Tuznue, intended for the treatment of breast and gastric cancer. The applicant for this medicinal product is Prestige Biopharma Belgium BVBA. Tuznue is a biosimilar medicinal product. Tuznue was authorized for medical use in the European Union in September 2024.
Adheroza was approved for medical use in Canada in August 2024.
Related conjugates
Trastuzumab is also a component of some antibody-drug conjugates, such as trastuzumab emtansine, and trastuzumab deruxtecan.
References
Further reading
External links
Drugs developed by Genentech
Drugs developed by Hoffmann-La Roche
Immunology
Drugs developed by Merck & Co.
Monoclonal antibodies for tumors
Wikipedia medicine articles ready to translate
Specialty drugs
World Health Organization essential medicines | Trastuzumab | [
"Biology"
] | 4,289 | [
"Immunology",
"Specialty drugs"
] |
614,763 | https://en.wikipedia.org/wiki/Stark%20effect | The Stark effect is the shifting and splitting of spectral lines of atoms and molecules due to the presence of an external electric field. It is the electric-field analogue of the Zeeman effect, where a spectral line is split into several components due to the presence of the magnetic field. Although initially coined for the static case, it is also used in the wider context to describe the effect of time-dependent electric fields. In particular, the Stark effect is responsible for the pressure broadening (Stark broadening) of spectral lines by charged particles in plasmas. For most spectral lines, the Stark effect is either linear (proportional to the applied electric field) or quadratic with a high accuracy.
The Stark effect can be observed both for emission and absorption lines. The latter is sometimes called the inverse Stark effect, but this term is no longer used in the modern literature.
History
The effect is named after the German physicist Johannes Stark, who discovered it in 1913. It was independently discovered in the same year by the Italian physicist Antonino Lo Surdo. The discovery of this effect contributed importantly to the development of quantum theory, and Stark was awarded the Nobel Prize in Physics in 1919.
Inspired by the magnetic Zeeman effect, and especially by Hendrik Lorentz's explanation of it, Woldemar Voigt performed classical mechanical calculations of quasi-elastically bound electrons in an electric field. By using experimental indices of refraction he gave an estimate of the Stark splittings. This estimate was a few orders of magnitude too low. Not deterred by this prediction, Stark undertook measurements on excited states of the hydrogen atom and succeeded in observing splittings.
By the use of the Bohr–Sommerfeld ("old") quantum theory, Paul Epstein and Karl Schwarzschild were independently able to derive equations for the linear and quadratic Stark effect in hydrogen. Four years later, Hendrik Kramers derived formulas for intensities of spectral transitions. Kramers also included the effect of fine structure, with corrections for relativistic kinetic energy and coupling between electron spin and orbital motion. The first quantum mechanical treatment (in the framework of Werner Heisenberg's matrix mechanics) was by Wolfgang Pauli. Erwin Schrödinger discussed at length the Stark effect in his third paper on quantum theory (in which he introduced his perturbation theory), once in the manner of the 1916 work of Epstein (but generalized from the old to the new quantum theory) and once by his (first-order) perturbation approach.
Finally, Epstein reconsidered the linear and quadratic Stark effect from the point of view of the new quantum theory. He derived equations for the line intensities which were a decided improvement over Kramers's results obtained by the old quantum theory.
While the first-order-perturbation (linear) Stark effect in hydrogen is in agreement with both the old Bohr–Sommerfeld model and the quantum-mechanical theory of the atom, higher-order corrections are not. Measurements of the Stark effect under high field strengths confirmed the correctness of the new quantum theory.
Mechanism
Overview
Imagine an atom with occupied 2s and 2p electron states. In the Bohr model, these states are degenerate. However, in the presence of an external electric field, these electron orbitals will hybridize into eigenstates of the perturbed Hamiltonian (where each perturbed hybrid state can be written as a superposition of unperturbed states). Since the 2s and 2p states have opposite parity, these hybrid states will lack inversion symmetry and will possess a time-averaged electric dipole moment. If this dipole moment is aligned with the electric field, the energy of the state will shift down; if this dipole moment is anti-aligned with the electric field, the energy of the state will shift up. Thus, the Stark effect causes a splitting of the original degeneracy.
Other things being equal, the effect of the electric field is greater for outer electron shells because the electron is more distant from the nucleus, resulting in a larger electric dipole moment upon hybridization.
Multipole expansion
The Stark effect originates from the interaction between a charge distribution (atom or molecule) and an external electric field.
The interaction energy of a continuous charge distribution $\rho(\mathbf{r})$, confined within a finite volume $\mathcal{V}$, with an external electrostatic potential $\phi(\mathbf{r})$ is
$$V_{\text{int}} = \int_{\mathcal{V}} \rho(\mathbf{r})\, \phi(\mathbf{r})\, \mathrm{d}^3 r .$$
This expression is valid classically and quantum-mechanically alike.
If the potential varies weakly over the charge distribution, the multipole expansion converges fast, so only the first few terms give an accurate approximation. Namely, keeping only the zeroth- and first-order terms,
$$\phi(\mathbf{r}) \approx \phi(\mathbf{0}) - \mathbf{r} \cdot \mathbf{F},$$
where we introduced the electric field $\mathbf{F} \equiv -\nabla\phi\,\big|_{\mathbf{0}}$ and assumed the origin $\mathbf{0}$ to be somewhere within $\mathcal{V}$.
Therefore, the interaction becomes
$$V_{\text{int}} = q\, \phi(\mathbf{0}) - \boldsymbol{\mu} \cdot \mathbf{F},$$
where $q = \int_{\mathcal{V}} \rho(\mathbf{r})\, \mathrm{d}^3 r$ and $\boldsymbol{\mu} = \int_{\mathcal{V}} \mathbf{r}\, \rho(\mathbf{r})\, \mathrm{d}^3 r$ are, respectively, the total charge (zeroth moment) and the dipole moment of the charge distribution.
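For a discrete set of point charges the integrals become sums, and the truncated expansion is easy to check numerically. The following minimal Python sketch (the charge values and field strength are arbitrary illustrative numbers) compares the exact interaction energy with the monopole-plus-dipole expression; for a strictly uniform field the two agree exactly, since all higher multipole terms vanish:

```python
import numpy as np

# Toy charge distribution: two opposite point charges (a finite "dipole").
charges = np.array([+1.0, -1.0])
positions = np.array([[0.0, 0.0, +0.1],
                      [0.0, 0.0, -0.1]])

# Weakly varying external potential phi(r) = phi0 - F.r (uniform field F).
F = np.array([0.0, 0.0, 1e-3])
phi0 = 2.0
phi = lambda r: phi0 - r @ F

# Exact interaction energy: sum_i q_i * phi(r_i).
E_exact = sum(q * phi(r) for q, r in zip(charges, positions))

# Multipole truncation: q_total * phi(0) - mu . F.
q_total = charges.sum()
mu = (charges[:, None] * positions).sum(axis=0)
E_multipole = q_total * phi0 - mu @ F

print(E_exact, E_multipole)  # identical here: -0.0002 -0.0002
```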
Classical macroscopic objects are usually neutral or quasi-neutral (), so the first, monopole, term in the expression above is identically zero. This is also the case for a neutral atom or molecule. However, for an ion this is no longer true. Nevertheless, it is often justified to omit it in this case, too. Indeed, the Stark effect is observed in spectral lines, which are emitted when an electron "jumps" between two bound states. Since such a transition only alters the internal degrees of freedom of the radiator but not its charge, the effects of the monopole interaction on the initial and final states exactly cancel each other.
Perturbation theory
Turning now to quantum mechanics, an atom or a molecule can be thought of as a collection of point charges (electrons and nuclei), so that the second definition of the dipole applies. The interaction of an atom or molecule with a uniform external field is described by the operator
$$V_{\text{int}} = -\mathbf{F} \cdot \boldsymbol{\mu}.$$
This operator is used as a perturbation in first- and second-order perturbation theory to account for the first- and second-order Stark effect.
First order
Let the unperturbed atom or molecule be in a g-fold degenerate state with orthonormal zeroth-order state functions $\psi_1^0, \ldots, \psi_g^0$. (Non-degeneracy is the special case $g = 1$.) According to perturbation theory, the first-order energies are the eigenvalues of the $g \times g$ matrix with general element
$$(\mathbf{V}_{\text{int}})_{kl} = \langle \psi_k^0 | V_{\text{int}} | \psi_l^0 \rangle = -\mathbf{F} \cdot \langle \psi_k^0 | \boldsymbol{\mu} | \psi_l^0 \rangle, \qquad k, l = 1, \ldots, g.$$
If $g = 1$ (as is often the case for electronic states of molecules) the first-order energy becomes proportional to the expectation (average) value of the dipole operator $\boldsymbol{\mu}$,
$$E^{(1)} = -\mathbf{F} \cdot \langle \psi_1^0 | \boldsymbol{\mu} | \psi_1^0 \rangle.$$
Since the electric dipole moment is a vector (tensor of the first rank), the diagonal elements of the perturbation matrix $\mathbf{V}_{\text{int}}$ vanish for states with a definite parity. Atoms and molecules possessing inversion symmetry do not have a (permanent) dipole moment and hence do not show a linear Stark effect.
In order to obtain a non-zero matrix $\mathbf{V}_{\text{int}}$ for systems with an inversion center it is necessary that some of the unperturbed functions have opposite parity (obtain plus and minus under inversion), because only functions of opposite parity give non-vanishing matrix elements. Degenerate zeroth-order states of opposite parity occur for excited hydrogen-like (one-electron) atoms or Rydberg states. Neglecting fine-structure effects, such a state with the principal quantum number $n$ is $n^2$-fold degenerate and
$$n^2 = \sum_{\ell=0}^{n-1} (2\ell + 1),$$
where $\ell$ is the azimuthal (angular momentum) quantum number. For instance, the excited $n = 4$ state contains the following states,
$$16 = 1 + 3 + 5 + 7 \qquad (s \oplus p \oplus d \oplus f).$$
The one-electron states with even $\ell$ are even under parity, while those with odd $\ell$ are odd under parity. Hence hydrogen-like atoms with $n > 1$ show a first-order Stark effect.
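The classic case is the $n = 2$ shell of hydrogen. In atomic units the only non-vanishing dipole matrix element within the shell is $\langle 2s | z | 2p, m{=}0 \rangle = -3$ (up to a phase convention), so diagonalizing the $4 \times 4$ perturbation matrix splits the level into $-3F$, $0$, $0$, $+3F$. A minimal numerical sketch of this diagonalization (the field value is an arbitrary illustrative number):

```python
import numpy as np

F = 1e-4  # uniform field along z, atomic units (illustrative value)

# Basis ordering: |2s>, |2p,m=0>, |2p,m=+1>, |2p,m=-1>.
# Perturbation V = F*z; the only nonzero element is <2s|z|2p,m=0> = -3
# (atomic units, up to a phase convention).
V = np.zeros((4, 4))
V[0, 1] = V[1, 0] = -3.0 * F

shifts = np.linalg.eigvalsh(V)
print(shifts)  # [-3F, 0, 0, +3F]: linear splitting of the degenerate n=2 level
```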
The first-order Stark effect occurs in rotational transitions of symmetric top molecules (but not of linear and asymmetric molecules). In first approximation a molecule may be seen as a rigid rotor. A symmetric top rigid rotor has the unperturbed eigenstates
$$|JKM\rangle = \sqrt{\frac{2J+1}{8\pi^2}}\, D^{J\,*}_{MK}(\alpha, \beta, \gamma),$$
with $2(2J+1)$-fold degenerate energy for $|K| > 0$ and $(2J+1)$-fold degenerate energy for $K = 0$.
Here $D^J_{MK}$ is an element of the Wigner D-matrix. The first-order perturbation matrix on the basis of the unperturbed rigid rotor functions is non-zero and can be diagonalized. This gives shifts and splittings
$$\Delta E^{(1)} = -\mu F\, \frac{MK}{J(J+1)}$$
in the rotational spectrum. Quantitative analysis of these Stark shifts yields the permanent electric dipole moment $\mu$ of the symmetric top molecule.
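The pattern of these first-order shifts is easy to tabulate. A minimal sketch, with the product $\mu F$ folded into a single arbitrary scale factor so the absolute numbers are illustrative only:

```python
# First-order symmetric-top Stark shift: dE = -muF * M*K / (J*(J+1)).
muF = 1.0  # product of dipole moment and field strength (arbitrary units)

for J in (1, 2):
    for K in range(1, J + 1):          # K = 0 states are unshifted at first order
        for M in range(-J, J + 1):
            dE = -muF * M * K / (J * (J + 1))
            print(f"J={J} K={K} M={M:+d}  shift = {dE:+.3f} * muF")
```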
Second order
As stated, the quadratic Stark effect is described by second-order perturbation theory. The zeroth-order eigenproblem
$$H^{(0)} \psi_k = E_k^{(0)} \psi_k, \qquad k = 0, 1, 2, \ldots,$$
is assumed to be solved. The perturbation theory gives
$$E^{(2)} = -\frac{1}{2} \sum_{\alpha, \beta = x,y,z} \alpha_{\alpha\beta} F_\alpha F_\beta,$$
with the components of the polarizability tensor $\alpha$ defined by
$$\alpha_{\alpha\beta} = 2 \sum_{k>0} \frac{\langle \psi_0 | \mu_\alpha | \psi_k \rangle \langle \psi_k | \mu_\beta | \psi_0 \rangle}{E_k^{(0)} - E_0^{(0)}}.$$
The energy $E^{(2)}$ gives the quadratic Stark effect.
Neglecting the hyperfine structure (which is often justified, unless extremely weak electric fields are considered), the polarizability tensor of atoms is isotropic,
$$\alpha_{\alpha\beta} = \alpha_0\, \delta_{\alpha\beta} \quad \Longrightarrow \quad E^{(2)} = -\frac{1}{2} \alpha_0 F^2.$$
For some molecules this expression is a reasonable approximation, too. For the ground state $\alpha_0$ is always positive, i.e., the quadratic Stark shift is always negative.
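The sum-over-states formula can be evaluated directly for any finite model system. A minimal Python sketch for a toy two-level system (the energy gap and dipole matrix element are invented illustrative values, not data for any real atom):

```python
# Toy two-level system: ground state |0>, excited state |1>.
dE = 0.5    # excitation energy E1 - E0 (illustrative, atomic units)
mu01 = 1.2  # transition dipole matrix element <0|mu|1> (illustrative)

# Sum-over-states polarizability; here only one excited state contributes:
# alpha0 = 2 * |<0|mu|1>|^2 / (E1 - E0)
alpha0 = 2.0 * mu01**2 / dE

# Quadratic Stark shift E(2) = -alpha0 * F^2 / 2 (negative for the ground state).
for F in (0.001, 0.01, 0.1):
    print(f"F = {F}: E(2) = {-0.5 * alpha0 * F**2:.3e}")
```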
Problems
The perturbative treatment of the Stark effect has some problems. In the presence of an electric field, states of atoms and molecules that were previously bound (square-integrable), become formally (non-square-integrable) resonances of finite width. These resonances may decay in finite time via field ionization. For low lying states and not too strong fields the decay times are so long, however, that for all practical purposes the system can be regarded as bound. For highly excited states and/or very strong fields ionization may have to be accounted for. (See also the article on the Rydberg atom).
Applications
The Stark effect is at the basis of the spectral shift measured for voltage-sensitive dyes used for imaging of the firing activity of neurons.
See also
Zeeman effect
Autler–Townes effect
Quantum-confined Stark effect
Stark spectroscopy
Inglis–Teller equation
Electric field NMR
Stark effect in semiconductor optics
References
Further reading
Atomic physics
Foundational quantum physics
Physical phenomena
Spectroscopy | Stark effect | [
"Physics",
"Chemistry"
] | 2,110 | [
"Physical phenomena",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Foundational quantum physics",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
"Spectroscopy",
" and optical physics"
] |
614,780 | https://en.wikipedia.org/wiki/USG%20Corporation | USG Corporation, also known as United States Gypsum Corporation, is an American company which manufactures construction materials, most notably drywall and joint compound. The company is the largest distributor of wallboard in the United States and the largest manufacturer of gypsum products in North America. It is also a major consumer of synthetic gypsum, a byproduct of flue-gas desulfurization. Its corporate offices are located at 550 West Adams Street in Chicago, Illinois.
USG's most significant brands include Sheetrock Brand Gypsum Panels, Securock Brand Glass-Mat Sheathing and Sheetrock Brand All Purpose Joint Compound.
In December 2013, Warren Buffett's Berkshire Hathaway became the largest shareholder in the company (holding roughly 30%) when it converted USG convertible notes it had acquired in 2008 to common stock.
In June 2018, USG entered into an agreement to be purchased by the privately held building materials company Knauf. The deal closed in April 2019, and USG now operates as an independent subsidiary of Knauf, still headquartered in Chicago, Illinois.
History
In 1890, New York Coal Tar Chemical Company employees Fred L. Kane and Augustine Sackett developed plaster board by sandwiching plaster of Paris between layers of heavy paper, strengthening the plaster and creating a viable competitor to traditional lime plaster. Sackett patented the new drywall product as Sackett Board in 1894.
Since gypsum was plentiful, available at a relatively low price, and used a simple manufacturing process, new firms flooded and fragmented the market, placing constant downward pressure on prices.
On December 27, 1901, 30 gypsum and plaster companies merged to form the United States Gypsum Company (USG), resulting in the creation of the first nationwide gypsum company in the United States. The new company combined the operations of 37 mining and calcining plants producing agricultural and construction plaster. Directors of the new firm selected B.W. McCausland of Michigan as its first president; in 1905, he was succeeded by Sewell Avery, the son of his former Alabaster Company business partner, who served in the role for 35 years.
In 1909, Avery led the USG acquisition of the Sackett Plaster Board Company, inventor of Sackett Board, which was a panel made of multiple layers of plaster and paper. Patented by USG in 1912, a new manufacturing process produced boards with a single layer of plaster and paper that could be joined flush along a wall with a relatively smooth surface. Originally called Adamant Plaster Board, the product became known as Sheetrock in 1917, with the new term credited to USG sales representative D.L. Hunter of Fort Dodge, Iowa.
By the 1930s, the company's policy of diffusion of manufacturing facilities, vertical integration and product diversification allowed it to operate profitably every year during the Great Depression. The 1933 Chicago World’s Fair featured buildings made almost entirely out of sheetrock panels, which led to the brand's first major advertising campaign.
The 1950s and 1960s saw expansion into Mexico and other international markets.
Recession and its effect on the bottom line dominated the 1980s and led to a restructuring of the company. On January 1, 1985, USG Corporation was formed as a holding company in a reverse merger in which United States Gypsum Company became one of just nine operating subsidiaries.
In the mid- to late-1990s, the company invested in a significant expansion of its manufacturing network, adding new high-speed wallboard manufacturing operations in Rainier, OR, Bridgeport, AL, and Aliquippa, PA. Other existing operations were substantially rebuilt or modernized, including the wallboard manufacturing plant in East Chicago, Indiana.
In 1999, USG acquired Sybex, Inc. the holding company for Beadex (a competing joint compound manufacturer) and Synkoloid. Other USG subsidiaries at the time included Alabaster Assurance Company, CGC, Donn Products, Exploracion de Yeso, Grupo Yeso, Gypsum Engineering, H & B Gypsum, L&W Supply, La Mirada Products Co., Inc, Red Top Technology, and Yeso Panamericano.
In 2001, the company entered Chapter 11 bankruptcy proceedings to resolve legacy asbestos lawsuits. Asbestos was a minor ingredient in some specialty products that the company had stopped selling decades earlier, in the 1970s. The company's operations remained healthy and profitable while it was in Chapter 11. When the bankruptcy was completed in 2006, all creditors were repaid in full and USG shareholders retained equity in the company. In a Wall Street Journal article dated February 15, 2006, Warren Buffett said, "It's the most successful managerial performance in bankruptcy that I've ever seen." A $3.95 billion trust was created to handle all existing and potential future asbestos lawsuits, thus permanently resolving the asbestos litigation issue.
USG adapted during the Great Recession, which hit the residential and commercial construction markets in mid-2006, resulting in a decreased demand for drywall. USG cut costs by closing some of its operations, including the shuttering of its Empire, Nevada facility in 2011.
William C. Foote, the company's CEO for almost 20 years, retired in 2010, and 30-year USG veteran James S. Metcalf was elected Chairman, President and CEO. Metcalf implemented the company's "Plan to Win" which involved strengthening its core manufacturing operations and L&W Supply distribution business, diversifying sources of revenues and profitability, and differentiating the company from competitors through innovative products and services. The company returned to profitability in the first quarter of 2013, posting net earnings of $2 million, followed by $26 million in net income in the second quarter of 2013.
In 2020, the 2011 closure of its Empire, Nevada operation was referenced in the movie Nomadland.
Corporate structure
USG Corporation has the following significant subsidiaries:
United States Gypsum Company
USG Interiors, LLC
Otsego Paper, Inc
USG Foreign Investments, Ltd
CGC Inc.
USG Latin America, LLC
USG Holding de Mexico S.A. de C.V.
USG Mexico S.A. de C.V.
Corporate headquarters building
In 1992, USG moved its corporate headquarters from 101 South Wacker Drive to 125 S. Franklin Street in Chicago, a site which it occupied until March 2007. Known as the USG building, the structure is part of the dual-tower AT&T Corporate Center, which was completed in 1989. The building was designed by Adrian D. Smith, FAIA, RIBA Design Partner at Skidmore, Owings & Merrill and constructed by Morse Diesel within its $110 million construction budget. The USG building is tall and houses 35 floors and of space, including of retail, a 650-seat restaurant expansion, and two levels of below-grade parking for 160 cars. USG had its own entrance with a lobby and occupied the first nine floors exclusively and parts of the 11th floor. Italian marble is used as cladding and also in the highly ornate interior. The interior also features gold leaf and satin-finish brown and American oak wood trim. Parts of the building lobbies were used in the filming of the 1994 film, Ri¢hie Ri¢h.
In 2005, USG announced it would not be renewing its lease at the 125 S. Franklin Street building and instead would move to a new building at 550 W. Adams developed by Fifield Companies. The base building architect is De Stefano + Partners, with The Environments Group providing the interior space design and construction. USG entered a 15-year lease, and occupied the building in early 2007. The new building is occupied 65% by USG and 10% by Humana Inc. As an incentive for USG to remain in the downtown Chicago area, the city of Chicago created a redevelopment agreement that contributed $6.5 million to the construction of the new building. In turn, USG agreed to maintain at least 500 full-time equivalent jobs at all times for a period of ten years at the new corporate headquarters.
Manufacturing and mining facilities
Gypsum wallboard manufacturing facilities are reported to the SEC based on the extent to which the gypsum they use comes from synthetic or natural sources.
Plaster City, California facility
USG has a large gypsum plant located west of El Centro, California, along Interstate 8, at Plaster City. The Plaster City location makes Sheetrock brand gypsum panels. The gypsum is mined from a quarry located to the north, in the Fish Creek Mountains of Imperial County. The quarry is estimated to contain a deposit of 25 million tons of gypsum.
USG operates an active narrow gauge railway, the last industrial narrow gauge railway in the United States. The gauge line runs north for from the plant at Plaster City (formerly known as Maria) to the gypsum quarry. The line hauls gypsum rock from the quarry to the plant.
The line was originally built by the Imperial Gypsum Company Railroad and was owned by the Imperial Valley Gypsum and Oil Corporation. The railroad was built from the San Diego & Arizona Railway at Plaster City to the quarry. Surveying commenced in April 1921, grading began on October 3, 1921, and construction was completed on September 15, 1922. Commercial operation commenced on October 14, 1922. The total length of the line was . Two years after completion of the line (1924), the track was sold to the Pacific Portland Cement Company.
USG purchased the line from the Pacific Portland Cement Company in 1946. In 1947, the first diesel engine was operated on the line.
The USG plant at Plaster City is currently served by the Union Pacific Railroad (UP).
Significant events
Antitrust cases
Criminal
In 1973, six wallboard manufacturers (including USG) were charged with violating §1 of the Sherman Act during the period 1960–1973, through engaging in a combination and conspiracy in restraint of interstate trade and commerce in the manufacture and sale of gypsum board. In July 1975, after the case had been submitted to the jury, it became apparent that the jury was heading for a deadlock. Defense counsel moved for a mistrial, but the trial judge denied the request, although he indicated that, if no verdict were rendered by the end of the week, he would then reconsider the mistrial motions. The following morning, the jury returned guilty verdicts against each of the defendants.
The Court of Appeals for the Third Circuit reversed the convictions, and that ruling was subsequently affirmed by the United States Supreme Court on the grounds that:
The trial judge's instruction to the jury was improper, as it emphasized a presumption of wrongful intent, rather than concentrating on verifying the defendant's state of mind through evidence and inferences drawn therefrom. In that regard, the Sherman Act does not create a regime of strict liability.
A good faith belief, rather than an absolute certainty, that a price concession is being offered to meet an equally low price offered by a competitor suffices to invoke the defense available under § 2(b) of the Clayton Act.
The ex parte meeting between the trial judge and the jury foreman was improper, and the Court of Appeals would have been justified in reversing the convictions solely because of the risk that the foreman believed the judge was insisting on a dispositive verdict.
The trial judge's charge concerning participation in the conspiracy, although perhaps not completely clear, was sufficient, but his charge on withdrawal from the conspiracy was erroneous.
Civil
In 1940, the U.S. Justice Department filed suit against USG and six other wallboard manufacturers, charging them with price fixing under §§ 1 and 2 of the Sherman Act. The claim stemmed from US Gypsum's 1929 cross-licensing agreements for its patented wallboard, which set prices at which the wallboard had to be sold. In 1950, the Supreme Court forced US Gypsum and its six licensees, who produced all of the wallboard sold east of the Rocky Mountains, to cease setting prices, and US Gypsum was enjoined from exercising its patent-licensing privilege.
During 1969–1974, in the United States District Court for the Northern District of California, a series of civil antitrust cases were heard that came to be known as In re Gypsum Antitrust Cases. As a result, USG (together with National Gypsum Company and Kaiser Gypsum Company) was found to have violated § 1 of the Sherman Act by conspiring to establish and maintain prices of gypsum wallboard.
In December 2012, USG (together with National Gypsum, Lafarge North America and Georgia-Pacific), was accused in a class action for allegedly violating federal antitrust laws, through raising prices on drywall products by as much as 35 percent, as well as halting a longstanding practice of letting customers lock in prices for the duration of a construction project. USG stated that it did not participate, or engage in, any unlawful conduct.
Hostile takeover attempts
In November 1986 the Belzberg brothers of Canada attempted a hostile takeover of USG. USG immediately instituted a plan to buy back 20 percent of its common stock in an effort to fend off the takeover. By December 1986, however, USG had purchased Samuel, William, and Hyman Belzberg's 4.9 percent stake, for $139.6 million.
In October 1987, Texas oilmen Cyril Wagner Jr. and Jack E. Brown, through Desert Partners, LP, attempted a hostile takeover of USG, buying 9.84% of USG's outstanding stock. USG decided to fight this attempt by offering $42 per share ($37 in cash and $5 in pay-in-kind debentures) plus a stub stock worth $7. Desert Partners was unable to match the offer and lost the proxy fight at a shareholders' meeting. To pay for the offer, USG took a poison pill by borrowing $1.6 billion from 135 banks, and issuing $600 million in 13.25% subordinated debentures due in 2000 and $260 million in 16% pay-in-kind debentures due in 2008. To help pay for all the new debt, USG sold off:
subsidiaries Castlegate, A. P. Green, Masonite, DAP, and Wiss, Janney, Elstner Associates, Inc.
its construction metals plants, a paper-bag plant, and a lime plant
its headquarters building at 101 South Wacker Drive in Chicago and its corporate jets,
and instituted large workforce reductions.
The sell-off and workforce reduction of 7% were not enough to allow USG to service the debt payments ($800,000 per day) in the economic downturn. The poison pill was too much for the corporation to survive.
Bankruptcy
On March 17, 1993, USG filed a pre-packaged bankruptcy petition that included a 50-to-1 reverse stock split. USG's stock dipped to 28 cents per share and the corporation emerged from bankruptcy 38 days later on May 6, 1993. The corporation's debt was reduced by $1.4 billion and interest costs dropped from $320 million per year to $170 million per year. The plan worked and USG re-emerged to be a profitable corporation.
USG once again declared bankruptcy on June 25, 2001, under Chapter 11 to manage the growing asbestos litigation costs. USG was the eighth company in an 18-month period that was forced to utilize Chapter 11 to resolve asbestos claims. In the prior two decades, 27 companies filed for protection under Chapter 11 because of asbestos litigation. Since 1994, U.S. Gypsum was named in more than 250,000 asbestos-related personal injury claims, and paid more than $450 million (before insurance) to manage and resolve asbestos-related litigation. USG received more than 22,000 new claims since the beginning of 2001. USG's asbestos personal injury costs (before insurance) rose from $30 million in 1997 to more than $160 million in 2000, and were expected to exceed $275 million in 2001.
On February 17, 2006, USG announced a Joint Plan of Reorganization to emerge from bankruptcy. Under the agreement, USG would create a trust to pay asbestos personal injury claims. USG's bank lenders, bondholders and trade suppliers would be paid in full with interest. Stockholders would retain ownership of the company. To pay for the trust USG would use cash it had accumulated during the bankruptcy, new long-term debt, a tax rebate from the federal government, and an innovative rights offering. Existing USG stock owners would be issued rights to buy new USG stock at a set price of $40 per share. These rights could be exercised or sold. The $1.8 billion rights offering would be backstopped by Berkshire Hathaway Inc., meaning Berkshire Hathaway would buy all the new shares not bought. For the service, USG would pay Berkshire Hathaway a $67 million non-refundable fee.
On June 20, 2006, USG announced their Joint Plan of Reorganization was confirmed by two judges for the United States Bankruptcy Court and the United States District Court for the District of Delaware, allowing the company to complete the bankruptcy case and emerge from bankruptcy. USG announced a $900 million payment to the new trust was made that day and two subsequent payments totaling $3.05 billion would be made within the next 12 months if Congress failed to enact legislation establishing a national asbestos personal injury trust fund, such as the FAIR Act.
References
Citations
Book references
Web references
2011 Annual Report on Form 10-K
USG Facts and Figures
External links
Building materials companies of the United States
Asbestos
Industrial railroads in the United States
Rail transportation in California
Narrow gauge railroads in California
Manufacturing companies based in Chicago
Manufacturing companies established in 1901
Companies formerly listed on the New York Stock Exchange
Companies that filed for Chapter 11 bankruptcy in 1993
Companies that filed for Chapter 11 bankruptcy in 2001
2019 mergers and acquisitions
American subsidiaries of foreign companies | USG Corporation | [
"Environmental_science"
] | 3,693 | [
"Toxicology",
"Asbestos"
] |
614,794 | https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Astrophysics | The Max Planck Institute for Astrophysics (MPA) is a research institute located in Garching, just north of Munich, Bavaria, Germany. It is one of many scientific research institutes belonging to the Max Planck Society.
The MPA is widely considered to be one of the leading institutions in the world for theoretical astrophysics research. According to Thomson Reuters, from 1999 to 2009 the Max Planck Society as a whole published more papers and accumulated more citations in the fields of physics and space science than any other research organization in the world.
History
The Max Planck Society was founded on 26 February 1948. It effectively replaced the Kaiser Wilhelm Society for the Advancement of Science, which was dissolved after World War II. The society is named after Max Planck, one of the founders of quantum theory.
The MPA was founded as the Max Planck Institute for Physics and Astrophysics in 1958 and split into the Max Planck Institute for Astrophysics and the Max Planck Institute for Physics in 1991. In 1995, the numerical relativity group moved to the Max Planck Institute for Gravitational Physics.
Organization
The MPA is one of several Max Planck Institutes that specialize in astronomy and astrophysics. Others are the Max Planck Institute for Extraterrestrial Physics in Garching (located next-door to the MPA), the Max Planck Institute for Astronomy in Heidelberg, the Max Planck Institute for Radio Astronomy in Bonn, the Max Planck Institute for Solar System Research in Göttingen, and the Max Planck Institute for Gravitational Physics (a.k.a. Albert Einstein Institute) in Golm.
The institute is located next-door to the MPI for Extraterrestrial Physics, as well as the headquarters of the European Southern Observatory. It also enjoys close working relationships with the Ludwig Maximilian University of Munich and Technical University Munich.
At any given time, the institute employs approximately 50 scientists, instructs over 30 PhD students, and hosts about 20 visiting scientists (some 60 visitors stay for longer than 2 weeks in any given year).
As of 2021, the four directors of the MPA are Selma de Mink, Guinevere Kauffmann, Eiichiro Komatsu, and Volker Springel.
Previous directors include Ludwig Biermann (1958–1975), Rudolf Kippenhahn (1975–1991), Simon White (1994–2019), Rashid Sunyaev (1995–2018), Wolfgang Hillebrandt (1997–2009) and Martin Asplund (2007–2011).
Science
Focusing on theoretical investigations, the MPA covers a wide range of topics in astrophysics. These include:
Cosmology, in particular galaxy formation and evolution, reionization, and the cosmic microwave background;
High-energy astronomy and astrophysics, including supermassive black holes, galaxy clusters, active galactic nuclei and quasars, X-ray binaries, and accretion discs;
Stellar physics, including stellar evolution and stellar explosions such as supernovae and gamma-ray bursts.
Public outreach
The MPA works to explain astrophysical concepts and disseminate its findings to the public. These activities include popular science articles written by MPA scientists, events hosting school groups, events open to the general public, and monthly research highlights written for a general audience.
Graduate program
The International Max Planck Research School (IMPRS) for Astrophysics is a graduate program offering a PhD in astrophysics. The school is a cooperation with the Ludwig Maximilian University of Munich and Technical University Munich.
External links
Homepage of the Max Planck Institute for Astrophysics
Homepage of the International Max Planck Research School (IMPRS) for Astrophysics
References
Astrophysics
Astrophysics research institutes
1958 establishments in West Germany
Garching bei München | Max Planck Institute for Astrophysics | [
"Physics"
] | 748 | [
"Astrophysics research institutes",
"Astrophysics"
] |
614,798 | https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Extraterrestrial%20Physics | The Max Planck Institute for Extraterrestrial Physics is part of the Max Planck Society, located in Garching, near Munich, Germany.
In 1991 the Max Planck Institute for Physics and Astrophysics split into the Max Planck Institute for Extraterrestrial Physics, the Max Planck Institute for Physics and the Max Planck Institute for Astrophysics. The Max Planck Institute for Extraterrestrial Physics had been founded as a sub-institute in 1963. The scientific activities of the institute are mostly devoted to astrophysics with telescopes orbiting in space, and a large share of its resources is spent on studying black holes in the Milky Way Galaxy and in the remote universe.
History
The Max Planck Institute for Extraterrestrial Physics (MPE) was preceded by the department for extraterrestrial physics in the Max Planck Institute for Physics and Astrophysics. This department was established by Professor Reimar Lüst on October 23, 1961. A Max Planck Senate resolution transformed this department into a sub-institute of the Max Planck Institute for Physics and Astrophysics on May 15, 1963. Professor Lüst was appointed director of the institute. Another Senate resolution on March 8, 1991, finally established MPE as an autonomous institute within the Max Planck Society. It is dedicated to the experimental and theoretical exploration of space outside the Earth as well as of astrophysical phenomena.
Timeline
Major events in the history of the institute include:
1963 Foundation as a sub-institute within the MPI für Physik und Astrophysik; director Reimar Lüst
1969 Klaus Pinkau becomes director at the institute (cosmic rays, gamma-astronomy)
1972 Gerhard Haerendel becomes director at the institute (plasma physics)
1975 Joachim Trümper becomes director and scientific member at the institute (X-ray astronomy)
1981 The MPE X-ray test facility "Panter" located in Neuried starts operation
1985 Gregor Morfill becomes director and scientific member at the institute (theory)
1986 Reinhard Genzel becomes director and scientific member at the institute (infrared astronomy)
1990 Joachim Trümper together with the MPI for Physics (MPP) founds the semiconductor laboratory as a joint project between the MPE and the MPP (since 2012 operated by the MPG)
2000 R. Genzel together with the University of California Berkeley founds the "UCB-MPG Center for International Exchange in Astrophysics and Space Science"
2000 G. Morfill together with the IPP founds the "Center for Interdisciplinary Plasma Science" (CIPS) (until 2004)
2001 The "International Max-Planck- Research School on Astrophysics" (IMPRS) is opened by MPE, MPA, ESO, MPP and the universities of Munich
2001 Günther Hasinger becomes scientific member and director at the institute (X-ray astronomy)
2002 Ralf Bender becomes scientific member and director at the institute (optical and interpretative astronomy)
2010 Kirpal Nandra becomes scientific member and director at the institute (high-energy astrophysics)
2014 Paola Caselli becomes scientific member and director at the institute (Center for Astrochemical Studies)
2020 Nobel Prize in Physics for Reinhard Genzel for his research on the black hole at the centre of the Milky Way (Sagittarius A*)
2023 Frank Eisenhauer becomes scientific member and director at the institute (infrared-/submillimeter astronomy)
Detailed history
The institute's founding history is described above. Since then, a continuous reorientation to new, promising fields of research and the appointment of new members has ensured steady advancement.
Among the 29 employees of the Institute when it was founded in 1963 were 9 scientists and 1 PhD student. Twelve years later, in 1975, the number of employees had grown to 180, including 55 scientists and 13 PhD students; as of 2015 there are some 400 staff (130 scientists and 75 PhD students). Notably, the number of permanent positions at the institute has not increased since 1973, despite its celebrated scientific achievements. The increasingly complex tasks and international obligations have been carried mainly by staff members on fixed-term positions funded by external organizations.
Because the institute has assumed a leading position in astronomy internationally, it has attracted guest scientists throughout the world. The number of long-term guests increased from 12 in 1974 to a maximum of 72 in 2000. In recent years MPE has hosted an average of about 50 guest scientists each year.
During the early years, the scientific work at the Institute concentrated on the investigation of extraterrestrial plasmas and the magnetosphere of the Earth. This work was performed with measurements of particles and electromagnetic fields as well as a specially developed ion-cloud technique using sounding rockets.
Another field of research also became important: astrophysical observations of electromagnetic radiation that cannot be observed from the Earth's surface, because its wavelengths are absorbed by the atmosphere. These observations and the inferences drawn from them are the subject matter of infrared astronomy as well as X-ray and gamma-ray astronomy. In addition to more than 100 rockets, an increasing number of high-altitude balloons (more than 50 to date; e.g. HEXE) have been used to carry experiments to high altitudes.
Since the 1990s, satellites have become the preferred observation platforms because of their favorable observation-time/cost ratio. Nevertheless, high-flying observation airplanes and ground-based telescopes are also used to obtain data, especially for optical and near-infrared observations.
New observation techniques using satellites have necessitated the recording, processing and accessible storage of high data fluxes over long periods of time. This demanding task is performed by a data-processing group, which has grown quickly over the last decade. Special data centers were established for the large satellite projects.
Besides the many successes, there have also been disappointments. The failures of the Ariane carrier rockets on test launches in 1980 and 1996 were particularly bitter setbacks. The satellite "Firewheel", in which many members of the Institute had invested years of work, was lost on May 23, 1980, because of a burning instability in the first stage of the launch rocket. The same fate overtook the four satellites of the CLUSTER mission on June 4, 1996, when the first Ariane 5 was launched; this time the disaster was attributed to an error in the rocket's software. The most recent loss was "ABRIXAS", an X-ray satellite built by industry under the leadership of MPE: after a few hours in orbit, a malfunction of the power system caused the total loss of the satellite.
Over the years, however, the history of MPE is primarily a story of scientific successes.
Selected achievements
Exploration of the Ionosphere and Magnetosphere by means of ion clouds (1963–1985)
The first map of the galactic gamma-ray emission ( > 70 MeV) as measured with the satellite COS-B (1978)
Measurement of the magnetic field of the neutron star Her-X1 using the cyclotron line emission (balloon experiments 1978)
Experimental proof of the reconnection process (1979)
The artificial comet (AMPTE 1984/85)
Numerical simulation of a collision-free shock wave (1990)
The first map of the X-ray sky as measured with the imaging X-ray telescope on board the ROSAT satellite (1993)
First gamma-ray sky map in the energy range 3 to 10 MeV as measured with the imaging Compton telescope COMPTEL on board CGRO (1994)
The plasma-crystal experiment and its successors on the International Space Station (1996–2013)
The measurement of the element- and isotope-composition of the solar wind by the CELIAS experiment on board the SOHO satellite (1996)
The first detection of water-molecule lines in an expanding shell of a star using the Fabry-Perot spectrometer on board the ISO satellite (1996)
First detection of X-ray emission from comets and planets (1996, 2001)
Determining the energy source for ultraluminous infrared galaxies with the satellite ISO (1998)
Detection of gamma-ray line emission (44Ti) from supernova remnants (1998)
Deep observations of the extragalactic X-ray sky with ROSAT, XMM-Newton and Chandra and resolving the background radiation into individual sources (since 1998)
Confirmation that a supermassive black hole resides at the centre of the Milky Way Galaxy (2002)
Detection of a binary active galactic nucleus in X-rays (2003)
Reconstruction of the evolution history of stars in elliptical galaxies (2005)
Stellar disks rotating around the black hole in the Andromeda galaxy (2005)
Determining the gas content of normal galaxies in the early universe (since 2010)
Resolving the cosmic infrared background into individual galaxies with Herschel (2011)
Scientific work
The institute was founded in 1963 as a sub-institute of the Max-Planck-Institut für Physik und Astrophysik and established as an independent institute in 1991.
Its main research topics are astronomical observations in spectral regions which are only accessible from space because of the absorbing effects of the Earth's atmosphere, but also instruments on ground-based observatories are used whenever possible.
Scientific work is done in four major research areas that are supervised by one of the directors, respectively: optical and interpretative astronomy (Bender), infrared and sub-millimeter/millimeter astronomy (Genzel), high-energy astrophysics (Nandra), and in the Centre for Astrochemical Studies (Caselli). Within these areas scientists lead individual experiments and research projects organised in about 25 project teams. The research topics pursued at MPE range from the physics of cosmic plasmas and of stars to the physics and chemistry of interstellar matter, from star formation and nucleosynthesis to extragalactic astrophysics and cosmology.
Many experiments of the Max Planck Institute for Extraterrestrial Physics (MPE) have to be carried out above the dense Earth's atmosphere using aircraft, rockets, satellites and space probes. In the early days experiments were also flown on balloons. To run advanced extraterrestrial physics and state-of-the-art experimental astrophysics, the institute continues to develop high-tech instrumentation in-house. This includes detectors, spectrometers, and cameras as well as telescopes and complete payloads (e.g. ROSAT and eROSITA) and even entire satellites (as in case of AMPTE and EQUATOR-S). For this purpose the technical and engineering departments are of particular importance for the institute's research work.
Observers and experimenters perform their research work at the institute in close contact with each other. Their interaction while interpreting observations and propounding new hypotheses underlies the successful progress of the institute's research projects.
At the end of the year 2022 a total of 508 employees were working at the institute, numbering among them about 100 scientists, 60 junior scientists, 10 apprentices and 140 visiting researchers.
Projects
Scientific projects at the MPE are often the efforts of the different research departments to build, maintain, and use experiments and facilities needed by the institute's many scientific research interests. Apart from hardware projects, there are also projects that use archival data and are not necessarily connected to a new instrument. A brief overview of the most recent projects follows.
For the EUCLID space telescope, which was launched in July 2023 and from which researchers hope to gain new insights into dark matter and dark energy, the institute contributed the NISP optical system.
The GRAVITY instrument enables the four 8-metre telescopes at the Very Large Telescope (VLT) in Chile to be interconnected by means of interferometry to form a virtual telescope with a diameter of 130 metres. The follow-up project GRAVITY Plus is currently being developed, which is expected to achieve an even sharper resolution thanks to a new system of adaptive optics, laser guide stars and an extended field of view.
For the 39-metre European Extremely Large Telescope (E-ELT), which is currently being built in the Chilean Atacama Desert and is planned to be finished by 2027, MPE is developing the first-light instrument MICADO (Multi-AO Imaging Camera for Deep Observations).
The ERIS (Enhanced Resolution Imager and Spectrograph) infrared camera will replace the NACO and SINFONI instruments at the VLT.
With eROSITA (extended ROentgen Survey with an Imaging Telescope Array), the main instrument of the Russian Spektr-RG ("Spectrum-Roentgen-Gamma") satellite launched from Baikonur in July 2019, the first complete sky survey in the medium-energy X-ray range was achieved.
External links
http://www.mpe.mpg.de
https://web.archive.org/web/20120609132517/http://www.mpia.de/Public/menu-e.php
http://www.nasa.gov/
http://www.esa.int/esaCP/index.html
References
Extraterrestrial Physics
Education in Munich
Astronomy institutes and departments
Physics research institutes
Garching bei München | Max Planck Institute for Extraterrestrial Physics | [
"Astronomy"
] | 2,805 | [
"Astronomy organizations",
"Astronomy institutes and departments"
] |
614,874 | https://en.wikipedia.org/wiki/Catalina%20Sky%20Survey | Catalina Sky Survey (CSS; obs. code: 703) is an astronomical survey to discover comets and asteroids. It is conducted at the Steward Observatory's Catalina Station, located near Tucson, Arizona, in the United States.
CSS focuses on the search for near-Earth objects, in particular on any potentially hazardous asteroid that may pose a threat of impact. Its counterpart in the southern hemisphere was the Siding Spring Survey (SSS), closed in 2013 due to loss of funding. CSS supersedes the photographic Bigelow Sky Survey.
Mission
The NEO Observations Program is the result of a 1998 United States congressional directive to NASA to begin a program to identify near-Earth objects 1 kilometer in diameter or larger to a confidence level of 90% or better. The Catalina Sky Survey, located at the Mount Lemmon Observatory in the Catalina Mountains north of Tucson, carries out searches for near-Earth objects (NEOs), contributing to this congressionally mandated goal.
In addition to identifying impact risks, the project also obtains other scientific information, including: improving the known population distribution in the main belt, finding the cometary distribution at larger perihelion distances, determining the distribution of NEOs as a product of collisional history and transport to the inner Solar System, and identifying potential targets for flight projects.
Techniques
The Catalina Sky Survey (CSS) uses three telescopes: a 1.5-meter f/1.6 telescope on the peak of Mount Lemmon (MPC code G96), a 68-cm f/1.7 Schmidt telescope near Mount Bigelow (MPC code 703), and a 1-meter f/2.6 follow-up telescope also on Mount Lemmon (MPC code I52). The three telescopes are located in the Santa Catalina Mountains near Tucson, Arizona. The CSS southern hemisphere counterpart, the Siding Spring Survey (SSS), used an f/3 Uppsala Schmidt telescope at Siding Spring Observatory in Australia. The 1.5-meter and 68-cm survey telescopes use identical, thermo-electrically cooled cameras and common software written by the CSS team. The cameras are cooled far enough that their dark current is about 1 electron per hour. These 10,560×10,560-pixel cameras provide a field of view of 5 square degrees with the 1.5-m telescope and nearly 20 square degrees with the Catalina Schmidt. Nominal exposures are 30 seconds, in which time the 1.5-m can reach objects fainter than 21.5 V. The 1-meter follow-up telescope uses a 2000×2000-pixel CCD detector that provides a field of view of 0.3 square degrees. Beginning in 2019, CSS has also used the Kuiper telescope on Mt. Bigelow for targeted follow-up for 7–12 nights per lunation.
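As a quick consistency check on these figures, one can compute the pixel scale each field of view implies for a 10,560-pixel-wide detector. The sketch below is illustrative arithmetic only, not CSS software: it assumes a square field for simplicity, and the function name and the rounded "20 square degrees" value are my own choices based on the text above.

```python
# Illustrative arithmetic: approximate sky scale per pixel implied by the
# quoted fields of view, assuming (as a simplification) a square field.
import math

def pixel_scale_arcsec(fov_square_degrees: float, pixels_per_side: int) -> float:
    """Arcseconds per pixel for a square field covering the given area."""
    side_deg = math.sqrt(fov_square_degrees)   # side of the assumed square field
    return side_deg * 3600.0 / pixels_per_side

print(pixel_scale_arcsec(5.0, 10560))    # 1.5-m telescope: ~0.76 arcsec/pixel
print(pixel_scale_arcsec(20.0, 10560))   # Schmidt, "nearly 20 sq deg": ~1.5 arcsec/pixel
```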
CSS typically operates every clear night except for a few nights centered on the full moon. Its southern hemisphere counterpart, the SSS in Australia, ended in 2013 after funding was discontinued.
Discoveries
In 2005, CSS became the most prolific NEO survey, surpassing Lincoln Near-Earth Asteroid Research (LINEAR) in total number of NEOs and potentially hazardous asteroids discovered each year since. As of 2020, the Catalina Sky Survey is responsible for the discovery of 47% of the total known NEO population.
Notable discoveries
List of discovered minor planets
For a complete listing of all minor planets discovered by the Catalina Sky Survey, see the index section in list of minor planets.
CSS/SSS team
The CSS team is headed by D. Carson Fuls of the Lunar and Planetary Laboratory of the University of Arizona.
The full CSS team is:
D. Carson Fuls (principal investigator)
Stephen M. Larson
Alex R. Gibbs
Albert D. Grauer
Richard E. Hill (Retired)
Richard A. Kowalski
Joshua Hogan
Hannes Gröller
Frank Shelly
David Rankin
Gregory J. Leonard
Rob Seaman
Vivian Carvajal
Tracie Beuden
Jacqueline Fazekas
Kacper Wierzchos
SSS
Robert H. McNaught
Gordon J. Garradd
Educational outreach
The CSS has helped with Astronomy Camp by showing campers how it detects NEOs. It also played a role in an astrophotography exercise at the 2006 Adult Astronomy Camp, which produced a picture featured on Astronomy Picture of the Day.
Catalina Outer Solar System Survey
The Zooniverse project Catalina Outer Solar System Survey is a citizen science project and is listed as a NASA citizen science project. In this project, volunteers search for trans-Neptunian objects (TNOs) in pre-processed images of the Catalina Sky Survey. Computers can detect the motion of TNO candidates, but humans must check whether this motion is real. With their agreement, volunteers are credited as "measurers" when the astrometry is submitted to the Minor Planet Center. The project has already recovered previously known TNOs, including 47171 Lempo.
See also
Asteroid Zoo
Astronomical survey
Large Synoptic Survey Telescope
Minor Planet Center (MPC)
Planetary Data System (PDS)
Spaceguard
Asteroid Terrestrial-impact Last Alert System
List of near-Earth object observation projects
References
External links
Catalina Sky Survey Website
Overview and history
Astronomical surveys
Discoverers of asteroids
Near-Earth object tracking
Discoveries by the Catalina Sky Survey | Catalina Sky Survey | [
"Astronomy"
] | 1,073 | [
"Astronomical surveys",
"Works about astronomy",
"Astronomical objects"
] |
614,947 | https://en.wikipedia.org/wiki/Yousef%20Alavi | Yousef Alavi (March 19, 1928 – May 21, 2013) was an Iranian born American mathematician who specialized in combinatorics and graph theory. He received his PhD from Michigan State University in 1958. He was a professor of mathematics at Western Michigan University from 1958 until his retirement in 1996; he chaired the department from 1989 to 1992.
Alavi was known for complaining that "this is highly irregular!" He was also a frequent host for Paul Erdős on his visits to Western Michigan. On one of these visits, the two things came together: he made his usual complaint while Erdős and Ronald Graham were present. This sparked a discussion of what it might mean for a graph to be highly irregular, kicking off a line of joint research on highly irregular graphs through which Alavi earned an Erdős number of one.
In 1987 he received the first Distinguished Service Award of the Michigan Section of the Mathematical Association of America due to his 30 years of service to the MAA; at that time, the Michigan House and Senate issued a special resolution honoring him.
References
20th-century American mathematicians
Combinatorialists
2013 deaths
Michigan State University alumni
Western Michigan University faculty
1928 births | Yousef Alavi | [
"Mathematics"
] | 238 | [
"Combinatorialists",
"Combinatorics"
] |
614,956 | https://en.wikipedia.org/wiki/B%C3%A9la%20Bollob%C3%A1s | Béla Bollobás FRS (born 3 August 1943) is a Hungarian-born British mathematician who has worked in various areas of mathematics, including functional analysis, combinatorics, graph theory, and percolation. He was strongly influenced by Paul Erdős from the age of 14.
Early life and education
As a student, he took part in the first three International Mathematical Olympiads, winning two gold medals. Paul Erdős invited Bollobás to lunch after hearing about his victories, and they kept in touch afterward. Bollobás's first publication, written while he was still in high school in 1962, was a joint paper with Erdős on extremal problems in graph theory.
With Erdős's recommendation to Harold Davenport and after a long struggle for permission from the Hungarian authorities, Bollobás was able to spend an undergraduate year in Cambridge, England. However, the authorities denied his request to return to Cambridge for doctoral study, and a similar scholarship offer from Paris was also quashed. He wrote his first doctorate, in discrete geometry, under the supervision of László Fejes Tóth and Paul Erdős at Budapest University in 1967, after which he spent a year in Moscow with Israïl Moiseevich Gelfand. Disillusioned by the 1956 Soviet intervention, he vowed never to return to Hungary; as he recalled, "By then, I said to myself, 'If I ever manage to leave Hungary, I won't return.'" After a year at Christ Church, Oxford, where Michael Atiyah held the Savilian Chair of Geometry, he went to Trinity College, Cambridge, where in 1972 he received a second PhD in functional analysis, studying Banach algebras under the supervision of Frank Adams. In 1970, he had been awarded a fellowship at the college.
His main area of research is combinatorics, particularly graph theory. His chief interests are in extremal graph theory and random graph theory. In 1996 he resigned his university post, but remained a Fellow of Trinity College, Cambridge.
Career
Bollobás has been a Fellow of Trinity College, Cambridge, since 1970; in 1996 he was appointed to the Jabie Hardin Chair of Excellence at the University of Memphis, and in 2005 he was awarded a senior research fellowship at Trinity College.
Bollobás has proved results on extremal graph theory, functional analysis, the theory of random graphs, graph polynomials and percolation. For example, with Paul Erdős he proved results about the structure of dense graphs; he was the first to prove detailed results about the phase transition in the evolution of random graphs; he proved that the chromatic number of the random graph G(n, 1/2) on n vertices is asymptotically n/(2 log₂ n); with Imre Leader he proved basic discrete isoperimetric inequalities; with Richard Arratia and Gregory Sorkin he constructed the interlace polynomial; with Oliver Riordan he introduced the ribbon polynomial (now called the Bollobás–Riordan polynomial); with Andrew Thomason, József Balogh, Miklós Simonovits, Robert Morris and Noga Alon he studied monotone and hereditary graph properties; with Paul Smith and Andrew Uzzell he introduced and classified random cellular automata with general homogeneous monotone update rules; with József Balogh, Hugo Duminil-Copin and Robert Morris he studied bootstrap percolation; with Oliver Riordan he proved that the critical probability in random Voronoi percolation in the plane is 1/2; and with Svante Janson and Oliver Riordan he introduced a very general model of heterogeneous sparse random graphs.
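The chromatic-number result is easy to probe numerically. The sketch below is my own illustration, not Bollobás's method: it samples G(n, 1/2) and colours it with the standard networkx functions nx.gnp_random_graph and nx.greedy_color. Greedy colouring is known to use roughly twice the optimal number of colours on dense random graphs, so the printed ratio should hover near 2 for large n.

```python
# Sketch: compare a greedy colouring of G(n, 1/2) with Bollobás's
# asymptotic chromatic number n / (2 log2 n); the greedy count is
# expected to be about twice the asymptotic value.
import math
import networkx as nx

def greedy_vs_asymptotic(n: int, seed: int = 0) -> tuple[int, float]:
    g = nx.gnp_random_graph(n, 0.5, seed=seed)         # Erdős–Rényi sample
    colouring = nx.greedy_color(g, strategy="largest_first")
    used = 1 + max(colouring.values())                 # colours actually used
    return used, n / (2 * math.log2(n))                # Bollobás's asymptotic

for n in (256, 512, 1024):
    used, asym = greedy_vs_asymptotic(n)
    print(f"n={n}: greedy {used}, n/(2 log2 n) = {asym:.1f}, ratio {used / asym:.2f}")
```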
In addition to over 350 research papers on mathematics, Bollobás has written several books, including the research monographs Extremal Graph Theory in 1978, Random Graphs in 1985 and Percolation (with Oliver Riordan) in 2006; the introductory books for undergraduate courses Graph Theory in 1979, Combinatorics in 1986, Linear Analysis in 1990 and Modern Graph Theory in 1998; and the collection of problems The Art of Mathematics – Coffee Time in Memphis in 2006, with drawings by Gabriella Bollobás. He has also edited a number of books, including Littlewood's Miscellany.
Bollobás's research students have included Keith Ball at Warwick, Graham Brightwell at LSE, Timothy Gowers (who was awarded a Fields Medal in 1998 and is Rouse Ball Professor of Mathematics), Imre Leader at the University of Cambridge, Jonathan Partington at Leeds, and Charles Read at Leeds, who died in 2015.
Bollobás is an External Member of the Hungarian Academy of Sciences; in 2007 he was awarded the Senior Whitehead Prize by the London Mathematical Society. In 2011 he was elected a Fellow of the Royal Society for his major contributions to many different areas of mathematics within the broad field of combinatorics, including random graphs, percolation, extremal graphs, set systems and isoperimetric inequalities. The citation also recognises the profound influence of his textbooks in many of these areas, and his key role in establishing Britain as one of the leading countries in probabilistic and extremal combinatorics. In 2012 he became a fellow of the American Mathematical Society.
Awards and honours
Bollobás was elected a Fellow of the Royal Society in 2011; the nomination citation is quoted in the previous section.
In 1998 he was an invited speaker of the International Congress of Mathematicians in Berlin. He was elected a Foreign Member of the Polish Academy of Sciences in 2013, a member of the Academia Europaea in 2017 and a member of the Academia Brasileira de Ciências (ABC) in 2023. He received an honorary doctorate from Adam Mickiewicz University, Poznań, in 2013. He received the Bocskai Prize in 2016 and the Széchenyi Prize in 2017.
Personal life
His father was a physician. His wife, Gabriella Bollobás, born in Budapest, was an actress and a musician in Hungary before moving to England to become a sculptor. She made busts of mathematicians and scientists, including Paul Erdős, Bill Tutte, George Batchelor, John von Neumann, Paul Dirac, and Stephen Hawking, as well as a cast bronze of David Hilbert. He has one son, Mark.
Bollobás is also a sportsman, having represented the University of Oxford at modern pentathlon and the University of Cambridge at fencing.
Selected works
Extremal Graph Theory. Academic Press 1978; Dover reprint 2004.
Graph Theory – an introductory course. Springer 1979.
Random Graphs. Academic Press 1985; 2nd edition, Cambridge University Press 2001.
Combinatorics – set systems, hypergraphs, families of vectors, and combinatorial probability. Cambridge University Press 1986.
Linear Analysis – an introductory course. Cambridge University Press 1990, 1999.
with Alan Baker and András Hajnal (eds.): A Tribute to Paul Erdős. Cambridge University Press 1990.
(ed.): Probabilistic Combinatorics and Its Applications. American Mathematical Society 1991.
with Andrew Thomason (eds.): Combinatorics, Geometry and Probability – a tribute to Paul Erdős. Cambridge University Press 1997.
Modern Graph Theory. Springer 1998.
(ed.): Contemporary Combinatorics. Springer and János Bolyai Mathematical Society, Budapest 2002.
with Oliver Riordan: Percolation. Cambridge University Press 2006.
The Art of Mathematics – Coffee Time in Memphis. Cambridge University Press 2006 (with drawings by his wife Gabriella Bollobás).
with Robert Kozma and Dezső Miklós: Handbook of Large-Scale Random Networks. Springer 2009.
References
External links
Interview in the magazine Imprints, Institute of Mathematical Sciences, National University of Singapore
Béla Bollobás on Budapest protest, January 2012
1943 births
Living people
20th-century Hungarian mathematicians
21st-century Hungarian mathematicians
Members of the Hungarian Academy of Sciences
Graph theorists
Combinatorialists
Fellows of Trinity College, Cambridge
Fellows of the American Mathematical Society
University of Memphis faculty
Fellows of the Royal Society
British people of Hungarian descent
International Mathematical Olympiad participants
Scientists from Budapest
Network scientists | Béla Bollobás | [
"Mathematics"
] | 1,650 | [
"Graph theory",
"Combinatorics",
"Combinatorialists",
"Mathematical relations",
"Graph theorists"
] |
614,962 | https://en.wikipedia.org/wiki/Ralph%20Faudree | Ralph Jasper Faudree (August 23, 1939 – January 13, 2015) was a mathematician, a professor of mathematics and the former provost of the University of Memphis.
Faudree was born in Durant, Oklahoma. He did his undergraduate studies at Oklahoma Baptist University, graduating in 1961, and received his Ph.D. in 1964 from Purdue University under the supervision of Eugene Schenkman (1922–1977). Faudree was an instructor at the University of California, Berkeley and an assistant professor at the University of Illinois before joining the Memphis State University faculty as an associate professor in 1971. Memphis State was renamed the University of Memphis in 1994, and Faudree was appointed provost in 2001.
Faudree specialized in combinatorics, and specifically in graph theory and Ramsey theory. He published more than 200 mathematical papers on these topics together with such notable mathematicians as Béla Bollobás, Stefan Burr, Paul Erdős, Ron Gould, András Gyárfás, Brendan McKay, Cecil Rousseau, Richard Schelp, Miklós Simonovits, Joel Spencer, and Vera Sós. He was the 2005 recipient of the Euler Medal for his contributions to combinatorics. His Erdős number was 1: he cowrote 50 joint papers with Paul Erdős beginning in 1976 and was among the three mathematicians who most frequently co-authored with Erdős.
Selected publications
References
External links
Archived version of the professional webpage
1939 births
2015 deaths
20th-century American mathematicians
21st-century American mathematicians
Graph theorists
Oklahoma Baptist University alumni
Purdue University alumni
University of California, Berkeley faculty
University of Illinois faculty
University of Memphis faculty
People from Durant, Oklahoma | Ralph Faudree | [
"Mathematics"
] | 339 | [
"Mathematical relations",
"Graph theory",
"Graph theorists"
] |
614,969 | https://en.wikipedia.org/wiki/Andr%C3%A1s%20Gy%C3%A1rf%C3%A1s | András Gyárfás (born 1945) is a Hungarian mathematician who specializes in the study of graph theory. He is famous for two conjectures:
Together with Paul Erdős he conjectured what is now called the Erdős–Gyárfás conjecture which states that any graph with minimum degree 3 contains a cycle whose length is a power of two.
He and David Sumner independently formulated the Gyárfás–Sumner conjecture, according to which, for every tree T, the T-free graphs are χ-bounded (the definition is sketched after this list).
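For readers meeting the term for the first time, χ-boundedness has a short formal definition; the statement below is the standard one, added here for context rather than quoted from this article.

```latex
% A hereditary class of graphs \mathcal{G} is \chi-bounded if there is a
% function f such that
\[
  \chi(G) \;\le\; f\bigl(\omega(G)\bigr) \qquad \text{for every } G \in \mathcal{G},
\]
% where \chi(G) is the chromatic number and \omega(G) the clique number.
% The Gyárfás–Sumner conjecture asserts this for the class of graphs with
% no induced copy of a fixed tree T.
```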
Gyárfás began working as a researcher for the Computer and Automation Research Institute of the Hungarian Academy of Sciences in 1968. He earned a candidate degree in 1980, and a doctorate (Dr. Math. Sci.) in 1992. He won the Géza Grünwald Commemorative Prize for young researchers of the János Bolyai Mathematical Society in 1978. He was co-author with Paul Erdős on 15 papers, and thus has Erdős number one.
References
External links
András Gyárfás at the Computer and Automation Research Institute, Hungarian Academy of Sciences
Google scholar profile
20th-century Hungarian mathematicians
21st-century Hungarian mathematicians
1945 births
Combinatorialists
Living people | András Gyárfás | [
"Mathematics"
] | 242 | [
"Combinatorialists",
"Combinatorics"
] |
614,984 | https://en.wikipedia.org/wiki/Richard%20Rado | Richard Rado FRS (28 April 1906 – 23 December 1989) was a German-born British mathematician whose research concerned combinatorics and graph theory. He was Jewish and left Germany to escape Nazi persecution. He earned two PhDs: in 1933 from the University of Berlin, and in 1935 from the University of Cambridge. He was interviewed in Berlin by Lord Cherwell for a scholarship given by the chemist Sir Robert Mond which provided financial support to study at Cambridge. After he was awarded the scholarship, Rado and his wife left for the UK in 1933. He was appointed Professor of Mathematics at the University of Reading in 1954 and remained there until he retired in 1971.
Contributions
Rado made contributions in combinatorics and graph theory including 18 papers with Paul Erdős.
In graph theory, the Rado graph, a countably infinite graph containing all countably infinite graphs as induced subgraphs, is named after Rado. He rediscovered it in 1964, after earlier work on the same graph by Wilhelm Ackermann, Erdős, and Alfréd Rényi.
In combinatorial set theory, the Erdős–Rado theorem extends Ramsey's theorem to infinite sets. It was published by Erdős and Rado in 1956. Rado's theorem is another Ramsey-theoretic result concerning systems of linear equations, proved by Rado in his thesis. The Milner–Rado paradox, also in set theory, states the existence of a partition of an ordinal into subsets of small order-type; it was published by Rado and E. C. Milner in 1965.
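The simplest case of the Erdős–Rado theorem can be written compactly in the arrow notation of partition calculus; the following standard statement is added for illustration.

```latex
% Erdős–Rado theorem, simplest case: for every infinite cardinal \kappa,
\[
  (2^{\kappa})^{+} \;\longrightarrow\; (\kappa^{+})^{2}_{\kappa},
\]
% i.e. whenever the pairs from a set of size (2^\kappa)^+ are coloured
% with \kappa colours, there is a subset of size \kappa^+ all of whose
% pairs receive the same colour; taking \kappa = \aleph_0 extends
% Ramsey's theorem beyond the countable.
```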
The Erdős–Ko–Rado theorem can be described either in terms of set systems or hypergraphs. It gives an upper bound on the number of sets in a family of finite sets, all the same size, that all intersect each other. Rado published it with Erdős and Chao Ko in 1961, but according to Erdős it was originally formulated in 1938.
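The bound in the Erdős–Ko–Rado theorem has a simple closed form; the standard statement, added here for illustration, reads:

```latex
% Erdős–Ko–Rado theorem: if n >= 2k and \mathcal{A} is a family of
% k-element subsets of an n-element set, any two of which intersect, then
\[
  |\mathcal{A}| \;\le\; \binom{n-1}{k-1},
\]
% with equality attained by a "star": the family of all k-sets that
% contain one fixed element.
```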
In matroid theory, Rado proved a fundamental result of transversal theory by generalizing the Marriage Theorem for matchings between sets S and X to the case where X carries a matroid structure and the matched subset of X is required to be independent in that matroid.
The Klarner–Rado Sequence is named after Rado and David A. Klarner.
Awards and honours
In 1972, Rado was awarded the Senior Berwick Prize.
References
Further reading
"Richard Rado", The Times (London), 2 January 1990, p. 12.
1906 births
1989 deaths
20th-century British mathematicians
20th-century German mathematicians
Fellows of the Royal Society
Jewish emigrants from Nazi Germany to the United Kingdom
Set theorists
Graph theorists
Humboldt University of Berlin alumni
Alumni of Fitzwilliam College, Cambridge | Richard Rado | [
"Mathematics"
] | 561 | [
"Mathematical relations",
"Graph theory",
"Graph theorists"
] |
614,988 | https://en.wikipedia.org/wiki/Alfr%C3%A9d%20R%C3%A9nyi | Alfréd Rényi (20 March 1921 – 1 February 1970) was a Hungarian mathematician known for his work in probability theory, though he also made contributions in combinatorics, graph theory, and number theory.
Life
Rényi was born in Budapest to Artúr Rényi and Borbála Alexander; his father was a mechanical engineer, while his mother was the daughter of philosopher and literary critic Bernhard Alexander; his uncle was Franz Alexander, a Hungarian-American psychoanalyst and physician.
He was prevented from enrolling in university in 1939 due to the anti-Jewish laws then in force, but enrolled at the University of Budapest in 1940 and finished his studies in 1944. At that point he was drafted into forced labour service, from which he managed to escape during the transportation of his company, and he remained in hiding with false documents for six months. Biographers tell a remarkable story about Rényi: after half a year in hiding, he managed to get hold of a soldier's uniform and marched his parents out of the Budapest Ghetto, where they were held captive. The mission required enormous courage and planning.
Rényi then completed his PhD in 1947 at the University of Szeged, under the supervision of Frigyes Riesz. He did postgraduate work in Moscow and Leningrad, where he collaborated with the prominent Soviet mathematician Yuri Linnik.
Rényi married Katalin Schulhof (who used Kató Rényi as her married name), herself a mathematician, in 1946; their daughter Zsuzsanna was born in 1948. After a brief assistant professorship at Budapest, he was appointed Professor Extraordinary at the University of Debrecen in 1949. In 1950, he founded the Mathematics Research Institute of the Hungarian Academy of Sciences, now bearing his name, and directed it until his early death. He also headed the Department of Probability and Mathematical Statistics of the Eötvös Loránd University, from 1952. He was elected a corresponding member (1949), then full member (1956), of the Hungarian Academy of Sciences.
Work
Rényi proved, using the large sieve, that there is a number K such that every even number is the sum of a prime number and a number that can be written as the product of at most K primes. Chen's theorem, a strengthening of this result, shows that the theorem is true for K = 2 for all sufficiently large even numbers. The case K = 1 is the still-unproven Goldbach conjecture.
In information theory, he introduced the spectrum of Rényi entropies of order α, giving an important generalisation of the Shannon entropy and the Kullback–Leibler divergence. The Rényi entropies give a spectrum of useful diversity indices, and lead to a spectrum of fractal dimensions. The Rényi–Ulam game is a guessing game where some of the answers may be wrong.
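The definition makes the generalisation explicit; the formula below is the standard one, added for illustration rather than quoted from this article.

```latex
% Rényi entropy of order \alpha (\alpha > 0, \alpha \neq 1) of a discrete
% distribution p_1, \dots, p_n:
\[
  H_{\alpha}(X) \;=\; \frac{1}{1-\alpha}\,\log\!\Bigl(\sum_{i=1}^{n} p_i^{\alpha}\Bigr),
\]
% which converges to the Shannon entropy -\sum_i p_i \log p_i as
% \alpha \to 1, and to the min-entropy -\log \max_i p_i as \alpha \to \infty.
```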
In probability theory, he is also known for his parking constants, which characterize the solution to the following problem: given a street of some length and cars of unit length parking at a random free position on the street, what is the mean density of cars when there are no more free positions? The solution is asymptotically equal to 0.7475979..., so random parking is about 25.2% less efficient than optimal packing.
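The parking problem is straightforward to simulate. The sketch below is a minimal Monte Carlo illustration of my own, not taken from Rényi's work; the function name and parameters are arbitrary. It drops unit cars at uniformly random free positions until no gap of length at least 1 remains, then compares the mean density with Rényi's constant.

```python
# Monte Carlo sketch of Rényi's parking problem on a street [0, L]:
# the mean parked density should approach ~0.7475979 as L grows.
import random

def parked_length(gap: float, rng: random.Random) -> float:
    """Total car length that ends up parked in an empty gap of this length."""
    if gap < 1.0:
        return 0.0                                  # no room for even one car
    x = rng.uniform(0.0, gap - 1.0)                 # left end of the new car
    # the car occupies [x, x + 1]; recurse on the two gaps it leaves
    return 1.0 + parked_length(x, rng) + parked_length(gap - x - 1.0, rng)

rng = random.Random(1)
street, trials = 200.0, 400
density = sum(parked_length(street, rng) for _ in range(trials)) / (trials * street)
print(f"estimated density = {density:.4f}  (Rényi's constant = 0.7476)")
```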
He wrote 32 joint papers with Paul Erdős, the most well-known of which are his papers introducing the Erdős–Rényi model of random graphs.
His complete bibliography was compiled by the mathematician Pál Medgyessy.
Quotations
Rényi, who was addicted to coffee, is the source of the quote: "A mathematician is a device for turning coffee into theorems", which is often ascribed to Erdős. It has been suggested that this sentence was originally formulated in German, where it can be interpreted as a double entendre on the meaning of the word Satz (theorem or coffee residue), but it is more likely that the original formulation was in Hungarian.
He is also famous for having said, "If I feel unhappy, I do mathematics to become happy. If I am happy, I do mathematics to keep happy."
Remembrance
The Alfréd Rényi Prize, awarded by the Hungarian Academy of Science, was established in his honor.
In 1950 Rényi founded the Mathematics Research Institute of the Hungarian Academy of Sciences. It was renamed the Alfréd Rényi Institute of Mathematics in July 1999.
Books
A. Rényi: Dialogues on Mathematics, Holden-Day, 1967.
A. Rényi: A diary on information theory, Akadémiai Kiadó
A. Rényi, Foundations of Probability, Holden-Day, Inc., San Francisco, 1970, xvi + 366 pp
A. Rényi, Probability Theory. American Elsevier Publishing Company, New York, 1970, 666 pp.
A. Rényi, Letters on Probability, Wayne State University Press, Detroit, 1972, 86pp.
Foundations of Probability and Probability Theory have both been reprinted by Dover Publications.
References
Sources
External links
The life of Alfréd Rényi, by Pál Turán
.
1921 births
1970 deaths
20th-century Hungarian mathematicians
Number theorists
Graph theorists
Probability theorists
Members of the Hungarian Academy of Sciences
Mathematicians from Budapest
Academic staff of the University of Debrecen
Hungarian World War II forced labourers
Hungarian escapees
Escapees from Nazi concentration camps
Network scientists | Alfréd Rényi | [
"Mathematics"
] | 1,085 | [
"Graph theory",
"Number theorists",
"Mathematical relations",
"Graph theorists",
"Number theory"
] |
614,998 | https://en.wikipedia.org/wiki/P%C3%A1l%20Tur%C3%A1n | Pál Turán (; 18 August 1910 – 26 September 1976) also known as Paul Turán, was a Hungarian mathematician who worked primarily in extremal combinatorics.
In 1940, because of his Jewish origins, he was conscripted into a forced labour battalion and sent to a labour camp in Transylvania, later being transferred several times to other camps. While imprisoned, Turán came up with some of his best theories, which he was able to publish after the war.
Turán had a long collaboration with fellow Hungarian mathematician Paul Erdős, lasting 46 years and resulting in 28 joint papers.
Biography
Early years
Turán was born into a Hungarian Jewish family in Budapest on 18 August 1910. His outstanding mathematical abilities showed early; already in secondary school he was the best student.
During the same period, Turán and Pál Erdős were prolific problem-solvers in the journal KöMaL. On 1 September 1930, at a mathematical seminar at the University of Budapest, Turán met Erdős. They would collaborate for 46 years and produce 28 scientific papers together.
Turán received a teaching degree at the University of Budapest in 1933. In the same year he published two major scientific papers in the journals of the American and London Mathematical Societies. He got the PhD degree under Lipót Fejér in 1935 at Eötvös Loránd University.
As a Jew, he fell victim to numerus clausus, and could not get a stable job for several years. He made a living as a tutor, preparing applicants and students for exams. It was not until 1938 that he got a job at a rabbinical training school in Budapest as a teacher's assistant, by which time he had already had 16 major scientific publications and an international reputation as one of Hungary's leading mathematicians.
He married Edit (Klein) Kóbor in 1939; they had one son, Róbert.
In World War II
In September 1940 Turán was interned in labour service. As he recalled later, his five years in labour camps eventually saved his life: they saved him from ending up in a concentration camp, where 550,000 of the 770,000 Hungarian Jews were murdered during World War II. In 1940 Turán ended up in Transylvania for railway construction. Turán said that one day while working another prisoner addressed him by his surname, saying that he was working extremely clumsily:
"An officer was standing nearby, watching us work. When he heard my name, he asked the comrade whether I was a mathematician. It turned out, that the officer, Joshef Winkler, was an engineer. In his youth, he had placed in a mathematical competition; in civilian life he was a proof-reader at the print shop where the periodical of the Third Class of the Academy (Mathematical and Natural sciences) was printed. There he had seen some of my manuscripts."
Winkler wanted to help Turán and managed to get him transferred to an easier job. Turán was sent to the sawmill's warehouse, where he had to show the carriers the right-sized timbers. During this period, Turán composed and was partly able to record a long paper on the Riemann zeta function.
Turán was subsequently transferred several times to other camps. As he later recalled, the only way he was able to keep his sanity was through mathematics, solving problems in his head and thinking through problems.
In July 1944 Turán worked at a brick factory near Budapest. His and the other prisoners' task was to push brick-laden cars from the kilns to the warehouses on rails that crossed other tracks at several points. At these crossings the cars would "bounce" and some of the bricks would fall out, causing problems for the workers. This situation led Turán to consider how to minimise the number of crossings for m kilns and n warehouses. It was only after the war, in 1952, that he was able to work seriously on this problem.
Turán was liberated in 1944, after which he was able to return to work at the rabbinical school in Budapest.
After WWII
Turán became associate professor at the University of Budapest in 1945 and full professor in 1949. In the early post-war years, the streets were patrolled by soldiers, and on occasion random people were seized and sent to penal camps in Siberia. One such patrol stopped Turán, who was on his way home from university. The soldiers questioned the mathematician and then forced him to show them the contents of his briefcase. Seeing a reprint of an article from a pre-war Soviet magazine among the papers, the soldiers immediately let him go. The only thing Turán said about that day in his correspondence with Erdős was that he had "come across an extremely interesting way of applying number theory..."
In 1952 he married again, the second marriage was to Vera Sós, a mathematician. They had a son, György, in 1953. The couple published several papers together.
One of his students said Turán was a very passionate and active man: in the summer he held maths seminars by the pool between his swimming and rowing training. In 1960 he celebrated his 50th birthday and the birth of his third son, Tamás, by swimming across the Danube.
Turán was a member of the editorial boards of leading mathematical journals, he worked as a visiting professor at many of the top universities in the world. He was a member of the Polish, American and Austrian Mathematical Societies. In 1970, he was invited to serve on the committee of the Fields Prize. Turán also founded and served as the president of the János Bolyai Mathematical Society.
Death
Around 1970 Turán was diagnosed with leukaemia, but the diagnosis was revealed only to his wife Vera Sós, who decided not to tell him about his illness. In 1976 she told Erdős. Sós was sure that Turán was ‘too much in love with life’ and would have fallen into despair at the news of his fatal illness, and would not have been able to work properly. Erdős said that Turán did not lose his spirit even in the Nazi camps and did brilliant work there. Erdős regretted that Turán had been kept unaware of his illness because he had put off certain works and books 'for later', hoping that he would soon feel better, and in the end was never able to finish them. Turán died in Budapest on 26 September 1976 of leukemia, aged 66.
Work
Turán worked primarily in number theory, but also did much work in analysis and graph theory.
Number theory
In 1934, Turán used the Turán sieve to give a new and very simple proof of a 1917 result of G. H. Hardy and Ramanujan on the normal order of the number of distinct prime divisors of a number n, namely that it is very close to log log n. In probabilistic terms, he estimated the variance of this count from log log n. Halász says "Its true significance lies in the fact that it was the starting point of probabilistic number theory". The Turán–Kubilius inequality is a generalization of this work.
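Turán's estimate can be stated precisely; the standard formulation, added here for illustration, is:

```latex
% Turán (1934): writing \omega(n) for the number of distinct prime
% factors of n,
\[
  \sum_{n \le x} \bigl(\omega(n) - \log\log x\bigr)^{2} \;=\; O\bigl(x \log\log x\bigr),
\]
% from which the Hardy–Ramanujan theorem follows: \omega(n) lies within
% (\log\log n)^{1/2+\varepsilon} of \log\log n for almost all n.
```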
Turán was very interested in the distribution of primes in arithmetic progressions, and he coined the term "prime number race" for irregularities in the distribution of prime numbers among residue classes. With his coauthor Knapowski he proved results concerning Chebyshev's bias. The Erdős–Turán conjecture makes a statement about primes in arithmetic progression. Much of Turán's number theory work dealt with the Riemann hypothesis and he developed the power sum method (see below) to help with this. Erdős said "Turán was an 'unbeliever,' in fact, a 'pagan': he did not believe in the truth of Riemann's hypothesis."
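The bias is easy to observe numerically. The sketch below is my own illustration, not from the source: it runs the mod-4 "race" with a basic sieve of Eratosthenes, and for most small bounds the class 3 (mod 4) stays ahead, the phenomenon Turán and Knapowski studied.

```python
# Sketch of the mod-4 prime number race: count primes congruent to
# 1 and to 3 (mod 4) up to a bound, using a sieve of Eratosthenes.
def primes_up_to(n: int) -> list[int]:
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"                      # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

for bound in (10_000, 100_000, 1_000_000):
    ps = primes_up_to(bound)
    team3 = sum(1 for p in ps if p % 4 == 3)
    team1 = sum(1 for p in ps if p % 4 == 1)
    print(f"up to {bound:>9}: 3 (mod 4) -> {team3},  1 (mod 4) -> {team1}")
```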
Analysis
Much of Turán's work in analysis was tied to his number theory work. Outside of this he proved Turán's inequalities relating the values of the Legendre polynomials for different indices, and, together with Paul Erdős, the Erdős–Turán equidistribution inequality.
Graph theory
Erdős wrote of Turán, "In 1940–1941 he created the area of extremal problems in graph theory which is now one of the fastest-growing subjects in combinatorics." The field is known more briefly today as extremal graph theory. Turán's best-known result in this area is Turán's graph theorem, that gives an upper bound on the number of edges in a graph that does not contain the complete graph Kr as a subgraph. He invented the Turán graph, a generalization of the complete bipartite graph, to prove his theorem. He is also known for the Kővári–Sós–Turán theorem bounding the number of edges that can exist in a bipartite graph with certain forbidden subgraphs, and for raising Turán's brick factory problem, namely of determining the crossing number of a complete bipartite graph.
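Turán's graph theorem has a compact quantitative form; the standard statement, added here for illustration, is:

```latex
% Turán's theorem: a graph on n vertices with no complete subgraph
% K_{r+1} has at most
\[
  \Bigl(1 - \frac{1}{r}\Bigr)\frac{n^{2}}{2}
\]
% edges, with equality for the Turán graph T(n, r): the complete
% r-partite graph whose parts are as equal in size as possible.
```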
Power sum method
Turán developed the power sum method to work on the Riemann hypothesis. The method deals with inequalities giving lower bounds for sums of the form

\[ \max_{m+1 \le \nu \le m+n} \Bigl| \sum_{j=1}^{n} b_j z_j^{\nu} \Bigr|, \qquad z_j \in \mathbb{C}, \]

hence the name "power sum".
Aside from its applications in analytic number theory, it has been used in complex analysis, numerical analysis, differential equations, transcendental number theory, and estimating the number of zeroes of a function in a disk.
Publications
Honors
Hungarian Academy of Sciences elected corresponding member in 1948 and ordinary member in 1953
Kossuth Prize in 1948 and 1952
Tibor Szele Prize of János Bolyai Mathematical Society 1975
Notes
Sources
External links
Paul Turán memorial lectures at the Rényi Institute
1910 births
1976 deaths
20th-century Hungarian mathematicians
Mathematicians from Austria-Hungary
Graph theorists
Number theorists
Members of the Hungarian Academy of Sciences
Hungarian Jews
Deaths from leukemia
Deaths from cancer in Hungary
Eötvös Loránd University alumni
Hungarian World War II forced labourers | Pál Turán | [
"Mathematics"
] | 2,025 | [
"Graph theory",
"Number theorists",
"Mathematical relations",
"Graph theorists",
"Number theory"
] |
615,075 | https://en.wikipedia.org/wiki/Stone%20carving | Stone carving is an activity where pieces of rough natural stone are shaped by the controlled removal of stone. Owing to the permanence of the material, stone work has survived which was created during our prehistory or past time.
Work carried out by paleolithic societies to create stone tools is more often referred to as knapping. Stone carving done to produce lettering is more often referred to as lettercutting. The process of removing stone from the earth is called mining or quarrying.
Stone carving is one of the processes which may be used by an artist when creating a sculpture. The term also refers to the activity of masons in dressing stone blocks for use in architecture, building or civil engineering. It is also a phrase used by archaeologists, historians, and anthropologists to describe the activity involved in making some types of petroglyphs.
History
The earliest known works of representational art are stone carvings. Often marks carved into rock or petroglyphs will survive where painted work will not. Prehistoric Venus figurines such as the Venus of Berekhat Ram may be as old as 250,000 years, and are carved in stones such as tuff and limestone.
These earliest examples of the stone carving are the result of hitting or scratching a softer stone with a harder one, although sometimes more resilient materials such as antlers are known to have been used for relatively soft stone. Another early technique was to use an abrasive that was rubbed on the stone to remove the unwanted area.
Prior to the discovery of steel by any culture, all stone carving was carried out using an abrasion technique, following rough hewing of the stone block with hammers. The reason for this is that bronze, the hardest available metal until steel, is not hard enough to work any but the softest stone. The Ancient Greeks used the ductility of bronze to trap small granules of emery, a natural abrasive that occurs in the Aegean islands, thus making a very efficient file for abrading the stone.
The development of iron made possible stone carving tools, such as chisels, drills and saws made from steel, that were capable of being hardened and tempered to a state hard enough to cut stone without deforming, while not being so brittle as to shatter. Carving tools have changed little since then.
Modern, industrial, large quantity techniques still rely heavily on abrasion to cut and remove stone, although at a significantly faster rate with processes such as water erosion and diamond saw cutting.
One modern stone carving technique uses a new process: applying sudden high temperature to the surface. The expansion of the top surface due to the sudden increase in temperature causes it to break away. On a small scale, oxy-acetylene torches are used; on an industrial scale, lasers. On a massive scale, carvings such as the Crazy Horse Memorial, cut into the granite of the Black Hills near Mount Rushmore, and the Confederate Memorial Carving at Stone Mountain, Georgia, have been produced using jet heat torches.
Stone sculpture
Carving stone into sculpture is an activity older than civilization itself. Prehistoric sculptures were usually human forms, such as the Venus of Willendorf and the faceless statues of the Cycladic cultures. Later cultures devised animal, human-animal and abstract forms in stone. The earliest cultures used abrasive techniques, and modern technology employs pneumatic hammers and other devices. But for most of human history, sculptors used hammer and chisel as the basic tools for carving stone.
The process begins with the selection of a stone for carving. Some artists use the stone itself as inspiration; the Renaissance artist Michelangelo claimed that his job was to free the human form trapped inside the block. Other artists begin with a form already in mind and find a stone to complement their vision. The sculptor may begin by forming a model in clay or wax, sketching the form of the statue on paper or drawing a general outline of the statue on the stone itself.
When ready to carve, the artist usually begins by knocking off large portions of unwanted stone. This is the "roughing out" stage of the sculpting process. For this task they may select a point chisel, which is a long, hefty piece of steel with a point at one end and a broad striking surface at the other. A pitching tool, a wedge-shaped chisel with a broad, flat edge, may also be used at this early stage; it is useful for splitting the stone and removing large, unwanted chunks. These two chisels are used in combination with a mason's driving hammer.
Once the general shape of the statue has been determined, the sculptor uses other tools to refine the figure. A toothed chisel or claw chisel has multiple gouging surfaces which create parallel lines in the stone. These tools are generally used to add texture to the figure. An artist might mark out specific lines by using calipers to measure an area of stone to be addressed, and marking the removal area with pencil, charcoal or chalk. The stone carver generally uses a shallower stroke at this point in the process, usually in combination with a wooden mallet.
Eventually the sculptor has changed the stone from a rough block into the general shape of the finished statue. Tools called rasps and rifflers are then used to enhance the shape into its final form. A rasp is a flat, steel tool with a coarse surface. The sculptor uses broad, sweeping strokes to remove excess stone as small chips or dust. A riffler is a smaller variation of the rasp, which can be used to create details such as folds of clothing or locks of hair.
The final stage of the carving process is polishing. Sandpaper or sand cloth can be used as a first step in the polishing process. Emery, a stone that is harder and rougher than the sculpture media, is also used in the finishing process. This abrading, or wearing away, brings out the color of the stone, reveals patterns in the surface and adds a sheen. Tin and iron oxides are often used to give the stone a highly reflective exterior.
Sculptures can be carved via either the direct or the indirect carving method. Indirect carving is a way of carving by using an accurate clay, wax or plaster model, which is then copied with the use of a compass or proportional dividers or a pointing machine. The direct carving method is a way of carving in a more intuitive way, without first making an elaborate model. Sometimes a sketch on paper or a rough clay draft is made.
Stone carving considerations
Stone has been used for carving since ancient times for many reasons. Most types of stone are easier to find than metal ores, which have to be mined and smelted. Stone can be dug from the surface and carved with hand tools. Stone is more durable than wood, and carvings in stone last much longer than wooden artifacts. Stone comes in many varieties and artists have abundant choices in color, quality and relative hardness.
Soft stone such as chalk, soapstone, pumice and tufa can be easily carved with found items such as harder stones or, in the case of chalk, even the fingernail.
Limestones and marbles can be worked using abrasives and simple iron tools.
Granite, basalt and some metamorphic stones are difficult to carve even with iron or steel tools; usually tungsten carbide-tipped tools are used, although abrasives still work well. Modern techniques often use abrasives attached to machine tools to cut the stone.
Precious and semi-precious gemstones are also carved into delicate shapes for jewellery or larger items, and polished; this is sometimes referred to as lapidary, although strictly speaking lapidary refers to cutting and polishing alone.
When worked, some stones release dust that can damage lungs (silica crystals are usually to blame), so a respirator is sometimes needed.
Stone shaping and tools
Basic stone carving tools fall into five categories:
Percussion tools for hitting - such as mallets, axes, adzes, bouchards and toothed hammers.
Tools for rough shaping of stone, to form a block the size needed for the carving. These include feathers and wedges and pitching tools.
Chisels for cutting - such as lettering chisels, points, pitching tools, and claw chisels. Chisels, in turn, may be handheld and hammered or pneumatic powered.
Diamond tools which include burrs, cup wheels, and blades mounted on a host of power tools. These are used sometimes through the entire carving process from rough work to the final finish.
Abrasives for material removal - such as carborundum blocks, drills, saws, grinding and cutting wheels, water-abrasive machinery and dressing tools such as French and English drags.
More advanced processes, such as laser cutting and jet torches, use sudden high temperature with a combination of cooling water to spall flakes of stone. Other modern processes may involve diamond-wire machines or other large scale production equipment to remove large sections of undesired stone.
The use of chisels for stone carving is possible in several ways. Two are:
The mason's stroke, in which a flat chisel is used at approximately 90 degrees to the surface in an organized sweep. It shatters the stone beneath it and each successive pass lowers the surface.
The lettering stroke, in which the chisel is used along the surface at approximately 30 degrees to cut beneath the existing surface.
There are many types and styles of stone carving tools; each carver decides which tools to use. Traditionalists might use hand tools only.
Lettering chisels, for incising small strokes, create the details of letters in larger applications.
Fishtail carving chisels are used to create pockets, valleys and for intricate carving, whilst providing good visibility around the stone.
Masonry chisels are used for the general shaping of stones.
Stone point tools are used to rough out the surface of the stone.
Stone claw tools are used to remove the peaks and troughs left from the previously used tools.
Stone pitching tools are used to remove large quantities of stone.
Stone nickers are used to split stones by tracing a line along the stone with progressive strikes until the stone breaks along the line.
Powered pneumatic hammers make the hard work easier, and progress on shaping stone is faster with pneumatic carving tools. Air hammers (such as Cuturi) deliver many thousands of impacts per minute to the end of the tool, which is usually manufactured or modified to suit the purpose. This type of tool makes it possible to 'shave' the stone, providing a smooth and consistent stroke and allowing larger surfaces to be worked.
Among modern tool types, there are two main stone carving chisels:
Heat treated high carbon steel tools - Generally forged
Tungsten carbide tipped tools - Generally forged and slotted, with carbide inserts brazed in to provide a harder and longer-wearing cutting edge.
Gallery
See also
Chalk carving
List of colossal sculptures in situ
Khachkar
Megalith
Rock-cut architecture
Stone sculpture
Stonemasonry
The Stonemason (2020 book)
Songjiang Tangjing Building
References
External links
Stone Carving: A How-To Demonstration, video
Stone Carver Interview, Gargoyles and the Gothic Style: An Interview with professional stone carver Walter S. Arnold, video
The Cesnola Collection of Cypriot art: stone sculpture, a fully digitized collection catalog from The Metropolitan Museum of Art Libraries, which contains material on stone carvings
Carving
Stone (material)
Visual arts media
Monumental masons | Stone carving | [
"Engineering"
] | 2,364 | [
"Construction",
"Stonemasonry"
] |
615,108 | https://en.wikipedia.org/wiki/Poynting%E2%80%93Robertson%20effect | The Poynting–Robertson effect, also known as Poynting–Robertson drag, named after John Henry Poynting and Howard P. Robertson, is a process by which solar radiation causes a dust grain orbiting a star to lose angular momentum relative to its orbit around the star. This is related to radiation pressure tangential to the grain's motion.
This causes dust that is small enough to be affected by this drag, but too large to be blown away from the star by radiation pressure, to spiral slowly into the star. In the Solar System, this affects dust grains from about 1 μm to 1 mm in diameter. Larger dust is likely to collide with another object long before such drag can have an effect.
Poynting initially gave a description of the effect in 1903 based on the luminiferous aether theory, which was superseded by the theories of relativity in 1905–1915. In 1937 Robertson described the effect in terms of general relativity.
History
Robertson considered dust motion in a beam of radiation emanating from a point source. A. W. Guess later considered the problem for a spherical source of radiation and found that for particles far from the source the resultant forces are in agreement with those concluded by Poynting.
Cause
The effect can be understood in two ways, depending on the reference frame chosen.
From the perspective of the grain of dust circling a star (panel (a) of the figure), the star's radiation appears to be coming from a slightly forward direction (aberration of light). Therefore, the absorption of this radiation leads to a force with a component against the direction of movement. The angle of aberration is extremely small since the radiation is moving at the speed of light while the dust grain is moving many orders of magnitude slower than that.
From the perspective of the star (panel (b) of the figure), the dust grain absorbs sunlight entirely in a radial direction, thus the grain's angular momentum is not affected by it. But the re-emission of photons, which is isotropic in the frame of the grain (a), is no longer isotropic in the frame of the star (b). This anisotropic emission causes the photons to carry away angular momentum from the dust grain.
The Poynting–Robertson drag acts in the opposite direction to the dust grain's orbital motion, leading to a drop in the grain's angular momentum. While the dust grain thus spirals slowly into the star, its orbital speed increases continuously.
The Poynting–Robertson force is equal to

$$F_{\rm PR} = \frac{vW}{c^2} = \frac{r^2 L_\odot}{4 c^2} \sqrt{\frac{G M_\odot}{R^5}}$$

where $v$ is the grain's velocity, $c$ is the speed of light, $W$ is the power of the incoming radiation, $r$ the grain's radius, $G$ is the universal gravitational constant, $M_\odot$ the Sun's mass, $L_\odot$ is the solar luminosity, and $R$ the grain's orbital radius.
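As a rough numerical illustration (a minimal sketch; the 1 μm grain radius and 1 AU circular orbit are assumed example values, not figures from the text), the expression can be evaluated directly:

```python
import math

# Physical constants (SI units)
G = 6.674e-11      # universal gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
L_sun = 3.828e26   # solar luminosity, W

# Assumed example grain: radius 1 micrometre on a circular orbit at 1 AU
r = 1e-6           # grain radius, m
R = 1.496e11       # orbital radius, m

W = r**2 * L_sun / (4 * R**2)  # radiant power intercepted by the grain
v = math.sqrt(G * M_sun / R)   # Keplerian orbital speed

F_PR = v * W / c**2            # Poynting–Robertson drag force
print(f"W = {W:.3e} W, v = {v:.3e} m/s, F_PR = {F_PR:.3e} N")
```

For these assumed values the drag force comes out on the order of 10⁻²¹ N: negligible at any instant, but acting continuously over thousands of years.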
Relation to other forces
The Poynting–Robertson effect is more pronounced for smaller objects. Gravitational force varies with mass, which is proportional to $r^3$ (where $r$ is the radius of the dust), while the power it receives and radiates varies with surface area ($\propto r^2$). So for large objects the effect is negligible.
The effect is also stronger closer to the Sun. Gravity varies as $1/R^2$ (where $R$ is the radius of the orbit), whereas the Poynting–Robertson force varies as $1/R^{2.5}$, so the effect also gets relatively stronger as the object approaches the Sun. This tends to reduce the eccentricity of the object's orbit in addition to dragging it in.
In addition, as the size of the particle increases, the surface temperature is no longer approximately constant, and the radiation pressure is no longer isotropic in the particle's reference frame. If the particle rotates slowly, the radiation pressure may contribute to the change in angular momentum, either positively or negatively.
Radiation pressure affects the effective force of gravity on the particle: it is felt more strongly by smaller particles, and blows very small particles away from the Sun. It is characterized by the dimensionless dust parameter $\beta$, the ratio of the force due to radiation pressure to the force of gravity on the particle:

$$\beta = \frac{F_{\rm r}}{F_{\rm g}} = \frac{3 L Q_{\rm PR}}{16 \pi G M c \rho s}$$

where $Q_{\rm PR}$ is the Mie scattering coefficient, $\rho$ is the density, and $s$ is the size (the radius) of the dust grain.
Impact of the effect on dust orbits
Particles with $\beta \geq 0.5$ have radiation pressure at least half as strong as gravity and will pass out of the Solar System on hyperbolic orbits if their initial velocities were Keplerian.
For rocky dust particles, this corresponds to a diameter of less than 1 μm.
Particles with $0.1 < \beta < 0.5$ may spiral inwards or outwards, depending on their size and initial velocity vector; they tend to stay in eccentric orbits.
Particles with $\beta \approx 0.1$ take around 10,000 years to spiral into the Sun from a circular orbit at 1 AU. In this regime, inspiraling time and particle diameter are both roughly proportional to $1/\beta$.
If the initial grain velocity was not Keplerian, then a circular or any confined orbit is possible for $\beta < 1$.
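As a minimal sketch combining the β formula above with these regimes (the rocky-grain density of 3000 kg/m³ and Q_PR = 1 are illustrative assumptions):

```python
import math

G = 6.674e-11      # gravitational constant, SI units
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
L_sun = 3.828e26   # solar luminosity, W

def beta(s, rho=3000.0, q_pr=1.0):
    """Radiation-pressure-to-gravity ratio for a spherical grain of
    radius s (m), density rho (kg/m^3) and Mie coefficient q_pr."""
    return 3 * L_sun * q_pr / (16 * math.pi * G * M_sun * c * rho * s)

def fate(b):
    # Regimes as listed above, assuming an initially Keplerian orbit.
    if b >= 0.5:
        return "leaves the Solar System on a hyperbolic orbit"
    if b > 0.1:
        return "spirals inwards or outwards, tending to stay eccentric"
    return "slowly spirals into the Sun"

for radius in (0.1e-6, 0.5e-6, 5e-6):  # assumed example radii, m
    b = beta(radius)
    print(f"s = {radius * 1e6:.1f} um: beta = {b:.2f} -> {fate(b)}")
```

Under these assumptions a grain of 0.1 μm radius gets β ≈ 1.9 and is ejected, consistent with the sub-micrometre ejection threshold quoted above.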
It has been theorized that the slowing down of the rotation of the Sun's outer layer may be caused by a similar effect.
See also
Differential Doppler effect
Radiation pressure
Yarkovsky effect
Speed of gravity
References
Additional sources
(Abstract of Philosophical Transactions paper)
Orbital perturbations
Doppler effects
Cosmic dust
Special relativity
Radiation effects | Poynting–Robertson effect | [
"Physics",
"Materials_science",
"Astronomy",
"Engineering"
] | 1,075 | [
"Physical phenomena",
"Outer space",
"Materials science",
"Astrophysics",
"Special relativity",
"Radiation",
"Condensed matter physics",
"Theory of relativity",
"Doppler effects",
"Radiation effects",
"Astronomical objects",
"Cosmic dust"
] |
615,194 | https://en.wikipedia.org/wiki/Oil%20Pollution%20Act%20of%201990 | The Oil Pollution Act of 1990 (OPA) was passed by the 101st United States Congress and signed by President George H. W. Bush. It works to prevent oil spills from vessels and facilities by enforcing removal of spilled oil and assigning liability for the cost of cleanup and damage; it requires specific operating procedures; defines responsible parties and financial liability; implements processes for measuring damages; specifies damages for which violators are liable; and establishes a fund for damages, cleanup, and removal costs. This statute has resulted in instrumental changes in the oil production, transportation, and distribution industries.
Background
Laws governing oil spills in the United States began in 1851 with the Limitation of Liability Act. This statute, in an attempt to protect the shipping industry, stated that vessel owners were liable for incident-related costs only up to the post-incident value of their vessel. The shortcomings of this law were revealed in 1967 with the release of over 100,000 tons of crude oil into the English Channel from the Torrey Canyon. Of the $8 million in cleanup-related costs, the owners of the Torrey Canyon were held liable for only $50—the value of the only remaining Torrey Canyon lifeboat. In the meantime, the Oil Pollution Act of 1924 had passed, but this statute only limited liability for deliberate discharge of oil into marine waters.
Two years after the Torrey Canyon spill, an oil platform eruption in the Santa Barbara Channel made national headlines and thrust oil pollution into the public spotlight. As a result, Congress placed oil pollution under the authority of the Water Quality Improvement Act of 1970 (later amended by the Clean Water Act in 1972). The 1970 law set specific liability limitations: for example, vessels transporting oil were liable only up to $250,000 or $150 per gross ton. These limitations rarely covered the cost of removal and cleanup, let alone damages.
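As an illustration of how such tonnage-based caps work (a sketch only: the statute's choice between the flat cap and the per-ton figure is not specified in the text, so both are reported, and the 20,000-gross-ton tanker is hypothetical):

```python
def liability_caps_1970(gross_tons):
    """Candidate liability caps under the 1970 law described above."""
    flat_cap = 250_000              # fixed dollar cap
    per_ton_cap = 150 * gross_tons  # $150 per gross ton
    return flat_cap, per_ton_cap

flat, per_ton = liability_caps_1970(20_000)  # hypothetical tanker size
print(f"Flat cap: ${flat:,}; per-ton cap: ${per_ton:,}")
# Flat cap: $250,000; per-ton cap: $3,000,000
```

Either figure falls far below the $8 million cleanup bill cited for the Torrey Canyon, which is why such limits rarely covered removal costs.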
In the decades that followed, several other laws dealing with oil spill liability and compensation were passed. These statutes include: the Ports and Waterways Safety Act of 1972, the Trans-Alaska Pipeline Authorization Act of 1973, the Deepwater Port Act of 1974, the Outer Continental Shelf Lands Act of 1978, and the Alaska Oil Spill Commission of 1990. However, this fragmented collection of federal and state laws provided only limited safeguards against the hazards of oil spills. In 1976, a bill to create a cohesive measure for oil pollution was introduced to Congress. Neither the House of Representatives nor the Senate could agree on a single statute, and the bill fell out of consideration numerous times.
On March 24, 1989, the Exxon Valdez ran aground in Prince William Sound and spilled nearly 11 million gallons of crude oil—at the time, the largest oil spill in United States waters. Soon afterward, in June 1989, three smaller spills occurred within coastal waters of the United States. This was timely evidence that oil spills were not uncommon.
Alaska Governor Steve Cowper authorized the creation of the Alaska Oil Spill Commission in 1989 to examine the causes of the Exxon Valdez oil spill and issue recommendations on potential policy changes. Cowper appointed Walter B. Parker, a longtime transportation consultant and public official, as the chairman of the commission. Under Parker, the Commission issued 52 recommendations for improvements to industry, state, and federal regulations. Fifty of these recommendations were worked into the Oil Pollution Act bill that was introduced into legislation on March 16, 1989, by Walter B. Jones, Sr., a Democratic Party congressman from North Carolina's 1st congressional district.
Enactment timeline
March 16, 1989: H.R. 1465, the Oil Pollution Act of 1990, was introduced in the House of Representatives.
June 21, 1989: The Committee on Merchant Marine and Fisheries reported the bill as amended.
November 9, 1989: H.R. 1465 was passed by a vote in the House of Representatives.
November 19, 1989: the bill was passed by the Senate, with revisions. The bill was sent back to the House of Representatives for approval of the changes added by the Senate. However, the House of Representatives did not agree to the revisions.
August 2, 1990: a conference committee was created, including members of both the House of Representatives and Senate, in order to resolve differences and propose a final bill for approval. Initially, the Senate agreed to the committee's final proposed report.
August 4, 1990: both chambers of Congress had passed the bill in identical form. The final step in the legislative process was for the bill to go to the President to either approve and sign or veto it.
August 18, 1990: the bill was signed by the President and the Oil Pollution Act was officially enacted.
Enforcement
A responsible party under the Oil Pollution Act is one who is found accountable for the discharge or substantial threat of discharge of oil from a vessel or facility into navigable waters, exclusive economic zones, or the shorelines of such covered waters. Responsible parties are strictly, jointly, and severally liable for the cost of removing the oil in addition to any damages linked to the discharge. Unlike the liability for removal costs which are uncapped, liability for damages is limited as discussed in further detail below. Furthermore, the Oil Pollution Act allows for additional liability enacted by other relevant state laws.
Under the Oil Pollution Act, federal, tribal, state, and any other person can recover removal costs from a responsible party so long as such entity has incurred costs from carrying out oil removal activities in accordance with the Clean Water Act National Contingency Plan. Reimbursement claims must first be made to the responsible party. If the potentially responsible party refutes liability or fails to distribute the reimbursement within 90 days of the claim, the claimant may file suit in court or bring the claim to the Oil Spill Liability Trust Fund described below. In some instances, claims for removal cost reimbursement can be initially brought to the Oil Spill Liability Trust Fund thus sidestepping the responsible party. For example, claimants advised by the U.S. Environmental Protection Agency (EPA), governors of affected states and American claimants for incidents involving foreign vessels or facilities may initially present their claims to the Oil Spill Liability Trust Fund. When claims for removal cost reimbursement are brought to the fund, the claimant must prove that removal costs were sustained from activities required to avoid or alleviate effects of the incident and that such actions were approved or directed by the federal on-scene coordinator.
In a manner similar to that described above, costs for damages can be recovered from a responsible party. However, the Oil Pollution Act only covers certain categories of damages. These categories include: natural resource damages, damages to real or personal property, loss of subsistence use, loss of government revenues, loss of profits or impaired earning capacity, damaged public services, and damage assessment costs. Additionally, some categories are recoverable for any person impacted by the incident while others are only recoverable by federal, tribal, and state governments. Furthermore, the act proscribes limits to liability for damages based on the responsible party, the particular incident, and the type of vessel or facility from which the discharge occurred.
The Oil Spill Liability Trust Fund is a trust fund managed by the federal government and financed by a per-barrel tax on crude oil produced domestically in the United States and on petroleum products imported to the United States for consumption. The fund was created in 1986, but use of the fund was not authorized until the Oil Pollution Act's passage in 1990. The funds may be called upon to cover the cost of federal, tribal, state, and claimant oil spill removal actions and damage assessments as well as unpaid liability and damages claims. No more than one billion dollars may be withdrawn from the fund per spill incident. Over two decades of court cases have demonstrated that obtaining funding from the Oil Spill Liability Trust Fund can be a difficult task.
Concerns and reactions
President Bush acknowledged the changes the world would have to endure when signing the Oil Pollution Act and as a result, he pushed the Senate to quickly ratify the new international protocols. The reactions from industries were negative. Industry objected that the act would hinder the free flow in the trade of imported oil in the Waters of the United States. Not only does the OPA impose restrictions on trading imported oil overseas, but it also implements the state oil liability and compensation statutes, which they view as further restricting free trade. After OPA was enacted, the shipping industry threatened to boycott the ports of the United States to protest this new industry liability in both federal and state laws. In particular, the oil and shipping industries objected to the inconsistency between the OPA and the international, federal and state laws that are impacted. As a result of the OPA enactment, certain insurance companies refused to issue certifications of financial liability under the act to avoid potential responsibility and compensation in the case of a disaster.
President Bush also predicted that the enactment of the OPA could lead to larger oil shipping companies being replaced by smaller shipping companies seeking to avoid liability. In particular, smaller companies with limited resources would lack the finances to remediate oil spill disasters. Not just the oil industry, but also vessel owners and operators would be held liable for an oil spill, facing a significant increase in financial responsibility. The OPA's liability increase for vessel owners raised fears and concerns from the vast majority of the shipping industry. Vessel owners objected that additional oil spill penalties imposed by the states fall outside the liability limitations of both OPA and the Limitation of Liability Act of 1851. Ultimately, the threat of unlimited liability under the OPA and other state statutes has led countless oil shipping companies to reduce oil trade to and from the ports of the United States.
However, there were positive reactions from the oil industries despite the newly enforced codes and regulations. In 1990, the oil industry united to form the Marine Spill Response Corporation (MSRC), a non-profit corporation whose expenses would be compensated by the oil producers and transporters. The major MSRC responsibility was to develop new response plans for oil spills cleanups and for the OPA-required remediation. Shipping companies like the Exxon Shipping reacted positively to OPA's efforts to reduce their risk of liability for oil spill disasters. To help ensure OPA compliance, Exxon Shipping compiled all state and federal regulations to which they must abide. Several independent and non-U.S. companies and operators, however, may avoid operations in the United States ports due to the OPA liability. Though the majority of elicited reactions and criticism from the enactment of OPA has been negative, it has nevertheless led to founding and designing safer requirements for ships and global oil trade.
Long-term effects of OPA
The Oil Pollution Act has long-term impacts due to the potential for unlimited liability and the statute's provisions holding insurers as guarantors, which ultimately resulted in the refusal of insurance companies to issue certificates of financial responsibility to vessel operators and owners. The inability to acquire proof of financial responsibility means vessels cannot legally enter the waters of the United States. Since OPA does not exempt vessel creditors from liability, there is a disincentive for any lender to finance fleet modernization or replacement. Lastly, OPA directly impacts the domestic oil production industry through its rigorous offshore facility provisions.
Financial responsibility: The U.S. Coast Guard is responsible for the implementation of the vessel provisions mandated by the Oil Pollution Act. Pursuant to the OPA, vessel owners need evidence of financial liability that covers complete responsibility for a disaster if their vessel weighs more than 300 gross tons. Vessel owners are required by OPA to apply to the Coast Guard for a "Certificate of Financial Responsibility" that serves as proof of their ability to be financially responsible for the cleanup and damages of an oil spill. An uncertified vessel entering the waters of the United States is subject to forfeiture to the United States. This is not a new protocol, because vessel owners were already mandated to acquire certificates under the Clean Water Act and the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA). Since 2011, over 23,000 vessels have obtained the Coast Guard certificates allowing access to waters of the U.S.
Disincentives for fleet replacement and modernization: Since the Oil Pollution Act holds vessel owners fully liable, it has created a disincentive for oil companies to transport crude oil in their own vessels and for charterers to transport their oil on the most suitable vessels. Many financially successful oil companies select the highest quality of ships to transport their products; however, other companies continue to transport their product on lower quality, older vessels due to the cheaper costs. The majority of charterers refuse to pay more for higher grade vessels despite the liability and compensation regulations enforced by OPA. The new and safer double hull tanker vessels are approximately 15-20% more costly to operate. In 1992, approximately 60% of global vessels were at least fifteen years old. The major oil companies are still delaying the fleet replacement requirement of retiring single hull vessels mandated by OPA. For example, Exxon and Texaco have delayed the replacement of their single hull vessels with new double-hulled ships, while companies like Chevron and Mobil have ordered two new double hull tankers, leading other independent shipping companies to invest in new double hull tankers as well. Despite the change from single to double hull vessels, it is still insufficient to accommodate the needs of the oil industry. It is expected that over the next decade there will be a serious lack of suitable tonnage to meet the expected demand for newer vessels. It is estimated that the global oil industry must invest approximately 200-350 billion dollars to meet global demands for new and environmentally sound vessels.
Domestic production: Under the Oil Pollution Act, the Coast Guard is in charge of screening the application process for vessels, while the Bureau of Ocean Energy Management (BOEM) in the Department of the Interior implements and enforces all of the act's regulations for offshore oil facilities. Under OPA, responsible parties are mandated to provide evidence declaring financial responsibility of $150 million for potential liability. A party unable to provide such evidence is subject to a penalty of $25,000 per day in violation of OPA and may also face a judicial order terminating all operations.
Before enactment of the OPA, offshore facilities were required to provide evidence declaring financial responsibility of $35 million. Under OPA these offshore facilities had to increase their proof of financial responsibility more than fourfold, and OPA's requirement of financial responsibility expanded to include facilities in state waters as well. Facilities in state waters that are subject to the $150 million requirement include pipelines, marina fuel docks, tanks, and oil production facilities that are located in, on, or under state coastal waters, and are adjacent to inland channels, lakes, and wetlands. The most evident impact of the enactment of OPA is on the oil producers within the Gulf of Mexico. Many offshore facilities are located in the Gulf of Mexico and in the marshes and wetlands of Louisiana. Major producers are most likely able to meet OPA's requirement of financial responsibility; however, the major oil companies within the Gulf of Mexico have largely withdrawn their offshore facility operations.
Due to environmental pressures and the restrictive governmental regulations enforced by OPA, substantial proposals of exploration and production in the United States have been withdrawn. As a result of major companies withdrawing their plans to drill, many smaller, independent producers had entered to make a profit. By October 1993, 93% of all oil and natural gas exploration and drilling were from independent producers. Of these new exploration projects, approximately 85% of drilling operations were in the Gulf of Mexico. The independent oil producers generated nearly 40% of the crude oil in the United States and 60% of domestic natural gas.
International treaties
In the case of oil pollution caused by other nations (especially ships), international treaties such as the International Convention on Civil Liability for Oil Pollution Damage and the International Convention on Civil Liability for Bunker Oil Pollution Damage, which have a similar intention to the Act, have not been signed by the United States, as it was deemed that the Oil Pollution Act provided sufficient coverage.
See also
Deepwater Horizon drilling rig explosion (2010)
References
External links
Oil Pollution Act of 1990 (PDF/details) as amended in the GPO Statute Compilations collection
Summary of the Oil Pollution Act - EPA
Roll call of voting members of the House
Cosponsors
101st United States Congress
Exxon Valdez oil spill
1990 in Alaska
1990 in the environment
1990 in American law
Ocean pollution
United States federal environmental legislation
Oil and gas law
Water pollution in the United States | Oil Pollution Act of 1990 | [
"Chemistry",
"Environmental_science"
] | 3,394 | [
"Ocean pollution",
"Water pollution"
] |
615,209 | https://en.wikipedia.org/wiki/Speaking%20clock | A speaking clock or talking clock is a live or recorded human voice service, usually accessed by telephone, that gives the correct time. The first telephone speaking clock service was introduced in France, in association with the Paris Observatory, on 14 February 1933.
The format of the service is similar to that of radio time signal services. At set intervals (e.g. ten seconds) a voice announces (for example) "At the third stroke, the time will be twelve forty-six and ten seconds……", with three beeps following. Some countries have sponsored time announcements and include the sponsor's name in the message.
List by country
Australia
In Australia, the number 1194 was the speaking clock in all areas. The service was started in 1953 by the Postmaster-General's Department; originally, to access the talking clock on a rotary dial phone, callers would dial "B074". During the transition from rotary dial to a DTMF-based phone system, the talking clock number changed from "B074" to 1194. The service always gave the current time at the caller's location, in part due to Telstra's special call routing systems. Landline, payphone and mobile customers who called the 1194 time service would receive the time. A male voice, often known by Australians as "George", would say "At the third stroke, it will be (hours) (minutes) and (seconds) seconds/precisely. (three beeps)", e.g. "At the third stroke, it will be three thirty three and forty seconds". The time was announced in 10-second increments and the beep was 1 kHz. Originally there was only one stroke, e.g. "At the stroke, it will be……".
Prior to automatic systems, the subscriber rang an operator who would quote the time from a central clock in the exchange with a phrase such as "The time by the exchange clock is ……". This was not precise, and the operator could not always answer when the subscriber wanted. In 1954, British-made systems were installed in Melbourne (1st floor, City West Exchange) and Sydney. The mechanical speaking clock used rotating glass discs on which different parts of the time announcement were recorded. A synchronous motor drove the disc, with the driving source derived from a 5 MHz quartz oscillator via a multi-stage valve divider; this was amplified to give sufficient impetus to drive the motor. Because of the low torque available, a hand wheel was used to spin the motor on start-up. The voice was provided by Gordon Gow. The units were designed for continuous operation, and both units in Melbourne and Sydney were run in tandem (primary and backup). For daylight saving time changes, one would be on line while the second was advanced or delayed by one hour; at 02:00:00 Australian Eastern Standard Time, the service would be switched over to the standby unit.
In addition to the speaking clocks, there was ancillary equipment to provide timing signals, 1 pulse per second, 8 pulses per minute and 8 pulses per hour. The Time and Frequency Standards Section in the PMG Research Laboratories at 59 Little Collins Street, Melbourne maintained the frequency checks to ensure that the system was "on time". From a maintenance point of view, the most important part of the mechanical clocks was to ensure that they were well oiled to minimise wear on the cams and to replace blown bulbs in the optical pickups from the glass disk recordings. When Time & Frequency Standards moved from 59 Collins Street to Clayton Research Labs (3rd Flr. Building M5), the control signals were duplicated and a second bank of Caesium Beam Primary standards installed so the cutover was transparent with no loss of service.
This mechanical system was replaced with a digital system in 1990. Each speaking clock ensemble consisted of two announcing units (Zag 500), a supervisory unit (CCU 500), two phase-locked oscillators, two pulse distribution units, a Civil Time Receiver (plus a spare), and two or four Computime 1200 baud modems. The voice was provided by Richard Peach, a former ABC broadcaster. The various components were sent for commercial production after a working prototype was built in the Telstra Research Laboratory (TRL). Assmann Australia used a German announcing unit and built a supervisory unit to TRL specifications. Design 2000 incorporated TRL oscillators in the phase locked oscillator units designed at TRL and controlled by two tone from the Telstra Caesium beam frequency standards. Ged Company built civil time receivers. The civil time code generators and two tone generators were designed and built within TRL. The changeover occurred at 12 noon, September 12, 1990.
Each state capital had a digital speaking clock for the local time of day, with one access number for all Australia, 1194. In 2002 the Telstra 1194 service was migrated to Informatel (which uses its own digital technology, in conjunction with the National Measurement Institute, but kept the original voice of Richard Peach), whilst the other time services (e.g. hourly pips to radio stations) were retained as a service by Telstra. In May 2006 the remaining Telstra services were withdrawn and the digital hardware was decommissioned. Telstra ended the 1194 service at midnight on October 1, 2019, and Australians no longer have access to this service. A web-based simulation of the 1194 service was created by musician Ryan Monro on the day of the original service's shutdown.
Austria
In Austria, the speaking clock ("Zeitansage", which literally means "time announcement") has been reachable on 0810 00 1503 since 2009. A recorded female voice says: "Es wird mit dem Summerton 15 Uhr, 53 Minuten und 10 Sekunden", meaning "At the buzzing tone, the time will be 15 hours, 53 minutes and 10 seconds", followed by a short pause and a 1 kHz, 0.25-second beep (even though the announcement "buzzing tone" suggests otherwise). The time is announced at 10-second intervals using the voice of radio host Angelika Lang.
Before 2009, the speaking clock was available at local call rates by dialing 1503. Until then, the voice was generated by an Assmann ZAG500 time announcement device. The announcements were voiced by former switchboard operator Renate Fuczik.
Telephone time signals first became available in Vienna in 1929, with an automatic voice announcement being added in 1941.
Belgium
In Belgium, the speaking clock used to be on the numbers 1200 (Dutch language), 1300 (French language), and 1400 (German language). Starting in September 2012, the service is only contactable on the numbers +32 78 05 12 00 (Dutch Language), +32 78 05 13 00 (French language) and +32 78 05 14 00 (German language). At the time of the number change, the service received 5,000 calls per day. The signal for the speaking clock came directly from the time service of the Royal Observatory of Belgium. First it came from a Zeiss clock, later from an atomic clock.
Canada
The NRC provides a Telephone Talking Clock service; voice announcements of Eastern Time are made every 10 seconds, followed by a tone indicating the exact time. This service is available to the general public by dialing +1 613 745-1576 for English service and +1 613 745-9426 for French service. Long-distance charges apply for those calling from outside the Ottawa/Gatineau area. The voices of the time announcements are Harry Mannis in English and Simon Durivage in French.
China
Dialling 117 in any city connects to a speaking clock that tells the current time in China; the number is currently 12117. Although China spans five geographical time zones, the whole country keeps a single official time, so only one zone-related service is required and the same time is announced regardless of where the call is made. Calls are charged as ordinary local calls, generally around 0.25 RMB/minute.
Finland
In Finland the speaking clock service is known as Neiti Aika in Finnish or Fröken Tid in Swedish, both of which mean "Miss Time". The first Neiti Aika service was started in 1936 and was the first automated phone service in Finland. The service is provided by regional phone companies by dialling 10061 from any part of the country. The voice of the speaking clock is male or female depending on the phone company service. Nowadays the use of the Neiti Aika service has decreased significantly, and the press officer of Auria, the regional phone company of Turku, stated in an article of the Turun Sanomat newspaper that when the company started the service in 1938 it was used 352,310 times in its starting year, compared to 1,300 times in September 2006.
France
In France, the speaking clock (horloge parlante) was launched on 14 February 1933 and was the first service of its kind worldwide. It is available by dialing 3699 from within France, and was formerly available from overseas by dialing +33 8.36.99. - - . - - (where the - - could be any number). However, since September 2011, calls placed from outside France only work from some countries and networks. In May 2022, French telecom company Orange announced that the service will be discontinued on 1 July 2022, due to the "steady and significant decrease" of calls.
Ireland
In Ireland, the speaking clock was first offered by P&T in 1970, and was accessed by dialling 1191. It announced the time in 24-hour format, in English only, at ten-second intervals punctuated by a high-pitched signal, as follows: "At the signal it will be HH:MM and …… seconds", followed by the signal. P&T operator Frances Donegan was the original voice. Antoinette Rocks, also a P&T/Telecom Éireann operator, provided the voice of the speaking clock when it was updated to digital technology in 1980. Her voice was selected as part of a competition on a radio phone-in show, RTÉ Radio 1's Morning Call with Mike Murphy, in which listeners voted for one of 8 voices. At its peak, the service received almost three million calls a year (about 8,000 a day). The Irish speaking clock service was permanently shut down by eir (P&T's successor) on 27 August 2018 due to lack of use and reliance on ageing equipment.
Italy
In Italy, the number of the speaking clock ("il numero dell'ora esatta", "the exact time number") was originally 16, the time was given by a recorded female voice. In the mid-seventies, 16 was replaced by 161. Presently, the number to be dialled is 4261.
Netherlands
On 1 October 1930, a system was installed in the Haarlem telephone exchange (automated in 1925) which indicated the time using a series of tones, accessed by the number 15290.
In 1934, electronic engineer and inventor F.H. Leeuwrik built a speaking clock for the municipal telephone service of The Hague using optically recorded speech, looping on a large drum. The female voice was provided by the then 24-year-old school teacher Cor Hoogendam, hence the machine was nicknamed Tante Cor (Aunt Cor).
In 1969, this system was replaced by a magnetic disk machine resembling a record player with three pick-up arms, telling the time at 10 second intervals followed by a beep. The text was spoken by actress Willie Brill. The service was now called over 130 million times a year.
In April 1992, the machinery was replaced by a digital device with no moving parts. The voice was provided by actress Joke Driessen, and the clock's accuracy is maintained by linking it to the German longwave radio transmitter DCF77. To comply with international guidelines reserving double-zero for use as an international prefix, the 002 number was changed on 3 December 1990 to 06–8002, and later to 0900–8002. The service still receives approximately four million calls a year.
New Zealand
The speaking clock in New Zealand is run by the Measurement Standards Laboratory of New Zealand. The service is accessed by dialling 0800 MSLTIME (0800 675846). MSL has been running the service since 1989.
Poland
The speaking clock in Poland is known as Zegarynka which means the clock girl. The service became first available in 1936, using a device invented and patented in Poland. It was speaking with the recorded voice of actress Lidia Wysocka. The first cities to be equipped with this device were Katowice, Warsaw (dialing number 05), Gdynia, Toruń and Kraków (July 1936).
Russia
In 1935, the Soviet Central Scientific Research Institute of Communications received a government order to design a "speaking clock" for the Moscow City Telephone Network. The "speaking clock" was constructed using cinematic techniques and consisted of discs with pulse-density-modulated optical marks on photographic tape, a photocell with actuator, and an audio tube amplifier. On May 14, 1937, the "speaking clock" was connected to the Moscow City Telephone Network for test operation. It spoke with the recorded voice of Soviet actor and broadcaster Emmanuil Tobiash. In 1937, the first cities to be equipped with these devices were Moscow and Leningrad.
In 1969, the first Soviet "Speaking Clock" was replaced in the Moscow City Telephone Network by a magnetic tape machine; the old unit was transferred to the Polytechnic Museum.
To hear the current time in Russia, either 100 or 060 can be dialed, depending on the city where this service is available. These calls are free if made from non-mobile phones. In Moscow, the Speaking Clock number is 100 if dialed from within the city, or +7-495-100-. . . . from other countries (where . . . . can be any number). At one time in Moscow, there were advertisements before and after the announcement of the current time; this practice has since ceased.
Spain
The speaking clock in Spain is run by the Spanish Navy from the Royal Observatory in San Fernando, and is accessed by dialling the number 956599429 free of charge.
Sweden
The speaking clock in Sweden is run by Telia and can be accessed by calling 90 510 from landline phones or 08-90 510 from mobile phones. The service is called Fröken Ur, which means Miss Clock. It has been in use since 1934, and various voices have stated the time. Since 2000 the voice which states the time belongs to Johanna Hermann Lundberg. In 1977 the speaking clock in Sweden received 64,000,000 calls, which remains the record for a single year. In 2020 the number of calls was about 2,000 per day, a total of slightly under 1,000,000 calls annually.
South Africa
The speaking clock in South Africa is run by Telkom, the country's national telecommunications provider, and can be contacted by dialling 1026 either from a fixed line or a cellular phone. The time is announced every 10 seconds and alternates between English and Afrikaans languages. An example of an English announcement of the time would be: "When you hear the signal, it will be four hours, fifteen minutes and ten seconds", followed by a short audible tone to signal the exact time previously announced. The voice of the announcements is that of broadcaster and voiceover artist Helen Naudé. Recorded in 1989, the same speaking clock announcements with Naudé's voice are still in use to the present day. Naudé also provided her voice talent to other Telkom services, such as 1023 directory enquiries, as well as the pre-recorded message "The subscriber you have dialled does not exist", which can be heard when dialling an invalid phone number.
Ukraine
The speaking clock in Ukraine is run in Odesa and is available by dialling +380-48-737 6060.
United Kingdom
Usage
In the United Kingdom, the speaking clock can be heard by dialling 123 on a BT phone line; the number may vary on other networks. Every ten seconds, a voice announces: "At the third stroke, the time from BT will be (hour), (minutes) and (seconds) seconds", followed by three beeps.
The service was started in 1936 by the General Post Office (which handled telephones at that time) and was continued by BT after its formation in 1980 and privatisation in 1984. Between 1986 and 2008, the message included the phrase "sponsored by Accurist"; Accurist withdrew their sponsorship in 2008. The "from BT" part was added, then removed at some point, then reinstated.
For times that are an exact minute, "precisely" is substituted for the seconds portion of the announcement. Similarly, announcements for times between the hour and one minute past the hour substitute "o'clock" for the (zero) minutes. Other operators run their own speaking clocks, with broadly similar formats, or redirect to BT's service. Virgin Media have their own service available by dialling 123 from a Virgin Media line. Sky also have their own service accessible by dialling 123 from a Sky telephone line. Dialling 123 from a few mobile services, such as O2, also obtains a speaking clock service. The Giffgaff network uses the same service as O2. The service is not available on the 3 mobile telephone network, as they use 123 as the number for their voicemail services. It was also unavailable on the Orange network for the same reason.
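A minimal sketch of how such an announcement might be generated, following the ten-second interval and the "precisely"/"o'clock" substitutions just described (the rounding behaviour and the digit-based wording are simplifying assumptions; the real service speaks the numbers as words):

```python
import datetime

def announcement(now: datetime.datetime) -> str:
    """Build a speaking-clock message for the next ten-second boundary,
    the instant that the third stroke will mark."""
    seconds_past = now.second % 10 + now.microsecond / 1e6
    target = now + datetime.timedelta(seconds=10 - seconds_past)
    target = target.replace(microsecond=0)
    if target.second == 0 and target.minute == 0:
        tail = "o'clock precisely"
    elif target.second == 0:
        tail = f"{target.minute:02d} precisely"
    else:
        tail = f"{target.minute:02d} and {target.second} seconds"
    hour = target.hour % 12 or 12
    return f"At the third stroke, the time will be {hour} {tail}"

print(announcement(datetime.datetime(2024, 1, 1, 12, 46, 7)))
# -> At the third stroke, the time will be 12 46 and 10 seconds
```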
On the occasion of a leap second, such as at 23:59:60 on December 31, 2005, there is an extra second pause between the second and third beeps, to keep the speaking clock synchronised with Coordinated Universal Time: "At the third stroke, the time from BT will be, twelve o'clock precisely. Beep, Beep, <pause> Beep." The current UK time source is the National Physical Laboratory, UK.
In 2011, the BBC reported: "The service still receives 30 million calls each year."
History
A speaking clock service was first introduced in the United Kingdom on July 24, 1936. The mechanism used was an array of motors, glass discs, photocells and valves which took up the floorspace of a small room. The voice was that of London telephonist Ethel Jane Cain, who had won a prize of 10 guineas in a competition to find the "Golden Voice". Cain's voice was recorded optically onto the glass disks in a similar way to a film soundtrack. The service was obtained by dialling the letters TIM (846) on a dial telephone, and hence the service was often colloquially referred to as "Tim". However this code was only used in the Director telephone system of the cities of London, Birmingham, Edinburgh, Glasgow, Liverpool and Manchester. Other areas initially dialled 952, but with the introduction of subscriber trunk dialling it was changed to 80 and later 8081 as more 'recorded services' were introduced. It was standardised to 123 by the early 1990s.
The time announcements were made by playing short, recorded phrases or words in the correct sequence. In an interview with Manchester Radio in 1957 Miss Cain said:
In 1963, the original device was replaced by more modern recording technology using a magnetic drum, similar to the Audichron technology used in the United States. The company that manufactured the rotating magnetic drum part of the Speaking Clock was Roberts & Armstrong (Engineers) Ltd of North Wembley. They took on the licence from the British Post Office to manufacture complete clocks for the telecommunications authorities of Denmark, Sweden and the Republic of Ireland, and a third (spare) clock for the British Post Office. The latter was installed in Bow Street, London. The European clocks were modified for the 24-hour system by lengthening the drum and adding extra heads. Roberts & Armstrong subcontracted the electronic aspects to the Synchronome Company of Westbury. The clocks were designed to run non-stop for 20 years. This system gave way to the present digital system in 1984, which uses a built-in crystal oscillator and microprocessor logic control. The complete apparatus comprises solid-state microchips, occupies no more shelf space than a small suitcase and has no moving parts at all. The BT service is assured to be accurate to five-thousandths of a second.
In 1986, BT allowed Accurist to sponsor its franchise, the first time a sponsor had been used for the service. In the latter years of this sponsorship, it cost 30 pence to call the speaking clock. Accurist announced its withdrawal from the deal and the launch of an online "British Real Time" website on 24 August 2008.
During the Cold War, the British Telecom speaking clock network was designed to be used in case of nuclear attack to broadcast messages from Strike Command at RAF High Wycombe to HANDEL units at regional police stations. From there, automatic warning sirens could be started and alerts sent to Royal Observer Corps monitoring posts and other civil defence volunteers equipped with manual warning devices. The rationale for using an existing rather than a dedicated system was that it was effectively under test at all times, rather than being activated (and possibly found to be faulty) only in the event of war. The signals to automatic sirens were sent down the wires of individual (unaware) subscribers for the same reason—a customer would report any fault as soon as it occurred, whereas a problem with a dedicated line would not be noticed until it was needed.
A version of the speaking clock was also used on recordings of proceedings at the Houses of Parliament made by the BBC Parliament Unit, partly as a time reference and partly to prevent editing. On a stereo recording, one track was used for the sound and the other for an endless recording of the speaking clock—without the pips, as these were found to cause interference.
BT "Speaking Clock" voices
There have been five permanent voices for the speaking clock. Temporary voices have been used on special occasions, usually with BT donating the call fees collected to charity.
Permanent voices
Ethel Jane Cain, first permanent voice: from July 24, 1936, to 1963.
Pat Simmons, second permanent voice: from 1963 to April 2, 1985.
Brian Cobby, third permanent voice: from April 2, 1985, to April 2, 2007.
Sara Mendes da Costa, fourth permanent voice: from April 2, 2007, to November 9, 2016.
Alan Steadman, fifth permanent voice: from November 9, 2016.
Temporary voices
Lenny Henry, comedian, temporary voice for Comic Relief: from March 10 to March 23, 2003.
Alicia Roland, 12-year-old schoolgirl, temporary voice for the children's charity ChildLine, from October 13 to October 20, 2003, having won a BBC TV Newsround competition and stating, before announcing the time, "It's time to listen to young people".
Mae Whitman, temporary voice as part of a deal to promote the Disney production of Tinker Bell, for three months from 26 October 2008 until 2 February 2009.
UK celebrities Kimberley Walsh, Cheryl Fernandez-Versini, Gary Barlow, Chris Moyles, and Fearne Cotton for Comic Relief charity: from 3 February to 23 March 2009.
UK celebrities David Walliams, Gary Barlow, Chris Moyles, Kimberley Walsh, Fearne Cotton and a mystery voice for Sport Relief charity from 7 March to 9 April 2012.
Clare Balding temporary voice for Comic Relief from 12 February to 15 March 2013 (with the help of a barking dog, time announced as "at the third woof".)
Davina McCall temporary voice for Sport Relief from 27 January to 23 March 2014.
Ian McKellen temporary voice for Comic Relief from 24 February to 13 March 2015.
Jo Brand temporary voice for Sport Relief from 22 January to 30 March 2016.
United States
The first automated time service in the United States began in Atlanta, Georgia in 1934 as a promotion for Tick Tock Ginger Ale. Company owner John Franklin modified Western Electric technology to create the machine that would become known as the Audichron. The Audichron Company became the chief supplier of talking clocks in the US, maintained by local businesses and, later, the regional Bell System companies.
The service became typically known as the "Time of Day" service, with the term "speaking clock" never being used. Occasionally it would be called "Time and Temperature" or simply "Time". However, the service has been phased out in most states (Nevada and Connecticut still maintain service). AT&T discontinued its California service in September 2007, citing the widespread availability of sources such as mobile phones and computers. Calling 202-762-1401 from anywhere in the US gives the correct time in Eastern Time and UTC.
For all area codes in Northern California, and on the West Coast generally, the reserved exchange was 767 which was often indicated by its phoneword, POPCORN; the service was discontinued in 2007. In other locations, different telephone exchanges are or were used for the speaking clock service.
Many shortwave radio time signal services provide speaking clock services, such as WWV (voiced by John Doyle) and WWVH (voiced by Jane Barbe), operated by the National Institute of Standards and Technology from the United States of America. To avoid disruption with devices that rely on the accurate timings and placement of the service tones from the radio, the voice recording is "notched" clear of some of the tones.
The website Telephone World has recordings of past and present "Time of Day" services that also include temperature and weather announcements.
See also
:Category:Telephone voiceover talent
Greenwich Time Signal
References
External links
Photograph of the Speaking clock announcer module (ZBA4264) built in 1955
Website about the history of speaking clocks
http://www.audiovis.nac.gov.pl/obraz/88783/ - Polish speaking clock device from 1936
"The Post Office Speaking Clock in Great Britain" - Nature, Volume 139, pp 892–893, published: 22 May 1937 (downloadable pdf)
Telephone numbers
Clocks
Information by telephone
French inventions | Speaking clock | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 5,377 | [
"Machines",
"Telephone numbers",
"Mathematical objects",
"Clocks",
"Measuring instruments",
"Physical systems",
"Numbers"
] |
615,222 | https://en.wikipedia.org/wiki/Multivariable%20calculus | Multivariable calculus (also known as multivariate calculus) is the extension of calculus in one variable to calculus with functions of several variables: the differentiation and integration of functions involving multiple variables (multivariate), rather than just one.
Multivariable calculus may be thought of as an elementary part of calculus on Euclidean space. The special case of calculus in three dimensional space is often called vector calculus.
Introduction
In single-variable calculus, operations like differentiation and integration are applied to functions of a single variable. In multivariate calculus, these operations must be generalized to functions of multiple variables, and the domain is therefore multi-dimensional. Care is required in these generalizations, because of two key differences between 1D and higher-dimensional spaces:
There are infinitely many ways to approach a single point in higher dimensions, as opposed to two (from the positive and the negative direction) in 1D;
There are multiple types of extended objects associated with each dimension; for example, the graph of a function of one variable is a curve in the 2D Cartesian plane, while the graph of a function of two variables is a surface in 3D, and curves can also live in 3D space.
The consequence of the first difference is the difference in the definition of the limit and differentiation. Directional limits and derivatives define the limit and differential along a 1D parametrized curve, reducing the problem to the 1D case. Further higher-dimensional objects can be constructed from these operators.
The consequence of the second difference is the existence of multiple types of integration, including line integrals, surface integrals and volume integrals. Due to the non-uniqueness of these integrals, an antiderivative or indefinite integral cannot be properly defined.
Limits
A study of limits and continuity in multivariable calculus yields many counterintuitive results not demonstrated by single-variable functions.
A limit along a path may be defined by considering a parametrised path in n-dimensional Euclidean space. Any function can then be projected on the path as a 1D function . The limit of to the point along the path can hence be defined as
Note that the value of this limit can be dependent on the form of , i.e. the path chosen, not just the point which the limit approaches. For example, consider the function
If the point is approached through the line , or in parametric form:
Then the limit along the path will be:
On the other hand, if the path (or parametrically, ) is chosen, then the limit becomes:
Since taking different paths towards the same point yields different values, a general limit at the point cannot be defined for the function.
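The specific function in this example did not survive extraction; as an illustrative stand-in (an assumption, not necessarily the article's original), the classic function f(x, y) = x²y/(x⁴ + y²) shows the same path dependence, with limit 0 along every straight line through the origin but 1/2 along the parabola y = x². A minimal Python sketch:

```python
# Hypothetical stand-in for the lost example: f(x, y) = x^2*y / (x^4 + y^2)
# has limit 0 along every straight line through the origin, but limit 1/2
# along the parabola y = x^2, so no general limit exists at (0, 0).

def f(x, y):
    return x**2 * y / (x**4 + y**2)

def values_along(path, ts):
    """Evaluate f along a parametrised path t -> (x(t), y(t))."""
    return [f(*path(t)) for t in ts]

ts = [10**-k for k in range(1, 6)]     # t shrinking towards 0
line = lambda t: (t, 2 * t)            # straight line y = 2x
parabola = lambda t: (t, t**2)         # parabola y = x^2

print(values_along(line, ts))          # values tending to 0
print(values_along(parabola, ts))      # 0.5 at every t
```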
A general limit can be defined if the limits to a point along all possible paths converge to the same value, i.e. we say for a function that the limit of to some point is L, if and only if
for all continuous functions such that .
Continuity
From the concept of limit along a path, we can then derive the definition for multivariate continuity in the same manner, that is: we say for a function that is continuous at the point , if and only if
for all continuous functions such that .
As with limits, being continuous along one path does not imply multivariate continuity.
That continuity in each argument is not sufficient for multivariate continuity can be seen from the following example: for a real-valued function with two real-valued parameters, , continuity of in for fixed and continuity of in for fixed does not imply continuity of .
Consider
It is easy to verify that this function is zero by definition on the boundary and outside of the quadrangle . Furthermore, the functions defined for constant and and by
and
are continuous. Specifically,
for all and . Therefore, and moreover, along the coordinate axes, and . Therefore the function is continuous along both individual arguments.
However, consider the parametric path . The parametric function becomes
Therefore,
It is hence clear that the function is not multivariate continuous, despite being continuous in both coordinates.
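The construction above lost its formulas in extraction; a minimal sketch of the same phenomenon, using the standard stand-in g(x, y) = xy/(x² + y²) with g(0, 0) = 0 (an assumed example, not the article's own function):

```python
# g is continuous in each argument separately (both axis slices are
# identically zero), yet not multivariate continuous at the origin,
# since along the diagonal path (t, t) it is constantly 1/2.

def g(x, y):
    return 0.0 if x == y == 0 else x * y / (x**2 + y**2)

print([g(x, 0.0) for x in (1.0, 0.1, 0.01)])  # [0.0, 0.0, 0.0]
print([g(0.0, y) for y in (1.0, 0.1, 0.01)])  # [0.0, 0.0, 0.0]
print([g(t, t) for t in (1.0, 0.1, 0.01)])    # [0.5, 0.5, 0.5]
```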
Theorems regarding multivariate limits and continuity
All properties of linearity and superposition from single-variable calculus carry over to multivariate calculus.
Composition: If and are both multivariate continuous functions at the points and respectively, then is also a multivariate continuous function at the point .
Multiplication: If and are both continuous functions at the point , then is continuous at , and is also continuous at provided that .
If is a continuous function at point , then is also continuous at the same point.
If is Lipschitz continuous (with the appropriate normed spaces as needed) in the neighbourhood of the point , then is multivariate continuous at .
From the Lipschitz continuity condition for we have
where is the Lipschitz constant. Note also that, as is continuous at , for every there exists a such that .
Hence, for every , choose ; there exists an such that for all satisfying , , and . Hence converges to regardless of the precise form of .
Differentiation
Directional derivative
The derivative of a single-variable function is defined as
Using the extension of limits discussed above, one can then extend the definition of the derivative to a scalar-valued function along some path :
Unlike limits, for which the value depends on the exact form of the path , it can be shown that the derivative along the path depends only on the tangent vector of the path at , i.e. , provided that is Lipschitz continuous at , and that the limit exists for at least one such path.
For continuous up to the first derivative (this statement is well defined as is a function of one variable), we can write the Taylor expansion of around using Taylor's theorem to construct the remainder:
where .
Substituting this into ,
where .
Lipschitz continuity gives us for some finite , . It follows that .
Note also that given the continuity of , as .
Substituting these two conditions into ,
whose limit depends only on as the dominant term.
It is therefore possible to generate the definition of the directional derivative as follows: The directional derivative of a scalar-valued function along the unit vector at some point is
or, when expressed in terms of ordinary differentiation,
which is a well defined expression because is a scalar function with one variable in .
It is not possible to define a unique scalar derivative without a direction; it is clear for example that . It is also possible for directional derivatives to exist for some directions but not for others.
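A minimal numerical sketch of the directional derivative as the ordinary 1D derivative of t -> f(p + tu) (the function, point, and direction are hypothetical choices):

```python
import math

def f(x, y):
    return x**2 * y + math.sin(y)

def directional_derivative(f, p, u, h=1e-6):
    """Central-difference estimate of the derivative of f at p along u."""
    (x, y), (ux, uy) = p, u
    return (f(x + h * ux, y + h * uy) - f(x - h * ux, y - h * uy)) / (2 * h)

p = (1.0, 0.0)
u = (1 / math.sqrt(2), 1 / math.sqrt(2))   # unit vector at 45 degrees
# Analytically grad f(1, 0) = (2xy, x^2 + cos y) = (0, 2), so the
# directional derivative is (0, 2) . u = sqrt(2), about 1.41421.
print(directional_derivative(f, p, u))
```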
Partial derivative
The partial derivative generalizes the notion of the derivative to higher dimensions. A partial derivative of a multivariable function is a derivative with respect to one variable with all other variables held constant.
A partial derivative may be thought of as the directional derivative of the function along a coordinate axis.
Partial derivatives may be combined in interesting ways to create more complicated expressions of the derivative. In vector calculus, the del operator () is used to define the concepts of gradient, divergence, and curl in terms of partial derivatives. A matrix of partial derivatives, the Jacobian matrix, may be used to represent the derivative of a function between two spaces of arbitrary dimension. The derivative can thus be understood as a linear transformation which directly varies from point to point in the domain of the function.
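A short sketch of how partial derivatives assemble into the gradient of a scalar field and the Jacobian matrix of a vector field, via central differences (the example fields are hypothetical):

```python
def partial(f, x, i, h=1e-6):
    """Central-difference partial derivative of f with respect to x[i]."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)

def gradient(f, x):
    return [partial(f, x, i) for i in range(len(x))]

def jacobian(F, x):
    # Row j holds the partials of component F_j with respect to each x_i.
    m = len(F(x))
    return [[partial(lambda v, j=j: F(v)[j], x, i) for i in range(len(x))]
            for j in range(m)]

f = lambda v: v[0]**2 + 3 * v[0] * v[1]        # scalar field on R^2
F = lambda v: [v[0] * v[1], v[0] + v[1]**2]    # vector field R^2 -> R^2

print(gradient(f, [1.0, 2.0]))    # ~[8.0, 3.0]
print(jacobian(F, [1.0, 2.0]))    # ~[[2.0, 1.0], [1.0, 4.0]]
```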
Differential equations containing partial derivatives are called partial differential equations or PDEs. These equations are generally more difficult to solve than ordinary differential equations, which contain derivatives with respect to only one variable.
Multiple integration
The multiple integral extends the concept of the integral to functions of any number of variables. Double and triple integrals may be used to calculate areas and volumes of regions in the plane and in space. Fubini's theorem guarantees that a multiple integral may be evaluated as a repeated integral or iterated integral as long as the integrand is continuous throughout the domain of integration.
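A minimal illustration of evaluating a double integral as an iterated integral with SciPy (the integrand and rectangle are hypothetical; dblquad integrates the inner variable first):

```python
from scipy.integrate import dblquad

# Integral of f(x, y) = x*y over [0, 1] x [0, 2]; the exact value is
# (integral of x dx from 0 to 1) * (integral of y dy from 0 to 2) = 1.
value, abserr = dblquad(lambda y, x: x * y,        # integrand as f(y, x)
                        0, 1,                      # outer limits for x
                        lambda x: 0, lambda x: 2)  # inner limits for y
print(value)  # ~1.0
```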
The surface integral and the line integral are used to integrate over curved manifolds such as surfaces and curves.
Fundamental theorem of calculus in multiple dimensions
In single-variable calculus, the fundamental theorem of calculus establishes a link between the derivative and the integral. The link between the derivative and the integral in multivariable calculus is embodied by the integral theorems of vector calculus:
Gradient theorem
Stokes' theorem
Divergence theorem
Green's theorem.
In a more advanced study of multivariable calculus, it is seen that these four theorems are specific incarnations of a more general theorem, the generalized Stokes' theorem, which applies to the integration of differential forms over manifolds.
Applications and uses
Techniques of multivariable calculus are used to study many objects of interest in the material world. In particular,
Multivariable calculus can be applied to analyze deterministic systems that have multiple degrees of freedom. Functions with independent variables corresponding to each of the degrees of freedom are often used to model these systems, and multivariable calculus provides tools for characterizing the system dynamics.
Multivariate calculus is used in the optimal control of continuous time dynamic systems. It is used in regression analysis to derive formulas for estimating relationships among various sets of empirical data.
Multivariable calculus is used in many fields of natural and social science and engineering to model and study high-dimensional systems that exhibit deterministic behavior. In economics, for example, consumer choice over a variety of goods, and producer choice over various inputs to use and outputs to produce, are modeled with multivariate calculus.
Non-deterministic, or stochastic systems can be studied using a different kind of mathematics, such as stochastic calculus.
See also
List of multivariable calculus topics
Multivariate statistics
References
External links
UC Berkeley video lectures on Multivariable Calculus, Fall 2009, Professor Edward Frenkel
MIT video lectures on Multivariable Calculus, Fall 2007
Multivariable Calculus: A free online textbook by George Cain and James Herod
Multivariable Calculus Online: A free online textbook by Jeff Knisley
Multivariable Calculus – A Very Quick Review, Prof. Blair Perot, University of Massachusetts Amherst
Multivariable Calculus, Online text by Dr. Jerry Shurman | Multivariable calculus | [
"Mathematics"
] | 2,028 | [
"Multivariable calculus",
"Calculus"
] |
615,385 | https://en.wikipedia.org/wiki/Gate%20valve | A gate valve, also known as a sluice valve, is a valve that opens by lifting a barrier (gate) out of the path of the fluid. Gate valves require very little space along the pipe axis and hardly restrict the flow of fluid when the gate is fully opened. The gate faces can be parallel but are most commonly wedge-shaped (in order to be able to apply pressure on the sealing surface).
Typical use
Gate valves are used to shut off the flow of liquids rather than for flow regulation, which is frequently done with a globe valve. When fully open, the typical gate valve has no obstruction in the flow path, resulting in very low flow resistance. The size of the open flow path generally varies in a nonlinear manner as the gate is moved. This means that the flow rate does not change evenly with stem travel. Depending on the construction, a partially open gate can vibrate from the fluid flow.
Gate valves are mostly used with larger pipe diameters (from 2" to the largest pipelines) since they are less complex to construct than other types of valves in large sizes.
At high pressures, friction can become a problem. As the gate is pushed against its guiding rail by the pressure of the medium, it becomes harder to operate the valve. Large gate valves are sometimes fitted with a bypass controlled by a smaller valve to be able to reduce the pressure before operating the gate valve itself.
Gate valves without an extra sealing ring on the gate or the seat are used in applications where minor leaking of the valve is not an issue, such as heating circuits or sewer pipes.
Valve construction
Common gate valves are actuated by a threaded stem that connects the actuator (e.g. handwheel or motor) to the gate. They are characterised as having either a rising or a nonrising stem, depending on which end of the stem is threaded. Rising stems are fixed to the gate and rise and lower together as the valve is operated, providing a visual indication of valve position. The actuator is attached to a nut that is rotated around the threaded stem to move it. Nonrising stem valves are fixed to, and rotate with, the actuator, and are threaded into the gate. They may have a pointer threaded onto the stem to indicate valve position, since the gate's motion is concealed inside the valve. Nonrising stems are used where vertical space is limited.
Gate valves may have flanged ends drilled according to pipeline-compatible flange dimensional standards.
Gate valves are typically constructed from cast iron, cast carbon steel, ductile iron, gunmetal, stainless steel, alloy steels, and forged steels.
All-metal gate valves are used in ultra-high vacuum chambers to isolate regions of the chamber.
Bonnet
Bonnets provide leakproof closure for the valve body. Gate valves may have a screw-in, union, or bolted bonnet. A screw-in bonnet is the simplest, offering a durable, pressure-tight seal. A union bonnet is suitable for applications requiring frequent inspection and cleaning. It also gives the body added strength. A bolted bonnet is used for larger valves and higher pressure applications.
Pressure seal bonnet
Another type of bonnet construction in a gate valve is pressure seal bonnet. This construction is adopted for valves for high pressure service, typically in excess of 2250 psi (15 MPa). The unique feature of the pressure seal bonnet is that the bonnet ends in a downward-facing cup that fits inside the body of the valve. As the internal pressure in the valve increases, the sides of the cup are forced outward, improving the body-bonnet seal. Other constructions where the seal is provided by external clamping pressure tend to create leaks in the body-bonnet joint.
Knife gate valve
For plastic solids and high-viscosity slurries such as paper pulp, a specialty valve known as a knife gate valve is used to cut through the material to stop the flow. A knife gate valve is usually not wedge shaped and has a tapered knife-like edge on its lower surface.
Images
See also
Ball valve
Blast gate
Butterfly valve
Control valve
Diaphragm valve
Globe valve
Needle valve
Process flow diagram
Piping and instrumentation diagram
References
Plumbing valves
Valves
Articles containing video clips | Gate valve | [
"Physics",
"Chemistry"
] | 857 | [
"Physical systems",
"Valves",
"Hydraulics",
"Piping"
] |
615,402 | https://en.wikipedia.org/wiki/Ball%20valve | A ball valve is a flow control device which uses a hollow, perforated, and pivoting ball to control fluid flowing through it. It is open when the hole through the middle of the ball is in line with the flow inlet, and closed when it is pivoted 90 degrees by the valve handle, blocking the flow. The handle lies flat in alignment with the flow when open, and is perpendicular to it when closed, making for easy visual confirmation of the valve's status. The shut position 1/4 turn could be in either clockwise or counter-clockwise direction.
Ball valves are durable, performing well after many cycles, and reliable, closing securely even after long periods of disuse. These qualities make them an excellent choice for shutoff and control applications, where they are often preferred to gate and globe valves; however, they lack the fine control those alternatives offer in throttling applications.
The ball valve's ease of operation, repair, and versatility lend it to extensive industrial use, supporting pressures up to and temperatures up to , depending on design and materials used. Sizes typically range from 0.2 to 48 in (5 to 1200 mm). Valve bodies are made of metal, plastic, or metal with a ceramic; floating balls are often chrome plated for durability. One disadvantage of a ball valve is that when used for controlling water flow, they trap water in the center cavity while in the closed position. In the event of ambient temperatures falling below freezing point, the sides can crack due to the expansion associated with ice formation. Some means of insulation or heat tape in this situation will usually prevent damage. Another option for cold climates is the "freeze tolerant ball valve". This style of ball valve incorporates a freeze plug in the side so in the event of a freeze-up, the freeze plug ruptures, acting as a 'sacrificial' fail point, allowing an easier repair. Instead of replacing the whole valve, all that is required is the fitting of a new freeze plug.
For cryogenic fluids, or product that may expand inside of the ball, there is a vent drilled into the upstream side of the valve. This is referred to as a vented ball.
A ball valve should not be confused with a "ball-check valve", a type of check valve that uses a solid ball to prevent undesired backflow.
Other types of quarter-turn valves include the butterfly valve and plug valve and freeze proof ball valve.
Types
There are five general body styles of ball valves: single body, three-piece body, split body, top entry, and welded. The difference is based on how the pieces of the valve—especially the casing that contains the ball itself—are manufactured and assembled. The valve operation is the same in each case.
One-piece bodies provide a very rigid construction, and in some versions the ball is removable from the valve without taking the entire valve out of the line; multi-piece bodies, however, offer greater scope for ingenuity of design.
In addition, there are different styles related to the bore of the ball mechanism itself. And depending on the working pressure, the ball valves are categorized as low-pressure ball valves and high-pressure ball valves. In most industries, the ball valves with working pressures higher than are considered high-pressure ball valves. Usually the maximum working pressure for the high-pressure ball valves is and depends on the structure, sizes and sealing materials. The maximum working pressure of high-pressure ball valves can be up to . High-pressure ball valves are often used in hydraulic systems, so they are also known as hydraulic ball valves.
Ball valves in sizes up to generally come in a single piece, two or three-piece designs. One-piece ball valves are almost always reduced bore, are relatively inexpensive, and are generally replaced instead of repaired. Two-piece ball valves generally have a slightly reduced (or standard) bore, and can be either throw-away or repairable. The three-piece design allows for the center part of the valve containing the ball, stem and seats to be easily removed from the pipeline. This facilitates efficient cleaning of deposited sediments, replacement of seats and gland packings, polishing out of small scratches on the ball, all this without removing the pipes from the valve body. The design concept of a three-piece valve is for it to be repairable. Each valve is heated to a certain degree, while the excess material is trimmed from the body.
Full bore
A full bore (sometimes full port) ball valve has an oversized ball so that the hole in the ball is the same size as the pipeline resulting in lower friction loss. Flow is unrestricted but the valve is larger and more expensive so this is only used where free flow is required, for example in pipelines that require pigging.
Reduced bore, or reduced port
In reduced bore (sometimes reduced port) ball valves, flow through the valve is one pipe size smaller than the valve's pipe size resulting in the flow area being smaller than the pipe. As the flow discharge remains constant and is equal to the area of flow (A) times velocity (V), the velocity increases with reduced area of flow.
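As a worked instance of this relation, with assumed, illustrative sizes: the discharge Q = A·V is fixed, so narrowing the flow area raises the velocity in inverse proportion.

```python
import math

# Hypothetical case: a reduced-bore valve narrows a 100 mm line to an
# 80 mm bore; the area shrinks to (80/100)^2 = 0.64 of the pipe's, so
# the velocity rises by a factor of about 1.56 for the same discharge.

def area(d_mm):
    return math.pi * (d_mm / 1000) ** 2 / 4   # cross-section in m^2

v_pipe = 2.0                    # assumed pipe velocity, m/s
q = area(100) * v_pipe          # discharge, m^3/s
v_bore = q / area(80)
print(round(v_bore, 2))         # ~3.12 m/s
```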
V port
A V port ball valve has either a 'v' shaped ball or a 'v' shaped seat. This allows for linear and even equal percentage flow characteristics. When the valve is in the closed position and opening is commenced the small end of the 'v' is opened first allowing stable flow control during this stage. This type of design requires a generally more robust construction due to higher velocities of the fluids, which might damage a standard valve. When machined correctly these are excellent control valves, offering superior leakage performance.
Cavity filler
Many industries encounter problems with residues in the ball valve. Where the fluid is meant for human consumption, residues may also be a health hazard, and where the fluid changes from time to time contamination of one fluid with another may occur. Residues arise because in the half-open position of the ball valve a gap is created between the ball bore and the body in which fluid can be trapped. To avoid the fluid getting into this cavity, the cavity has to be plugged, which can be done by extending the seats in such a manner that it is always in contact with the ball. This type of ball valve is known as Cavity Filler Ball Valve.
There are a few types of ball valves related to the attachment and lateral movement of the ball:
Trunnion, floating and actuated
A trunnion ball valve has additional mechanical anchoring of the ball at the top and the bottom, suitable for larger and higher pressure valves (generally above and ).
A floating ball valve is one where the ball is not held in place by a trunnion. In normal operation, this will cause the ball to float downstream slightly. This causes the seating mechanism to compress under the ball pressing against it. Furthermore, in some types, in the event of some force causing the seat mechanism to dissipate (such as extreme heat from fire outside the valve), the ball will float all the way to metal body which is designed to seal against the ball providing a somewhat failsafe design.
Manually operated ball valves can be closed quickly and thus there is a danger of water hammer. Some ball valves are equipped with an actuator that may be pneumatically, hydraulically or motor operated. These valves can be used either for on/off or flow control. A pneumatic flow control valve is also equipped with a positioner which transforms the control signal into actuator position and valve opening accordingly.
Multiport
Three- and four-way ball valves have an L- or T-shaped hole through the middle. The different combinations of flow are shown in the figure. It is easy to see that a T valve can connect any pair of ports, or all three, together, but the 45 degree position which might disconnect all three leaves no margin for error. The L valve can connect the center port to either side port, or disconnect all three, but it cannot connect the side ports together.
Multi-port ball valves with 4 ways, or more, are also commercially available, the inlet way often being orthogonal to the plane of the outlets. For special applications, such as driving air-powered motors from forward to reverse, the operation is performed by rotating a single lever four-way valve. The 4-way ball valve has two L-shaped ports in the ball that do not interconnect, sometimes referred to as an "×" port.
Materials of construction
Body materials may include, but are not limited to, any of these materials:
Stainless steel
Brass
Bronze
Chrome
Titanium
PVC
CPVC
PFA-lined
There are many different types of seats and seals used in ball valves as well. Valves are manufactured from different materials, each suited to specific applications by its chemical compatibility, pressure rating, and temperature rating. Commonly used materials include brass, stainless steel, and bronze. These material choices ensure that valves are suitable for their respective functions, providing efficient and reliable performance in various industries and applications.
TMF (valve seat)
Delrin
Reinforced PTFE (RTFE)
Polychlorotrifluoroethylene (PCTFE; Kel F)
Metal
Nylon
PEEK
50/50 (valve seat)
Virgin (unfilled) PTFE
Ultra-high-molecular-weight polyethylene (UHMWPE)
Graphoil
Viton
See also
Butterfly valve
Control valve
Gate valve
Globe valve
Hydrogen valve
Needle valve
Pinch valve
Piston valve
Plastic pressure pipe systems
Thermal expansion
References
Plumbing valves
Valves | Ball valve | [
"Physics",
"Chemistry"
] | 1,969 | [
"Physical systems",
"Valves",
"Hydraulics",
"Piping"
] |
615,421 | https://en.wikipedia.org/wiki/Lombard%20rhythm | The Lombard rhythm or Scotch snap is a syncopated musical rhythm in which a short, accented note is followed by a longer one. This reverses the pattern normally associated with dotted notes or notes inégales, in which the longer value precedes the shorter.
In Baroque music, a Lombard rhythm consists of a stressed sixteenth note, or semiquaver, followed by a dotted eighth note, or dotted quaver.
Baroque composers often implemented these rhythms. For instance, Johann Georg Pisendel utilized Lombard rhythms within the largo and allegro sections of his sonata for Violin Solo in A Minor. Carl Philipp Emanuel Bach included dotted rhythms within certain excerpts of his concerto for flute, cello, and keyboard.
Baroque performers and composers such as Johann Joachim Quantz not only introduced these uneven rhythms into their studies and pedagogy; jazz also possesses these rhythms, which lie at the very essence of its style.
In Scottish country dances, the Scotch snap (or Scots snap) is a prominent feature of the strathspey.
Due to the immigration of Scots to Appalachia, elements of Scottish music such as the Lombard rhythm have been appropriated into popular music forms of the 20th and 21st centuries. In modern North American pop and rap music, the Lombard rhythm is very common; recent releases by Post Malone, Cardi B, and Ariana Grande feature the Scotch snap. Grande's song "7 Rings" was the subject of controversy surrounding this rhythm: several hip-hop artists (Princess Nokia and Soulja Boy) who had used the rhythm in an iconic fashion raised accusations of plagiarism.
References
Babitz, Sol. “A Problem of Rhythm in Baroque Music.” The Musical Quarterly 38, no. 4 (October 1952): 533–565. https://www.jstor.org/stable/740138
Fuller, David. “Notes inégales (Fr.: ‘unequal notes’),” Grove Music Online (January 2001) https://doi.org/10.1093/gmo/9781561592630.article.20126
Gábor, Elod and Ignác-Csaba FILIP. “Johann Georg Pisendel: Sonata for Violin Solo in A Minor.” Series VIII: Performing Arts 12, no. 61 (2019): pp. 72–76. https://doi.org/10.31926/but.pa.2019.12.61.30
Miller, Leta. “C.P.E. Bach’s Instrumental ‘Recompositions’: Revisions or Alternatives?” Current Musicology 59, (1995) p. 29.
Further reading
Baroque music
Rhythm and meter
Scottish country dance
Scottish fiddling
Scottish folk music | Lombard rhythm | [
"Physics"
] | 566 | [
"Spacetime",
"Rhythm and meter",
"Physical quantities",
"Time"
] |
615,487 | https://en.wikipedia.org/wiki/Heterogeneous%20Element%20Processor | The Heterogeneous Element Processor (HEP) was introduced by Denelcor, Inc. in 1982. The HEP's architect was Burton Smith. The machine was designed to solve fluid dynamics problems for the Ballistic Research Laboratory. A HEP system, as the name implies, was pieced together from many heterogeneous components -- processors, data memory modules, and I/O modules. The components were connected via a switched network.
A single processor in a HEP system, called a PEM (Process Execution Module), was rather unconventional: via a "program status word (PSW) queue", up to fifty processes could be maintained in hardware at once (up to sixteen PEMs could be connected in one system). The largest system ever delivered had four PEMs. The eight-stage instruction pipeline allowed instructions from eight different processes to proceed at once; in fact, only one instruction from a given process was allowed to be present in the pipeline at any point in time. Therefore, the full processor throughput of 10 MIPS could only be achieved when eight or more processes were active, and no single process could achieve a throughput greater than 1.25 MIPS. This type of multithreaded processing classifies the HEP today as a barrel processor, though its designers described it as an MIMD pipelined processor. The hardware implementation of the HEP PEM was emitter-coupled logic.
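A toy model of the throughput figures quoted above (a sketch of the stated numbers, not Denelcor documentation): with eight pipeline stages and at most one in-flight instruction per process, aggregate throughput scales with the number of runnable processes up to eight.

```python
PEAK_MIPS = 10.0
PIPELINE_STAGES = 8

def throughput(active_processes):
    """Aggregate MIPS of one PEM as a function of runnable processes."""
    return PEAK_MIPS * min(active_processes, PIPELINE_STAGES) / PIPELINE_STAGES

for n in (1, 4, 8, 16):
    print(n, throughput(n))   # 1 -> 1.25, 4 -> 5.0, 8 and above -> 10.0
```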
Processes were classified as either user-level or supervisor-level. User-level processes could create supervisor-level processes, which were used to manage user-level processes and perform I/O. Processes of the same class were required to be grouped into one of seven user tasks and seven supervisor tasks.
Each processor, in addition to the PSW queue and instruction pipeline, contained instruction memory, 2,048 64-bit general purpose registers and 4,096 constant registers. Constant registers were differentiated by the fact that only supervisor processes could modify their contents. The processors themselves contained no data memory; instead, data memory modules could be separately attached to the switched network.
The HEP memory consisted of completely separate instruction memory (up to 128 MBs) and data memory (up to 1 GB). Users saw 64-bit words, but in reality, data memory words were 72-bit with the extra bits used for state, see next paragraph, parity, tagging, and other uses.
The HEP implemented a type of mutual exclusion in which all registers and locations in data memory had associated "empty" and "full" states. Reading from a location set the state to "empty," while writing to it set the state to "full." A programmer could allow processes to halt after trying to read from an empty location or write to a full location, enforcing critical sections.
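A minimal sketch of this full/empty discipline, emulated with a modern thread library (the class and its API are illustrative, not the HEP's actual interface):

```python
import threading

class FullEmptyCell:
    """Cell whose reads block until 'full' and whose writes block until
    'empty', mimicking the HEP's per-location state bits."""

    def __init__(self):
        self._cv = threading.Condition()
        self._full = False
        self._value = None

    def write(self, value):                  # blocks while the cell is full
        with self._cv:
            self._cv.wait_for(lambda: not self._full)
            self._value, self._full = value, True
            self._cv.notify_all()

    def read(self):                          # blocks while the cell is empty
        with self._cv:
            self._cv.wait_for(lambda: self._full)
            self._full = False
            self._cv.notify_all()
            return self._value

cell = FullEmptyCell()
threading.Thread(target=lambda: cell.write(42)).start()
print(cell.read())   # 42, once the writer has filled the cell
```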
The switched network between elements resembled, in many ways, a modern computer network. On the network were sets of nodes, each of which had three links. When a packet arrived at a node, it consulted a routing table and attempted to forward the packet closer to its destination. If a node became congested, any incoming packets were passed on without routing. Packets treated in such a manner had their priority level increased; when several packets vied for a single node, a packet with a higher priority level would be routed before ones with lower priority levels.
Another component of the switched network was the I/O system, with its own memory and many individual DEC UNIBUS buses attached for disks and other peripherals. The system also had the ability to save the full/empty bits not normally visible directly. The initial I/O system performance was shown to be woefully inadequate due to the high latency in starting I/O operations. Ron Natalie (from BRL) and Burton Smith designed a new system out of spare parts on napkins at a local steakhouse and put it into operation in the course of the ensuing week.
The HEP's primary application programming language was a unique Fortran variant. In time, C, Pascal, and SISAL were added. Data variables using full/empty bits were marked by prepending '$' to their names: 'A' would name an ordinary local variable, while '$A' would name a locking full/empty variable. Application deadlock was thus possible, and, problematically, omitting the '$' could introduce unintended numerical inaccuracy.
The first HEP operating system was HEPOS. Mike Muuss was involved in a Unix port for the Ballistic Research Laboratory. HEPOS was not a Unix-like operating system.
Although it was known to have poor cost-performance, the HEP received attention due to what were, at the time, several revolutionary features. The HEP had the performance of a CDC 7600-class computer in the Cray-1 era. HEP systems were leased by the Ballistic Research Laboratory (a four-PEM system), Los Alamos, the Argonne National Laboratory (a single PEM), the National Security Agency, and Shoko Ltd (Japan, one PEM). Germany's Messerschmitt (a three-PEM system) was the only customer that bought a system outright. Denelcor also delivered a two-PEM system to the University of Georgia in exchange for software assistance (the system had also been offered to the University of Maryland). Messerschmitt was the only customer to put the HEP into use for "real" applications; the others used it for experimenting with parallel algorithms. The BRL system's only real application was preparing a movie using the BRL-CAD software.
Faster and larger designs for HEP-2 and HEP-3 were started but never completed. The architectural concept would later be embodied with the code-name Horizon.
See also
Multithreading (computer architecture)
Hyper-threading
Cray MTA
Tera Computer Company
VLIW
References
Parallel computing
Supercomputers | Heterogeneous Element Processor | [
"Technology"
] | 1,207 | [
"Supercomputers",
"Supercomputing"
] |
615,499 | https://en.wikipedia.org/wiki/Phenylpropanolamine | Phenylpropanolamine (PPA), sold under many brand names, is a sympathomimetic agent which is used as a decongestant and appetite suppressant. It was previously commonly used in prescription and over-the-counter cough and cold preparations. The medication is taken by mouth.
Side effects of phenylpropanolamine include increased heart rate and blood pressure, among others. Rarely, phenylpropanolamine has been associated with hemorrhagic stroke. Phenylpropanolamine acts as a norepinephrine releasing agent, thereby indirectly activating adrenergic receptors. As such, it is an indirectly acting sympathomimetic. It was previously thought to act as a mixed acting sympathomimetic with additional direct agonist actions on adrenergic receptors, but this proved not to be the case. Chemically, phenylpropanolamine is a substituted amphetamine and is closely related to ephedrine, pseudoephedrine, amphetamine, and cathinone. It is most commonly a racemic mixture of the (1R,2S)- and (1S,2R)-enantiomers of β-hydroxyamphetamine and is also known as dl-norephedrine.
Phenylpropanolamine was first synthesized around 1910 and its effects on blood pressure were first characterized around 1930. It was introduced for medical use by the 1930s. The medication was withdrawn from many markets starting in 2000 following findings that it was associated with increased risk of hemorrhagic stroke. It was previously available both over-the-counter and by prescription. Phenylpropanolamine is available for medical and/or veterinary use in some countries.
Medical uses
Phenylpropanolamine is used as a decongestant to treat nasal congestion. It has also been used to suppress appetite and promote weight loss in the treatment of obesity and has shown effectiveness for this indication.
Available forms
Phenylpropanolamine was previously available over-the-counter and in certain combination forms by prescription in the United States. However, these forms have all been discontinued. Phenylpropanolamine is available in some countries.
Side effects
Phenylpropanolamine produces sympathomimetic effects and can cause side effects such as increased heart rate and blood pressure. It has been associated rarely with incidence of hemorrhagic stroke.
Certain drugs increase the chances of déjà vu occurring in the user, resulting in a strong sensation that an event or experience currently being experienced has already been experienced in the past. Some pharmaceutical drugs, when taken together, have also been implicated in the cause of déjà vu. One case report described an otherwise healthy male who started experiencing intense and recurrent sensations of déjà vu upon taking the drugs amantadine and phenylpropanolamine together to relieve flu symptoms. He found the experience so interesting that he completed the full course of his treatment and reported it to psychologists to write up as a case study. Because of the dopaminergic action of the drugs and previous findings from electrode stimulation of the brain, it was speculated that déjà vu occurs as a result of hyperdopaminergic action in the mesial temporal areas of the brain.
Interactions
There has been very little research on drug interactions with phenylpropanolamine. In one study, phenylpropanolamine taken with caffeine was found to quadruple caffeine levels. In another study, phenylpropanolamine reduced theophylline clearance by 50%.
Pharmacology
Pharmacodynamics
Phenylpropanolamine acts primarily as a selective norepinephrine releasing agent. It also acts as a dopamine releasing agent with around 10-fold lower potency. The stereoisomers of the drug have only weak or negligible affinity for α- and β-adrenergic receptors.
Phenylpropanolamine was originally thought to act as a direct agonist of adrenergic receptors and hence to act as a mixed acting sympathomimetic. However, phenylpropanolamine was subsequently found to show only weak or negligible affinity for these receptors and has been instead characterized as exclusively an indirectly acting sympathomimetic. It acts by inducing norepinephrine release and thereby indirectly activating adrenergic receptors.
Many sympathetic hormones and neurotransmitters are based on the phenethylamine skeleton, and function generally in "fight or flight" type responses, such as increasing heart rate, blood pressure, dilating the pupils, increased energy, drying of mucous membranes, increased sweating, and a significant number of additional effects.
Phenylpropanolamine has relatively low potency as a sympathomimetic. It is about 100 to 200 times less potent than epinephrine (adrenaline) or norepinephrine (noradrenaline) in its sympathomimetic effects, although responses are variable depending on tissue.
Pharmacokinetics
Absorption
Phenylpropanolamine is readily and well absorbed with oral administration. Immediate-release forms of the drug reached peak levels about 1.5 hours (range 1.0 to 2.3 hours) following administration. Conversely, extended-release forms of phenylpropanolamine reach peak levels after 3.0 to 4.5 hours. The pharmacokinetics of phenylpropanolamine are linear across an oral dose range of 25 to 100 mg. Steady-state levels of phenylpropanolamine are achieved within 12 hours when the drug is taken once every 4 hours. There is 62% accumulation of phenylpropanolamine at steady state in terms of peak levels, whereas area-under-the-curve levels are not increased with steady state.
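For illustration only, the textbook one-compartment accumulation index can be computed from these figures; this simple formula ignores absorption kinetics and is not the model behind the 62% peak-accumulation figure quoted above.

```python
import math

def accumulation_index(half_life_h, dosing_interval_h):
    """R = 1 / (1 - e^(-k*tau)) with k = ln(2) / half-life."""
    k = math.log(2) / half_life_h
    return 1.0 / (1.0 - math.exp(-k * dosing_interval_h))

# Immediate-release phenylpropanolamine: t1/2 ~ 4 h, dosed every 4 h.
print(accumulation_index(4.0, 4.0))   # 2.0
```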
Distribution
The volume of distribution of phenylpropanolamine is 3.0 to 4.5 L/kg. Levels of phenylpropanolamine in the brain are about 40% of those in the heart and 20% of those in the lungs. The hydroxyl group of phenylpropanolamine at the β carbon increases its hydrophilicity, reduces its permeation through the blood–brain barrier, and limits its central nervous system (CNS) effects. Hence, phenylpropanolamine crosses into the brain only to some extent, has only weak CNS effects, and most of its effects are peripheral. In any case, phenylpropanolamine can produce amphetamine-like psychostimulant effects at very high doses. Phenylpropanolamine is more lipophilic than structurally related sympathomimetics with hydroxyl groups on the phenyl ring like epinephrine (adrenaline) and phenylephrine and has greater brain permeability than these agents.
The plasma protein binding of phenylpropanolamine is approximately 20%. However, it has been said that no recent studies have substantiated this value.
Metabolism
Phenylpropanolamine is not substantially metabolized. It also does not undergo significant first-pass metabolism. Only about 3 to 4% of an oral dose of phenylpropanolamine is metabolized. Metabolites include hippuric acid (via oxidative deamination of the side chain) and 4-hydroxynorephedrine (via para-hydroxylation). The methyl group at the α carbon of phenylpropanolamine blocks metabolism by monoamine oxidases (MAOs). Phenylpropanolamine is also not a substrate of catechol O-methyltransferase. The hydroxyl group at the β carbon of phenylpropanolamine also helps to increase metabolic stability.
Elimination
Approximately 90% of a dose of phenylpropanolamine is excreted in the urine unchanged within 24 hours. About 4% of excreted material is in the form of metabolites.
The elimination half-life of immediate-release phenylpropanolamine is about 4 hours, with a range in different studies of 3.7 to 4.9 hours. The half-life of extended-release phenylpropanolamine has ranged from 4.3 to 5.8 hours.
The elimination of phenylpropanolamine is dependent on urinary pH. At a more acidic urinary pH, the elimination of phenylpropanolamine is accelerated and its half-life and duration are shortened, whereas at more basic urinary pH, the elimination of phenylpropanolamine is reduced and its half-life and duration are extended. Urinary acidifying agents like ascorbic acid and ammonium chloride can increase the excretion of and thereby reduce exposure to amphetamines including phenylpropanolamine, whereas urinary alkalinizing agents including antacids like sodium bicarbonate as well as acetazolamide can reduce the excretion of these agents and thereby increase exposure to them.
Total body clearance of phenylpropanolamine has been reported to be 0.546 L/h/kg, while renal clearance was 0.432 L/h/kg.
Miscellaneous
As phenylpropanolamine is not extensively metabolized, it would probably not be affected by hepatic impairment. Conversely, there is likely to be accumulation of phenylpropanolamine with renal impairment due to its dependence on urinary excretion.
Norephedrine is a minor metabolite of amphetamine and methamphetamine, as shown below. It is also a minor metabolite of ephedrine and a major metabolite of cathinone.
Chemistry
Phenylpropanolamine, also known as (1RS,2SR)-α-methyl-β-hydroxyphenethylamine or as (1RS,2SR)-β-hydroxyamphetamine, is a substituted phenethylamine and amphetamine derivative. It is closely related to the cathinones (β-ketoamphetamines). β-Hydroxyamphetamine exists as four stereoisomers, which include d- (dextrorotatory) and l-norephedrine (levorotatory), and d- and l-norpseudoephedrine. d-Norpseudoephedrine is also known as cathine, and is found naturally in Catha edulis (khat). Pharmaceutical drug preparations of phenylpropanolamine have varied in their stereoisomer composition in different countries, which may explain differences in misuse and side effect profiles. In any case, racemic dl-norephedrine, or (1RS,2SR)-phenylpropanolamine, appears to be the most commonly used formulation of phenylpropanolamine pharmaceutically. Analogues of phenylpropanolamine include ephedrine, pseudoephedrine, amphetamine, methamphetamine, and cathinone.
Phenylpropanolamine, structurally, is in the substituted phenethylamine class, consisting of a cyclic benzene or phenyl group, a two carbon ethyl moiety, and a terminal nitrogen, hence the name phen-ethyl-amine. The methyl group on the alpha carbon (the first carbon before the nitrogen group) also makes this compound a member of the substituted amphetamine class. Ephedrine is the N-methyl analogue of phenylpropanolamine.
Exogenous compounds in this family are degraded too rapidly by monoamine oxidase to be active at all but the highest doses. However, the addition of the α-methyl group allows the compound to avoid metabolism and confer an effect. In general, N-methylation of primary amines increases their potency, whereas β-hydroxylation decreases CNS activity, but conveys more selectivity for adrenergic receptors.
Phenylpropanolamine is a small-molecule compound with the molecular formula C9H13NO and a molecular weight of 151.21 g/mol. It has an experimental log P of 0.67, while its predicted log P values range from 0.57 to 0.89. The compound is relatively lipophilic, but is also more hydrophilic than other amphetamines. The lipophilicity of amphetamines is closely related to their brain permeability. For comparison to phenylpropanolamine, the experimental log P of methamphetamine is 2.1, of amphetamine is 1.8, of ephedrine is 1.1, of pseudoephedrine is 0.7, of phenylephrine is -0.3, and of norepinephrine is -1.2. Methamphetamine has high brain permeability, whereas phenylephrine and norepinephrine are peripherally selective drugs. The optimal log P for brain permeation and central activity is about 2.1 (range 1.5–2.7).
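A trivial sketch screening the quoted experimental log P values against the stated optimal window of roughly 1.5 to 2.7 (all values copied from the text above):

```python
LOG_P = {
    "methamphetamine": 2.1, "amphetamine": 1.8, "ephedrine": 1.1,
    "pseudoephedrine": 0.7, "phenylpropanolamine": 0.67,
    "phenylephrine": -0.3, "norepinephrine": -1.2,
}

for name, logp in LOG_P.items():
    in_window = 1.5 <= logp <= 2.7
    print(f"{name:20s} log P = {logp:5.2f}  in optimal CNS window: {in_window}")
```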
Phenylpropanolamine has been used pharmaceutically exclusively as the hydrochloride salt.
History
Phenylpropanolamine was first synthesized in the early 20th century, in or around 1910. It was patented as a mydriatic in 1913. The pressor effects of phenylpropanolamine were characterized in the late 1920s and the 1930s. Phenylpropanolamine was first introduced for medical use by the 1930s.
In the United States, phenylpropanolamine is no longer sold due to an increased risk of haemorrhagic stroke. In a few countries in Europe, however, it is still available either by prescription or sometimes over-the-counter. In Canada, it was withdrawn from the market on 31 May 2001. It was voluntarily withdrawn from the Australian market by July 2001. In India, human use of phenylpropanolamine and its formulations was banned on 10 February 2011, but the ban was overturned by the judiciary in September 2011.
Society and culture
Names
Phenylpropanolamine is the generic name of the drug; phenylpropanolamine hydrochloride is the name used for its hydrochloride salt. It is also known by the synonym norephedrine.
Brand names of phenylpropanolamine include Acutrim, Appedrine, Capton Diet, Control, Dexatrim, Emagrin Plus A.P., Glifentol, Kontexin, Merex, Monydrin, Mydriatine, Prolamine, Propadrine, Propagest, Recatol, Rinexin, Tinaroc, and Westrim, among many others. It has also been used in combinations under brand names including Allerest, Demazin, Dimetapp, and Sinarest, among others.
Availability
Phenylpropanolamine is available for medical and veterinary use in some countries.
Exercise and sports
There has been interest in phenylpropanolamine as a performance-enhancing drug in exercise and sports. However, clinical studies suggest that phenylpropanolamine is not effective in this regard. Phenylpropanolamine is not on the World Anti-Doping Agency (WADA) list of prohibited substances as of 2024.
Legal status
In Sweden, phenylpropanolamine is still available in prescription decongestants; Phenylpropanolamine is also still available in Germany. It is used in some polypill medications like Wick DayMed capsules.
In the United Kingdom, phenylpropanolamine was available in many "all in one" cough and cold medications which usually also feature paracetamol or another analgesic and caffeine and could also be purchased on its own; however, it is no longer approved for human use. A European Category 1 Licence is required to purchase phenylpropanolamine for academic use.
In the United States, the Food and Drug Administration (FDA) issued a public health advisory against the use of the drug in November 2000. In this advisory, the FDA requested but did not require that all drug companies discontinue marketing products containing phenylpropanolamine. The agency estimates that phenylpropanolamine caused between 200 and 500 strokes per year among 18-to-49-year-old users. In 2005, the FDA removed phenylpropanolamine from over-the-counter sale and removed its "generally recognized as safe and effective" (GRASE) status. Under the 2020 CARES Act, it requires FDA approval before it can be marketed again effectively banning the drug even as a prescription drug.
Because of its potential use in amphetamine manufacture, phenylpropanolamine is controlled by the Combat Methamphetamine Epidemic Act of 2005. It is still available for veterinary use in dogs, however, as a treatment for urinary incontinence.
Internationally, an item on the agenda of the 2000 Commission on Narcotic Drugs session called for including the stereoisomer norephedrine in Table I of United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances.
Drugs containing phenylpropanolamine were banned in India on 27 January 2011. On 13 September 2011, Madras High Court revoked a ban on manufacture and sale of pediatric drugs phenylpropanolamine and nimesulide.
Veterinary use
Phenylpropanolamine is available for use in veterinary medicine. It is used to control urinary incontinence in dogs.
In June 2024, the US Food and Drug Administration (FDA) approved Phenylpropanolamine Hydrochloride chewable tablets for the control of urinary incontinence due to a weakening of the muscles that control urination (urethral sphincter hypotonus) in dogs. This is the first generic phenylpropanolamine hydrochloride chewable tablets for dogs.
Urinary incontinence happens when a dog loses its ability to control when it urinates. Urinary incontinence due to urethral sphincter hypotonus can happen as dogs age and as the dog’s muscle in its urethra (the body part that leads from the dog’s bladder to outside its body) weakens and loses control over its ability to hold urine.
Phenylpropanolamine Hydrochloride chewable tablets contain the same active ingredient (phenylpropanolamine hydrochloride) in the same concentration and dosage form as the approved brand name drug product, Proin chewable tablets, which were first approved in August 2011. In addition, the FDA determined that Phenylpropanolamine Hydrochloride chewable tablets contain no inactive ingredients that may significantly affect the bioavailability of the active ingredient.
Notes
Reference notes
References
Amphetamine alkaloids
Anorectics
Anti-obesity drugs
Antihypotensive agents
Beta-Hydroxyamphetamines
Decongestants
Drugs acting on the cardiovascular system
Drugs acting on the nervous system
Drugs in sport
Enantiopure drugs
Ergogenic aids
Human drug metabolites
Norepinephrine releasing agents
Peripherally selective drugs
Recreational drug metabolites
Stimulants
Sympathomimetics
Veterinary drugs
Withdrawn anti-obesity drugs
World Anti-Doping Agency prohibited substances | Phenylpropanolamine | [
"Chemistry"
] | 4,166 | [
"Chemicals in medicine",
"Stereochemistry",
"Human drug metabolites",
"Enantiopure drugs"
] |
615,574 | https://en.wikipedia.org/wiki/Berm | A berm is a level space, shelf, or raised barrier (usually made of compacted soil) separating areas in a vertical way, especially partway up a long slope. It can serve as a terrace road, track, path, a fortification line, a border/separation barrier for navigation, good drainage, industry, or other purposes.
Etymology
The word is from Middle Dutch and came into usage in English via French.
Military use
History
In medieval military engineering, a berm (or berme) was a level space between a parapet or defensive wall and an adjacent steep-walled ditch or moat. It was intended to reduce soil pressure on the walls of the excavated part to prevent its collapse. It also meant that debris dislodged from fortifications would not fall into (and fill) a ditch or moat.
In the trench warfare of World War I, the name was applied to a similar feature at the lip of a trench, which served mainly as an elbow-rest for riflemen.
Modern usage
In modern military engineering, a berm is the earthen or sod wall or parapet, especially a low earthen wall adjacent to a ditch. The digging of the ditch (often by a bulldozer or military engineering vehicle) can provide the soil from which the berm is constructed. Walls constructed in this manner are an obstacle to vehicles, including most armoured fighting vehicles but are easily crossed by infantry. Because of the ease of construction, such walls can be made hundreds or thousands of kilometres long. A prominent example of such a berm is the Moroccan Western Sahara Wall.
Erosion control
Berms are also used to control soil erosion and sedimentation by reducing the rate of surface runoff. The berms either reduce the velocity of the water, or direct water to areas that are not susceptible to erosion, thereby reducing the adverse effects of running water on exposed topsoil. Following the 2010 Deepwater Horizon oil spill in the Gulf of Mexico, the construction of berms designed to prevent oil from reaching the fragile Louisiana wetlands (which would result in massive erosion) was proposed early on, and was officially approved by the federal government in mid-June, 2010, after numerous failures to stop and contain the oil leak with more advanced technologies.
Geography
In coastal geography, a berm is a bank of sand or gravel ridge parallel to the shoreline and a few tens of centimetres high, created by wave action throwing material beyond the average level of the sea.
House construction
Earth is piled up against exterior walls and packed, sloping down away from the house. The roof may or may not be fully earth covered, and windows/openings may occur on one or more sides of the shelter. Due to the building being above ground, fewer moisture problems are associated with earth berming in comparison to underground/fully recessed construction.
Other applications
For general applications, a berm is a physical, stationary barrier of some kind. For example, in highway construction, a berm is a noise barrier constructed of earth, often landscaped, running along a highway to protect adjacent land users from noise pollution. The shoulder of a road is also called a berm and in New Zealand the word describes a publicly owned grassed nature strip sometimes planted with trees alongside urban roads (usually called a verge). In snowboard cross, a berm is a wall of snow built up in a corner. In mountain biking, a berm is a banked turn formed by soil, commonly dug from the track, being deposited on the outer rim of the turn. In coastal systems, a berm is a raised ridge of pebbles or sand found at high tide or storm tide marks on a beach. In snow removal, a berm or windrow refers to the linear accumulation of snow cast aside by a plow. Earth berms are used above particle accelerator tunnels to provide shielding from radiation. In open-pit mining, a berm refers to dirt and rock piled alongside a haulage road or along the edge of a dump point. Intended as a safety measure, they are commonly required by government organizations to be at least half as tall as the wheels of the largest mining machine on-site.
Physical security systems employ berms to exclude hostile vehicles and slow attackers on foot (similar to the military application without the trench). Security berms are common around military and nuclear facilities. An example is the berm proposed for Vermont Yankee nuclear power plant in Vermont. At Baylor Ballpark, a baseball stadium on the campus of Baylor University, a berm is constructed down the right field line. The berm replaces bleachers, and general admission tickets are sold for fans who wish to sit on the grass or watch the game from the top of the hill.
Berms are also used as a method of environmental spill containment and liquid spill control. Bunding is the construction of a secondary impermeable barrier around and beneath storage or processing plant, sufficient to contain the plant's volume after a spill. This is often achieved on large sites by surrounding the plant with a berm. The US Environmental Protection Agency (EPA) requires that oils and fuels stored over certain volume levels be placed in secondary spill containment. Berms for spill containment are typically manufactured from polyvinyl chloride (PVC) or geomembrane fabric that provide a barrier to keep spills from reaching the ground or navigable waterways. Most berms have sidewalls to keep liquids contained for future capture and safe disposal.
See also
Road verge
Earthworks (engineering)
Bund
Moroccan Wall
Marches
Limes (Roman Empire)
Long acre
Flood-meadow
Floodplain
References
External links
Engineering barrages
Archaeological features
Artificial landforms
Fortification (architectural elements)
Fortification lines
Snow removal | Berm | [
"Engineering"
] | 1,156 | [
"Military engineering",
"Engineering barrages",
"Fortification lines"
] |
615,643 | https://en.wikipedia.org/wiki/Deep%20Ecliptic%20Survey | The Deep Ecliptic Survey (DES) is a project to find Kuiper belt objects (KBOs), using the facilities of the National Optical Astronomy Observatory (NOAO). The principal investigator is Robert L. Millis.
From 1998 through the end of 2003, the survey covered 550 square degrees to a limiting magnitude of 22.5, meaning that an estimated 50% of objects at this magnitude have been found.
The survey has also established the mean Kuiper Belt plane and introduced new formal definitions of the dynamical classes of Kuiper belt objects.
The remarkable first observations and/or discoveries include:
28978 Ixion, large plutino
19521 Chaos (cubewano)
, the first binary trans-Neptunian object (TNO)
, the first object with perihelion too far to be affected (scattered) by Neptune and a large semi-major axis
, remarkable for its semi-major axis of more than 500 AU and extreme eccentricity (0.96), which take the object from inside Neptune's orbit to more than 1000 AU (see the short check after this list)
, the first Neptune trojan
, with one of the most inclined orbits (>68°)
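The orbital extremes quoted in the list above follow from the standard relations q = a(1 - e) for perihelion and Q = a(1 + e) for aphelion; a short check, assuming a = 550 AU for illustration (the exact value was not preserved):

```python
a, e = 550.0, 0.96    # semi-major axis in AU (assumed), eccentricity

q = a * (1 - e)       # perihelion: 22 AU, inside Neptune's ~30 AU orbit
Q = a * (1 + e)       # aphelion: 1078 AU, beyond 1000 AU
print(q, Q)
```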
References
External links
https://web.archive.org/web/20040612003417/http://www.lowell.edu/Research/DES/
Astronomical surveys
Asteroid surveys | Deep Ecliptic Survey | [
"Astronomy"
] | 283 | [
"Astronomical surveys",
"Works about astronomy",
"Astronomical objects"
] |
615,703 | https://en.wikipedia.org/wiki/Steganalysis | Steganalysis is the study of detecting messages hidden using steganography; this is analogous to cryptanalysis applied to cryptography.
Overview
The goal of steganalysis is to identify suspected packages, determine whether or not they have a payload encoded into them, and, if possible, recover that payload.
Unlike cryptanalysis, in which intercepted data contains a message (though that message is encrypted), steganalysis generally starts with a pile of suspect data files, but little information about which of the files, if any, contain a payload. The steganalyst is usually something of a forensic statistician, and must start by reducing this set of data files (which is often quite large; in many cases, it may be the entire set of files on a computer) to the subset most likely to have been altered.
Basic techniques
The problem is generally handled with statistical analysis. A set of unmodified files of the same type, and ideally from the same source (for example, the same model of digital camera, or, if possible, the same digital camera; digital audio from the same CD from which MP3 files were "ripped"; etc.) as the set being inspected, is analyzed for various statistics. Some of these are as simple as spectrum analysis, but since most image and audio files these days are compressed with lossy compression algorithms, such as JPEG and MP3, analysts also look for inconsistencies in the way this data has been compressed. For example, a common artifact in JPEG compression is "edge ringing", where high-frequency components (such as the high-contrast edges of black text on a white background) distort neighboring pixels. This distortion is predictable, and simple steganographic encoding algorithms will produce artifacts that are detectably unlikely.
One case where detection of suspect files is straightforward is when the original, unmodified carrier is available for comparison. Comparing the package against the original file will yield the differences caused by encoding the payload—and, thus, the payload can be extracted.
Advanced techniques
Noise floor consistency analysis
In some cases, such as when only a single image is available, more complicated analysis techniques may be required. In general, steganography attempts to make distortion to the carrier indistinguishable from the carrier's noise floor. In practice, however, this is often improperly simplified to deciding to make the modifications to the carrier resemble white noise as closely as possible, rather than analyzing, modeling, and then consistently emulating the actual noise characteristics of the carrier. In particular, many simple steganographic systems simply modify the least-significant bit (LSB) of a sample; this causes the modified samples to have not only different noise profiles than unmodified samples, but also for their LSBs to have different noise profiles than could be expected from analysis of their higher-order bits, which will still show some amount of noise. Such LSB-only modification can be detected with appropriate algorithms, in some cases detecting encoding densities as low as 1% with reasonable reliability.
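A minimal sketch of this kind of test, assuming 8-bit samples already loaded into a NumPy array; the pairs-of-values chi-square statistic shown is one classic published approach, not the only one:

```python
import numpy as np

def lsb_ratio(samples: np.ndarray) -> float:
    """Fraction of 1-bits in the least-significant bit plane. Naive LSB
    embedding of an encrypted payload tends to push this toward 0.5
    more uniformly than natural sensor noise does."""
    return float((samples.ravel() & 1).mean())

def pair_chi_square(samples: np.ndarray) -> float:
    """Chi-square statistic over pairs of values (2k, 2k+1). LSB
    replacement equalizes the counts within each pair, so an unusually
    low statistic over a large sample is suspicious."""
    hist, _ = np.histogram(samples.ravel(), bins=256, range=(0, 256))
    even, odd = hist[0::2].astype(float), hist[1::2].astype(float)
    expected = (even + odd) / 2.0
    mask = expected > 0
    return float((((even - odd) / 2.0)[mask] ** 2 / expected[mask]).sum())
```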
Further complications
Encrypted payloads
Detecting a probable steganographic payload is often only part of the problem, as the payload may have been encrypted first. Encrypting the payload is not always done solely to make recovery of the payload more difficult. Most strong ciphers have the desirable property of making the payload appear indistinguishable from uniformly-distributed noise, which can make detection efforts more difficult, and save the steganographic encoding technique the trouble of having to distribute the signal energy evenly (but see above concerning errors emulating the native noise of the carrier).
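A byte-entropy measurement is a simple first-pass screen for such near-uniform data (a sketch; a serious analysis would use stronger randomness tests):

```python
import math
from collections import Counter

def bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte.
    Encrypted or well-compressed data typically scores close to 8.0;
    text and most structured formats score noticeably lower."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```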
Barrage noise
If inspection of a storage device is considered very likely, the steganographer may attempt to barrage a potential analyst with, effectively, misinformation. This may be a large set of files encoded with anything from random data, to white noise, to meaningless drivel, to deliberately misleading information. The encoding density on these files may be slightly higher than the "real" ones; likewise, the possible use of multiple algorithms of varying detectability should be considered. The steganalyst may be forced into checking these decoys first, potentially wasting significant time and computing resources. The downside to this technique is that it makes it much more obvious that steganographic software was available and was used.
Conclusions and further action
Obtaining a warrant or taking other actions based solely on steganalytic evidence is a very dicey proposition unless a payload has been completely recovered and decrypted, because otherwise all the analyst has is a statistic indicating that a file may have been modified, and that modification may have been the result of steganographic encoding. Because this is likely to frequently be the case, steganalytic suspicions will often have to be backed up with other investigative techniques.
See also
Audio watermark detection
BPCS-Steganography
Computer forensics
Covert channel
Cryptography
Data compression
Steganographic file system
Steganography
Steganography tools
References
Bibliography
External links
Steganalysis research and papers by Neil F. Johnson addressing attacks against Steganography and Watermarking, and Countermeasures to these attacks.
Research Group. Ongoing research in steganalysis.
Steganography - Implementation and detection Short introduction on steganography, discussing several information sources in which information can be stored
Cryptographic attacks
Steganography | Steganalysis | [
"Technology"
] | 1,102 | [
"Cryptographic attacks",
"Computer security exploits"
] |
615,799 | https://en.wikipedia.org/wiki/Computer%20forensics | Computer forensics (also known as computer forensic science) is a branch of digital forensic science pertaining to evidence found in computers and digital storage media. The goal of computer forensics is to examine digital media in a forensically sound manner with the aim of identifying, preserving, recovering, analyzing, and presenting facts and opinions about the digital information.
Although it is most often associated with the investigation of a wide variety of computer crime, computer forensics may also be used in civil proceedings. The discipline involves similar techniques and principles to data recovery, but with additional guidelines and practices designed to create a legal audit trail.
Evidence from computer forensics investigations is usually subjected to the same guidelines and practices as other digital evidence. It has been used in a number of high-profile cases and is accepted as reliable within U.S. and European court systems.
Overview
In the early 1980s, personal computers became more accessible to consumers, leading to their increased use in criminal activity (for example, to help commit fraud). At the same time, several new "computer crimes" were recognized (such as cracking). The discipline of computer forensics emerged during this time as a method to recover and investigate digital evidence for use in court. Since then, computer crime and computer-related crime have grown, with the FBI reporting a suspected 791,790 internet crimes in 2020, a 69% increase over the number reported in 2019. Today, computer forensics is used to investigate a wide variety of crimes, including child pornography, fraud, espionage, cyberstalking, murder, and rape. The discipline also features in civil proceedings as a form of information gathering (e.g., electronic discovery).
Forensic techniques and expert knowledge are used to explain the current state of a digital artifact, such as a computer system, storage medium (e.g., hard disk or CD-ROM), or an electronic document (e.g., an email message or JPEG image). The scope of a forensic analysis can vary from simple information retrieval to reconstructing a series of events. In a 2002 book, Computer Forensics, authors Kruse and Heiser define computer forensics as involving "the preservation, identification, extraction, documentation and interpretation of computer data". They describe the discipline as "more of an art than a science," indicating that forensic methodology is backed by flexibility and extensive domain knowledge. However, while several methods can be used to extract evidence from a given computer, the strategies used by law enforcement are fairly rigid and lack the flexibility found in the civilian world.
Cybersecurity
Computer forensics is often confused with cybersecurity. Cybersecurity focuses on prevention and protection, while computer forensics is more reactive, involving activities such as tracking and exposing. System security usually encompasses two teams, cybersecurity and computer forensics, which work together. A cybersecurity team creates systems and programs to protect data; if these fail, the computer forensics team recovers the data and investigates the intrusion and theft. Both areas require knowledge of computer science.
Computer-related crimes
Computer forensics is used to convict those involved in physical and digital crimes. Some of these computer-related crimes include interruption, interception, copyright infringement, and fabrication. Interruption relates to the destruction and stealing of computer parts and digital files. Interception is the unauthorized access of files and information stored on technological devices. Copyright infringement refers to using, reproducing, and distributing copyrighted information, including software piracy. Fabrication refers to the insertion of false data and information into a system through an unauthorized source. Examples of interceptions include the Bank NSP case, Sony.Sambandh.com case, and business email compromise scams.
Use as evidence
In court, computer forensic evidence is subject to the usual requirements for digital evidence. This requires that information be authentic, reliably obtained, and admissible. Different countries have specific guidelines and practices for evidence recovery. In the United Kingdom, examiners often follow Association of Chief Police Officers guidelines that help ensure the authenticity and integrity of evidence. While voluntary, the guidelines are widely accepted in British courts.
Computer forensics has been used as evidence in criminal law since the mid-1980s. Some notable examples include:
BTK Killer: Dennis Rader was convicted of a string of serial killings over sixteen years. Towards the end of this period, Rader sent letters to the police on a floppy disk. Metadata within the documents implicated an author named "Dennis" at "Christ Lutheran Church," helping lead to Rader's arrest.
Joseph Edward Duncan: A spreadsheet recovered from Duncan's computer contained evidence showing him planning his crimes. Prosecutors used this to demonstrate premeditation and secure the death penalty.
Sharon Lopatka: Hundreds of emails on Lopatka's computer led investigators to her killer, Robert Glass.
Corcoran Group: This case confirmed parties' duties to preserve digital evidence when litigation has commenced or is reasonably anticipated. Hard drives were analyzed by a computer forensics expert; although the expert found no evidence of deletion on the drives, other evidence showed that the defendants had intentionally destroyed emails.
Dr. Conrad Murray: Dr. Conrad Murray, the doctor of Michael Jackson, was convicted partially by digital evidence, including medical documentation showing lethal amounts of propofol.
Forensic process
Computer forensic investigations typically follow the standard digital forensic process, consisting of four phases: acquisition, examination, analysis, and reporting. Investigations are usually performed on static data (i.e., acquired images) rather than "live" systems. This differs from early forensic practices, when a lack of specialized tools often required investigators to work on live data.
Computer forensics lab
The computer forensics lab is a secure environment where electronic data can be preserved, managed, and accessed under controlled conditions, minimizing the risk of damage or alteration to the evidence. Forensic examiners are provided with the resources necessary to extract meaningful data from the devices they examine.
Techniques
Various techniques are used in computer forensic investigations, including:
Cross-drive analysis
This technique correlates information found on multiple hard drives and can be used to identify social networks or detect anomalies.
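A toy sketch of the idea, assuming identifiers (email addresses, phone numbers, and so on) have already been extracted from each drive:

```python
from itertools import combinations

def cross_drive_hits(drives: dict) -> dict:
    """drives maps a drive label to the set of identifiers extracted
    from it. Returns the identifiers shared by each pair of drives,
    which can hint at a social network linking their owners."""
    hits = {}
    for (a, ids_a), (b, ids_b) in combinations(drives.items(), 2):
        common = ids_a & ids_b
        if common:
            hits[(a, b)] = common
    return hits

print(cross_drive_hits({"HD1": {"x@example.com", "y@example.com"},
                        "HD2": {"y@example.com"}}))
```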
Live analysis
The examination of computers from within the operating system using forensic or existing sysadmin tools to extract evidence. This technique is particularly useful for dealing with encrypting file systems where encryption keys can be retrieved, or for imaging the logical hard drive volume (a live acquisition) before shutting down the computer. Live analysis is also beneficial when examining networked systems or cloud-based devices that cannot be accessed physically.
Deleted files
A common forensic technique involves recovering deleted files. Most operating systems and file systems do not erase the physical file data, allowing investigators to reconstruct it from the physical disk sectors. Forensic software can "carve" files by searching for known file headers and reconstructing deleted data.
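A minimal header-based carving sketch for one format (JPEG streams open with the SOI marker FF D8 FF and close with the EOI marker FF D9; production carvers handle fragmentation, nested thumbnails, and many more formats):

```python
JPEG_SOI = b"\xff\xd8\xff"
JPEG_EOI = b"\xff\xd9"

def carve_jpegs(image_path: str, max_size: int = 10 * 1024 * 1024):
    """Scan a raw disk image for byte ranges that look like whole
    JPEG files and return them as a list of byte strings."""
    with open(image_path, "rb") as f:
        data = f.read()
    results = []
    start = data.find(JPEG_SOI)
    while start != -1:
        end = data.find(JPEG_EOI, start + len(JPEG_SOI))
        if end == -1:
            break
        end += len(JPEG_EOI)
        if end - start <= max_size:
            results.append(data[start:end])
        start = data.find(JPEG_SOI, start + 1)
    return results
```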
Stochastic forensics
This method leverages the stochastic properties of a system to investigate activities without traditional digital artifacts, often useful in cases of data theft.
Steganography
Steganography involves concealing data within another file, such as hiding illegal content within an image. Forensic investigators detect steganography by comparing file hashes, as any hidden data will alter the hash value of the file.
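A minimal sketch, assuming a known-good original of the file is available for comparison (without an original, a hash by itself reveals nothing; the file names are placeholders):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Any embedded payload changes the file's bytes, so the digests differ.
tampered = sha256_of("suspect.jpg") != sha256_of("original.jpg")
```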
Mobile device forensics
Phone logs
Phone companies typically retain logs of received calls, which can help create timelines and establish suspects' locations at the time of a crime.
Contacts
Contact lists are useful in narrowing down suspects based on their connections to the victim.
Text messages
Text messages contain timestamps and remain in company servers, often indefinitely, even if deleted from the device. These records are valuable evidence for reconstructing communication between individuals.
Photos
Photos can provide critical evidence, supporting or disproving alibis by showing the location and time they were taken.
Audio recordings
Some victims may have recorded pivotal moments, capturing details like the attacker's voice, which could provide crucial evidence.
Volatile data
Volatile data is stored in memory or in transit and is lost when the computer is powered down. It resides in locations such as registries, cache, and RAM. The investigation of volatile data is referred to as "live forensics."
When seizing evidence, if a machine is still active, volatile data stored solely in RAM may be lost if not recovered before shutting down the system. "Live analysis" can be used to recover RAM data (e.g., using Microsoft's COFEE tool, WinDD, WindowsSCOPE) before removing the machine. Tools like CaptureGUARD Gateway allow for the acquisition of physical memory from a locked computer.
RAM data can sometimes be recovered after power loss, as the electrical charge in memory cells dissipates slowly. Techniques like the cold boot attack exploit this property. Lower temperatures and higher voltages increase the chance of recovery, but it is often impractical to implement these techniques in field investigations.
Tools that extract volatile data often require the computer to be in a forensic lab to maintain the chain of evidence. In some cases, a live desktop can be transported using tools like a mouse jiggler to prevent sleep mode and an uninterruptible power supply (UPS) to maintain power.
Page files from file systems with journaling features, such as NTFS and ReiserFS, can also be reassembled to recover RAM data stored during system operation.
Analysis tools
Numerous open-source and commercial tools exist for computer forensics. Common forensic analysis includes manual reviews of media, Windows registry analysis, password cracking, keyword searches, and the extraction of emails and images. Tools such as Autopsy, Belkasoft Evidence Center, Forensic Toolkit (FTK), and EnCase are widely used in digital forensics.
Professional education and careers
Digital forensics analyst
A digital forensics analyst is responsible for preserving digital evidence, cataloging collected evidence, analyzing evidence relevant to the ongoing case, responding to cyber breaches (often in a corporate context), writing reports containing findings, and testifying in court. A digital forensic analyst may also be referred to as a computer forensic analyst, digital forensic examiner, cyber forensic analyst, forensic technician, or other similarly named titles, though these roles perform similar duties.
Certifications
Several computer forensics certifications are available, such as the ISFCE Certified Computer Examiner, Digital Forensics Investigation Professional (DFIP), and IACRB Certified Computer Forensics Examiner. The top vendor-independent certification, particularly within the EU, is the Certified Cyber Forensics Professional (CCFP).
Many commercial forensic software companies also offer proprietary certifications.
See also
Certified Forensic Computer Examiner
Counter forensics
Cryptanalysis
Cyber attribution
Data remanence
Disk encryption
Encryption
Hidden file and hidden directory
Information technology audit
MAC times
Steganalysis
United States v. Arnold
References
Further reading
A Practice Guide to Computer Forensics, First Edition (Paperback) by David Benton (Author), Frank Grindstaff (Author)
Incident Response and Computer Forensics, Second Edition (Paperback) by Chris Prosise (Author), Kevin Mandia (Author), Matt Pepe (Author) "Truth is stranger than fiction..." (more)
Related journals
IEEE Transactions on Information Forensics and Security
Journal of Digital Forensics, Security and Law
International Journal of Digital Crime and Forensics
Journal of Digital Investigation
International Journal of Digital Evidence
International Journal of Forensic Computer Science
Journal of Digital Forensic Practice
Cryptologia
Small Scale Digital Device Forensic Journal
Computer security procedures
Information technology audit | Computer forensics | [
"Engineering"
] | 2,332 | [
"Cybersecurity engineering",
"Computer security procedures",
"Computer forensics"
] |
615,814 | https://en.wikipedia.org/wiki/Fiberscope | A fiberscope is a flexible optical fiber bundle with a lens on one end and an eyepiece or camera on the other. It is used to examine and inspect small, difficult-to-reach places such as the insides of machines, locks, and the human body.
History
Guiding of light by refraction, the principle that makes fiber optics possible, was first demonstrated by Daniel Colladon and Jacques Babinet in Paris in the early 1840s. Then in 1930, Heinrich Lamm, a German medical student, became the first person to put together a bundle of optical fibers to carry an image. These discoveries led to the invention of endoscopes and fiberscopes.
In the 1960s the endoscope was upgraded with glass fiber, a flexible material that could transmit light even when bent. While this provided users with the capability of real-time observation, it did not provide them with the ability to take photographs. In 1964 the fiberscope, the first gastro camera, was invented. It was the first time an endoscope had a camera that could take pictures. This innovation led to more careful observations and more accurate diagnoses.
Optics
Fiberscopes work by utilizing the science of fiber-optic bundles, which consist of numerous fiber-optic cables. Fiber-optic cables are made of optically pure glass and are as thin as a human hair. The three main components of a fiber-optic cable are:
core – the center made of high purity glass
cladding – the outer material surrounding the core that prevents light from leaking
buffer coating – the protective plastic coating
The following are the two different types of fiber-optic bundles in a fiberscope:
illumination bundle – designed to carry light to the area in front of the lens
imaging bundle – designed to carry an image from the lens to the eyepiece
Total internal reflection
Fiber-optic cables use total internal reflection to carry information. When light passes from one medium to another it is refracted. If the light travels from a less dense medium (lower refractive index) into a denser one (higher refractive index), it is refracted toward the normal; the opposite applies when light travels from a denser medium into a less dense one. In optic cables, light travels through the dense glass core (high refractive index) by constantly reflecting off the less dense cladding (lower refractive index). This happens because the core-cladding boundary acts like a perfect mirror whenever the angle of incidence exceeds the critical angle.
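As a worked example (the refractive indices below are typical values for glass fiber, chosen for illustration rather than taken from this article), Snell's law gives the critical angle at the core-cladding boundary:

```latex
\theta_c = \arcsin\!\left(\frac{n_{\mathrm{cladding}}}{n_{\mathrm{core}}}\right)
         = \arcsin\!\left(\frac{1.46}{1.48}\right) \approx 80.6^\circ
```

Rays striking the boundary at more than about 80.6° from the normal are totally reflected, which is why the light stays inside the core even as the fiber bends.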
Components
Eyepiece – Magnifies the image carried back by the imaging bundle so the human eye can view it.
Imaging bundle – Continuous strand of flexible glass fibers that transmit the image to the eyepiece.
Distal lens – The combination of micro lenses that take images and focus them into the small imaging bundle.
Illumination system – A fiber optic light guide that relays light from the source to the target area.
Articulation system – The ability of the user to control the movement of the bending section of the fiberscope that is directly attached to the distal lens.
Fiberscope body – The control section, designed to aid one-handed operation.
Insertion tube – Most of the length of the fiberscope, made to be durable and flexible. This protects the optical fiber bundle and the articulation cables.
Bending section – The most flexible part of the fiberscope, it connects the insertion tube to the distal viewing section.
Distal section – Where the ending points of both the illumination and imaging fiber bundle are.
Medical applications
Fiberscopes are used in the medical field as a tool to help doctors and surgeons examine problems in a patient’s body without having to make large incisions. This procedure is called an endoscopy. Doctors use this when they suspect that a patient’s organ is infected, damaged, or cancerous. There are numerous types based on the area of the body being examined. They include:
Arthroscopy – Joints
Bronchoscopy – Lungs
Colonoscopy – Colon
Cystoscopy – Bladder
Enteroscopy – Small Intestine
Hysteroscopy – Uterus
Laparoscopy – Abdomen/Pelvis
Laryngoscopy – Larynx (voice box)
Mediastinoscopy – Area between lungs
Upper Gastrointestinal Endoscopy – Esophagus and upper intestinal tract
Although any medical technique has its potential risks, using a fiberscope for endoscopy has a very low risk of causing infection and blood loss.
Other applications
Locksmiths use fiberscopes to check the position of pins. Technicians and inspectors use fiberscopes to look at the inside of machines without having to disassemble them. Fiberscopes can also be used in a military or police application to check beneath doors or around corners, or otherwise perform surveillance or reconnaissance.
In popular media
The 1982 film Who Dares Wins, about the Special Air Service, depicted the use of fiberscopes in counterterrorism.
Fiberscopes are an important tool in tactical shooter video games such as Tom Clancy's Rainbow Six, Splinter Cell, SWAT, Ready or Not, and Door Kickers, where they are used to check under doors or around corners without revealing the player's position or exposing them to enemy attack.
See also
Borescope
Endoscope
Ulexite or "TV rock", a naturally occurring fiber bundle
Hidden camera
References
Endoscopes
Medical equipment
Optical devices
Fiber optics
Danish inventions | Fiberscope | [
"Materials_science",
"Engineering",
"Biology"
] | 1,102 | [
"Glass engineering and science",
"Optical devices",
"Medical equipment",
"Medical technology"
] |
615,823 | https://en.wikipedia.org/wiki/Borescope | A borescope (occasionally called a boroscope, though this spelling is nonstandard) is an optical instrument designed to assist visual inspection of narrow, difficult-to-reach cavities, consisting of a rigid or flexible tube with an eyepiece or display on one end, an objective lens or camera on the other, linked together by an optical or electrical system in between. The optical system in some instances is accompanied by (typically fiberoptic) illumination to enhance brightness and contrast. An internal image of the illuminated object is formed by the objective lens and magnified by the eyepiece which presents it to the viewer's eye.
Rigid or flexible borescopes may be externally linked to a photography or videography device. For medical use, similar instruments are called endoscopes.
Uses
Borescopes are used for visual inspection work where the target area is inaccessible by other means, or where accessibility may require destructive, time consuming and/or expensive dismounting activities. Similar devices for use inside the human body are referred to as endoscopes. Borescopes are mostly used in nondestructive testing techniques for recognizing defects or imperfections.
Borescopes are commonly used in the visual inspection of aircraft engines, aeroderivative industrial gas turbines, steam turbines, diesel engines, and automotive and truck engines. Gas and steam turbines require particular attention because of safety and maintenance requirements. Borescope inspection of engines can be used to prevent unnecessary maintenance, which can become extremely costly for large turbines. They are also used in manufacturing of machined or cast parts to inspect critical interior surfaces for burrs, surface finish or complete through-holes. Other common uses include forensic applications in law enforcement and building inspection, and in gunsmithing for inspecting the interior bore of a firearm. In World War II, primitive rigid borescopes were used to examine the interior bores (hence "bore" scope) of large guns for defects.
Flexible versus rigid
The traditional flexible borescope includes a bundle of optical fibers which divide the image into pixels. It is also known as a fiberscope and can be used to access cavities which are around a bend, such as a combustion chamber or "burner can", in order to view the condition of the compressed air inlets, turbine blades and seals without disassembling the engine. Traditional flexible borescopes suffer from pixelation and pixel crosstalk due to the fiber image guide. Image quality varies widely among different models of flexible borescopes depending on the number of fibers and construction used in the fiber image guide. Some high-end borescopes offer a "visual grid" on image captures to assist in evaluating the size of any area with a problem. For flexible borescopes, articulation mechanism components, range of articulation, field of view and angles of view of the objective lens are also important. Fiber content in the flexible relay is also critical to provide the highest possible resolution to the viewer; the minimum for a usable image is about 10,000 pixels, while the best images are obtained with higher fiber counts, in the 15,000 to 22,000 range for larger-diameter borescopes. The ability to control the light at the end of the insertion tube allows the borescope user to make adjustments that can greatly improve the clarity of video or still images.
Rigid borescopes are similar to fiberscopes but generally provide a superior image at lower cost compared to a flexible borescope. Rigid borescopes have the limitation that access to what is to be viewed must be in a straight line. Rigid borescopes are therefore better suited to certain tasks such as inspecting automotive cylinders, fuel injectors and hydraulic manifold bodies, and gunsmithing. Criteria for selecting a borescope are usually image clarity and access. For similar-quality instruments, the largest rigid borescope that will fit the hole gives the best image. Optical systems in rigid borescopes can be of three basic types: Harold Hopkins rod lenses, achromatic doublets, and gradient index rod lenses. For large-diameter borescopes (over ), the achromatic doublet relays work quite well, but as the diameter of the borescope tube gets smaller the Hopkins rod lens and gradient index rod lens designs provide superior images. For very small rigid borescopes (under ), the gradient index lens relays are better.
Video borescopes
A video borescope, videoscope, or "inspection camera" is similar to the flexible borescope but uses a miniature video camera at the end of the flexible tube. The end of the insertion tube includes a light which makes it possible to capture video or still images deep within equipment, engines and other dark spaces. As a tool for remote visual inspection the ability to capture video or still images for later inspection is a huge benefit. A display at the other end shows the camera view, and in some models the viewing position can be changed via a joystick or similar control. Because a complex fiber optic waveguide in a traditional borescope is replaced with an inexpensive electrical cable, video borescopes can be much less costly and can potentially offer better resolution (depending on the specifications of the camera). Easy-to-use, battery-powered video borescopes, with LCD displays of 320×240 pixels or better, became available from several manufacturers and are adequate for some applications. On many of these models, the video camera and flexible tube is submersible. Later models offered improved features, such as better resolution, adjustable illumination or replacing the built-in display with a computer connection, such as a USB cable.
References
Endoscopes
Optical devices
Nondestructive testing
Medical equipment | Borescope | [
"Materials_science",
"Engineering",
"Biology"
] | 1,149 | [
"Glass engineering and science",
"Optical devices",
"Medical equipment",
"Nondestructive testing",
"Materials testing",
"Medical technology"
] |
615,845 | https://en.wikipedia.org/wiki/Alchemical%20symbol | Alchemical symbols were used to denote chemical elements and compounds, as well as alchemical apparatus and processes, until the 18th century. Although notation was partly standardized, style and symbol varied between alchemists. Lüdy-Tenger published an inventory of 3,695 symbols and variants, and that was not exhaustive, omitting for example many of the symbols used by Isaac Newton. This page therefore lists only the most common symbols.
Three primes
According to Paracelsus (1493–1541), the three primes or tria prima – of which material substances are immediately composed – are:
Sulfur or soul, the principle of combustibility: 🜍
Mercury or spirit, the principle of fusibility and volatility: ☿
Salt or body, the principle of non-combustibility and non-volatility: 🜔
Four basic elements
Western alchemy makes use of the four classical elements. The symbols used for these are:
Air 🜁
Earth 🜃
Fire 🜂
Water 🜄
Seven
The seven metals known since Classical times in Europe were associated with the seven classical planets; this figured heavily in alchemical symbolism. The exact correlation varied over time, and in early centuries bronze or electrum were sometimes found instead of mercury, or copper for Mars instead of iron; however, gold, silver, and lead had always been associated with the Sun, Moon, and Saturn.
The associations below are attested from the 7th century and had stabilized by the 15th. They started breaking down with the discovery of antimony, bismuth, and zinc in the 16th century. Alchemists would typically call the metals by their planetary names, e.g. "Saturn" for lead, "Mars" for iron; compounds of tin, iron, and silver continued to be called "jovial", "martial", and "lunar"; or "of Jupiter", "of Mars", and "of the moon", through the 17th century. The tradition remains today with the name of the element mercury, where chemists decided the planetary name was preferable to common names like "quicksilver", and in a few archaic terms such as lunar caustic (silver nitrate) and saturnism (lead poisoning).
Lead, corresponding with Saturn ♄
Tin, corresponding with Jupiter ♃
Iron, corresponding with Mars ♂
Gold, corresponding with the Sun ☉ 🜚 ☼
Copper, corresponding with Venus ♀
Quicksilver, corresponding with Mercury ☿
Silver, corresponding with the Moon ☽ or ☾ [also 🜛 in Newton]
Mundane elements and later metals
Antimony ♁ (in Newton)
Arsenic 🜺
Bismuth ♆ (in Newton), 🜘 (in Bergman)
Cobalt (approximately 🜶) (in Bergman)
Manganese (in Bergman)
Nickel (in Bergman; previously used for regulus of sulfur)
Oxygen (in Lavoisier)
Phlogiston (in Bergman)
Phosphorus
Platinum (in Bergman et al.)
Sulfur 🜍 (in Newton)
Zinc (in Bergman)
Alchemical compounds
The following symbols, among others, have been adopted into Unicode.
Acid (incl. vinegar) 🜊
Sal ammoniac (ammonium chloride) 🜹
Aqua fortis (nitric acid) 🜅, A.F.
Aqua regia (nitro-hydrochloric acid) 🜆, 🜇, A.R.
Spirit of wine (concentrated ethanol; called aqua vitae or spiritus vini) 🜈, S.V. or 🜉
Amalgam (alloys of a metal and mercury) 🝛 = a͞a͞a, ȧȧȧ (among other abbreviations).
Cinnabar (mercury sulfide) 🜓
Vinegar (distilled) 🜋 (in Newton)
Vitriol (sulfates) 🜖
Black sulphur (residue from sublimation of sulfur) 🜏
Alchemical processes
The alchemical magnum opus was sometimes expressed as a series of chemical operations. In cases where these numbered twelve, each could be assigned one of the Zodiac signs as a form of cryptography. The following example can be found in Pernety's Dictionnaire mytho-hermétique (1758):
Calcination (Aries ♈︎)
Congelation (Taurus ♉︎)
Fixation (Gemini ♊︎)
Solution (Cancer ♋︎)
Digestion (Leo ♌︎)
Distillation (Virgo ♍︎)
Sublimation (Libra ♎︎)
Separation (Scorpio ♏︎)
Ceration (Sagittarius ♐︎)
Fermentation (Capricorn ♑︎) (Putrefaction)
Multiplication (Aquarius ♒︎)
Projection (Pisces ♓︎)
Units
Several symbols indicate units of time.
Month 🝱 or xXx
Day-Night 🝰
Hour 🝮
Unicode
The Alchemical Symbols block was added to Unicode in 2010 as part of Unicode 6.0.
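For example, the block occupies the range U+1F700 to U+1F77F, so any of the symbols can be produced from its codepoint (the specific codepoints below are assumed from the published Unicode chart):

```python
print(chr(0x1F701))  # 🜁 ALCHEMICAL SYMBOL FOR AIR
print(chr(0x1F70D))  # 🜍 ALCHEMICAL SYMBOL FOR SULFUR
```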
Gallery
A list of symbols published in 1931:
An 1888 reproduction of a Venetian list of medieval Greek alchemical symbols from about the year 1100 but circulating since about 300 and attributed to Zosimos of Panopolis. The list starts with 🜚 for gold and has early conventions that would later change: here ☿ is tin and ♃ electrum; ☾ is silver but ☽ is mercury. Many of the 'symbols' are simply abbreviations of the Greek word or phrase. View the files on Commons for the list of symbols.
See also
Other symbols commonly used in alchemy and related esoteric traditions:
Circled dot (disambiguation)
, as used by Hermetic theurgists
Footnotes
References
Works cited
External links
wikt:Appendix:Unicode/Alchemical Symbols
Alchemical symbols in Unicode
Lists of symbols | Alchemical symbol | [
"Mathematics"
] | 1,193 | [
"Symbols",
"Lists of symbols"
] |
615,924 | https://en.wikipedia.org/wiki/Windows%20XP%20Professional%20x64%20Edition | Windows XP Professional x64 Edition is an edition of Microsoft's Windows XP operating system that supports the x86-64 architecture. It was released on April 25, 2005, alongside the x86-64 versions of Windows Server 2003.
Windows XP Professional x64 Edition is designed to use the expanded 64-bit memory address space provided by the x86-64 64-bit extensions to the x86 IA-32 architecture, which was implemented by AMD as "AMD64", found in AMD's Opteron, Athlon 64 chips (and in selected Sempron processors), and implemented by Intel as "Intel 64" (formerly known as IA-32e and EM64T), found in some of Intel's Pentium 4 and most of Intel's later chips since the Core series.
Windows XP Professional x64 Edition uses the same kernel and code tree as Windows Server 2003 and is serviced by the same service packs. However, it includes client features of Windows XP such as System Restore, Windows Messenger, Fast User Switching, the Welcome Screen, Security Center and games, which Windows Server 2003 does not have.
During the initial development phases (2003–2004), Windows XP Professional x64 Edition was named Windows XP 64-Bit Edition for 64-Bit Extended Systems and later as Windows XP 64-Bit Edition for Extended Systems, as opposed to 64-Bit Edition for Itanium Systems for Windows XP 64-Bit Edition, as the latter was designed for the IA-64 (Itanium) architecture.
Features
Windows XP Professional x64 Edition offers a number of new and updated features not found in the main 32-bit x86 versions of Windows XP:
End-user
Internet Information Services (IIS) version 6.0, the same version that was included in Windows Server 2003, is included with Windows XP Professional x64 Edition. All other 32-bit editions of Windows XP have IIS v5.1.
Windows Media Player version 10, the version that came with Windows Server 2003 Service Pack 1, is included with Windows XP Professional x64 Edition. Windows XP Professional for x86 originally shipped with Windows Media Player version 8 from RTM to Service Pack 1 and later came with Windows Media Player 9 from Service Pack 2 onwards, with Windows XP Media Center Edition 2005 receiving Windows Media Player 10. Windows Media Player 11 is available for x86 versions of Windows XP Service Pack 2 or later.
Internet Protocol Security (IPsec) features and improvements made in Windows Server 2003 were included with Windows XP Professional x64 Edition.
Shadow Copy, a feature that automatically creates daily backups of files and folders, was first introduced in Windows Server 2003 and is available in Windows XP Professional x64 Edition.
Remote Desktop Services supports Unicode keyboard input, client-side time-zone redirection, GDI+ rendering primitives for improved performance, FIPS encryption, fallback printer driver, auto-reconnect and new Group Policy settings.
Files and Settings Transfer Wizard supports migrating settings from both 32-bit and 64-bit Windows XP PCs.
Core
Windows XP Professional x64 Edition is based on the Windows Server 2003 kernel and codebase, which is newer than 32-bit Windows XP (by about two years) and has improvements to enhance scalability. It also introduces Kernel Patch Protection (also known as PatchGuard) to improve security by helping to eliminate rootkits.
Advantages
The primary benefit of moving to 64-bit is the increase in the maximum allocatable random-access memory (RAM). 32-bit editions of Windows XP are limited to a total of 4 gigabytes. Although the theoretical memory limit of a 64-bit computer is about 16 exabytes (17.1 billion gigabytes), Windows XP Professional x64 Edition is limited to 128GB of physical memory and 16 terabytes of virtual memory.
Windows XP Professional x64 Edition also offers a number of benefits/advantages over the main 32-bit x86 versions of Windows XP:
Supports up to two physical CPUs (in separate physical sockets) and up to 64 logical processors (i.e. cores or threads on a single CPU). Windows XP Professional for x86 supported up to two physical CPUs but is limited to a maximum of 32 logical processors.
Supports GPT-partitioned disks for data volumes (but not bootable volumes) after SP1, which allows disks greater than 2TB to be used as a single GPT partition for storing data.
Allows for faster encoding of audio or video, higher video game performance and faster 3D rendering than with 32-bit versions of Windows XP, in 64-bit optimized software.
Immunity from certain types of viruses and malware targeted at 32-bit versions of Windows XP, as most system files are 64-bit.
Disadvantages/limitations
There are some limitations which apply to Windows XP Professional x64 Edition:
Only 64-bit drivers are supported.
Any 32-bit Windows Explorer shell extensions fail to work with the 64-bit version of Windows Explorer; however, Windows XP x64 Edition also ships with a 32-bit version of Windows Explorer, which can be made the default Windows shell.
No native support for Type 1 fonts.
IEEE 1394 (FireWire) audio is not supported.
Hibernation is not supported if the RAM is greater than 4GB. This would later be resolved by Windows 7.
EFI and/or UEFI are not supported. A BIOS with Advanced Configuration and Power Interface (ACPI) is required.
Only English and Japanese are provided as native display languages. Chinese, French, German, Italian, Japanese, Korean, Spanish and Swedish are available as Multilingual User Interface (MUI) packs for the English version.
Additionally, the extra registers of the x86-64 architecture can cause a slight decrease in performance with certain applications compared to the same application compiled in 32-bit only x86 code running on 32-bit versions of Windows XP.
Software compatibility
Windows XP Professional x64 Edition uses a technology named Windows-on-Windows 64-bit (WoW64), which permits the execution of 32-bit software. It was first used in Windows XP 64-bit Edition (for Itanium architecture). Later, it was adopted for x64 editions of Windows XP and Windows Server 2003.
Since the x86-64 architecture includes hardware-level support for 32-bit instructions, WoW64 simply switches the process between 32- and 64-bit modes. As a result, x86-64 architecture microprocessors suffer no performance loss when executing 32-bit Windows applications. On the Itanium architecture, WoW64 was required to translate 32-bit x86 instructions into their 64-bit Itanium equivalents—which in some cases were implemented in quite different ways—so that the processor could execute them. All 32-bit processes are shown with *32 in the task manager, while 64-bit processes have no extra text present.
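As an illustration, a 32-bit process can ask the kernel whether it is running under WoW64 through the documented IsWow64Process API (a Windows-only sketch):

```python
import ctypes
from ctypes import wintypes

def running_under_wow64() -> bool:
    """True when the current 32-bit process runs via WoW64 on a 64-bit
    edition of Windows; False on 32-bit Windows or when the process
    itself is 64-bit."""
    kernel32 = ctypes.windll.kernel32
    if not hasattr(kernel32, "IsWow64Process"):
        return False  # very old Windows versions lack the API
    flag = wintypes.BOOL()
    kernel32.IsWow64Process(kernel32.GetCurrentProcess(),
                            ctypes.byref(flag))
    return bool(flag.value)
```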
Although 32-bit applications can be run transparently, the mixing of the two types of code within the same process is not allowed. A 64-bit program cannot use a 32-bit dynamic-link library (DLL) and similarly a 32-bit program cannot use a 64-bit DLL. This may lead to the need for library developers to provide both 32-bit and 64-bit binary versions of their libraries. Specifically, 32-bit shell extensions for Windows Explorer fail to work with 64-bit Windows Explorer. Windows XP x64 Edition ships with both 32-bit and 64-bit versions of Windows Explorer. The 32-bit version can become the default Windows Shell. Windows XP x64 Edition also includes both 32-bit and 64-bit versions of Internet Explorer 6, so that users can still use browser extensions or ActiveX controls that are not available in 64-bit versions.
Only 64-bit drivers are supported in Windows XP x64 Edition, but 32-bit codecs are supported as long as the media player that uses them is 32-bit.
Installation of programs
By default, 64-bit (x86-64) Windows programs are installed onto their own folders under C:\Program Files, while 32-bit (x86/IA-32) Windows programs are installed onto their own folders under C:\Program Files (x86).
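A quick way to see both roots from a script (a sketch; the second variable exists only on 64-bit Windows, and a 32-bit process under WoW64 sees its own %ProgramFiles% redirected to the (x86) tree):

```python
import os

print(os.environ.get("ProgramFiles"))        # e.g. C:\Program Files
print(os.environ.get("ProgramFiles(x86)"))   # e.g. C:\Program Files (x86)
```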
Removed features
Some features are not included at all on Windows XP Professional x64 Edition. Most of them are inherited from Windows Server 2003 Service Pack 1 (the version that Windows XP Professional x64 Edition is based on), which includes some changes from Windows XP Service Pack 2 for x86:
NTVDM and Windows on Windows were removed, so 16-bit Windows applications or native MS-DOS applications cannot run. A similar case happened with all versions of Windows XP for Itanium. Some old 32-bit programs use 16-bit installers which do not run; however, replacements for 16-bit installers such as ACME Setup versions 2.6, 3.0, 3.01, 3.1 and InstallShield 5.x are hardcoded into WoW64 to mitigate this issue. This is true for later 64-bit versions of Windows.
COMMAND.COM, which is a command interpreter exclusive to MS-DOS and Windows 9x, is no longer included.
Program Manager was removed and replaced with Windows Explorer. The executable is still present, but it was replaced with a compatibility stub that redirects to Explorer. The executable itself would not be removed entirely until Windows Vista.
Win32 console programs (including Command Prompt) no longer load in full-screen mode. This also applies to later versions of Windows.
Media Bar, which replaced the Radio Toolbar in Internet Explorer 6, was removed.
The Web Extender Client component for Web Folders (WebDAV) was not included.
Spell checking in Outlook Express was removed.
Service packs
The RTM version of Windows XP Professional x64 Edition was built from the Windows Server 2003 Service Pack 1 codebase. Because Windows XP Professional x64 Edition comes from a different codebase than 32-bit Windows XP, its service packs are also developed separately. For the same reason, Service Pack 2 for Windows XP x64 Edition, released on March 13, 2007, is not the same as Service Pack 2 for 32-bit versions of Windows XP. In fact, due to the earlier release date of the 32-bit version, many of the key features introduced by Service Pack 2 for 32-bit (x86) editions of Windows XP were already present in the RTM version of its x64 counterpart. Service Pack 2 is the last released service pack for Windows XP Professional x64 Edition.
Upgradeability
A machine running Windows XP Professional x64 Edition cannot be directly upgraded to Windows Vista because the 64-bit Vista DVD mistakenly recognizes XP x64 as a 32-bit system. Windows XP x64 does qualify the customer to use an upgrade copy of Windows Vista or Windows 7, however it must be installed as a clean install. Despite this, there is a workaround available via third-party tools that makes upgrading from XP x64 to Windows Vista possible.
The last version of Microsoft Office to be officially compatible with Windows XP Professional x64 Edition is Office 2007, however Office 2010 can be unofficially installed by disguising the Windows version using Application Verifier. The last version of Internet Explorer compatible with Windows XP Professional x64 Edition is Internet Explorer 8 (Service Pack 2 is required).
Support lifecycle
Windows XP Professional x64 Edition follows the same support lifecycles as with all other versions of Windows XP. On April 14, 2009, Windows XP Professional x64 Edition's mainstream support expired and the extended support phase began. During the extended support phase, Microsoft continued to provide security updates; however, free technical support, warranty claims, and design changes are no longer being offered. Extended support lasted until April 8, 2014, in line with all other Windows XP editions. After this date, no more security patches or support information are offered.
Although Windows XP Professional x64 Edition is unsupported, Microsoft released an emergency security patch in May 2017 for the OS as well as other unsupported versions of Windows (including Windows Server 2003, Windows Vista and Windows 7 RTM without a service pack), to address a vulnerability that was being leveraged by the WannaCry ransomware attack. In May 2019, an emergency patch was released to address a critical code execution vulnerability in Remote Desktop Services which can be exploited in a similar way as the WannaCry vulnerability.
Microsoft announced in July 2019 that the Microsoft Internet Games services on Windows XP and Windows Me would end on July 31, 2019 (and for Windows 7 on January 22, 2020). Others, such as Steam, had done the same, ending support for Windows XP and Windows Vista in January 2019.
In 2020, Microsoft announced that it would disable the Windows Update service for SHA-1 endpoints. Since Windows XP Professional x64 Edition did not get an update for SHA-2, Windows Update Services are no longer available on the OS as of late July 2020. However, as of April 2021, the old updates for Windows XP Professional x64 Edition are still available on the Microsoft Update Catalog.
References
Further reading
External links
Windows XP
X86-64 operating systems
Microsoft Windows
| Windows XP Professional x64 Edition | [
"Technology"
] | 2,725 | [
"Computing platforms",
"Microsoft Windows"
] |
615,948 | https://en.wikipedia.org/wiki/James%20H.%20Ellis | James Henry Ellis (25 September 1924 – 25 November 1997) was a British engineer and cryptographer. Born in Australia but raised and educated in Britain, Ellis joined GCHQ in 1952. He worked on a number of cryptographic projects, but is credited with some of the original thinking that developed into the field of Public Key Cryptography (PKC).
Personal life
Ellis was born in Australia, but was raised in Britain and orphaned at an early age. He lived with his grandparents in London's East End. Ellis showed an early gift for mathematics and physics while attending grammar school in Leyton. He attended Imperial College London. In 1949, Ellis married Brenda, an artist and designer.
Development of non-secret encryption
Ellis first proposed his scheme for "non-secret encryption" in 1970, in a (then) secret GCHQ internal report "The Possibility of Secure Non-Secret Digital Encryption". Ellis said that the idea first occurred to him after reading a paper from World War II by someone at Bell Labs describing the scheme named Project C43, a way to protect voice communications by the receiver adding (and then later subtracting) random noise.
Clifford Cocks and Malcolm Williamson, two other GCHQ cryptographers, furthered Ellis's initial PKC-related work. Because all of this work prior to 1997 was classified, it never became part of the mainstream initiatives, such as Diffie–Hellman key exchange and RSA, that developed into the modern commercial public-key cryptography underpinning Internet security.
On 18 December 1997, Clifford Cocks delivered a public talk which contained a brief history of GCHQ's contribution to PKC. In March 2016, Robert Hannigan, the director of GCHQ made a speech at MIT re-emphasising GCHQ's early contribution to public-key cryptography and in particular the contributions of Ellis, Cocks and Williamson.
References
External links
Ellis, J.H., The possibility of secure non-secret digital encryption, CSEG Report 3006, January 1970.
Ellis, J.H., The possibility of secure non-secret analogue encryption, CSEG Report 3007, May 1970.
Alumni of Imperial College London
GCHQ cryptographers
History of computing in the United Kingdom
People from Leytonstone
Public-key cryptographers
1924 births
1997 deaths
Engineers from London | James H. Ellis | [
"Technology"
] | 499 | [
"History of computing",
"History of computing in the United Kingdom"
] |
616,019 | https://en.wikipedia.org/wiki/Distributivity%20%28order%20theory%29 | In the mathematical area of order theory, there are various notions of the common concept of distributivity, applied to the formation of suprema and infima. Most of these apply to partially ordered sets that are at least lattices, but the concept can in fact reasonably be generalized to semilattices as well.
Distributive lattices
Probably the most common type of distributivity is the one defined for lattices, where the formation of binary suprema and infima provide the total operations of join (∨) and meet (∧). Distributivity of these two operations is then expressed by requiring that the identity

x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)

hold for all elements x, y, and z. This distributivity law defines the class of distributive lattices. Note that this requirement can be rephrased by saying that binary meets preserve binary joins. The above statement is known to be equivalent to its order dual

x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)
such that one of these properties suffices to define distributivity for lattices. Typical examples of distributive lattice are totally ordered sets, Boolean algebras, and Heyting algebras. Every finite distributive lattice is isomorphic to a lattice of sets, ordered by inclusion (Birkhoff's representation theorem).
Distributivity for semilattices
A semilattice is a partially ordered set with only one of the two lattice operations, either a meet- or a join-semilattice. Given that there is only one binary operation, distributivity obviously cannot be defined in the standard way. Nevertheless, because of the interaction of the single operation with the given order, the following definition of distributivity remains possible. A meet-semilattice is distributive, if for all a, b, and x:
If a ∧ b ≤ x, then there exist a′ and b′ such that a ≤ a′, b ≤ b′ and x = a′ ∧ b′.
Distributive join-semilattices are defined dually: a join-semilattice is distributive, if for all a, b, and x:
If x ≤ a ∨ b, then there exist a′ and b′ such that a′ ≤ a, b′ ≤ b and x = a′ ∨ b′.
In either case, a′ and b′ need not be unique.
These definitions are justified by the fact that given any lattice L, the following statements are all equivalent:
L is distributive as a meet-semilattice
L is distributive as a join-semilattice
L is a distributive lattice.
Thus any distributive meet-semilattice in which binary joins exist is a distributive lattice.
A join-semilattice is distributive if and only if the lattice of its ideals (under inclusion) is distributive.
This definition of distributivity allows generalizing some statements about distributive lattices to distributive semilattices.
Distributivity laws for complete lattices
For a complete lattice, arbitrary subsets have both infima and suprema and thus infinitary meet and join operations are available. Several extended notions of distributivity can thus be described. For example, for the infinite distributive law, finite meets may distribute over arbitrary joins, i.e.

x ∧ ⋁S = ⋁{ x ∧ s | s ∈ S }

may hold for all elements x and all subsets S of the lattice. Complete lattices with this property are called frames, locales or complete Heyting algebras. They arise in connection with pointless topology and Stone duality. This distributive law is not equivalent to its dual statement

x ∨ ⋀S = ⋀{ x ∨ s | s ∈ S }
which defines the class of dual frames or complete co-Heyting algebras.
Now one can go even further and define orders where arbitrary joins distribute over arbitrary meets. Such structures are called completely distributive lattices. However, expressing this requires formulations that are a little more technical. Consider a doubly indexed family {xj,k | j in J, k in K(j)} of elements of a complete lattice, and let F be the set of choice functions f choosing for each index j of J some index f(j) in K(j). A complete lattice is completely distributive if for all such data the following statement holds:

⋀j in J ⋁k in K(j) xj,k = ⋁f in F ⋀j in J xj,f(j)

Complete distributivity is again a self-dual property, i.e. dualizing the above statement yields the same class of complete lattices. Completely distributive complete lattices (also called completely distributive lattices for short) are indeed highly special structures. See the article on completely distributive lattices.
Distributive elements in arbitrary lattices
In an arbitrary lattice, an element x is called a distributive element if ∀y,z: x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z).
An element x is called a dual distributive element if ∀y,z: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z).
In a distributive lattice, every element is of course both distributive and dual distributive.
In a non-distributive lattice, there may be elements that are distributive, but not dual distributive (and vice versa).
For example, in the pentagon lattice N5, labeled so that 0 < z < x < 1 and 0 < y < 1 with y incomparable to x and z, the element x is distributive, but not dual distributive, since x ∧ (y ∨ z) = x ∧ 1 = x ≠ z = 0 ∨ z = (x ∧ y) ∨ (x ∧ z).
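A brute-force check of this example (a sketch that assumes the labeling above and computes meets and joins directly from the order relation):

```python
from itertools import product

# Pentagon N5: 0 < z < x < 1 and 0 < y < 1, y incomparable to x and z.
elems = ["0", "z", "x", "y", "1"]
order = {(a, a) for a in elems} | {
    ("0", "z"), ("0", "x"), ("0", "y"), ("0", "1"),
    ("z", "x"), ("z", "1"), ("x", "1"), ("y", "1"),
}
leq = lambda a, b: (a, b) in order

def meet(a, b):  # greatest lower bound
    lower = [c for c in elems if leq(c, a) and leq(c, b)]
    return max(lower, key=lambda c: sum(leq(d, c) for d in lower))

def join(a, b):  # least upper bound
    upper = [c for c in elems if leq(a, c) and leq(b, c)]
    return max(upper, key=lambda c: sum(leq(c, d) for d in upper))

# x is distributive: x ∨ (b ∧ c) = (x ∨ b) ∧ (x ∨ c) for all b, c ...
assert all(join("x", meet(b, c)) == meet(join("x", b), join("x", c))
           for b, c in product(elems, repeat=2))
# ... but not dual distributive: x ∧ (y ∨ z) = x, yet (x∧y) ∨ (x∧z) = z.
print(meet("x", join("y", "z")), join(meet("x", "y"), meet("x", "z")))
```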
In an arbitrary lattice L, the following are equivalent:
x is a distributive element;
The map φ defined by φ(y) = x ∨ y is a lattice homomorphism from L to the upper closure ↑x = { y ∈ L: x ≤ y };
The binary relation Θx on L defined by y Θx z if x ∨ y = x ∨ z is a congruence relation, that is, an equivalence relation compatible with ∧ and ∨.
In an arbitrary lattice, if x1 and x2 are distributive elements, then so is x1 ∨ x2.
Literature
Distributivity is a basic concept that is treated in any textbook on lattice and order theory. See the literature given for the articles on order theory and lattice theory. More specific literature includes:
G. N. Raney, Completely distributive complete lattices, Proceedings of the American Mathematical Society, 3: 677 - 680, 1952.
References
Order theory | Distributivity (order theory) | [
"Mathematics"
] | 1,377 | [
"Order theory"
] |
616,048 | https://en.wikipedia.org/wiki/Andrey%20Tikhonov%20%28mathematician%29 | Andrey Nikolayevich Tikhonov (; 17 October 1906 – 7 October 1993) was a leading Soviet Russian mathematician and geophysicist known for important contributions to topology, functional analysis, mathematical physics, and ill-posed problems. He was also one of the inventors of the magnetotellurics method in geophysics. Other transliterations of his surname include "Tychonoff", "Tychonov", "Tihonov", "Tichonov".
Biography
Born in Gzhatsk, he studied at the Moscow State University where he received a Ph.D. in 1927 under the direction of Pavel Sergeevich Alexandrov. In 1933 he was appointed as a professor at Moscow State University. He became a corresponding member of the USSR Academy of Sciences on 29 January 1939 and a full member of the USSR Academy of Sciences on 1 July 1966.
Research work
Tikhonov worked in a number of different fields in mathematics. He made important contributions to topology, functional analysis, mathematical physics, and certain classes of ill-posed problems. Tikhonov regularization, one of the most widely used methods to solve ill-posed inverse problems, is named in his honor. He is best known for his work on topology, including the metrization theorem he proved in 1926, and the Tychonoff's theorem, which states that every product of arbitrarily many compact topological spaces is again compact. In his honor, completely regular topological spaces are also named Tychonoff spaces.
In mathematical physics, he proved the fundamental uniqueness theorems for the heat equation and studied Volterra integral equations.
He founded the theory of asymptotic analysis for differential equations with small parameter in the leading derivative.
Organizer work
Tikhonov played the leading role in founding the Faculty of Computational Mathematics and Cybernetics of Moscow State University and served as its first dean during the period of 1970–1990.
Awards
Tikhonov received numerous honors and awards for his work, including the Lenin Prize (1966) and the Hero of Socialist Labor (1954, 1986).
Publications
Books
Papers
See also
Regularization
Stone–Čech compactification
Tikhonov cube
Tikhonov distribution
Tikhonov plank
Tikhonov space
Tikhonov's theorem on dynamical systems
References
External links
1906 births
1993 deaths
Soviet mathematicians
Topologists
Full Members of the USSR Academy of Sciences
Full Members of the Russian Academy of Sciences
Moscow State University alumni
Heroes of Socialist Labour
Members of the German Academy of Sciences at Berlin
Soviet inventors
Academic staff of Moscow State University
Russian scientists | Andrey Tikhonov (mathematician) | [
"Mathematics"
] | 530 | [
"Topologists",
"Topology"
] |
616,196 | https://en.wikipedia.org/wiki/Quanta%20Computer | Quanta Computer Incorporated is a Taiwan-based manufacturer of notebook computers and other electronic hardware. Its customers include Apple Inc., Dell, Hewlett-Packard Inc., Acer Inc., Alienware, Amazon.com, Cisco, Fujitsu, Gericom, Lenovo, LG, Maxdata, Microsoft, MPC, BlackBerry Ltd, Sharp Corporation, Siemens AG, Sony, Sun Microsystems, Toshiba, Valve, Verizon Wireless, and Vizio.
Quanta has extended its businesses into enterprise network systems, home entertainment, mobile communication, automotive electronics, and digital home markets. The company also designs, manufactures and markets GPS systems, including handheld GPS, in-car GPS, Bluetooth GPS and GPS with other positioning technologies.
Quanta Computer was announced as the original design manufacturer (ODM) for the XO-1 by the One Laptop per Child project on December 13, 2005, and took an order for one million laptops as of February 16, 2007. In October 2008, it was announced that Acer would phase out Quanta from the production chain, and instead outsource manufacturing of 15 million Aspire One netbooks to Compal Electronics.
In 2011, Quanta designed servers in conjunction with Facebook as part of the Open Compute Project.
It was estimated that Quanta had a 31% worldwide market share of notebook computers in the first quarter of 2008.
History
The firm was founded in 1988 by Barry Lam, a Shanghai-born businessman who grew up in Hong Kong and received his education in Taiwan, with a starting capital of less than $900,000. A first notebook prototype was completed in November 1988, with factory production beginning in 1990.
Throughout the 1990s, Quanta established contracts with Apple Computers and Gateway, among others, opening an after-sales office in California in 1991 and another one in Augsburg, Germany in 1994. In 1996, Quanta signed a contract with Dell, making the firm Quanta's largest customer at the time.
In 2014, Quanta ranked 409th on Fortune's Global 500 list. Its strongest showing came in 2016, when it ranked 326th. By 2020, it had slipped to 377th.
Products
Apple Watch
Apple MacBook Air
Apple MacBook Pro
ThinkPad Z60m
Subsidiaries
Subsidiaries of Quanta Computer include:
Quanta Cloud Technology Inc - provider of data center hardware.
FaceVsion Technology Inc - telecommunications, webcam, and electronic products.
CloudCast Technology Inc - information software and data processing - liquidated in February 2017.
TWDT Precision Co., Ltd. (TWDT) - 55% ownership, which was sold in June 2016.
RoyalTek International - became a member of Quanta in January 2006, giving Quanta a top-down integration of technology and manufacturing, with factories in Taiwan and Shanghai.
Techman Robot Inc.
Techman Robot Inc. is a cobot manufacturer founded by Quanta in 2016. It is based in Taoyuan's Hwa Ya Technology Park. It is the world's second-largest manufacturer of collaborative robots after Universal Robots.
Major facilities
Shanghai, China (QSMC)
This was the first mainland China plant built by Quanta Computer in December 2000 to focus on OEM and ODM production and currently employs nearly 30,000 people. Huangjian Tang, Quanta's Chairman for China, manages seven major plants, F1 to F7, two large warehouses, H1 and H2, and the Q-BUS Research and Development facility.
Chongqing, China (QCMC)
Constructed in April 2010, this Chongqing plant was the third that Quanta Computer invested in and built in China.
Court case
In 2008, LG Electronics sued Quanta Computer for patent infringement after Quanta combined Intel components with non-Intel components. The Supreme Court of the United States ruled that LG, which had a patent-sharing deal with Intel, did not have the right to sue: under the doctrine of patent exhaustion, Quanta, as a purchaser of the licensed Intel components, was not bound by the patent agreements between Intel and LG.
See also
List of companies of Taiwan
References
External links
Quanta market share
Computer hardware companies
Computer systems companies
Companies based in Taoyuan City
Manufacturing companies established in 1988
Technology companies established in 1988
Taiwanese companies established in 1988
Electronics manufacturing companies | Quanta Computer | [
"Technology"
] | 887 | [
"Computer hardware companies",
"Computer systems companies",
"Computers",
"Computer systems"
] |
616,293 | https://en.wikipedia.org/wiki/Soft%20matter | Soft matter or soft condensed matter is a type of matter that can be deformed or structurally altered by thermal or mechanical stress which is of similar magnitude to thermal fluctuations.
The science of soft matter is a subfield of condensed matter physics. Soft materials include liquids, colloids, polymers, foams, gels, granular materials, liquid crystals, flesh, and a number of biomaterials. These materials share an important common feature in that predominant physical behaviors occur at an energy scale comparable with room temperature thermal energy (of order of kT), and that entropy is considered the dominant factor. At these temperatures, quantum aspects are generally unimportant. When soft materials interact favorably with surfaces, they become squashed without an external compressive force.
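To put a number on this energy scale, a short computation (illustrative, not from the article) of the room-temperature thermal energy kT:

```python
# Room-temperature thermal energy kT, the scale against which soft-matter
# interaction energies are compared (T = 298 K is an assumed value).
from scipy.constants import Boltzmann, eV

T = 298.0
kT = Boltzmann * T
print(f"kT = {kT:.2e} J = {kT / eV * 1000:.1f} meV")
# -> kT = 4.11e-21 J = 25.7 meV, far below covalent bond energies (~ eV),
#    so thermal fluctuations alone can rearrange soft-matter structures.
```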
Pierre-Gilles de Gennes, who has been called the "founding father of soft matter," received the Nobel Prize in Physics in 1991 for discovering that methods developed for studying order phenomena in simple systems can be generalized to the more complex cases found in soft matter, in particular, to the behaviors of liquid crystals and polymers.
History
The current understanding of soft matter grew from Albert Einstein's work on Brownian motion, understanding that a particle suspended in a fluid must have a similar thermal energy to the fluid itself (of order of kT). This work built on established research into systems that would now be considered colloids.
The crystalline optical properties of liquid crystals and their ability to flow were first described by Friedrich Reinitzer in 1888, and further characterized by Otto Lehmann in 1889. The experimental setup that Lehmann used to investigate the two melting points of cholesteryl benzoate is still used in the research of liquid crystals as of about 2019.
In 1920, Hermann Staudinger, recipient of the 1953 Nobel Prize in Chemistry, was the first person to suggest that polymers are formed through covalent bonds that link smaller molecules together. The idea of a macromolecule was unheard of at the time, with the scientific consensus being that the recorded high molecular weights of compounds like natural rubber were instead due to particle aggregation.
The use of hydrogel in the biomedical field was pioneered in 1960 by Drahoslav Lím and Otto Wichterle. Together, they postulated that the chemical stability, ease of deformation, and permeability of certain polymer networks in aqueous environments would have a significant impact on medicine, and were the inventors of the soft contact lens.
These seemingly separate fields were dramatically influenced and brought together by Pierre-Gilles de Gennes. The work of de Gennes across different forms of soft matter was key to understanding its universality, whereby material properties depend not on the chemistry of the underlying structure but rather on the mesoscopic structures that chemistry creates. He extended the understanding of phase changes in liquid crystals, introduced the idea of reptation regarding the relaxation of polymer systems, and successfully mapped polymer behavior to that of the Ising model.
Distinctive physics
Interesting behaviors arise from soft matter in ways that cannot be predicted, or are difficult to predict, directly from its atomic or molecular constituents. Materials termed soft matter exhibit this property due to a shared propensity of these materials to self-organize into mesoscopic physical structures. The assembly of the mesoscale structures that form the macroscale material is governed by low energies, and these low energy associations allow for the thermal and mechanical deformation of the material. By way of contrast, in hard condensed matter physics it is often possible to predict the overall behavior of a material because the molecules are organized into a crystalline lattice with no changes in the pattern at any mesoscopic scale. Unlike hard materials, where only small distortions occur from thermal or mechanical agitation, soft matter can undergo local rearrangements of the microscopic building blocks.
A defining characteristic of soft matter is the mesoscopic scale of physical structures. The structures are much larger than the microscopic scale (the arrangement of atoms and molecules), and yet are much smaller than the macroscopic (overall) scale of the material. The properties and interactions of these mesoscopic structures may determine the macroscopic behavior of the material. The large number of constituents forming these mesoscopic structures, and the large degrees of freedom this causes, results in a general disorder between the large-scale structures. This disorder leads to the loss of long-range order that is characteristic of hard matter.
For example, the turbulent vortices that naturally occur within a flowing liquid are much smaller than the overall quantity of liquid and yet much larger than its individual molecules, and the emergence of these vortices controls the overall flowing behavior of the material. Also, the bubbles that compose a foam are mesoscopic because they individually consist of a vast number of molecules, and yet the foam itself consists of a great number of these bubbles, and the overall mechanical stiffness of the foam emerges from the combined interactions of the bubbles.
Typical bond energies in soft matter structures are of similar scale to thermal energies. Therefore the structures are constantly affected by thermal fluctuations and undergo Brownian motion. The ease of deformation and influence of low energy interactions regularly result in slow dynamics of the mesoscopic structures which allows some systems to remain out of equilibrium in metastable states. This characteristic can allow for recovery of initial state through an external stimulus, which is often exploited in research.
Self-assembly is an inherent characteristic of soft matter systems. The characteristic complex behavior and hierarchical structures arise spontaneously as a system evolves towards equilibrium. Self-assembly can be classified as static when the resulting structure is due to a free energy minimum, or dynamic when the system is caught in a metastable state. Dynamic self-assembly can be utilized in the functional design of soft materials with these metastable states through kinetic trapping.
Soft materials often exhibit both elasticity and viscous responses to external stimuli such as shear induced flow or phase transitions. However, excessive external stimuli often result in nonlinear responses. Soft matter becomes highly deformed before crack propagation, which differs significantly from the general fracture mechanics formulation. Rheology, the study of deformation under stress, is often used to investigate the bulk properties of soft matter.
Classes of soft matter
Soft matter consists of a diverse range of interrelated systems and can be broadly categorized into certain classes. These classes are by no means distinct, as often there are overlaps between two or more groups.
Polymers
Polymers are large molecules composed of repeating subunits whose characteristics are governed by their environment and composition. Polymers encompass synthetic plastics, natural fibers and rubbers, and biological proteins. Polymer research finds applications in nanotechnology, from materials science and drug delivery to protein crystallization.
Foams
Foams consist of a liquid or solid through which a gas has been dispersed to form cavities. This structure imparts a large surface-area-to-volume ratio on the system. Foams have found applications in insulation and textiles, and are undergoing active research in the biomedical field of drug delivery and tissue engineering. Foams are also used in the automotive industry for water and dust sealing and noise reduction.
Gels
Gels consist of solvent-insoluble 3D polymer scaffolds, covalently or physically cross-linked, that hold a high ratio of solvent to polymer. Research into functionalizing gels that are sensitive to mechanical and thermal stress, as well as solvent choice, has given rise to diverse structures with characteristics such as shape-memory, or the ability to bind guest molecules selectively and reversibly.
Colloids
Colloids are non-soluble particles suspended in a medium, such as proteins in an aqueous solution. Research into colloids is primarily focused on understanding the organization of matter, with the large structures of colloids, relative to individual molecules, large enough that they can be readily observed.
Liquid crystals
Liquid crystals can consist of proteins, small molecules, or polymers, that can be manipulated to form cohesive order in a specific direction. They exhibit liquid-like behavior in that they can flow, yet they can obtain close-to-crystal alignment. One feature of liquid crystals is their ability to spontaneously break symmetry. Liquid crystals have found significant applications in optical devices such as liquid-crystal displays (LCD).
Biological membranes
Biological membranes consist of individual phospholipid molecules that have self-assembled into a bilayer structure due to non-covalent interactions. The localized, low energy associated with the forming of the membrane allows for the elastic deformation of the large-scale structure.
Experimental characterization
Due to the importance of mesoscale structures in the overarching properties of soft matter, experimental work is primarily focused on the bulk properties of the materials. Rheology is often used to investigate the physical changes of the material under stress. Biological systems, such as protein crystallization, are often investigated through X-ray and neutron crystallography, while nuclear magnetic resonance spectroscopy can be used in understanding the average structure and lipid mobility of membranes.
Scattering
Scattering techniques, such as wide-angle X-ray scattering, small-angle X-ray scattering, neutron scattering, and dynamic light scattering can also be used for materials when probing for the average properties of the constituents. These methods can determine particle-size distribution, shape, crystallinity and diffusion of the constituents in the system. There are limitations in the application of scattering techniques to some systems, as they can be more suited to isotropic and dilute samples.
Computational
Computational methods are often employed to model and understand soft matter systems, as they have the ability to strictly control the composition and environment of the structures being investigated, as well as span from microscopic to macroscopic length scales. Computational methods are limited, however, by their suitability to the system and must be regularly validated against experimental results to ensure accuracy. The use of informatics in the prediction of soft matter properties is also a growing field in computer science thanks to the large amount of data available for soft matter systems.
Microscopy
Optical microscopy can be used in the study of colloidal systems, but more advanced methods like transmission electron microscopy (TEM) and atomic force microscopy (AFM) are often used to characterize forms of soft matter due to their applicability to mapping systems at the nanoscale. These imaging techniques are not universally appropriate to all classes of soft matter and some systems may be more suited to one kind of analysis than another. For example, there are limited applications in imaging hydrogels with TEM due to the processes required for imaging. However, fluorescence microscopy can be readily applied. Liquid crystals are often probed using polarized light microscopy to determine the ordering of the material under various conditions, such as temperature or electric field.
Applications
Soft materials are important in a wide range of technological applications, and each soft material can often be associated with multiple disciplines. Liquid crystals, for example, were originally discovered in the biological sciences when the botanist and chemist Friedrich Reinitzer was investigating cholesterols. Now, however, liquid crystals have also found applications as liquid-crystal displays, liquid crystal tunable filters, and liquid crystal thermometers. Active liquid crystals are another example of soft materials, where the constituent elements in liquid crystals can self-propel.
Polymers have found diverse applications, from the natural rubber found in latex gloves to the vulcanized rubber found in tires. Polymers encompass a large range of soft matter, with applications in material science. An example of this is hydrogel. With the ability to undergo shear thinning, hydrogels are well suited for the development of 3D printing. Due to their stimuli responsive behavior, 3D printing of hydrogels has found applications in a diverse range of fields, such as soft robotics, tissue engineering, and flexible electronics. Polymers also encompass biological molecules such as proteins, where research insights from soft matter research have been applied to better understand topics like protein crystallization.
Foams can naturally occur, such as the head on a beer, or be created intentionally, such as by fire extinguishers. The physical properties available to foams have resulted in applications which can be based on their viscosity, with more rigid and self-supporting forms of foams being used as insulation or cushions, and foams that exhibit the ability to flow being used in the cosmetic industry as shampoos or makeup. Foams have also found biomedical applications in tissue engineering as scaffolds and biosensors.
Historically the problems considered in the early days of soft matter science were those pertaining to the biological sciences. As such, an important application of soft matter research is biophysics, with a major goal of the discipline being the reduction of the field of cell biology to the concepts of soft matter physics. Applications of soft matter characteristics are used to understand biologically relevant topics such as membrane mobility, as well as the rheology of blood.
See also
Biological membranes
Biomaterials
Colloids
Complex fluids
Foams
Fracture of soft materials
Gels
Granular materials
Liquids
Liquid crystals
Microemulsions
Polymers
Protein dynamics
Protein structure
Surfactants
Active matter
Roughness
References
I. Hamley, Introduction to Soft Matter (2nd edition), J. Wiley, Chichester (2000).
R. A. L. Jones, Soft Condensed Matter, Oxford University Press, Oxford (2002).
T. A. Witten (with P. A. Pincus), Structured Fluids: Polymers, Colloids, Surfactants, Oxford (2004).
M. Kleman and O. D. Lavrentovich, Soft Matter Physics: An Introduction, Springer (2003).
M. Mitov, Sensitive Matter: Foams, Gels, Liquid Crystals and Other Miracles, Harvard University Press (2012).
J. N. Israelachvili, Intermolecular and Surface Forces, Academic Press (2010).
A. V. Zvelindovsky (editor), Nanostructured Soft Matter - Experiment, Theory, Simulation and Perspectives, Springer/Dordrecht (2007).
M. Daoud, C.E. Williams (editors), Soft Matter Physics, Springer Verlag, Berlin (1999).
Gerald H. Ristow, Pattern Formation in Granular Materials, Springer Tracts in Modern Physics, v. 161. Springer, Berlin (2000).
de Gennes, Pierre-Gilles, Soft Matter, Nobel Lecture, December 9, 1991
S. A. Safran, Statistical thermodynamics of surfaces, interfaces and membranes, Westview Press (2003)
R.G. Larson, "The Structure and Rheology of Complex Fluids," Oxford University Press (1999)
Gang, Oleg, "Soft Matter and Biomaterials on the Nanoscale: The WSPC Reference on Functional Nanomaterials — Part I (In 4 Volumes)", World Scientific Publisher (2020)
External links
Pierre-Gilles de Gennes' Nobel Lecture
American Physical Society Topical Group on Soft Matter (GSOFT)
Softbites - a blog run by graduate students and postdocs that makes soft matter more accessible through bite-sized posts that summarize current and classic soft matter research
Softmatterworld.org
Softmatterresources.com
SklogWiki - a wiki dedicated to simple liquids, complex fluids, and soft condensed matter.
Harvard School of Engineering and Applied Sciences Soft Matter Wiki - organizes, reviews, and summarizes academic papers on soft matter.
Soft Matter Engineering - A group dedicated to Soft Matter Engineering at the University of Florida
Google Scholar page on soft matter
Condensed matter physics | Soft matter | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,150 | [
"Soft matter",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Matter"
] |
616,340 | https://en.wikipedia.org/wiki/Franking | Franking comprises all devices, markings, or combinations thereof ("franks") applied to mails of any class which qualify them to be postally serviced. Types of franks include uncanceled and precanceled postage stamps (both adhesive and printed on postal stationery), impressions applied via postage meter (via so-called "postage evidencing systems"), official use "Penalty" franks, Business Reply Mail (BRM), and other permit Imprints (Indicia), manuscript and facsimile "franking privilege" signatures, "soldier's mail" markings, and any other forms authorized by the 192 postal administrations that are members of the Universal Postal Union.
Types and methods
While any affixed postage stamp or other marking applied to mail to qualify it for postal service is franking, not all types and methods are used to frank all types or classes of mail. Each of the world's national and other postal administrations establishes and regulates the specific methods and standards of franking as they apply to domestic operations within their own postal systems. Although there are differences in the manner that the postal systems of the 192 nations that belong to the Universal Postal Union (UPU) apply and regulate the way their mails are franked, most mail types fall under one (and sometimes more) of four major types and/or methods of franking: postage (stamps, etc.), privilege, official business, and business reply mail.
Any and all conflicts that might arise affecting the franking of mail types serviced by multiple administrations which result from differences in these various postal regulations and/or practices are mediated by the UPU, a specialized agency of the United Nations which sets the rules and technical standards for international mail exchanges. The UPU co-ordinates the application of the regulations of postal systems of its member nations, including as they relate to franking, to permit the servicing and exchange of international mail. Prior to the establishment of the UPU in 1874, international mails sometimes bore mixed franking (the application of franking of more than one country) before the world's postal services universally agreed to deliver international mails bearing only the franking of the country of origin.
Postage (stamps, etc)
"Postage" franking is the physical application and presence of postage stamps, or any other markings recognized and accepted by the postal system or systems providing service, which indicate the payment of sufficient fees for the class of service which the item of mail is to be or had been afforded. Prior to the introduction to the world's first postage stamps in Britain in 1840 ("Penny Black") and 1841 ("Penny Red"), pre-paid franking was applied exclusively by a manuscript or handstamped "Paid" marking and the amount of the fee collected. The first US postage stamp was the red brown Five cent Franklin (SC-1) issued in 1847.
In addition to stamps, postage franking can be in the form of printed or stamped impressions made in an authorized format and applied directly by a franking machine, postage meter, computer generated franking labels or other similar methods ("Postage Evidencing Systems"), any form of preprinted "Postage Paid" notice authorized by a postal service permit ("Indicia"), or any other marking method accepted by the postal service and specified by its regulations, as proof of the prepayment of the appropriate fees. Postal franking also includes "Postage Due" stamps or markings affixed by a postal service which designate any amount of insufficient or omitted postage fees to be collected on delivery. Some countries allow senders to purchase one-time codes online that can be hand-written onto the piece of mail, such as the Netherlands' Postzegelcodes introduced in 2013.
Franking privilege
"Privilege" franking is a personally pen-signed or printed facsimile signature of a person with a "franking privilege" such as certain government officials (especially legislators) and others designated by law or postal regulations. This allows the letter or other parcel to be sent without the application of a postage stamp. In the United States this is called the "Congressional frank" which can only be used for "Official Business" mail.
In addition to this type of franking privilege, from time to time (especially during wartimes) governments and/or postal administrations also authorize active duty service members and other designated individuals to send mail for free by writing "Free" or "Soldier's Mail" (or equivalent) on the item of mail in lieu of paid postal franking, or by using appropriate free franked postal stationery. In the United States, unless otherwise designated, such mail is serviced by both the military and civil postal systems that accept them as First Class letter mail.
"Official Business"
"Official Business" franking is any frank printed on or affixed to mail which is designated as being for official business of national governments (i.e. governments which also have postal administrations) and thus qualify for postal servicing without any additional paid franking. In Commonwealth countries the printed frank reads "Official Paid" and is used by government departments on postmarks, stationery, adhesive labels, official stamps, and handstruck or machine stamps.
In Canada, the monarch, the Governor General, members of the Senate of Canada, members of the House of Commons, the Clerk of the House of Commons, Parliamentary Librarian, Associate Parliamentary Librarian, officers of parliament, and the Senate Ethics Officer all have franking privilege, and mail sent to or from these people are sent free of charge. Bulk mail from members of the House of Commons is limited to four mailings per year and to the member's own electoral district. Individuals may send letters to any of the above office-holders without charge.
In the United States, such mails are sent using postal stationery or address labels that include a "Penalty" frank ("Penalty For Private Use To Avoid Payment of Postage $300") printed on the piece of mail, and/or is franked with Penalty Mail Stamps (PMS) of appropriate value. Such mails are generally serviced as First Class Mail (or equivalent) unless otherwise designated (such as "bulk" mailings).
"Business Reply Mail"
"Business Reply Mail" (BRM) franking is a preprinted frank with a Permit number which authorizes items so marked to be posted as First Class Mail with the authorizing postal service without advance payment by the person posting the item. (International Reply Mail may specify Air Mail as the class of service.) Postage fees for BRM are paid by the permit holder upon its delivery to the specified address authorized by the permit and preprinted on the item of business reply mail. Governments also use BRM to permit replies associated with official business purposes.
History of the "franking privilege"
A limited form of franking privilege originated in the British Parliament in 1660, with the passage of an act authorizing the formation of the General Post Office. By 1772, the abundance of franked letters represented lost revenue of more than one third the total collections of the Post Office. In the 19th century, as use of the post office increased significantly in Britain, it was expected that anybody with a Parliament connection would get his friends' mail franked.
In the United States, the franking privilege predates the establishment of the republic itself, as the Continental Congress bestowed it on its members in 1775. The First United States Congress enacted a franking law in 1789 during its very first session. Congress members would spend much time "inscribing their names on the upper right-hand corner of official letters and packages" until the 1860s for the purpose of sending out postage-free mail. Yet, on January 31, 1873, the Senate abolished "the congressional franking privilege after rejecting a House-passed provision that would have provided special stamps for the free mailing of printed Senate and House documents." Within two years, however, Congress began to make exceptions to this ban, including free mailing of the Congressional Record, seeds, and agricultural reports. Finally, in 1891, noting that its members were the only government officials required to pay postage, Congress restored full franking privileges. Since then, the franking of congressional mail has been subject to ongoing review and regulation.
The phrase franking is derived from the Franks, a Germanic tribe that conquered Gallia—modern-day France—during the last days of the Western Roman Empire. The Franks held more legal rights than the Gallo-Roman natives. To be a Frank was to be "free" under the law. Another use of that term is speaking "frankly", i.e. "freely". Because Benjamin Franklin was an early United States Postmaster General, satirist Richard Armour referred to free congressional mailings as the "Franklin privilege."
The use of a franking privilege is not absolute but is generally limited to official business, constituent bulk mails, and other uses as prescribed by law, such as the "Congressional Frank" afforded to Members of Congress in the United States. This is not "free" franking, however, as each member is appropriated a budgeted amount to compensate the USPS for servicing the mail.
A six-member bipartisan Commission on Congressional Mailing Standards, colloquially known as the "Franking Commission," is responsible for oversight and regulation of the franking privilege in the Congress. Among the Commission's responsibilities is to establish the "Official Mail Allowance" for each Member based proportionally on the number of constituents they serve. Certain other persons are also accorded the privilege such as Members-elect and former presidents and their spouse or widow as well. A president who is convicted in the Senate as a result of an impeachment trial would not have a franking privilege after being forced to leave office. The sitting president does not have personal franking privileges but the vice president, who is also President of the Senate, does.
In Italy, mail sent to the President was free of charge until this franking privilege was abolished in 1999.
In New Zealand, individuals writing to a Member of Parliament can do so without paying for postage.
See also
Postage meter
Postzegelcode
References
External links
History of Franked Mail from the Senate.gov
E050 Official Mail (Franked) from the United States Post Office
Description of franked mail in the United Kingdom
Postal systems
Philatelic terminology
Postal markings | Franking | [
"Technology"
] | 2,122 | [
"Transport systems",
"Postal systems"
] |
616,351 | https://en.wikipedia.org/wiki/Molniya%20orbit | A Molniya orbit (Russian for "lightning") is a type of satellite orbit designed to provide communications and remote sensing coverage over high latitudes. It is a highly elliptical orbit with an inclination of 63.4 degrees, an argument of perigee of 270 degrees, and an orbital period of approximately half a sidereal day. The name comes from the Molniya satellites, a series of Soviet/Russian civilian and military communications satellites which have used this type of orbit since the mid-1960s. A variation on the Molniya orbit is the so-called Three Apogee (TAP) orbit, whose period is a third of a sidereal day.
The Molniya orbit has a long dwell time over the hemisphere of interest, while moving very quickly over the other. In practice, this places it over either Russia or Canada for the majority of its orbit, providing a high angle of view to communications and monitoring satellites covering these high-latitude areas. Geostationary orbits, which are necessarily inclined over the equator, can only view these regions from a low angle, hampering performance. In practice, a satellite in a Molniya orbit serves the same purpose for high latitudes as a geostationary satellite does for equatorial regions, except that multiple satellites are required for continuous coverage.
Satellites placed in Molniya orbits have been used for television broadcasting, telecommunications, military communications, relaying, weather monitoring, early warning systems and classified surveillance purposes.
History
The Molniya orbit was discovered by Soviet scientists in the 1960s as a high-latitude communications alternative to geostationary orbits, which require large launch energies to achieve a high perigee and to change inclination to orbit over the equator (especially when launched from Russian latitudes). As a result, OKB-1 sought a less energy-demanding orbit. Studies found that this could be achieved using a highly elliptical orbit with an apogee over Russian territory. The orbit's name refers to the "lightning" speed with which the satellite passes through the perigee.
The first use of the Molniya orbit was by the communications satellite series of the same name. After two launch failures, and one satellite failure in 1964, the first successful satellite to use this orbit, Molniya 1-1, launched on 23 April 1965. The early Molniya-1 satellites were used for civilian television, telecommunication and long-range military communications, but they were also fitted with cameras used for weather monitoring, and possibly for assessing clear areas for Zenit spy satellites. The original Molniya satellites had a lifespan of approximately 1.5 years, as their orbits were disrupted by perturbations, and they had to be constantly replaced.
The succeeding series, the Molniya-2, provided both military and civilian broadcasting and was used to create the Orbita television network, spanning the Soviet Union. These were in turn replaced by the Molniya-3 design. A satellite called Mayak was designed to supplement and replace the Molniya satellites in 1997, but the project was cancelled, and the Molniya-3 was replaced by the Meridian satellites, the first of which launched in 2006. The Soviet US-K early warning satellites, which watch for American rocket launches, were launched in Molniya orbits from 1967, as part of the Oko system.
From 1971, the American Jumpseat and Trumpet military satellites were launched into Molniya orbits (and possibly used to intercept Soviet communications from the Molniya satellites). Detailed information about both projects remains classified. This was followed by the American SDS constellation, which operates with a mixture of Molniya and geostationary orbits. These satellites are used to relay signals from lower flying satellites back to ground stations in the United States and have been active in some capacity since 1976. A Russian satellite constellation called Tyulpan was designed in 1994 to support communications at high latitudes, but it did not progress past the planning phase.
In 2015 and 2017 Russia launched two Tundra satellites into a Molniya orbit, despite their name, as part of its EKS early warning system.
Uses
Much of the area of the former Soviet Union, and Russia in particular, is located at high northern latitudes. To broadcast to these latitudes from a geostationary orbit (above the Earth's equator) requires considerable power due to the low elevation angles, and the extra distance and atmospheric attenuation that comes with it. Sites located above 81° latitude are unable to view geostationary satellites at all, and as a rule of thumb, elevation angles of less than 10° can cause problems, depending on the communications frequency.
A satellite in a Molniya orbit is better suited to communications in these regions, because it looks more directly down on them during large portions of its orbit. With an apogee altitude as high as about 40,000 km and an apogee sub-satellite point of 63.4 degrees north, it spends a considerable portion of its orbit with excellent visibility in the northern hemisphere, from Russia as well as from northern Europe, Greenland and Canada.
While satellites in Molniya orbits require considerably less launch energy than those in geostationary orbits (especially launching from high latitudes), their ground stations need steerable antennas to track the spacecraft, links must be switched between satellites in a constellation and range changes cause variations in signal amplitude. Additionally, there is a greater need for station-keeping, and the spacecraft will pass through the Van Allen radiation belt four times per day.
Southern hemisphere proposals
Similar orbits with an argument of perigee of 90° could allow high-latitude coverage in the southern hemisphere. A proposed constellation, the Antarctic Broadband Program, would have used satellites in an inverted Molniya orbit to provide broadband internet service to facilities in Antarctica. Initially funded by the now defunct Australian Space Research Programme, it did not progress beyond initial development.
Molniya constellations
Permanent high-latitude coverage of a large area of Earth (like the whole of Russia, where the southern parts are about 45°N) requires a constellation of at least three spacecraft in Molniya orbits. If three spacecraft are used, then each spacecraft will be active for a period of eight hours per orbit, centered around apogee, as illustrated in figure 4. Figure 5 shows the satellite's field of view around the apogee.
The Earth completes half a rotation in twelve hours, so the apogees of successive Molniya orbits will alternate between one half of the northern hemisphere and the other. For the original Molniya orbit, the apogees were placed over Russia and North America, but by changing the right ascension of the ascending node this can be varied. The coverage from a satellite in a Molniya orbit over Russia is shown in figures 6 to 8, and over North America in figures 9 to 11.
The orbits of the three spacecraft should then have the same orbital parameters, but different right ascensions of the ascending nodes, with their passes over the apogees separated by 7.97 hours. Since each satellite has an operational period of approximately eight hours, when one spacecraft travels four hours after its apogee passage (see figure 8 or figure 11), then the next satellite will enter its operational period, with the view of the earth shown in figure 6 (or figure 9), and the switch-over can take place. Note that the two spacecraft at the time of switch-over are separated by about , so that the ground stations only have to move their antennas a few degrees to acquire the new spacecraft.
Diagrams
Properties
A typical Molniya orbit has the following properties:
Argument of perigee: 270°
Inclination: 63.4°
Period: 718 minutes
Eccentricity: 0.74
Semi-major axis: 26,600 km
Argument of perigee
The argument of perigee is set at 270°, causing the satellite to experience apogee at the most northerly point of its orbit. For any future applications over the southern hemisphere, it would instead be set at 90°.
Orbital inclination
In general, the oblateness of the Earth perturbs the argument of perigee (ω), so that it gradually changes with time unless it is constantly corrected with station-keeping thruster burns. If we only consider the first-order coefficient J₂, the perigee will change according to
ω̇ = (3/4) J₂ n (R_E / (a(1 − e²)))² (5 cos²i − 1),
where i is the orbital inclination, e is the eccentricity, n is the mean motion in degrees per day, J₂ is the perturbing factor, R_E is the equatorial radius of the Earth, a is the semimajor axis, and ω̇ is in degrees per day.
To avoid this expenditure of fuel, the Molniya orbit uses an inclination of 63.4°, for which the factor (5 cos²i − 1) is zero, so that there is no change in the position of perigee over time. An orbit designed in this manner is called a frozen orbit.
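A short numerical check (an illustrative sketch using standard values of J₂ and the Earth's radius, with typical Molniya elements assumed) shows the drift rate changing sign at the critical inclination:

```python
# Apsidal drift rate due to Earth's oblateness (J2) for a Molniya-like
# orbit, illustrating that it vanishes near the critical inclination 63.4°.
import math

J2  = 1.08263e-3        # Earth's dominant oblateness coefficient
R_E = 6378.137          # Earth's equatorial radius, km
a   = 26600.0           # semi-major axis, km (typical Molniya value)
e   = 0.74              # eccentricity
n   = 2.0 * 360.0       # mean motion, degrees/day (~2 revolutions per day)

def perigee_drift(incl_deg):
    """Secular drift of the argument of perigee, in degrees per day."""
    i = math.radians(incl_deg)
    p = a * (1.0 - e**2)             # semi-latus rectum
    return 0.75 * J2 * n * (R_E / p)**2 * (5.0 * math.cos(i)**2 - 1.0)

for incl in (50.0, 63.4, 70.0):
    print(f"i = {incl:5.1f} deg -> dw/dt = {perigee_drift(incl):+.3f} deg/day")
# The rate changes sign where cos(i)^2 = 1/5, i.e. i ~ 63.43 degrees.
```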
Orbital period
To ensure the geometry relative to the ground stations repeats every 24 hours, the period should be about half a sidereal day, keeping the longitudes of the apogees constant.
However, the oblateness of the Earth also perturbs the right ascension of the ascending node (Ω), changing the nodal period and causing the ground track to drift over time at the rate
Ω̇ = −(3/2) J₂ n (R_E / (a(1 − e²)))² cos i,
where Ω̇ is in degrees per day.
Since the inclination of a Molniya orbit is fixed (as above), this perturbation amounts to roughly −0.15 degrees per day. To compensate, the orbital period is adjusted so that the longitude of the apogee changes enough to cancel out this effect.
Eccentricity
The eccentricity of the orbit is based on the differences in altitudes of its apogee and perigee. To maximise the amount of time that the satellite spends over the apogee, the eccentricity should be set as high as possible. However, the perigee needs to be high enough to keep the satellite substantially above the atmosphere to minimize drag (~600 km), and the orbital period needs to be kept to approximately half a sidereal day (as above). These two factors constrain the eccentricity, which becomes approximately 0.737.
Semi-major axis
The exact height of a satellite in a Molniya orbit varies between missions, but a typical orbit will have a perigee altitude of approximately 600 km and an apogee altitude of approximately 39,700 km, for a semi-major axis of 26,600 km.
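These figures follow from the stated period and eccentricity; a small sketch (standard constants assumed) recovers them from Kepler's third law:

```python
# Recover the Molniya semi-major axis from the ~718-minute period via
# Kepler's third law, then the apsis altitudes from the eccentricity.
import math

MU  = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
R_E = 6378.137e3         # Earth's equatorial radius, m
T   = 718 * 60.0         # orbital period, s (about half a sidereal day)
e   = 0.74               # eccentricity

a = (MU * (T / (2.0 * math.pi))**2) ** (1.0 / 3.0)
print(f"semi-major axis  = {a / 1e3:,.0f} km")               # ~26,560 km
print(f"perigee altitude = {(a*(1 - e) - R_E) / 1e3:,.0f} km")  # ~530 km
print(f"apogee altitude  = {(a*(1 + e) - R_E) / 1e3:,.0f} km")  # ~39,840 km
```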
Modelling
To track satellites using Molniya orbits, scientists use the SDP4 simplified perturbations model, which calculates the location of a satellite based on orbital shape, drag, radiation, gravitation effects from the sun and moon, and earth resonance terms.
See also
List of orbits
Tundra orbit
References
External links
Illustration of the communication geometry provided by satellites in 12-hour Molniya orbits (video)
Earth orbits
Satellite broadcasting
Soviet inventions | Molniya orbit | [
"Engineering"
] | 2,194 | [
"Telecommunications engineering",
"Satellite broadcasting"
] |
616,448 | https://en.wikipedia.org/wiki/Wilson%20loop | In quantum field theory, Wilson loops are gauge invariant operators arising from the parallel transport of gauge variables around closed loops. They encode all gauge information of the theory, allowing for the construction of loop representations which fully describe gauge theories in terms of these loops. In pure gauge theory they play the role of order operators for confinement, where they satisfy what is known as the area law. Originally formulated by Kenneth G. Wilson in 1974, they were used to construct links and plaquettes which are the fundamental parameters in lattice gauge theory. Wilson loops fall into the broader class of loop operators, with some other notable examples being 't Hooft loops, which are magnetic duals to Wilson loops, and Polyakov loops, which are the thermal version of Wilson loops.
Definition
To properly define Wilson loops in gauge theory requires considering the fiber bundle formulation of gauge theories. Here for each point in d-dimensional spacetime there is a copy of the gauge group G forming what's known as a fiber of the fiber bundle. These fiber bundles are called principal bundles. Locally the resulting space looks like a direct product of spacetime with the group, although globally it can have some twisted structure depending on how different fibers are glued together.
The issue that Wilson lines resolve is how to compare points on fibers at two different spacetime points. This is analogous to parallel transport in general relativity which compares tangent vectors that live in the tangent spaces at different points. For principal bundles there is a natural way to compare different fiber points through the introduction of a connection, which is equivalent to introducing a gauge field. This is because a connection is a way to separate out the tangent space of the principal bundle into two subspaces known as the vertical and horizontal subspaces. The former consists of all vectors pointing along the fiber while the latter consists of vectors that are perpendicular to the fiber. This allows for the comparison of fiber values at different spacetime points by connecting them with curves in the principal bundle whose tangent vectors always live in the horizontal subspace, so the curve is always perpendicular to any given fiber.
If the starting fiber is at coordinate x_i with a starting point of the identity g = e, then to see how this changes when moving to another spacetime coordinate x_f, one needs to consider some spacetime curve γ between x_i and x_f. The corresponding curve in the principal bundle, known as the horizontal lift of γ, is the curve that starts at the identity over x_i and whose tangent vectors always lie in the horizontal subspace. The fiber bundle formulation of gauge theory reveals that the Lie-algebra valued gauge field A_μ(x) is equivalent to the connection that defines the horizontal subspace, so this leads to a differential equation for the horizontal lift g(t) along γ
dg(t)/dt = i (dx^μ(t)/dt) A_μ(x(t)) g(t).
This has a unique formal solution called the Wilson line between the two points
W[x_f, x_i] = P exp(i ∫_γ A_μ dx^μ),
where P is the path-ordering operator, which is unnecessary for abelian theories. The horizontal lift starting at some initial fiber point g₀ other than the identity merely requires multiplication by the initial element of the original horizontal lift: such a lift passes through W[x, x_i]g₀ for all points x along the curve.
Under a local gauge transformation Ω(x), the Wilson line transforms as
W[x_f, x_i] → Ω(x_f) W[x_f, x_i] Ω(x_i)⁻¹.
This gauge transformation property is often used to directly introduce the Wilson line in the presence of matter fields ψ transforming in the fundamental representation of the gauge group, where the Wilson line is an operator that makes the combination ψ̄(x_f) W[x_f, x_i] ψ(x_i) gauge invariant. It allows for the comparison of the matter field at different points in a gauge invariant way. Alternatively, the Wilson lines can also be introduced by adding an infinitely heavy test particle charged under the gauge group. Its charge forms a quantized internal Hilbert space, which can be integrated out, yielding the Wilson line as the world-line of the test particle. This works in quantum field theory whether or not there actually is any matter content in the theory. However, the swampland conjecture known as the completeness conjecture claims that in a consistent theory of quantum gravity, every Wilson line and 't Hooft line of a particular charge consistent with the Dirac quantization condition must have a corresponding particle of that charge present in the theory. Decoupling these particles by taking the infinite mass limit no longer works since this would form black holes.
The trace of closed Wilson lines is a gauge invariant quantity known as the Wilson loop
W[γ] = Tr P exp(i ∮_γ A_μ dx^μ).
Mathematically the term within the trace is known as the holonomy, which describes a mapping of the fiber into itself upon horizontal lift along a closed loop. The set of all holonomies itself forms a group, which for principal bundles must be a subgroup of the gauge group. Wilson loops satisfy the reconstruction property where knowing the set of Wilson loops for all possible loops allows for the reconstruction of all gauge invariant information about the gauge connection. Formally the set of all Wilson loops forms an overcomplete basis of solutions to the Gauss' law constraint.
The set of all Wilson lines is in one-to-one correspondence with the representations of the gauge group. This can be reformulated in terms of Lie algebra language using the weight lattice Λ_w of the gauge group G. In this case the types of Wilson loops are in one-to-one correspondence with Λ_w/𝒲, where 𝒲 is the Weyl group.
Hilbert space operators
An alternative view of Wilson loops is to consider them as operators acting on the Hilbert space of states in Minkowski signature. Since the Hilbert space lives on a single time slice, the only Wilson loops that can act as operators on this space are ones formed using spacelike loops. Such operators create a closed loop of electric flux, which can be seen by noting that the electric field operator is nonzero on the loop but it vanishes everywhere else. Using Stokes theorem it follows that the spatial loop measures the magnetic flux through the loop.
Order operator
Since temporal Wilson lines correspond to the configuration created by infinitely heavy stationary quarks, the Wilson loop associated with a rectangular loop with two temporal components of length T and two spatial components of length r can be interpreted as a quark-antiquark pair at fixed separation. Over large times the vacuum expectation value of the Wilson loop projects out the state with the minimum energy, which is the potential V(r) between the quarks. The excited states with higher energy are exponentially suppressed with time and so the expectation value goes as
⟨W⟩ ∝ e^(−V(r)T),
making the Wilson loop useful for calculating the potential between quark pairs. This potential must necessarily be a monotonically increasing and concave function of the quark separation. Since spacelike Wilson loops are not fundamentally different from the temporal ones, the quark potential is really directly related to the pure Yang–Mills theory structure and is a phenomenon independent of the matter content.
Elitzur's theorem ensures that local non-gauge invariant operators cannot have non-zero expectation values. Instead one must use non-local gauge invariant operators as order parameters for confinement. The Wilson loop is exactly such an order parameter in pure Yang–Mills theory, where in the confining phase its expectation value follows the area law
⟨W[γ]⟩ ∝ e^(−σA)
for a loop γ that encloses an area A. This is motivated from the potential between infinitely heavy test quarks, which in the confinement phase is expected to grow linearly as V(r) = σr, where σ is known as the string tension. Meanwhile, in the Higgs phase the expectation value follows the perimeter law
⟨W[γ]⟩ ∝ e^(−cL),
where L is the perimeter length of the loop and c is some constant. The area law of Wilson loops can be used to demonstrate confinement in certain low dimensional theories directly, such as for the Schwinger model whose confinement is driven by instantons.
Lattice formulation
In lattice field theory, Wilson lines and loops play a fundamental role in formulating gauge fields on the lattice. The smallest Wilson lines on the lattice, those between two adjacent lattice points, are known as links, with a single link U_μ(n) starting from a lattice point n and going in the direction μ. Four links around a single square are known as a plaquette, with their trace forming the smallest Wilson loop. It is these plaquettes that are used to construct the lattice gauge action known as the Wilson action. Larger Wilson loops are expressed as products of link variables along some loop γ, denoted by
W[γ] = Tr ∏_{(n,μ) ∈ γ} U_μ(n).
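As a minimal illustration (a sketch, not the article's construction), take 2D compact U(1) lattice gauge theory, where each link is a single phase U_μ(n) = exp(iθ_μ(n)) and a rectangular Wilson loop is just the accumulated phase around the rectangle; the loop value is unchanged under a random gauge transformation:

```python
# Links, rectangular Wilson loops, and gauge invariance in 2D compact U(1)
# lattice gauge theory (each link is a phase; "Tr" is trivial for U(1)).
import numpy as np

rng = np.random.default_rng(0)
L = 8                                        # lattice extent (demo size)
theta = rng.uniform(0, 2*np.pi, (2, L, L))   # link angles theta[mu, x, y]

def wilson_loop(th, R, T):
    """Real part of the product of links around an R x T rectangle."""
    phase = 0.0
    for i in range(R): phase += th[0, i % L, 0]                # +x side
    for j in range(T): phase += th[1, R % L, j % L]            # +y side
    for i in range(R): phase -= th[0, (R - 1 - i) % L, T % L]  # -x side
    for j in range(T): phase -= th[1, 0, (T - 1 - j) % L]      # -y side
    return np.cos(phase)

def gauge_transform(th, alpha):
    """U_mu(n) -> g(n) U_mu(n) g(n + mu)^(-1) with g(n) = exp(i alpha(n))."""
    new = th.copy()
    for mu, shift in ((0, (1, 0)), (1, (0, 1))):
        new[mu] += alpha - np.roll(alpha, (-shift[0], -shift[1]), axis=(0, 1))
    return new

W1 = wilson_loop(theta, 3, 2)
W2 = wilson_loop(gauge_transform(theta, rng.uniform(0, 2*np.pi, (L, L))), 3, 2)
print(W1, W2)  # equal up to rounding: the loop is gauge invariant
```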
These Wilson loops are used to study confinement and quark potentials numerically. Linear combinations of Wilson loops are also used as interpolating operators that give rise to glueball states. The glueball masses can then be extracted from the correlation function between these interpolators.
The lattice formulation of the Wilson loops also allows for an analytic demonstration of confinement in the strongly coupled phase, assuming the quenched approximation where quark loops are neglected. This is done by expanding out the Wilson action as a power series of traces of plaquettes, where the first non-vanishing term in the expectation value of the Wilson loop in an SU(N) gauge theory gives rise to an area law with a string tension of the form
σ = −(1/a²) ln(β/(2N²)),
where β is the inverse coupling constant and a is the lattice spacing. While this argument holds for both the abelian and non-abelian case, compact electrodynamics only exhibits confinement at strong coupling, with there being a phase transition to the Coulomb phase at β ≈ 1, leaving the theory deconfined at weak coupling. Such a phase transition is not believed to exist for SU(N) gauge theories at zero temperature, instead they exhibit confinement at all values of the coupling constant.
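The area law can be checked explicitly in two dimensions, where the plaquettes decouple and the standard exact result for compact U(1) with the Wilson action is ⟨W⟩ = (I₁(β)/I₀(β))^(A/a²), with I_n the modified Bessel functions; the β value below is an arbitrary illustration:

```python
# Exact area law for a Wilson loop in 2D compact U(1) lattice gauge theory:
# <W> = (I1(beta)/I0(beta))**(number of enclosed plaquettes).
from scipy.special import iv   # modified Bessel functions I_n(x)
import numpy as np

beta = 2.0                       # inverse coupling (demo value)
ratio = iv(1, beta) / iv(0, beta)
sigma_a2 = -np.log(ratio)        # string tension in lattice units

for area in (1, 4, 9, 16):       # loop areas in units of a^2
    print(f"A/a^2 = {area:2d}  <W> = {ratio**area:.4e}")
print(f"sigma * a^2 = {sigma_a2:.3f}")
```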
Properties
Makeenko–Migdal loop equation
Similarly to the functional derivative which acts on functions of functions, functions of loops admit two types of derivatives called the area derivative and the perimeter derivative. To define the former, consider a contour γ and another contour γ′ which is the same contour but with an extra small loop at x in the μ-ν plane with area δσ_μν(x). Then the area derivative of the loop functional F(γ) is defined through the same idea as the usual derivative, as the normalized difference between the functional of the two loops
δF(γ)/δσ_μν(x) = [F(γ′) − F(γ)]/δσ_μν(x).
The perimeter derivative is similarly defined, whereby now γ′ is a slight deformation of the contour which at position x has a small extruding loop of length δx_μ in the μ direction and of zero area. The perimeter derivative ∂_μ^x of the loop functional is then defined as
∂_μ^x F(γ) = [F(γ′) − F(γ)]/δx_μ.
In the large-N limit, the Wilson loop vacuum expectation value satisfies a closed functional equation called the Makeenko–Migdal equation
∂_μ^x (δ/δσ_μν(x)) ⟨W(γ)⟩ = λ ∮_γ dy_ν δ^(d)(x − y) ⟨W(γ_yx)⟩ ⟨W(γ_xy)⟩,
where λ is the 't Hooft coupling. Here γ_xy and γ_yx are the two open lines, neither of which closes, into which the loop γ is cut at the points x and y, with the two points however close to each other. The equation can also be written for finite N, but in this case it does not factorize and instead leads to expectation values of products of Wilson loops, rather than the product of their expectation values. This gives rise to an infinite chain of coupled equations for different Wilson loop expectation values, analogous to the Schwinger–Dyson equations. The Makeenko–Migdal equation has been solved exactly in the two-dimensional theory.
Mandelstam identities
Gauge groups that admit fundamental representations in terms of matrices have Wilson loops that satisfy a set of identities called the Mandelstam identities, with these identities reflecting the particular properties of the underlying gauge group. The identities apply to loops formed from two or more subloops, with being a loop formed by first going around and then going around .
The Mandelstam identity of the first kind states that , with this holding for any gauge group in any dimension. Mandelstam identities of the second kind are acquired by noting that in dimensions, any object with totally antisymmetric indices vanishes, meaning that . In the fundamental representation, the holonomies used to form the Wilson loops are matrix representations of the gauge groups. Contracting holonomies with the delta functions yields a set of identities between Wilson loops. These can be written in terms the objects defined iteratively so that and
In this notation the Mandelstam identities of the second kind are
For example, for a gauge group this gives .
If the fundamental representation are matrices of unit determinant, then it also holds that . For example, applying this identity to gives
Fundamental representations consisting of unitary matrices satisfy . Furthermore, while the equality holds for all gauge groups in the fundamental representations, for unitary groups it moreover holds that .
Renormalization
Since Wilson loops are operators of the gauge fields, the regularization and renormalization of the underlying Yang–Mills theory fields and couplings does not prevent the Wilson loops from requiring additional renormalization corrections. In a renormalized Yang–Mills theory, the particular way that the Wilson loops get renormalized depends on the geometry of the loop under consideration. The main features are
Smooth non-intersecting curve: This can only have linear divergences proportional to the length of the contour, which can be removed through multiplicative renormalization.
Non-intersecting curve with cusps: Each cusp results in an additional local multiplicative renormalization factor that depends on the cusp angle .
Self-intersections: This leads to operator mixing between the Wilson loops associated with the full loop and the subloops.
Lightlike segments: These give rise to additional logarithmic divergences.
Additional applications
Scattering amplitudes
Wilson loops play a role in the theory of scattering amplitudes, where a set of dualities between them and special types of scattering amplitudes has been found. These were first suggested at strong coupling using the AdS/CFT correspondence. For example, in supersymmetric Yang–Mills theory, maximally helicity violating amplitudes factorize into a tree-level component and a loop-level correction. This loop-level correction does not depend on the helicities of the particles, but it was found to be dual to certain polygonal Wilson loops in the large limit, up to finite terms. While this duality was initially only suggested in the maximally helicity violating case, there are arguments that it can be extended to all helicity configurations by defining appropriate supersymmetric generalizations of the Wilson loop.
String theory compactifications
In compactified theories, zero-mode gauge field states that are locally pure gauge configurations but are globally inequivalent to the vacuum are parameterized by closed Wilson lines in the compact direction. The presence of these in a compactified open string theory is equivalent under T-duality to a theory with non-coincident D-branes, whose separations are determined by the Wilson lines. Wilson lines also play a role in orbifold compactifications, where their presence leads to greater control of gauge symmetry breaking, giving a better handle on the final unbroken gauge group and also providing a mechanism for controlling the number of matter multiplets left after compactification. These properties make Wilson lines important in compactifications of superstring theories.
Topological field theory
In a topological field theory, the expectation value of Wilson loops does not change under smooth deformations of the loop, since the field theory does not depend on the metric. For this reason, Wilson loops are key observables in these theories and are used to calculate global properties of the manifold. In three dimensions they are closely related to knot theory, with the expectation value of a product of loops depending only on the manifold structure and on how the loops are tied together. This led to the famous connection made by Edward Witten, who used Wilson loops in Chern–Simons theory to relate their expectation values to the Jones polynomials of knot theory.
See also
Winding number
References
Gauge theories
Quantum chromodynamics
Lattice field theory
Phase transitions | Wilson loop | [
"Physics",
"Chemistry"
] | 3,020 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Critical phenomena",
"Statistical mechanics",
"Matter"
] |
616,618 | https://en.wikipedia.org/wiki/Heterochromia%20iridum | Heterochromia is a variation in coloration most often used to describe color differences of the iris, but can also be applied to color variation of hair or skin. Heterochromia is determined by the production, delivery, and concentration of melanin (a pigment). It may be inherited, or caused by genetic mosaicism, chimerism, disease, or injury. It occurs in humans and certain breeds of domesticated animals.
Heterochromia of the eye is called heterochromia iridum or heterochromia iridis. It can be complete, sectoral, or central. In complete heterochromia, one iris is a different color from the other. In sectoral heterochromia, part of one iris is a different color from its remainder. In central heterochromia, there is a ring around the pupil or possibly spikes of different colors radiating from the pupil.
Though multiple causes have been posited, the scientific consensus is that a lack of genetic diversity is the primary reason behind heterochromia, at least in domestic animals. This is due to a mutation of the genes that determine melanin distribution at the 8-HTP pathway, which usually only become corrupted due to chromosomal homogeneity. Though common in some breeds of cats, dogs, cattle and horses due to inbreeding, heterochromia is uncommon in humans, affecting fewer than 200,000 people in the United States, and is not associated with lack of genetic diversity.
The affected eye may be hyperpigmented (hyperchromic) or hypopigmented (hypochromic). In humans, an increase of melanin production in the eyes indicates hyperplasia of the iris tissues, whereas a lack of melanin indicates hypoplasia.
The term is derived from Ancient Greek: ἕτερος (héteros), "different", and χρῶμα (chrôma), "color".
Background
Eye color, specifically the color of the irises, is determined primarily by the concentration and distribution of melanin. Although the processes determining eye color are not fully understood, it is known that inherited eye color is determined by multiple genes. Environmental or acquired factors can alter these inherited traits.
The color of the mammalian, including human, iris is very variable. However, there are only two pigments present, eumelanin and pheomelanin. The overall concentration of these pigments, the ratio between them, variation in the distribution of pigment in the layers of the stroma of the iris and the effects of light scattering all play a part in determining eye color. In the United States, July 12 is observed by some as National Different Colored Eyes Day.
Classification
Heterochromia is classified primarily by onset: as either genetic or acquired. Although a distinction is frequently made between heterochromia that affects an eye completely or only partially (sectoral heterochromia), it is often classified as either genetic (due to mosaicism or congenital) or acquired, with mention as to whether the affected iris or portion of the iris is darker or lighter. Most cases of heterochromia are hereditary, or caused by genetic factors such as chimerism, and are entirely benign and unconnected to any pathology; however, some are associated with certain diseases and syndromes. Sometimes one eye may change color following disease or injury.
Genetic
Abnormal iris darker
Lisch nodules – iris hamartomas seen in neurofibromatosis.
Ocular melanosis – a condition characterized by increased pigmentation of the uveal tract, episclera, and anterior chamber angle.
Oculodermal melanocytosis (nevus of Ota)
Pigment dispersion syndrome – a condition characterized by loss of pigmentation from the posterior iris surface which is disseminated intraocularly and deposited on various intraocular structures, including the anterior surface of the iris.
Sturge–Weber syndrome – a syndrome characterized by a port-wine stain nevus in the distribution of the trigeminal nerve, ipsilateral leptomeningeal angiomas with intracranial calcification and neurologic signs, and angioma of the choroid, often with secondary glaucoma.
Abnormal iris lighter
Simple heterochromia – a rare condition characterized by the absence of other ocular or systemic problems. The lighter eye is typically regarded as the affected eye as it usually shows iris hypoplasia. It may affect an iris completely or only partially.
Congenital Horner's syndrome – sometimes inherited, although usually acquired.
Waardenburg syndrome – a syndrome in which heterochromia is expressed as a bilateral iris hypochromia in some cases. A Japanese review of 11 children with albinism found that the condition was present. All had sectoral/partial heterochromia.
Piebaldism – similar to Waardenburg's syndrome, a rare disorder of melanocyte development characterized by a white forelock and multiple symmetrical hypopigmented or depigmented macules.
Hirschsprung's disease – a bowel disorder associated with heterochromia in the form of a sector hypochromia. The affected sectors have been shown to have reduced numbers of melanocytes and decreased stromal pigmentation.
Incontinentia pigmenti
Parry–Romberg syndrome
Acquired
Acquired heterochromia is usually due to injury, inflammation, the use of certain eyedrops that damage the iris, or tumors, both benign and malignant.
Abnormal iris darker
Deposition of material
Siderosis – iron deposition within ocular tissues due to a penetrating injury and a retained iron-containing, intraocular foreign body.
Hemosiderosis – long standing hyphema (blood in the anterior chamber) following blunt trauma to the eye may lead to iron deposition from blood products.
Certain eyedrops – prostaglandin analogues (latanoprost, isopropyl unoprostone, travoprost, and bimatoprost) are used topically to lower intraocular pressure in glaucoma patients. A concentric heterochromia has developed in some patients applying these drugs. A stimulation of melanin synthesis within iris melanocytes has been postulated.
Neoplasm – Nevi and melanomatous tumors.
Iridocorneal endothelium syndrome
Iris ectropion syndrome
Abnormal iris lighter
Fuchs heterochromic iridocyclitis – a condition characterized by a low grade, asymptomatic uveitis in which the iris in the affected eye becomes hypochromic and has a washed-out, somewhat moth eaten appearance. The heterochromia can be very subtle, especially in patients with lighter colored irides. It is often most easily seen in daylight. The prevalence of heterochromia associated with Fuchs has been estimated in various studies with results suggesting that there is more difficulty recognizing iris color changes in dark-eyed individuals.
Acquired Horner's syndrome – usually acquired, as in neuroblastoma, although sometimes inherited.
Neoplasm – Melanomas can also be very lightly pigmented, and a lighter colored iris may be a rare manifestation of metastatic disease to the eye.
Parry–Romberg syndrome – due to tissue loss.
Heterochromia has also been observed in those with Duane syndrome.
Chronic iritis
Juvenile xanthogranuloma
Leukemia and lymphoma
Partial heterochromia – different colors in the same iris
Partial heterochromia is most often a benign trait of genetic origins, but, like complete heterochromia, can be acquired or be related to clinical syndromes.
Sectoral
In sectoral heterochromia, areas of the same iris contain two different colors, the contrasting colors being demarcated in a radial, or sectoral, manner. Sectoral heterochromia may affect one or both eyes.
It is unknown how rare sectoral heterochromia is in humans, but it is considered to be less common than complete heterochromia.
Central
Central heterochromia is also an eye condition where there are two colors in the same iris, but the arrangement is concentric rather than sectoral. The central (pupillary) zone of the iris is a different color than the mid-peripheral (ciliary) zone. Central heterochromia is more noticeable in irises containing low amounts of melanin.
In history and culture
Heterochromia of the eye was first described as a human condition by Aristotle, who termed it heteroglaucos.
Notable historical figures thought to have heterochromia include the Byzantine emperor Anastasius the First, dubbed dikoros (Greek for 'having two pupils'). "His right eye was light blue, while the left was black, nevertheless his eyes were most attractive", is the description of the historian John Malalas. A more recent example is the German poet, playwright, novelist, scientist, statesman, theatre director, and critic, Johann Wolfgang Goethe.
The Alexander Romance, an early literary treatment of the life of Alexander the Great, attributes heterochromia to him. In it he is described as having one eye light and one eye dark. However, no ancient historical source mentions this. It is used to emphasise the otherworldly and heroic qualities of Alexander.
In the Ars Amatoria, the Roman poet Ovid describes the witch Dipsas as having 'double pupils'. Kirby Flower Smith suggested that this could be understood as heterochromia, though other scholars have disagreed. The Roman jurist and writer Cicero also mentions the same feature of 'double pupils' as being found in some Italic women. Pliny the Elder related this feature to the concept of 'the evil eye'.
The twelfth-century scholar Eustathius, in his commentary on the Iliad, reports a tradition in which the Thracian Thamyris (son of the nymph Argiope), who was famed for his musical abilities, had one eye that was grey, whilst the other was black. W. B. McDaniel suggests that this should be interpreted as heterochromia.
In other animals
Although infrequently seen in humans, complete heterochromia is more frequently observed in species of domesticated mammals. The blue eye occurs within a white spot, where melanin is absent from the skin and hair (see Leucism). These species include the cat, particularly breeds such as Turkish Van, Khao Manee and (rarely) Japanese Bobtail. These so-called odd-eyed cats are white, or mostly white, with one normal eye (copper, orange, yellow, green), and one blue eye. Among dogs, complete heterochromia is seen often in the Siberian Husky and a few other breeds, usually the Australian Shepherd and Catahoula Leopard Dog, and rarely in the Shih Tzu. Horses with complete heterochromia have one brown and one white, gray, or blue eye; complete heterochromia is more common in horses with pinto coloring. Complete heterochromia occurs also in cattle and even water buffalo. It can also be seen in ferrets with Waardenburg syndrome, although it can be very hard to tell at times as the eye color is often a midnight blue.
Sectoral heterochromia, usually sectoral hypochromia, is often seen in dogs, specifically in breeds with merle coats. These breeds include the Australian Shepherd, Border Collie, Collie, Shetland Sheepdog, Welsh Corgi, Pyrenean Shepherd, Mudi, Beauceron, Catahoula Cur, Dunker, Great Dane, Dachshund and Chihuahua. It also occurs in certain breeds that do not carry the merle trait, such as the Siberian Husky, Dalmatian, and rarely, Shih Tzu. There are examples of cat breeds that have the condition, such as the Van cat.
See also
References
External links
Disturbances of pigmentation
Eye color
Eye diseases
Genodermatoses
Medical signs | Heterochromia iridum | [
"Biology"
] | 2,537 | [
"Disturbances of pigmentation",
"Pigmentation"
] |
616,670 | https://en.wikipedia.org/wiki/Biochemical%20engineering | Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms (such as fermentation) or organic molecules (often enzymes) and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate that to a large-scale manufacturing process.
History
For hundreds of years, humans have made use of the chemical reactions of biological organisms in order to create goods. In the mid-1800s, Louis Pasteur was one of the first people to look into the role of these organisms when he researched fermentation. His work also contributed to the use of pasteurization, which is still used to this day. By the early 1900s, the use of microorganisms had expanded to the manufacture of industrial products. Up to this point, biochemical engineering had not yet developed as a field. It was not until 1928, when Alexander Fleming discovered penicillin, that the field of biochemical engineering was established. After this discovery, samples were gathered from around the world in order to continue research into the characteristics of microbes from places such as soils, gardens, forests, rivers, and streams. Today, biochemical engineers can be found working in a variety of industries, from food to pharmaceuticals. This is due to the increasing need for efficiency and production, which requires knowledge of how biological systems and chemical reactions interact with each other and how they can be used to meet these needs.
Applications
Biotechnology
Biotechnology and biochemical engineering are closely related to each other as biochemical engineering can be considered a sub-branch of biotechnology. One of the primary focuses of biotechnology is in the medical field, where biochemical engineers work to design pharmaceuticals, artificial organs, biomedical devices, chemical sensors, and drug delivery systems. Biochemical engineers use their knowledge of chemical processes in biological systems in order to create tangible products that improve people's health. Specific areas of studies include metabolic, enzyme, and tissue engineering. The study of cell cultures is widely used in biochemical engineering and biotechnology due to its many applications in developing natural fuels, improving the efficiency in producing drugs and pharmaceutical processes, and also creating cures for disease. Other medical applications of biochemical engineering within biotechnology are genetics testing and pharmacogenomics.
Food Industry
Biochemical engineers primarily focus on designing systems that will improve the production, processing, packaging, storage, and distribution of food. Some commonly processed foods include wheat, fruits, and milk, which undergo processes such as milling, dehydration, and pasteurization in order to become products that can be sold. There are three levels of food processing: primary, secondary, and tertiary. Primary food processing involves turning agricultural products into other products that can be turned into food, secondary food processing is the making of food from readily available ingredients, and tertiary food processing is commercial production of ready-to-eat or heat-and-serve foods. Drying, pickling, salting, and fermenting foods were some of the oldest food processing techniques used to preserve food by preventing yeasts, molds, and bacteria from causing spoilage. Methods for preserving food have evolved to meet current standards of food safety but still use the same processes as the past. Biochemical engineers also work to improve the nutritional value of food products, such as in golden rice, which was developed to prevent vitamin A deficiency in certain areas where this was an issue. Efforts to advance preserving technologies can also ensure lasting retention of nutrients as foods are stored. Packaging plays a key role in preserving as well as ensuring the safety of the food by protecting the product from contamination, physical damage, and tampering. Packaging can also make it easier to transport and serve food. A common job for biochemical engineers working in the food industry is to design ways to perform all these processes on a large scale in order to meet the demands of the population. Responsibilities for this career path include designing and performing experiments, optimizing processes, consulting with groups to develop new technologies, and preparing project plans for equipment and facilities.
Pharmaceuticals
In the pharmaceutical industry, bioprocess engineering plays a crucial role in the large-scale production of biopharmaceuticals, such as monoclonal antibodies, vaccines, and therapeutic proteins. The development and optimization of bioreactors and fermentation systems are essential for the mass production of these products, ensuring consistent quality and high yields. For example, recombinant proteins like insulin and erythropoietin are produced through cell culture systems using genetically modified cells. The bioprocess engineer's role is to optimize variables like temperature, pH, nutrient availability, and oxygen levels to maximize the efficiency of these systems. The growing field of gene therapy also relies on bioprocessing techniques to produce viral vectors, which are used to deliver therapeutic genes to patients. This involves scaling up processes from laboratory to industrial scale while maintaining safety and regulatory compliance. As the demand for biopharmaceutical products increases, advancements in bioprocess engineering continue to enable more sustainable and cost-effective manufacturing methods.
Education
Auburn University
University of Georgia (Biochemical Engineering)
Michigan Technological University
McMaster University
Technical University of Munich
University of Natural Resources and Life Sciences, Vienna
Keck Graduate Institute of Applied Life Sciences (KGI Amgen Bioprocessing Center)
Kungliga Tekniska högskolan- KTH – Royal Institute of Technology (Dept. of Industrial Biotechnology)
Queensland University of Technology (QUT)
University of Cape Town (Centre for Bioprocess Engineering Research)
SUNY-ESF (Bioprocess Engineering Program)
Université de Sherbrooke
University of British Columbia
UC Berkeley
UC Davis
Savannah Technical College
University of Illinois Urbana-Champaign (Integrated Bioprocessing Research Laboratory)
University of Iowa (Chemical and Biochemical Engineering)
University of Minnesota (Bioproducts and Biosystems Engineering)
East Carolina University
Jacob School of Biotechnology and Bioengineering, Allahabad, India
Indian Institute of Technology, Varanasi
Indian Institute of Technology Kharagpur
Institute of Chemical Technology, Mumbai
Jadavpur University
Universidade Federal de Itajubá (UNIFEI)
Universiti Malaysia Kelantan (UMK)
Universidade Federal de São João del Rei-UFSJ
Federal University of Technology – Paraná
Universidade Federal do Paraná-UFPR
São Paulo State University
Universidade Federal do Pará-UFPA
University of Louvain (UCLouvain)
University of Stellenbosch
North Carolina Agricultural and Technical State University
North Carolina State University
Virginia Tech
Ege University/Turkey (Department of Bioengineering)
National University of Costa Rica
University of Brawijaya (Department of Agricultural Engineering)
University of Indonesia
University College London (Department of Biochemical Engineering)
Universiti Teknologi Malaysia
Universiti Kuala Lumpur Malaysian Institute of Chemical and Bioengineering Technology
University of Zagreb, Faculty of food technology and biotechnology, Croatia
Villanova University
Wageningen University
University College Dublin
Obafemi Awolowo University
University of Birmingham
Universidad Autónoma de Coahuila (Facultad de Ciencias Biológicas)
Silpakorn University Thailand
Universiti Malaysia Perlis (UniMAP), School of Bioprocess Engineering (SBE)
Technische Universität Berlin, Chair of Bioprocess Engineering
University of Queensland
Technical University of Denmark, Department of Chemical and Biochemical Engineering, BioEng Research Centre
South Dakota School of Mines and Technology
National Institute of Applied Science and Technology Tunis (Industrial Biology Engineering Program)
Technical University Hamburg (TUHH)
Mapua University
Biochemical engineering is not a major offered by many universities and is instead an area of interest under chemical engineering. The following universities are known to offer degrees in biochemical engineering:
Brown University – Providence, RI
Christian Brothers University – Memphis, TN
Colorado School of Mines – Golden, CO
Rowan University – Glassboro, NJ
University of Colorado Boulder – Boulder, CO
University of Georgia – Athens, GA
University of California, Davis – Davis, CA
University College London – London, United Kingdom
University of Southern California – Los Angeles, CA
University of Western Ontario – Ontario, Canada
Indian Institute of Technology (BHU) Varanasi – Varanasi, UP
Indian Institute of Technology Delhi – Delhi
Institute of Technology Tijuana – México
University of Baghdad, College of Engineering, Al-Khwarizmi Biochemical
See also
Biochemical engineering
Biofuel from algae
Biological hydrogen production (algae)
Bioprocess
Bioproducts engineering
Bioproducts
Bioreactor landfill
Biosystems engineering
Cell therapy
Downstream (bioprocess)
Electrochemical energy conversion
Food engineering
Industrial biotechnology
Microbiology
Moss bioreactor
Photobioreactor
Physical chemistry
Unit operations
Upstream (bioprocess)
Use of biotechnology in pharmaceutical manufacturing
References
Shukla, A. A., Thömmes, J., & Hackl, M. (2012). Recent advances in downstream processing of therapeutic monoclonal antibodies. Biotechnology Advances, 30(3), 1548-1557.
Walsh, G. (2018). Biopharmaceuticals: Biochemistry and Biotechnology (3rd ed.). Wiley. | Biochemical engineering | [
"Chemistry",
"Engineering",
"Biology"
] | 1,919 | [
"Biochemistry",
"Chemical engineering",
"Biological engineering",
"Biochemical engineering"
] |
616,681 | https://en.wikipedia.org/wiki/Rush%20Hour%20%28puzzle%29 | Rush Hour is a sliding block puzzle invented by Nob Yoshigahara in the 1970s. It was first sold in the United States in 1996. It is now being manufactured by ThinkFun (formerly Binary Arts).
ThinkFun now sells Rush Hour spin-offs Rush Hour Jr., Safari Rush Hour, Railroad Rush Hour, Rush Hour Brain Fitness and Rush Hour Shift, with puzzles by Scott Kim. The game sold more than 1 million units.
Game
The board is a 6×6 grid with grooves in the tiles to allow cars to slide, a card tray to hold the cards, a holder for the current active card, and an exit hole. The game comes with 16 vehicles (12 cars, 4 trucks), each colored differently, and 40 puzzle cards. Cars and trucks are both one square wide, but cars are two squares long and trucks are three squares long. Vehicles can only be moved along a straight line on the grid; rotation is forbidden. Puzzle cards, each with a level number that indicates the difficulty of the challenge, show the starting positions of cars and trucks. Not all cars and trucks are used in all challenges.
Objective
The goal of the game is to get only the red car out through the exit of the board by moving the other vehicles out of its way. However, the cars and trucks (set up before play, according to a puzzle card) obstruct the path of both the red car and each other, which makes the puzzle even more difficult.
Editions
The Regular Edition comes with forty puzzles split into four different difficulties, ranging from Beginner to Expert. The Deluxe Edition has a black playing board, card box in place of the Regular Edition's card tray, and sixty new puzzles with an extra difficulty: the Grand Master. The Ultimate Collector's Edition has a playing board that can hold vehicles not in play and can display the active card in a billboard-like display. The Ultimate Collector's Edition also includes 155 new puzzles (some of them from Card Set 3) and a white limo. In 2011, the board was changed to black, like the Deluxe Edition.
An iOS version of the game was released in 2010.
Expansions
Three official expansions, called "add-on packs", were released: Card Set 2, which comes with a red sports car that takes up 2 squares; Card Set 3, which comes with a white limo that takes up 3 squares; and Card Set 4, which comes with a taxi that takes up 2 squares. Each set also comes with 40 new exclusive challenges—from Intermediate to Grand Master—that make use of the new vehicles in place of (or in addition to) the red car. All three of the expansion packs will work with all editions of the game. Also, like the Regular Edition of the game in 2011, the cards of all three expansions were changed to have new levels and design to match the new board color of the Regular Edition.
Computational complexity on larger boards
When generalized so that it can be played on an arbitrarily large board, the problem of deciding if a Rush Hour problem has a solution is PSPACE-complete. This is proved by reducing a graph game called nondeterministic constraint logic, which is known to be PSPACE-complete, to generalized Rush Hour positions. In 2005, Tromp and Cilibrasi showed that Rush Hour is still PSPACE-complete when the cars are of size 2 only. They also conjectured that Rush Hour is still nontrivial when the cars are of size 1 only.
Most difficult configurations
The hardest possible initial configuration has been shown to take 93 steps.
If one counts moves rather than steps (a move being a single slide of one vehicle by any distance), the most difficult starting configuration requires 51 moves.
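The step/move distinction can be made concrete with a small solver. Below is a minimal breadth-first-search sketch in Python; the board encoding, vehicle labels, and the toy layout are hypothetical, not an official puzzle card. It counts one move per slide of any length (the metric behind the 51-move figure); on the standard 6×6 board this exhaustive search is practical even though the generalized game is PSPACE-complete.

```python
from collections import deque

SIZE, EXIT_ROW = 6, 2  # red car 'X' exits rightward along row 2 (0-indexed)

def vehicles(grid):
    """Map each vehicle id to its ordered list of occupied (row, col) cells."""
    cells = {}
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] != '.':
                cells.setdefault(grid[r][c], []).append((r, c))
    return cells

def neighbours(grid):
    """Yield every state reachable by sliding one vehicle any distance (one move)."""
    for v, cs in vehicles(grid).items():
        dr, dc = (0, 1) if cs[0][0] == cs[-1][0] else (1, 0)  # horizontal or vertical
        for sign in (1, -1):
            head = cs[-1] if sign == 1 else cs[0]
            steps = 0
            while True:
                steps += 1
                r, c = head[0] + sign * steps * dr, head[1] + sign * steps * dc
                if not (0 <= r < SIZE and 0 <= c < SIZE) or grid[r][c] != '.':
                    break  # blocked or off the board; no further slides this way
                new = [row[:] for row in grid]
                for rr, cc in cs:
                    new[rr][cc] = '.'
                for rr, cc in cs:
                    new[rr + sign * steps * dr][cc + sign * steps * dc] = v
                yield tuple(ch for row in new for ch in row)

def solve(flat_board):
    """Breadth-first search returning the minimum number of moves."""
    start = tuple(flat_board)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        state, moves = queue.popleft()
        grid = [list(state[r * SIZE:(r + 1) * SIZE]) for r in range(SIZE)]
        if grid[EXIT_ROW][SIZE - 1] == 'X':  # red car has reached the exit edge
            return moves
        for nxt in neighbours(grid):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, moves + 1))
    return None  # unsolvable

# Toy layout (not an official card): truck 'A' blocks the red car 'X'.
demo = ("..ABB."
        "..A..."
        "XXA..."
        "......"
        "......"
        "......")
print(solve(demo))  # -> 2: slide A all the way down, then slide X to the exit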
See also
Combination puzzles
Mechanical puzzles
Klotski (or Chinese: Huarong Dao), a similar sliding block puzzle
Blocked (video game): a 2009 mobile video game based on Rush Hour.
References
Mechanical puzzles
Combination puzzles
Mensa Select winners
Puzzle video games
PSPACE-complete problems
Single-player games | Rush Hour (puzzle) | [
"Mathematics"
] | 824 | [
"Recreational mathematics",
"PSPACE-complete problems",
"Mechanical puzzles",
"Computational problems",
"Mathematical problems"
] |
616,752 | https://en.wikipedia.org/wiki/Ozone%E2%80%93oxygen%20cycle | The ozone–oxygen cycle is the process by which ozone is continually regenerated in Earth's stratosphere, converting ultraviolet radiation (UV) into heat. In 1930 Sydney Chapman resolved the chemistry involved. The process is commonly called the Chapman cycle by atmospheric scientists.
Most of the ozone production occurs in the tropical upper stratosphere and mesosphere. The total mass of ozone produced per day over the globe is about 400 million metric tons. The global mass of ozone is relatively constant at about 3 billion metric tons, meaning the Sun produces about 12% of the ozone layer each day.
Photochemistry
The Chapman cycle describes the main reactions that naturally determine, to first approximation, the concentration of ozone in the stratosphere. It includes four processes - and a fifth, less important one - all involving oxygen atoms and molecules, and UV radiation:
Creation
An oxygen molecule is split (photolyzed) by higher frequency UV light (top end of UV-B, UV-C and above) into two oxygen atoms (see figure):
1. oxygen photodissociation: O2 + ℎν(<242 nm) → 2 O
Each oxygen atom may then combine with an oxygen molecule to form an ozone molecule:
2. ozone creation: O + O2 + A → O3 + A
where A denotes an additional molecule or atom, such as N2 or O2, required to maintain the conservation of energy and momentum in the reaction. Any excess energy is carried away as kinetic energy.
The ozone–oxygen cycle
The ozone molecules formed by the reaction (above) absorb radiation with an appropriate wavelength between UV-C and UV-B. The triatomic ozone molecule becomes diatomic molecular oxygen, plus a free oxygen atom (see figure):
3. ozone photodissociation: O3 + ℎν(240–310 nm) → O2 + O
The atomic oxygen produced may react with another oxygen molecule to reform ozone via the ozone creation reaction (reaction 2 above).
These two reactions thus form the ozone–oxygen cycle, wherein the chemical energy released by ozone creation becomes molecular kinetic energy. The net result of the cycle is the conversion of penetrating UV-B light into heat, without any net loss of ozone. While keeping the ozone layer in stable balance, and protecting the lower atmosphere from harmful UV radiation, the cycle also provides one of two major heat sources in the stratosphere (the other being kinetic energy, released when O2 is photolyzed into individual O atoms).
Removal
If an oxygen atom and an ozone molecule meet, they recombine to form two oxygen molecules:
4. ozone conversion: O3 + O → 2 O2
Two oxygen atoms may react to form one oxygen molecule:
5. oxygen recombination: 2O + A → O2 + A
as in reaction 2 (above), A denotes another molecule or atom, like N2 or O2 required for the conservation of energy and momentum.
Note that reaction 5 is of the least importance in the stratosphere, since, under normal conditions, the concentration of oxygen atoms is much lower than that of diatomic oxygen molecules. This reaction is therefore less common than ozone creation (reaction 2).
The overall amount of ozone in the stratosphere is determined by the balance between production from solar radiation and its removal. The removal rate is slow, since the concentration of free O atoms is very low.
Additional reactions
In addition to these five reactions, certain free radicals - the most important being hydroxyl (OH), nitric oxide (NO), and atomic chlorine (Cl) and bromine (Br) - catalyze the recombination reaction, leading to an ozone layer that is thinner than it would be if the catalysts were not present.
Most OH and NO are naturally present in the stratosphere, but human activity - especially emissions of chlorofluorocarbons (CFCs) and halons - has greatly increased the concentration of Cl and Br, leading to ozone depletion. Each Cl or Br atom can catalyze tens of thousands of decomposition reactions before it is removed from the stratosphere.
Main reactions in different atmospheric layers
Thermosphere
For given relative reactant concentrations, the rates of ozone creation and oxygen recombination (reactions 2 and 5) are proportional to the cube of the air density, the rate of ozone conversion (reaction 4) is proportional to its square, and the photodissociation reactions (reactions 1 and 3) depend linearly on air density. Thus, in the upper thermosphere, where air density is very low and photon flux is high, oxygen photodissociation is fast while ozone creation is slow, so the ozone concentration is low. There, the most important reactions are oxygen photodissociation and oxygen recombination, with most of the oxygen molecules dissociated into oxygen atoms.
As we go to the lower thermosphere (e.g. 100 km height and below), the photon flux at wavelengths <170 nm drops sharply due to absorption by oxygen in the oxygen photodissociation reaction (reaction 1). This wavelength regime has the highest cross section for this reaction (10⁻¹⁷ cm² per oxygen molecule), and thus the rate of oxygen photodissociation per oxygen molecule decreases significantly at these altitudes, from more than 10⁻⁷ per second (about once a month) at 100 km to 10⁻⁸ per second (about once every few years) at 80 km. As a result, the atomic oxygen concentration (both relative and absolute) decreases sharply, and ozone creation (reaction 2) is ongoing, leading to a small but non-negligible ozone presence.
Note that temperatures also drop as altitude decreases, because lower photodissociation rates mean lower heat production per air molecule.
Below thermosphere: Reaction rates at steady state
Odd oxygen species (atomic oxygen and ozone) have net creation rate only by oxygen dissociation (reaction 1), and net destruction by either ozone conversion or oxygen recombination (reactions 4 and 5). At steady state these processes are balanced, so the rates of these reactions obey:
(rate of reaction 1) = (rate of reaction 4) + (rate of reaction 5).
At steady state, ozone creation is also balanced with its removal. so:
(rate of reaction 2) = (rate of reaction 3) + (rate of reaction 4).
It thus follows that:
(rate of reaction 2) + (rate of reaction 5) = (rate of reaction 3) + (rate of reaction 1).
The right-hand side is the total photodissociation rate, of either oxygen or ozone.
Below the thermosphere, the atomic oxygen concentration is very low compared to molecular oxygen. Therefore, oxygen atoms are much more likely to hit oxygen (diatomic) molecules than to hit other oxygen atoms, making oxygen recombination (reaction 5) far rarer than ozone creation (reaction 2). Following the steady-state relation between the reaction rates, we may therefore approximate:
(rate of reaction 2) = (rate of reaction 3) + (rate of reaction 1)
Mesosphere
In the mesosphere, oxygen photodissociation dominates over ozone photodissociation, so we have approximately:
(rate of reaction 2) = (rate of reaction 1) = (rate of reaction 4)
Thus, ozone is mainly removed by ozone conversion. Both ozone creation and conversion depend linearly on oxygen atom concentration, but in ozone creation an oxygen atom must encounter an oxygen molecule and another air molecule (typically nitrogen) simultaneously, while in ozone conversion an oxygen atom must only encounter an ozone molecule. Thus, when both reactions are balanced, the ratio between ozone and molecular oxygen concentrations is approximately proportional to air density.
Therefore, the relative ozone concentration is higher at lower altitudes, where air density is higher. This trend continues to some extent lower into the stratosphere, and thus as we go from 60 km to 30 km altitude, both air density and ozone relative concentration increase by ~40-50-fold.
Stratosphere
Absorption by oxygen in the mesosphere and thermosphere (in the oxygen photodissociation reaction) reduces the photon flux at wavelengths below 200 nm, where oxygen photodissociation is dominated by the Schumann–Runge bands and continuum, with a cross-section of up to 10⁻¹⁷ cm².
Due to this absorption, the photon flux at these wavelengths is so low in the stratosphere that oxygen photodissociation becomes dominated by the Herzberg band at 200–240 nm photon wavelengths, even though the cross-section of this process is as low as 10⁻²⁴–10⁻²³ cm². The ozone photodissociation rate per ozone molecule has a cross-section 6 orders of magnitude higher in the 220–300 nm wavelength range. With ozone concentrations on the order of 10⁻⁶–10⁻⁵ relative to molecular oxygen, ozone photodissociation becomes the dominant photodissociation reaction, and most of the stratospheric heat is generated through this process. The heat generation rate per molecule is highest at the upper limit of the stratosphere (the stratopause), where the ozone concentration is already relatively high while the UV flux at these wavelengths is still high as well, before being depleted by this same photodissociation process.
In addition to ozone photodissociation becoming a more dominant removal reaction, catalytic ozone destruction due to free radicals (mainly atomic hydrogen, hydroxyl, nitric oxide, chlorine and bromine) increases the effective ozone conversion reaction rate. Both processes act to increase ozone removal, leading to a more moderate increase of ozone relative concentration as altitude decreases, even though air density continues to increase.
Because both ozone and oxygen densities grow at lower altitudes, the UV photon flux at wavelengths below 300 nm decreases substantially, and oxygen photodissociation rates fall below 10⁻⁹ per second per molecule at 30 km. With decreasing oxygen photodissociation rates, odd-oxygen species (oxygen atoms and ozone molecules) are hardly formed de novo (rather than being transmuted into each other by the other reactions), and the atomic oxygen needed for ozone creation is derived almost exclusively from ozone photodissociation. Thus, ozone becomes depleted below 30 km altitude and reaches very low concentrations at the tropopause.
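As a rough consistency check, the steady-state relations above can be combined with the stratospheric approximation that ozone photodissociation (reaction 3) dominates oxygen photodissociation (reaction 1). Setting rate 2 ≈ rate 3 and rate 1 = rate 4 and solving for the two unknown concentrations gives [O3]/[O2] = sqrt(J1·k2·[A] / (k4·J3)), where J1 and J3 are the photodissociation rates of reactions 1 and 3 and k2 and k4 the rate constants of reactions 2 and 4. The sketch below evaluates this with order-of-magnitude values near 30 km altitude; these numbers are illustrative assumptions, not measured constants:

```python
import math

J1 = 1e-11   # s^-1, oxygen photodissociation (reaction 1), assumed
J3 = 1e-3    # s^-1, ozone photodissociation (reaction 3), assumed
k2 = 6e-34   # cm^6 s^-1, ozone creation O + O2 + A (reaction 2), assumed
k4 = 8e-15   # cm^3 s^-1, ozone conversion O + O3 (reaction 4), assumed
A  = 3e17    # cm^-3, total air density near 30 km, assumed

# rate2 ~ rate3 gives [O] = J3*[O3] / (k2*[O2]*A); substituting into
# rate1 = rate4 yields the ratio below, independent of [O2] itself.
ratio = math.sqrt(J1 * k2 * A / (k4 * J3))
print(f"[O3]/[O2] ~ {ratio:.1e}")  # ~1.5e-05, within the 1e-6..1e-5 range quoted
```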
Troposphere
In the troposphere, ozone formation and destruction are no longer controlled by the ozone–oxygen cycle. Rather, tropospheric ozone chemistry is dominated today by industrial pollutants and gases of volcanic origin.
External links
Stratospheric Ozone: An Electronic Textbook
The Sun and the Earth's Climate
References
Cycle
Oxygen
Biogeochemical cycle
Atmospheric chemistry | Ozone–oxygen cycle | [
"Chemistry"
] | 2,213 | [
"Biogeochemical cycle",
"Biogeochemistry",
"nan"
] |
616,769 | https://en.wikipedia.org/wiki/Local%20Bubble | The Local Bubble, or Local Cavity, is a relative cavity in the interstellar medium (ISM) of the Orion Arm in the Milky Way. It contains the closest of celestial neighbours and among others, the Local Interstellar Cloud (which contains the Solar System), the neighbouring G-Cloud, the Ursa Major moving group (the closest stellar moving group) and the Hyades (the nearest open cluster). It is estimated to be at least 1000 light years in size, and is defined by its neutral-hydrogen density of about 0.05 atoms/cm3, or approximately one tenth of the average for the ISM in the Milky Way (0.5 atoms/cm3), and one sixth that of the Local Interstellar Cloud (0.3 atoms/cm3).
The exceptionally sparse gas of the Local Bubble is the result of supernovae that exploded within the past ten to twenty million years. Geminga, a pulsar in the constellation Gemini, was once thought to be the remnant of a single supernova that created the Local Bubble, but now multiple supernovae in subgroup B1 of the Pleiades moving group are thought to have been responsible, becoming a remnant supershell. Other research suggests that the subgroups Lower Centaurus–Crux (LCC) and Upper Centaurus–Lupus (UCL) of the Scorpius–Centaurus association created both the Local Bubble and the Loop I Bubble, with LCC responsible for the Local Bubble and UCL for the Loop I Bubble. It was found that 14 to 20 supernovae originated from LCC and UCL, which could have formed these bubbles.
Description
The Solar System has been traveling through the region currently occupied by the Local Bubble for the last five to ten million years. Its current location lies in the Local Interstellar Cloud (LIC), a minor region of denser material within the Bubble. The LIC formed where the Local Bubble and the Loop I Bubble met. The gas within the LIC has a density of approximately 0.3 atoms per cubic centimeter.
The Local Bubble is not spherical, but seems to be narrower in the galactic plane, becoming somewhat egg-shaped or elliptical, and may widen above and below the galactic plane, becoming shaped like an hourglass. It abuts other bubbles of less dense interstellar medium (ISM), including, in particular, the Loop I Bubble. The Loop I Bubble was cleared, heated and maintained by supernovae and stellar winds in the Scorpius–Centaurus association, some 500 light years from the Sun. The Loop I Bubble contains the star Antares (also known as α Sco, or Alpha Scorpii), as shown on the diagram above right. Several tunnels connect the cavities of the Local Bubble with the Loop I Bubble, called the "Lupus Tunnel". Other bubbles which are adjacent to the Local Bubble are the Loop II Bubble and the Loop III Bubble. In 2019, researchers found interstellar iron in Antarctica which they relate to the Local Interstellar Cloud, which might be related to the formation of the Local Bubble.
Observation
Launched in February 2003 and active until April 2008, a small space observatory called Cosmic Hot Interstellar Plasma Spectrometer (CHIPS or CHIPSat) examined the hot gas within the Local Bubble. The Local Bubble was also the region of interest for the Extreme Ultraviolet Explorer mission (1992–2001), which examined hot EUV sources within the bubble. Sources beyond the edge of the bubble were identified but attenuated by the denser interstellar medium. In 2019, the first 3D map of the Local Bubble was reported using observations of diffuse interstellar bands.
In 2020, the shape of the dusty envelope surrounding the Local Bubble was retrieved and modeled from 3D maps of the dust density obtained from stellar extinction data.
Impact on star formation
In January 2022, a paper in the journal Nature reported that, according to observations and modelling, the expanding surface of the bubble has swept up gas and debris and is responsible for the formation of all young, nearby stars.
These new stars are typically in molecular clouds like the Taurus molecular cloud and the open star cluster Pleiades.
Connection to radioactive isotopes on Earth
On Earth, several radioactive isotopes have been connected to supernovae occurring relatively near the Solar System. The most common source is deep-sea ferromanganese crusts. Such nodules grow constantly, depositing iron, manganese and other elements. Samples are divided into layers, which are dated, for example, with beryllium-10. Some of these layers have higher concentrations of radioactive isotopes. The isotope most commonly associated with supernovae on Earth is iron-60, found in deep-sea sediments, Antarctic snow, and lunar soil. Other isotopes are manganese-53 and plutonium-244 from deep-sea materials. Supernova-originated aluminium-26, which was expected from cosmic ray studies, was not confirmed. Iron-60 and manganese-53 have a peak 1.7–3.2 million years ago, and iron-60 has a second peak 6.5–8.7 million years ago. The older peak likely originated when the Solar System moved through the Orion–Eridanus superbubble, and the younger peak was generated when the Solar System entered the Local Bubble 4.5 million years ago. One of the supernovae creating the younger peak might have created the pulsar PSR B1706-16 and turned Zeta Ophiuchi into a runaway star. Both originated from UCL and were released by a supernova 1.78 ± 0.21 million years ago. Another explanation for the older peak is that it was produced by one supernova in the Tucana-Horologium association 7-9 million years ago.
See also
Gould Belt
List of nearest stars and brown dwarfs
List of nearby stellar associations and moving groups
List of Milky Way streams
Orion–Eridanus Superbubble
Orion Arm
Superbubble
References
Further reading
External links
Interstellar media
Superbubbles
Solar System | Local Bubble | [
"Astronomy"
] | 1,237 | [
"Interstellar media",
"Outer space",
"Solar System"
] |
616,775 | https://en.wikipedia.org/wiki/Loop%20I%20Bubble | The Loop I Bubble is a cavity in the interstellar medium (ISM) of the Orion Arm of the Milky Way. From our Sun's point of view, it is situated towards the Galactic Center of the Milky Way galaxy. Two conspicuous tunnels connect the Local Bubble with the Loop I Bubble cavity (the Lupus Tunnel). The Loop I Bubble is a supershell.
The Loop I Bubble is located roughly 100 parsecs, or 330 light years, from the Sun. The Loop I Bubble was created by supernovae and stellar winds in the Scorpius–Centaurus association, some 500 light years from the Sun. The Loop I Bubble contains the star Antares (also known as Alpha Scorpii). Several tunnels connect the cavities of the Local Bubble with the Loop I Bubble, called the "Lupus Tunnel".
See also
Local Bubble
Orion Arm
Superbubble
References
Interstellar media
Orion–Cygnus Arm | Loop I Bubble | [
"Astronomy"
] | 196 | [
"Stellar astronomy stubs",
"Interstellar media",
"Astronomy stubs",
"Outer space"
] |
616,833 | https://en.wikipedia.org/wiki/Cutler%27s%20resin | Cutler's resin, also known as cutler's pitch, is a waterproof adhesive used to secure a blade or device to a handle. It is made by including wax when making a pine pitch glue. Cutler's resin commonly consists of pine pitch, beeswax and/or carnauba wax, and usually employs a filler like charcoal, sawdust and/or animal dung to aid with the bond. It has been used for centuries by cutlers to attach knife and sword handles, and as a fastener for other tools and weapons.
References
Synthetic resins | Cutler's resin | [
"Chemistry"
] | 119 | [
"Synthetic materials",
"Synthetic resins"
] |
616,867 | https://en.wikipedia.org/wiki/Dwingeloo%201 | Dwingeloo 1 is a barred spiral galaxy about 10 million light-years away from the Earth, in the constellation Cassiopeia. It lies in the Zone of Avoidance and is heavily obscured by the Milky Way. The size and mass of Dwingeloo 1 are comparable to those of Triangulum Galaxy.
Dwingeloo 1 has two smaller satellite galaxies – Dwingeloo 2 and MB 3 – and is a member of the IC 342/Maffei Group of galaxies.
Discovery
The Dwingeloo 1 galaxy was discovered in 1994 by the Dwingeloo Obscured Galaxy Survey (DOGS) using the Dwingeloo Radio Observatory, which searched for neutral hydrogen (HI) radio emissions at the wavelength of 21 cm from objects in the Zone of Avoidance. In this zone gas and dust in the disk of the Milky Way galaxy block the light from the galaxies lying behind it.
The galaxy was, however, first noted as an unremarkable feature on Palomar Sky Survey plates earlier in the same year, but was not recognized as such. It was also independently discovered a few weeks later by another team of astronomers working with Effelsberg 100-m Radio Telescope.
Dwingeloo 1 was eventually named after the 25m radio telescope in the Netherlands that was used in the DOGS survey and first detected it.
Distance and group membership
Dwingeloo 1 is a highly obscured galaxy, which makes distance determination a difficult problem. The initial estimate, made soon after the discovery and based on the Tully–Fisher relation, was about 3 Mpc. Later, this value was slightly increased to 3.5–4 Mpc.
In 1999 another estimate was published, claiming a distance of more than 5 Mpc. It was based on the infrared Tully–Fisher relation. As of 2011, the distance to Dwingeloo 1 is thought to be approximately 3 Mpc, based on its likely membership in the IC 342/Maffei group.
Dwingeloo 1 has two smaller satellite galaxies. The first one, Dwingeloo 2, is an irregular galaxy, and the second, MB 3, is likely a dwarf spheroidal galaxy.
Properties
After the discovery Dwingeloo 1 was classified as a barred spiral galaxy. It has a central bar and two distinct spiral arms that begin at the ends of the bar at nearly right angles and wind counterclockwise. The length of the arms is up to 180°. The disk of the galaxy is inclined with respect to the observer, with the inclination angle being 50°. The galaxy recedes from the Milky Way at a speed of about 256 km/s.
The visible radius of Dwingeloo 1 is approximately 4.2', which at the distance of 3 Mpc corresponds to about 4 kpc. The neutral hydrogen is detected as far as 6 kpc (7.5') from the center. The total mass of the galaxy is about 1/4 that of the Milky Way out to the measured distance of 6 kpc or about 31 billion Solar masses.
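The angular-to-physical conversions quoted here follow from the small-angle relation, physical size ≈ distance × angle (in radians). A quick sketch, assuming the 3 Mpc distance adopted above:

```python
import math

distance_kpc = 3000  # 3 Mpc expressed in kpc (the adopted distance)
for arcmin, label in [(4.2, "visible radius"), (7.5, "HI radius")]:
    size_kpc = distance_kpc * math.radians(arcmin / 60)  # arcmin -> radians
    print(f"{label}: {size_kpc:.1f} kpc")
# -> visible radius: 3.7 kpc ("about 4 kpc"); HI radius: 6.5 kpc (quoted as 6 kpc)
```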
The distribution of the neutral hydrogen in Dwingeloo 1 is typical for barred spiral galaxies—it is rather flat with a minimum in the center or along the bar. The total mass of the neutral hydrogen is estimated at 370–450 million Solar masses. Dwingeloo 1 is a molecular gas-poor galaxy. The total mass of the molecular hydrogen does not exceed 10% of that of neutral hydrogen. Optical observations detected around 15 H II regions situated mainly along the spiral arms.
In its overall size and mass, the galaxy is comparable to Triangulum Galaxy.
See also
Dwingeloo 2
Maffei 1
Maffei 2
IC 342
References
External links
Barred spiral galaxies
IC 342/Maffei Group
Cassiopeia (constellation)
100170
Astronomical objects discovered in 1994
"Astronomy"
] | 789 | [
"Cassiopeia (constellation)",
"Constellations"
] |
616,868 | https://en.wikipedia.org/wiki/Dwingeloo%202 | Dwingeloo 2 is a small irregular galaxy discovered in 1996 and located about 10 million light-years away from the Earth. Its discovery was a result of the Dwingeloo Obscured Galaxy Survey (DOGS) of the Zone of Avoidance using the Dwingeloo Radio Observatory. Dwingeloo 2 is a companion galaxy of Dwingeloo 1.
Dwingeloo 2 was first detected at radio wavelengths from the 21 cm emission line of neutral atomic hydrogen (known to astronomers as HI) in the course of follow-up observations after the discovery of Dwingeloo 1. Dwingeloo 2 is thought to be a member of the IC 342/Maffei Group, a galaxy group adjacent to the Local Group. The galaxy recedes from the Milky Way at the speed of about 241 km/s.
The visible radius of Dwingeloo 2 is approximately 2′, which at the distance of 3 Mpc corresponds to about 2 kpc. Dwingeloo 2 has a well defined rotating HI disk inclined at approximately 69° with respect to observer. The distribution of the neutral hydrogen in Dwingeloo 2 is quite irregular, and it is detected as far as 3.2 kpc from the center of the galaxy. The total mass of the galaxy within this radius is estimated at 2.3 billion Solar masses, while the mass of the neutral hydrogen is estimated at 100 million Solar masses. The total mass of the galaxy is about five times less than that of Dwingeloo 1.
The irregular structure of Dwingeloo 2 is likely related to its interaction with the much larger nearby galaxy Dwingeloo 1, which lies at a distance of only 24 kpc from Dwingeloo 2.
References
External links
Irregular galaxies
IC 342/Maffei Group
Astronomical objects discovered in 1996
101304
Cassiopeia (constellation) | Dwingeloo 2 | [
"Astronomy"
] | 382 | [
"Cassiopeia (constellation)",
"Constellations"
] |
616,886 | https://en.wikipedia.org/wiki/Flicker%20%28screen%29 | Flicker is a visible change in brightness between cycles displayed on video displays. It applies to the refresh interval on cathode-ray tube (CRT) televisions and computer monitors, as well as plasma computer displays and televisions.
Occurrence
Flicker occurs on CRTs when they are driven at a low refresh rate, allowing the brightness to drop for time intervals sufficiently long to be noticed by a human eye – see persistence of vision and flicker fusion threshold. For most devices, the screen's phosphors quickly lose their excitation between sweeps of the electron gun, and the afterglow is unable to fill such gaps – see phosphor persistence. A refresh rate of 60 Hz on most screens will produce a visible "flickering" effect. Most people find that refresh rates of 70–90 Hz and above enable flicker-free viewing on CRTs. Use of refresh rates above 120 Hz is uncommon, as they provide little noticeable flicker reduction and limit available resolution.
Flatscreen plasma displays have a similar effect. The plasma pixels fade in brightness between refreshes.
In LCD screens, the LCD itself does not flicker; it preserves its opacity unchanged until updated for the next frame. However, in order to prevent accumulated damage, LCDs quickly alternate the voltage between positive and negative for each pixel, which is called 'polarity inversion'. Ideally, this would not be noticeable, because every pixel has the same brightness whether a positive or a negative voltage is applied. In practice, there is a small difference, which means that every pixel flickers at about 30 Hz. Screens that use opposite polarity per-line or per-pixel can reduce this effect compared to screens where the entire panel is at the same polarity; the type of screen can sometimes be detected using patterns designed to maximize the effect.
More of a concern is the LCD backlight. Earlier LCDs used fluorescent lamps which flickered at 100–120 Hz; newer fluorescently backlit LCDs use an electronic ballast that flickers at 25–60 kHz which is far outside the human perceptible range, and LED backlights have no inherent need to flicker at all. On top of any inherent backlight flicker, most fluorescent and LED backlight designs use digital PWM for some or all of their dimming range by switching on and off at rates from several kHz to as little as 180 Hz, though some flicker-free designs using true analog DC dimming exist.
Flicker is necessary for a film-based movie projector to block the light as the film is moved from one frame to the next. The standard framerate of 24 fps produces very obvious flicker, so even very early movie projectors added additional vanes to the rotating shutter to block light even when the film was not moving. Most common are three vanes, raising the rate to 72 Hz. Home film movie projectors (and early theater projectors) often have four vanes, to raise the 18 fps used by silent film to 72 Hz. Video projectors typically use either LCDs, which operate similarly to their desktop counterparts, or DLP mirrors, which flicker at 2.5–32 kHz, though "single-chip" color projectors switch between displaying a frame's red, green, and blue channels at as little as 180 Hz using a color wheel or RGB light source. For stereoscopic 3D, a single-image system can only display the left-eye or right-eye image at once, switching between them at 90–144 Hz, though this does have the advantage of reduced crosstalk versus two-image 3D projection. Movie projectors typically use an incandescent lamp or arc lamp, which does not itself flicker noticeably.
Older televisions used interlaced video, so among other artifacts, the image jumped one line at half the rate (25 or 30 Hz) that the image changes (50 or 60 Hz).
The exact refresh rate necessary to prevent the perception of flicker varies greatly based on the viewing environment. In a completely dark room, a sufficiently dim display can run as low as 30 Hz without visible flicker. At normal room and TV brightness this same display rate would produce flicker so severe as to be unwatchable.
The human eye is most sensitive to flicker at the edges of the human field of view (peripheral vision) and least sensitive at the center of gaze (the area being focused on). As a result, the greater the portion of our field of view that is occupied by a display, the greater is the need for high refresh rates. This is why computer monitor CRTs usually run at 70 to 90 Hz, while CRT TVs, which are viewed from further away, are seen as acceptable at 60 or 50 Hz (see analog television standards).
Chewing something crunchy, such as tortilla chips or granola, can induce flicker perception when the vibrations from chewing interact with the flicker rate of the display.
Software artifacts
Software can cause flicker effects by directly displaying an unintended intermediate image for a short time. For example, drawing a page of text by blanking the area to white first in the frame buffer, then drawing 'on top' of it, makes it possible for the blank region to appear momentarily onscreen. Usually this is much faster and easier to program than to directly set each pixel to its final value.
When it is not feasible to set each pixel only once, double buffering can be used: the program draws to an off-screen surface (with as much intermediate flicker as it likes) and then copies the finished image to the screen in a single step, so the visible pixels change only once per frame. While this technique cuts down on software flicker, it can also be very inefficient.
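A minimal sketch of the technique in Python, using nested lists to stand in for the visible frame buffer and the off-screen surface (real implementations would use graphics-API surfaces and a hardware blit; all names here are illustrative):

<syntaxhighlight lang="python">
# Minimal illustration of double buffering: all drawing happens in an
# off-screen buffer, which is copied to the visible buffer in one step,
# so the screen never shows a half-drawn intermediate image.

WIDTH, HEIGHT = 80, 25

def new_buffer(fill=" "):
    return [[fill] * WIDTH for _ in range(HEIGHT)]

visible = new_buffer()   # what the monitor shows
back = new_buffer()      # off-screen drawing surface

def draw_frame(text):
    # Draw with as many intermediate states as we like; only 'back' changes.
    for row in back:
        for x in range(WIDTH):
            row[x] = " "              # blank the buffer first...
    for x, ch in enumerate(text[:WIDTH]):
        back[0][x] = ch               # ...then draw 'on top' of it

    # Single copy: the visible pixels change exactly once per frame.
    for y in range(HEIGHT):
        visible[y][:] = back[y]

draw_frame("Hello, flicker-free world")
print("".join(visible[0]).rstrip())
</syntaxhighlight>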
Flicker is used intentionally by developers on low-end systems to create the illusion of more objects or colors/shades than are actually possible on the system, or as a speedy way of simulating transparency. While typically thought of as a mark of older systems like 16-bit game consoles, such flicker techniques continue to be used on new systems, as in the temporal dithering used to fake true color on most LCD monitors.
Video hardware outside the monitor can also cause flicker through many different timing and resolution-related artifacts such as screen tearing, z-fighting and aliasing.
Health effects
The flicker of a CRT monitor can cause various symptoms in those sensitive to it, such as eye strain, headaches in migraine sufferers, and seizures in people with epilepsy.
As the flicker is most clearly seen at the edge of our vision, there is no obvious risk in using a CRT, but prolonged use can cause a kind of retinal shock in which the flickering is seen even when looking away from the monitor. This can create a form of motion sickness: a discrepancy between the movement detected by the fluid in the inner ear and the motion we see. Symptoms include dizziness, fatigue, headaches and (sometimes extreme) nausea. The symptoms usually last only a few hours unless exposure has been over a long period, and generally disappear in less than a week without CRT use.
References
External links
Predicting flicker thresholds for video display terminals
Display technology
Television technology | Flicker (screen) | [
"Technology",
"Engineering"
] | 1,466 | [
"Information and communications technology",
"Electronic engineering",
"Television technology",
"Display technology"
] |
616,901 | https://en.wikipedia.org/wiki/Polyadenylation | Polyadenylation is the addition of a poly(A) tail to an RNA transcript, typically a messenger RNA (mRNA). The poly(A) tail consists of multiple adenosine monophosphates; in other words, it is a stretch of RNA that has only adenine bases. In eukaryotes, polyadenylation is part of the process that produces mature mRNA for translation. In many bacteria, the poly(A) tail promotes degradation of the mRNA. It, therefore, forms part of the larger process of gene expression.
The process of polyadenylation begins as the transcription of a gene terminates. The 3′-most segment of the newly made pre-mRNA is first cleaved off by a set of proteins; these proteins then synthesize the poly(A) tail at the RNA's 3′ end. In some genes these proteins add a poly(A) tail at one of several possible sites. Therefore, polyadenylation can produce more than one transcript from a single gene (alternative polyadenylation), similar to alternative splicing.
The poly(A) tail is important for the nuclear export, translation and stability of mRNA. The tail is shortened over time, and, when it is short enough, the mRNA is enzymatically degraded. However, in a few cell types, mRNAs with short poly(A) tails are stored for later activation by re-polyadenylation in the cytosol. In contrast, when polyadenylation occurs in bacteria, it promotes RNA degradation. This is also sometimes the case for eukaryotic non-coding RNAs.
mRNA molecules in both prokaryotes and eukaryotes have polyadenylated 3′-ends, with the prokaryotic poly(A) tails generally shorter and fewer mRNA molecules polyadenylated.
Background on RNA
RNAs are large biological molecules whose individual building blocks are called nucleotides. The name poly(A) tail (for polyadenylic acid tail) reflects the way RNA nucleotides are abbreviated, with a letter for the base the nucleotide contains (A for adenine, C for cytosine, G for guanine and U for uracil). RNAs are produced (transcribed) from a DNA template. By convention, RNA sequences are written in a 5′ to 3′ direction. The 5′ end is the part of the RNA molecule that is transcribed first, and the 3′ end is transcribed last. The 3′ end is also where the poly(A) tail is found on polyadenylated RNAs.
Messenger RNA (mRNA) is RNA that has a coding region that acts as a template for protein synthesis (translation). The rest of the mRNA, the untranslated regions, tune how active the mRNA is. There are also many RNAs that are not translated, called non-coding RNAs. Like the untranslated regions, many of these non-coding RNAs have regulatory roles.
Nuclear polyadenylation
Function
In nuclear polyadenylation, a poly(A) tail is added to an RNA at the end of transcription. On mRNAs, the poly(A) tail protects the mRNA molecule from enzymatic degradation in the cytoplasm and aids in transcription termination, export of the mRNA from the nucleus, and translation. Almost all eukaryotic mRNAs are polyadenylated, with the exception of animal replication-dependent histone mRNAs. These are the only mRNAs in eukaryotes that lack a poly(A) tail, ending instead in a stem-loop structure followed by a purine-rich sequence, termed histone downstream element, that directs where the RNA is cut so that the 3′ end of the histone mRNA is formed.
Many eukaryotic non-coding RNAs are always polyadenylated at the end of transcription. There are small RNAs where the poly(A) tail is seen only in intermediary forms and not in the mature RNA, as the ends are removed during processing; notable among these are microRNAs. However, for many long noncoding RNAs – an apparently large group of regulatory RNAs that, for example, includes the RNA Xist, which mediates X chromosome inactivation – a poly(A) tail is part of the mature RNA.
Mechanism
The processive polyadenylation complex in the nucleus of eukaryotes works on products of RNA polymerase II, such as precursor mRNA. Here, a multi-protein complex cleaves the 3′-most part of a newly produced RNA and polyadenylates the end produced by this cleavage. The cleavage is catalysed by the enzyme CPSF and occurs 10–30 nucleotides downstream of its binding site. This site often has the polyadenylation signal sequence AAUAAA on the RNA, but variants of it that bind more weakly to CPSF exist. Two other proteins add specificity to the binding to an RNA: CstF and CFI. CstF binds to a GU-rich region further downstream of CPSF's site. CFI recognises a third site on the RNA (a set of UGUAA sequences in mammals) and can recruit CPSF even if the AAUAAA sequence is missing. The polyadenylation signal – the sequence motif recognised by the RNA cleavage complex – varies between groups of eukaryotes. Most human polyadenylation sites contain the AAUAAA sequence, but this sequence is less common in plants and fungi.
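To illustrate the sequence logic just described, here is a toy Python scan for the canonical AAUAAA signal together with the 10–30 nucleotide window in which cleavage would be expected. This is only a sketch: real cleavage-site choice also depends on CstF and CFI binding and on weaker signal variants, which it ignores.

<syntaxhighlight lang="python">
# Toy scan for candidate cleavage windows downstream of the canonical
# AAUAAA polyadenylation signal. The window bounds (10-30 nt downstream
# of the signal) follow the description in the text.

def candidate_cleavage_windows(pre_mrna, signal="AAUAAA", window=(10, 30)):
    """Yield (signal_start, window_start, window_end) for each signal hit."""
    pos = pre_mrna.find(signal)
    while pos != -1:
        end_of_signal = pos + len(signal)
        yield pos, end_of_signal + window[0], end_of_signal + window[1]
        pos = pre_mrna.find(signal, pos + 1)

rna = "GCCAAUAAAGCUAGCUAGCUAGCUAGCUAGCUAGCGGCAUCGAUCG"
for sig, lo, hi in candidate_cleavage_windows(rna):
    print(f"signal at {sig}; likely cleavage between positions {lo} and {hi}")
</syntaxhighlight>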
The RNA is typically cleaved before transcription termination, as CstF also binds to RNA polymerase II. Through a poorly understood mechanism (as of 2002), it signals for RNA polymerase II to slip off of the transcript. Cleavage also involves the protein CFII, though it is unknown how. The cleavage site associated with a polyadenylation signal can vary by up to some 50 nucleotides.
When the RNA is cleaved, polyadenylation starts, catalysed by polyadenylate polymerase. Polyadenylate polymerase builds the poly(A) tail by adding adenosine monophosphate units from adenosine triphosphate to the RNA, cleaving off pyrophosphate. Another protein, PAB2, binds to the new, short poly(A) tail and increases the affinity of polyadenylate polymerase for the RNA. When the poly(A) tail is approximately 250 nucleotides long the enzyme can no longer bind to CPSF and polyadenylation stops, thus determining the length of the poly(A) tail. CPSF is in contact with RNA polymerase II, allowing it to signal the polymerase to terminate transcription. When RNA polymerase II reaches a "termination sequence" (5′-TTTATT-3′ on the DNA template and 5′-AAUAAA-3′ on the primary transcript), the end of transcription is signaled. The polyadenylation machinery is also physically linked to the spliceosome, a complex that removes introns from RNAs.
Downstream effects
The poly(A) tail acts as the binding site for poly(A)-binding protein. Poly(A)-binding protein promotes export from the nucleus and translation, and inhibits degradation. This protein binds to the poly(A) tail prior to mRNA export from the nucleus and in yeast also recruits poly(A) nuclease, an enzyme that shortens the poly(A) tail and allows the export of the mRNA. Poly(A)-binding protein is exported to the cytoplasm with the RNA. mRNAs that are not exported are degraded by the exosome. Poly(A)-binding protein can also bind to, and thus recruit, several proteins that affect translation; one of these is initiation factor 4G, which in turn recruits the 40S ribosomal subunit. However, a poly(A) tail is not required for the translation of all mRNAs. Further, poly(A) tailing (oligo-adenylation) can determine the fate of RNA molecules that are usually not poly(A)-tailed (such as small non-coding RNAs) and thereby induce their decay.
Deadenylation
In eukaryotic somatic cells, the poly(A) tails of most mRNAs in the cytoplasm gradually get shorter, and mRNAs with shorter poly(A) tails are translated less and degraded sooner. However, it can take many hours before an mRNA is degraded. This deadenylation and degradation process can be accelerated by microRNAs complementary to the 3′ untranslated region of an mRNA. In immature egg cells, mRNAs with shortened poly(A) tails are not degraded, but are instead stored and translationally inactive. These short-tailed mRNAs are activated by cytoplasmic polyadenylation after fertilisation, during egg activation.
In animals, poly(A) ribonuclease (PARN) can bind to the 5′ cap and remove nucleotides from the poly(A) tail. The level of access to the 5′ cap and poly(A) tail is important in controlling how soon the mRNA is degraded. PARN deadenylates less if the RNA is bound by the initiation factors 4E (at the 5′ cap) and 4G (at the poly(A) tail), which is why translation reduces deadenylation. The rate of deadenylation may also be regulated by RNA-binding proteins. Additionally, RNA triple helix structures and RNA motifs such as the poly(A) tail 3′ end binding pocket retard the deadenylation process and inhibit poly(A) tail removal. Once the poly(A) tail is removed, the decapping complex removes the 5′ cap, leading to degradation of the RNA. Several other proteins are involved in deadenylation in budding yeast and human cells, most notably the CCR4-Not complex.
Cytoplasmic polyadenylation
There is polyadenylation in the cytosol of some animal cell types, namely in the germline, during early embryogenesis and in post-synaptic sites of nerve cells. This lengthens the poly(A) tail of an mRNA with a shortened poly(A) tail, so that the mRNA will be translated. These shortened poly(A) tails are often less than 20 nucleotides, and are lengthened to around 80–150 nucleotides.
In the early mouse embryo, cytoplasmic polyadenylation of maternal RNAs from the egg cell allows the cell to survive and grow even though transcription does not start until the middle of the 2-cell stage (4-cell stage in human). In the brain, cytoplasmic polyadenylation is active during learning and could play a role in long-term potentiation, the strengthening of signal transmission from one nerve cell to another in response to nerve impulses, which is important for learning and memory formation.
Cytoplasmic polyadenylation requires the RNA-binding proteins CPSF and CPEB, and can involve other RNA-binding proteins like Pumilio. Depending on the cell type, the polymerase can be the same type of polyadenylate polymerase (PAP) that is used in the nuclear process, or the cytoplasmic polymerase GLD-2.
Alternative polyadenylation
Many protein-coding genes have more than one polyadenylation site, so a gene can code for several mRNAs that differ in their 3′ end. The 3′ region of a transcript contains many polyadenylation signals (PAS). When more proximal (closer to the 5′ end) PAS sites are used, the 3′ untranslated region (3′ UTR) of the transcript is shortened. Studies in both humans and flies have shown tissue-specific alternative polyadenylation (APA): neuronal tissues prefer distal PAS usage, leading to longer 3′ UTRs, while testis tissues prefer proximal PAS usage, leading to shorter 3′ UTRs. Studies have shown a correlation between a gene's conservation level and its tendency to undergo alternative polyadenylation, with highly conserved genes exhibiting more APA; highly expressed genes follow the same pattern. Ribo-sequencing data (sequencing of only the mRNAs inside ribosomes) has shown that mRNA isoforms with shorter 3′ UTRs are more likely to be translated.
Since alternative polyadenylation changes the length of the 3' UTR, it can also change which binding sites are available for microRNAs in the 3′ UTR. MicroRNAs tend to repress translation and promote degradation of the mRNAs they bind to, although there are examples of microRNAs that stabilise transcripts. Alternative polyadenylation can also shorten the coding region, thus making the mRNA code for a different protein, but this is much less common than just shortening the 3′ untranslated region.
The choice of poly(A) site can be influenced by extracellular stimuli and depends on the expression of the proteins that take part in polyadenylation. For example, the expression of CstF-64, a subunit of cleavage stimulatory factor (CstF), increases in macrophages in response to lipopolysaccharides (a group of bacterial compounds that trigger an immune response). This results in the selection of weak poly(A) sites and thus shorter transcripts. This removes regulatory elements in the 3′ untranslated regions of mRNAs for defense-related products like lysozyme and TNF-α. These mRNAs then have longer half-lives and produce more of these proteins. RNA-binding proteins other than those in the polyadenylation machinery can also affect whether a polyadenylation site is used, as can DNA methylation near the polyadenylation signal. In addition, numerous other components involved in transcription, splicing or other mechanisms regulating RNA biology can affect APA.
Tagging for degradation in eukaryotes
For many non-coding RNAs, including tRNA, rRNA, snRNA, and snoRNA, polyadenylation is a way of marking the RNA for degradation, at least in yeast. This polyadenylation is done in the nucleus by the TRAMP complex, which adds a tail of around 4 nucleotides to the 3′ end. The RNA is then degraded by the exosome. Poly(A) tails have also been found on human rRNA fragments, both in the form of homopolymeric (A only) and heteropolymeric (mostly A) tails.
In prokaryotes and organelles
In many bacteria, both mRNAs and non-coding RNAs can be polyadenylated. This poly(A) tail promotes degradation by the degradosome, which contains two RNA-degrading enzymes: polynucleotide phosphorylase and RNase E. Polynucleotide phosphorylase binds to the 3′ end of RNAs and the 3′ extension provided by the poly(A) tail allows it to bind to the RNAs whose secondary structure would otherwise block the 3′ end. Successive rounds of polyadenylation and degradation of the 3′ end by polynucleotide phosphorylase allows the degradosome to overcome these secondary structures. The poly(A) tail can also recruit RNases that cut the RNA in two. These bacterial poly(A) tails are about 30 nucleotides long.
In groups as different as animals and trypanosomes, the mitochondria contain both stabilising and destabilising poly(A) tails. Destabilising polyadenylation targets both mRNA and noncoding RNAs. The poly(A) tails are 43 nucleotides long on average. The stabilising ones start at the stop codon, and without them the stop codon (UAA) is not complete, as the genome only encodes the U or UA part. Plant mitochondria have only destabilising polyadenylation. Mitochondrial polyadenylation has never been observed in either budding or fission yeast.
While many bacteria and mitochondria have polyadenylate polymerases, they also have another type of polyadenylation, performed by polynucleotide phosphorylase itself. This enzyme is found in bacteria, mitochondria, plastids and as a constituent of the archaeal exosome (in those archaea that have an exosome). It can synthesise a 3′ extension where the vast majority of the bases are adenines. Like in bacteria, polyadenylation by polynucleotide phosphorylase promotes degradation of the RNA in plastids and likely also archaea.
Evolution
Although polyadenylation is seen in almost all organisms, it is not universal. However, the wide distribution of this modification and the fact that it is present in organisms from all three domains of life imply that the last universal common ancestor of all living organisms presumably had some form of polyadenylation system. A few organisms do not polyadenylate mRNA, which implies that they have lost their polyadenylation machineries during evolution. Although no examples of eukaryotes that lack polyadenylation are known, mRNAs from the bacterium Mycoplasma gallisepticum and the salt-tolerant archaean Haloferax volcanii lack this modification.
The most ancient polyadenylating enzyme is polynucleotide phosphorylase. This enzyme is part of both the bacterial degradosome and the archaeal exosome, two closely related complexes that recycle RNA into nucleotides. This enzyme degrades RNA by attacking the bond between the 3′-most nucleotides with a phosphate, breaking off a nucleoside diphosphate. This reaction is reversible, and so the enzyme can also extend RNA with more nucleotides. The heteropolymeric tail added by polynucleotide phosphorylase is very rich in adenine. The choice of adenine is most likely the result of ADP concentrations being higher than those of other nucleotides as a consequence of using ATP as an energy currency, making adenine more likely to be incorporated in this tail in early lifeforms. It has been suggested that the involvement of adenine-rich tails in RNA degradation prompted the later evolution of polyadenylate polymerases (the enzymes that produce poly(A) tails with no other nucleotides in them).
Polyadenylate polymerases are not as ancient. They have separately evolved in both bacteria and eukaryotes from CCA-adding enzyme, which is the enzyme that completes the 3′ ends of tRNAs. Its catalytic domain is homologous to that of other polymerases. It is presumed that the horizontal transfer of bacterial CCA-adding enzyme to eukaryotes allowed the archaeal-like CCA-adding enzyme to switch function to a poly(A) polymerase. Some lineages, like archaea and cyanobacteria, never evolved a polyadenylate polymerase.
Polyadenylate tails are observed in several RNA viruses, including Influenza A, Coronavirus, Alfalfa mosaic virus, and Duck Hepatitis A. Some viruses, such as HIV-1 and Poliovirus, inhibit the cell's poly-A binding protein (PABPC1) in order to emphasize their own genes' expression over the host cell's.
History
Poly(A) polymerase was first identified in 1960 as an enzymatic activity in extracts made from cell nuclei that could polymerise ATP, but not ADP, into polyadenine. Although identified in many types of cells, this activity had no known function until 1971, when poly(A) sequences were found in mRNAs. The only function of these sequences was thought at first to be protection of the 3′ end of the RNA from nucleases, but later the specific roles of polyadenylation in nuclear export and translation were identified. The polymerases responsible for polyadenylation were first purified and characterized in the 1960s and 1970s, but the large number of accessory proteins that control this process were discovered only in the early 1990s.
See also
SV40
References
Further reading
External links
Gene expression
Messenger RNA | Polyadenylation | [
"Chemistry",
"Biology"
] | 4,282 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
616,939 | https://en.wikipedia.org/wiki/Generic%20routing%20encapsulation | Generic routing encapsulation (GRE) is a tunneling protocol developed by Cisco Systems that can encapsulate a wide variety of network layer protocols inside virtual point-to-point links or point-to-multipoint links over an Internet Protocol network.
Example uses
In conjunction with PPTP to create VPNs.
In conjunction with IPsec VPNs to allow passing of routing information between connected networks.
In mobility management protocols.
In A8/A10 interfaces to encapsulate IP data to/from Packet Control Function (PCF).
Linux and BSD can establish ad-hoc IP over GRE tunnels which are interoperable with Cisco equipment.
Carrying traffic from a distributed denial of service (DDoS) protected appliance to an unprotected endpoint.
Example protocol stack
Judged by the principles of protocol layering in the OSI model, protocol encapsulation (not specifically GRE) breaks the layering order. It may be viewed as a separator between two different protocol stacks, one acting as a carrier for the other.
Delivery protocols
GRE packets that are encapsulated directly within IP use IP protocol number 47 in the IPv4 header's Protocol field or the IPv6 header's Next Header field.
For performance reasons, GRE can also be encapsulated in UDP packets; the added UDP header allows equal-cost multi-path routing to distribute tunneled traffic across several paths, which may improve throughput.
Packet header
Extended GRE packet header (RFC 2890)
The extended version of the GRE packet header is represented below:
{| class="wikitable" style="text-align: center"
|+Extended GRE header format
|-
!style="border-bottom:none; border-right:none;"|Offsets
!style="border-left:none;"|Octet
!colspan="8"|0
!colspan="8"|1
!colspan="8"|2
!colspan="8"|3
|-
!style="border-top: none"|Octet
!Bit
!style="width:2.75%;"|0
!style="width:2.75%;"|1
!style="width:2.75%;"|2
!style="width:2.75%;"|3
!style="width:2.75%;"|4
!style="width:2.75%;"|5
!style="width:2.75%;"|6
!style="width:2.75%;"|7
!style="width:2.75%;"|8
!style="width:2.75%;"|9
!style="width:2.75%;"|10
!style="width:2.75%;"|11
!style="width:2.75%;"|12
!style="width:2.75%;"|13
!style="width:2.75%;"|14
!style="width:2.75%;"|15
!style="width:2.75%;"|16
!style="width:2.75%;"|17
!style="width:2.75%;"|18
!style="width:2.75%;"|19
!style="width:2.75%;"|20
!style="width:2.75%;"|21
!style="width:2.75%;"|22
!style="width:2.75%;"|23
!style="width:2.75%;"|24
!style="width:2.75%;"|25
!style="width:2.75%;"|26
!style="width:2.75%;"|27
!style="width:2.75%;"|28
!style="width:2.75%;"|29
!style="width:2.75%;"|30
!style="width:2.75%;"|31
|-
!0
!0
|C
|
|K
|S
|colspan="9"|Reserved 0
|colspan="3"|Version
|colspan="16"|Protocol Type
|-
!4
!32
|colspan="16"|Checksum (optional)
|colspan="16"|Reserved 1 (optional)
|-
!8
!64
|colspan="32"|Key (optional)
|-
!12
!96
|colspan="32"|Sequence Number (optional)
|}
C (1 bit) Checksum bit. Set to 1 if a checksum is present.
K (1 bit) Key bit. Set to 1 if a key is present.
S (1 bit) Sequence number bit. Set to 1 if a sequence number is present.
Reserved 0 (9 bits) Reserved bits; set to 0.
Version (3 bits) GRE Version number; set to 0.
Protocol Type (16 bits) Indicates the EtherType of the encapsulated payload. (For IPv4, this is 0x0800.)
Checksum (16 bits) Present if the C bit is set; contains the checksum for the GRE header and payload.
Reserved 1 (16 bits) Present if the C bit is set; its contents are set to 0.
Key (32 bits) Present if the K bit is set; contains an application-specific key value.
Sequence Number (32 bits) Present if the S bit is set; contains a sequence number for the GRE packet.
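As a concrete illustration of the layout above, the following Python sketch packs an extended GRE header with the standard struct module. The field offsets follow the table; the key and sequence number values are arbitrary examples.

<syntaxhighlight lang="python">
# Sketch of packing an RFC 2890 extended GRE header. GRE headers are
# transmitted big-endian, hence the "!" (network byte order) format.
import struct

def pack_gre_header(proto_type, key=None, seq=None, checksum=None):
    flags = 0
    if checksum is not None:
        flags |= 0x8000                 # C bit (bit 0)
    if key is not None:
        flags |= 0x2000                 # K bit (bit 2)
    if seq is not None:
        flags |= 0x1000                 # S bit (bit 3)
    # Reserved 0 and Version stay 0, so nothing more to set.
    header = struct.pack("!HH", flags, proto_type)
    if checksum is not None:
        header += struct.pack("!HH", checksum, 0)   # Checksum, Reserved 1
    if key is not None:
        header += struct.pack("!I", key)
    if seq is not None:
        header += struct.pack("!I", seq)
    return header

# IPv4 payload (Protocol Type 0x0800) with a key and a sequence number:
hdr = pack_gre_header(0x0800, key=0xDEADBEEF, seq=1)
print(hdr.hex())    # 30000800deadbeef00000001
</syntaxhighlight>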
Standard GRE packet header (RFC 2784)
A standard GRE packet header structure is represented in the diagram below.
{| class="wikitable" style="text-align: center"
|+Standard GRE header format
|-
!style="border-bottom:none; border-right:none;"|Offsets
!style="border-left:none;"|Octet
!colspan="8"|0
!colspan="8"|1
!colspan="8"|2
!colspan="8"|3
|-
!style="border-top: none"|Octet
!Bit
!style="width:2.75%;"|0
!style="width:2.75%;"|1
!style="width:2.75%;"|2
!style="width:2.75%;"|3
!style="width:2.75%;"|4
!style="width:2.75%;"|5
!style="width:2.75%;"|6
!style="width:2.75%;"|7
!style="width:2.75%;"|8
!style="width:2.75%;"|9
!style="width:2.75%;"|10
!style="width:2.75%;"|11
!style="width:2.75%;"|12
!style="width:2.75%;"|13
!style="width:2.75%;"|14
!style="width:2.75%;"|15
!style="width:2.75%;"|16
!style="width:2.75%;"|17
!style="width:2.75%;"|18
!style="width:2.75%;"|19
!style="width:2.75%;"|20
!style="width:2.75%;"|21
!style="width:2.75%;"|22
!style="width:2.75%;"|23
!style="width:2.75%;"|24
!style="width:2.75%;"|25
!style="width:2.75%;"|26
!style="width:2.75%;"|27
!style="width:2.75%;"|28
!style="width:2.75%;"|29
!style="width:2.75%;"|30
!style="width:2.75%;"|31
|-
!0
!0
|C
|colspan="12"|Reserved 0
|colspan="3"|Version
|colspan="16"|Protocol Type
|-
!4
!32
|colspan="16"|Checksum (optional)
|colspan="16"|Reserved 1 (optional)
|}
C (1 bit) Checksum bit. Set to 1 if a checksum is present.
Reserved 0 (12 bits) Reserved bits; set to 0.
Version (3 bits) GRE Version number; set to 0.
Protocol Type (16 bits) Indicates the EtherType of the encapsulated payload. (For IPv4, this is 0x0800; for IPv6, it is 0x86DD.)
Checksum (16 bits) Present if the C bit is set; contains the checksum for the GRE header and payload.
Reserved 1 (16 bits) Present if the C bit is set; its contents are set to 0.
Original GRE packet header (RFC 1701)
The original packet header structure, shown below, was superseded by the newer structures above:
{| class="wikitable" style="text-align: center"
|+Original GRE header format
|-
!style="border-bottom:none; border-right:none;"|Offsets
!style="border-left:none;"|Octet
!colspan="8"|0
!colspan="8"|1
!colspan="8"|2
!colspan="8"|3
|-
!style="border-top: none"|Octet
!Bit
!style="width:2.75%;"|0
!style="width:2.75%;"|1
!style="width:2.75%;"|2
!style="width:2.75%;"|3
!style="width:2.75%;"|4
!style="width:2.75%;"|5
!style="width:2.75%;"|6
!style="width:2.75%;"|7
!style="width:2.75%;"|8
!style="width:2.75%;"|9
!style="width:2.75%;"|10
!style="width:2.75%;"|11
!style="width:2.75%;"|12
!style="width:2.75%;"|13
!style="width:2.75%;"|14
!style="width:2.75%;"|15
!style="width:2.75%;"|16
!style="width:2.75%;"|17
!style="width:2.75%;"|18
!style="width:2.75%;"|19
!style="width:2.75%;"|20
!style="width:2.75%;"|21
!style="width:2.75%;"|22
!style="width:2.75%;"|23
!style="width:2.75%;"|24
!style="width:2.75%;"|25
!style="width:2.75%;"|26
!style="width:2.75%;"|27
!style="width:2.75%;"|28
!style="width:2.75%;"|29
!style="width:2.75%;"|30
!style="width:2.75%;"|31
|-
!0
!0
|C
|R
|K
|S
|s
|colspan="3"|Recur
|colspan="5"|Flags
|colspan="3"|Version
|colspan="16"|Protocol Type
|-
!4
!32
|colspan="16"|Checksum (optional)
|colspan="16"|Offset (optional)
|-
!8
!64
|colspan="32"|Key (optional)
|-
!12
!96
|colspan="32"|Sequence Number (optional)
|-
!16
!128
|colspan="32"|Routing (optional, variable length)
|}
The original GRE RFC defined further fields in the packet header which became obsolete in the current standard:
C (1 bit) Checksum bit. Set to 1 if a checksum is present.
R (1 bit) Routing Bit. Set to 1 if Routing and Offset information are present.
K (1 bit) Key bit. Set to 1 if a key is present.
S (1 bit) Sequence number bit. Set to 1 if a sequence number is present.
s (1 bit) Strict source route bit.
Recur (3 bits) Recursion control bits.
Flags (5 bits) Reserved for future use, set to 0.
Version (3 bits) Set to 0.
Protocol Type (16 bits) Indicates the EtherType of the encapsulated payload.
Checksum (16 bits) Present if the C bit is set; contains the checksum for the GRE header and payload.
Offset (16 bits) Present if the R or C bit is set; contains valid information only if the R bit is set. Indicates the offset within the Routing field to the active source route entry.
Key (32 bits) Present if the K bit is set; contains an application-specific key value.
Sequence Number (32 bits) Present if the S bit is set; contains a sequence number for the GRE packet.
Routing (variable) Present if the R bit is set; contains a list of source route entries and is therefore of variable length.
PPTP GRE packet header
The Point-to-Point Tunneling Protocol (PPTP) uses a variant GRE packet header structure, represented below. PPTP creates a GRE tunnel through which the PPTP GRE packets are sent.
{| class="wikitable" style="text-align: center"
|+PPTP GRE header format
|-
!style="border-bottom:none; border-right:none;"|Offsets
!style="border-left:none;"|Octet
!colspan="8"|0
!colspan="8"|1
!colspan="8"|2
!colspan="8"|3
|-
!style="border-top: none"|Octet
!Bit
!style="width:2.75%;"|0
!style="width:2.75%;"|1
!style="width:2.75%;"|2
!style="width:2.75%;"|3
!style="width:2.75%;"|4
!style="width:2.75%;"|5
!style="width:2.75%;"|6
!style="width:2.75%;"|7
!style="width:2.75%;"|8
!style="width:2.75%;"|9
!style="width:2.75%;"|10
!style="width:2.75%;"|11
!style="width:2.75%;"|12
!style="width:2.75%;"|13
!style="width:2.75%;"|14
!style="width:2.75%;"|15
!style="width:2.75%;"|16
!style="width:2.75%;"|17
!style="width:2.75%;"|18
!style="width:2.75%;"|19
!style="width:2.75%;"|20
!style="width:2.75%;"|21
!style="width:2.75%;"|22
!style="width:2.75%;"|23
!style="width:2.75%;"|24
!style="width:2.75%;"|25
!style="width:2.75%;"|26
!style="width:2.75%;"|27
!style="width:2.75%;"|28
!style="width:2.75%;"|29
!style="width:2.75%;"|30
!style="width:2.75%;"|31
|-
!0
!0
|C
|R
|K
|S
|s
|colspan="3"|Recur
|colspan="1"|A
|colspan="4"|Flags
|colspan="3"|Version
|colspan="16"|Protocol Type
|-
!4
!32
|colspan="16"|Key Payload Length
|colspan="16"|Key Call ID
|-
!8
!64
|colspan="32"|Sequence Number (optional)
|-
!12
!96
|colspan="32"|Acknowledgement Number (optional)
|-
|}
C (1 bit) Checksum bit. For PPTP GRE packets, this is set to 0.
R (1 bit) Routing bit. For PPTP GRE packets, this is set to 0.
K (1 bit) Key bit. For PPTP GRE packets, this is set to 1. (All PPTP GRE packets carry a key.)
S (1 bit) Sequence number bit. Set to 1 if a sequence number is supplied, indicating a PPTP GRE data packet.
s (1 bit) Strict source route bit. For PPTP GRE packets, this is set to 0.
Recur (3 bits) Recursion control bits. For PPTP GRE packets, these are set to 0.
A (1 bit) Acknowledgment number present. Set to 1 if an acknowledgment number is supplied, indicating a PPTP GRE acknowledgment packet.
Flags (4 bits) Flag bits. For PPTP GRE packets, these are set to 0.
Version (3 bits) GRE Version number. For PPTP GRE packets, this is set to 1.
Protocol Type (16 bits) For PPTP GRE packets, this is set to 0x880B.
Key Payload Length (16 bits) Contains the size of the payload, not including the GRE header.
Key Call ID (16 bits) Contains the Peer's Call ID for the session to which the packet belongs.
Sequence Number (32 bits) Present if the S bit is set; contains the GRE payload sequence number.
Acknowledgement Number (32 bits) Present if the A bit is set; contains the sequence number of the highest GRE payload packet received by the sender.
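The following Python sketch parses this variant header according to the layout above. The example packet is hand-built; no checksum handling is needed because PPTP GRE packets never carry one.

<syntaxhighlight lang="python">
# Sketch of parsing a PPTP "enhanced GRE" header. 'data' is assumed
# to start at the GRE header (IP headers already stripped).
import struct

def parse_pptp_gre(data):
    flags, proto = struct.unpack_from("!HH", data, 0)
    if proto != 0x880B or (flags & 0x0007) != 1:
        raise ValueError("not a PPTP GRE (version 1, type 0x880B) packet")
    payload_len, call_id = struct.unpack_from("!HH", data, 4)
    offset, seq, ack = 8, None, None
    if flags & 0x1000:                  # S bit: a data packet
        (seq,) = struct.unpack_from("!I", data, offset)
        offset += 4
    if flags & 0x0080:                  # A bit: acknowledgment present
        (ack,) = struct.unpack_from("!I", data, offset)
        offset += 4
    return {"payload_len": payload_len, "call_id": call_id,
            "seq": seq, "ack": ack, "payload": data[offset:]}

# A data packet (K and S bits set, version 1) for call 7, sequence 1:
pkt = struct.pack("!HHHHI", 0x3001, 0x880B, 4, 7, 1) + b"\xde\xad\xbe\xef"
print(parse_pptp_gre(pkt))
</syntaxhighlight>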
Standards
RFC 1701: Generic Routing Encapsulation (GRE) (informational)
RFC 1702: Generic Routing Encapsulation over IPv4 networks (informational)
RFC 2637: Point-to-Point Tunneling Protocol (informational)
RFC 2784: Generic Routing Encapsulation (GRE) (proposed standard, updated by RFC 2890)
RFC 2890: Key and Sequence Number Extensions to GRE (proposed standard)
RFC 8086: GRE-in-UDP Encapsulation (proposed standard)
See also
Network Virtualization using Generic Routing Encapsulation - carries L2 packets over GRE
GPRS Tunnelling Protocol - GTP-U is similar to GRE and used in cellular networks
References
External links
Generic Routing Encapsulation, Subprotocol homepage at Cisco
Generic Routing Encapsulation, Entry in Cisco DocWiki (formerly known as the "Internetworking Technology Handbook")
Tunneling protocols
Cisco Systems | Generic routing encapsulation | [
"Engineering"
] | 4,353 | [
"Computer networks engineering",
"Tunneling protocols"
] |
616,964 | https://en.wikipedia.org/wiki/John%20Louis%20Emil%20Dreyer | John Louis Emil Dreyer (13 February 1852 – 14 September 1926), also Johan Ludvig Emil Dreyer, was a Danish astronomer who spent most of his career working in Ireland. He spent the last decade of his life in Oxford, England.
Life
Dreyer was born in Copenhagen. His father, Lieutenant General John Christopher Dreyer, was the Danish Minister for War and the Navy. When he was 14 he became interested in astronomy and regularly visited Hans Schjellerup at the Copenhagen observatory. He was educated in Copenhagen, taking an MA in 1872; the same university later awarded him a PhD, in 1874. In 1874, at the age of 22, he went to Parsonstown, Ireland, where he worked as the assistant of Lord Rosse (the son and successor of the Lord Rosse who built the Leviathan of Parsonstown telescope).
In 1878 he moved to Dunsink, the site of the Trinity College Observatory of Dublin University, to work for Robert Stawell Ball. In 1882 he relocated again, this time to Armagh Observatory, where he served as Director until his retirement in 1916. In 1885 he became a British citizen. In 1916 he and his wife Kate moved to Oxford, where Dreyer worked on editing the works of Tycho Brahe.
He won the Gold Medal of the Royal Astronomical Society in 1916 and served as the society's president from 1923 until 1925. He died on 14 September 1926 in Oxford, where he is buried in Wolvercote Cemetery.
A crater on the far side of the Moon is named after him.
Works
Dreyer compiled the New General Catalogue of Nebulae and Clusters of Stars, basing it on William Herschel's Catalogue of Nebulae, as well as two supplementary Index Catalogues. The NGC and IC catalogue designations are still widely used.
Dreyer was also a historian of astronomy. In 1890 he published a biography of Danish astronomer Tycho Brahe, and in his later years he edited Tycho's publications and unpublished correspondence. These were published in a 15-volume edition, Opera Omnia, the last volume of which was published after his death.
His book History of the Planetary Systems from Thales to Kepler (1905) is currently printed under the title A History of Astronomy from Thales to Kepler.
He co-edited the first official history of the Royal Astronomical Society along with Herbert Hall Turner, History of the Royal Astronomical Society 1820–1920 (1923, reprinted 1987).
References
External links
Biography with picture
1852 births
1926 deaths
19th-century Danish astronomers
Danish expatriates in the United Kingdom
Historians of astronomy
19th-century British astronomers
20th-century British astronomers
Recipients of the Gold Medal of the Royal Astronomical Society
Presidents of the Royal Astronomical Society
Burials at Wolvercote Cemetery | John Louis Emil Dreyer | [
"Astronomy"
] | 561 | [
"People associated with astronomy",
"Historians of astronomy",
"History of astronomy"
] |
616,985 | https://en.wikipedia.org/wiki/Computability%20logic | Computability logic (CoL) is a research program and mathematical framework for redeveloping logic as a systematic formal theory of computability, as opposed to classical logic, which is a formal theory of truth. It was introduced and so named by Giorgi Japaridze in 2003.
In classical logic, formulas represent true/false statements. In CoL, formulas represent computational problems. In classical logic, the validity of an argument depends only on its form, not on its meaning. In CoL, validity means being always computable. More generally, classical logic tells us when the truth of a given statement always follows from the truth of a given set of other statements. Similarly, CoL tells us when the computability of a given problem A always follows from the computability of other given problems B1,...,Bn. Moreover, it provides a uniform way to actually construct a solution (algorithm) for such an A from any known solutions of B1,...,Bn.
CoL formulates computational problems in their most general—interactive—sense. CoL defines a computational problem as a game played by a machine against its environment. Such a problem is computable if there is a machine that wins the game against every possible behavior of the environment. Such a game-playing machine generalizes the Church–Turing thesis to the interactive level. The classical concept of truth turns out to be a special, zero-interactivity-degree case of computability. This makes classical logic a special fragment of CoL. Thus CoL is a conservative extension of classical logic. Computability logic is more expressive, constructive and computationally meaningful than classical logic. Besides classical logic, independence-friendly (IF) logic and certain proper extensions of linear logic and intuitionistic logic also turn out to be natural fragments of CoL. Hence meaningful concepts of "intuitionistic truth", "linear-logic truth" and "IF-logic truth" can be derived from the semantics of CoL.
CoL systematically answers the fundamental question of what can be computed and how; thus CoL has many applications, such as constructive applied theories, knowledge base systems, systems for planning and action. Out of these, only applications in constructive applied theories have been extensively explored so far: a series of CoL-based number theories, termed "clarithmetics", have been constructed as computationally and complexity-theoretically meaningful alternatives to the classical-logic-based first-order Peano arithmetic and its variations such as systems of bounded arithmetic.
Traditional proof systems such as natural deduction and sequent calculus are insufficient for axiomatizing nontrivial fragments of CoL. This has necessitated developing alternative, more general and flexible methods of proof, such as cirquent calculus.
Language
The full language of CoL extends the language of classical first-order logic. Its logical vocabulary has several sorts of conjunctions, disjunctions, quantifiers, implications, negations and so called recurrence operators. This collection includes all connectives and quantifiers of classical logic. The language also has two sorts of nonlogical atoms: elementary and general. Elementary atoms, which are nothing but the atoms of classical logic, represent elementary problems, i.e., games with no moves that are automatically won by the machine when true and lost when false. General atoms, on the other hand, can be interpreted as any games, elementary or non-elementary. Both semantically and syntactically, classical logic is nothing but the fragment of CoL obtained by forbidding general atoms in its language, and forbidding all operators other than ¬, ∧, ∨, →, ∀, ∃.
Japaridze has repeatedly pointed out that the language of CoL is open-ended, and may undergo further extensions. Due to the expressiveness of this language, advances in CoL, such as constructing axiomatizations or building CoL-based applied theories, have usually been limited to one or another proper fragment of the language.
Semantics
The games underlying the semantics of CoL are called static games. Such games have no turn order; a player can always move while the other players are "thinking". However, static games never punish a player for "thinking" too long (delaying its own moves), so such games never become contests of speed. All elementary games are automatically static, and so are the games allowed to be interpretations of general atoms.
There are two players in static games: the machine and the environment. The machine can only follow algorithmic strategies, while there are no restrictions on the behavior of the environment. Each run (play) is won by one of these players and lost by the other.
The logical operators of CoL are understood as operations on games. Here we informally survey some of those operations. For simplicity we assume that the domain of discourse is always the set of all natural numbers: {0,1,2,...}.
The operation ¬ of negation ("not") switches the roles of the two players, turning moves and wins by the machine into those by the environment, and vice versa. For instance, if Chess is the game of chess (but with ties ruled out) from the white player's perspective, then ¬Chess is the same game from the black player's perspective.
The parallel conjunction ∧ ("pand") and parallel disjunction ∨ ("por") combine games in a parallel fashion. A run of A∧B or A∨B is a simultaneous play in the two conjuncts. The machine wins A∧B if it wins both of them, and wins A∨B if it wins at least one of them. For example, Chess∨¬Chess is a game on two boards, one played white and one black, where the task of the machine is to win on at least one board. Such a game can easily be won regardless of who the adversary is, by copying the adversary's moves from one board to the other.
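The copying strategy can be made concrete with a toy Python sketch. Boards here are just move lists with no chess rules; the point is only that the machine's play on each board mirrors the adversary's play on the other, so the two runs are mirror images and one of them is a win for the machine.

<syntaxhighlight lang="python">
# Toy copy-cat strategy for Chess ∨ ¬Chess: every adversary move on one
# board is replayed by the machine on the other board, so the two runs
# mirror each other and the machine wins at least one of them.

def copycat(adversary_moves):
    board_a, board_b = [], []           # the machine plays on both boards
    for board, move in adversary_moves:
        if board == "A":
            board_a.append(("them", move))
            board_b.append(("us", move))    # mirror it on the other board
        else:
            board_b.append(("them", move))
            board_a.append(("us", move))
    return board_a, board_b

a, b = copycat([("A", "e4"), ("B", "e5"), ("A", "Nf3")])
print("board A:", a)
print("board B:", b)
</syntaxhighlight>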
The parallel implication operator → ("pimplication") is defined by A→B = ¬A∨B. The intuitive meaning of this operation is reducing B to A, i.e., solving A as long as the adversary solves B.
The parallel quantifiers ∧ ("pall") and ∨ ("pexists") can be defined by ∧xA(x) = A(0)∧A(1)∧A(2)∧... and ∨xA(x) = A(0)∨A(1)∨A(2)∨.... These are thus simultaneous plays of A(0),A(1),A(2),..., each on a separate board. The machine wins ∧xA(x) if it wins all of these games, and ∨xA(x) if it wins some.
The blind quantifiers ∀ ("blall") and ∃ ("blexists"), on the other hand, generate single-board games. A run of ∀xA(x) or ∃xA(x) is a single run of A. The machine wins ∀xA(x) if such a run is a won run of A(x) for all possible values of x, and wins ∃xA(x) if this is true for at least one value.
All of the operators characterized so far behave exactly like their classical counterparts when they are applied to elementary (moveless) games, and validate the same principles. This is why CoL uses the same symbols for those operators as classical logic does. When such operators are applied to non-elementary games, however, their behavior is no longer classical. So, for instance, if p is an elementary atom and P a general atom, p→p∧p is valid while P→P∧P is not. The principle of the excluded middle P∨¬P, however, remains valid. The same principle is invalid with all three other sorts (choice, sequential and toggling) of disjunction.
The choice disjunction ⊔ ("chor") of games A and B, written A⊔B, is a game where, in order to win, the machine has to choose one of the two disjuncts and then win in the chosen component. The sequential disjunction ("sor") AᐁB starts as A; it also ends as A unless the machine makes a "switch" move, in which case A is abandoned and the game restarts and continues as B. In the toggling disjunction ("tor") A⩛B, the machine may switch between A and B any finite number of times. Each disjunction operator has its dual conjunction, obtained by interchanging the roles of the two players. The corresponding quantifiers can further be defined as infinite conjunctions or disjunctions in the same way as in the case of the parallel quantifiers. Each sort of disjunction also induces a corresponding implication operation the same way as this was the case with the parallel implication →. For instance, the choice implication ("chimplication") A⊐B is defined as ¬A⊔B.
The parallel recurrence ("precurrence") of A can be defined as the infinite parallel conjunction A∧A∧A∧... The sequential ("srecurrence") and toggling ("trecurrence") sorts of recurrences can be defined similarly.
The corecurrence operators can be defined as infinite disjunctions. Branching recurrence ("brecurrence") ⫰, which is the strongest sort of recurrence, does not have a corresponding conjunction. ⫰A is a game that starts and proceeds as A. At any time, however, the environment is allowed to make a "replicative" move, which creates two copies of the then-current position of A, thus splitting the play into two parallel threads with a common past but possibly different future developments. In the same fashion, the environment can further replicate any position of any thread, thus creating more and more threads of A. Those threads are played in parallel, and the machine needs to win A in all threads to be the winner in ⫰A. Branching corecurrence ("cobrecurrence") ⫯ is defined symmetrically by interchanging "machine" and "environment".
Each sort of recurrence further induces a corresponding weak version of implication and weak version of negation. The former is said to be a rimplication, and the latter a refutation. The branching rimplication ("brimplication") A⟜B is nothing but ⫰A→B, and the branching refutation ("brefutation") of A is A⟜⊥, where ⊥ is the always-lost elementary game. Similarly for all other sorts of rimplication and refutation.
As a problem specification tool
The language of CoL offers a systematic way to specify an infinite variety of computational problems, with or without names established in the literature. Below are some examples.
Let f be a unary function. The problem of computing f will be written as ⊓x⊔y(y=f(x)). According to the semantics of CoL, this is a game where the first move ("input") is by the environment, which should choose a value m for x. Intuitively, this amounts to asking the machine to tell the value of f(m). The game continues as ⊔y(y=f(m)). Now the machine is expected to make a move ("output"), which should be choosing a value n for y. This amounts to saying that n is the value of f(m). The game is now brought down to the elementary n=f(m), which is won by the machine if and only if n is indeed the value of f(m).
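This two-move game is easy to simulate. In the toy Python sketch below the machine's strategy is an arbitrary function, and the machine wins exactly when its output matches f on the environment's chosen input; this is an illustration of the semantics only, not part of CoL's formal machinery.

<syntaxhighlight lang="python">
# Toy simulation of the game ⊓x⊔y(y = f(x)): the environment moves first
# by choosing an input m, the machine replies with an output n, and the
# machine wins the resulting elementary game iff n = f(m).

def play(machine_strategy, environment_input, f):
    m = environment_input        # environment's move: a value for x
    n = machine_strategy(m)      # machine's move: a value for y
    return n == f(m)             # elementary game n = f(m)

f = lambda x: x * x              # the function to be computed
winning = lambda m: m * m        # a machine that actually computes f

print(play(winning, 5, f))            # True: the machine wins
print(play(lambda m: m + 1, 5, f))    # False: the machine loses
</syntaxhighlight>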
Let p be a unary predicate. Then ⊓x(p(x)⊔¬p(x)) expresses the problem of deciding p, ⊓x(p(x)ᐁ¬p(x)) expresses the problem of semideciding p, and ⊓x(p(x)⩛¬p(x)) the problem of recursively approximating p.
Let p and q be two unary predicates. Then ⊓x(p(x)⊔¬p(x))⟜⊓x(q(x)⊔¬q(x)) expresses the problem of Turing-reducing q to p (in the sense that q is Turing reducible to p if and only if the interactive problem ⊓x(p(x)⊔¬p(x))⟜⊓x(q(x)⊔¬q(x)) is computable). ⊓x(p(x)⊔¬p(x))→⊓x(q(x)⊔¬q(x)) does the same but for the stronger version of Turing reduction where the oracle for p can be queried only once. ⊓x⊔y(q(x)↔p(y)) does the same for the problem of many-one reducing q to p. With more complex expressions one can capture all kinds of nameless yet potentially meaningful relations and operations on computational problems, such as, for instance, "Turing-reducing the problem of semideciding r to the problem of many-one reducing q to p". Imposing time or space restrictions on the work of the machine, one further gets complexity-theoretic counterparts of such relations and operations.
As a problem solving tool
The known deductive systems for various fragments of CoL share the property that a solution (algorithm) can be automatically extracted from a proof of a problem in the system. This property is further inherited by all applied theories based on those systems. So, in order to find a solution for a given problem, it is sufficient to express it in the language of CoL and then find a proof of that expression. Another way to look at this phenomenon is to think of a formula G of CoL as program specification (goal). Then a proof of G is – more precisely, translates into – a program meeting that specification. There is no need to verify that the specification is met, because the proof itself is, in fact, such a verification.
Examples of CoL-based applied theories are the so-called clarithmetics. These are number theories based on CoL in the same sense as first-order Peano arithmetic PA is based on classical logic. Such a system is usually a conservative extension of PA. It typically includes all Peano axioms, and adds to them one or two extra-Peano axioms such as ⊓x⊔y(y=x′) expressing the computability of the successor function. Typically it also has one or two non-logical rules of inference, such as constructive versions of induction or comprehension. Through routine variations in such rules one can obtain sound and complete systems characterizing one or another interactive computational complexity class C, in the sense that a problem belongs to C if and only if it has a proof in the theory. So, such a theory can be used for finding not merely algorithmic solutions, but also efficient ones on demand, such as solutions that run in polynomial time or logarithmic space. It should be pointed out that all clarithmetical theories share the same logical postulates, and only their non-logical postulates vary depending on the target complexity class. Their notable distinguishing feature from other approaches with similar aspirations (such as bounded arithmetic) is that they extend rather than weaken PA, preserving the full deductive power and convenience of the latter.
See also
Game semantics
Interactive computation
Logic
Logics for computability
References
External links
Computability Logic Homepage Comprehensive survey of the subject.
Giorgi Japaridze
Game Semantics or Linear Logic?
Lecture Course on Computability Logic
On abstract resource semantics and computability logic Video lecture by N. Vereshchagin.
A Survey of Computability Logic (PDF) Downloadable equivalent of the above homepage.
Computability theory
Logic in computer science
Non-classical logic | Computability logic | [
"Mathematics"
] | 3,294 | [
"Computability theory",
"Mathematical logic",
"Logic in computer science"
] |
616,992 | https://en.wikipedia.org/wiki/Controlled%20burn | A controlled burn or prescribed burn (Rx burn) is the practice of intentionally setting a fire to change the assemblage of vegetation and decaying material in a landscape. The purpose could be for forest management, ecological restoration, land clearing or wildfire fuel management. A controlled burn may also refer to the intentional burning of slash and fuels through burn piles. Controlled burns may also be referred to as hazard reduction burning, backfire, swailing or a burn-off. In industrialized countries, controlled burning regulations and permits are usually overseen by fire control authorities.
Controlled burns are conducted during the cooler months to reduce fuel buildup and decrease the likelihood of more dangerous, hotter fires. Controlled burning stimulates the germination of some trees and exposes soil mineral layers, which increases seedling vitality. In grasslands, controlled burns shift the species assemblage toward primarily native grassland species. Some seeds, such as those of lodgepole pine, sequoia and many chaparral shrubs, are pyriscent, meaning heat from fire causes the cone or woody husk to open and disperse seeds.
Fire is a natural part of both forest and grassland ecology and has been used by indigenous people across the world for millennia to promote biodiversity and cultivate wild crops, such as fire-stick farming by Aboriginal Australians. Colonial law in North America and Australia displaced indigenous people from lands that were managed with fire and prohibited them from conducting traditional controlled burns. After wildfires began increasing in scale and intensity in the 20th century, fire control authorities began reintroducing controlled burns and indigenous leadership into land management.
Uses
Forestry
Controlled burning reduces fuels, improves wildlife habitat, controls competing vegetation, helps control tree disease and pests, perpetuates fire-dependent species and improves accessibility. To improve the application of prescribed burns for conservation goals, which may involve mimicking historical or natural fire regimes, scientists assess the impact of variation in fire attributes. Parameters measured are fire frequency, intensity, severity, patchiness, spatial scale and phenology.
Furthermore, controlled fire can be used for site preparation when mechanized treatments are not possible because of terrain that prevents equipment access. Species variation and competition can drastically increase a few years after fuel treatments because of the increase in soil nutrients and availability of space and sunlight.
Many trees depend on fire as a way to clear out other plant species and release their seeds. The giant sequoia, among other fire-adapted conifer species, depends on fire to reproduce. The cones are pyriscent, so they will only open after exposure to a certain temperature. This reduces competition for the giant sequoia seedlings because the fire has cleared out non-fire-adapted, competing species. Pyriscent species benefit from moderate-intensity fires in older stands; however, climate change is causing more frequent high-intensity fires in North America. Controlled burns can manage the fire cycle and the intensity of regenerating fires in forests with pyriscent species, such as the boreal forest in Canada.
Eucalyptus regnans, or mountain ash, of Australia also shows a unique evolution with fire, quickly replacing damaged buds or stems in the case of danger. It also carries its seeds in capsules, which can be deposited at any time of the year. During a wildfire, the capsules drop nearly all of their seeds and the fire consumes the adult eucalypts, but most of the seeds survive, using the ash as a source of nutrients. At their rate of growth, the seedlings quickly dominate the land and a new, like-aged eucalyptus forest grows. Other tree species, like poplar, can easily regenerate after a fire into a like-aged stand from a vast root system that is protected from fire because it is underground.
Grassland restoration
Native grassland species in North America and Australia are adapted to survive occasional low intensity fires. Controlled burns in prairie ecosystems mimic low intensity fires that shift the composition of plants from non-native species to native species. These controlled burns occur during the early spring before native plants begin actively growing, when soil moisture is higher and when the fuel load on the ground is low to ensure that the controlled burn remains low intensity.
Wildfire management
Controlled burns reduce the amount of understory fuel, so when a wildfire enters the area, a controlled burn site can reduce the intensity of the fire or stop it from crossing the area entirely. A controlled burn prior to the wildfire season can protect infrastructure and communities, or mitigate the risks posed by large numbers of standing dead trees, such as after a pest infestation, when forest fuels are high.
Agriculture
In the developing world, the use of controlled burns in agriculture is often referred to as slash and burn. In industrialized nations, it is seen as one component of shifting cultivation, as a part of field preparation for planting. Often called field burning, this technique is used to clear the land of any existing crop residue as well as kill weeds and weed seeds. Field burning is less expensive than most other methods such as herbicides or tillage, but because it produces smoke and other fire-related pollutants, its use is not popular in agricultural areas bounded by residential housing.
Prescribed fires are broadly used in the context of woody plant encroachment, with the aim of improving the balance of woody plants and grasses in shrublands and grasslands.
In northern India, especially in Punjab, Haryana, and Uttar Pradesh, unregulated burning of agricultural waste is a major problem. Smoke from these fires degrades environmental quality in these states and the surrounding area.
In East Africa, bird densities increased months after controlled burning had occurred.
Greenhouse gas abatement
Controlled burns on Australian savannas can result in a long-term cumulative reduction in greenhouse gas emissions. One working example is the West Arnhem Fire Management Agreement, started to bring "strategic fire management" across Western Arnhem Land, partially offsetting greenhouse gas emissions from a liquefied natural gas plant in Darwin, Australia. Deliberately starting controlled burns early in the dry season results in a mosaic of burnt and unburnt country, which reduces the area of stronger, late-dry-season fires; this is also known as "patch burning".
Procedure
Health and safety, protecting personnel, preventing the fire from escaping and reducing the impact of smoke are the most important considerations when planning a controlled burn. While the most common driver of fuel treatment is the prevention of loss of human life and structures, certain parameters can also be changed to promote biodiversity and to rearrange the age of a stand or the assemblage of species.
To minimize the impact of smoke, burning should be restricted to daylight hours whenever possible. Furthermore, in temperate climates, it is important to burn grasslands and prairies before native species begin growing for the season so that only non-native species, which send up shoots earlier in the spring, are affected by the fire.
Ground ignition
Back burning or a back fire is the term given to the process of lighting vegetation in such a way that it has to burn against the prevailing wind. This produces a slower moving and more controllable fire. Controlled burns utilize back burning during planned fire events to create a "black line" where fire cannot burn through. Back burning or backfiring is also done to stop a wildfire that is already in progress. Firebreaks are also used as an anchor point to start a line of fires along natural or man-made features such as a river, road or a bulldozed clearing.
Head fires, that burn with the prevailing wind, are used between two firebreaks because head fires will burn more intensely and move faster than a back burn. Head fires are used when a back burn would move too slowly through the fuel either because the fuel moisture is high or the wind speed is low. Another method to increase the speed of a back burn is to use a flank fire which is lit at right angles to the prevailing wind and spreads in the same direction.
Grassland or prairie burning
In Ontario, Canada, controlled burns are regulated by the Ministry of Natural Resources; only trained personnel may plan and ignite controlled burns within Ontario's fire regions, or whenever the Ministry of Natural Resources is involved in any aspect of planning a controlled burn. The team performing the prescribed burn is divided into several roles: the Burn Boss, Communications, Suppression and Ignition. The planning process begins by submitting an application to a local fire management office; after approval, applicants must submit a burn plan several weeks prior to ignition.
On the day of the controlled burn, personnel meet with the Burn Boss to discuss the tactics being used for ignition and suppression, health and safety precautions, fuel moisture levels and the weather (wind direction, wind speed, temperature and precipitation) for the day. On site, local fire control authorities are notified by telephone about the controlled burn while the rest of the team fill drip torches with pre-mixed fuel, fill suppression packs with water and put up barricades and signage to prevent pedestrian access to the controlled burn. Drip torches are canisters filled with fuel, with a wick at the end, used to ignite the lines of fire. Safe zones are established so personnel know where the fire cannot cross, either because of natural barriers like bodies of water or human-made barriers like tilled earth.
During ignition, the Burn Boss relays information about the fire (flame length, flame height, the percentage of ground that has been blackened) to the Communications Officer, who documents it. The Communications Officer relays information about the wind speed and wind direction so the Burn Boss can anticipate the direction of both flames and smoke and plan the lines of fire accordingly. Once the ignition phase has ended in a section, the suppression team "mops up" by using suppression packs to extinguish smoldering material. Other tools used for suppression are RTVs equipped with a water tank and a pump and hose installed in a nearby body of water. Finally, once the mop-up has finished, the Burn Boss declares the controlled burn over and local fire authorities are notified.
Slash pile burning
There are several different methods used to burn piles of slash from forestry operations. Broadcast burning is the burning of scattered slash over a wide area. Pile burning is gathering up the slash into piles before burning. These burning piles may be referred to as bonfires. High temperatures can harm the soil, damaging it physically, chemically or sterilizing it. Broadcast burns tend to have lower temperatures and will not harm the soil as much as pile burning, though steps can be taken to treat the soil after a burn. In lop and scatter burning, slash is left to compact over time, or is compacted with machinery. This produces a lower intensity fire, as long as the slash is not packed too tightly.
The risk of fatal fires that stem from burning slash can also be reduced by proactively reducing ground fuels before they can create a fuel ladder and begin an active crown fire. Predictions show thinned forests lead to a reduction in fire intensity and flame lengths of forest fires compared to untouched or fire-proofed areas.
Aerial ignition
Aerial ignition is a type of controlled burn where incendiary devices are released from aircraft.
History
There are two basic causes of wildfires. One is natural, mainly through lightning, and the other is human activity. Controlled burns have a long history in wildland management. Fire has been used by humans to clear land since the Neolithic period. Fire history studies have documented regular wildland fires ignited by indigenous peoples in North America and Australia prior to the establishment of colonial law and fire suppression. Native Americans frequently used fire to manage natural environments in a way that benefited humans and wildlife in forests and grasslands by starting low-intensity fires that released nutrients for plants, reduced competition for cultivated species, and consumed excess flammable material that otherwise would eventually fuel high-intensity, catastrophic fires.
North America
The use of controlled burns in North America ended in the early 20th century, when federal fire policies were enacted with the goal of suppressing all fires. Since 1995, the US Forest Service has slowly incorporated burning practices into its forest management policies.
Fire suppression has changed the composition and ecology of North American habitats, including highly fire-dependent ecosystems such as oak savannas and canebrakes, which are now critically endangered habitats on the brink of extinction. In the Eastern United States, fire-sensitive trees such as the red maple are increasing in number, at the expense of fire-tolerant species like oaks.
Canada
In the Anishinaabeg Nation around the Great Lakes, fire is a living being that has the power to change landscapes through both destruction and the regrowth and return of life following a fire. Human beings are also inexorably tied to the land they live on as stewards who maintain the ecosystems around them. Because fire can reveal dormant seedlings, it is a land management tool. Fire was a part of the landscapes of Ontario until early colonial rule restricted indigenous culture across Canada. During colonization, large-scale forest fires were caused by sparks from railroads, and fire was used to clear land for agricultural use. The public perception of forest fires was positive because the cleared land represented taming the wilderness to an urban populace. The conservation movement, which was spearheaded by Edmund Zavitz in Ontario, caused a ban on all fires, both natural wildfires and intentional fires.
In the 1970s, Parks Canada began implementing small prescribed burns; however, the scale of wildfires each year outpaces the acreage of land that is intentionally burnt. In the late 1980s, the Ministry of Natural Resources in Ontario began conducting prescribed burns on forested land, which led to the creation of a prescribed burn program as well as training and regulation for controlled burns in Ontario.
In British Columbia, there was an increase in the intensity and scale of wildfires after local bylaws restricted the use of controlled burns. In 2017, following one of the worst years for wildfire in the province's history, indigenous leadership and public service members wrote an independent report that suggested returning to the traditional use of prescribed burns to manage understory fuel from wildfires. The government of British Columbia responded by committing to using controlled burns as a wildfire management tool.
United States
The Oregon Department of Environmental Quality began requiring a permit for farmers to burn their fields in 1981, but the requirements became stricter in 1988 after smoke from field burning near Albany, Oregon, obscured the vision of drivers on Interstate 5, leading to a 23-car collision in which 7 people died and 37 were injured. This resulted in more scrutiny of field burning and proposals to ban it in the state altogether.
With controlled burns, there is also a risk that the fires get out of control. For example, the Calf Canyon/Hermits Peak Fire, the largest wildfire in the history of New Mexico, was started by two distinct instances of controlled burns, which had both been set by the US Forest Service, getting out of control and merging.
The conflict over controlled burn policy in the United States has roots in historical campaigns to combat wildfires and in the eventual acceptance of fire as a necessary ecological phenomenon. Following the colonization of North America, the US used fire suppression laws to eradicate the indigenous practice of prescribed fire. This was done against scientific evidence that supported prescribed burns as a natural process. To the detriment of the local environment, colonies utilized fire suppression in order to benefit the logging industry.
The notion of fire as a tool had somewhat evolved by the late 1970s as the National Park Service authorized and administered controlled burns. Following prescribed fire reintroduction, the Yellowstone fires of 1988 occurred, which significantly politicized fire management. The ensuing media coverage was a spectacle that was vulnerable to misinformation. Reports drastically inflated the scale of the fires, which disposed politicians in Wyoming, Idaho, and Montana to believe that all fires represented a loss of revenue from tourism. Paramount to the new action plans was the suppression of fires that threatened the loss of human life, with leniency toward areas of historic, scientific, or special ecological interest.
There is still a debate amongst policy makers about how to deal with wildfires. Senators Ron Wyden and Mike Crapo of Oregon and Idaho have been moving to reduce the shifting of capital from fire prevention to fire suppression following the harsh fires of 2017 in both states.
Tensions around fire prevention continue to rise due to the increasing prevalence of climate change. As drought conditions worsen, North America has been facing an abundance of destructive wildfires. Since 1988, many states have made progress toward reintroducing controlled burns. In 2021, California increased the number of personnel trained to perform controlled burns and made the practice more accessible to landowners.
Europe
In the European Union, burning crop stubble after harvest is used by farmers for plant health reasons under several restrictions in cross-compliance regulations.
In the north of Great Britain, large areas of grouse moors are managed by burning in a practice known as muirburn. This kills trees and grasses, preventing natural succession, and generates the mosaic of ling (heather) of different ages which allows very large populations of red grouse to be reared for shooting. The peatlands are some of the largest carbon sinks in the UK, providing an immensely important ecological service. The government has restricted burning in the area, but hunters have continued to set the moors ablaze, releasing large amounts of carbon into the atmosphere and destroying native habitat.
Africa
The Maasai ethnic group conduct traditional burning in savanna ecosystems before the rainy season to provide varied grazing land for livestock and to prevent larger fires when the grass is drier and the weather is hotter. In the past few decades, the practice of burning savanna has decreased because rain has become inadequate and unpredictable, large accidental fires have become more frequent, and Tanzanian government policies prevent burning savanna.
See also
Agroecology
Cultural burning
Fire ecology
Fire-stick farming
Native American use of fire in ecosystems
Wildfire suppression
References
Further reading
Beese, W.J., Blackwell, B.A., Green, R.N. & Hawkes, B.C. (2006). "Prescribed burning impacts on some coastal British Columbia ecosystems." Information Report BC-X-403. Victoria B.C.: Natural Resources Canada, Canadian Forest Service, Pacific Forestry Centre. Retrieved from: http://hdl.handle.net/10613/2740
Casals P, Valor T, Besalú A, Molina-Terrén D. Understory fuel load and structure eight to nine years after prescribed burning in Mediterranean pine forests.
Valor T, González-Olabarria JR, Piqué M. Assessing the impact of prescribed burning on the growth of European pines.
External links
U.S. National Park Service Prescribed Fire Policy
Savanna Oak Foundation article on controlled burns
The Nature Conservancy's Global Fire Initiative
Wildfire ecology
Wildfire prevention
Habitat management equipment and methods
Agriculture and the environment
Forestry and the environment
Ecological techniques | Controlled burn | [
"Biology"
] | 3,815 | [
"Ecological techniques"
] |
616,993 | https://en.wikipedia.org/wiki/Steamroller | A steamroller (or steam roller) is a form of road roller – a type of heavy construction machinery used for leveling surfaces, such as roads or airfields – that is powered by a steam engine. The leveling/flattening action is achieved through a combination of the size and weight of the vehicle and the rolls: the smooth wheels and the large cylinder or drum fitted in place of treaded road wheels.
The majority of steam rollers are outwardly similar to traction engines as many traction engine manufacturers later produced rollers based on their existing designs, and the patents owned by certain roller manufacturers tended to influence the general arrangements used by others. The key difference between the two vehicles is that on a roller the main roll replaces the front wheels and axle that would be fitted to a traction engine, and the driving wheels are smooth-tired.
The word steamroller frequently refers to road rollers in general, regardless of the method of propulsion.
History
Before about 1850, the word steamroller meant a fixed machine for rolling and curving steel plates for boilers and ships.
From then on, it also meant a mobile device for flattening ground.
An early steamroller was patented by Louis Lemoine in France in 1859 and demonstrated sometime before February 1861. In Britain, a 30-ton steamroller was designed in 1863 by William Clark and his partner W.F. Batho. Having failed to impress the British municipal road authorities, it was transferred to Kolkata, where it continued to work.
The company Aveling and Porter was the first to successfully sell the product commercially and subsequently became the largest manufacturer in Britain. In 1866, they produced a prototype roller with rollers fitted to the rear of a standard 12 nominal horsepower traction engine. This experimental machine was described by local papers as 'the world's first steamroller', and it caused a public spectacle.
In 1867, the steam road roller was patented and the company began production of the first practical steam roller – the new machine's rollers were mounted at the front instead of the back and it weighed in excess of 30 tons. It was tested on the Military Road in Chatham, Star Hill in Rochester and in Hyde Park, London and the machine proved a huge success. Within a year, they were being exported around the world, including to France, India and the United States. A New York City chief engineer said of one of these, that "in one day's rolling at a cost of 10 dollars, as much work was accomplished as in two days' rolling with a 7 ton roller drawn by eight horses at a cost of 20 dollars a day." The heavier rollers were found to be hard to handle and the weight of the machines was reduced to around 10 tons.
Aveling and Porter refined their product continuously over the following decades, introducing fully steerable front rollers and compound steam engines at the 1881 Royal Agricultural Show. The move to asphalt for road construction resulted in the demand for steamrollers that could rapidly reverse so they could roll the tar while still hot. Machines that could do this were introduced in the first decade of the 20th century.
Production ended around 1950.
Configurations
The majority of rollers were of the same basic 3-roll configuration, gear-driven, with two large smooth wheels (rolls) at the back and a single wide roll at the front (in actuality, the wide roll usually consisted of two narrower rolls on the same axle, to make steering easier). However, there was also a distinctive variant, the "tandem", which had two wide rolls, one front, one rear. Those made by Robey & Co used their standard steam wagon engine and pistol boiler fitted in a girder frame with rolls and a chain drive to produce a quick-reversing roller suitable for modern road surfaces such as tarmacadam and bituminous asphalt. A number of Robey & Co. tandem rollers were modified to make a further variant, the tri-tandem, which was a tandem with a third roll, mounted directly behind the rear one. Robey supplied the parts, but the modification was undertaken by Goodes of Royston. Ten tandem and two tri-tandem Robey rollers survive in preservation, and one of the tri-tandems is known to have been used to construct parts of the M1 motorway.
A variation of the basic configuration was the "convertible": an engine which could be either a steam roller or a traction engine and could be changed from one form to the other in a relatively short time – i.e., less than half a day. Convertible engines were liked by local authorities, since the same machine could be used for haulage in the winter and road-mending in the summer.
Design features
Although most steam roller designs are derived from traction engines, and were manufactured by the same companies, there are a number of features that set them apart.
Wheels
The most obvious difference is in the wheels. Traction engines were generally built with large fabricated spoked steel wheels with wide rims. Those intended for road use would have continuous solid rubber tyres bolted around the rims, to improve traction on tarmac. Engines intended for agricultural use would have a series of strakes bolted diagonally across the rims, like the tread on a modern pneumatic tractor tyre, and the wheels were typically wider to spread the load more evenly.
Steam rollers, on the other hand, had smooth rear wheels and a roller at the front. The roller consisted of a pair of adjacent wide cylinders supported at both ends. This replaced the separate wheels and axle of a traction engine.
Smokebox
In the conventional arrangement, the front roller is mounted centrally, forward of the chimney. In order to allow enough clearance from the boiler (and hence a larger front roll), the smokebox is extended forward substantially at the top to incorporate a support plate on which to mount the bearing for the roller assembly. This gives the distinctive, hooded look to the front of a steam roller. It also necessitates a different design of smokebox door – it has to hinge up or down, rather than opening sideways, due to the limited access available. Access to the boiler tubes for cleaning is limited and the brush usually has to be inserted through the small gap between the top of the roll and the fork.
Special equipment
The front and rear rolls were usually fitted with scraper bars. As the vehicle moved along, these removed any surface material that had become stuck to the roll, to prevent a build-up of material and ensure a flat finish was maintained.
Some steam rollers were fitted with a scarifier mounted on the tender box at the rear. They could be swung down to road level and used to rip up the old surface before a road was remade.
Another accessory was a tar sprayer – a bar mounted on the back of the roller. This was not a common fixture.
Manufacturers
Britain was a major exporter of steam rollers over the years, with the firm of Aveling and Porter probably being the most famous and the most prolific.
Many other traction engine manufacturers built steam rollers, but after Aveling and Porter, the most popular were Marshall, Sons & Co., John Fowler & Co., and Wallis & Steevens.
In America, the Buffalo-Springfield Roller Company was a large builder. J. I. Case made a roller variant of their farm engines, but had a small market share. Makers in other nations, including the Czechs, Swiss, Swedes, Germans (notably Kemna) and Dutch, also produced steam rollers.
Usage
In the UK, a number of companies owned fleets of steam rollers and contracted them out to local authorities. Many were still in use into the 1960s, and part of the M1 motorway was made using steam rollers. A few steam rollers were being used for road maintenance in the early 1970s, and this may go some way to explaining why diesel-powered rollers are still colloquially known as steam rollers today.
Preservation
Many steam rollers are preserved in working order, and can be seen in operation during special live steam festivals, where operating scale models may also be displayed. At some of the UK steam fairs and rallies, demonstrations of road building using the old techniques, tools and machines are re-enacted by 'Road Gangs' in authentic dress. Steam rollers feature prominently in these demonstrations. The annual Great Dorset Steam Fair has a section dedicated to road-making machinery, including a line-up of working steam rollers.
A number of steamrollers ended their working lives in children's playgrounds to provide something for children to play on.
Popular culture
Two popular American bands were named after steamrollers, Buffalo Springfield and Mannheim Steamroller. Parni Valjak (trans. Steamroller) is the name of the popular Croatian and Yugoslav rock band, and the group has used the name Steam Roller on their English language releases.
Two different steamrollers appear as prominent characters in the Thomas & Friends television series; George and Buster, both of whom are based on the Aveling-Barford R class design.
British steeplejack and engineering enthusiast Fred Dibnah was known as a national institution in Great Britain for the conservation of steam rollers and traction engines. The first engine he restored to working order was an Aveling & Porter steam roller, registration no. DM3079. Built in 1912, it was a 10-ton slide-valve, single-cylinder, 4-shaft road roller. Originally named "Allison" after Fred's first wife, the engine was renamed "Betsy" (his mother's name) following his divorce – Fred's view being "wives may change but your mother remains your mother!" This roller was featured in many of Fred's early television programmes. It may still be seen at steam rallies in Britain and was in steam at the Great Dorset Steam Fair in 2011.
Author Terry Pratchett instructed his collaborator Neil Gaiman that anything Pratchett had been working on at the time of his death should be destroyed by a steamroller. Pratchett's daughter and literary executor Rhianna Pratchett also stated that she had no desire to try to finish her father's work or continue the Discworld franchise without him. Accordingly, Pratchett's assistant Rob Wilkins brought Pratchett's computer hard drive to the Great Dorset Steam Fair, where a steamroller was driven over it.
As a symbol
The steamroller is a strong symbol of an irresistible, onward-pushing force. The Imperial Russian Army was nicknamed the "steamroller" during World War One, as it was huge in size and Russia initiated the war with an offensive. The "Russian Steamroller" is one of the personifications of Russia, along with the Russian bear, double-headed eagle and Mat Zemlya.
See also
History of steam road vehicles
Traction engine
Roller (agricultural tool) – for farm rollers
Roller (disambiguation) – for other types of roller
List of steam energy topics
Paddy's motorbike – nickname for another type of compaction vehicle.
Thomas Green & Son – builders of steam rollers, but better known for motor rollers.
References
Bibliography
External links
Road Roller Association – UK-based society dedicated to the preservation of steam (and motor) rollers and ancillary road-making equipment.
"Steam Dinosaur" – world's oldest surviving traction engine: immediate ancestor of Aveling's earliest rollers.
Fred Dibnah's roller 'Betsy' – The story of Betsy's restoration
The New England Wireless and Steam Museum Buffalo-Springfield Steam Macadam Roller
Construction equipment
Engineering vehicles
Road construction
Steam road vehicles | Steamroller | [
"Engineering"
] | 2,363 | [
"Construction equipment",
"Construction",
"Road construction",
"Engineering vehicles",
"Industrial machinery"
] |
617,003 | https://en.wikipedia.org/wiki/Autoradiograph | An autoradiograph is an image on an X-ray film or nuclear emulsion produced by the pattern of decay emissions (e.g., beta particles or gamma rays) from a distribution of a radioactive substance. Alternatively, the autoradiograph is also available as a digital image (digital autoradiography), due to the recent development of scintillation gas detectors or rare-earth phosphorimaging systems. The film or emulsion is apposed to the labeled tissue section to obtain the autoradiograph (also called an autoradiogram). The auto- prefix indicates that the radioactive substance is within the sample, as distinguished from the case of historadiography or microradiography, in which the sample is marked using an external source. Some autoradiographs can be examined microscopically for localization of silver grains (such as on the interiors or exteriors of cells or organelles) in which the process is termed micro-autoradiography. For example, micro-autoradiography was used to examine whether atrazine was being metabolized by the hornwort plant or by epiphytic microorganisms in the biofilm layer surrounding the plant.
Applications
In biology, this technique may be used to determine the tissue (or cell) localization of a radioactive substance, either introduced into a metabolic pathway, bound to a receptor or enzyme, or hybridized to a nucleic acid. Applications for autoradiography are broad, ranging from biomedical to environmental sciences to industry.
Receptor autoradiography
The use of radiolabeled ligands to determine the tissue distributions of receptors is termed either in vivo or in vitro receptor autoradiography if the ligand is administered into the circulation (with subsequent tissue removal and sectioning) or applied to the tissue sections, respectively. Once the receptor density is known, in vitro autoradiography can also be used to determine the anatomical distribution and affinity of a radiolabeled drug towards the receptor. For in vitro autoradiography, the radioligand is applied directly to frozen tissue sections without administration to the subject, so the technique cannot fully follow the distribution, metabolism and degradation of the ligand in the living body. But because targets in the cryosections are widely exposed and in direct contact with the radioligand, in vitro autoradiography is still a quick and easy method for screening drug candidates and PET and SPECT ligands. The ligands are generally labeled with 3H (tritium), 18F (fluorine), 11C (carbon) or 125I (radioiodine). Compared with in vitro autoradiography, ex vivo autoradiography is performed after administration of the radioligand in the body, which reduces artifacts and more closely reflects the internal environment.
Mapping the distribution of RNA transcripts in tissue sections by the use of radiolabeled, complementary oligonucleotides or ribonucleic acids ("riboprobes") is called in situ hybridization histochemistry. Radioactive precursors of DNA and RNA, [3H]-thymidine and [3H]-uridine respectively, may be introduced to living cells to determine the timing of several phases of the cell cycle. RNA or DNA viral sequences can also be located in this fashion. These probes are usually labeled with 32P, 33P, or 35S. In the realm of behavioral endocrinology, autoradiography can be used to determine hormonal uptake and indicate receptor location; an animal can be injected with a radiolabeled hormone, or the study can be conducted in vitro.
Rate of DNA replication
The rate of DNA replication in a mouse cell growing in vitro was measured by autoradiography as 33 nucleotides per second. The rate of phage T4 DNA elongation in phage-infected E. coli was also measured by autoradiography as 749 nucleotides per second during the period of exponential DNA increase.
Detection of protein phosphorylation
Phosphorylation is the posttranslational addition of a phosphate group to specific amino acids of proteins; such modification can lead to a drastic change in the stability or the function of a protein in the cell. Protein phosphorylation can be detected on an autoradiograph after incubating the protein in vitro with the appropriate kinase and γ-32P-ATP. The radiolabeled phosphate of the latter is incorporated into the protein, which is isolated via SDS-PAGE and visualized on an autoradiograph of the gel. (See figure 3 of a recent study showing that CREB-binding protein is phosphorylated by HIPK2.)
Detection of sugar movement in plant tissue
In plant physiology, autoradiography can be used to determine sugar accumulation in leaf tissue. Sugar accumulation, as it relates to autoradiography, can describe the phloem-loading strategy used in a plant. For example, if sugars accumulate in the minor veins of a leaf, it is expected that the leaves have few plasmodesmatal connections, which is indicative of apoplastic movement, or an active phloem-loading strategy. Sugars, such as sucrose, fructose, or mannitol, are radiolabeled with [14-C] and then absorbed into leaf tissue by simple diffusion. The leaf tissue is then exposed to autoradiographic film (or emulsion) to produce an image. Images will show distinct vein patterns if sugar accumulation is concentrated in leaf veins (apoplastic movement), or a static-like pattern if sugar accumulation is uniform throughout the leaf (symplastic movement).
Other techniques
This autoradiographic approach contrasts with techniques such as PET and SPECT, where the exact 3-dimensional localization of the radiation source is provided by careful use of coincidence counting, gamma counters and other devices.
Krypton-85 is used to inspect aircraft components for small defects. Krypton-85 is allowed to penetrate small cracks, and then its presence is detected by autoradiography. The method is called "krypton gas penetrant imaging". The gas penetrates smaller openings than the liquids used in dye penetrant inspection and fluorescent penetrant inspection.
Historical events
The task of radioactive decontamination following the Baker nuclear test at Bikini Atoll during Operation Crossroads in 1946 was far more difficult than the U.S. Navy had prepared for. Though the task's futility became apparent and the danger to cleanup crews mounted, Colonel Stafford Warren, in charge of radiation safety, had difficulty persuading Vice Admiral William H. P. Blandy to abandon the cleanup and with it the surviving target ships. On August 10, Warren showed Blandy an autoradiograph made by a surgeonfish from the lagoon that was left on a photographic plate overnight. The film was exposed by alpha radiation produced from the fish's scales, evidence that plutonium, mimicking calcium, had been distributed throughout the fish. Blandy promptly ordered that all further decontamination work be discontinued. Warren wrote home, "A self X ray of a fish ... did the trick."
References
General references
Original publication by sole inventor
Askins, Barbara S. (1 November 1976). "Photographic image intensification by autoradiography". Applied Optics. 15 (11): 2860–2865. Bibcode:1976ApOpt..15.2860A. doi:10.1364/ao.15.002860.
Inline citations
Further reading
"Patent US4101780 Treating silver with a radioactive sulfur compound such as thiourea or derivatives". Google Patents. Retrieved 26 June 2014.
Radiobiology
Radiography | Autoradiograph | [
"Chemistry",
"Biology"
] | 1,594 | [
"Radiobiology",
"Radioactivity"
] |
617,058 | https://en.wikipedia.org/wiki/Transcription%20preinitiation%20complex | The preinitiation complex (abbreviated PIC) is a complex of approximately 100 proteins that is necessary for the transcription of protein-coding genes in eukaryotes and archaea. The preinitiation complex positions RNA polymerase II (Pol II) at gene transcription start sites, denatures the DNA, and positions the DNA in the RNA polymerase II active site for transcription.
The minimal PIC includes RNA polymerase II and six general transcription factors: TFIIA, TFIIB, TFIID, TFIIE, TFIIF, and TFIIH. Additional regulatory complexes (such as the mediator coactivator and chromatin remodeling complexes) may also be components of the PIC.
Preinitiation complexes are also formed during RNA Polymerase I and RNA Polymerase III transcription.
Assembly (RNA Polymerase II)
A classical view of PIC formation at the promoter involves the following steps:
TATA binding protein (TBP, a subunit of TFIID) binds the promoter, creating a sharp bend in the promoter DNA.
Animals have some TBP-related factors (TRF; TBPL1/TBPL2). They can replace TBP in some special contexts.
TBP recruits TFIIA, then TFIIB, to the promoter.
TFIIB recruits RNA polymerase II and TFIIF to the promoter.
TFIIE joins the growing complex and recruits TFIIH which has protein kinase activity (phosphorylates RNA polymerase II within the CTD) and DNA helicase activity (unwinds DNA at promoter). It also recruits nucleotide-excision repair proteins.
Subunits within TFIIH that have ATPase and helicase activity create negative superhelical tension in the DNA.
Negative superhelical tension causes approximately one turn of DNA to unwind and form the transcription bubble.
The template strand of the transcription bubble engages with the RNA polymerase II active site.
RNA synthesis begins.
After synthesis of ~10 nucleotides of RNA, and an obligatory phase of several abortive transcription cycles, RNA polymerase II escapes the promoter region to transcribe the remainder of the gene.
An alternative hypothesis of PIC assembly postulates the recruitment of a pre-assembled "RNA polymerase II holoenzyme" directly to the promoter (composed of all, or nearly all GTFs and RNA polymerase II and regulatory complexes), in a manner similar to the bacterial RNA polymerase (RNAP).
Other preinitiation complexes
In Archaea
Archaea have a preinitiation complex resembling a minimized Pol II PIC, with a TBP and an archaeal transcription factor B (TFB, a TFIIB homolog). The assembly follows a similar sequence, starting with TBP binding to the promoter. An interesting aspect is that the entire complex is bound in an inverse orientation compared to that found in the eukaryotic PIC. Archaea also use TFE, a TFIIE homolog, which assists in transcription initiation but is not required.
RNA Polymerase I (Pol I)
Formation of the Pol I preinitiation complex requires the binding of selective factor 1 (SL1 or TIF-IB) to the core element of the rDNA promoter. SL1 is a complex composed of TBP and at least three TBP-associated factors (TAFs). For basal levels of transcription, only SL1 and the initiation-competent form of Pol I (Pol Iβ), characterized by RRN3 binding, are required.
For activated transcription levels, UBTF (UBF) is also required. UBTF binds as a dimer to both the upstream control element (UCE) and core element of the rDNA promoter, bending the DNA to form an enhanceosome. SL1 has been found to stabilize the binding of UBTF to the rDNA promoter.
The subunits of the Pol I PIC differ between organisms.
RNA Polymerase III (Pol III)
Pol III has three classes of initiation, which start with different factors recognizing different control elements but all converging on TFIIIB (similar to TFIIB-TBP; consists of TBP/TRF, a TFIIB-related factor, and a B″ unit) recruiting the Pol III preinitiation complex. The overall architecture resembles that of Pol II. Only TFIIIB needs to remain attached during elongation.
References
External links
Descriptive image – biochem.ucl.ac.uk
Gene expression | Transcription preinitiation complex | [
"Chemistry",
"Biology"
] | 932 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
617,061 | https://en.wikipedia.org/wiki/Class%20II%20gene | A class II gene is a type of gene that codes for a protein. Class II genes are transcribed by RNAP II .
Class II genes have a promoter that may contain a TATA box.
Basal transcription of class II genes requires the formation of a preinitiation complex.
They are transcribed by RNA polymerase II, include both introns and exons, and code for polypeptides.
Genes
Molecular biology | Class II gene | [
"Chemistry",
"Biology"
] | 85 | [
"Biochemistry",
"Molecular biology"
] |
617,121 | https://en.wikipedia.org/wiki/Game%20semantics | Game semantics (, translated as dialogical logic) is an approach to formal semantics that grounds the concepts of truth or validity on game-theoretic concepts, such as the existence of a winning strategy for a player, somewhat resembling Socratic dialogues or medieval theory of Obligationes.
History
In the late 1950s Paul Lorenzen was the first to introduce a game semantics for logic, and it was further developed by Kuno Lorenz. At almost the same time as Lorenzen, Jaakko Hintikka developed a model-theoretical approach known in the literature as GTS (game-theoretical semantics). Since then, a number of different game semantics have been studied in logic.
Shahid Rahman (Lille III) and collaborators developed dialogical logic into a general framework for the study of logical and philosophical issues related to logical pluralism. Beginning in 1994, this triggered a kind of renaissance with lasting consequences. This new philosophical impulse experienced a parallel renewal in the fields of theoretical computer science, computational linguistics, artificial intelligence, and the formal semantics of programming languages, for instance the work of Johan van Benthem and collaborators in Amsterdam, who looked thoroughly at the interface between logic and games, and Hanno Nickau, who addressed the full abstraction problem in programming languages by means of games. New results in linear logic by Jean-Yves Girard in the interfaces between mathematical game theory and logic on one hand and argumentation theory and logic on the other hand resulted in the work of many others, including S. Abramsky, J. van Benthem, A. Blass, D. Gabbay, M. Hyland, W. Hodges, R. Jagadeesan, G. Japaridze, E. Krabbe, L. Ong, H. Prakken, G. Sandu, D. Walton, and J. Woods, who placed game semantics at the center of a new concept in logic in which logic is understood as a dynamic instrument of inference. There has also been an alternative perspective on proof theory and meaning theory, advocating Wittgenstein's "meaning as use" paradigm as understood in the context of proof theory, where the so-called reduction rules (showing the effect of elimination rules on the result of introduction rules) are seen as appropriate for formalising the explanation of the (immediate) consequences one can draw from a proposition, thus showing the function/purpose/usefulness of its main connective in the calculus of language.
Classical logic
The simplest application of game semantics is to propositional logic. Each formula of this language is interpreted as a game between two players, known as the "Verifier" and the "Falsifier". The Verifier is given "ownership" of all the disjunctions in the formula, and the Falsifier is likewise given ownership of all the conjunctions. Each move of the game consists of allowing the owner of the principal connective to pick one of its branches; play will then continue in that subformula, with whichever player controls its principal connective making the next move. Play ends when a primitive proposition has been so chosen by the two players; at this point the Verifier is deemed the winner if the resulting proposition is true, and the Falsifier is deemed the winner if it is false. The original formula will be considered true precisely when the Verifier has a winning strategy, while it will be false whenever the Falsifier has the winning strategy.
If the formula contains negations or implications, other, more complicated, techniques may be used. For example, a negation should be true if the thing negated is false, so it must have the effect of interchanging the roles of the two players.
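As a minimal sketch of this evaluation game, the following Python code decides which player has a winning strategy for a propositional formula; the tuple encoding and all function names are invented for this illustration, and negation is handled, as described above, by swapping the players' roles.

# Hypothetical sketch of game-semantic evaluation. Formulas are nested
# tuples: ("atom", truth_value), ("not", f), ("or", f, g), ("and", f, g).
# verifier_wins(f) is True iff the Verifier has a winning strategy on f.

def verifier_wins(formula, roles_swapped=False):
    kind = formula[0]
    if kind == "atom":
        # Play ends at a primitive proposition: the Verifier wins iff it
        # is true -- unless an odd number of negations swapped the roles.
        return formula[1] != roles_swapped
    if kind == "not":
        # A negation interchanges the roles of the two players.
        return verifier_wins(formula[1], not roles_swapped)
    left = verifier_wins(formula[1], roles_swapped)
    right = verifier_wins(formula[2], roles_swapped)
    if kind == "or":
        # The Verifier owns "or" and picks the better branch; with the
        # roles swapped, the Falsifier picks adversarially instead.
        return (left and right) if roles_swapped else (left or right)
    if kind == "and":
        # The Falsifier owns "and": the Verifier must win both branches.
        return (left or right) if roles_swapped else (left and right)
    raise ValueError(f"unknown connective: {kind}")

p, q = ("atom", True), ("atom", False)
print(verifier_wins(("or", q, ("not", ("and", p, q)))))  # True

For finite propositional formulas, the existence of a winning strategy computed this way coincides with ordinary truth-table evaluation.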
More generally, game semantics may be applied to predicate logic; the new rules allow a principal quantifier to be removed by its "owner" (the Verifier for existential quantifiers and the Falsifier for universal quantifiers) and its bound variable replaced at all occurrences by an object of the owner's choosing, drawn from the domain of quantification. Note that a single counterexample falsifies a universally quantified statement, and a single example suffices to verify an existentially quantified one. Assuming the axiom of choice, the game-theoretical semantics for classical first-order logic agree with the usual model-based (Tarskian) semantics. For classical first-order logic the winning strategy for the Verifier essentially consists of finding adequate Skolem functions and witnesses. For example, if S denotes ∀x∃y φ(x, y), then an equisatisfiable statement for S is ∃f∀x φ(x, f(x)). The Skolem function f (if it exists) actually codifies a winning strategy for the Verifier of S by returning a witness for the existential sub-formula for every choice of x the Falsifier might make.
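To make the Skolem-function reading concrete, here is a toy Python sketch over a small finite domain; the domain, formula and names are assumptions made for this example, not drawn from the literature. The Falsifier chooses x, and a Skolem function supplies the Verifier's reply y.

# Toy quantifier game for S = "for all x there exists y: phi(x, y)"
# over a small finite domain. A Skolem function f encodes a strategy
# for the Verifier: answer each Falsifier move x with y = f(x).

DOMAIN = range(-3, 4)

def verifier_wins_with(phi, skolem_f):
    # The strategy wins iff it survives every possible Falsifier move.
    return all(phi(x, skolem_f(x)) for x in DOMAIN)

phi = lambda x, y: x + y == 0                 # phi(x, y): "y cancels x"
print(verifier_wins_with(phi, lambda x: -x))  # True: f(x) = -x always wins
print(verifier_wins_with(phi, lambda x: 0))   # False: a constant reply loses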
The above definition was first formulated by Jaakko Hintikka as part of his GTS interpretation. The original version of game semantics for classical (and intuitionistic) logic due to Paul Lorenzen and Kuno Lorenz was not defined in terms of models but of winning strategies over formal dialogues (P. Lorenzen, K. Lorenz 1978, S. Rahman and L. Keiff 2005). Shahid Rahman and Tero Tulenheimo developed an algorithm to convert GTS-winning strategies for classical logic into the dialogical winning strategies and vice versa.
Formal dialogues and GTS games may be infinite and use end-of-play rules rather than letting players decide when to stop playing. Reaching this decision by standard means for strategic inferences (iterated elimination of dominated strategies or IEDS) would, in GTS and formal dialogues, be equivalent to solving the halting problem and exceeds the reasoning abilities of human agents. GTS avoids this with a rule to test formulas against an underlying model; logical dialogues, with a non-repetition rule (similar to threefold repetition in Chess). Genot and Jacot (2017) proved that players with severely bounded rationality can reason to terminate a play without IEDS.
For most common logics, including the ones above, the games that arise from them have perfect information—that is, the two players always know the truth values of each primitive, and are aware of all preceding moves in the game. However, with the advent of game semantics, logics, such as the independence-friendly logic of Hintikka and Sandu, with a natural semantics in terms of games of imperfect information have been proposed.
Intuitionistic logic, denotational semantics, linear logic, logical pluralism
The primary motivation for Lorenzen and Kuno Lorenz was to find a game-theoretic (their term was dialogical, in German dialogisch) semantics for intuitionistic logic. Andreas Blass was the first to point out connections between game semantics and linear logic. This line was further developed by Samson Abramsky, Radhakrishnan Jagadeesan, Pasquale Malacaria and, independently, Martin Hyland and Luke Ong, who placed special emphasis on compositionality, i.e. the definition of strategies inductively on the syntax. Using game semantics, the authors mentioned above have solved the long-standing problem of defining a fully abstract model for the programming language PCF. Consequently, game semantics has led to fully abstract semantic models for a variety of programming languages, and to new semantic-directed methods of software verification by software model checking.
Shahid Rahman and Helge Rückert extended the dialogical approach to the study of several non-classical logics such as modal logic, relevance logic, free logic and connexive logic. Recently, Rahman and collaborators developed the dialogical approach into a general framework aimed at the discussion of logical pluralism.
Quantifiers
Foundational considerations of game semantics have been more emphasised by Jaakko Hintikka and Gabriel Sandu, especially for independence-friendly logic (IF logic, more recently information-friendly logic), a logic with branching quantifiers. It was thought that the principle of compositionality fails for these logics, so that a Tarskian truth definition could not provide a suitable semantics. To get around this problem, the quantifiers were given a game-theoretic meaning. Specifically, the approach is the same as in classical propositional logic, except that the players do not always have perfect information about previous moves by the other player. Wilfrid Hodges has proposed a compositional semantics and proved it equivalent to game semantics for IF-logics.
More recently, Shahid Rahman and the team of dialogical logic in Lille implemented dependences and independences within a dialogical framework by means of a dialogical approach to intuitionistic type theory, called immanent reasoning.
Computability logic
Japaridze’s computability logic is a game-semantical approach to logic in an extreme sense, treating games as targets to be serviced by logic rather than as technical or foundational means for studying or justifying logic. Its starting philosophical point is that logic is meant to be a universal, general-utility intellectual tool for ‘navigating the real world’ and, as such, it should be construed semantically rather than syntactically, because it is semantics that serves as a bridge between real world and otherwise meaningless formal systems (syntax). Syntax is thus secondary, interesting only as much as it services the underlying semantics. From this standpoint, Japaridze has repeatedly criticized the often followed practice of adjusting semantics to some already existing target syntactic constructions, with Lorenzen’s approach to intuitionistic logic being an example. This line of thought then proceeds to argue that the semantics, in turn, should be a game semantics, because games “offer the most comprehensive, coherent, natural, adequate and convenient mathematical models for the very essence of all ‘navigational’ activities of agents: their interactions with the surrounding world”. Accordingly, the logic-building paradigm adopted by computability logic is to identify the most natural and basic operations on games, treat those operators as logical operations, and then look for sound and complete axiomatizations of the sets of game-semantically valid formulas. On this path a host of familiar or unfamiliar logical operators have emerged in the open-ended language of computability logic, with several sorts of negations, conjunctions, disjunctions, implications, quantifiers and modalities.
Games are played between two agents: a machine and its environment, where the machine is required to follow only computable strategies. This way, games are seen as interactive computational problems, and the machine's winning strategies for them as solutions to those problems. It has been established that computability logic is robust with respect to reasonable variations in the complexity of allowed strategies, which can be brought down as low as logarithmic space and polynomial time (one does not imply the other in interactive computations) without affecting the logic. All this explains the name “computability logic” and determines applicability in various areas of computer science. Classical logic, independence-friendly logic and certain extensions of linear and intuitionistic logics turn out to be special fragments of computability logic, obtained merely by disallowing certain groups of operators or atoms.
See also
Computability logic
Dependence logic
Ehrenfeucht–Fraïssé game
Independence-friendly logic
Interactive computation
Intuitionistic logic
Ludics
References
Bibliography
Books
T. Aho and A-V. Pietarinen (eds.) Truth and Games. Essays in honour of Gabriel Sandu. Societas Philosophica Fennica (2006).
J. van Benthem, G. Heinzmann, M. Rebuschi and H. Visser (eds.) The Age of Alternative Logics. Springer (2006).
R. Inhetveen: Logik. Eine dialog-orientierte Einführung, Leipzig 2003
L. Keiff Le Pluralisme Dialogique. Thesis Université de Lille 3 (2007).
K. Lorenz, P. Lorenzen: Dialogische Logik, Darmstadt 1978
P. Lorenzen: Lehrbuch der konstruktiven Wissenschaftstheorie, Stuttgart 2000
O. Majer, A.-V. Pietarinen and T. Tulenheimo (editors). Games: Unifying Logic, Language and Philosophy. Springer (2009).
S. Rahman, Über Dialogue protologische Kategorien und andere Seltenheiten. Frankfurt 1993
S. Rahman and H. Rückert (editors), New Perspectives in Dialogical Logic. Synthese 127 (2001).
S. Rahman and N. Clerbout: Linking Games and Constructive Type Theory: Dialogical Strategies, CTT-Demonstrations and the Axiom of Choice. Springer-Briefs (2015). https://www.springer.com/gp/book/9783319190624.
S. Rahman, Z. McConaughey, A. Klev, N. Clerbout: Immanent Reasoning or Equality in Action. A Plaidoyer for the Play level. Springer (2018). https://www.springer.com/gp/book/9783319911489.
J. Redmond & M. Fontaine, How to play dialogues. An introduction to Dialogical Logic. London, College Publications (Col. Dialogues and the Games of Logic. A Philosophical Perspective N° 1).
Articles
S. Abramsky and R. Jagadeesan, Games and full completeness for multiplicative linear logic. Journal of Symbolic Logic 59 (1994): 543-574.
A. Blass, A game semantics for linear logic. Annals of Pure and Applied Logic 56 (1992): 151-166.
J.M.E. Hyland and H.L. Ong, On Full Abstraction for PCF: I, II, and III. Information and Computation, 163(2), 285-408.
E.J. Genot and J. Jacot, Logical Dialogues with Explicit Preference Profiles and Strategy Selection, Journal of Logic, Language and Information 26, 261–291 (2017). doi.org/10.1007/s10849-017-9252-4
D.R. Ghica, Applications of Game Semantics: From Program Analysis to Hardware Synthesis. 2009 24th Annual IEEE Symposium on Logic In Computer Science: 17-26.
G. Japaridze, Introduction to computability logic. Annals of Pure and Applied Logic 123 (2003): 1-99.
G. Japaridze, In the beginning was game semantics. In Ondrej Majer, Ahti-Veikko Pietarinen and Tero Tulenheimo (editors), Games: Unifying logic, Language and Philosophy. Springer (2009).
Krabbe, E. C. W., 2001. "Dialogue Foundations: Dialogue Logic Restituted [title has been misprinted as "...Revisited"]," Supplement to the Proceedings of the Aristotelian Society 75: 33-49.
S. Rahman and L. Keiff, On how to be a dialogician. In Daniel Vanderken (ed.), Logic Thought and Action, Springer (2005), 359-408. .
S. Rahman and T. Tulenheimo, From Games to Dialogues and Back: Towards a General Frame for Validity. In Ondrej Majer, Ahti-Veikko Pietarinen and Tero Tulenheimo (editors), Games: Unifying logic, Language and Philosophy. Springer (2009).
External links
Computability Logic Homepage
GALOP: Workshop on Games for Logic and Programming Languages
Game Semantics or Linear Logic?
Logic in computer science
Mathematical logic
Philosophical logic
Quantifier (logic)
Game theory
Semantics | Game semantics | [
"Mathematics"
] | 3,287 | [
"Logic in computer science",
"Predicate logic",
"Mathematical logic",
"Game theory",
"Basic concepts in set theory",
"Quantifier (logic)"
] |
617,167 | https://en.wikipedia.org/wiki/Piphilology | Piphilology comprises the creation and use of mnemonic techniques to remember many digits of the mathematical constant . The word is a play on the word "pi" itself and of the linguistic field of philology.
There are many ways to memorize π, including the use of piems (a portmanteau, formed by combining pi and poem), which are poems that represent π in a way such that the length of each word (in letters) represents a digit. Here is an example of a piem: "Now I need a drink, alcoholic of course, after the heavy lectures involving quantum mechanics." Notice how the first word has three letters, the second word has one, the third has four, the fourth has one, the fifth has five, and so on. In longer examples, 10-letter words are used to represent the digit zero, and this rule is extended to handle repeated digits in so-called Pilish writing. The short story "Cadaeic Cadenza" records the first 3,834 digits of π in this manner, and a 10,000-word novel, Not A Wake, has been written accordingly.
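As a concrete illustration of this counting rule, here is a short Python sketch; the function name and details are invented for this example, and it implements only the basic rule described above, with a 10-letter word standing for zero.

import re

def piem_to_digits(piem: str) -> str:
    """Map each word's letter count to a digit; 10 letters encode 0."""
    digits = []
    for word in piem.split():
        letters = len(re.sub(r"[^A-Za-z]", "", word))  # ignore punctuation
        digits.append("0" if letters == 10 else str(letters))
    return "".join(digits)

piem = ("Now I need a drink, alcoholic of course, "
        "after the heavy lectures involving quantum mechanics.")
print(piem_to_digits(piem))  # 314159265358979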
However, poems prove to be inefficient for large memorizations of π. Other methods include remembering patterns in the numbers (for instance, the year 1971 appears in the first fifty digits of π) and the method of loci (which has been used to memorize π to 67,890 digits).
History
Until the 20th century, the number of digits of pi which mathematicians had the stamina to calculate by hand remained in the hundreds, so that memorization of all known digits at the time was possible. In 1949 a computer was used to calculate π to 2,000 places, presenting one of the earliest opportunities for a more difficult challenge.
Later computers calculated pi to extraordinary numbers of digits (2.7 trillion as of August 2010), and people began memorizing more and more of the output. The world record for the number of digits memorized has exploded since the mid-1990s, and it stood at 100,000 as of October 2006. The previous record (83,431) was set by the same person (Akira Haraguchi) on July 2, 2005, and the record previous to that (42,195) was held by Hiroyuki Goto.
An institution from Germany provides the details of the "Pi World Ranking".
Examples in English
The most common mnemonic technique is to memorize a so-called "piem" (a wordplay on "pi" and "poem") in which the number of letters in each word is equal to the corresponding digit of π. This famous example for 15 digits has several variations, including:
How I want a drink, alcoholic of course, after the heavy chapters involving quantum mechanics! - Sir James Hopwood Jeans
Short mnemonics such as these, of course, do not take one very far down π's infinite road. Instead, they are intended more as amusing doggerel. If even less accuracy suffices, the following examples can be used:
How I wish I could recollect pi easily today!
May I have a large container of coffee, cream and sugar?
This second one gives the value of π as 3.1415926535, while the first only brings it to the second five. Indeed, many published poems use truncation instead of one of the several roundings, thereby producing a less-accurate result when the first omitted digit is greater than or equal to five. It is advantageous to use truncation in memorizing if the individual intends to study more places later on; otherwise, one will be remembering erroneous digits.
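The difference between truncating and rounding can be made concrete with a short, purely illustrative Python sketch:

PI_DIGITS = "3.14159265358979"

def truncate(digits: str, places: int) -> str:
    # Keep everything up to `places` digits after the decimal point.
    return digits[: digits.index(".") + 1 + places]

print(truncate(PI_DIGITS, 4))     # 3.1415  (truncation)
print(f"{float(PI_DIGITS):.4f}")  # 3.1416  (rounding up, since the next digit is 9)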
Another mnemonic is:
The point I said a blind Bulgarian in France could see
In this mnemonic the word "point" represents the decimal point itself.
Yet another example is:
How I wish I could recollect, of circle round, the exact relation Arkimedes (or Archimede) unwound.
In this example, the spelling of Archimedes is normalised to nine letters. Although 'Archimedes' is, today, a more correct spelling of the ancient Greek mathematician's name in English, 'Archimede' is also often seen when this mnemonic is given, since 'Archimède' is the more correct spelling in some languages, such as French. This mnemonic also contains a rounding error: the digit represented by the last word "Arkimedes" (9) in 3.141592653589 is followed by 7 in π, which would cause the last two digits to round up.
Longer mnemonics employ the same concept. This example created by Peter M. Brigham incorporates twenty decimal digits:
How I wish I could enumerate pi easily, since all these bullshit mnemonics prevent recalling any of pi's sequence more simply.
Poems
Some mnemonics, such as this poem which gives the three and the first 20 decimal digits, use the separation of the poem's title and main body to represent the decimal point:
Pie
I wish I could determine pi
Eureka, cried the great inventor
Christmas pudding, Christmas pie
Is the problem's very center.
Another, more poetic version is:
Sir, I have a rhyme excelling,
In mystic power and magic spelling,
Celestial spirits elucidate,
For my own problems can't relate.
Extensions to 30 or 31 decimals of the same proceed as follows:
There are minor variations on the above rhyme, which still allow pi to be worked out correctly. However, one variation replaces the word "lexicon's" with "lesson's" and in doing so, incorrectly indicates that the 18th digit is seven.
The logologist Dmitri Borgmann gives the following 30-word poem in his book, Language on Vacation: An Olio of Orthographical Oddities:
Now, a moon, a lover refulgent in flight,
Sails the black silence's loneliest ellipse.
Computers use pi, the constant, when polite,
Or gentle data for sad tracking aid at eclipse.
In the fantasy book Somewhen by David Saul, a 35-word piem provides both a description of the constant π and its digits. The text is also laid out as a circle to give readers another clue to the purpose of the poem. In this example, the word "nothing" is used to represent the digit zero.
It's a fact
A ratio immutable
Of circle round and width,
Produces geometry's deepest conundrum.
For as the numerals stay random,
No repeat lets out its presence,
Yet it forever stretches forth.
Nothing to eternity.
The following sonnet is a mnemonic for pi to 75 decimal places in iambic pentameter:
Now I defy a tenet gallantly
Of circle canon law: these integers
Importing circles' quotients are, we see,
Unwieldy long series of cockle burs
Put all together, get no clarity;
Mnemonics shan't describeth so reformed
Creating, with a grammercy plainly,
A sonnet liberated yet conformed.
Strangely, the queer'st rules I manipulate
Being followéd, do facilitate
Whimsical musings from geometric bard.
This poesy, unabashed as it's distressed,
Evolvéd coherent - a simple test,
Discov'ring poetry no numerals jarred.
Note that in this example, 10-letter words are used to represent the digit zero.
Other poems use sound as a mnemonic technique, as in the following poem which rhymes with the first 140 decimal places of pi using a blend of assonance, slant rhyme, and perfect rhyme:
dreams number us like pi. runes shift. nights rewind
daytime pleasure-piles. dream-looms create our id.
moods shift. words deviate. needs brew. pleasures rise.
time slows. too late? wait! foreign minds live in
us! quick-minds, free-minds, minds-we-never-mind,
unknown, gyrate! neuro-rhymes measure our
minds, for our minds rhyme. crude ego-emanations
distort nodes. id, (whose basic neuro-spacetime rhymes),
plays its tune. space drones before fate unites
dreams’ lore to unsung measures. whole dimensions
gyrate. new number-games donate quick minds &
weave through fate’s loom. fears, hopes, digits, or devils
collide here—labor stored in gold-mines, lives, lightcone-
piles. fate loops through dreams & pleasure-looms….
Note that "dreams number us like pi" corresponds to "314159", "runes shift" corresponds to "26", "nights rewind" corresponds to "535" and so on. Sound-based mnemonic techniques, unlike pilish, do not require that the letters in each word be counted in order to recall the digits of pi. However, where sound-based mnemonics use assonance, extra care must be taken to distinguish "nine" and "five," which contain the same vowel sound. In this example, the author assumes the convention that zero is often called "O."
Piku
The piku follows the rules of conventional haiku (three lines of 5, 7 and 5 syllables), but with the added mnemonic trick that each word contains the same number of letters as the corresponding digit of π, e.g.
How I love a verse
Contrived to unhusk dryly
One image nutshell
Songs
In 2004, Andrew Huang wrote a song that was a mnemonic for the first fifty digits of pi, titled "I am the first 50 digits of pi". The first line is:
Man, I can’t - I shan’t! - formulate an anthem where the words comprise mnemonics, dreaded mnemonics for pi.
In 2013, Huang extended the song to include the first 100 digits of pi, and changed the title to "Pi Mnemonic Song".
Lengthier works
There are piphilologists who have written texts that encode hundreds or thousands of digits. This is an example of constrained writing, known as "Pilish". For example, Poe, E.: Near a Raven represents 740 digits, Cadaeic Cadenza encodes 3,835, and Not A Wake extends to 10,000 digits.
Sound-based mnemonics
It is also possible to use the rhythm and sound of the spoken digits themselves as a memorization device. The mathematician John Horton Conway composed the following arrangement for the first 100 digits,
3 point 1415 9265 35
8979 3238 4626 4338 3279
502 884 197 169 399 375 105 820 974 944
59230 78164
0628 6208 998 6280
34825 34211 70679
where, in the original typeset arrangement, accent marks (bars, dots, carets and asterisks) above and below the digit groups indicate various kinds of repetition.
Another mnemonic system used commonly in the memorization of pi is the Mnemonic major system, where single numbers are translated into basic sounds. A combination of these sounds creates a word, which can then be translated back into numbers. When combined with the Method of loci, this becomes a very powerful memorization tool.
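As a rough illustration, the consonant-to-digit table below is the commonly cited major-system mapping, approximated here at the level of letters rather than sounds (real implementations work on phonemes, so this sketch is only indicative for English spelling):

# Major-system mapping, approximated by letters; vowels and h, w, y
# carry no value. In the full sound-based system, soft g belongs with 6
# and soft c with 0, which a letter-level sketch cannot capture.
MAJOR = {
    's': 0, 'z': 0,
    't': 1, 'd': 1,
    'n': 2,
    'm': 3,
    'r': 4,
    'l': 5,
    'j': 6,
    'k': 7, 'g': 7, 'q': 7, 'c': 7,
    'f': 8, 'v': 8,
    'p': 9, 'b': 9,
}

def word_to_digits(word: str) -> str:
    # Every mapped consonant letter contributes one digit.
    return "".join(str(MAJOR[ch]) for ch in word.lower() if ch in MAJOR)

print(word_to_digits("meteor"))   # m, t, r -> "314", the start of pi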
Examples in other languages
Albanian
Armenian
(10 decimal places)
Chinese
It is possible to construct piphilogical poems in Chinese by using homophones or near-homophones of the numbers zero through nine, as in the following well known example which covers 22 decimal places of π. In this example the character meaning "mountain" (山 shān) is used to represent the number "three" (三 sān), the character meaning "I" (吾 wú) is used to represent the number "five" (五 wǔ), and the characters meaning "temple" (寺 sì) and "die" (死 sǐ) are used to represent the number "four" (四 sì). Some of the mnemonic characters used in this poem, for example "kill" (殺 shā) for "three" (三 sān), "jug" (壺 hú) for "five" (五 wǔ), "happiness" (樂 lè) for "six" (六 liù) and "eat" (吃 chī) for "seven" (七 qī), are not very close phonetically in Mandarin/Putonghua.
山 巔 一 寺 一 壺 酒
shān diān yī sì yī hú jiǔ
3 . 1 4 1 5 9

爾 樂 苦 煞 吾
ěr lè kǔ shā wú
2 6 5 3 5

把 酒 吃 酒 殺 爾
bǎ jiǔ chī jiǔ shā ěr
8 9 7 9 3 2

殺 不 死 樂 爾 樂
shā bù sǐ lè ěr lè
3 8 4 6 2 6
This can be translated as:
On a mountain top a temple and a jug of wine.
Your happiness makes me so bitter;
Take some wine and drink, the wine will kill you;
If it does not kill you, I will rejoice in your happiness.
Czech
(nine decimal places)
(12 decimal places)
(13 decimal places)
(30 decimal places)
French
The following poem composed of alexandrines consists of words each with a number of letters that yields π to 126 decimal places:
An alternative beginning:
Que j’aime à faire apprendre un nombre utile aux sages !
Glorieux Archimède, artiste ingénieur,
Toi de qui Syracuse aime encore la gloire,
Soit ton nom conservé par de savants grimoires !
...
English translation (which does not encode π): "How I love to teach a number useful to the wise! Glorious Archimedes, artist and engineer, you whose glory Syracuse still loves, may your name be preserved by learned grimoires!"
German
This statement yields π to twenty-two decimal places:
Wie, o dies π macht ernstlich so vielen viele Müh. Lernt immerhin, Mägdelein, leichte Verselein, wie so zum Beispiel dies dürfte zu merken sein.
English translation that does not encode pi:
How, oh this π seriously makes so many struggles to so many. Learn at least, girls, simple little verses, just such as this one should be memorizable.
Looser English translation that encodes pi:
Woe! O this π makes seriously so muchly many's woe.
Hungarian
An interesting (not math themed) alternative:
Another alternative:
Íme a szám: a görög periféria pi betűje.
Euler meg Viète végtelen összeggel közelít értékéhez.
Lám, őt már Egyiptom, Kína, Európa is akarta, hogy
„ama kör kerülete úgy ki lehetne számlálva”.
English translation (which does not encode π): "Behold the number: the letter pi of the Greek periphery. Euler and Viète approximate its value with infinite sums. See, Egypt, China and Europe already wanted it, so that 'the circumference of that circle could thus be counted out'."
Irish
(7 decimal places)
Italian
(30 decimal places)
(10 decimal places)
Chi è nudo e crepa limonando la tubera, lieto lui crepa
Japanese
Japanese piphilology has countless mnemonics based on punning words with numbers. This is especially easy in Japanese because there are two or three ways to pronounce each digit, and the language has relatively few phonemes to begin with. For example, to 31 decimal places:
身 一つ 世 一つ 生く に 無意味 いわくなく 身 ふみや 読む 似ろ よ 闇 に なく
mi hitotsu yo hitotsu iku ni mu-imi iwakunaku mi fumiya yomu niro yo san zan yami ni naku
3.141592653589793238462643383279
Each syllable or word stands for one or more of the digits (for example mi = 3, hitotsu = 1, yo = 4).
This is close to being ungrammatical nonsense, but a loose translation prioritizing word order yields:
A person is one; the world is one:
to live this way, it's meaningless, one says, and cries,
"step on it, will ya!" then reads—be the same!
Crying uncontrollably in the dark.
Japanese children also use songs built on this principle to memorize the multiplication table.
Katharevousa (archaizing) Greek
Yielding π to 22 decimal places:
Persian
Counting the letters in each word (additionally separated by "|") gives 10 decimal places of π: خرد (kherad) = 3, و (va) = 1, دانش (daanesh) = 4, و (va) = 1, آگاهی (aagaahi) = 5, ...
Polish
The verse of Polish mathematician Witold Rybczyński (35 decimal places):
(Note that the dash stands for zero.)
The verse of Polish mathematician Kazimierz Cwojdziński (23 decimal places):
(12 decimal places):
(10 decimal places)
An occasionally seen verse related to Mundial Argentina and the Polish football team (30 decimal places):
Portuguese
(11 decimal places)
(8 decimal places)
Or in Brazilian Portuguese:
A poem written in a more poetic manner:
Sou o amor, o homem impetuoso da libido
Homem que ataca mulheres atraentes,
meninas pecadoras que no céu imiscuem amor, paixão, fé, desejo, tudo!
Até que idolatro com as sereias pecadoras tanta fé!
Esbeltas mulheres para o musculado,
sereias e fêmeas pecadoras
Até idolatram serpentes com ardente macho.
O viril desejará as pecadoras iníquas doravante para amar.

English translation (which does not encode π):

I am the love,
The impetuous man of the libido,
Man who attacks attractive women,
sinful maidens who in heaven intrude love, passion, faith, desire, everything!
I even idolize with the sinful mermaids so much faith!
Luscious women for the brawny,
sinful mermaids and females
They even idolize serpents with the burning buck.
The virile man will wish the sinful and the iniquitous henceforth to love.
Romanian
One of the Romanian versions of Pi poems is:
There is another phrase known in Romanian that helps memorize the number to eight decimal places: Așa e bine a scrie renumitul și utilul număr. — "This is the way to write the renowned and useful number."
Another alternative for 15 decimal places: Ion a luat o carte, biografie, în latina veche. Are cinci capitole originale clasice latinești. — "Ion has bought a book, a biography, in old Latin. It has five classical original Latin chapters."
Russian
In the Russian language, there is a well-known phrase in the old orthography used before the 1917 reform:
A more modern rhyme is:
A short approximation is: "Что я знаю о кругах?" (What do I know about circles?)
In addition, there are several nonfolklore verses that simply rhyme the digits of pi "as is"; for examples, see the Russian version of this article.
Sanskrit
The Katapayadi system is a Sanskrit encoding scheme in which consonants stand for digits, allowing numbers to be embedded in verses that are easy to remember. The code is as follows:
With the above key in place, Sri Bharathi Krishna Tirtha in his Vedic Mathematics gives the following verse:
गोपी भाग्य मधुव्रात श्रुङ्गिशो दधिसन्धिग
खलजीवित खाताव गलहालारसंधार
Substituting digits for syllables according to the above key, the verse decodes to:
31 41 5926 535 89793
23846 264 33832792
That gives us π/10 = 0.31415926535897932384626433832792.
Serbian
(16 decimal places)
Slovene
The following poem gives π to 30 decimal places.
Spanish
The following poem, giving π to 31 decimal places, is well known in Argentina:
Another piem gives π (correctly rounded) to 10 decimal places. (If you prefer not to round π, replace "cosmos" with "cielo".)
Turkish
Memorization record holders
Even before computers calculated π, memorizing a record number of digits became an obsession for some people. The record for memorizing digits of π, certified by Guinness World Records, is 70,000 digits, recited in India by Rajveer Meena in 9 hours and 27 minutes on 21 March 2015. On October 3, 2006, Akira Haraguchi, a retired Japanese engineer, claimed to have recited 100,000 decimal places, but the claim was not verified by Guinness World Records.
David Fiore was an early record holder for pi memorization. His American record stood for more than 27 years, the longest span for any American record holder, and he was the first person to break the 10,000-digit mark.
Suresh Kumar Sharma holds the Limca Book of Records mark for the most decimal places of pi recited from memory: 70,030 digits in 17 hours and 14 minutes on October 21, 2015.
See also
Mnemonist
Cadaeic Cadenza
Memory sport
Pi Day
Notes and references
External links
Pi World Ranking List
Tools for Piphilologist
Collection of Mnemonic Devices
Mathworld Pi Wordplay
Hatzipolakis Pi Philology v. 9.5
Pi
Science mnemonics
| Piphilology | [
"Mathematics"
] | 4,936 | [
"Pi"
] |
617,193 | https://en.wikipedia.org/wiki/God%20Bless%20You%2C%20Dr.%20Kevorkian | God Bless You, Dr. Kevorkian, by Kurt Vonnegut, is a collection of short fictional interviews written by Vonnegut and first broadcast on WNYC. The title parodies that of Vonnegut's 1965 novel God Bless You, Mr. Rosewater. It was published in book form in 1999.
Synopsis
The premise of the collection is that Vonnegut employs Dr. Jack Kevorkian to give him near-death experiences, allowing Vonnegut access to heaven and those in it for a limited time. While in the afterlife Vonnegut interviews a range of people including Adolf Hitler, William Shakespeare, Eugene V. Debs, Isaac Asimov, Isaac Newton and the ever-present Kilgore Trout (a fictional character created by Vonnegut in his earlier works).
Resources
The book's page in the website of Seven Stories Press
Many of the original WNYC radio reports forming the basis of the book
References
1999 short story collections
Bangsian fantasy
Books by Kurt Vonnegut
Fiction about the afterlife
Seven Stories Press books
Cultural depictions of physicians
Cultural depictions of Adolf Hitler
Cultural depictions of writers
Cultural depictions of William Shakespeare
Cultural depictions of Isaac Newton
Isaac Asimov | God Bless You, Dr. Kevorkian | [
"Astronomy"
] | 244 | [
"Cultural depictions of Isaac Newton",
"Cultural depictions of astronomers"
] |
9,541,424 | https://en.wikipedia.org/wiki/Compressibility%20equation | In statistical mechanics and thermodynamics the compressibility equation refers to an equation which relates the isothermal compressibility (and indirectly the pressure) to the structure of the liquid. It reads:where is the number density, g(r) is the radial distribution function and is the isothermal compressibility.
Using the Fourier representation of the Ornstein–Zernike equation the compressibility equation can be rewritten in the form:

$$ kT\left(\frac{\partial \rho}{\partial p}\right) = \frac{1}{1 - \rho \int c(r)\, \mathrm{d}\mathbf{r}} = \frac{1}{1 - \rho\, \hat{c}(0)}, $$
where h(r) and c(r) are the indirect and direct correlation functions respectively. The compressibility equation is one of the many integral equations in statistical mechanics.
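As a rough numerical illustration (the radial distribution function below is an invented toy model, not data for any real fluid), the structure integral can be evaluated by direct quadrature in spherical coordinates:

import numpy as np

# Toy g(r): hard core below r = 1, then a damped oscillation about 1.
# Qualitatively liquid-like, purely illustrative.
def g(r):
    tail = 1.0 + np.exp(-(r - 1.0)) * np.cos(6.0 * (r - 1.0)) / r
    return np.where(r < 1.0, 0.0, tail)

rho = 0.2                           # number density (reduced units)
r = np.linspace(1e-6, 30.0, 300001)
dr = r[1] - r[0]

# kT (d rho / d p) = 1 + rho * integral of 4 pi r^2 [g(r) - 1] dr
integral = np.sum(4.0 * np.pi * r**2 * (g(r) - 1.0)) * dr
print(1.0 + rho * integral)         # dimensionless; positive for this model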
References
Statistical mechanics
Thermodynamic equations | Compressibility equation | [
"Physics",
"Chemistry"
] | 138 | [
"Thermodynamics stubs",
"Statistical mechanics stubs",
"Thermodynamic equations",
"Equations of physics",
"Thermodynamics",
"Statistical mechanics",
"Physical chemistry stubs"
] |
9,542,135 | https://en.wikipedia.org/wiki/Westminster%20Stone%20theory | The Westminster Stone theory is the belief held by some historians and scholars that the stone which traditionally rests under the Coronation Chair is not the true Stone of Destiny but a 13th-century substitute. Since the chair has been located in Westminster Abbey since that time, adherents to this theory have created the title 'Westminster Stone' to avoid confusion with the 'real' stone (sometimes referred to as the Stone of Scone).
One of the most vocal proponents of this theory was writer and historian Nigel Tranter, who consistently presented the theory throughout his non-fiction books and historical novels. Other historians have held this view, including James S. Richardson, who was an Inspector of Ancient Monuments in the mid-twentieth century. Richardson produced a monograph on the subject.
History of the Stone of Destiny
The Stone of Destiny was the traditional Coronation Stone of the Kings of Scotland and, before that, the Kings of Dál Riata. Legends associate it with Saint Columba, who might have brought it from Ireland as a portable altar. In AD 574, the Stone was used as a coronation chair when Columba anointed and crowned Aedan as the King of Dál Riata.
The Stone of Destiny was kept by the monks of Iona, the traditional headquarters of the Scottish Celtic church, until Viking raiding caused them to move to the mainland, first to Dunkeld, Atholl, and then to Scone. Here it continued to be used in coronations, as a symbol of Scottish Kingship.
Edward I and the Stone
In his attempts to conquer Scotland, Edward I of England invaded in 1296 at the head of an army. Sacking Berwick, beating the Scots at Dunbar, and laying siege to Edinburgh Castle, Edward then proceeded to Scone, intending to take the Stone of Destiny, which was kept at Scone Abbey. He had already taken the Scottish regalia from Edinburgh, which included Saint Margaret's Black Rood relic, but to confiscate an object so precious to the Scots, and so symbolic of their independence, would be a final humiliation. He carried it back to Westminster Abbey. By placing it within the throne of England, he had a potent symbol of his claim for overlordship. It is this stone which sat in Westminster until 1996, when it was returned to Scotland.
Substitution
According to the Westminster Stone theory, the stone Edward removed was not the real Stone of Destiny, but a substitute. The English army was at the Scottish border in mid-March, 1296, and did not reach Scone until June. With three months to anticipate Edward's arrival, there was ample time and incentive for a switch to be made, in order to protect the original relic. Such a substitution could have been instigated by the Abbot of Scone, who stood as custodian. The 'Stone of Destiny' could therefore have been transported to a place of safety, and Edward tricked with a different piece of sandstone.
Hiding the 'True Stone'
There are many theories regarding the possible resting place of the 'True Stone' since its supposed hiding, some inspired by logical deduction and, in some cases, by fantastical, wishful thinking.
Nigel Tranter believed the True Stone was originally hidden by the Abbot of Scone, and eventually entrusted to the care of Aonghus Óg Mac Domhnaill, by Robert the Bruce. Aonghus Óg hid it in his native Hebrides, where the stone probably remains.
One legend records that after the True Stone was given into the keeping of Aonghus Óg, its keepership passed into the branch of the clan who settled in Sleat. A descendant of this line, C. Iain Alasdair MacDonald, wrote to Tranter, claiming he was now the custodian of the Stone, which was hidden on Skye.
Evidence
Arguments for a substitution
The Westminster Stone is a lump of roughly-dressed sandstone, of proportions appropriate for use in building. As such, it is neither remarkable, unique, nor impressive. The only unusual thing about it is the presence of an iron hoop inserted at each end of the top, suitable for carrying on a pole.
Edward I would not have been tricked by anything newly-hewn, but a piece long-since rejected by builders would look suitably ancient, especially if abandoned outside and consequently weathered. That the Westminster Stone has a fault (weak point) is demonstrated by the fact it broke in half when removed from Westminster Abbey in 1950.
The Westminster Stone is certainly not the stone of Iona mentioned in early documents and traditions. Geologists confirm that the Stone is 'lower Old Red Sandstone' and was quarried in the vicinity of Scone.
Early seals and documentary descriptions suggest a stone larger than the Westminster Stone, darker in colour (possibly basalt or marble), and with elaborate carvings. Such a stone might even have been found: a letter to the editor of the Morning Chronicle, dated 2 January 1819, states:
On the 19th of November, as the servants belonging to the West Mains of Dunsinane-house, were employed in carrying away stones from the excavation made among the ruins that point out the site of Macbeth's castle here, part of the ground they stood on suddenly gave way, and sank down about six feet, discovering a regularly built vault, about six feet long and four wide. None of the men being injured, curiosity induced them to clear out the subterranean recess, when they discovered among the ruins a large stone, weighing about 500l., which is pronounced to be of the meteoric or semi-metallic kind. This stone must have lain here during the long series of ages since Macbeth's reign. Besides it were also found two round tablets, of a composition resembling bronze. On one of these two lines are engraved, which a gentleman has thus deciphered.— 'The sconce (or shadow) of kingdom come, until Sylphs in air carry me again to Bethel.' These plates exhibit the figures of targets for the arms. [...] The curious here, aware of such traditions, and who have viewed these venerable remains of antiquity, agree that Macbeth may, or rather must, have deposited the stone in question at the bottom of his Castle, on the hill of Dunsinane (from the trouble of the times), where it has been found by the workmen. This curious stone has been shipped for London for the inspection of the scientific amateur, in order to discover its real quality.
There is no record to show the Scots ever requested the return of the Westminster Stone in the century after its departure, which they would have done if it were an important relic. The absence of a request is quite marked in the Treaty of Northampton. The Scots had been harrying England for some years, and in 1328 the English sued for peace. The Treaty is drawn in Scotland's favour, for they were in the position to make demands. The Treaty stipulates the return of the Scottish regalia and St Margaret's Black Rood, but there is no mention of the Stone of Scone. Tranter states that the English offered to return the stone, but the Scots were not interested.
Arguments against a substitution
The Westminster Stone theory is not accepted by many historians, or those responsible for the care of the Stone. There are many strong arguments against the theory.
If Edward I did not remove the true stone, yet claimed to have done so, the Scots' easiest refutation of his claims would be to produce the True Stone. However, there is no record of them doing so.
Hiding the stone might have been a sensible precaution while the English remained a threat, but it was never produced once the threat was removed.
Despite its importance as a symbol of kingship, the stone was not used for subsequent coronations, which it surely would have been had it remained in Scottish possession.
Legends and theories abound, but no proof has been found to indicate there is another stone.
If there was warning enough of Edward's intention to remove the Stone, why were the other regalia, documents and Black Rood not hidden also?
A number of English knights attended the coronation of King John of Scotland only a few years earlier, and would have seen the true stone, but none of them told Edward that his stone was a fake.
On studying the Stone in 1996, after its return to Scotland, nine periods of workmanship were identified on the Stone's faces, as well as recognisable erosion between the features, which proves it is an ancient artefact.
Edward had followers from the Scottish nobility who would also have been able to verify the stone's authenticity.
Dunsinane Hill has the remains of a late prehistoric hill fort, and this has historical associations with Macbeth, but no remains dating to the 11th century have been identified on the hill.
Second theory: the 1950 substitution
On Christmas Day 1950, the Westminster Stone was taken from the abbey by four Scottish students. It remained hidden until April 1951, when a stone was left in Arbroath Abbey. Some speculate that this stone is not the one taken from the Abbey, but merely a copy.
The stone left in Arbroath was damaged, for the Westminster Stone had broken in half when removed from the Coronation Chair, but had been repaired by Glasgow stonemason Robert Gray. However, Gray had made replicas of the Stone in the 1930s, and further fuelled speculation by declaring later that he did not know which stone had been sent back to London as "there were so many copies lying around".
This scenario receives support from a plaque placed in St Columba's Parish Church in Dundee, which claims to mark the site of the 'Stone of Scone', given to them in 1972 by 'Baillie Robert Gray'.
The apparent disrespect shown towards the Stone by Gray and the students is explained by Nigel Tranter, who had some claim to knowledge, as the students asked him to act as an intermediary after the removal of the stone. Tranter later stated that Gray inserted a note inside the Westminster Stone, when repairing it, to the effect that it was 'a block of Old Red Sandstone of no value to anyone', although other reports state that Gray never revealed what the note said.
However, in the 1940s the British Geological Survey had carried out a survey of the Stone when the Coronation Chair was undergoing conservation work. The fault line had been noticed, as well as the many marks and features of the Stone's surface. This allowed verification of the authenticity of the returned item.
A scanray examination conducted by the Home Office Police Scientific Development Branch in 1973 confirmed the presence of 'three metal rods and sockets, one being at right angles to the other two'. This also indicated that the repaired Westminster Stone, not a replica, had been returned.
References
External links
Nigel Tranter, Scots Magazine, 1960
Political history of Scotland
Wars of Scottish Independence
Westminster Abbey
Scottish royalty
Stones
13th century in Scotland
Stone of Scone
Theories of history | Westminster Stone theory | [
"Physics"
] | 2,230 | [
"Stones",
"Physical objects",
"Matter"
] |
9,542,388 | https://en.wikipedia.org/wiki/Hypothalamic%E2%80%93pituitary%E2%80%93thyroid%20axis | The hypothalamic–pituitary–thyroid axis (HPT axis for short, a.k.a. thyroid homeostasis or thyrotropic feedback control) is part of the neuroendocrine system responsible for the regulation of metabolism and also responds to stress.
As its name suggests, it depends upon the hypothalamus, the pituitary gland, and the thyroid gland.
The hypothalamus senses low circulating levels of thyroid hormone (Triiodothyronine (T3) and Thyroxine (T4)) and responds by releasing thyrotropin-releasing hormone (TRH). The TRH stimulates the anterior pituitary to produce thyroid-stimulating hormone (TSH). The TSH, in turn, stimulates the thyroid to produce thyroid hormone until levels in the blood return to normal. Thyroid hormone exerts negative feedback control over the hypothalamus as well as anterior pituitary, thus controlling the release of both TRH from hypothalamus and TSH from anterior pituitary gland.
The HPA, HPG, and HPT axes are three pathways in which the hypothalamus and pituitary direct neuroendocrine function.
Physiology
Thyroid homeostasis results from a multi-loop feedback system that is found in virtually all higher vertebrates. Proper function of thyrotropic feedback control is indispensable for growth, differentiation, reproduction and intelligence. Very few animals (e.g. axolotls and sloths) have impaired thyroid homeostasis, exhibiting a very low set-point that is assumed to underlie the metabolic and ontogenetic anomalies of these animals.
The pituitary gland secretes thyrotropin (TSH; Thyroid Stimulating Hormone) that stimulates the thyroid to secrete thyroxine (T4) and, to a lesser degree, triiodothyronine (T3). The major portion of T3, however, is produced in peripheral organs, e.g. liver, adipose tissue, glia and skeletal muscle, by deiodination from circulating T4. Deiodination is controlled by numerous hormones and neural signals, including TSH, vasopressin and catecholamines.
Both peripheral thyroid hormones (iodothyronines) inhibit thyrotropin secretion from the pituitary (negative feedback). Consequently, equilibrium concentrations for all hormones are attained.
TSH secretion is also controlled by thyrotropin releasing hormone (thyroliberin, TRH), whose secretion itself is again suppressed by plasma T4 and T3 in CSF (long feedback, Fekete–Lechan loop). Additional feedback loops are ultrashort feedback control of TSH secretion (Brokken-Wiersinga-Prummel loop) and linear feedback loops controlling plasma protein binding.
Recent research suggested the existence of an additional feedforward motif linking TSH release to deiodinase activity in humans. The existence of this TSH-T3 shunt could explain why deiodinase activity is higher in hypothyroid patients and why a minor fraction of affected individuals may benefit from substitution therapy with T3.
Convergence of multiple afferent signals in the control of TSH release, including but not limited to T3, cytokines and TSH receptor antibodies, may be the reason why the relation between free T4 concentration and TSH levels deviates from the pure log-linear relation that had previously been proposed. Recent research suggests that ghrelin also plays a role in the stimulation of T4 production and the subsequent suppression of TSH, both directly and by negative feedback.
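The qualitative behaviour of this negative feedback can be sketched with a deliberately minimal two-variable model; the equations and all parameter values below are invented for illustration and are not physiological:

# Minimal feedback sketch: TSH stimulates T4 secretion, while T4
# suppresses TSH release. Units and constants are arbitrary.
def simulate(steps=20000, dt=0.01):
    tsh, t4 = 1.0, 1.0                            # arbitrary initial levels
    for _ in range(steps):
        d_tsh = 1.0 / (1.0 + t4**4) - 0.5 * tsh   # T4 inhibits TSH release
        d_t4 = 0.8 * tsh - 0.4 * t4               # TSH drives T4 secretion
        tsh += dt * d_tsh
        t4 += dt * d_t4
    return tsh, t4

# The loop settles to a stable equilibrium (a set point); weakening the
# inhibition term shifts the equilibrium, loosely mimicking dysfunction.
print(simulate())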
Functional states of thyrotropic feedback control
Euthyroidism: Normal thyroid function
Hypothyroidism: Reduced thyroid function
primary hypothyroidism: Feedback loop interrupted by low thyroid secretory capacity, e.g. after thyroid surgery or in case of autoimmune thyroiditis
secondary hypothyroidism: Feedback loop interrupted on the level of pituitary, e.g. in anterior pituitary failure
tertiary hypothyroidism: Lacking stimulation by TRH, e.g. in hypothalamic failure, Pickardt–Fahlbusch syndrome or euthyroid sick syndrome.
Hyperthyroidism: Inappropriately increased thyroid function
primary hyperthyroidism: Inappropriate secretion of thyroid hormones, e.g. in case of Graves' disease.
secondary hyperthyroidism: Rare condition, e.g. in case of TSH producing pituitary adenoma or partial thyroid hormone resistance.
Thyrotoxicosis: Over-supply with thyroid hormones, e.g. by overdosed exogenously levothyroxine supplementation.
Low-T3 syndrome and high-T3 syndrome: Consequences of step-up hypodeiodination, e.g. in critical illness as an example for type 1 allostasis, or hyperdeiodination, as in type 2 allostasis, including posttraumatic stress disorder.
Resistance to thyroid hormone: Feedback loop interrupted on the level of pituitary thyroid hormone receptors.
Diagnostics
Standard procedures cover the determination of serum levels of the following hormones:
TSH (thyrotropin, thyroid stimulating hormone)
Free T4
Free T3
For special conditions the following assays and procedures may be required:
Total T4
Total T3
TBG
TRH test
Thyroid's secretory capacity (GT)
Sum activity of peripheral deiodinases (GD)
TSH Index (TSHI)
See also
Thyroid function tests
Hypothalamic–pituitary–adrenal axis
Hypothalamic–pituitary–gonadal axis
Hypothalamic–neurohypophyseal system
SimThyr, a free computer simulation for thyroid homeostasis in humans
References
Further reading
Hormones of the hypothalamus-pituitary-thyroid axis
Biomedical cybernetics
Human homeostasis | Hypothalamic–pituitary–thyroid axis | [
"Biology"
] | 1,255 | [
"Human homeostasis",
"Homeostasis"
] |
9,542,516 | https://en.wikipedia.org/wiki/Neurine | Neurine is an alkaloid found in egg yolk, brain, bile and in cadavers. It is formed during putrefaction of biological tissues by the dehydration of choline. It is a poisonous, syrupy liquid with a fishy odor.
Neurine is a quaternary ammonium salt with three methyl groups and one vinyl group attached to the nitrogen atom. Synthetically, neurine can be prepared by the reaction of acetylene with trimethylamine. Neurine is unstable and decomposes readily to form trimethylamine.
References
Merck Index, 11th Edition, 6393.
Alkaloids
Quaternary ammonium compounds
Vinyl compounds | Neurine | [
"Chemistry"
] | 147 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Natural products",
"Alkaloids"
] |
9,542,521 | https://en.wikipedia.org/wiki/Glossary%20of%20BitTorrent%20terms | This is a glossary of jargon related to peer-to-peer file sharing via the BitTorrent protocol.
Terms
Availability
(Also known as distributed copies.) The availability of each piece in the torrent is defined as the number of peers who have a copy of that piece.
The availability of the entire torrent is defined as the nonnegative real number whose integer part is the minimum piece availability and whose fractional part is the fraction of pieces that have higher availability than the minimum piece availability.
Example: There are 10 pieces, Peer A has pieces 0 to 5, Peer B has 2 to 7, and Peer C has 4 to 9. Pieces 0, 1, 8, 9 have availability 1. Pieces 2, 3, 6, 7 have availability 2. Pieces 4 and 5 have availability 3. The entire torrent has availability 1.6 (1 + 6/10). The integer part is 1 because 1 is the lowest piece availability. The fractional part is 6/10 because more than one peer has pieces 2 to 7 (6 pieces) and there are 10 total pieces. Even though 3 peers have pieces 4 and 5, it does not further increase the availability.
Sometimes "distributed copies" is considered to be "availability minus 1". So if the availability is 2.6, the distributed copies will be 1.6 because it is only counting the additional "copies" of the file.
Choked
Describes a peer to which the client refuses to send file pieces. A client chokes another client in several situations:
The second client is a seed, in which case it does not want any pieces (i.e., it is completely uninterested)
The client is already uploading at its full capacity (it has reached the value of max_uploads)
The second client has been blacklisted for being abusive or is using a blacklisted BitTorrent client.
Client
The program that enables peer-to-peer file sharing via the BitTorrent protocol. See Comparison of BitTorrent clients.
Distributed Hash Table
Distributed hash tables (DHT) are used in BitTorrent so that peers can supply a client with lists of other seeds and peers in a torrent's swarm directly, without the need for a tracker.
Endgame / Endgame mode
Any applied algorithm for downloading the last few pieces (see below) of a torrent.
In typical client operation the last download pieces arrive more slowly than the others. This is because the faster and more easily accessible pieces should have already been obtained. In order to prevent the last pieces becoming unobtainable, BitTorrent clients attempt to get the last missing pieces from all of its peers. Upon receiving the last pieces a cancel request command is sent to other peers.
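In outline (the Peer class and its request/cancel methods are invented stand-ins; real clients differ in detail), endgame mode amounts to:

class Peer:
    # Minimal stand-in for a connected peer (hypothetical API).
    def __init__(self, name):
        self.name = name
    def request(self, piece):
        print(f"request piece {piece} from {self.name}")
    def cancel(self, piece):
        print(f"cancel piece {piece} at {self.name}")

def endgame(missing_pieces, peers):
    # Request every still-missing piece from every connected peer.
    for piece in missing_pieces:
        for peer in peers:
            peer.request(piece)

def on_piece_received(piece, peers):
    # A copy arrived: cancel the now-redundant duplicate requests.
    for peer in peers:
        peer.cancel(piece)

peers = [Peer("A"), Peer("B")]
endgame([97, 98, 99], peers)
on_piece_received(98, peers)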
Fake
A fake torrent is a torrent that does not contain what is specified in its name or description (e.g. a torrent is said to contain a video, but it contains only a snapshot of a moment in the video, or in some cases malware).
Freeleech
Freeleech means that the download size of the torrent does not count towards a user's overall ratio; only the amount uploaded on the torrent counts toward the ratio.
Grab
A torrent is grabbed when its metadata files have been downloaded.
Hash
The hash is a digital fingerprint in the form of a string of alphanumeric characters (typically hexadecimal) in the .torrent file that the client uses to verify the data that is being transferred. "Hash" is the shorter form of the word "hashsum".
Torrent files contain information like the file list, sizes, pieces, etc. Every piece received is first checked against the hash. If it fails verification, the data is discarded and requested again.
Hash checks greatly reduce the chance that invalid data is incorrectly identified as valid by the BitTorrent client, but it is still possible for invalid data to have the same hash value as the valid data and be treated as such. This is known as a hash collision. Torrent and p2p files typically use 160-bit hashes that are reasonably free from hash collision problems, so the probability of bad data being received and passed on is extraordinarily small.
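A sketch of the per-piece check (names are invented; real clients read the expected digests out of the .torrent metadata, which conventionally stores 160-bit SHA-1 hashes):

import hashlib

def piece_ok(piece_data: bytes, expected_sha1_hex: str) -> bool:
    # Verify one received piece against the digest from the metadata;
    # on mismatch, a client discards the piece and re-requests it.
    return hashlib.sha1(piece_data).hexdigest() == expected_sha1_hex

data = b"example piece payload"
good = hashlib.sha1(data).hexdigest()
print(piece_ok(data, good))           # True
print(piece_ok(data + b"x", good))    # False -> discard and re-request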
Health
Health is shown in a bar or in % usually next to the torrent's name and size, on the site where the .torrent file is hosted. It shows if all pieces of the torrent are available to download (i.e. 50% means that only half of the torrent is available). Health does not indicate whether the torrent is free of viruses.
Hit-and-run
To intentionally "leech" a file; downloading a file while seeding as little as possible. It's abbreviated HnR or H&R.
Index
An index is a list of .torrent files (usually including descriptions and other information) managed by a website and available for searches. An index website can also be a tracker.
Interested
Describes a downloader who wishes to obtain pieces of a file the client has. For example, the uploading client would flag a downloading client as 'interested' if that client did not possess a piece that it did, and wished to obtain it.
Leech
Leech has two meanings. Often, leecher is synonymous with downloader: simply describing a peer or any client that does not have 100% of the data.
The term leech also refers to a peer (or peers) that has a negative effect on the swarm by having a very poor share ratio, downloading much more than they upload. Leeches may be on asymmetric Internet connections or do not leave their BitTorrent client open to seed the file after their download has completed. However, some leechers intentionally avoid uploading by using modified clients or excessively limiting their upload speed.
Lurker
A lurker is a user that only downloads files from the group but does not add new content. It does not necessarily mean that the lurker will not seed. Not to be confused with a leecher.
Magnet link
A mechanism different from a .torrent metafile which can be used to identify a set of files for BitTorrent based on content, as opposed to referencing any particular tracker. The method is not limited to BitTorrent data. See Magnet URI scheme.
Overseeded
In private trackers using ratio credit, a torrent is overseeded when its availability is so high that seeders have difficulty finding downloaders.
p2p
In a p2p network, each node (or computer on the network) acts as both a client and a server. In other words, each computer is capable of both responding to requests for data and requesting data itself.
Peer
A peer is one instance of a BitTorrent client running on a computer on the Internet to which other clients connect and transfer data. Depending on context, "peer" can refer either to any client in the swarm or more specifically to a downloader, a client that has only parts of the file.
Piece
This refers to the torrented files being divided up into equal specific sized pieces (e.g., 64kB, 128kB, 512kB, 1MB, 2MB, 4MB or 8MB). The pieces are distributed in a random fashion among peers in order to optimize trading efficiency.
Ratio credit
Ratio credit, also known as upload credit or ratio economy, is a currency system used on a number of private trackers to provide an incentive for higher upload/download ratios among member file-sharers. In such a system, those users with greater amounts of bandwidth, hard drive space (particularly seedboxes) or idle computer uptime are at a greater advantage to accumulate ratio credits versus those lacking in any one or more of the same resources.
Scraping
This is when a client sends a request to the tracking server for information about the statistics of the torrent, such as with whom to share the file and how well those other users are sharing.
Seed / seeding
A seed refers to a machine possessing all of the data (100% completion). A peer or downloader becomes a seed when it completely downloads all the data and continues/starts uploading data for other peers to download from. This includes any peer possessing 100% of the data or a web seed. When a downloader starts uploading content, the peer becomes a seed.
Seeding refers to leaving a peer's BitTorrent client open and available for additional individuals to download from. Normally, a peer should seed more data than download. However, whether to seed or not, or how much to seed, depends on the availability of downloaders and the choice of the peer at the seeding end.
Share ratio
A user's share ratio for any individual torrent is a number determined by dividing the amount of data that user has uploaded by the amount of data they have downloaded. Final share ratios over 1.0 carry a positive connotation in the BitTorrent community, because they indicate that the user has sent more data to other users than they received. Likewise, share ratios under 1 have negative connotation.
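As a trivial computation (the function name is illustrative):

def share_ratio(uploaded: int, downloaded: int) -> float:
    # Bytes uploaded divided by bytes downloaded.
    return uploaded / downloaded

# 3 GiB uploaded against 2 GiB downloaded -> ratio 1.5, a net contributor.
print(share_ratio(3 * 1024**3, 2 * 1024**3))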
Snatch
A torrent is snatched when its data files have been downloaded.
Snubbing
An uploading client is displayed as snubbed if the downloading client has not received any data from it in over 60 seconds.
Super-seeding
When a file is new, much time can be wasted because the seeding client might send the same file piece to many different peers, while other pieces have not yet been downloaded at all. Some clients, like Vuze, μTorrent, and qBittorrent have a "super-seed" mode, where they try to only send out pieces that have never been sent out before, theoretically making the initial propagation of the file much faster. However the super-seeding becomes less effective and may even reduce performance compared to the normal "rarest first" model in cases where some peers have poor or limited connectivity. This mode is generally used only for a new torrent, or one which must be re-seeded because no other seeds are available.
Swarm
Together, all peers (including seeds) sharing a torrent are called a swarm. For example, six ordinary peers and two seeds make a swarm of eight. The term is a carryover from the predecessor to BitTorrent, a program called Swarmcast, originally from OpenCola.
BitTorrent may sometimes display a swarm number that has no relation to the number of seeds and peers you are connected to or who are available. For example, it may show 5 out of 10 connected peers, 20 out of 100 connected seeds, and a swarm of 3.
Torrent
A torrent can mean either a .torrent metadata file or all files described by it, depending on context. The torrent file contains metadata about all the files it makes downloadable, including their names and sizes and checksums of all pieces in the torrent. It also contains the address of a tracker that coordinates communication between the peers in the swarm.
Tracker
A tracker is a server that keeps track of which seeds and peers are in the swarm. Clients report information to the tracker periodically and in exchange, receive information about other clients to which they can connect. The tracker is not directly involved in the data transfer and does not have a copy of the file. It only receives information from the client.
References
BitTorrent
BitTorrent terms
| Glossary of BitTorrent terms | [
"Technology"
] | 2,354 | [
"Computing terminology",
"Glossaries of computers"
] |
9,543,027 | https://en.wikipedia.org/wiki/Right%20to%20know | Right to know is a human right enshrined in law in several countries. UNESCO defines it as the right for people to "participate in an informed way in decisions that affect them, while also holding governments and others accountable". It pursues universal access to information as essential foundation of inclusive knowledge societies. It is often defined in the context of the right for people to know about their potential exposure to environmental conditions or substances that may cause illness or injury, but it can also refer more generally to freedom of information or informed consent.
Australia
Right to know regarding environmental hazard information is protected by Australian law, which is described at Department of Sustainability, Environment, Water, Population and Communities.
Right to know regarding workplace hazard information is protected by Australian law, which is described at Safe Work Australia and at the Hazardous Substances Information System.
Canada
Right to know regarding workplace hazard information is protected by Canadian law.
Right to know regarding environmental hazard information is protected by Canadian law, which is described at Environment Canada.
Europe
Europe consists of many countries, each of which has its own laws. The European Commission provides central access to most of the information about individual regulatory agencies and laws.
Right to know about environmental hazards is managed by the European Commission's Directorate-General for the Environment and by the European Environment Agency.
Right to know about workplace hazards is managed by the European Agency for Safety and Health at Work.
United States
In the context of the United States workplace and community environmental law, right to know is the legal principle that the individual has the right to know the chemicals to which they may be exposed in their daily living. It is embodied in United States federal law as well as in local laws in several U.S. states. "Right to Know" laws take two forms: Community Right to Know and Workplace Right to Know. Each grants certain rights to those groups. The "right to know" concept is included in Rachel Carson's book Silent Spring.
Toxic substances used in the work area must be disclosed to the occupants under laws managed by Occupational Safety and Health Administration.
Hazardous substances used outside buildings must be disclosed to the appropriate state or local agency responsible for state environmental protection, including regulatory actions outside federal land. Use on federal land is managed by the United States Environmental Protection Agency and the Bureau of Land Management.
The US Department of Defense is self-regulating, and as such is immune to state and federal law pertaining to Occupational Safety and Health Administration (OSHA) and Environmental Protection Agency (EPA) regulations on foreign and domestic soil.
Occupational Safety and Health Administration
Occupational Health and Safety is managed within most states under federal authority.
Workplace safety and health in the U.S. operates under the framework established by the federal Occupational Safety and Health Act of 1970 (OSH Act).
Occupational Safety and Health Administration (OSHA) within the U.S. Department of Labor is responsible for issuing and enforcing regulations covering workplace safety.
The Department of Transportation is responsible for transportation safety and for maintaining the list of hazardous materials.
The Environmental Protection Agency is responsible for maintaining lists of specific hazardous materials.
Environmental Protection Agency
Environmental health and safety outside the workplace is established by the Emergency Planning and Community Right-to-Know Act (EPCRA), which is managed by the Environmental Protection Agency (EPA) and various state and local government agencies.
State and local agencies maintain epidemiology information required by physicians to evaluate environmental illness.
Air quality information must be provided by pest control supervisors under license requirements established by the Worker Protection Standard when restricted use pesticide is applied.
The list of restricted use pesticides is maintained by the US EPA.
Additionally, specific environmental pollutants are identified in public law, which extends to all hazardous substances even if the item is not identified as a restricted use pesticide by the EPA. As an example, cyfluthrin, cypermethrin, and cynoff produce hydrogen cyanide upon combustion, but some pesticides that inadvertently produce noxious chemicals may not be identified as restricted-use pesticides.
Title 42 U.S.C. Section 7412 identifies the list of environmental pollutants.
Some specific chemicals, such as cyanate, cyanide, cyano, and nitrile compounds, satisfy the specific hazard definition that is identified in public law regardless of whether or not the item is identified on the list of restricted use pesticides maintained by the United States Environmental Protection Agency. Title 42 U.S.C. Section 7413 contains the reporting requirement for environmental pollutants.
Environmental illnesses share characteristics with common diseases. For example, cyanide exposure symptoms include weakness, headache, nausea, confusion, dizziness, seizures, cardiac arrest, and unconsciousness; influenza and heart disease involve many of the same symptoms.
Failure to obtain the disclosures that physicians require will result in improper, ineffective, or delayed medical diagnosis and treatment of environmental illness caused by exposure to hazardous substances or to radiation.
Department of Transportation (DOT)
The Pipeline and Hazardous Materials Safety Administration within the US Department of Transportation is responsible for maintaining the list of hazardous materials within the United States.
All hazardous materials that are not created at the work site must be transported there, typically by motor vehicle, and the safety and security of the public transportation system is enforced by the Department of Transportation.
The Department of Transportation also regulates mandatory labeling requirements for all hazardous materials. This is in addition to requirements by other federal agencies, like the United States Environmental Protection Agency, and Occupational Safety and Health Administration.
DOT is responsible for enforcement actions and public notification regarding hazardous chemical releases and exposures, including incidents involving federal workers.
DOT requires that all buildings and vehicles containing hazardous materials must have signs that disclose specific types of hazards for certified first responder.
Department of Energy (DOE)
Safety of certain workers is governed by the US Department of Energy, such as mine workers. Public information can be obtained in the form of directives.
Department of Defense (DOD)
The United States Department of Defense manages environmental safety independent of OSHA and EPA. Spills, mishaps, illnesses, and injuries are not normally handled in accordance with local, state, and federal law.
Failure to administer discipline for illegal activity occurring within a military command is considered to be dereliction of duty, which is administered under the Uniform Code of Military Justice.
Individuals with information about environmental crimes and injuries involving the military are protected by Whistleblower protection in United States. Government employees, government contractors, and military officers often lack the training, education, licensing, and experience required to understand the legal requirements involving environmental safety. The sophistication required to understand legal requirements is not normally required for promotion and contractor selection within the military. Because of this, specific rules are documented in orders and directives that need to be written in plain language intended to be understood by people that have a 4th-grade reading ability.
Laws are enforced by the commanding officer in military organizations. The commanding officer typically has the ability to read and understand written requirements. A Flag Officer is subject to Court-martial action if laws or government policies are violated under their command when the activity is outside the scope of mission orders and rules of engagement. Each commanding officer is responsible for writing and maintaining policies simple enough to be understood by everyone in their command. Each commanding officer is responsible for ensuring that command policy documents are made available to every person in their command (civilian, military, and contractor). The commanding officer is responsible for disciplinary action and public disclosures when policies are violated within their command.
The commanding officer shares responsibilities for crimes that are not punished (dereliction).
Military agencies operate independently of law enforcement, judicial authority, and common law. Similar exemptions exist for some state agencies.
Potential crimes are investigated by military police. The following is an example of the kinds of policy documents used to conduct criminal investigations.
Because military law enforcement is performed with no independent civilian oversight, there is an inherent conflict of interest. Information and disclosures are obtained through Freedom of Information Act request and not through disclosures ordinarily associated with the EPA and OSHA that have the competency required for training, certification, disclosure, and enforcement. This prevents physicians from obtaining the kind of information needed to diagnose and treat environmental illness, so the root cause for environmental illness typically remains permanently unknown. The following organization may help when the root cause for an illness remains unknown longer than 30 days.
Criminal violations, injuries, and potential enforcement actions begin by exchanging information in the following venues when civilian government employees and flag officers are unable to deal with the situation in an ethical manner.
Local labor union officials
Freedom of information in the United States
Equal Employment Opportunity Commission
Office of the Inspector General, U.S. Department of Defense (Hotline)
United States Secretary of Defense
President of the United States
United States Secretary of State
United States House of Representatives
United States Senate
US federal laws, state laws, local laws, foreign laws, and treaty agreements may not apply.
Policies are established by Executive Order and not public law, except for interventions by the United States Congress and interventions by US district courts.
The following US presidential executive orders establish the requirements for DoD environmental policy for government organizations within the executive branch of the United States.
Executive Order 12114 - Environmental effects abroad of major Federal actions
Executive Order 12196 - Occupational safety and health programs for Federal employees
Executive Order 12291 - Regulatory planning process
Executive Order 12344 - Naval Nuclear Propulsion Program
Executive Order 12898 - Federal Actions To Address Environmental Justice in Minority Populations and Low-Income Populations
Executive Order 12958 - Classified National Security Information
Executive Order 12960 - Amendments to the Manual for Courts-Martial
Executive Order 12961 - Presidential Advisory Committee on Gulf War Veterans' Illnesses
Executive Order 13101 - Greening the Government Through Waste Prevention, Recycling, and Federal Acquisition
Executive Order 13148 - Greening the Government Through Leadership in Environmental Management
Executive Order 13151 - Global Disaster Information Network
Executive Order 13388 - Further Strengthening the Sharing of Terrorism Information to Protect Americans
Executive Order 12656 - Assignment of emergency preparedness responsibilities
Executive Order 13423 - Strengthening Federal Environmental, Energy, and Transportation Management
Executive Order 13526 - Classified National Security Information Memorandum
The following unclassified documents provide further information for programs managed by the United States Secretary of Defense.
DoD Directive 3150.08 - DoD Response to Nuclear and Radiological Incidents
DoD Directive 3222.3 - DoD Electromagnetic Environmental Effects (E3)
Directive 4715.1 - Environment, Safety, and Occupational Health (ESOH)
DoD Directive 4715.3 - Environmental Conservation Program
DoD Directive 4715.5 - Management of Environmental Compliance at Overseas Installations
Directive 4715.8 - Environmental Remediation for DoD Activities Overseas
DoD Directive 4715.11 - Environmental and Explosives Safety Management on Operational Ranges Within the United States
DoD Directive 4715.12 - Environmental and Explosives Safety Management on Operational Ranges Outside the United States
DoD Directive 6050.07 - Environmental Effects Abroad of Major Department of Defense Actions
Available information
The information described in this section is for the United States, but most countries have similar regulatory requirements.
Two mandatory documents must provide hazard information for most toxic products.
Product Label
Safety Data Sheet
Product label requirements are established by the Federal Insecticide, Fungicide, and Rodenticide Act under the authority of the United States Environmental Protection Agency. At a minimum, this requires information about the chemical makeup of the product, instructions for the safe use of the product, and contact information for the manufacturer of the product.
Title 40 CFR, Protection of Environment, Parts 150 to 189, Chapter I: Environmental Protection Agency
A Safety Data Sheet is required under the authority of the United States Occupational Safety and Health Administration for hazardous materials to communicate health and safety risks needed by health care professionals and emergency responders.
Title 29: Labor, Part 1910: Occupational Safety and Health Standards, Subpart Z: Toxic and Hazardous Substances
A summary of workers' rights is available from OSHA.
Chemical information is most frequently associated with the right to know but there are many other types of information that are important to workplace safety and health. The following sources of information are those most likely to be found at the workplace or in state or federal agencies with jurisdiction over the workplace:
Injury and illness records which employers are required to keep.
Accident investigation reports.
Workers' compensation claim forms and records.
Safety data sheets (SDS) and labels for hazardous chemicals used or present in the workplace.
Chemical inventories required by federal and state regulations.
Records of monitoring and measurement of worker exposure to chemicals, noise, radiation, or other hazards.
Workplace inspection reports, whether done by a safety committee, employer safety and health personnel, OR-OSHA, insurance carriers, fire departments, or other outside agencies.
Job safety analysis, including ergonomic evaluations of jobs or workstations.
Employee medical records or studies or evaluations based on these records.
OSHA standards and the background data on which they are based.
Hazard Communication (HazCom 2012)
Note: Refer to 29 CFR 1910.1200 for the most current and updated information.
The Hazard Communication Standard first went into effect in 1985 and has since been expanded to cover almost all workplaces under OSHA jurisdiction. The details of the Hazard Communication standard are rather complicated, but the basic idea behind it is straightforward. It requires chemical manufacturers and employers to communicate information to workers about the hazards of workplace chemicals or products, including training.
The Hazard Communication standard does not specify how much training a worker must receive. Instead, it defines what the training must cover. Employers must conduct training in a language comprehensible to employees to be in compliance with the standard. It also states that workers must be trained at the time of initial assignment and whenever a new hazard is introduced into their work area. The purpose for this is so that workers can understand the hazards they face and so that they are aware of the protective measures that should be in place.
It is very difficult to get a good understanding of chemical hazards and particularly to be able to read SDSs in the short amount of time that many companies devote to hazard communication training. When OSHA conducts an inspection, the inspector will evaluate the effectiveness of the training by reviewing records of what training was done and by interviewing employees who use chemicals to find out what they understand about the hazards.
The United States Department of Transportation (DOT) regulates hazmat transportation within the territory of the US by Title 49 of the Code of Federal Regulations.
Dangerous Goods
All chemical manufacturers and importers must assess the hazards of the chemicals they produce and import and pass this information on to transportation workers and purchasers through labels and Safety Data Sheets (SDSs). Employers whose employees may be exposed to hazardous chemicals on the job must provide hazardous chemical information to those employees through the use of SDSs, properly labeled containers, training, and a written hazard communication program. This standard also requires the employer to maintain a list of all hazardous chemicals used in the workplace. The SDSs for these chemicals must be kept current and they must be made available and accessible to employees in their work areas.
Chemicals that may pose health risks or those that are physical hazards (such as fire or explosion) are covered. Lists of chemicals that are considered hazardous are maintained according to use or purpose. There are several existing sources that manufacturers and employers may consult. These include:
Any substance for which OSHA has a standard in force, including any substance listed in the Air Contaminants regulation.
Substances listed as carcinogens (causing cancer) by the National Toxicology Program (NTP) or the International Agency for Research on Cancer (IARC).
Substances listed in the Threshold Limit Values for Chemical Substances and Physical Agents, published by the American Conference of Governmental Industrial Hygienists (ACGIH).
Restricted Use Products (RUP) Report; EPA
Ultimately, it is up to the manufacturer to disclose hazards.
There are other sources of information about chemicals used in industry as a result of state and federal community right-to-know laws.
The Air Resources Board is responsible for public hazard disclosures in California. Pesticide use disclosures are made by each pest control supervisor to the County Agricultural Commission. Epidemiology information is available from the California Pesticide Information Portal, which can be used by health care professionals to identify the cause of environmental illness.
Under the Oregon Community Right to Know Act (ORS 453.307-372) and the federal Superfund Amendments and Reauthorization Act (SARA) Title III, the Office of the State Fire Marshal collects information on hazardous substances and makes it available to emergency responders and to the general public. Among the information which companies must report are:
Inventories of amounts and types of hazardous substances stored in their facilities.
Annual inventories of toxic chemicals released during normal operations.
Emergency notification of accidental releases of certain chemicals listed by the Environmental Protection Agency.
The information can be obtained in the form of an annual report of releases for the state or for specific companies. It is available on request from the Fire Marshal's Office and is normally free of charge unless unusually large quantities of data are involved.
Chemical labeling requirements
Each container that contains a hazardous chemical must be labeled by the manufacturer or distributor before it is sent to downstream users. There is no single standard format for labels. Each product must be labeled according to the specific type of hazard.
Pesticide and fungicide labeling is regulated by the Environmental Protection Agency. At a minimum, labels must include:
The identity of the hazardous chemical(s) by common or chemical name.
Appropriate hazard warnings.
The name and address of the manufacturer, distributor, or the responsible party.
Product use instructions
Employers are required to inform employees of:
The requirements of the Hazard Communication rules.
The operations in their work area where hazardous materials are present.
The location of the written hazard communication program, the list of hazardous chemicals, and the SDSs of chemicals that people will be exposed to.
In addition, these items must be covered in training:
Methods to detect the presence of hazardous chemicals.
Physical and health hazards of the chemicals.
Protective measures, including work practices, ventilation, personal protective equipment, and emergency procedures.
How to read and understand labels and SDSs.
The hazards of non-routine tasks, such as the cleaning of tanks or other vessels, or breaking into lines containing chemicals.
Safety Data Sheet (SDS): Formerly known as Material Safety Data Sheet (MSDS) as per OSHA's Hazard Communication Standard
Note: Refer to 29 CFR 1910.1200 for the most current and updated information (https://www.osha.gov/pls/oshaweb/owadisp.show_document?p_table=standards&p_id=10099)
SDS information is required by EPA, OSHA, DOT, and/or DOE regulations depending upon the type of hazardous substance. The Safety Data Sheet includes the following information:
Product identity and ingredients by chemical or common name.
Physical and chemical characteristics.
Physical hazards, such as fire and explosion.
Health hazards, including symptoms.
Primary routes of entry of the chemical into the body.
Legal exposure limits (OSHA and other recommended limits).
Whether the chemical can cause cancer.
Precautions for safe handling and use.
Control measures, including ventilation, personal protective equipment, etc.
Emergency and first aid procedures.
The date the SDS was prepared.
Name, address, and phone number of the manufacturer.
Applicable regulatory information, such as the United States Environmental Protection Agency (EPA) SARA Title III rules (EPCRA).
Chemical manufacturers may legally withhold the specific chemical identity of a material from the SDS and label in the case of bona fide trade secrets. In such cases the following rules apply:
The SDS must indicate that trade secret information is being withheld.
The SDS must disclose information concerning the properties and effects of the hazardous chemical, even if the actual chemical identity is withheld.
The trade secret information must be disclosed to a doctor or nurse in a medical emergency.
In non-emergency cases health professionals can obtain a trade secret chemical identity if they can show they need it for purposes of health protection and if they sign a confidentiality agreement.
Exposure records
The Hazard Communication standard requires that chemical information must be transmitted to employees who work with hazardous materials. Employee exposure records can tell if a worker is actually being exposed to a chemical or physical hazard and how much exposure he or she is receiving. OSHA regulations that establish access rights to these records are found in 29 CFR 1910.1020: Access to Medical and Exposure Records. This information is usually the product of some type of monitoring or measurement for:
Dusts, fumes, or gases in the air
Absorption of a chemical into the body, e.g. blood lead levels
Noise exposure
Radiation exposure
Spores, fungi, or other biological contaminants
Employees and their designated representatives have the right under OR-OSHA regulations to examine or copy exposure records that are in the possession of the employer. This right applies not only to records of an employee's own exposure to chemical, physical, or biological agents but also to exposure records of other employees whose working conditions are similar to the employee's. Union representatives have the right to see records for any work areas in which the union represents employees.
In addition to seeing the results, employees and their representatives also have the right to observe the actual measurement of hazardous chemical or noise exposure.
Exposure records that are part of an OR-OSHA inspection file are also accessible to employees and union representatives. In fact these files, with the exception of certain confidential information, are open to the public after the inspection has been legally closed out.
Medical records
Many employers keep some type of medical records. These could be medical questionnaires, results of pre-employment physical examinations, results from blood tests, or more elaborate records of ongoing diagnosis or treatment (such as all biological monitoring not defined as an employee exposure record). OSHA regulations that establish access rights to these records are found in 29 CFR 1910.1020: Access to Medical and Exposure Records.
Medical records are considerably more personal than exposure records or accident reports, so the rules governing confidentiality and access to them are stricter. Because of this extra scrutiny, employee medical records exclude much employee medical information. A good rule of thumb is that if the information is maintained separately from the employer's medical program, it probably will not be accessible.
Examples of separately maintained medical information would be records of voluntary employee assistance programs (alcohol, drug abuse, or personal counseling programs), medical records concerning health insurance claims or records created solely in preparation for litigation.
These records are often kept at the worksite if there is an on-site physician or nurse. They could also be in the files of a physician, clinic, or hospital with whom the employer contracts for medical services.
An employee has access to his or her own medical record (29 CFR 1910.1020). An individual employee may also sign a written release authorizing a designated representative (such as a union representative) to receive access to his or her medical record. The latter might occur in a case where the union or a physician or other researcher working for the union or employer needs medical information on a whole group of workers to document a health problem. Certain confidential information may be deleted from an employee's record before it is released.
Past and future
The push towards greater availability of information came from events that killed many and exposed others to toxins, such as the Bhopal disaster in India in December 1984. During the Bhopal disaster, a cloud of methyl isocyanate escaped from an insecticide plant due to neglect, and as a result, 2,000 people were killed and many more were injured. The plant had already been noted for its poor safety record and its lack of an evacuation or emergency plan. The community's lack of awareness and knowledge about the dangers led to this disaster, which could have been avoided.
Shortly after, the Emergency Planning and Community Right-to-Know Act of 1986, originally introduced by California Democrat Henry Waxman, was passed. This act was the first official step taken to educate the public about corporations' pollutants and their actions. The act required industrial facilities across the U.S. to disclose information on their annual releases of toxic chemicals. The data collected are made available by the Environmental Protection Agency in the Toxics Release Inventory (TRI), which is open to the public. This was seen as a step in the right direction; however, only the pounds of individual pollutants released were required to be reported under this act. No information about toxicity, spread, or overlap had to be shared with the public.
In the years that followed, the public gained better ways of accessing the information that corporations with excess pollutants withheld. One newer source of information is the Toxic 100, a list of one hundred industrial air polluters in the United States ranked by the quantity of pollution they produce and the toxicity of their pollutants. The rankings are determined by the Political Economy Research Institute (PERI) and calculated with factors such as winds carrying the pollution, the height of smokestacks, and how much the pollution impacts nearby communities.
See also
International Day for Universal Access to Information
Access to public information in Europe
Freedom of information
Informed consent
References
External links
EPA.gov
National Safety Council
National Institute for Occupational Safety and Health
American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE)
Building Owners and Managers Association (BOMA) International
Freedom of information
Industrial hygiene
Environmental health
Environmental law
Medical ethics
Safety engineering | Right to know | [
"Engineering"
] | 5,155 | [
"Safety engineering",
"Systems engineering"
] |
9,543,613 | https://en.wikipedia.org/wiki/National%20Environmental%20Balancing%20Bureau | Founded in 1971 and headquartered in Gaithersburg, Maryland, USA; the National Environmental Balancing Bureau (NEBB) is an international association certifying firms and qualifying supervisors and technicians in the following disciplines: Testing, Adjusting, and Balancing (TAB) of HVAC systems; Building Systems Commissioning (BSC); Sound and Vibration Measurement (S&V); Retro-commissioning (RCX); Fumehood Testing (FHT); and Cleanroom Performance Testing (CPT). NEBB also establishes and maintains industry standards, procedures, and work specifications for these disciplines.
Administration
In November 2023, NEBB announced the hiring of Luis Chinchilla as the president of the organization. A Costa Rican, he is the first Latin American to serve as NEBB president.
Programs
Discipline committees, consisting of highly experienced field professionals, set guidelines and standards for NEBB disciplines. As of January 2007, NEBB has the following discipline committees: Testing, Adjusting, and Balancing; Building Systems Commissioning; Sound and Vibration Measurement; Fumehood Testing; and Cleanroom Performance Testing.
Fumehood Testing (FHT) and Retro-Commissioning (RCX) Programs
NEBB's FHT and RCX programs were established at a NEBB Board of Directors' meeting held at NEBB's 2006 Annual Meeting and Educational Conference in Palm Springs, California on November 9–11, 2006. NEBB's FHT Committee is working to produce a FHT procedural standards text and plans to offer a seminar in FHT in Fall 2007. NEBB's BSC Committee is producing a seminar in RCX to be offered in Fall 2007.
Certification Requirements
In addition to being affiliated with a local NEBB chapter, NEBB firms are required to have been in business for at least 12 months and enjoy a reputation of integrity and responsible performance. They also must possess sophisticated instruments required for their discipline, which must be calibrated in accordance with NEBB guidelines. In addition, the firm must employ at least one supervisor—who meets NEBB qualifications—to represent the firm and be responsible for the firm's work. Finally, NEBB firms are required to possess a copy of the NEBB procedural standards for their discipline.
Publications
NEBB publishes home study courses, technical manuals, and training materials for industry use. Below is a list of current NEBB publications:
NEBB Procedural Standards for Testing, Adjusting and Balancing of Environmental Systems (7th Edition, 2005)*
Testing, Adjusting and Balancing Manual for Technicians (2nd Edition, 1997)
Environmental Systems Technology (2nd Edition, 1999)
Testing, Adjusting and Balancing Study Course for Supervisors (3rd Edition, 2001)
Testing, Adjusting and Balancing Study Course for Technicians (2002)
Procedural Standards for Building Systems Commissioning (2nd Edition, 1999)
Design Phase Commissioning Handbook (2005)*
NEBB Procedural Standards for the Measurement and Assessment of Sound and Vibration (2nd Edition, 2006)
Sound and Vibration Design and Analysis (1st Edition, 1994)
Study Course for Measuring Sound and Vibration (2nd Edition, 1996)
Procedural Standards for Certified Testing of Cleanrooms (2nd Edition 1996)
Study Course for Certified Testing of Cleanrooms (2nd Edition, 1998)
Note: * also available on CD-ROM
Educational Programs
In addition to its publications, NEBB offers educational seminars at NEBB TEC—NEBB's training and educational facility located in Tempe, Arizona—to enhance the educational experience of each discipline.
Each November, NEBB hosts its Annual Meeting and Educational Conference. The meeting features technical sessions and prominent guest speakers to enhance the professional capabilities of NEBB contractors and staff members. Educational sessions at the meeting feature presentations by industry experts on various topics related to the NEBB disciplines. Past invited guest speakers have included the presidents of the American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE) and Mechanical Contractors Association of America (MCAA).
NEBB TEC
NEBB TEC (Training and Educational Center) is a multi-purpose facility used for seminars and practical exams. Managed by NEBB's technical director, the facility has two cleanrooms, a seminar classroom, and a TAB practical exam room complete with an air handler and pumps.
References
External links
NEBB Website
Heating, ventilation, and air conditioning
Building engineering organizations
Engineering societies based in the United States
Cleanroom technology | National Environmental Balancing Bureau | [
"Chemistry",
"Engineering"
] | 869 | [
"Building engineering",
"Building engineering organizations",
"Cleanroom technology"
] |
9,543,833 | https://en.wikipedia.org/wiki/Integrated%20Microbial%20Genomes%20System | The Integrated Microbial Genomes (IMG) system is a genome browsing and annotation platform developed by the U.S. Department of Energy (DOE)-Joint Genome Institute. IMG contains all the draft and complete microbial genomes sequenced by the DOE-JGI integrated with other publicly available genomes (including Archaea, Bacteria, Eukarya, Viruses and Plasmids). IMG provides users a set of tools for comparative analysis of microbial genomes along three dimensions: genes, genomes and functions. Users can select and transfer them in the comparative analysis carts based upon a variety of criteria. IMG also includes a genome annotation pipeline that integrates information from several tools, including KEGG, Pfam, InterPro, and the Gene Ontology, among others. Users can also type or upload their own gene annotations (called MyIMG gene annotations) and the IMG system will allow them to generate Genbank or EMBL format files containing these annotations.
In successive releases, IMG has expanded to include several domain-specific tools. The Integrated Microbial Genomes with Microbiome Samples (IMG/M) system is an extension of the IMG system providing a comparative analysis context for assembled metagenomic data alongside the publicly available isolate genomes. The Integrated Microbial Genomes Expert Review (IMG/ER) system provides support to individual scientists or groups of scientists for functional annotation and curation of their microbial genomes of interest. Users can submit their annotated genomes (or request the IMG automated annotation pipeline to be applied first) into IMG/ER and proceed with manual curation and comparative analysis in the system, through secure (password-protected) access. IMG-HMP is focused on analysis of genomes related to the Human Microbiome Project (HMP) in the context of all publicly available genomes in IMG. The IMG-ABC system is a system for bacterial secondary metabolism analysis and targeted biosynthetic gene cluster discovery. The IMG/VR system (with the recently updated version IMG/VR v2.0) is the largest publicly available database of viral genomes and metagenomes.
See also
Genomes OnLine Database
Genomics
Metagenomics
MicrobesOnline
References
External links
IMG/M home page
MicrobesOnline
NCBI Microbial Genomes
TIGR Comprehensive Microbial Resource
The SEED
Biological databases
Genome databases
Pathogen genomics | Integrated Microbial Genomes System | [
"Biology"
] | 518 | [
"Molecular genetics",
"DNA sequencing",
"Genome projects",
"Pathogen genomics"
] |
9,544,219 | https://en.wikipedia.org/wiki/Common%20Public%20Radio%20Interface | The Common Public Radio Interface (CPRI) standard defines an interface between Radio Equipment Control (REC) and Radio Equipment (RE). Oftentimes, CPRI links are used to carry data between cell sites/remote radio heads and base stations/baseband units.
The purpose of CPRI is to allow replacement of a copper or coaxial cable connection between a radio transceiver (used, for example, for mobile-telephone communication and typically located in a tower) and a base station/baseband unit (typically located on the ground nearby), so the connection can be made to a remote and more convenient location. This connection (often referred to as the fronthaul network) can be a fiber to an installation where multiple remote base stations may be served. The fiber supports both single-mode and multi-mode communication, and the fiber end is terminated with a Small Form-factor Pluggable (SFP) transceiver device.
The companies working to define the specification include Ericsson AB, Huawei Technologies Co. Ltd., NEC Corporation and Nokia.
See also
Open Base Station Architecture Initiative (OBSAI)
Remote radio head (RRH)
References
External links
CPRI Homepage
CPRI specification (free) at CPRI homepage
Radio technology | Common Public Radio Interface | [
"Technology",
"Engineering"
] | 249 | [
"Information and communications technology",
"Telecommunications engineering",
"Radio technology"
] |
9,544,343 | https://en.wikipedia.org/wiki/Self-diffusion | Self-diffusion describes the diffusive motions of molecules within themselves e.g. the movement of a water molecule in water. According to the IUPAC definition, the self-diffusion coefficient of medium is the diffusion coefficient of a chemical species in said medium when the concentration of this species is extrapolated to zero concentration. It can be described by the equation:
Here, is the activity of the medium in the solution and is the concentration of medium . Due to challenges observing it directly it is commonly assumed to be equal to the diffusion of an isotope in the medium of interest. However modern simulations are able to estimate it directly without the need for isotope labeling.
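Since modern simulations estimate the self-diffusion coefficient directly, a common route is the Einstein relation, $D = \lim_{t\to\infty} \mathrm{MSD}(t) / (2 d t)$ in $d$ dimensions, applied to particle trajectories. The following is a minimal sketch of that estimate, not taken from the source; the trajectory shape, time step, and line-fit choices are illustrative assumptions:

```python
import numpy as np

def self_diffusion_coefficient(traj, dt, dim=3):
    """Estimate the self-diffusion coefficient via the Einstein relation.

    traj: unwrapped positions, shape (n_frames, n_particles, dim)
    dt:   time between frames
    At long times, MSD(t) ~ 2 * dim * D * t.
    """
    n_frames = traj.shape[0]
    lags = np.arange(1, n_frames)
    msd = np.array([
        np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=-1))
        for lag in lags
    ])
    t = lags * dt
    half = len(t) // 2  # fit only the long-time half of the MSD curve
    slope = np.sum(msd[half:] * t[half:]) / np.sum(t[half:] ** 2)
    return slope / (2 * dim)

# Toy check against a random walk with known D = 1.0
rng = np.random.default_rng(0)
dt, D = 0.01, 1.0
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(500, 200, 3))
print(self_diffusion_coefficient(np.cumsum(steps, axis=0), dt))  # ~1.0
```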
See also
Brownian motion
Diffusion
Molecular diffusion
References
Diffusion | Self-diffusion | [
"Physics",
"Chemistry"
] | 146 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion"
] |
9,544,968 | https://en.wikipedia.org/wiki/Coulomb%20damping | Coulomb damping is a type of constant mechanical damping in which the system's kinetic energy is absorbed via sliding friction (the friction generated by the relative motion of two surfaces that press against each other). Coulomb damping is a common damping mechanism that occurs in machinery.
History
Coulomb damping was so named because Charles-Augustin de Coulomb carried out extensive research in mechanics. He later published a work on friction in 1781 entitled "Theory of Simple Machines" for an Academy of Sciences contest. Coulomb then gained much fame for his work with electricity and magnetism.
Modes of Coulombian friction
Coulomb damping absorbs energy with friction, which converts kinetic energy into thermal energy, i.e. heat. Coulomb friction considers this under two distinct modes: either static or kinetic.
Static friction occurs when two objects are not in relative motion, e.g. if both are stationary. The force exerted between the objects does not exceed, in magnitude, the product of the normal force $N$ and the coefficient of static friction $\mu_s$:
$|F| \leq \mu_s N.$
Kinetic friction, on the other hand, occurs when two objects are undergoing relative motion, as they slide against each other. The force exerted between the moving objects is equal in magnitude to the product of the normal force $N$ and the coefficient of kinetic friction $\mu_k$:
$|F| = \mu_k N.$
Regardless of the mode, friction always acts to oppose the objects' relative motion. The normal force is taken perpendicularly to the direction of relative motion; under the influence of gravity, and in the common case of an object supported by a horizontal surface, the normal force is just the weight of the object itself.
As there is no relative motion under static friction, no work is done, and hence no energy can be dissipated. An oscillating system is (by definition) only dampened via kinetic friction.
Illustration
Consider a block of mass $m$ that slides over a rough horizontal surface under the restraint of a spring with a spring constant $k$. The spring is attached to the block and mounted to an immobile object on the other end, allowing the block to be moved by the force of the spring
$F_s = -kx,$
where $x$ is the horizontal displacement of the block from the position at which the spring is unstretched. On a horizontal surface, the normal force is constant and equal to the weight of the block by Newton's third law, i.e.
$N = mg.$
As stated earlier, the kinetic friction force $F_k = \mu_k N$ acts to oppose the motion of the block. Once in motion, the block will oscillate horizontally back and forth around the equilibrium. Newton's second law states that the equation of motion of the block is
$m\ddot{x} = -kx - \mu_k m g \,\operatorname{sgn}(\dot{x}).$
Above, $\dot{x}$ and $\ddot{x}$ respectively denote the velocity and acceleration of the block. Note that the sign of the kinetic friction term depends on $\dot{x}$, the direction the block is travelling in, but not on the speed.
A real-life example of Coulomb damping occurs in large structures with non-welded joints such as airplane wings.
Theory
Coulomb damping dissipates energy constantly because of sliding friction. The magnitude of sliding friction is a constant value, independent of surface area, displacement or position, and velocity. The system undergoing Coulomb damping is periodic or oscillating and restrained by the sliding friction. Essentially, the object in the system is vibrating back and forth around an equilibrium point. A system being acted upon by Coulomb damping is nonlinear because the frictional force always opposes the direction of motion of the system, as stated earlier. And because there is friction present, the amplitude of the motion decreases or decays with time. Under the influence of Coulomb damping, the amplitude decays linearly with a slope of $-\frac{2\mu_k g}{\pi \omega_n}$, where $\omega_n$ is the natural frequency (equivalently, the amplitude drops by $\frac{4\mu_k m g}{k}$ each full cycle). The natural frequency is the number of times the system oscillates within a fixed time interval in an undamped system. Note also that the frequency and the period of vibration do not change when the damping is constant, as in the case of Coulomb damping. The period $\tau$ is the amount of time between the repetition of phases during vibration. As time progresses, the sliding object slows and the distance it travels during these oscillations becomes smaller until it reaches zero, the equilibrium point. The position where the object stops, or its equilibrium position, could potentially be at a completely different position than when initially at rest because the system is nonlinear. Linear systems have only a single equilibrium point.
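To illustrate the linear amplitude decay described above, here is a minimal numerical sketch, not from the source: the mass, spring constant, friction coefficient, and time step are hypothetical, and the static friction coefficient is assumed roughly equal to the kinetic one for the stopping condition.

```python
import numpy as np

m, k, mu_k, g = 1.0, 100.0, 0.05, 9.81  # hypothetical parameters (SI units)
dt = 1e-4                               # integration time step (s)
x, v = 0.5, 0.0                         # initial displacement and velocity

amplitudes, prev_v = [], 0.0
for _ in range(int(20 / dt)):
    # Equation of motion: m*x'' = -k*x - mu_k*m*g*sgn(x')
    a = (-k * x - mu_k * m * g * np.sign(v)) / m
    v += a * dt                         # semi-implicit Euler step
    x += v * dt
    if prev_v * v < 0:                  # velocity sign change = turning point
        amplitudes.append(abs(x))
    prev_v = v
    # Stop once the spring force can no longer overcome friction
    if abs(v) < 1e-4 and abs(k * x) <= mu_k * m * g:
        break

print([round(A, 4) for A in amplitudes[:6]])
print("predicted drop per half cycle:", 2 * mu_k * m * g / k)
```

Successive turning-point amplitudes shrink by the same fixed amount each half cycle, the signature of linear (rather than exponential) decay.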
See also
Dry friction
Viscous damping
References
External links
Friction (Archived 2009-10-31) - Microsoft Encarta Online Encyclopedia 2006
Coulomb Damping - Science and Engineering Encyclopedia
Mechanical vibrations | Coulomb damping | [
"Physics",
"Engineering"
] | 933 | [
"Structural engineering",
"Mechanics",
"Mechanical vibrations"
] |
9,545,934 | https://en.wikipedia.org/wiki/Internality | An internality is the long-term benefit or cost to an individual that they do not consider when making the decision to consume a good or service. One way this is related to behavioral economics is by means of the concept of hyperbolic discounting, in which immediate consequences of a decision are disproportionately weighed compared to the future consequences. A potential cause is lack of access to full information regarding the associated costs and benefits prior to consumption. This contrasts with traditional economic theory, which makes the assumption that individuals are rational decision makers who take all personal costs into account when paying for goods and services.
One example of a positive internality is the long-run effect of exercising, if this is not taken into account when deciding whether to exercise. Future benefits that an individual may not take into consideration include a diminished risk of heart disease and higher bone density. A common example of a potential negative internality is the effect of smoking cigarettes on those who smoke. For the effect of secondhand smoke, see externality. Statistically, 80% of smokers want to quit, and 54% of people who are serious about quitting fail in a week or less. This implies that they do not act in their long-term best interest due to short-term discomfort. This is also known as the self-control problem, an inability to control short-term consumption to optimize long-term consumption. Smokers also may inflict an internality on themselves due to a lack of information on the issue or myopia.
If the demand for cigarettes has a high price elasticity of demand, which evidence seems to suggest, the government can combat the negative internality by raising taxes. It is important to note that elasticity might change based on location and knowledge about the harmful health effects of smoking. In traditional economic theory, a tax diminishes the welfare of the poor because the tax burden shifts to low-income communities, as fewer can afford the good (cigarettes), and horizontal equity is distorted. However, behavioral economic theory suggests that the tax is not regressive if low-income communities have higher (healthcare) costs and more price sensitivity than individuals with higher incomes. Taxes imposed to combat internalities are most effective when they target a specific good. A tax on junk food could apply to a large variety of goods that are widely consumed, and the cost of the tax might be perceived as more detrimental than beneficial for society. Another concern with instituting this type of tax is its potential to be regressive, meaning it takes the most money from those with the least resources. For example, a tax on sugary-sweetened beverages corrects an internality, but it is also regressive, as it has been shown that people with lower incomes spend more on sugary-sweetened beverages. However, it has also been shown that people who consume the most sugary-sweetened beverages have the least knowledge of the health effects and thus the largest internalities, so the tax may end up not harming lower-income people but benefiting them the most. A major issue with creating effective legislation against negative internalities is that the tax imposed should reflect only the cost that individuals do not factor into their consumption decisions. The difficulty in measuring individual knowledge is an obstacle to developing new policies. Another point of concern is that the group benefitting from the tax, such as smokers who want to quit, must be sizable enough to offset any backlash from tobacco companies and lobbyists.
In the corresponding supply and demand graphs, D' and S' are the demand and supply curves that would result if producers and consumers took all external costs (EC) into consideration. The tax attempting to correct the internality should be set equal to the difference between D and D' at the optimal quantity, which is the unmeasured internal cost (IC).
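As a hypothetical numeric illustration of setting such a tax (the curves below are invented for exposition and do not come from the source):

```latex
% D ignores the internality; D' accounts for it (hypothetical numbers).
\begin{align*}
  D:\quad  P &= 10 - Q && \text{(demand as actually expressed)} \\
  D':\quad P &= 8 - Q  && \text{(demand if internal costs were considered)} \\
  IC &= (10 - Q) - (8 - Q) = 2 && \text{(unmeasured internal cost per unit)} \\
  t  &= IC = 2 && \text{(corrective tax per unit)}
\end{align*}
```

With the tax in place, consumers behave as if they faced D', so consumption falls to the quantity they would have chosen with full information.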
Increasing access to information about the costs of consuming a particular good, such as cigarettes, junk food, or sugar-sweetened beverages, is especially important. This allows people to know the costs of their actions and whether they choose to act on this knowledge is their rational decision. As a result of this, in cases where products or goods are not banned, increasing access to information may not necessarily be useful for individuals with a self-control problem.
References
Consumer behaviour
Behavioral economics | Internality | [
"Biology"
] | 870 | [
"Behavior",
"Behavioral economics",
"Consumer behaviour",
"Behaviorism",
"Human behavior"
] |
9,546,237 | https://en.wikipedia.org/wiki/Irone | Irones are a group of methylionone odorants used in perfumery, derived from iris oil, e.g. orris root. The most commercially important of these are:
(-)-cis-γ-irone, and
(-)-cis-α-irone
Irones form through slow oxidation of triterpenoids in dried rhizomes of the iris species Iris pallida. Irones typically have a sweet, floral, iris-like, woody, ionone odor.
See also
Ionone
References
External links
Structure - Odor Relationships
Perfume ingredients
Ketones | Irone | [
"Chemistry"
] | 120 | [
"Ketones",
"Functional groups"
] |
9,547,298 | https://en.wikipedia.org/wiki/Euprymna%20scolopes |
Euprymna scolopes, also known as the Hawaiian bobtail squid, is a species of bobtail squid in the family Sepiolidae native to the central Pacific Ocean, where it occurs in shallow coastal waters off the Hawaiian Islands and Midway Island. The type specimen was collected off the Hawaiian Islands and is located at the National Museum of Natural History in Washington, D.C.
Euprymna scolopes grows to in mantle length. Hatchlings weigh and mature in 80 days. Adults weigh up to .
In the wild, E. scolopes feeds on species of shrimp, including Halocaridina rubra, Palaemon debilis, and Palaemon pacificus. In the laboratory, E. scolopes has been reared on a varied diet of animals, including mysids (Anisomysis sp.), brine shrimp (Artemia salina), mosquitofish (Gambusia affinis), prawns (Leander debilis), and octopuses (Octopus cyanea).
The Hawaiian monk seal (Monachus schauinslandi) preys on E. scolopes in northwestern Hawaiian waters.
On June 3, 2021, SpaceX CRS-22 launched E. scolopes, along with tardigrades, to the International Space Station. The squid were launched as hatchlings and will be studied to see if they can incorporate their symbiotic bacteria Vibrio fischeri into their light organ while in space.
Symbiosis
Euprymna scolopes lives in a symbiotic relationship with the bioluminescent bacterium Aliivibrio fischeri, which inhabits a special light organ in the squid's mantle. To allow this symbiotic relationship, the Crumbs protein must first induce apoptosis, which kills superficial epithelial tissue found in Euprymna scolopes. Apoptosis then helps create crypt epithelial cells, and these cells directly take in the bioluminescent bacteria received from Aliivibrio fischeri. The bacteria are fed a sugar and amino acid solution by the squid and, in return, hide the squid's silhouette when viewed from below by matching the amount of light hitting the top of the mantle (counter-illumination). E. scolopes serves as a model organism for animal-bacterial symbiosis and its relationship with A. fischeri has been carefully studied.
Acquisition
The bioluminescent bacterium, A. fischeri, is horizontally transmitted throughout the E. scolopes population. Hatchlings lack these necessary bacteria and must carefully select for them in a marine world saturated with other microorganisms.
To effectively capture these cells, E. scolopes secretes mucus in response to peptidoglycan (a major cell wall component of bacteria). The mucus inundates the ciliated fields in the immediate area around the six pores of the light organ and captures a large variety of bacteria. However, by some unknown mechanism, A. fischeri is able to outcompete other bacteria in the mucus.
As A. fischeri cells aggregate in the mucus, they must use their flagella to migrate through the pores and down into the ciliated ducts of the light organ and endure another barrage of host factors meant to ensure only A. fischeri colonization. Besides the relentless host-derived current that forces motility-challenged bacteria out of the pores, a number of reactive oxygen species make the environment unbearable. Squid halide peroxidase is the main enzyme responsible for crafting this microbiocidal environment, using hydrogen peroxide as a substrate, but A. fischeri has evolved a counterattack. A. fischeri possesses a periplasmic catalase that captures hydrogen peroxide before it can be used by the squid halide peroxidase, thus inhibiting the enzyme indirectly. Once through these ciliated ducts, A. fischeri cells swim on towards the antechamber, a large epithelial-lined space, and colonize the narrow epithelial crypts.
The bacteria thrive on the host-derived amino acids and sugars in the antechamber and quickly fill the crypt spaces within 10 to 12 hours after hatching.
Ongoing relationship
Every second, a juvenile squid ventilates ambient seawater through its mantle cavity. Only a single A. fischeri cell, about one-millionth of the total volume, is present with each ventilation.
The increased amino acids and sugars feed the metabolically demanding bioluminescence of the A. fischeri, and in 12 hours, the bioluminescence peaks and the juvenile squid is able to counterilluminate less than a day after hatching. Bioluminescence demands a substantial amount of energy from a bacterial cell. It is estimated to demand 20% of a cell's metabolic potential.
Nonluminescent strains of A. fischeri would have a definite competitive advantage over the luminescent wild-type; however, nonluminescent mutants are never found in the light organ of E. scolopes. In fact, experimental procedures have shown that removing the genes responsible for light production in A. fischeri drastically reduces colonization efficiency. Luminescent cells, with functioning luciferase, may have a higher affinity for oxygen than the peroxidases do, thereby negating the toxic effects of the peroxidases. For this reason, bioluminescence is thought to have evolved as an ancient oxygen detoxification mechanism in bacteria.
Venting
Despite all the effort that goes into obtaining luminescent A. fischeri, the host squid jettisons most of the cells daily. This process, known as "venting", is responsible for the disposal of up to 95% of A. fischeri in the light organ every morning at dawn. The bacteria gain no benefit from this behavior and the upside for the squid itself is not clearly understood. One reasonable explanation points to the large energy expenditure in maintaining a colony of bioluminescent bacteria.
During the day when the squid are inactive and hidden, bioluminescence is unnecessary, and expelling the A. fischeri conserves energy. Another, more evolutionarily important reason may be that daily venting ensures selection for A. fischeri that have evolved specificity for a particular host, but can survive outside of the light organ.
Since A. fischeri is transmitted horizontally in E. scolopes, maintaining a stable population of them in the open ocean is essential in supplying future generations of squid with functioning light organs.
Light organ
The light organ has an electrical response when stimulated by light, which suggests the organ functions as a photoreceptor that enables the host squid to respond to A. fischeri's luminescence.
Extraocular vesicles collaborate with the eyes to monitor the down-welling light and light created from counterillumination, so as the squid moves to various depths, it can maintain the proper level of output light. Acting on this information, the squid can then adjust the intensity of the bioluminescence by modifying the ink sac, which functions as a diaphragm around the light organ. Furthermore, the light organ contains a network of unique reflector and lens tissues that help reflect and focus the light ventrally through the mantle.
The light organ of embryonic and juvenile squids has a striking anatomical similarity to an eye and expresses several genes similar to those involved in eye development in mammalian embryos (e.g. eya, dac) which indicate that squid eyes and squid light organs may be formed using the same developmental "toolkit".
As the down-welling light increases or decreases, the squid is able to adjust luminescence accordingly, even over multiple cycles of light intensity.
See also
Reflectin
References
Further reading
Callaerts, P., P.N. Lee, B. Hartmann, C. Farfan, D.W.Y. Choy, K. Ikeo, K.F. Fischbach, W.J. Gehring & G. de Couet 2002. PNAS 99(4): 2088–2093.
External links
The Light-Organ Symbiosis of Vibrio fischeri and the Hawaiian squid, Euprymna scolopes
Mutualism of the Month: Hawai‘ian bobtail squid
Bobtail squid
Bioluminescent molluscs
Molluscs of Hawaii
Endemic fauna of Hawaii
Cephalopods described in 1913
Symbiosis
Taxa named by Samuel Stillman Berry
Space-flown life | Euprymna scolopes | [
"Biology"
] | 1,773 | [
"Biological interactions",
"Behavior",
"Symbiosis",
"Space-flown life"
] |
9,548,739 | https://en.wikipedia.org/wiki/Feynman%20Prize%20in%20Nanotechnology | The Feynman Prize in Nanotechnology is an award given by the Foresight Institute for significant advances in nanotechnology. Two prizes are awarded annually, in the categories of experimental and theoretical work. There is also a separate challenge award for making a nanoscale robotic arm and 8-bit adder.
Overview
The Feynman Prize consists of annual prizes in experimental and theory categories, as well as a one-time challenge award. They are awarded by the Foresight Institute, a nanotechnology advocacy organization. The prizes are named in honor of physicist Richard Feynman, whose 1959 talk There's Plenty of Room at the Bottom is considered by nanotechnology advocates to have inspired and informed the start of the field of nanotechnology.
The annual Feynman Prize in Nanotechnology is awarded for pioneering work in nanotechnology, towards the goal of constructing atomically precise products through molecular machine systems. Input on prize candidates comes from both Foresight Institute personnel and outside academic and commercial organizations. The awardees are selected mainly by an annually changing body of former winners and other academics. The prize is considered prestigious, and authors of one study considered it to be reasonably representative of notable research in the parts of nanotechnology under its scope.
The separate Feynman Grand Prize is a $250,000 challenge award to the first persons to create both a nanoscale robotic arm capable of precise positional control, and a nanoscale 8-bit adder, conforming to given specifications. It is intended to stimulate the field of molecular nanotechnology.
History
The Feynman Prize was instituted in the context of Foresight Institute co-founder K. Eric Drexler's advocacy of funding for molecular manufacturing. The prize was first given in 1993. Before 1997, one prize was given biennially. From 1997 on, two prizes were given each year in theory and experimental categories. By awarding these prizes early in the history of the field, the prize increased awareness of nanotechnology and influenced its direction.
The Grand Prize was announced in 1995 at the Fourth Foresight Conference on Molecular Nanotechnology and was sponsored by James Von Ehr and Marc Arnold. In 2004, X-Prize Foundation founder Peter Diamandis was selected to chair the Feynman Grand Prize committee.
Recipients
Single prize
Experimental category
Theory category
See also
Kavli Prize in Nanoscience
IEEE Pioneer Award in Nanotechnology
ISNSCE Nanoscience Award
UPenn NBIC Award for Research Excellence in Nanotechnology
List of physics awards
References
External links
Nanotechnology
Awards established in 1993
Academic awards
Challenge awards
Science and technology awards
American science and technology awards | Feynman Prize in Nanotechnology | [
"Materials_science",
"Engineering"
] | 537 | [
"Nanotechnology",
"Materials science"
] |
9,548,777 | https://en.wikipedia.org/wiki/Vulcan%20Street%20Plant | The Vulcan Street Plant was the first Edison hydroelectric central station. The plant was built on the Fox River in Appleton, Wisconsin, and put into operation on September 30, 1882. According to the American Society of Mechanical Engineers, the Vulcan Street plant is considered to be "the first hydro-electric central station to serve a system of private and commercial customers in North America". It is a National Historic Mechanical Engineering Landmark, an IEEE milestone and a National Historic Civil Engineering Landmark.
The Vulcan Street Plant was housed in the Appleton Paper and Pulp Company building, which burned to the ground in 1891. A replica of the Vulcan Street Plant was later built on South Oneida Street.
Origin
The Vulcan Street Plant was conceptualized by H. J. Rogers, who was the president of the Appleton Paper and Pulp Co. and of the Appleton Gas Light Co. during this time. According to the Institute of Electrical and Electronics Engineers, H. J. Rogers first came up with the idea for a hydro-electric central station after talking with a friend of his, H. E. Jacobs, while they were on a fishing trip.
The Appleton Edison Electric Light Company
H. E. Jacobs, who was working for Western Edison Light Company of Chicago as a licensing agent, informed H. J. Rogers about Thomas Edison’s plan for a steam-driven electric power plant in New York City called the Pearl Street Plant. Upon learning about Edison’s advances in electric light technology and electric generators, Rogers worked to bring together a group of investors to create one of the first hydro-electric central stations in the world. For this reason, the Appleton Edison Electric Light Company was formed and incorporated on May 25, 1882.
While Edison’s Pearl Street Plant was still under construction, the founders of the Appleton Edison Electric Light Company – H. E. Jacobs, A. L. Smith, H. D. Smith, and Charles Beveridge – began planning the Vulcan Street Plant.
In July 1882, engineer P. D. Johnston, who worked for Western Edison Light Company of Chicago during this time, visited Appleton to explain the details of Edison’s lighting system to the founders of the Appleton Edison Electric Light Company. After this meeting, the founders decided to test the viability of hydro-electric lighting by first installing it in their homes and mills.
As a result, two Edison "K" type generators were ordered. The first generator was installed in H. J. Rogers' paper mill, the Appleton Paper and Pulp Company, and is the generator that began operation on September 30, 1882. The second generator was installed in its own building on Vulcan Street and began operation on November 25, 1882.
Problems and successes
On September 27, 1882, the first generator began operation, but without success. Hence, Edward T. Ames, the installer, returned to Appleton to correct the problem.
After a few days of troubleshooting, the generator was repaired and successfully entered operation on September 30, 1882. This was only 26 days after Thomas Edison began to successfully operate his steam-driven Pearl Street Plant in New York, which began operation on September 4, 1882. The output of the original generator was about 12.5 kilowatts.
The first buildings to be lit by the Vulcan Street Plant were H.J. Rogers' home, the Appleton Paper and Pulp Company building, and the Vulcan Paper Mill, which were all connected directly to the generator.
Initially, the buildings' direct connection to the generator caused many problems because the generator was directly connected to the waterwheel. The water from the Fox River did not flow at a constant rate, so the lights did not maintain constant brightness and often burned out.
This problem was resolved by moving the generator to a lean-to off the main building, where it was attached to a separate water wheel that allowed for a more even load distribution.
During the time of the Vulcan Street Plant, voltage regulators did not exist. Operators had to look at the light itself to determine if it was at the proper brightness, and they adjusted the voltage according to their observations. Electricity meters did not exist at that time, so customers were charged a flat monthly fee based on the number of electric lamps installed in their building. Hence, many people left their lights on all night.
The original electric distribution lines in Appleton were made of bare copper. This posed many challenges in the early development of commercial electricity, because nearly everything was made of wood or other flammable materials. The wiring used in buildings was insulated by a thin layer of cotton and was fastened to walls using wood cleats. Likewise, wood was used for fuse boxes, light sockets, and switch handles.
Appleton's first electrically lit buildings
H. J. Rogers' home, which has been converted to be the Hearthstone Historic House Museum, is one of the few surviving examples of wiring and lighting fixtures from the dawn of the electrical age. The Vulcan Street Plant and the Appleton Paper and Pulp Company building burned to the ground in 1891, and the Vulcan Paper Mill was dismantled in 1908.
After the Vulcan Street Plant was destroyed by fire, an exact replica was built on South Oneida Street and was opened to the public on September 30, 1932. According to the minutes taken at the Appleton Historic Preservation Committee meeting on October 21, 2008, the replica of the Vulcan Street Plant was, "... painstakingly constructed duplicating all of the building's original features."
This site was dedicated as an ASME National Historic Engineering Landmark, jointly designated with ASCE and IEEE on September 15, 1977.
See also
War of the currents
Samuel Insull
References
Energy infrastructure completed in 1882
Buildings and structures in Appleton, Wisconsin
Hydroelectric power plants in Wisconsin
Historic Civil Engineering Landmarks | Vulcan Street Plant | [
"Engineering"
] | 1,154 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
9,549,222 | https://en.wikipedia.org/wiki/Galactosylceramide | A galactosylceramide, or galactocerebroside is a type of cerebroside consisting of a ceramide with a galactose residue at the 1-hydroxyl moiety.
The galactose is cleaved by galactosylceramidase.
Galactosylceramide is a marker for oligodendrocytes in the brain, whether or not they form myelin.
See also
Alpha-Galactosylceramide
Krabbe disease
Myelin
References
External links
CHEMBL110111
Glycolipids | Galactosylceramide | [
"Chemistry",
"Biology"
] | 124 | [
"Carbohydrates",
"Biotechnology stubs",
"Glycolipids",
"Biochemistry stubs",
"Biochemistry",
"Glycobiology"
] |
9,549,317 | https://en.wikipedia.org/wiki/OLE%20DB%20for%20OLAP | OLE DB for OLAP (Object Linking and Embedding Database for Online Analytical Processing abbreviated ODBO) is a Microsoft published specification and an industry standard for multi-dimensional data processing. ODBO is the standard application programming interface (API) for exchanging metadata and data between an OLAP server and a client on a Windows platform. ODBO extends the ability of OLE DB to access multi-dimensional (OLAP) data stores.
Description
ODBO is the most widely supported, multi-dimensional API to date. Platform-specific to Microsoft Windows, ODBO was specifically designed for Online Analytical Processing (OLAP) systems by Microsoft as an extension to Object Linking and Embedding Database (OLE DB). ODBO uses Microsoft’s Component Object Model.
ODBO permits independent software vendors (ISVs) and corporate developers to create a single set of standard interfaces that allow OLAP clients to access multi-dimensional data, regardless of vendor or data source. ODBO is currently supported by a wide spectrum of server and client tools.
When exposing the ODBO interface, the underlying multi-dimensional database must also support the MDX Query Language. XML for Analysis is a newer interface to MDX Data Sources that is often supported in parallel with ODBO.
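As a rough sketch of what an ODBO client interaction can look like, here is a minimal example, not from the source: it assumes a Windows machine with the pywin32 package and an installed MSOLAP (ODBO) provider, and the server, catalog, cube, measure, and dimension names are all hypothetical.

```python
import win32com.client

# Classic ADO connection using the MSOLAP (ODBO) provider.
conn = win32com.client.Dispatch("ADODB.Connection")
conn.Open(
    "Provider=MSOLAP;"
    "Data Source=ExampleServer;"        # hypothetical OLAP server
    "Initial Catalog=ExampleWarehouse;" # hypothetical catalog
)

# MDX addresses cells in a cube by placing members on each axis.
mdx = (
    "SELECT {[Measures].[Sales Amount]} ON COLUMNS, "
    "[Date].[Calendar Year].MEMBERS ON ROWS "
    "FROM [ExampleCube]"                # hypothetical cube name
)

# Classic ADO flattens the multidimensional result into a recordset;
# win32com returns (recordset, records_affected) from Execute.
rs = conn.Execute(mdx)[0]
while not rs.EOF:
    print([rs.Fields(i).Value for i in range(rs.Fields.Count)])
    rs.MoveNext()
conn.Close()
```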
See also
XML for Analysis
References
External links
Microsoft – Developed ODBO standard
MSDN – Multidimensional Expressions Reference
The OLAP Report – Independent research resource for organizations buying and implementing OLAP applications
Computer programming
Online analytical processing | OLE DB for OLAP | [
"Technology",
"Engineering"
] | 308 | [
"Software engineering",
"Computer programming",
"Computers"
] |
9,550,030 | https://en.wikipedia.org/wiki/History%20of%20algebra | Algebra can essentially be considered as doing computations similar to those of arithmetic but with non-numerical mathematical objects. However, until the 19th century, algebra consisted essentially of the theory of equations. For example, the fundamental theorem of algebra belongs to the theory of equations and is not, nowadays, considered as belonging to algebra (in fact, every proof must use the completeness of the real numbers, which is not an algebraic property).
This article describes the history of the theory of equations, referred to in this article as "algebra", from the origins to the emergence of algebra as a separate area of mathematics.
Etymology
The word "algebra" is derived from the Arabic word , and this comes from the treatise written in the year 830 by the medieval Persian mathematician, Al-Khwārizmī, whose Arabic title, Kitāb al-muḫtaṣar fī ḥisāb al-ğabr wa-l-muqābala, can be translated as The Compendious Book on Calculation by Completion and Balancing. The treatise provided for the systematic solution of linear and quadratic equations. According to one history, "[i]t is not certain just what the terms al-jabr and muqabalah mean, but the usual interpretation is similar to that implied in the previous translation. The word 'al-jabr' presumably meant something like 'restoration' or 'completion' and seems to refer to the transposition of subtracted terms to the other side of an equation; the word 'muqabalah' is said to refer to 'reduction' or 'balancing'—that is, the cancellation of like terms on opposite sides of the equation. Arabic influence in Spain long after the time of al-Khwarizmi is found in Don Quixote, where the word 'algebrista' is used for a bone-setter, that is, a 'restorer'." The term is used by al-Khwarizmi to describe the operations that he introduced, "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation.
Stages of algebra
Algebraic expression
Algebra did not always make use of the symbolism that is now ubiquitous in mathematics; instead, it went through three distinct stages. The stages in the development of symbolic algebra are approximately as follows:
Rhetorical algebra, in which equations are written in full sentences. For example, the rhetorical form of x + 1 = 2 is "The thing plus one equals two" or possibly "The thing plus 1 equals 2". Rhetorical algebra was first developed by the ancient Babylonians and remained dominant up to the 16th century.
Syncopated algebra, in which some symbolism is used, but which does not contain all of the characteristics of symbolic algebra. For instance, there may be a restriction that subtraction may be used only once within one side of an equation, which is not the case with symbolic algebra. Syncopated algebraic expression first appeared in Diophantus' Arithmetica (3rd century AD), followed by Brahmagupta's Brahma Sphuta Siddhanta (7th century).
Symbolic algebra, in which full symbolism is used. Early steps toward this can be seen in the work of several Islamic mathematicians such as Ibn al-Banna (13th–14th centuries) and al-Qalasadi (15th century), although fully symbolic algebra was developed by François Viète (16th century). Later, René Descartes (17th century) introduced the modern notation (for example, the use of x—see below) and showed that the problems occurring in geometry can be expressed and solved in terms of algebra (Cartesian geometry).
Equally important as the use or lack of symbolism in algebra was the degree of the equations that were addressed. Quadratic equations played an important role in early algebra; and throughout most of history, until the early modern period, all quadratic equations were classified as belonging to one of three categories.
x^2 + px = q, x^2 = px + q, x^2 + q = px,
where p and q are positive.
This trichotomy comes about because quadratic equations of the form x^2 + px + q = 0, with p and q positive, have no positive roots: for x > 0 every term on the left side is positive, so the left side cannot vanish.
In between the rhetorical and syncopated stages of symbolic algebra, a geometric constructive algebra was developed by classical Greek and Vedic Indian mathematicians in which algebraic equations were solved through geometry. For instance, an equation of the form x^2 = A was solved by finding the side of a square of area A.
Conceptual stages
In addition to the three stages of expressing algebraic ideas, some authors recognized four conceptual stages in the development of algebra that occurred alongside the changes in expression. These four stages were as follows:
Geometric stage, where the concepts of algebra are largely geometric. This dates back to the Babylonians and continued with the Greeks, and was later revived by Omar Khayyám.
Static equation-solving stage, where the objective is to find numbers satisfying certain relationships. The move away from the geometric stage dates back to Diophantus and Brahmagupta, but algebra did not decisively move to the static equation-solving stage until Al-Khwarizmi introduced generalized algorithmic processes for solving algebraic problems.
Dynamic function stage, where motion is an underlying idea. The idea of a function began emerging with Sharaf al-Dīn al-Tūsī, but algebra did not decisively move to the dynamic function stage until Gottfried Leibniz.
Abstract stage, where mathematical structure plays a central role. Abstract algebra is largely a product of the 19th and 20th centuries.
Babylon
The origins of algebra can be traced to the ancient Babylonians, who developed a positional number system that greatly aided them in solving their rhetorical algebraic equations. The Babylonians were not interested in exact solutions, but rather approximations, and so they would commonly use linear interpolation to approximate intermediate values. One of the most famous tablets is the Plimpton 322 tablet, created around 1900–1600 BC, which gives a table of Pythagorean triples and represents some of the most advanced mathematics prior to Greek mathematics.
Babylonian algebra was much more advanced than the Egyptian algebra of the time; whereas the Egyptians were mainly concerned with linear equations, the Babylonians were more concerned with quadratic and cubic equations. The Babylonians had developed flexible algebraic operations with which they were able to add equals to equals and multiply both sides of an equation by like quantities so as to eliminate fractions and factors. They were familiar with many simple forms of factoring, three-term quadratic equations with positive roots, and many cubic equations, although it is not known if they were able to reduce the general cubic equation.
Ancient Egypt
Ancient Egyptian algebra dealt mainly with linear equations while the Babylonians found these equations too elementary, and developed mathematics to a higher level than the Egyptians.
The Rhind Papyrus, also known as the Ahmes Papyrus, is an ancient Egyptian papyrus written c. 1650 BC by Ahmes, who transcribed it from an earlier work that he dated to between 2000 and 1800 BC. It is the most extensive ancient Egyptian mathematical document known to historians. The Rhind Papyrus contains problems where linear equations of the form x + ax = b and x + ax + bx = c are solved, where a, b, and c are known and x, which is referred to as "aha" or heap, is the unknown. The solutions were possibly, but not likely, arrived at by using the "method of false position", or regula falsi, where first a specific value is substituted into the left hand side of the equation, then the required arithmetic calculations are done, thirdly the result is compared to the right hand side of the equation, and finally the correct answer is found through the use of proportions. In some of the problems the author "checks" his solution, thereby writing one of the earliest known simple proofs.
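To make the method concrete, here is a worked instance of regula falsi in the Rhind style (the specific numbers are illustrative rather than quoted from the papyrus). To solve x + x/4 = 15, guess x = 4:
\[
4 + \tfrac{4}{4} = 5, \qquad \frac{15}{5} = 3, \qquad x = 3 \cdot 4 = 12,
\]
and indeed 12 + 12/4 = 15; the false guess is simply rescaled by proportion.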
Greek mathematics
It is sometimes alleged that the Greeks had no algebra, but this is disputed. By the time of Plato, Greek mathematics had undergone a drastic change. The Greeks created a geometric algebra where terms were represented by sides of geometric objects, usually lines, that had letters associated with them, and with this new form of algebra they were able to find solutions to equations by using a process that they invented, known as "the application of areas". "The application of areas" is only a part of geometric algebra and it is thoroughly covered in Euclid's Elements.
An example of geometric algebra would be solving the linear equation ax = bc. The ancient Greeks would solve this equation by looking at it as an equality of areas rather than as an equality between the ratios a : b and c : x. The Greeks would construct a rectangle with sides of length b and c, then extend a side of the rectangle to length a, and finally they would complete the extended rectangle so as to find the side of the rectangle that is the solution.
Bloom of Thymaridas
Iamblichus in Introductio arithmetica says that Thymaridas (c. 400 BC – c. 350 BC) worked with simultaneous linear equations. In particular, he created the then famous rule that was known as the "bloom of Thymaridas" or as the "flower of Thymaridas", which states that:
If the sum of n quantities be given, and also the sum of every pair containing a particular quantity, then this particular quantity is equal to 1/(n − 2) of the difference between the sums of these pairs and the first given sum.
or using modern notation, the solution of the following system of n linear equations in n unknowns,
x + x_1 + x_2 + ... + x_(n−1) = s
x + x_1 = m_1, x + x_2 = m_2, ..., x + x_(n−1) = m_(n−1)
is,
x = ((m_1 + m_2 + ... + m_(n−1)) − s) / (n − 2).
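A small worked instance of the rule (the numbers are illustrative, not taken from Iamblichus): for n = 4 quantities with total x + x_1 + x_2 + x_3 = 20 and pair sums x + x_1 = 9, x + x_2 = 11, x + x_3 = 12, the rule gives
\[
x = \frac{(9 + 11 + 12) - 20}{4 - 2} = 6,
\]
so that x_1 = 3, x_2 = 5 and x_3 = 6, consistent with the stated total.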
Iamblichus goes on to describe how some systems of linear equations that are not in this form can be placed into this form.
Euclid of Alexandria
Euclid (Greek: Εὐκλείδης) was a Greek mathematician who flourished in Alexandria, Egypt, almost certainly during the reign of Ptolemy I (323–283 BC). Neither the year nor place of his birth have been established, nor the circumstances of his death.
Euclid is regarded as the "father of geometry". His Elements is the most successful textbook in the history of mathematics. Although he is one of the most famous mathematicians in history there are no new discoveries attributed to him; rather he is remembered for his great explanatory skills. The Elements is not, as is sometimes thought, a collection of all Greek mathematical knowledge to its date; rather, it is an elementary introduction to it.
Elements
The geometric work of the Greeks, typified in Euclid's Elements, provided the framework for generalizing formulae beyond the solution of particular problems into more general systems of stating and solving equations.
Book II of the Elements contains fourteen propositions, which in Euclid's time were extremely significant for doing geometric algebra. These propositions and their results are the geometric equivalents of our modern symbolic algebra and trigonometry. Today, using modern symbolic algebra, we let symbols represent known and unknown magnitudes (i.e. numbers) and then apply algebraic operations on them, while in Euclid's time magnitudes were viewed as line segments and then results were deduced using the axioms or theorems of geometry.
Many basic laws of addition and multiplication are included or proved geometrically in the Elements. For instance, proposition 1 of Book II states:
If there be two straight lines, and one of them be cut into any number of segments whatever, the rectangle contained by the two straight lines is equal to the rectangles contained by the uncut straight line and each of the segments.
But this is nothing more than the geometric version of the (left) distributive law, a(b + c + d) = ab + ac + ad; and in Books V and VII of the Elements the commutative and associative laws for multiplication are demonstrated.
Many basic equations were also proved geometrically. For instance, proposition 5 in Book II proves that a^2 − b^2 = (a + b)(a − b), and proposition 4 in Book II proves that (a + b)^2 = a^2 + 2ab + b^2.
Furthermore, there are also geometric solutions given to many equations. For instance, proposition 6 of Book II gives the solution to the quadratic equation ax + x^2 = b^2, and proposition 11 of Book II gives a solution to ax + x^2 = a^2.
Data
Data is a work written by Euclid for use at the schools of Alexandria and it was meant to be used as a companion volume to the first six books of the Elements. The book contains some fifteen definitions and ninety-five statements, of which there are about two dozen statements that serve as algebraic rules or formulas. Some of these statements are geometric equivalents to solutions of quadratic equations. For instance, Data contains the solutions to the equations dx^2 − adx + b^2c = 0 and the familiar Babylonian equations xy = a^2, x ± y = b.
Conic sections
A conic section is a curve that results from the intersection of a cone with a plane. There are three primary types of conic sections: ellipses (including circles), parabolas, and hyperbolas. The conic sections are reputed to have been discovered by Menaechmus (c. 380 BC – c. 320 BC) and since dealing with conic sections is equivalent to dealing with their respective equations, they played geometric roles equivalent to cubic equations and other higher order equations.
Menaechmus knew that in a parabola, the equation y^2 = lx holds, where l is a constant called the latus rectum, although he was not aware of the fact that any equation in two unknowns determines a curve. He apparently derived these properties of conic sections and others as well. Using this information it was now possible to find a solution to the problem of the duplication of the cube by solving for the points at which two parabolas intersect, a solution equivalent to solving a cubic equation.
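A sketch of how intersecting parabolas double the cube, in modern notation (a standard reconstruction rather than a quotation of Menaechmus): to find x with x^3 = 2a^3, intersect the parabolas
\[
x^2 = ay \qquad \text{and} \qquad y^2 = 2ax .
\]
Eliminating y gives x^4 = a^2 y^2 = 2a^3 x, hence x^3 = 2a^3, so the abscissa of the intersection point is the side of the doubled cube.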
We are informed by Eutocius that the method Archimedes used to solve the cubic equation was due to Dionysodorus (250 BC – 190 BC). Dionysodorus solved the cubic by means of the intersection of a rectangular hyperbola and a parabola. This was related to a problem in Archimedes' On the Sphere and Cylinder. Conic sections would be studied and used for thousands of years by Greek, and later Islamic and European, mathematicians. In particular Apollonius of Perga's famous Conics deals with conic sections, among other topics.
China
Chinese mathematics dates to at least 300 BC with the Zhoubi Suanjing, generally considered to be one of the oldest Chinese mathematical documents.
Nine Chapters on the Mathematical Art
Chiu-chang suan-shu or The Nine Chapters on the Mathematical Art, written around 250 BC, is one of the most influential of all Chinese math books and it is composed of some 246 problems. Chapter eight deals with solving determinate and indeterminate simultaneous linear equations using positive and negative numbers, with one problem dealing with solving four equations in five unknowns.
Sea-Mirror of the Circle Measurements
Ts'e-yuan hai-ching, or Sea-Mirror of the Circle Measurements, is a collection of some 170 problems written by Li Zhi (or Li Ye) (1192 – 1279 AD). He used fan fa, or Horner's method, to solve equations of degree as high as six, although he did not describe his method of solving equations.
Mathematical Treatise in Nine Sections
Shu-shu chiu-chang, or Mathematical Treatise in Nine Sections, was written by the wealthy governor and minister Ch'in Chiu-shao (c. 1202 – c. 1261). With the introduction of a method for solving simultaneous congruences, now called the Chinese remainder theorem, it marks the high point in Chinese indeterminate analysis.
Magic squares
The earliest known magic squares appeared in China. In Nine Chapters the author solves a system of simultaneous linear equations by placing the coefficients and constant terms of the linear equations into a magic square (i.e. a matrix) and performing column reducing operations on the magic square. The earliest known magic squares of order greater than three are attributed to Yang Hui (fl. c. 1261 – 1275), who worked with magic squares of order as high as ten.
Precious Mirror of the Four Elements
Ssy-yüan yü-chien《四元玉鑒》, or Precious Mirror of the Four Elements, was written by Chu Shih-chieh in 1303 and it marks the peak in the development of Chinese algebra. The four elements, called heaven, earth, man and matter, represented the four unknown quantities in his algebraic equations. The Ssy-yüan yü-chien deals with simultaneous equations and with equations of degrees as high as fourteen. The author uses the method of fan fa, today called Horner's method, to solve these equations.
The Precious Mirror opens with a diagram of the arithmetic triangle (Pascal's triangle) using a round zero symbol, but Chu Shih-chieh denies credit for it. A similar triangle appears in Yang Hui's work, but without the zero symbol.
There are many summation equations given without proof in the Precious mirror. A few of the summations are:
1^2 + 2^2 + 3^2 + ... + n^2 = n(n + 1)(2n + 1)/3!
1 + 8 + 30 + 80 + ... + n^2(n + 1)(n + 2)/3! = n(n + 1)(n + 2)(n + 3)(4n + 1)/5!
Diophantus
Diophantus was a Hellenistic mathematician who lived c. 250 AD, but the uncertainty of this date is so great that it may be off by more than a century. He is known for having written Arithmetica, a treatise that was originally thirteen books but of which only the first six have survived. Arithmetica is the earliest extant work that solves arithmetic problems by algebra. Diophantus, however, did not invent the method of algebra, which existed before him. Algebra was practiced and diffused orally by practitioners, with Diophantus picking up techniques to solve problems in arithmetic.
In modern algebra a polynomial is a linear combination of powers of a variable x, built from exponentiation, scalar multiplication, addition, and subtraction. The algebra of Diophantus, similar to medieval Arabic algebra, is instead an aggregation of objects of different types with no operations present.
For example, in Diophantus the polynomial "6 4 inverse Powers, 25 Powers lacking 9 units" is a collection of 6 objects of one kind together with 25 objects of a second kind which lack 9 objects of a third kind, with no operations present.
Similar to medieval Arabic algebra, Diophantus uses three stages to solve a problem by algebra:
1) An unknown is named and an equation is set up.
2) The equation is simplified to a standard form (al-jabr and al-muqābala in Arabic).
3) The simplified equation is solved.
Diophantus does not give a classification of equations into six types, as Al-Khwarizmi does, in the extant parts of Arithmetica. He does say that he would give solutions to three-term equations later, so this part of the work is possibly just lost.
In Arithmetica, Diophantus is the first to use symbols for unknown numbers as well as abbreviations for powers of numbers, relationships, and operations; thus he used what is now known as syncopated algebra. The main difference between Diophantine syncopated algebra and modern algebraic notation is that the former lacked special symbols for operations, relations, and exponentials.
So, for example, what we would write as
x^3 − 2x^2 + 10x − 1 = 5
which can be rewritten as
(x^3 + 10x) − (2x^2 + 1) = 5
would be written in Diophantus's syncopated notation as
Κʸ α̅ ς ι̅ ⫛ Δʸ β̅ Μ α̅ ἴσ Μ ε̅
where the symbols represent the following: Κʸ the cube of the unknown (x^3), ς the unknown (x), Δʸ the square of the unknown (x^2), Μ the units, ⫛ "lacking" (subtraction of all terms that follow it), ἴσ "equals", and α̅, β̅, ε̅, ι̅ the Greek numerals 1, 2, 5 and 10.
Unlike in modern notation, the coefficients come after the variables, and addition is represented by the juxtaposition of terms. A literal symbol-for-symbol translation of Diophantus's syncopated equation into a modern symbolic equation would be the following:
x^3 1 x 10 − x^2 2 M 1 = M 5
where, to clarify, if the modern parentheses and plus sign are used then the above equation can be rewritten as:
(x^3 1 + x 10) − (x^2 2 + M 1) = M 5
However the distinction between "rhetorical algebra", "syncopated algebra" and "symbolic algebra" is considered outdated by Jeffrey Oaks and Jean Christianidis. The problems were solved on dust-board using some notation, while in books solution were written in "rhetorical style".
Arithmetica also makes use of the identities:
(a^2 + b^2)(c^2 + d^2) = (ac + bd)^2 + (ad − bc)^2
(a^2 + b^2)(c^2 + d^2) = (ac − bd)^2 + (ad + bc)^2
India
Indian mathematicians were active in studying number systems. The earliest known Indian mathematical documents are dated to around the middle of the first millennium BC (around the 6th century BC).
The recurring themes in Indian mathematics are, among others, determinate and indeterminate linear and quadratic equations, simple mensuration, and Pythagorean triples.
Aryabhata
Aryabhata (476–550) was an Indian mathematician who authored Aryabhatiya. In it he gave the rules
1^2 + 2^2 + ... + n^2 = n(n + 1)(2n + 1)/6
and
1^3 + 2^3 + ... + n^3 = (1 + 2 + ... + n)^2
Brahma Sphuta Siddhanta
Brahmagupta (fl. 628) was an Indian mathematician who authored Brahma Sphuta Siddhanta. In his work Brahmagupta solves the general quadratic equation for both positive and negative roots. In indeterminate analysis Brahmagupta gives the Pythagorean triads, but this is a modified form of an old Babylonian rule that Brahmagupta may have been familiar with. He was the first to give a general solution to the linear Diophantine equation ax + by = c, where a, b, and c are integers. Unlike Diophantus who only gave one solution to an indeterminate equation, Brahmagupta gave all integer solutions; but that Brahmagupta used some of the same examples as Diophantus has led some historians to consider the possibility of a Greek influence on Brahmagupta's work, or at least a common Babylonian source.
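In modern terms, the content of Brahmagupta's general solution is what the extended Euclidean algorithm delivers. The following minimal Python sketch (function names are illustrative, not from any historical source) finds one particular solution of ax + by = c and the step sizes that generate all the others:

def extended_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def diophantine(a, b, c):
    # One particular solution (x0, y0) of a*x + b*y = c, plus the steps:
    # every solution is (x0 + t*(b//g), y0 - t*(a//g)) for integer t.
    g, x, y = extended_gcd(a, b)
    if c % g != 0:
        return None  # no integer solutions exist
    k = c // g
    return (x * k, y * k, b // g, a // g)

print(diophantine(5, 12, 29))  # one solution of 5x + 12y = 29 is (145, -58)

Stepping the parameter t then recovers every integer solution, matching the claim that Brahmagupta gave all of them rather than a single one.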
Like the algebra of Diophantus, the algebra of Brahmagupta was syncopated. Addition was indicated by placing the numbers side by side, subtraction by placing a dot over the subtrahend, and division by placing the divisor below the dividend, similar to our modern notation but without the bar. Multiplication, evolution, and unknown quantities were represented by abbreviations of appropriate terms. The extent of Greek influence on this syncopation, if any, is not known and it is possible that both Greek and Indian syncopation may be derived from a common Babylonian source.
Bhāskara II
Bhāskara II (1114 – c. 1185) was the leading mathematician of the 12th century. In Algebra, he gave the general solution of Pell's equation. He is the author of Lilavati and Vija-Ganita, which contain problems dealing with determinate and indeterminate linear and quadratic equations, and Pythagorean triples, but he fails to distinguish between exact and approximate statements. Many of the problems in Lilavati and Vija-Ganita are derived from other Hindu sources, and so Bhaskara is at his best in dealing with indeterminate analysis.
Bhaskara uses the initial symbols of the names for colors as the symbols of unknown variables. So, for example, what we would write today as
(−x − 1) + (2x − 8) = x − 9
Bhaskara would have written as
. _ .
ya 1 ru 1
.
ya 2 ru 8
.
Sum ya 1 ru 9
where ya indicates the first syllable of the word for black, and ru is taken from the word species. The dots over the numbers indicate subtraction.
Islamic world
The first century of the Islamic Arab Empire saw almost no scientific or mathematical achievements since the Arabs, with their newly conquered empire, had not yet gained any intellectual drive and research in other parts of the world had faded. In the second half of the 8th century, Islam had a cultural awakening, and research in mathematics and the sciences increased. The Muslim Abbasid caliph al-Mamun (809–833) is said to have had a dream where Aristotle appeared to him, and as a consequence al-Mamun ordered that Arabic translation be made of as many Greek works as possible, including Ptolemy's Almagest and Euclid's Elements. Greek works would be given to the Muslims by the Byzantine Empire in exchange for treaties, as the two empires held an uneasy peace. Many of these Greek works were translated by Thabit ibn Qurra (826–901), who translated books written by Euclid, Archimedes, Apollonius, Ptolemy, and Eutocius.
Arabic mathematicians established algebra as an independent discipline, and gave it the name "algebra" (al-jabr). They were the first to teach algebra in an elementary form and for its own sake. There are three theories about the origins of Arabic Algebra. The first emphasizes Hindu influence, the second emphasizes Mesopotamian or Persian-Syriac influence and the third emphasizes Greek influence. Many scholars believe that it is the result of a combination of all three sources.
Throughout their time in power, the Arabs used a fully rhetorical algebra, where often even the numbers were spelled out in words. The Arabs would eventually replace spelled out numbers (e.g. twenty-two) with Arabic numerals (e.g. 22), but the Arabs did not adopt or develop a syncopated or symbolic algebra until the work of Ibn al-Banna, who developed a symbolic algebra in the 13th century, followed by Abū al-Hasan ibn Alī al-Qalasādī in the 15th century.
Al-jabr wa'l muqabalah
The Muslim Persian mathematician Muhammad ibn Mūsā al-Khwārizmī, described as the father or founder of algebra, was a faculty member of the "House of Wisdom" (Bait al-Hikma) in Baghdad, which was established by Al-Mamun. Al-Khwarizmi, who died around 850 AD, wrote more than half a dozen mathematical and astronomical works. One of al-Khwarizmi's most famous books is entitled Al-jabr wa'l muqabalah or The Compendious Book on Calculation by Completion and Balancing, and it gives an exhaustive account of solving polynomials up to the second degree. The book also introduced the fundamental concept of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. This is the operation which Al-Khwarizmi originally described as al-jabr. The name "algebra" comes from the "al-jabr" in the title of his book.
R. Rashed and Angela Armstrong write: "Al-Khwarizmi's text can be seen to be distinct not only from the Babylonian tablets, but also from Diophantus' Arithmetica. It no longer concerns a series of problems to be resolved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study."
Al-Jabr is divided into six chapters, each of which deals with a different type of formula. The first chapter of Al-Jabr deals with equations whose squares equal its roots (ax^2 = bx), the second chapter deals with squares equal to number (ax^2 = c), the third chapter deals with roots equal to a number (bx = c), the fourth chapter deals with squares and roots equal a number (ax^2 + bx = c), the fifth chapter deals with squares and number equal roots (ax^2 + c = bx), and the sixth and final chapter deals with roots and number equal to squares (bx + c = ax^2).
In Al-Jabr, al-Khwarizmi uses geometric proofs, he does not recognize the root x = 0, and he only deals with positive roots. He also recognizes that the discriminant must be positive and described the method of completing the square, though he does not justify the procedure. The Greek influence is shown by Al-Jabr's geometric foundations ("The Algebra of al-Khwarizmi betrays unmistakable Hellenic elements") and by one problem taken from Heron. He makes use of lettered diagrams, but all of the coefficients in all of his equations are specific numbers, since he had no way of expressing with parameters what he could express geometrically; although generality of method is intended.
Al-Khwarizmi most likely did not know of Diophantus's Arithmetica, which became known to the Arabs sometime before the 10th century. And even though al-Khwarizmi most likely knew of Brahmagupta's work, Al-Jabr is fully rhetorical with the numbers even being spelled out in words. So, for example, what we would write as
x^2 + 10x = 39
Diophantus would have written as
Δʸ α̅ ς ι̅ ἴσ Μ λθ̅
And al-Khwarizmi would have written as
One square and ten roots of the same amount to thirty-nine dirhems; that is to say, what must be the square which, when increased by ten of its own roots, amounts to thirty-nine?
Logical Necessities in Mixed Equations
'Abd al-Hamīd ibn Turk authored a manuscript entitled Logical Necessities in Mixed Equations, which is very similar to al-Khwarizmi's Al-Jabr and was published at around the same time as, or even possibly earlier than, Al-Jabr. The manuscript gives exactly the same geometric demonstration as is found in Al-Jabr, and in one case the same example as found in Al-Jabr, and even goes beyond Al-Jabr by giving a geometric proof that if the discriminant is negative then the quadratic equation has no solution. The similarity between these two works has led some historians to conclude that Arabic algebra may have been well developed by the time of al-Khwarizmi and 'Abd al-Hamid.
Abu Kamil and al-Karaji
Arabic mathematicians treated irrational numbers as algebraic objects. The Egyptian mathematician Abū Kāmil Shujā ibn Aslam (c. 850–930) was the first to accept irrational numbers in the form of a square root or fourth root as solutions to quadratic equations or as coefficients in an equation. He was also the first to solve three non-linear simultaneous equations with three unknown variables.
Al-Karaji (953–1029), also known as Al-Karkhi, was the successor of Abū al-Wafā' al-Būzjānī (940–998) and he discovered the first numerical solution to equations of the form ax^(2n) + bx^n = c. Al-Karaji only considered positive roots. He is also regarded as the first person to free algebra from geometrical operations and replace them with the type of arithmetic operations which are at the core of algebra today. His work on algebra and polynomials gave the rules for arithmetic operations to manipulate polynomials. The historian of mathematics F. Woepcke, in Extrait du Fakhri, traité d'Algèbre par Abou Bekr Mohammed Ben Alhacan Alkarkhi (Paris, 1853), praised Al-Karaji for being "the first who introduced the theory of algebraic calculus". Stemming from this, Al-Karaji investigated binomial coefficients and Pascal's triangle.
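The reduction behind such equations is a substitution, sketched here in modern notation (assuming the quadratic-in-x^n reading of the equation form given above):
\[
ax^{2n} + bx^{n} = c \;\xrightarrow{\;y = x^{n}\;}\; ay^{2} + by = c \;\Rightarrow\; y = \frac{-b + \sqrt{b^{2} + 4ac}}{2a}, \qquad x = y^{1/n},
\]
keeping only the positive root, in line with Al-Karaji's restriction to positive roots.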
Omar Khayyám, Sharaf al-Dīn al-Tusi, and al-Kashi
Omar Khayyám (c. 1050 – 1123) wrote a book on Algebra that went beyond Al-Jabr to include equations of the third degree. Omar Khayyám provided both arithmetic and geometric solutions for quadratic equations, but he only gave geometric solutions for general cubic equations since he mistakenly believed that arithmetic solutions were impossible. His method of solving cubic equations by using intersecting conics had been used by Menaechmus, Archimedes, and Ibn al-Haytham (Alhazen), but Omar Khayyám generalized the method to cover all cubic equations with positive roots. He only considered positive roots and he did not go past the third degree. He also saw a strong relationship between geometry and algebra.
In the 12th century, Sharaf al-Dīn al-Tūsī (1135–1213) wrote the Al-Mu'adalat (Treatise on Equations), which dealt with eight types of cubic equations with positive solutions and five types of cubic equations which may not have positive solutions. He used what would later be known as the "Ruffini-Horner method" to numerically approximate the root of a cubic equation. He also developed the concepts of the maxima and minima of curves in order to solve cubic equations which may not have positive solutions. He understood the importance of the discriminant of the cubic equation and used an early version of Cardano's formula to find algebraic solutions to certain types of cubic equations. Some scholars, such as Roshdi Rashed, argue that Sharaf al-Din discovered the derivative of cubic polynomials and realized its significance, while other scholars connect his solution to the ideas of Euclid and Archimedes.
Sharaf al-Din also developed the concept of a function. In his analysis of the equation x^3 + d = bx^2, for example, he begins by changing the equation's form to x^2(b − x) = d. He then states that the question of whether the equation has a solution depends on whether or not the "function" on the left side reaches the value d. To determine this, he finds a maximum value for the function. He proves that the maximum value occurs when x = 2b/3, which gives the functional value 4b^3/27. Sharaf al-Din then states that if this value is less than d, there are no positive solutions; if it is equal to d, then there is one solution at x = 2b/3; and if it is greater than d, then there are two solutions, one between 0 and 2b/3 and one between 2b/3 and b.
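In modern terms, Sharaf al-Din's maximum is a critical-point computation:
\[
f(x) = x^{2}(b - x) = bx^{2} - x^{3}, \qquad f'(x) = 2bx - 3x^{2} = x(2b - 3x) = 0 \;\Rightarrow\; x = \frac{2b}{3}, \qquad f\!\left(\frac{2b}{3}\right) = \frac{4b^{3}}{27},
\]
which is why some scholars read a derivative of cubic polynomials into his work.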
In the early 15th century, Jamshīd al-Kāshī developed an early form of Newton's method to numerically solve the equation x^P − N = 0 and so find roots of N. Al-Kāshī also developed decimal fractions and claimed to have discovered them himself. However, J. Lennart Berggren notes that he was mistaken, as decimal fractions were first used five centuries before him by the Baghdadi mathematician Abu'l-Hasan al-Uqlidisi as early as the 10th century.
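For illustration, the iteration that al-Kāshī's procedure resembles can be written as Newton's method applied to f(x) = x^P − N; this Python sketch is a modern reconstruction with illustrative names, not a transcription of his algorithm:

def pth_root(N, P, x0=1.0, tol=1e-12, max_iter=100):
    # Newton's iteration for f(x) = x**P - N:
    # x_{k+1} = x_k - f(x_k)/f'(x_k) = ((P - 1)*x_k + N / x_k**(P - 1)) / P
    x = float(x0)
    for _ in range(max_iter):
        x_next = ((P - 1) * x + N / x ** (P - 1)) / P
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

print(pth_root(2, 3))  # cube root of 2, approximately 1.2599210498948732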
Al-Hassār, Ibn al-Banna, and al-Qalasadi
Al-Hassār, a mathematician from Morocco specializing in Islamic inheritance jurisprudence during the 12th century, developed the modern symbolic mathematical notation for fractions, where the numerator and denominator are separated by a horizontal bar. This same fractional notation appeared soon after in the work of Fibonacci in the 13th century.
Abū al-Hasan ibn Alī al-Qalasādī (1412–1486) was the last major medieval Arab algebraist, who made the first attempt at creating an algebraic notation since Ibn al-Banna two centuries earlier, who was himself the first to make such an attempt since Diophantus and Brahmagupta in ancient times. The syncopated notations of his predecessors, however, lacked symbols for mathematical operations. Al-Qalasadi "took the first steps toward the introduction of algebraic symbolism by using letters in place of numbers" and by "using short Arabic words, or just their initial letters, as mathematical symbols."
Europe and the Mediterranean region
Just as the death of Hypatia signals the close of the Library of Alexandria as a mathematical center, so does the death of Boethius signal the end of mathematics in the Western Roman Empire. Although there was some work being done at Athens, it came to a close when in 529 the Byzantine emperor Justinian closed the pagan philosophical schools. The year 529 is now taken to be the beginning of the medieval period. Scholars fled the West towards the more hospitable East, particularly towards Persia, where they found haven under King Chosroes and established what might be termed an "Athenian Academy in Exile". Under a treaty with Justinian, Chosroes would eventually return the scholars to the Eastern Empire. During the Dark Ages, European mathematics was at its nadir with mathematical research consisting mainly of commentaries on ancient treatises; and most of this research was centered in the Byzantine Empire. The end of the medieval period is set as the fall of Constantinople to the Turks in 1453.
Late Middle Ages
The 12th century saw a flood of translations from Arabic into Latin and by the 13th century, European mathematics was beginning to rival the mathematics of other lands. In the 13th century, the solution of a cubic equation by Fibonacci is representative of the beginning of a revival in European algebra.
As the Islamic world was declining after the 15th century, the European world was ascending. And it is here that algebra was further developed.
Symbolic algebra
Modern notation for arithmetic operations was introduced between the end of the 15th century and the beginning of the 16th century by Johannes Widmann and Michael Stifel. At the end of 16th century, François Viète introduced symbols, now called variables, for representing indeterminate or unknown numbers. This created a new algebra consisting of computing with symbolic expressions as if they were numbers.
Another key event in the further development of algebra was the general algebraic solution of the cubic and quartic equations, developed in the mid-16th century. The idea of a determinant was developed by Japanese mathematician Kowa Seki in the 17th century, followed by Gottfried Leibniz ten years later, for the purpose of solving systems of simultaneous linear equations using matrices. Gabriel Cramer also did some work on matrices and determinants in the 18th century.
The symbol x
By tradition, the first unknown variable in an algebraic problem is nowadays represented by the symbol x, and if there is a second or a third unknown, then these are labeled y and z respectively. Algebraic x is conventionally printed in italic type to distinguish it from the sign of multiplication.
Mathematical historians generally agree that the use of x in algebra was introduced by René Descartes and was first published in his treatise La Géométrie (1637). In that work, he used letters from the beginning of the alphabet for known quantities, and letters from the end of the alphabet for unknowns. It has been suggested that he later settled on x (in place of z) for the first unknown because of its relatively greater abundance in the French and Latin typographical fonts of the time.
Three alternative theories of the origin of algebraic x were suggested in the 19th century: (1) a symbol used by German algebraists and thought to be derived from a cursive letter r, mistaken for x; (2) the numeral 1 with an oblique strikethrough; and (3) an Arabic/Spanish source (see below). But the Swiss-American historian of mathematics Florian Cajori examined these and found all three lacking in concrete evidence; Cajori credited Descartes as the originator, and described his x, y, and z as "free from tradition[,] and their choice purely arbitrary."
Nevertheless, the Hispano-Arabic hypothesis continues to have a presence in popular culture today. It is the claim that algebraic x is the abbreviation of a supposed loanword from Arabic in Old Spanish. The theory originated in 1884 with the German orientalist Paul de Lagarde, shortly after he published his edition of a 1505 Spanish/Arabic bilingual glossary in which Spanish cosa ("thing") was paired with its Arabic equivalent, (shayʔ), transcribed as xei. (The "sh" sound in Old Spanish was routinely spelled x.) Evidently Lagarde was aware that Arab mathematicians, in the "rhetorical" stage of algebra's development, often used that word to represent the unknown quantity. He surmised that "nothing could be more natural" ("Nichts war also natürlicher...") than for the initial of the Arabic word—romanized as the Old Spanish x—to be adopted for use in algebra. A later reader reinterpreted Lagarde's conjecture as having "proven" the point. Lagarde was unaware that early Spanish mathematicians used, not a transcription of the Arabic word, but rather its translation in their own language, "cosa". There is no instance of xei or similar forms in several compiled historical vocabularies of Spanish.
Gottfried Leibniz
Although the mathematical notion of function was implicit in trigonometric and logarithmic tables, which existed in his day, Gottfried Leibniz was the first, in 1692 and 1694, to employ it explicitly, to denote any of several geometric concepts derived from a curve, such as abscissa, ordinate, tangent, chord, and the perpendicular. In the 18th century, "function" lost these geometrical associations.
Leibniz realized that the coefficients of a system of linear equations could be arranged into an array, now called a matrix, which can be manipulated to find the solution of the system, if any. This method was later called Gaussian elimination. Leibniz also discovered Boolean algebra and symbolic logic, also relevant to algebra.
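A minimal sketch of the elimination Leibniz described, in its modern formulation with partial pivoting (the function name is illustrative):

def gaussian_elimination(aug):
    # aug is the augmented matrix [A | b] as a list of rows.
    n = len(aug)
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column up.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        for row in range(col + 1, n):
            factor = aug[row][col] / aug[col][col]
            for k in range(col, n + 1):
                aug[row][k] -= factor * aug[col][k]
    # Back substitution on the resulting triangular system.
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = sum(aug[row][k] * x[k] for k in range(row + 1, n))
        x[row] = (aug[row][n] - s) / aug[row][row]
    return x

# x + 2y = 5 and 3x + 4y = 11 give x = 1, y = 2:
print(gaussian_elimination([[1.0, 2.0, 5.0], [3.0, 4.0, 11.0]]))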
Abstract algebra
The ability to do algebra is a skill cultivated in mathematics education. As explained by Andrew Warwick, Cambridge University students in the early 19th century practiced "mixed mathematics", doing exercises based on physical variables such as space, time, and weight. Over time the association of variables with physical quantities faded away as mathematical technique grew. Eventually mathematics was concerned completely with abstract polynomials, complex numbers, hypercomplex numbers and other concepts. Application to physical situations was then called applied mathematics or mathematical physics, and the field of mathematics expanded to include abstract algebra. For instance, the issue of constructible numbers showed some mathematical limitations, and the field of Galois theory was developed.
Father of algebra
The title of "the father of algebra" is frequently credited to the Persian mathematician Al-Khwarizmi, supported by historians of mathematics, such as Carl Benjamin Boyer, Solomon Gandz and Bartel Leendert van der Waerden. However, the point is debatable and the title is sometimes credited to the Hellenistic mathematician Diophantus. "Diophantus, the father of algebra, in whose honor I have named this chapter, lived in Alexandria, in Roman Egypt, in either the 1st, the 2nd, or the 3rd century CE." Those who support Diophantus point to the algebra found in Al-Jabr being more elementary than the algebra found in Arithmetica, and Arithmetica being syncopated while Al-Jabr is fully rhetorical. However, the mathematics historian Kurt Vogel argues against Diophantus holding this title, as his mathematics was not much more algebraic than that of the ancient Babylonians.
Those who support Al-Khwarizmi point to the fact that he gave an exhaustive explanation for the algebraic solution of quadratic equations with positive roots, and was the first to teach algebra in an elementary form and for its own sake, whereas Diophantus was primarily concerned with the theory of numbers. Al-Khwarizmi also introduced the fundamental concept of "reduction" and "balancing" (which he originally referred to by the term al-jabr), referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. Other supporters of Al-Khwarizmi point to his algebra no longer being concerned "with a series of problems to be resolved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study." They also point to his treatment of an equation for its own sake and "in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems". Victor J. Katz regards Al-Jabr as the first true algebra text that is still extant.
According to Jeffrey Oaks and Jean Christianidis neither Diophantus nor Al-Khwarizmi should be called "father of algebra". Pre-modern algebra was developed and used by merchants and surveyors as part of what Jens Høyrup called "subscientific" tradition. Diophantus used this method of algebra in his book, in particular for indeterminate problems, while Al-Khwarizmi wrote one of the first books in Arabic about this method.
See also
References
Sources
Edition by Paul de Lagarde, Göttingen: Arnold Hoyer, 1883
(online access only in U.S.)
External links
"Commentary by Islam's Sheikh Zakariyya al-Ansari on Ibn al-Hā’im's Poem on the Science of Algebra and Balancing Called the Creator's Epiphany in Explaining the Cogent" featuring the basic concepts of algebra dating back to the 15th century, from the World Digital Library. | History of algebra | [
"Mathematics"
] | 8,922 | [
"History of algebra",
"Algebra"
] |
9,550,090 | https://en.wikipedia.org/wiki/Globoside | Globosides (also known as globo-series glycosphingolipids) are a sub-class of the lipid class glycosphingolipid with three to nine sugar molecules as the side chain (or R group) of ceramide. The sugars are usually a combination of N-acetylgalactosamine, D-glucose or D-galactose. One characteristic of globosides is that the "core" sugars consist of Glucose-Galactose-Galactose (Ceramide-βGlc4-1βGal4-1αGal), as in the case of the most basic globoside, globotriaosylceramide (Gb3), also known as Pk antigen. Another important characteristic of globosides is that they are neutral at pH 7, because they usually do not contain neuraminic acid, a sugar with an acidic carboxyl group. However, some globosides with the core structure Cer-Glc-Gal-Gal do contain neuraminic acid, e.g. the globo-series glycosphingolipid "SSEA-4-antigen".
The side chain can be cleaved by galactosidases and glucosidases. The deficiency of α-galactosidase A causes Fabry's disease, an inherited metabolic disease characterized by the accumulation of the globoside globotriaosylceramide.
Globoside-4 (Gb4)
Globoside 4 (Gb4) has been known as the receptor for parvovirus B19, due to observations that B19V binds to the structure on thin-layer chromatograms. However, the binding on the cell surface does not match well with the virus, which raised debate about whether or not Gb4 is the cause of productive infection. Additional research using knockout cell lines has shown that although Gb4 is not the direct entry receptor for B19V, it plays a post-entry role in productive infection.
Globoside 4 (Gb4) is a type of SSEA (stage-specific embryonic antigen) that is present in cellular development and in tumorous tissues, although the mechanism of Gb4 is not completely known. However, a study has shown that Gb4 directly activates the epidermal growth factor receptor through ERK signaling. When globo-series glycosphingolipids (GSLs) were reduced in the experiment, ERK signaling from the receptor tyrosine kinase was also inhibited. ERK was reactivated with the addition of Gb4, which heightened the proliferation of tumorous cells and opened up the possibility of testing Gb4 in further studies on potential drugs that can target cancerous cells.
Globoside-5 (Gb5)
Globoside-5 is also known as stage-specific embryonic antigen 3.
References
External links
Glycolipids
Blood antigen systems
Transfusion medicine | Globoside | [
"Chemistry"
] | 635 | [
"Glycobiology",
"Carbohydrates",
"Glycolipids"
] |
9,550,415 | https://en.wikipedia.org/wiki/Generator%20%28category%20theory%29 | In mathematics, specifically category theory, a family of generators (or family of separators) of a category C is a collection of objects in C, such that for any two distinct morphisms f, g : X → Y in C, that is with f ≠ g, there is some G in the collection and some morphism h : G → X such that f ∘ h ≠ g ∘ h. If the collection consists of a single object G, we say it is a generator (or separator).
Generators are central to the definition of Grothendieck categories.
The dual concept is called a cogenerator or coseparator.
Examples
In the category of abelian groups, the group of integers Z is a generator: If f and g are different, then there is an element x ∈ X such that f(x) ≠ g(x). Hence the map Z → X, n ↦ n·x suffices.
Similarly, the one-point set is a generator for the category of sets. In fact, any nonempty set is a generator.
In the category of sets, any set with at least two elements is a cogenerator.
In the category of modules over a ring R, the ring R itself is a generator; more generally, a module is a generator if and only if some finite direct sum of copies of it contains an isomorphic copy of R as a direct summand. Consequently, a generator module is faithful, i.e. has zero annihilator.
References
, p. 123, section V.7
External links
Category theory | Generator (category theory) | [
"Mathematics"
] | 263 | [
"Functions and mappings",
"Mathematical structures",
"Category theory stubs",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory"
] |
9,550,452 | https://en.wikipedia.org/wiki/Moseley%20Wrought%20Iron%20Arch%20Bridge | The Moseley Wrought Iron Arch Bridge, also known as the Upper Pacific Mills Bridge, is a historic, riveted, wrought iron bowstring arch bridge now located on the campus of Merrimack College in North Andover, Massachusetts. It was added to the National Historic Civil Engineering Landmark list in 1998 and was originally part of the North Canal Historic District on the National Register of Historic Places. It is the oldest iron bridge in Massachusetts, and one of the oldest iron bridges in the United States. It was the first bridge in the United States to use riveted wrought iron plates for the triangular-shaped top chord.
The bridge was completed in 1864 as Moseley Truss Bridge built by the Moseley Iron Building Works of Boston, to connect the Pacific Mills with Canal Street in Lawrence, Massachusetts, by spanning the North Canal. It partially collapsed in the late 1980s, but in 1989 it was removed to the Merrimack College campus in North Andover and was rehabilitated under the direction of Francis E. Griggs, Jr., Professor of Civil Engineering. It was placed over a campus pond as a footbridge, and was rededicated in this new location on October 23, 1995.
See also
Hares Hill Road Bridge
List of bridges documented by the Historic American Engineering Record in Massachusetts
Zenas King
References
Sources
External links
Wrought iron bridges in the United States
Bridges completed in 1864
Former road bridges in the United States
Pedestrian bridges in Massachusetts
Relocated buildings and structures in Massachusetts
Rebuilt buildings and structures in the United States
Tied arch bridges in the United States
Historic American Engineering Record in Massachusetts
Historic Civil Engineering Landmarks
Buildings and structures in North Andover, Massachusetts
Bridges in Essex County, Massachusetts
Road bridges in Massachusetts
Merrimack College | Moseley Wrought Iron Arch Bridge | [
"Engineering"
] | 347 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |