Dataset columns: id (int64, 580 to 79M); url (string, 31 to 175 characters); text (string, 9 to 245k characters); source (string, 1 to 109 characters); categories (string, 160 classes); token_count (int64, 3 to 51.8k). Each record below lists these fields in that order.
40,904,234
https://en.wikipedia.org/wiki/Pentacosylic%20acid
Pentacosylic acid, also known as pentacosanoic acid or hyenic acid, is a 25-carbon long-chain saturated fatty acid with the chemical formula CH3(CH2)23COOH. See also List of saturated fatty acids Very long chain fatty acids List of carboxylic acids References Fatty acids Alkanoic acids
Pentacosylic acid
Chemistry
66
11,655,832
https://en.wikipedia.org/wiki/Transverse%20measure
In mathematics, a measure on a real vector space is said to be transverse to a given set if it assigns measure zero to every translate of that set, while assigning finite and positive (i.e. non-zero) measure to some compact set. Definition Let V be a real vector space together with a metric space structure with respect to which it is complete. A Borel measure μ is said to be transverse to a Borel-measurable subset S of V if there exists a compact subset K of V with 0 < μ(K) < +∞; and μ(v + S) = 0 for all v ∈ V, where v + S is the translate of S by v. The first requirement ensures that, for example, the trivial measure is not considered to be a transverse measure. Example As an example, take V to be the Euclidean plane R2 with its usual Euclidean norm/metric structure. Define a measure μ on R2 by setting μ(E) to be the one-dimensional Lebesgue measure of the intersection of E with the first coordinate axis. An example of a compact set K with positive and finite μ-measure is K = B1(0), the closed unit ball about the origin, which has μ(K) = 2. Now take the set S to be the second coordinate axis. Any translate (v1, v2) + S of S will meet the first coordinate axis in precisely one point, (v1, 0). Since a single point has Lebesgue measure zero, μ((v1, v2) + S) = 0, and so μ is transverse to S. See also Prevalent and shy sets References Measures (measure theory)
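A compact way to write the measure described in this example, using λ¹ for one-dimensional Lebesgue measure (a notational choice assumed here), is

\mu(E) \;=\; \lambda^{1}\bigl(\{\, x \in \mathbb{R} : (x, 0) \in E \,\}\bigr), \qquad E \subseteq \mathbb{R}^{2} \text{ Borel},

so that \mu(K) = 2 for the closed unit ball K = B_1(0), while \mu\bigl((v_1, v_2) + S\bigr) = \lambda^{1}(\{v_1\}) = 0 for every translate of the second coordinate axis S, which is exactly the transversality condition.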
Transverse measure
Physics,Mathematics
347
4,888,685
https://en.wikipedia.org/wiki/Process%20analytical%20technology
Process analytical technology (PAT) has been defined by the United States Food and Drug Administration (FDA) as a mechanism to design, analyze, and control pharmaceutical manufacturing processes through the measurement of critical process parameters (CPP) which affect the critical quality attributes (CQA). The concept aims at understanding the processes by defining their CPPs, and accordingly monitoring them in a timely manner (preferably in-line or on-line) and thus being more efficient in testing while at the same time reducing over-processing, enhancing consistency and minimizing rejects. The FDA has outlined a regulatory framework for PAT implementation. With this framework – according to Hinz – the FDA tries to motivate the pharmaceutical industry to improve the production process. Because of the tight regulatory requirements and the long development time for a new drug, the production technology is "frozen" at the time of conducting phase-2 clinical trials. Generally, the PAT initiative from FDA is only one topic within the broader initiative of "Pharmaceutical cGMPs for the 21st century – A risk based approach". The basics PAT is a term used for describing a broader change in pharmaceutical manufacturing from static batch manufacturing to a more dynamic approach. It involves defining the Critical Process Parameters (CPPs) of the equipment used to make the product, which affect the Critical Quality Attributes (CQAs) of the product and then controlling these CPPs within defined limits. This allows manufacturers to produce products with consistent quality and also helps to reduce waste & overall costs. This mechanism for producing consistent product quality & reducing waste presents a good case for utilising continuous manufacturing technologies. The control of a steady state process when you understand the upstream & downstream effects is an easier task as common cause variability is easier to define and monitor. The variables It would be acceptable to consider that raw materials used to manufacture pharmaceutical products can vary in their attributes e.g. moisture content, crystal structure etc. It would also be acceptable to consider that manufacturing equipment does not always operate in exactly the same fashion due to the inherent tolerance of the equipment and its components. It is therefore logical to say that variability in raw materials married with a static batch process with inherent variability in process equipment produces variable product. This is on the basis that a static batch process produces product by following a fixed recipe with fixed set-points. With this in mind the PAT drive is to have a dynamic manufacturing process that compensates for variability both in raw materials & equipment to produce a consistent product. PAT implementation The challenge to date with PAT for pharmaceutical manufacturers is knowing how to start. A common problem is picking a complex process and getting mired in the challenge of collecting and analyzing the data. The following criteria serve as a basic framework for successful PAT roll-outs: (From A PAT Primer) Picking a simple process. (Think water for injection (WFI) or building monitoring system (BMS) All details and nuances are well understood and explained for that process. Determine what information is easily collected and accessible through current instrumentation. Understanding the appropriate intervals for collecting that data. Evaluating the tools available for reading and synchronizing the data. 
PAT tools In order to implement a successful PAT project, a combination of three main PAT tools is essential: Multivariate data acquisition and data analysis tools: usually advanced software packages which aid in design of experiments, collection of raw data and statistically analyzing this data in order to determine what parameters are CPP. Process analytical chemistry (PAC) tools: in-line and on-line analytical instruments used to measure those parameters that have been defined as CPP. These include mainly near infrared spectroscopy (NIRS); but also include biosensors, Raman spectroscopy, fiber optics and others. Continuous improvement and/or knowledge management tools: paper systems or software packages which accumulate Quality Control data acquired over time for specific processes with the aim of defining process weaknesses and implementing and monitoring process improvement initiatives. These products may be the same or separated from the statistical analysis tools above. Long-term goals The long-term goals of PAT are to: reduce production cycling time prevent rejection of batches enable real time release increase automation and control improve energy and material use facilitate continuous processing Currently NIR spectroscopy applications dominate the PAT projects. A possible next-generation solution is Energy Dispersive X-Ray Diffraction (EDXRD). For a detailed review of PAT tools see Scott, or Roggo. For an example of application see Gendre. Although the FDA's PAT initiative encourages process control based on the real-time acquired data, a small part of PAT applications goes beyond monitoring the processes and follows the PACT (‘Process Analytically Controlled Technology’) approach. MVA in PAT Fundamental to process analytical technology (PAT) initiatives are the basics of multivariate analysis (MVDA) and design of experiments (DoE). This is because analysis of the process data is a key to understand the process and keep it under multivariate statistical control. Footnotes References FDA: PAT Initiative EMEA: Inspections – Process Analytical Technology ASTM PAT Committee Process Analytical Technology Resource Website PAT Seminars & Events Drug manufacturing Design for X Quality control
Process analytical technology
Engineering
1,041
3,240,022
https://en.wikipedia.org/wiki/Gliese%20667
Gliese 667 (142 G. Scorpii) is a triple-star system in the constellation Scorpius lying at a distance of about from Earth. All three of the stars have masses smaller than the Sun. There is a 12th-magnitude star close to the other three, but it is not gravitationally bound to the system. To the naked eye, the system appears to be a single faint star of magnitude 5.89. The system has a relatively high proper motion, exceeding 1 second of arc per year. The two brightest stars in this system, GJ 667 A and GJ 667 B, are orbiting each other at an average angular separation of 1.81 arcseconds with a high eccentricity of 0.58. At the estimated distance of this system, this is equivalent to a physical separation of about 12.6 AU, or nearly 13 times the separation of the Earth from the Sun. Their eccentric orbit brings the pair as close as about 5 AU to each other, or as distant as 20 AU, corresponding to an eccentricity of 0.6. This orbit takes approximately 42.15 years to complete and the orbital plane is inclined at an angle of 128° to the line of sight from the Earth. The third star, GJ 667 C, orbits the GJ 667 AB pair at an angular separation of about 30", which equates to a minimum separation of 230 AU. GJ 667 C also has a system of two confirmed super-Earths and a number of additional doubtful candidates, though the innermost, GJ 667 Cb, may be a gas dwarf; GJ 667 Cc, and the controversial Cf and Ce, are in the circumstellar habitable zone. Gliese 667 A The largest star in the system, Gliese 667 A (GJ 667 A), is a K-type main-sequence star of stellar classification K3V. It has about 73% of the mass of the Sun and 76% of the Sun's radius, but is radiating only around 12-13% of the luminosity of the Sun. The concentration of elements other than hydrogen and helium, what astronomers term the star's metallicity, is much lower than in the Sun with a relative abundance of around 26% solar. The apparent visual magnitude of this star is 6.29, which, at the star's estimated distance, gives an absolute magnitude of around 7.07 (assuming negligible extinction from interstellar matter). Gliese 667 B Like the primary, the secondary star Gliese 667 B (GJ 667 B) is a K-type main-sequence star, although it has a slightly later stellar classification of K5V. This star has a mass of about 69% of the Sun, or 95% of the primary's mass, and it is radiating about 5% of the Sun's visual luminosity. The secondary's apparent magnitude is 7.24, giving it an absolute magnitude of around 8.02. Gliese 667 C Gliese 667 C is the smallest star in the system, with only around 33% of the mass of the Sun and 34% of the Sun's radius, orbiting approximately 230 AU from the Gliese 667 AB pair. It is a red dwarf with a stellar classification of M1.5. This star is radiating only 1.4% of the Sun's luminosity from its outer atmosphere at a relatively cool effective temperature of 3,440 K. This temperature is what gives it the red-hued glow that is a characteristic of M-type stars. The apparent magnitude of the star is 10.25, giving it an absolute magnitude of about 11.03. It is known to have a system of two planets; claims have been made for up to five additional planets but this is likely to be in error due to failure to account for correlated noise in the radial velocity data. The red dwarf status of the star would allow planet Cc, which is in the habitable zone, to receive minimal amounts of ultraviolet radiation. 
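The quoted separations follow from the small-angle rule that one arcsecond at one parsec subtends one astronomical unit; the short sketch below is illustrative only (the variable names are invented, and the distance is inferred from the figures quoted above rather than stated in the text):

mean_sep_arcsec = 1.81          # mean angular separation of GJ 667 A and B
mean_sep_au = 12.6              # quoted mean physical separation

# 1 arcsec at 1 pc corresponds to 1 AU, so distance [pc] = separation [AU] / separation [arcsec].
distance_pc = mean_sep_au / mean_sep_arcsec       # about 7.0 pc
distance_ly = distance_pc * 3.2616                # about 22.7 light years

# The outer component sits about 30 arcseconds from the AB pair:
c_separation_au = 30 * distance_pc                # about 210 AU, the same order as the
print(distance_pc, distance_ly, c_separation_au)  # quoted minimum separation of 230 AU

The roughly 23 light-year figure implied here is consistent with parallax-based distance estimates usually given for this system.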
Planetary system Two extrasolar planets, Gliese 667 Cb (GJ 667 Cb) and Cc, have been confirmed orbiting Gliese 667 C by radial velocity measurements of GJ 667. There were also thought to be up to five other potential additional planets; however, it was later shown that they are likely to be artifacts resulting from correlated noise. Planet Cb was first announced by the European Southern Observatory's HARPS group on 19 October 2009. The announcement was made together with 29 other planets, while Cc was first mentioned by the same group in a pre-print made public on 21 November 2011. Announcement of a refereed journal report came on 2 February 2012 by researchers at the University of Göttingen/Carnegie Institution for Science. In this announcement, GJ 667 Cc was described as one of the best candidates yet found to harbor liquid water, and thus, potentially, support life on its surface. A detailed orbital analysis and refined orbital parameters for Gliese 667 Cc were presented. Based on GJ 667 C's bolometric luminosity, GJ 667 Cc would receive 90% of the light Earth does; however, much of that electromagnetic radiation would be in the invisible infrared light part of the spectrum. From the surface of Gliese 667 Cc, the second-confirmed planet out that orbits along the middle of the habitable zone, Gliese 667 C would have an angular diameter of 1.24 degrees—2.3 times larger than the Sun appears from the surface of the Earth, covering 5.4 times more area—but would still only occupy 0.003% of Gliese 667 Cc's sky sphere or 0.006% of the visible sky when directly overhead. At one point, up to five additional planets were thought to exist in the system, with three of them thought to be relatively certain to exist. However, multiple subsequent studies showed that the other proposed planets in the system were likely to be artifacts of noise and stellar activity, cutting the number of confirmed planets down to two. While one analysis did find some evidence for a third planet, Gliese 667 Cd with a period of about 90 days, but was unable to confirm it, other studies found that that specific signal very likely originates from the stellar rotation. Thus, despite its inclusion in a list of planet candidates in a 2019 preprint (never accepted for publication as of 2024), it is unlikely that Gliese 667 Cd exists. References Notes External links exoplanet art sites: K-type main-sequence stars M-type main-sequence stars Scorpius Planetary systems with two confirmed planets 156384 3 Triple star systems Scorpii, 142 084709 0667 J17185698-3459236 TIC objects
Gliese 667
Astronomy
1,432
4,952,436
https://en.wikipedia.org/wiki/Thread%20%28online%20communication%29
Conversation threading is a feature used by many email clients, bulletin boards, newsgroups, and Internet forums in which the software aids the user by visually grouping messages with their replies. These groups are called a conversation, topic thread, or simply a thread. A discussion forum, e-mail client or news client is said to have a "conversation view", "threaded topics" or a "threaded mode" if messages can be grouped in this manner. An email thread is also sometimes called an email chain. Threads can be displayed in a variety of ways. Early messaging systems (and most modern email clients) will automatically include original message text in a reply, making each individual email into its own copy of the entire thread. Software may also arrange threads of messages within lists, such as an email inbox. These arrangements can be hierarchical or nested, arranging messages close to their replies in a tree, or they can be linear or flat, displaying all messages in chronological order regardless of reply relationships. Conversation threading as a form of interactive journalism became popular on Twitter from around 2016 onward, when authors such as Eric Garland and Seth Abramson began to post essays in real time, constructing them as a series of numbered tweets, each limited to 140 or 280 characters. Mechanism Internet email clients compliant with the RFC 822 standard (and its successor RFC 5322) add a unique message identifier in the Message-ID: header field of each message, e.g. Message-ID: <xNCx2XP2qgUc9Qd2uR99iHsiAaJfVoqj91ocj3tdWT@wikimedia.org> If a user creates message B by replying to message A, the mail client will add the unique message ID of message A in form of the fields In-Reply-To: <xNCx2XP2qgUc9Qd2uR99iHsiAaJfVoqj91ocj3tdWT@wikimedia.org> References: <xNCx2XP2qgUc9Qd2uR99iHsiAaJfVoqj91ocj3tdWT@wikimedia.org> to the header of reply B. RFC 5322 defines the following algorithm for populating these fields: The "In-Reply-To:" field will contain the contents of the "Message-ID:" field of the message to which this one is a reply (the "parent message"). If there is more than one parent message, then the "In-Reply-To:" field will contain the contents of all of the parents' "Message-ID:" fields. If there is no "Message-ID:" field in any of the parent messages, then the new message will have no "In- Reply-To:" field. The "References:" field will contain the contents of the parent's "References:" field (if any) followed by the contents of the parent's "Message-ID:" field (if any). If the parent message does not contain a "References:" field but does have an "In-Reply-To:" field containing a single message identifier, then the "References:" field will contain the contents of the parent's "In-Reply-To:" field followed by the contents of the parent's "Message-ID:" field (if any). If the parent has none of the "References:", "In-Reply-To:", or "Message-ID:" fields, then the new message will have no "References:" field. Modern email clients then can use the unique message identifiers found in the RFC 822 Message-ID, In-Reply-To: and References: fields of all received email headers to locate the parent and root message in the hierarchy, reconstruct the chain of reply-to actions that created them, and display them as a discussion tree. The purpose of the References: field is to enable reconstruction of the discussion tree even if some replies in it are missing. 
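The field-population rules quoted above map almost directly onto code; the following sketch (a simplified, single-parent illustration, not any particular mail client's implementation) builds the reply headers and then derives a thread key for grouping messages:

def reply_headers(parent):
    """Build In-Reply-To:/References: for a reply, following the RFC 5322 rules quoted above.
    `parent` is a dict of the parent message's header fields (single-parent case)."""
    headers = {}
    if "Message-ID" in parent:
        headers["In-Reply-To"] = parent["Message-ID"]

    # References: the parent's References (or, failing that, its In-Reply-To when it holds
    # a single identifier), followed by the parent's Message-ID.
    refs = []
    if "References" in parent:
        refs.extend(parent["References"].split())
    elif "In-Reply-To" in parent and len(parent["In-Reply-To"].split()) == 1:
        refs.extend(parent["In-Reply-To"].split())
    if "Message-ID" in parent:
        refs.append(parent["Message-ID"])
    if refs:
        headers["References"] = " ".join(refs)
    return headers

def thread_key(msg):
    """Group messages into threads: the root is the first identifier in References:,
    falling back to In-Reply-To:, then to the message's own Message-ID:."""
    refs = msg.get("References", "").split() or msg.get("In-Reply-To", "").split()
    return refs[0] if refs else msg.get("Message-ID")

Because References: accumulates identifiers oldest-first, its first entry normally points at the root of the discussion tree, which is what allows the tree to be reconstructed even when intermediate replies are missing.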
Advantages Elimination of turn-taking and time constraints Threaded discussions allow readers to quickly grasp the overall structure of a conversation, isolate specific points of conversations nested within the threads, and as a result, post new messages to extend discussions in any existing thread or sub-thread without time constraints. With linear threads on the other hand, once the topic shifts to a new point of discussion, users are: 1) less inclined to make posts to revisit and expand on earlier points of discussion in order to avoid fragmenting the linear conversation similar to what occurs with turn-taking in face-to-face conversations; and/or 2) obligated to make a motion to stay on topic or move to change the topic of discussion. Given this advantage, threaded discussion is most useful for facilitating extended conversations or debates involving complex multi-step tasks (e.g., identify major premises → challenge veracity → share evidence → question accuracy, validity, or relevance of presented evidence) – as often found in newsgroups and complicated email chains – as opposed to simple single-step tasks (e.g., posting or share answers to a simple question). Message targeting Email allows messages to be targeted at particular members of the audience by using the "To" and "CC" lines. However, some message systems do not have this option. As a result, it can be difficult to determine the intended recipient of a particular message. When messages are displayed hierarchically, it is easier to visually identify the author of the previous message. Eliminating list clutter It can be difficult to process, analyze, evaluate, synthesize, and integrate important information when viewing large lists of messages. Grouping messages by thread makes the process of reviewing large numbers of messages in context to a given discussion topic more time efficient and with less mental effort, thus making more time and mental resources available to further extend and advance discussions within each individual topic/thread. In group forums, allowing users to reply to threads will reduce the number of new posts shown in the list. Some clients allow operations on entire threads of messages. For example, the text-based newsreader nn has a "kill" function which automatically deletes incoming messages based on the rules set up by the user matching the message's subject or author. This can dramatically reduce the number of messages one has to manually check and delete. Real-time feedback When an author, usually a journalist, posts threads via Twitter, users are able to respond to each 140- or 280-character tweet in the thread, often before the author posts the next message. This allows the author the option of including the feedback as part of subsequent messages. Disadvantages Reliability Accurate threading of messages requires the email software to identify messages that are replies to other messages. Some algorithms used for this purpose can be unreliable. For example, email clients that use the subject line to relate messages can be fooled by two unrelated messages that happen to have the same subject line. Modern email clients use unique identifiers in email headers to locate the parent and root message in the hierarchy. When non-compliant clients participate in discussions, they can confuse message threading as it depends on all clients respecting these optional mail standards when composing replies to messages. 
Individual message control Messages within a thread do not always provide the user with the same options as individual messages. For example, it may not be possible to move, star, reply to, archive, or delete individual messages that are contained within a thread. The lack of individual message control can prevent messaging systems from being used as to-do lists (a common function of email folders). Individual messages that contain information relevant to a to-do item can easily get lost in a long thread of messages. Parallel discussions With conversational threading, it is much easier to reply to individual messages that are not the most recent message in the thread. As a result, multiple threads of discussions often occur in parallel. Following, revisiting, and participating in parallel discussions at the same time can be mentally challenging. Following parallel discussions can be particularly disorienting and can inhibit discussions when discussion threads are not organized in a coherent, conceptual, or logical structure (e.g., threads presenting arguments in support of a given claim under debate intermingled with threads presenting arguments in opposition to the claim). Temporal fragmentation Thread fragmentation can be particularly problematic for systems that allow users to choose different display modes (hierarchical vs. linear). Users of the hierarchical display mode will reply to older messages, confusing users of the linear display mode. Examples The following email clients, forums, bbs, newsgroups, image/text boards, and social networks can group and display messages by thread. Client-based Apple Mail Emacs Gnus FastMail Forte Agent Gmail Mailbird Microsoft Outlook Mozilla Thunderbird Mutt Pan Protonmail slrn Web-based 4chan Discourse FastMail Gmail Hacker News MSN Groups Protonmail Reddit Roundcube Slashdot Yahoo! Groups Zulip References Sources cited in Wolsey, T. DeVere, "Literature discussion in cyberspace: Young adolescents using threaded discussion groups to talk about books. Reading Online, 7(4), January/February 2004. Retrieved 2007-12-30. Network Working Group, IETF (June 2008). "Internet Message Access Protocol - SORT and THREAD Extensions". Retrieved 2009-10-10. Internet terminology Internet forum terminology
Thread (online communication)
Technology
1,966
352,783
https://en.wikipedia.org/wiki/Phylogenesis
Phylogenesis (from Greek φῦλον phylon "tribe" + γένεσις genesis "origin") is the biological process by which a taxon (of any rank) appears. The science that studies these processes is called phylogenetics. These terms may be confused with the term phylogenetics, the application of molecular - analytical methods (i.e. molecular biology and genomics), in the explanation of phylogeny and its research. Phylogenetic relationships are discovered through phylogenetic inference methods that evaluate observed heritable traits, such as DNA sequences or overall morpho-anatomical, ethological, and other characteristics. Phylogeny The result of these analyses is a phylogeny (also known as a phylogenetic tree) – a diagrammatic hypothesis about the history of the evolutionary relationships of a group of organisms. Phylogenetic analyses have become central to understanding biodiversity, evolution, ecological genetics and genomes. Cladistics Cladistics (Greek , klados, i.e. "branch") is an approach to biological classification in which organisms are categorized based on shared, derived characteristics that can be traced to a group's most recent common ancestor and are not present in more distant ancestors. Therefore, members of a group are assumed to share a common history and are considered to be closely related. The cladistic method interprets each character state transformation implied by the distribution of shared character states among taxa (or other terminals) as a potential piece of evidence for grouping. The outcome of a cladistic analysis is a cladogram – a tree-shaped diagram (dendrogram) that is interpreted to represent the best hypothesis of phylogenetic relationships. Although traditionally such cladograms were generated largely on the basis of morphological characteristics calculated by hand, genetic sequencing data and computational phylogenetics are now commonly used and the parsimony criterion has been abandoned by many phylogeneticists in favor of more "sophisticated" (but less parsimonious) evolutionary models of character state transformation. Taxonomy Taxonomy (Greek language , taxis = 'order', 'arrangement' + , nomos = 'law' or 'science') is the classification, identification and naming of organisms. It is usually richly informed by phylogenetics, but remains a methodologically and logically distinct discipline. The degree to which taxonomies depend on phylogenies (or classification depends on evolutionary development) differs depending on the school of taxonomy: phenetics ignores phylogeny altogether, trying to represent the similarity between organisms instead; cladistics (phylogenetic systematics) tries to reproduce phylogeny in its classification Ontophylogenesis An extension of phylogenesis to the cellular level by Jean-Jacques Kupiec is known as Ontophylogenesis Similarities and differences Phylogenesis ≠ Phylogeny; Phylogenesis ≠ (≈) Phylogenetics; Phylogenesis ≠ Cladistics; Phylogenetics ≠ Cladistics; Taxonomy ≠ Cladistics. See also Phylogeny Phylogenetics Taxonomy Cladistics Ontogeny Evolution References External links Phylogenetics Evolutionary biology
Phylogenesis
Biology
635
50,014,417
https://en.wikipedia.org/wiki/Tashkent%20Planetarium
Tashkent Planetarium is one of the newest constructions in Uzbekistan, and is visited by local people and tourists. Tashkent Planetarium provides visitors with the opportunity to look at outer space, even in the morning, and enlarge their knowledge about the cosmos and the whole universe. About Tashkent Planetarium was established by edict №649 of "Cabinet of Ministers of Republic Uzbekistan" on 3 November 2003, by decision of Tashkent city municipality of 7 November 2003 №748. The Planetarium is nowadays controlled by the controlling unit of Tashkent city municipality which focuses on culture and sport. There are two main halls at the Planetarium, and each hall has its own functions. The first hall is mainly built for showing the Solar System and space, using Japanese technologies. The second hall contains artefacts, where visitors can learn more about specific planets and about Earth. In 2008 a group of scientists at Tashkent Planetarium discovered the new planet "Samarkand". See also State Museum of History of Uzbekistan The Museum of Health Care of Uzbekistan The Museum of Communication History in Uzbekistan Museum of Arts of Uzbekistan Tashkent Museum of Railway Techniques Museum of Geology, Tashkent Art Gallery of Uzbekistan The Alisher Navoi State Museum of Literature Museum of Victims of Political Repression in Tashkent State Museum of Nature of Uzbekistan References External links Brochure about the planetarium Article about the planetarium Article about the planetarium in English Article about the planetarium in English Planetaria Museums in Tashkent
Tashkent Planetarium
Astronomy
319
43,389,485
https://en.wikipedia.org/wiki/Rhetorical%20structure%20theory
Rhetorical structure theory (RST) is a theory of text organization that describes relations that hold between parts of text. It was originally developed by William Mann, Sandra Thompson, Christian M. I. M. Matthiessen and others at the University of Southern California's Information Sciences Institute (ISI) and defined in a 1988 paper. The theory was developed as part of studies of computer-based text generation. Natural language researchers later began using RST in text summarization and other applications. It explains coherence by postulating a hierarchical, connected structure of texts. In 2000, Daniel Marcu, also of ISI, demonstrated that practical discourse parsing and text summarization also could be achieved using RST. Rhetorical relations Rhetorical relations or coherence relations or discourse relations are paratactic (coordinate) or hypotactic (subordinate) relations that hold across two or more text spans. It is widely accepted that coherence in text is established through relations of this kind. RST uses rhetorical relations to provide a systematic way for an analyst to analyse a text. An analysis is usually built by reading the text and constructing a tree using the relations. The following example is a title and summary, appearing at the top of an article in Scientific American magazine (Ramachandran and Anstis, 1986). The original text, broken into numbered units, is: [Title:] The Perception of Apparent Motion [Abstract:] When the motion of an intermittently seen object is ambiguous the visual system resolves confusion by applying some tricks that reflect a built-in knowledge of properties of the physical world In the analysis, the numbers 1, 2, 3 and 4 label the corresponding units. The fourth unit and the third unit form a relation "Means". The third unit is the essential part of this relation, so it is called the nucleus of the relation, and the fourth unit is called the satellite of the relation. Similarly, the second unit is joined to the span formed by the third and fourth units by the relation "Condition". All units are also spans, and spans may be composed of more than one unit. Nuclearity in discourse RST establishes two different types of units. Nuclei are considered the most important parts of the text, whereas satellites contribute to the nuclei and are secondary. The nucleus contains the basic information, and the satellite contains additional information about the nucleus. The satellite is often incomprehensible without the nucleus, whereas a text from which the satellites have been deleted can still be understood to a certain extent. Hierarchy in the analysis RST relations are applied recursively in a text, until all units in that text are constituents in an RST relation. The result of such analyses is that RST structures are typically represented as trees, with one top-level relation that encompasses other relations at lower levels. Why RST? From a linguistic point of view, RST proposes a different view of text organization than most linguistic theories. RST points to a tight relation between relations and coherence in text. From a computational point of view, it provides a characterization of text relations that has been implemented in different systems and for applications such as text generation and summarization. In design rationale Computer scientists Ana Cristina Bicharra Garcia and Clarisse Sieckenius de Souza have used RST as the basis of a design rationale system called ADD+.
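One simple way to hold an analysis such as the Scientific American example above is as a small tree of spans in which each relation records its nucleus and its satellite; the sketch below is purely illustrative (the class names, the exact unit boundaries, and the choice of the 3-4 span as the nucleus of the "Condition" relation are assumptions, the last following the usual convention rather than anything stated above):

from dataclasses import dataclass
from typing import Union

@dataclass
class Unit:                       # an elementary text span, numbered as in the example
    number: int
    text: str

@dataclass
class Relation:                   # a hypotactic relation with one nucleus and one satellite
    name: str
    nucleus: Union["Unit", "Relation"]
    satellite: Union["Unit", "Relation"]

u2 = Unit(2, "When the motion of an intermittently seen object is ambiguous")
u3 = Unit(3, "the visual system resolves confusion by applying some tricks")
u4 = Unit(4, "that reflect a built-in knowledge of properties of the physical world")

means = Relation("Means", nucleus=u3, satellite=u4)              # units 3 and 4
condition = Relation("Condition", nucleus=means, satellite=u2)   # unit 2 conditions the 3-4 span

Applying relations recursively in this way, until every unit is a constituent of some relation, yields exactly the kind of tree-shaped structure described in the "Hierarchy in the analysis" paragraph.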
In ADD+, RST is used as the basis for the rhetorical organization of a knowledge base, in a way comparable to other knowledge representation systems such as issue-based information system (IBIS). Similarly, RST has been used in representation schemes for argumentation. See also Argument mining Parse tree References Argument technology Discourse analysis Knowledge representation Natural language processing
Rhetorical structure theory
Technology
745
48,997,198
https://en.wikipedia.org/wiki/Sodium%20lauryl%20sulfoacetate
Sodium lauryl sulfoacetate (SLSA) or lathanol is an organic compound used in many cleaning and hygiene products as an anionic surfactant. Also it is used as in sodium citrate/sodium lauryl sulfoacetate/glycerol laxative products. References Detergents Esters Dodecyl esters
Sodium lauryl sulfoacetate
Chemistry,Technology
75
5,548,333
https://en.wikipedia.org/wiki/Specialty%20engineering
In the domain of systems engineering, Specialty Engineering refers to the engineering disciplines that are not typical of the main engineering effort. More common engineering efforts in systems engineering such as hardware, software, and human factors engineering may be used as major elements in a majority of systems engineering efforts and therefore are not viewed as "special". Examples of specialty engineering include electromagnetic interference, safety, and physical security. Less common engineering domains such as electromagnetic interference, electrical grounding, safety, security, electrical power filtering/uninterruptible supply, manufacturability, and environmental engineering may be included in systems engineering efforts where they have been identified to address special system implementations. These less common but just as important engineering efforts are then viewed as "specialty engineering". However, if the specific system has a standard implementation of, for example, environmental or security engineering, the situation is reversed and human factors engineering or hardware/software engineering may be the "specialty engineering" domain. The key takeaway is that the context of the systems engineering project and the unique needs of the project are fundamental when deciding what the specialty engineering efforts are. The benefit of citing "specialty engineering" in planning is the notice to all team levels that special management and science factors may need to be accounted for and may influence the project. Specialty engineering may be cited by commercial entities and others to specify their unique abilities. References Eisner, Howard. (2002). "Essentials of Project and Systems Engineering Management". Wiley. p. 217. Systems engineering Engineering disciplines
Specialty engineering
Engineering
311
39,647,486
https://en.wikipedia.org/wiki/Ministry%20of%20Energy%20and%20Minerals
The Ministry of Energy and Minerals was the government ministry of Tanzania responsible for facilitating the development of the energy and mineral sectors. The Ministry was ultimately split in 2017 by President John Magufuli to tighten supervision on the two industries. References External links E Tanzania Tanzania Energy in Tanzania Mining in Tanzania
Ministry of Energy and Minerals
Engineering
59
4,066,199
https://en.wikipedia.org/wiki/Frenkel%20defect
In crystallography, a Frenkel defect is a type of point defect in crystalline solids, named after its discoverer Yakov Frenkel. The defect forms when an atom or smaller ion (usually a cation) leaves its place in the structure, creating a vacancy, and becomes an interstitial by lodging in a nearby location. In elemental systems, they are primarily generated during particle irradiation, as their formation enthalpy is typically much higher than for other point defects, such as vacancies, and thus their equilibrium concentration according to the Boltzmann distribution is below the detection limit. In ionic crystals, which usually possess a low coordination number or a considerable disparity in the sizes of the ions, this defect can also be generated spontaneously, where the smaller ion (usually the cation) is dislocated. Similar to a Schottky defect, the Frenkel defect is a stoichiometric defect (it does not change the overall stoichiometry of the compound). In ionic compounds, the vacancy and interstitial defect involved are oppositely charged and one might expect them to be located close to each other due to electrostatic attraction. However, this is not likely the case in real materials, due to the smaller entropy of such a coupled defect, or because the two defects might collapse into each other. Also, because such coupled complex defects are stoichiometric, their concentration will be independent of chemical conditions. Effect on density Even though Frenkel defects involve only the migration of the ions within the crystal, the total volume and thus the density is not necessarily changed: in particular for close-packed systems, the structural expansion due to the strains induced by the interstitial atom typically dominates over the structural contraction due to the vacancy, leading to a decrease of density. Examples Frenkel defects are exhibited in ionic solids with a large size difference between the anion and cation (with the cation usually smaller due to an increased effective nuclear charge). Some examples of solids which exhibit Frenkel defects: zinc sulfide, silver(I) chloride, silver(I) bromide (also shows Schottky defects), silver(I) iodide. These are due to the comparatively smaller size of Zn2+ and Ag+ ions. For example, consider a structure formed by Xn− and Mn+ ions. Suppose an M ion leaves the M sublattice, leaving the X sublattice unchanged. The number of interstitials formed will equal the number of vacancies formed. One form of a Frenkel defect reaction in MgO, with the oxide anion leaving the structure and going into the interstitial site, written in Kröger–Vink notation: Mg_Mg^x + O_O^x → O_i'' + v_O^•• + Mg_Mg^x This can be illustrated with the example of the sodium chloride crystal structure. The diagrams below are schematic two-dimensional representations. See also Deep-level transient spectroscopy (DLTS) Schottky defect Wigner effect Crystallographic defect References Further reading Crystallographic defects
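The equilibrium concentration mentioned above follows a simple Boltzmann (mass-action) form; the sketch below uses the common textbook relation n = sqrt(N N') * exp(-ΔH_F / (2kT)), where the factor of two arises because each Frenkel pair creates one vacancy and one interstitial. The formula and the numbers are supplied here for illustration, not taken from the article:

import math

def frenkel_pair_concentration(n_sites, n_interstitial_sites, dH_eV, T_K):
    """Equilibrium Frenkel-pair concentration per unit volume,
    n = sqrt(N * N') * exp(-dH / (2 k T)); standard mass-action result,
    not a formula quoted in the article above."""
    k_eV = 8.617e-5                      # Boltzmann constant in eV/K
    return math.sqrt(n_sites * n_interstitial_sites) * math.exp(-dH_eV / (2 * k_eV * T_K))

# Illustrative only: ~5e28 lattice and interstitial sites per m^3, a 3 eV formation
# enthalpy, and 1000 K give a site fraction of order 1e-8, showing how strongly
# thermal Frenkel-pair generation is suppressed at equilibrium.
print(f"{frenkel_pair_concentration(5e28, 5e28, 3.0, 1000):.2e} pairs per m^3")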
Frenkel defect
Chemistry,Materials_science,Engineering
627
81,832
https://en.wikipedia.org/wiki/Timeline%20of%20Solar%20System%20exploration
This is a timeline of Solar System exploration ordering events in the exploration of the Solar System by date of spacecraft launch. It includes: All spacecraft that have left Earth orbit for the purposes of Solar System exploration (or were launched with that intention but failed), including lunar probes. A small number of pioneering or notable Earth-orbiting craft. It does not include: Centuries of terrestrial telescopic observation. The great majority of Earth-orbiting satellites. Space probes leaving Earth orbit that are not concerned with Solar System exploration (such as space telescopes targeted at distant galaxies, cosmic background radiation observatories, and so on). Probes that failed at launch. The dates listed are launch dates, but the achievements noted may have occurred some time later; in some cases, a considerable time later (for example, Voyager 2, launched 20 August 1977, did not reach Neptune until 1989). 1950s 1960s 1970s 1980s 1990s 2000s 2010s 2020s Planned or scheduled See also Discovery and exploration of the Solar System Human presence in space List of missions to the Moon List of missions to Venus List of missions to Mars List of Solar System probes List of interplanetary voyages List of space telescopes New Frontiers program Out of the Cradle – 1984 book about scientific speculation on future missions. Space Race Timeline of artificial satellites and space probes Timeline of discovery of Solar System planets and their moons Timeline of first orbital launches by country Timeline of space exploration Timeline of space travel by nationality Timeline of spaceflight References External links NASA Lunar and Planetary Science NASA Solar System Strategic Exploration Plans Soviet Lunar, Martian, Venusian and Terrestrial Space Image Catalog Solar System Exploration Discovery and exploration of the Solar System Solar System
Timeline of Solar System exploration
Astronomy
334
54,025,008
https://en.wikipedia.org/wiki/Chandrasekhar%20virial%20equations
In astrophysics, the Chandrasekhar virial equations are a hierarchy of moment equations of the Euler equations, developed by the Indian-American astrophysicist Subrahmanyan Chandrasekhar, the physicist Enrico Fermi, and Norman R. Lebovitz. Mathematical description Consider a fluid mass of volume with density and an isotropic pressure with vanishing pressure at the bounding surfaces. Here, refers to a frame of reference attached to the center of mass. Before describing the virial equations, let's define some moments. The density moments are defined as the pressure moments are the kinetic energy moments are and the Chandrasekhar potential energy tensor moments are where is the gravitational constant. All the tensors are symmetric by definition. The moment of inertia , kinetic energy and the potential energy are just traces of the following tensors Chandrasekhar assumed that the fluid mass is subjected to the pressure force and its own gravitational force; then the Euler equations are First order virial equation Second order virial equation In steady state, the equation becomes Third order virial equation In steady state, the equation becomes Virial equations in rotating frame of reference The Euler equations in a rotating frame of reference, rotating with an angular velocity , are given by where is the Levi-Civita symbol, is the centrifugal acceleration and is the Coriolis acceleration. Steady state second order virial equation In steady state, the second order virial equation becomes If the axis of rotation is chosen in the direction, the equation becomes and Chandrasekhar shows that in this case, the tensors can take only the following form Steady state third order virial equation In steady state, the third order virial equation becomes If the axis of rotation is chosen in the direction, the equation becomes Steady state fourth order virial equation With being the axis of rotation, the steady state fourth order virial equation was also derived by Chandrasekhar in 1968. The equation reads as Virial equations with viscous stresses Consider the Navier-Stokes equations instead of the Euler equations, and define the shear-energy tensor as With the condition that the normal component of the total stress on the free surface must vanish, i.e., , where is the outward unit normal, the second order virial equation then becomes This can easily be extended to rotating frames of reference. See also Virial theorem Dirichlet's ellipsoidal problem Chandrasekhar tensor References Stellar dynamics Fluid dynamics
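As a sketch of the standard forms these moments take in Chandrasekhar's notation (conventional textbook definitions, supplied here rather than quoted from the text above), the moment of inertia, kinetic energy and pressure moments are

I_{ij} = \int_V \rho\, x_i x_j \, d\mathbf{x}, \qquad
\mathfrak{T}_{ij} = \tfrac{1}{2}\int_V \rho\, u_i u_j \, d\mathbf{x}, \qquad
\Pi = \int_V p \, d\mathbf{x},

the potential energy tensor is

\mathfrak{W}_{ij} = -\tfrac{1}{2}\int_V \rho\, \Phi_{ij}\, d\mathbf{x}, \qquad
\Phi_{ij}(\mathbf{x}) = G \int_V \rho(\mathbf{x}')\, \frac{(x_i - x_i')(x_j - x_j')}{|\mathbf{x} - \mathbf{x}'|^{3}}\, d\mathbf{x}',

and in a non-rotating steady state the second-order virial equation takes the familiar form 2\mathfrak{T}_{ij} + \mathfrak{W}_{ij} + \delta_{ij}\Pi = 0.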
Chandrasekhar virial equations
Physics,Chemistry,Engineering
497
45,031,863
https://en.wikipedia.org/wiki/Liquid%20biopsy
A liquid biopsy, also known as fluid biopsy or fluid phase biopsy, is the sampling and analysis of non-solid biological tissue, primarily blood. Like traditional biopsy, this type of technique is mainly used as a diagnostic and monitoring tool for diseases such as cancer, with the added benefit of being largely non-invasive. Liquid biopsies may also be used to validate the efficiency of a cancer treatment drug by taking multiple samples in the span of a few weeks. The technology may also prove beneficial for patients after treatment to monitor relapse. The clinical implementation of liquid biopsies is not yet widespread but is becoming standard of care in some areas. Liquid biopsy refers to the molecular analysis in biological fluids of nucleic acids, subcellular structures, especially exosomes, and, in the context of cancer, circulating tumor cells. Types There are several types of liquid biopsy methods; method selection depends on the condition that is being studied. A wide variety of biomarkers may be studied to detect or monitor other diseases. For example, isolation of protoporphyrin IX from blood samples can be used as a diagnostic tool for atherosclerosis. Cancer biomarkers in the blood include PSA (prostate cancer), CA19-9 (pancreatic cancer) and CA-125 (ovarian cancer). Mechanism Circulating tumor DNA (ctDNA) refers to DNA released by cancerous cells into the blood stream. Cancer mutations in ctDNA mirror those found in traditional tumor biopsies, which allows them to be used as molecular biomarkers to track the disease. These tests can have sensitive limits of detection, allowing monitoring of minimal residual disease after treatment. Scientists can purify and analyze ctDNA using next-generation sequencing (NGS) or PCR-based methods such as digital PCR. NGS-based methods provide a comprehensive view of a cancer’s genetic makeup and is especially useful in diagnosis while digital PCR offers a more targeted approach especially well-suited for detecting minimal residual disease and for monitoring treatment response and disease progression. Recent progress in epigenetics has expanded the use of liquid biopsy for the detection of early-stage cancers, including by approaches such as Cancer Likelihood in Plasma (CLiP) . Liquid biopsies can detect changes in tumor burden months or years before conventional imaging tests can, making them suitable for early tumor detection, monitoring, and detection of resistance mutations. The increase in the adoption of NGS in various research fields, advancement in NGS, and increase in the adoption of personalized medicine are expected to drive growth in the global liquid biopsy market. Clinical application In cancer, liquid biopsy can be used for either multi-cancer screening tests, when solid tumor biopsies are not possible, to compare different treatments as part of clinical trials, to inform decisions for doctors/patients on which precision medicine treatment to select, and for minimal residual disease detection (disease monitoring). Liquid biopsy of circulating tumor DNA for EGFR-mutated lung cancer is approved by the FDA. The CellSearch method for enumeration of circulating tumor cells in metastatic breast, metastatic colon, and metastatic prostate cancer has been validated and approved by the FDA as a useful prognostic method. 
See also Radiographic imaging Cancer screening#Blood tests Circulating free DNA External links NucPosDB: a database of nucleosome positioning in vivo and nucleosomics of cell-free DNA References Biopsy Medical procedures Cancer screening
Liquid biopsy
Chemistry
715
24,713,216
https://en.wikipedia.org/wiki/Guide%20rail
A guide rail is a device or mechanism to direct products, vehicles or other objects through a channel, conveyor, roadway or rail system. Several types of guide rails exist and may be associated with: Factory or production line conveyors Power tools, such as table saws Elevator or lift shafts Roadways and bridges (in this context sometimes called guardrails) A central rail that guides the rubber tired train of a rubber tired metro Factory guide rail Most factories use guide rails to convey products and component parts along an assembly line. This conveyor system propels products of various sizes, shapes, and dimensions through the factory over the course of their assembly. Power tool guide rail A power tool guide rail is an accessory to a power tool, such as a straight, swivel or angle jig for a circular saw, and can also be referred to as a fence. The guide rail system provides an accurate method of cutting material. Elevator shaft guide rail Guide rails are part of the inner workings of most elevator and lift shafts, functioning as the vertical, internal track. The guide rails are fixed to two sides of the shaft; one guides the elevator car and the other guides the counterweight. In tandem, these rails operate both as stabilization within the shaft during routine use and as a safety system in case of emergency stops. Roadway guide rail A roadway guide rail is a system designed to guide vehicles back to the roadway and away from potentially hazardous situations. There is no legal distinction between a guide rail and a guard rail. According to the US Federal Highway Administration, the terms guardrail and guiderail are synonymous. Several types of roadway guide rail exist; all are engineered to guide vehicular traffic on roads or bridges. Such systems include W-beam, box beam, cable, and concrete barrier. Each system is intended to guide vehicles back onto the road as opposed to guard them from going off the road into potential danger. Railway guide rail On the Sapporo Municipal Subway a central rail guides the train. The Lille Metro, Translohr and Bombardier Guided Light Transit are also guided by a central guide rail. See also Automated guideway transit Baluster Concrete step barrier Crash barrier Flangeways Guard rail Guide bar Handrail Jersey barrier Rubber-tyred metros Rubber-tyred trams References Power tools Safety equipment Road transport Rail transport
Guide rail
Physics
460
16,291,039
https://en.wikipedia.org/wiki/NGC%204889
NGC 4889 (also known as Caldwell 35) is an E4 supergiant elliptical galaxy. It was discovered in 1785 by the British astronomer Frederick William Herschel I, who catalogued it as a bright, nebulous patch. The brightest galaxy within the northern Coma Cluster, it is located at a median distance of 94 million parsecs (308 million light years) from Earth. At the core of the galaxy is a supermassive black hole that heats the intracluster medium through the action of friction from infalling gases and dust. The gamma ray bursts from the galaxy extend out to several million light years of the cluster. As with other similar elliptical galaxies, only a fraction of the mass of NGC 4889 is in the form of stars. They have a flattened, unequal distribution that bulges within its edge. Between the stars is a dense interstellar medium full of heavy elements emitted by evolved stars. The diffuse stellar halo extends out to one million light years in diameter. Orbiting the galaxy is a very large population of globular clusters. NGC 4889 is also a strong source of soft X-ray, ultraviolet, and radio frequency radiation. As the largest and the most massive galaxy easily visible to Earth, NGC 4889 has played an important role in both amateur and professional astronomy, and has become a prototype in studying the dynamical evolution of other supergiant elliptical galaxies in the more distant universe. Observation NGC 4889 was not included by the astronomer Charles Messier in his famous Messier catalogue despite being an intrinsically bright object quite close to some Messier objects. The first known observation of NGC 4889 was that of Frederick William Herschel I, assisted by his sister, Caroline Lucretia Herschel, in 1785, who included it in the Catalogue of Nebulae and Clusters of Stars published a year later. In 1864, Herschel's son, John Frederick William Herschel, published the General Catalogue of Nebulae and Clusters of Stars. He included the objects catalogued by his father, including the one later to be called NGC 4889, plus others he found that were somehow missed by his father. In 1888 the astronomer John Louis Emil Dreyer published the New General Catalogue of Nebulae and Clusters of Stars (NGC), with a total of 7,840 objects, but he erroneously duplicated the galaxy in two designations, NGC 4884 and NGC 4889. Within the following century, several projects aimed to revise the NGC catalogue, such as The NGC/IC Project, Revised New General Catalogue of Nebulae and Clusters of Stars, and the NGC 2000.0 projects, discovered the duplication. It was then decided that the object would be called by its latter designation, NGC 4889, which is in use today. In December 1995, Patrick Caldwell Moore compiled the Caldwell catalogue, a list of 109 persistent, bright objects that were somehow missed by Messier in his catalogue. The list also includes NGC 4889, which is given the designation Caldwell 35. Properties NGC 4889 is located along the high declination region of Coma Berenices, south of the constellation Canes Venatici. It can be traced by following the line from Beta Comae Berenices to Gamma Comae Berenices. With an apparent magnitude of 11.4, it can be seen by telescopes with 12 inch aperture, but its visibility is greatly affected by light pollution due to glare of the light from Beta Comae Berenices. However, under very dark, moonless skies, it can be seen by small telescopes as a faint smudge, but larger telescopes are needed in order to see the galaxy's halo. 
In the updated Hubble sequence galaxy morphological classification scheme by the French astronomer Gérard de Vaucouleurs in 1959, NGC 4889 is classified as an E4 type galaxy, which means it has a flat distribution of stars within its width. It is also classified as a cD galaxy, a giant type of D galaxy, a classification devised by the American astronomer William Wilson Morgan in 1958 for galaxies with an elliptical-shaped nucleus surrounded by an immense, diffuse, dustless, extended halo. NGC 4889 is far enough that its distance can be measured using redshift. The redshift of 0.0266 as derived from the Sloan Digital Sky Survey, together with the Hubble constant as determined in 2013 by the ESA COBRAS/SAMBA/Planck Surveyor, translates to a distance of 94 Mpc (308 million light years) from Earth. NGC 4889 is probably the largest and the most massive galaxy out to a radius of 100 Mpc (326 million light years) of the Milky Way. The galaxy has an effective radius which extends over 2.9 arcminutes of the sky, translating to a diameter of 239,000 light years, about the size of the Andromeda Galaxy. In addition it has an immense diffuse light halo extending to 17.8 arcminutes, roughly half the angular diameter of the Sun, translating to 1.3 million light years in diameter. Along with its large size, NGC 4889 may also be extremely massive. If we take the Milky Way as the standard of mass, it may be close to 8 trillion solar masses. However, as NGC 4889 is a spheroid, and not a flat spiral, it has a three-dimensional profile, so it may be as high as 15 trillion solar masses. However, as usual for elliptical galaxies, only a small fraction of the mass of NGC 4889 is in the form of stars. Components Giant elliptical galaxies like NGC 4889 are believed to be the result of multiple mergers of smaller galaxies. There is now little dust remaining to form the diffuse nebulae where new stars are created, so the stellar population is dominated by old, population II stars that contain relatively low abundances of elements other than hydrogen and helium. The egg-like shape of this galaxy is maintained by random orbital motions of its member stars, in contrast to the more orderly rotational motions found in a spiral galaxy such as the Milky Way. NGC 4889 has 15,800 globular clusters, more than Messier 87, which has 12,000. This is half of NGC 4874's collection of globular clusters, which has 30,000 globular clusters. The space between the stars in the galaxy is filled with a diffuse interstellar medium of gas, which has been filled by the elements ejected from stars as they passed beyond the end of their main sequence lifetime. Carbon and nitrogen are being continuously supplied by intermediate mass stars as they pass through the asymptotic giant branch. The heavier elements from oxygen to iron are primarily produced by supernova explosions within the galaxy. The interstellar medium is continuously heated by the emission of in-falling gases towards its central SMBH. Supermassive black hole On December 5, 2011, astronomers measured the velocity dispersion of the central regions of two massive galaxies, one being NGC 4889 and the other NGC 3842 in the Leo Cluster. According to the data of the study, they found that the central supermassive black hole of NGC 4889 is 5,200 times more massive than the central black hole of the Milky Way, or equivalent to 2.1 × 10^10 (21 billion) solar masses (best fit of data; possible range is from 6 billion to 37 billion solar masses).
This makes it one of the most massive black holes on record. The diameter of the black hole's immense event horizon is about 20 to 124 billion kilometers, 2 to 12 times the diameter of Pluto's orbit. The ionized medium detected around the black hole suggests that NGC 4889 may have been a quasar in the past. It is quiescent, presumably because it has already absorbed all readily available matter. Environment NGC 4889 lies at the center of the component A of the Coma Cluster, a giant cluster of 2,000 galaxies which it shares with NGC 4874, although NGC 4889 is sometimes referred as the cluster center, and it has been called by its other designation A1656-BCG. The total mass of the cluster is estimated to be on the order of 4 . The Coma Cluster is located at exactly the center of the Coma Supercluster, which is one of the nearest superclusters to the Laniakea Supercluster. The Coma Supercluster itself is within the CfA Homunculus, the center of the CfA2 Great Wall, the nearest galaxy filament to Earth and one of the largest structures in the known universe. Notes References External links Elliptical galaxies Coma Cluster 4889 035b Coma Berenices Astronomical objects discovered in 1785 +5-31-77 44715 08110 Discoveries by William Herschel
NGC 4889
Astronomy
1,807
13,792,647
https://en.wikipedia.org/wiki/LanguageWare
LanguageWare is a natural language processing (NLP) technology developed by IBM, which allows applications to process natural language text. It comprises a set of Java libraries that provide a range of NLP functions: language identification, text segmentation/tokenization, normalization, entity and relationship extraction, and semantic analysis and disambiguation. The analysis engine uses a finite-state machine approach at multiple levels, which aids its performance characteristics while maintaining a reasonably small footprint. The behaviour of the system is driven by a set of configurable lexico-semantic resources which describe the characteristics and domain of the processed language. A default set of resources comes as part of LanguageWare and these describe the native language characteristics, such as morphology, and the basic vocabulary for the language. Supplemental resources have been created that capture additional vocabularies, terminologies, rules and grammars, which may be generic to the language or specific to one or more domains. A set of Eclipse-based customization tooling, LanguageWare Resource Workbench, is available on IBM's alphaWorks site, and allows domain knowledge to be compiled into these resources and thereby incorporated into the analysis process. LanguageWare can be deployed as a set of UIMA-compliant annotators, Eclipse plug-ins or Web Services. See also Data Discovery and Query Builder Formal language IBM Omnifind Linguistics Semantic Web Semantics Service-oriented architecture Web services UIMA References External links IBM LanguageWare Resource Workbench on alphaWorks IBM LanguageWare Miner for Multidimensional Socio-Semantic Networks on alphaWorks JumpStart Infocenter for IBM LanguageWare on IBM.com UIMA Homepage at the Apache Software Foundation UIMA Framework on SourceForge IBM OmniFind Yahoo! Edition (FREE enterprise search engine) Semantic Information Systems and Language Engineering Group SemanticDesktop.org Related Papers Branimir K. Boguraev Annotation-Based Finite State Processing in a Large-Scale NLP Architecture, IBM Research Report, 2004 Alexander Troussov, Mikhail Sogrin, "IBM LanguageWare Ontological Network Miner" Sheila Kinsella, Andreas Harth, Alexander Troussov, Mikhail Sogrin, John Judge, Conor Hayes, John G. Breslin, "Navigating and Annotating Semantically-Enabled Networks of People and Associated Objects" Mikhail Kotelnikov, Alexander Polonsky, Malte Kiesel, Max Völkel, Heiko Haller, Mikhail Sogrin, Pär Lannerö, Brian Davis, "Interactive Semantic Wikis" Sebastian Trüg, Jos van den Oever, Stéphane Laurière, "The Social Semantic Desktop: Nepomuk" Séamus Lawless, Vincent Wade, "Dynamic Content Discovery, Harvesting and Delivery" R. Mack, S. Mukherjea, A. Soffer, N. Uramoto, E. Brown, A. Coden, J. Cooper, A. Inokuchi, B. Iyer, Y. Mass, H. Matsuzawa, and L. V. Subramaniam, "Text analytics for life science using the Unstructured Information Management Architecture" Alex Nevidomsky, "UIMA Framework and Knowledge Discovery at IBM", 4th Text Mining Symposium, Fraunhofer SCAI, 2006 Data mining and machine learning software Java development tools Natural language processing Java (programming language) libraries
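The annotator pipeline described above (tokenization followed by dictionary-driven entity annotation with character offsets) can be pictured with a short, generic sketch. The code below is not LanguageWare's Java API and does not reproduce any of its resources; the lexicon, type names, and function names are invented solely for illustration.

```python
import re

# Generic illustration of a two-stage annotator: tokenize the input text, then
# emit UIMA-style annotations (type, begin offset, end offset) from a tiny
# dictionary. All names and entries here are made-up placeholders.

LEXICON = {"ibm": "Organization", "dublin": "Location"}

def tokenize(text):
    # yield (surface form, begin offset, end offset) for each word-like token
    for match in re.finditer(r"\w+", text):
        yield match.group(0), match.start(), match.end()

def annotate(text):
    annotations = []
    for surface, begin, end in tokenize(text):
        entity_type = LEXICON.get(surface.lower())
        if entity_type:
            annotations.append({"type": entity_type, "begin": begin,
                                "end": end, "covered": surface})
    return annotations

print(annotate("IBM opened a research lab in Dublin."))
```

In a real deployment the dictionary lookup would be replaced by the configurable lexico-semantic resources and multi-level finite-state machinery described above.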
LanguageWare
Technology
693
67,416,519
https://en.wikipedia.org/wiki/Ultrafilter%20on%20a%20set
In the mathematical field of set theory, an ultrafilter on a set is a maximal filter on the set In other words, it is a collection of subsets of that satisfies the definition of a filter on and that is maximal with respect to inclusion, in the sense that there does not exist a strictly larger collection of subsets of that is also a filter. (In the above, by definition a filter on a set does not contain the empty set.) Equivalently, an ultrafilter on the set can also be characterized as a filter on with the property that for every subset of either or its complement belongs to the ultrafilter. Ultrafilters on sets are an important special instance of ultrafilters on partially ordered sets, where the partially ordered set consists of the power set and the partial order is subset inclusion This article deals specifically with ultrafilters on a set and does not cover the more general notion. There are two types of ultrafilter on a set. A principal ultrafilter on is the collection of all subsets of that contain a fixed element . The ultrafilters that are not principal are the free ultrafilters. The existence of free ultrafilters on any infinite set is implied by the ultrafilter lemma, which can be proven in ZFC. On the other hand, there exists models of ZF where every ultrafilter on a set is principal. Ultrafilters have many applications in set theory, model theory, and topology. Usually, only free ultrafilters lead to non-trivial constructions. For example, an ultraproduct modulo a principal ultrafilter is always isomorphic to one of the factors, while an ultraproduct modulo a free ultrafilter usually has a more complex structure. Definitions Given an arbitrary set an ultrafilter on is a non-empty family of subsets of such that: or : The empty set is not an element of : If and if is any superset of (that is, if ) then : If and are elements of then so is their intersection If then either or its complement is an element of Properties (1), (2), and (3) are the defining properties of a Some authors do not include non-degeneracy (which is property (1) above) in their definition of "filter". However, the definition of "ultrafilter" (and also of "prefilter" and "filter subbase") always includes non-degeneracy as a defining condition. This article requires that all filters be proper although a filter might be described as "proper" for emphasis. A filter base is a non-empty family of sets that has the finite intersection property (i.e. all finite intersections are non-empty). Equivalently, a filter subbase is a non-empty family of sets that is contained in (proper) filter. The smallest (relative to ) filter containing a given filter subbase is said to be generated by the filter subbase. The upward closure in of a family of sets is the set A or is a non-empty and proper (i.e. ) family of sets that is downward directed, which means that if then there exists some such that Equivalently, a prefilter is any family of sets whose upward closure is a filter, in which case this filter is called the filter generated by and is said to be a filter base The dual in of a family of sets is the set For example, the dual of the power set is itself: A family of sets is a proper filter on if and only if its dual is a proper ideal on ("" means not equal to the power set). Generalization to ultra prefilters A family of subsets of is called if and any of the following equivalent conditions are satisfied: For every set there exists some set such that or (or equivalently, such that equals or ). 
For every set there exists some set such that equals or Here, is defined to be the union of all sets in This characterization of " is ultra" does not depend on the set so mentioning the set is optional when using the term "ultra." For set (not necessarily even a subset of ) there exists some set such that equals or If satisfies this condition then so does superset In particular, a set is ultra if and only if and contains as a subset some ultra family of sets. A filter subbase that is ultra is necessarily a prefilter. The ultra property can now be used to define both ultrafilters and ultra prefilters: An is a prefilter that is ultra. Equivalently, it is a filter subbase that is ultra. An on is a (proper) filter on that is ultra. Equivalently, it is any filter on that is generated by an ultra prefilter. Ultra prefilters as maximal prefilters To characterize ultra prefilters in terms of "maximality," the following relation is needed. Given two families of sets and the family is said to be coarser than and is finer than and subordinate to written or , if for every there is some such that The families and are called equivalent if and The families and are comparable if one of these sets is finer than the other. The subordination relationship, i.e. is a preorder so the above definition of "equivalent" does form an equivalence relation. If then but the converse does not hold in general. However, if is upward closed, such as a filter, then if and only if Every prefilter is equivalent to the filter that it generates. This shows that it is possible for filters to be equivalent to sets that are not filters. If two families of sets and are equivalent then either both and are ultra (resp. prefilters, filter subbases) or otherwise neither one of them is ultra (resp. a prefilter, a filter subbase). In particular, if a filter subbase is not also a prefilter, then it is equivalent to the filter or prefilter that it generates. If and are both filters on then and are equivalent if and only if If a proper filter (resp. ultrafilter) is equivalent to a family of sets then is necessarily a prefilter (resp. ultra prefilter). Using the following characterization, it is possible to define prefilters (resp. ultra prefilters) using only the concept of filters (resp. ultrafilters) and subordination: An arbitrary family of sets is a prefilter if and only it is equivalent to a (proper) filter. An arbitrary family of sets is an ultra prefilter if and only it is equivalent to an ultrafilter. A on is a prefilter that satisfies any of the following equivalent conditions: is ultra. is maximal on with respect to meaning that if satisfies then There is no prefilter properly subordinate to If a (proper) filter on satisfies then The filter on generated by is ultra. Characterizations There are no ultrafilters on the empty set, so it is henceforth assumed that is nonempty. A filter base on is an ultrafilter on if and only if any of the following equivalent conditions hold: for any either or is a maximal filter subbase on meaning that if is any filter subbase on then implies A (proper) filter on is an ultrafilter on if and only if any of the following equivalent conditions hold: is ultra; is generated by an ultra prefilter; For any subset or So an ultrafilter decides for every whether is "large" (i.e. ) or "small" (i.e. ). For each subset either is in or () is. 
This condition can be restated as: is partitioned by and its dual The sets and are disjoint for all prefilters on is an ideal on For any finite family of subsets of (where ), if then for some index In words, a "large" set cannot be a finite union of sets none of which is large. For any if then or For any if then or (a filter with this property is called a ). For any if and then or is a maximal filter; that is, if is a filter on such that then Equivalently, is a maximal filter if there is no filter on that contains as a proper subset (that is, no filter is strictly finer than ). Grills and filter-grills If then its is the family where may be written if is clear from context. For example, and if then If then and moreover, if is a filter subbase then The grill is upward closed in if and only if which will henceforth be assumed. Moreover, so that is upward closed in if and only if The grill of a filter on is called a For any is a filter-grill on if and only if (1) is upward closed in and (2) for all sets and if then or The grill operation induces a bijection whose inverse is also given by If then is a filter-grill on if and only if or equivalently, if and only if is an ultrafilter on That is, a filter on is a filter-grill if and only if it is ultra. For any non-empty is both a filter on and a filter-grill on if and only if (1) and (2) for all the following equivalences hold: if and only if if and only if Free or principal If is any non-empty family of sets then the Kernel of is the intersection of all sets in A non-empty family of sets is called: if and otherwise (that is, if ). if if and is a singleton set; in this case, if then is said to be principal at If a family of sets is fixed then is ultra if and only if some element of is a singleton set, in which case will necessarily be a prefilter. Every principal prefilter is fixed, so a principal prefilter is ultra if and only if is a singleton set. A singleton set is ultra if and only if its sole element is also a singleton set. The next theorem shows that every ultrafilter falls into one of two categories: either it is free or else it is a principal filter generated by a single point. Every filter on that is principal at a single point is an ultrafilter, and if in addition is finite, then there are no ultrafilters on other than these. In particular, if a set has finite cardinality then there are exactly ultrafilters on and those are the ultrafilters generated by each singleton subset of Consequently, free ultrafilters can only exist on an infinite set. Examples, properties, and sufficient conditions If is an infinite set then there are as many ultrafilters over as there are families of subsets of explicitly, if has infinite cardinality then the set of ultrafilters over has the same cardinality as that cardinality being If and are families of sets such that is ultra, and then is necessarily ultra. A filter subbase that is not a prefilter cannot be ultra; but it is nevertheless still possible for the prefilter and filter generated by to be ultra. Suppose is ultra and is a set. The trace is ultra if and only if it does not contain the empty set. Furthermore, at least one of the sets and will be ultra (this result extends to any finite partition of ). If are filters on is an ultrafilter on and then there is some that satisfies This result is not necessarily true for an infinite family of filters. The image under a map of an ultra set is again ultra and if is an ultra prefilter then so is The property of being ultra is preserved under bijections. 
However, the preimage of an ultrafilter is not necessarily ultra, not even if the map is surjective. For example, if has more than one point and if the range of consists of a single point then is an ultra prefilter on but its preimage is not ultra. Alternatively, if is a principal filter generated by a point in then the preimage of contains the empty set and so is not ultra. The elementary filter induced by an infinite sequence, all of whose points are distinct, is an ultrafilter. If then denotes the set consisting all subsets of having cardinality and if contains at least () distinct points, then is ultra but it is not contained in any prefilter. This example generalizes to any integer and also to if contains more than one element. Ultra sets that are not also prefilters are rarely used. For every and every let If is an ultrafilter on then the set of all such that is an ultrafilter on Monad structure The functor associating to any set the set of of all ultrafilters on forms a monad called the . The unit map sends any element to the principal ultrafilter given by This ultrafilter monad is the codensity monad of the inclusion of the category of finite sets into the category of all sets, which gives a conceptual explanation of this monad. Similarly, the ultraproduct monad is the codensity monad of the inclusion of the category of finite families of sets into the category of all families of set. So in this sense, ultraproducts are categorically inevitable. The ultrafilter lemma The ultrafilter lemma was first proved by Alfred Tarski in 1930. The ultrafilter lemma is equivalent to each of the following statements: For every prefilter on a set there exists a maximal prefilter on subordinate to it. Every proper filter subbase on a set is contained in some ultrafilter on A consequence of the ultrafilter lemma is that every filter is equal to the intersection of all ultrafilters containing it. The following results can be proven using the ultrafilter lemma. A free ultrafilter exists on a set if and only if is infinite. Every proper filter is equal to the intersection of all ultrafilters containing it. Since there are filters that are not ultra, this shows that the intersection of a family of ultrafilters need not be ultra. A family of sets can be extended to a free ultrafilter if and only if the intersection of any finite family of elements of is infinite. Relationships to other statements under ZF Throughout this section, ZF refers to Zermelo–Fraenkel set theory and ZFC refers to ZF with the Axiom of Choice (AC). The ultrafilter lemma is independent of ZF. That is, there exist models in which the axioms of ZF hold but the ultrafilter lemma does not. There also exist models of ZF in which every ultrafilter is necessarily principal. Every filter that contains a singleton set is necessarily an ultrafilter and given the definition of the discrete ultrafilter does not require more than ZF. If is finite then every ultrafilter is a discrete filter at a point; consequently, free ultrafilters can only exist on infinite sets. In particular, if is finite then the ultrafilter lemma can be proven from the axioms ZF. The existence of free ultrafilter on infinite sets can be proven if the axiom of choice is assumed. More generally, the ultrafilter lemma can be proven by using the axiom of choice, which in brief states that any Cartesian product of non-empty sets is non-empty. 
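The claim above that every ultrafilter on a finite set is principal (a discrete filter at a point) is small enough to check exhaustively. The following Python sketch, added here as an illustration and not part of the original article, encodes the defining properties from the Definitions section and brute-forces every family of subsets of a three-element set.

```python
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

X = frozenset({0, 1, 2})
SUBSETS = powerset(X)

def is_ultrafilter(family):
    F = set(family)
    if not F or frozenset() in F:       # non-empty and proper: no empty set
        return False
    for A in F:                         # upward closed in X
        if any(A <= B and B not in F for B in SUBSETS):
            return False
    for A in F:                         # closed under pairwise intersection
        if any((A & B) not in F for B in F):
            return False
    # maximality: every subset of X, or its complement, belongs to F
    return all(A in F or (X - A) in F for A in SUBSETS)

# brute force: test each of the 2**8 = 256 families of subsets of X
families = (frozenset(c) for r in range(len(SUBSETS) + 1)
            for c in combinations(SUBSETS, r))
ultrafilters = {F for F in families if is_ultrafilter(F)}

# the principal ultrafilter at x is the family of all subsets containing x
principal = {frozenset(A for A in SUBSETS if x in A) for x in X}

assert ultrafilters == principal
print(f"found {len(ultrafilters)} ultrafilters on a 3-element set; all are principal")
```

Free ultrafilters, by contrast, cannot be exhibited this way: their existence on an infinite set rests on the ultrafilter lemma rather than on any explicit construction.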
Under ZF, the axiom of choice is, in particular, equivalent to (a) Zorn's lemma, (b) Tychonoff's theorem, (c) the weak form of the vector basis theorem (which states that every vector space has a basis), (d) the strong form of the vector basis theorem, and other statements. However, the ultrafilter lemma is strictly weaker than the axiom of choice. While free ultrafilters can be proven to exist, it is possible to construct an explicit example of a free ultrafilter (using only ZF and the ultrafilter lemma); that is, free ultrafilters are intangible. Alfred Tarski proved that under ZFC, the cardinality of the set of all free ultrafilters on an infinite set is equal to the cardinality of where denotes the power set of Other authors attribute this discovery to Bedřich Pospíšil (following a combinatorial argument from Fichtenholz, and Kantorovitch, improved by Hausdorff). Under ZF, the axiom of choice can be used to prove both the ultrafilter lemma and the Krein–Milman theorem; conversely, under ZF, the ultrafilter lemma together with the Krein–Milman theorem can prove the axiom of choice. Statements that cannot be deduced The ultrafilter lemma is a relatively weak axiom. For example, each of the statements in the following list can be deduced from ZF together with the ultrafilter lemma: A countable union of countable sets is a countable set. The axiom of countable choice (ACC). The axiom of dependent choice (ADC). Equivalent statements Under ZF, the ultrafilter lemma is equivalent to each of the following statements: <li>The Boolean prime ideal theorem (BPIT). Stone's representation theorem for Boolean algebras. Any product of Boolean spaces is a Boolean space. Boolean Prime Ideal Existence Theorem: Every nondegenerate Boolean algebra has a prime ideal. Tychonoff's theorem for Hausdorff spaces: Any product of compact Hausdorff spaces is compact. If is endowed with the discrete topology then for any set the product space is compact. Each of the following versions of the Banach-Alaoglu theorem is equivalent to the ultrafilter lemma: Any equicontinuous set of scalar-valued maps on a topological vector space (TVS) is relatively compact in the weak-* topology (that is, it is contained in some weak-* compact set). The polar of any neighborhood of the origin in a TVS is a weak-* compact subset of its continuous dual space. The closed unit ball in the continuous dual space of any normed space is weak-* compact. If the normed space is separable then the ultrafilter lemma is sufficient but not necessary to prove this statement. A topological space is compact if every ultrafilter on converges to some limit. A topological space is compact if every ultrafilter on converges to some limit. The addition of the words "and only if" is the only difference between this statement and the one immediately above it. The Alexander subbase theorem. The Ultranet lemma: Every net has a universal subnet. By definition, a net in is called an or an if for every subset the net is eventually in or in A topological space is compact if and only if every ultranet on converges to some limit. If the words "and only if" are removed then the resulting statement remains equivalent to the ultrafilter lemma. A convergence space is compact if every ultrafilter on converges. A uniform space is compact if it is complete and totally bounded. The Stone–Čech compactification Theorem. 
<li>Each of the following versions of the compactness theorem is equivalent to the ultrafilter lemma: If is a set of first-order sentences such that every finite subset of has a model, then has a model. If is a set of zero-order sentences such that every finite subset of has a model, then has a model. The completeness theorem: If is a set of zero-order sentences that is syntactically consistent, then it has a model (that is, it is semantically consistent). Weaker statements Any statement that can be deduced from the ultrafilter lemma (together with ZF) is said to be than the ultrafilter lemma. A weaker statement is said to be if under ZF, it is not equivalent to the ultrafilter lemma. Under ZF, the ultrafilter lemma implies each of the following statements: The Axiom of Choice for Finite sets (ACF): Given and a family of non-empty sets, their product is not empty. A countable union of finite sets is a countable set. However, ZF with the ultrafilter lemma is too weak to prove that a countable union of sets is a countable set. The Hahn–Banach theorem. In ZF, the Hahn–Banach theorem is strictly weaker than the ultrafilter lemma. The Banach–Tarski paradox. In fact, under ZF, the Banach–Tarski paradox can be deduced from the Hahn–Banach theorem, which is strictly weaker than the Ultrafilter Lemma. Every set can be linearly ordered. Every field has a unique algebraic closure. Non-trivial ultraproducts exist. The weak ultrafilter theorem: A free ultrafilter exists on Under ZF, the weak ultrafilter theorem does not imply the ultrafilter lemma; that is, it is strictly weaker than the ultrafilter lemma. There exists a free ultrafilter on every infinite set; This statement is actually strictly weaker than the ultrafilter lemma. ZF alone does not even imply that there exists a non-principal ultrafilter on set. Completeness The completeness of an ultrafilter on a powerset is the smallest cardinal κ such that there are κ elements of whose intersection is not in The definition of an ultrafilter implies that the completeness of any powerset ultrafilter is at least . An ultrafilter whose completeness is than —that is, the intersection of any countable collection of elements of is still in —is called countably complete or σ-complete. The completeness of a countably complete nonprincipal ultrafilter on a powerset is always a measurable cardinal. The (named after Mary Ellen Rudin and Howard Jerome Keisler) is a preorder on the class of powerset ultrafilters defined as follows: if is an ultrafilter on and an ultrafilter on then if there exists a function such that if and only if for every subset Ultrafilters and are called , denoted , if there exist sets and and a bijection that satisfies the condition above. (If and have the same cardinality, the definition can be simplified by fixing ) It is known that ≡RK is the kernel of ≤RK, i.e., that if and only if and Ultrafilters on ℘(ω) There are several special properties that an ultrafilter on where extends the natural numbers, may possess, which prove useful in various areas of set theory and topology. A non-principal ultrafilter is called a P-point (or ) if for every partition of such that for all there exists some such that is a finite set for each A non-principal ultrafilter is called Ramsey (or selective) if for every partition of such that for all there exists some such that is a singleton set for each It is a trivial observation that all Ramsey ultrafilters are P-points. 
Walter Rudin proved that the continuum hypothesis implies the existence of Ramsey ultrafilters. In fact, many hypotheses imply the existence of Ramsey ultrafilters, including Martin's axiom. Saharon Shelah later showed that it is consistent that there are no P-point ultrafilters. Therefore, the existence of these types of ultrafilters is independent of ZFC. P-points are called as such because they are topological P-points in the usual topology of the space of non-principal ultrafilters. The name Ramsey comes from Ramsey's theorem. To see why, one can prove that an ultrafilter is Ramsey if and only if for every 2-coloring of there exists an element of the ultrafilter that has a homogeneous color. An ultrafilter on is Ramsey if and only if it is minimal in the Rudin–Keisler ordering of non-principal powerset ultrafilters. See also Notes Proofs References Bibliography Further reading Families of sets Nonstandard analysis Order theory
Ultrafilter on a set
Mathematics
5,075
7,617,805
https://en.wikipedia.org/wiki/Starvation%20response
Starvation response in animals (including humans) is a set of adaptive biochemical and physiological changes, triggered by lack of food or extreme weight loss, in which the body seeks to conserve energy by reducing metabolic rate and/or non-resting energy expenditure to prolong survival and preserve body fat and lean mass. Equivalent or closely related terms include famine response, starvation mode, famine mode, starvation resistance, starvation tolerance, adapted starvation, adaptive thermogenesis, fat adaptation, and metabolic adaptation. In humans Ordinarily, the body responds to reduced energy intake by burning fat reserves and consuming muscle and other tissues. Specifically, the body burns fat after first exhausting the contents of the digestive tract along with glycogen reserves present in both muscle and liver cells via glycogenolysis. After prolonged periods of starvation this store of glycogen runs out, and the body instead uses the proteins within muscle tissue as a fuel source, which results in muscle mass loss. Magnitude and composition The magnitude and composition of the starvation response (i.e. metabolic adaptation) was estimated in a study of 8 individuals living in isolation in Biosphere 2 for two years. During their isolation, they gradually lost an average of 15% (range: 9–24%) of their body weight due to harsh conditions. On emerging from isolation, the eight isolated individuals were compared with a 152-person control group that initially had similar physical characteristics. On average, the starvation response of the individuals after isolation was a reduction in daily total energy expenditure. of the starvation response was explained by a reduction in fat-free mass and fat mass. An additional was explained by a reduction in fidgeting. The remaining was statistically insignificant. General The energetic requirements of a body are composed of the basal metabolic rate (BMR) and the physical activity level (ERAT, exercise-related activity thermogenesis). This caloric requirement can be met with protein, fat, carbohydrates, or a mixture of those. Glucose is the general metabolic fuel, and can be metabolized by any cell. Fructose and some other nutrients can be metabolized only in the liver, where their metabolites transform into either glucose stored as glycogen in the liver and in muscles, or into fatty acids stored in adipose tissue. Because of the blood–brain barrier, getting nutrients to the human brain is especially dependent on molecules that can pass this barrier. The brain itself consumes about 18% of the basal metabolic rate: on a total daily intake of , this equates to , or about 80 g of glucose. About 25% of total body glucose consumption occurs in the brain. Glucose can be obtained directly from dietary sugars and by the breakdown of other carbohydrates. In the absence of dietary sugars and carbohydrates, glucose is obtained from the breakdown of stored glycogen. Glycogen is a readily-accessible storage form of glucose, stored in notable quantities in the liver and skeletal muscle. When the glycogen reserve is depleted, glucose can be obtained from the breakdown of fats from adipose tissue. Fats are broken down into glycerol and free fatty acids, with the glycerol being turned into glucose in the liver via the gluconeogenesis pathway. When even the glucose made from glycerol reserves start declining, the liver starts producing ketone bodies. 
Ketone bodies are short-chain derivatives of the free fatty acids mentioned in the previous paragraph, and can cross the blood–brain barrier, meaning they can be used by the brain as an alternative metabolic fuel. Fatty acids can be used directly as an energy source by most tissues in the body, but are themselves too ionized to cross the blood–brain barrier. Timeline After the exhaustion of the glycogen reserve, and for the next 2–3 days, fatty acids are the principal metabolic fuel. At first, the brain continues to use glucose, because if a non-brain tissue is using fatty acids as its metabolic fuel, the use of glucose in the same tissue is switched off. Thus, when fatty acids are being broken down for energy, all of the remaining glucose is made available for use by the brain. After 2 or 3 days of fasting, the liver begins to synthesize ketone bodies from precursors obtained from fatty acid breakdown. The brain uses these ketone bodies as fuel, thus cutting its requirement for glucose. After fasting for 3 days, the brain gets 30% of its energy from ketone bodies. After 4 days, this goes up to 75%. Thus, the production of ketone bodies cuts the brain's glucose requirement from 80 g per day to about 30 g per day. Of the remaining 30 g requirement, 20 g per day can be produced by the liver from glycerol (itself a product of fat breakdown). This still leaves a deficit of about 10 g of glucose per day that must come from some other source. This deficit is supplied via gluconeogenesis from amino acids from proteolysis of body proteins. After several days of fasting, all cells in the body begin to break down protein. This releases amino acids into the bloodstream, which can be converted into glucose by the liver. Since much of the human body's muscle mass is protein, this phenomenon is responsible for the wasting away of muscle mass seen in starvation. However, the body can selectively decide which cells break down protein and which do not. About 2–3 g of protein must be broken down to synthesize 1 g of glucose; about 20–30 g of protein is broken down each day to make 10 g of glucose to keep the brain alive. However, to conserve protein, this number may decrease the longer the fasting. Starvation ensues when the fat reserves are completely exhausted and protein is the only fuel source available to the body. Thus, after periods of starvation, the loss of body protein affects the function of important organs, and death results, even if there are still fat reserves left unused. (In a leaner person, the fat reserves are depleted earlier, the protein depletion occurs sooner, and therefore death occurs sooner.) The ultimate cause of death is, in general, cardiac arrhythmia or cardiac arrest brought on by tissue degradation and electrolyte imbalances. In the very obese, it has been shown that proteins can be depleted first. Accordingly, death from starvation is predicted to occur before fat reserves are used up. Biochemistry During starvation, less than half of the energy used by the brain comes from metabolized glucose. Because the human brain can use ketone bodies as major fuel sources, the body is not forced to break down skeletal muscles at a high rate, thereby maintaining both cognitive function and mobility for up to several weeks. This response is extremely important in human evolution and allowed for humans to continue to find food effectively even in the face of prolonged starvation. 
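The glucose arithmetic in the Timeline section can be tallied in one place. The short sketch below simply restates the round figures quoted above (an 80 g/day brain requirement, about 30 g/day still needed once ketone bodies dominate, 20 g/day supplied from glycerol, and 2–3 g of protein per gram of glucose); it is a back-of-the-envelope illustration, not a physiological model.

```python
# Back-of-the-envelope tally of the brain's daily glucose budget during
# prolonged fasting, using the round figures quoted in the Timeline section.

brain_glucose_fed = 80.0            # g/day before ketone adaptation
brain_glucose_fasted = 30.0         # g/day once ketone bodies supply most fuel
from_glycerol = 20.0                # g/day of glucose made from fat-derived glycerol
protein_per_g_glucose = (2.0, 3.0)  # g of protein consumed per g of glucose (range)

spared_by_ketones = brain_glucose_fed - brain_glucose_fasted
deficit = brain_glucose_fasted - from_glycerol   # must come from amino acids

low, high = (r * deficit for r in protein_per_g_glucose)

print(f"ketone bodies spare roughly {spared_by_ketones:.0f} g of glucose per day")
print(f"remaining deficit: {deficit:.0f} g/day supplied by gluconeogenesis from protein")
print(f"protein broken down for this: about {low:.0f}-{high:.0f} g per day")
```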
Initially, the level of insulin in circulation drops and the levels of glucagon, epinephrine and norepinephrine rise. At this time, there is an up-regulation of glycogenolysis, gluconeogenesis, lipolysis, and ketogenesis. The body's glycogen stores are consumed in about 24 hours. In a normal 70 kg adult, only about 8,000 kilojoules of glycogen are stored in the body (mostly in the striated muscles). The body also engages in gluconeogenesis to convert glycerol and glucogenic amino acids into glucose for metabolism. Another adaptation is the Cori cycle, which involves shuttling lipid-derived energy in glucose to peripheral glycolytic tissues, which in turn send the lactate back to the liver for resynthesis to glucose. Because of these processes, blood glucose levels remain relatively stable during prolonged starvation. However, the main source of energy during prolonged starvation is derived from triglycerides. Compared to the 8,000 kilojoules of stored glycogen, lipid fuels are much richer in energy content, and a 70 kg adult stores over 400,000 kilojoules of triglycerides (mostly in adipose tissue). Triglycerides are broken down to fatty acids via lipolysis. Epinephrine precipitates lipolysis by activating protein kinase A, which phosphorylates hormone sensitive lipase (HSL) and perilipin. These enzymes, along with CGI-58 and adipose triglyceride lipase (ATGL), complex at the surface of lipid droplets. The concerted action of ATGL and HSL liberates the first two fatty acids. Cellular monoacylglycerol lipase (MGL), liberates the final fatty acid. The remaining glycerol enters gluconeogenesis. Fatty acids cannot be used as a direct fuel source. They must first undergo beta oxidation in the mitochondria (mostly of skeletal muscle, cardiac muscle, and liver cells). Fatty acids are transported into the mitochondria as an acyl-carnitine via the action of the enzyme CAT-1. This step controls the metabolic flux of beta oxidation. The resulting acetyl-CoA enters the TCA cycle and undergoes oxidative phosphorylation to produce ATP. The body invests some of this ATP in gluconeogenesis to produce more glucose. Triglycerides and long-chain fatty acids are too hydrophobic to cross into brain cells, so the liver must convert them into short-chain fatty acids and ketone bodies through ketogenesis. The resulting ketone bodies, acetoacetate and β-hydroxybutyrate, are amphipathic and can be transported into the brain (and muscles) and broken down into acetyl-CoA for use in the TCA cycle. Acetoacetate breaks down spontaneously into acetone, and the acetone is released through the urine and lungs to produce the “acetone breath” that accompanies prolonged fasting. The brain also uses glucose during starvation, but most of the body's glucose is allocated to the skeletal muscles and red blood cells. The cost of the brain using too much glucose is muscle loss. If the brain and muscles relied entirely on glucose, the body would lose 50% of its nitrogen content in 8–10 days. After prolonged fasting, the body begins to degrade its own skeletal muscle. To keep the brain functioning, gluconeogenesis continues to generate glucose, but glucogenic amino acids—primarily alanine—are required. These come from the skeletal muscle. Late in starvation, when blood ketone levels reach 5-7 mM, ketone use in the brain rises, while ketone use in muscles drops. Autophagy then occurs at an accelerated rate. In autophagy, cells cannibalize critical molecules to produce amino acids for gluconeogenesis. 
This process distorts the structure of the cells, and a common cause of death in starvation is due to diaphragm failure from prolonged autophagy. In bacteria Bacteria become highly tolerant to antibiotics when nutrients are limited. Starvation contributes to antibiotic tolerance during infection, as nutrients become limited when they are sequestered by host defenses and consumed by proliferating bacteria. One of the most important causes of starvation induced tolerance in vivo is biofilm growth, which occurs in many chronic infections. Starvation in biofilms is due to nutrient consumption by cells located on the periphery of biofilm clusters and by reduced diffusion of substrates through the biofilm. Biofilm bacteria shows extreme tolerance to almost all antibiotic classes, and supplying limiting substrates can restore sensitivity. See also Calorie restriction Fasting (section Health effects) Fasting and longevity Refeeding syndrome References Resources Malnutrition Metabolism Nutritional physiology Starvation
Starvation response
Chemistry,Biology
2,460
62,693,840
https://en.wikipedia.org/wiki/Soda%20machine%20%28home%20appliance%29
A soda machine or soda maker is a home appliance for carbonating tap water by using carbon dioxide from a pressurized cartridge. The machine is often delivered with flavorings; these can be added to the water after it is carbonated to make soda, such as orange, lemon, or cola flavours. Some brands are able to directly carbonate any cold beverage. Examples of well known soda machine manufacturers are SodaStream of Israel, DrinkMate of the United States, and Aqvia by AGA of Sweden. Soda machines are often either connected to a dedicated water tap in the house, or configured as a freestanding unit. Some refrigerators are delivered with a built-in soda machine. Construction Soda machines normally use refillable TR 214 thread gas cartridges, which are normally filled with around 300–500 grams of carbon dioxide. The water to be carbonated is filled in special pressure resistant bottles which are attached to the machine in a pressure proof way. The gas is then added to the water via a pipe and valve system which is activated by pushing a button. The resulting amount of carbon dioxide is determined by the pressure in the CO2 cartridge and how long the button is held down. If the pressure is too large, residual pressure is relieved through a blowoff valve. Advantages Depending on the size of the gas cartridge, a soda machine can produce up to 100 liters of carbonated water before the cartridge needs to be replaced. Compared to buying carbonated water in the store, this eliminates packaging and transportation costs, and also results in less waste and possible less use of storage space. Consumer Reports estimated that a SodaStream Fizzi would save a consumer $233 and 1,248 cans at the end of two years. Gas cartridges and compatible water bottles can be purchased in many super markets. The pressure resistant bottles can also be used to store the carbonated water. Some newer machines can also be used with glass bottles. Some newer PEN-bottles can also be machine washed. Flavoring Makers of soda machines also offer a selection of flavors which can be added after the water has been carbonated. Some of these are sugar free. Alternatively, normal squash can be used. The DrinkMate maker allows users to directly carbonate flavored beverages (such as Gatorade, juice, wine, and flat soda). Health risks Teeth Carbonated water has a low pH-value, and overuse of carbonated water can therefore lead to acid erosion of the teeth, similarly to consuming other sour beverages and food (like soda or fruits). A 2017 study by the American Dental Association showed that, although seltzer water is more erosive than tap water, it would take over 100 years of daily drinking to cause damage to human teeth. Drinking straws can be used to prevent acid erosion by minimizing direct contact between the sour drink and the teeth. Bacteria In a study by the Institute of Hygiene and Environmental Medicine at the University of Mainz, (Germany), blue coliform bacteria were found in 39% of the tests when water was carbonated in soda machines, compared to 12% in the tests of water straight from the tap. In addition to the pollution contaminants from the gas cartridge or the machine itself, the water's microbiological quality was also poorer due to a biofilm on the inside of the bottles. In addition to insufficient cleaning (by not following the manufacturer instructions), a possible cause for the increased amount of bacteria could also be the poor design of the soda machines. 
It is recommended to regularly perform simple cleaning routines according to the manufacturer instructions. This includes cleaning the bottles with hot water (above 50 °C), using soap and a clean cleaning brush. See also Soda syphon Soda fountain References Kitchen Home appliances 19th-century inventions
Soda machine (home appliance)
Physics,Technology
763
14,428,627
https://en.wikipedia.org/wiki/Tachykinin%20receptor%202
Substance-K receptor is a protein that in humans is encoded by the TACR2 gene. Function This gene belongs to a family of genes that function as receptors for tachykinins. Receptor affinities are specified by variations in the 5'-end of the sequence. The receptors belonging to this family are characterized by interactions with G proteins and seven hydrophobic transmembrane regions. This gene encodes the receptor for the tachykinin neuropeptide substance K, also referred to as neurokinin A. Selective Ligands Several selective ligands for NK2 are now available. Although most of the compounds developed so far are peptides, one small-molecule antagonist, Saredutant, was taken into clinical trials as an anxiolytic and antidepressant before being abandoned (see below). Agonists GR-64349 - potent and selective agonist, EC50 3.7 nM, 7-amino acid polypeptide chain. CAS# 137593-52-3 Antagonists Ibodutant - failed its Phase 3 trial for IBS treatment in 2015, and was abandoned by Menarini Saredutant - mixed but mostly negative Phase 3 trial results in 2009, and abandoned by Sanofi-Aventis GR-159897 MEN-10376 - potent and selective antagonist, 7-amino acid polypeptide chain. CAS# 135306-85-3 See also Tachykinin receptor References Further reading External links G protein-coupled receptors
Tachykinin receptor 2
Chemistry
310
32,753,209
https://en.wikipedia.org/wiki/Iron%E2%80%93sulfur%20cluster%20biosynthesis
In biochemistry, iron–sulfur cluster biosynthesis describes the components and processes involved in the biosynthesis of iron–sulfur proteins. The topic is of interest because these proteins are pervasive. Iron–sulfur proteins contain iron–sulfur clusters, some with elaborate structures, that feature iron and sulfide centers. One broad biosynthetic task is producing sulfide (S2−), which requires various families of enzymes. Another broad task is affixing the sulfide to iron, which is achieved on scaffold proteins that are themselves nonfunctional. Finally, the assembled Fe–S cluster is transferred to a target protein, which then becomes functional. Iron–sulfur clusters are formed by one of four pathways: Nitrogen fixation (NIF) system, which is also found in bacteria that are not nitrogen-fixing. Iron–sulfur cluster (ISC) system, in bacteria and mitochondria Sulfur assimilation (SUF) system, in plastids and some bacteria In addition to those three systems, the so-called Cytosolic Iron–sulfur Assembly (CIA) is invoked for cytosolic and nuclear Fe–S proteins. Mechanisms The assembly of iron–sulfur clusters begins with the production of the equivalent of a sulfur atom (free sulfur atoms per se are not found in nature). The required sulfur atom is obtained from free cysteine by the action of so-called cysteine desulfurases. One prominent desulfurase is called IscS, a pyridoxal phosphate-dependent enzyme. The sulfur atom from the cysteine substrate is transferred to residue Cys-328 of IscS, forming a persulfide: L-cysteine + [enzyme]-cysteine ⇌ L-alanine + [enzyme]-S-sulfanylcysteine The persulfide functional group R-S-S-H functions as a source of "inorganic sulfur" that will be incorporated into Fe-S clusters. Subsequently, IscS transfers this "extra" sulfur to IscU. In addition to IscS and IscU, bacterial Fe-S assembly requires IscA, an 11 kDa protein of uncertain function. The Suf system for iron–sulfur cluster biosynthesis is generally similar to the Isc system (and the Nif system). The analogy extends to the existence of SufA, SufS, and SufU. The Suf system operates with fewer chaperones. References Protein families
Iron–sulfur cluster biosynthesis
Biology
512
964,170
https://en.wikipedia.org/wiki/NGC%203628
NGC 3628, also known as the Hamburger Galaxy or Sarah's Galaxy, is an unbarred spiral galaxy about 35 million light-years away in the constellation Leo. It was discovered by William Herschel in 1784. It has an approximately 300,000-light-year-long tidal tail. Along with M65 and M66, NGC 3628 forms the Leo Triplet, a small group of galaxies. Its most conspicuous feature is the broad and obscuring band of dust located along the outer edge of its spiral arms, which effectively transects the galaxy as seen from Earth. Due to the presence of an x-shaped bulge, visible in multiple wavelengths, it has been argued that NGC 3628 is instead a barred spiral galaxy with the bar seen end-on. Simulations have shown that bars often form in disk galaxies during interactions and mergers, and NGC 3628 is known to be interacting with its two large neighbors. The name "Hamburger Galaxy" is a reference to its shape resembling a hamburger, while the name "Sarah's Galaxy" is thought to refer to the poet Sarah Williams (1837–1868), most famous for the poem "The Old Astronomer". References External links SEDS: Spiral Galaxy NGC 3628 Unbarred spiral galaxies Peculiar galaxies Leo Triplet Leo (constellation) 3628 06350 34697 Astronomical objects discovered in 1784
NGC 3628
Astronomy
277
24,136,891
https://en.wikipedia.org/wiki/OGML
Ontology Grounded Metalanguage (OGML) is a metalanguage like MOF. The goal of OGML is to tackle the difficulties of MOF: linear modeling architecture, ambiguous constructs and incomprehensible/unclear architecture. OGML provides a nested modeling architecture with three fixed layers (models, languages and metalanguage). Therefore, it is clear how the different models conform to each other and can be handled. Constructs in OGML are chosen from the science of ontology, making the distinction between properties / objects and classes / objects very clear. This commitment makes explicit certain oddities of the definition of, for example, relations. Furthermore, OGML provides an explicit notion of instantiation: model elements encode their types and languages define the semantics of instantiation. This extra information is needed in the relative modeling architecture to distinguish between structural and conceptual views on models, for example: we may want to view a UML model as an instance of the object language and an instance of the Class model (Clabject). By providing this dual view on the metamodel layer and on the language layer, OGML provides a very precise modeling architecture and an expressive way to deal with models. References External links Official website Specification languages Data modeling languages Metalanguages
OGML
Engineering
267
52,174,112
https://en.wikipedia.org/wiki/Multibeam%20Corporation
Multibeam is an American corporation that engages in the design, manufacture, and sale of semiconductor processing equipment used in the fabrication of integrated circuits. Headquartered in Santa Clara, in the Silicon Valley, Multibeam is led by Dr. David K. Lam, the founder and first CEO of Lam Research. History Multibeam shipped its first MEBL platform to SkyWater Technology, in Bloomington, Minnesota, in July 2024. Technology Multibeam developed miniature, all-electrostatic columns for e-beam lithography, that provide a maskless and high throughput platform for writing nanoscale IC patterns seamlessly across full wafers. Arrays of e-beam columns operate simultaneously and in parallel to increase wafer processing speed. With over 35 patents issued, these multi-column e-beam lithography (MEBL) systems enable an array of direct write lithography applications, including Complementary E-Beam Lithography (CEBL), Secure Chip ID, Advanced Packaging Interposers, Photonics, and other applications where precise, nanometer-scale features are required. Applications Complementary Electron Beam Lithography (CEBL) works with optical lithography to pattern cuts (of lines in "lines-and-cuts" layout) and holes (i.e., contacts and vias) with no masks. Secure Chip ID embeds unique security information in each IC including chip ID, IP or MAC address, and chip-specific information such as keys used in encryption. Chip ID is used for supply chain traceability and to detect counterfeits. Hardware-embedded encryption keys are used to authenticate software. Chip-specific information written into bit registers is non-volatile. Advanced Packaging Interposers can be used in chip fabrication in semiconductor chip packaging applications where high performance, low power, long battery life and compact size are needed. Patterns can be varied on a small scale over a large field with nanometer resolutions. Used in high volume or batch production of System in a Package (SiP), MEMS & Sensor Packaging, Fan Out Packaging, and 2.5D/3D IC Packaging. Photonics applications benefit from the flexibility of creating high precision curvilinear and varying patterns over wide fields of view for the generation, control, and channeling of light. References External links Equipment semiconductor companies Nanotechnology companies Technology companies established in 2010 Companies based in Santa Clara, California Technology companies based in the San Francisco Bay Area American companies established in 2010 Computer companies of the United States Semiconductor companies of the United States
Multibeam Corporation
Materials_science,Engineering
511
671,711
https://en.wikipedia.org/wiki/Hilbert%E2%80%93Smith%20conjecture
In mathematics, the Hilbert–Smith conjecture concerns the transformation groups of manifolds, and in particular the limitations on topological groups G that can act effectively (faithfully) on a (topological) manifold M. Restricting to groups G which are locally compact and have a continuous, faithful group action on M, the conjecture states that G must be a Lie group. Because of known structural results on G, it is enough to deal with the case where G is the additive group of p-adic integers, for some prime number p. An equivalent form of the conjecture is that the additive group of p-adic integers admits no faithful group action on a topological manifold. The conjecture is named after David Hilbert and the American topologist Paul A. Smith. It is considered by some to be a better formulation of Hilbert's fifth problem than the characterisation, in the category of topological groups, of the Lie groups often cited as a solution. In 1997, Dušan Repovš and Evgenij Ščepin proved the Hilbert–Smith conjecture for groups acting by Lipschitz maps on a Riemannian manifold, using covering, fractal, and cohomological dimension theory. In 1999, Gaven Martin extended their dimension-theoretic argument to quasiconformal actions on a Riemannian manifold and gave applications concerning unique analytic continuation for Beltrami systems. In 2013, John Pardon proved the three-dimensional case of the Hilbert–Smith conjecture. References Further reading Topological groups Group actions (mathematics) Conjectures Unsolved problems in geometry Structures on manifolds
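For reference, the statement and its p-adic reduction can also be written symbolically; the following is a standard restatement added for clarity, not a quotation from the article.

```latex
% Hilbert--Smith conjecture, symbolic restatement
\textbf{Conjecture.} If a locally compact topological group $G$ acts
continuously and faithfully (effectively) on a topological manifold $M$,
then $G$ is a Lie group.

\textbf{Equivalent form.} For every prime $p$, the additive group of
$p$-adic integers $\mathbb{Z}_p$ admits no faithful continuous action
on a topological manifold.
```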
Hilbert–Smith conjecture
Physics,Mathematics
315
29,474,944
https://en.wikipedia.org/wiki/White%20Camel%20award
The "White Camel" award is given to important contributors to the Perl Programming Language community. The awards were initiated by Perl Mongers and O'Reilly & Associates at The Perl Conference in 1999. Today, The Perl Foundation acknowledges these exceptional individuals annually. By Year 1999: Tom Christiansen, Kevin Lenzo, Adam Turoff 2000: Elaine Ashton, Chris Nandor, Nathan Torkington 2001: David H. Adler, Ask Bjørn Hansen, YAPC::Europe team 2002: Graham Barr, Tim Maher, Tim Vroom 2003: Jarkko Hietaniemi, Andreas Koenig, Robert Spier 2004: Dave Cross, brian d foy, Jon Orwant 2005: Stas Bekman, Eric Cholet, Andy Lester 2006: Jay Hannah, Josh McAdams, Randal Schwartz 2007: Allison Randal, Tim O'Reilly, Norbert E. Grüner 2008: Tatsuhiko Miyagawa, Jacinta Richardson, Gabor Szabo 2009: Tim Bunce, Philippe Bruhat, Michael Schwern 2010: José Castro (cog), Paul Fenwick, Barbie 2011: Leo Lapworth, Daisuke Maki, Andrew Shitov 2012: Renee Baecker, Breno G. de Oliveira, Jim Keenan 2013: Thiago Rondon, Wendy and Liz, Fred Moyer 2014: Amalia Pomian, VM Brasseur, Neil Bowers 2015: Chris Prather, Sawyer X, Steffen Müller 2016: David Golden, Karen Pauley, and Thomas Klausner 2017: Laurent Boivin, Rob Masic, Kurt Demaagd 2018: Todd Rinaldo, David Farrell, Max Maischein 2022: Mohammad Anwar 2023: R Geoffrey Avery See also List of computer science awards Perl programming language YAPC, Yet Another Perl Conference The Perl Foundation O'Reilly and Associates References External links White camel information on perl.org The Second Annual YAPC News of the 2011 Award News of the 2012 Awards Perl Computer science awards
White Camel award
Technology
427
7,249,174
https://en.wikipedia.org/wiki/Blondie24
Blondie24 is an artificial intelligence checkers-playing computer program named after the screen name used by a team led by David B. Fogel. The purpose was to determine the effectiveness of an artificial intelligence checkers-playing computer program. The screen name was used on The Zone, an internet boardgaming site in 1999. During this time, Blondie24 played against some 165 human opponents and was shown to achieve a rating of 2048, or better than 99.61% of the playing population of that web site. The design of Blondie24 is based on a minimax algorithm of the checkers game tree in which the evaluation function is a deep learning convolutional artificial neural network. The neural net receives as input a vector representation of the checkerboard positions and returns a single value which is passed on to the minimax algorithm. The weights of the neural network were obtained by an evolutionary algorithm (an approach now called neuroevolution). In this case, a population of Blondie24-like programs played each other in checkers, and those were eliminated that performed relatively poorly. Performance was measured by a points system: Each program earned one point for a win, none for a draw, and two points were subtracted for a loss. Points were earned for each neural network after a multiple of games; the neural networks did not know which individual games were won, lost, or drawn. After the poor programs were eliminated, the process was repeated with a new population derived from the winners. In this way, the result was an evolutionary process that selected programs that played better checkers games. The significance of the Blondie24 program is that its ability to play checkers did not rely on any human expertise of the game. Rather, it came solely from the total points earned by each player and the evolutionary process itself. David Fogel, along with his colleague Kumar Chellapilla, documented their experiment in several publications. Fogel also authored a book on the development of Blondie24, and the experiences he and his team had while running Blondie24 in on-line checkers games, and eventually in obtaining a victory against a dumbed-down version of Chinook. References Further reading See also Genetic Programming Game artificial intelligence Computer draughts players
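The division of labour described above — a depth-limited minimax search that defers all positional judgement to a learned evaluation function — can be sketched compactly. The code below is an illustration of that general architecture, not Fogel and Chellapilla's program: the toy game, move generator, and weight vector are invented placeholders, with a weighted sum standing in for the evolved neural network.

```python
import random

# Depth-limited minimax with a pluggable evaluation function: the search
# supplies lookahead, the (stub) evaluator supplies positional judgement.

def evaluate(board, weights):
    # stand-in for the evolved neural network: weighted sum of the board vector
    return sum(w * x for w, x in zip(weights, board))

def moves(board):
    # toy move generator: flip the sign of any single occupied square
    return [board[:i] + (-board[i],) + board[i + 1:]
            for i in range(len(board)) if board[i]]

def minimax(board, depth, maximizing, weights):
    successors = moves(board)
    if depth == 0 or not successors:
        return evaluate(board, weights)
    values = (minimax(b, depth - 1, not maximizing, weights) for b in successors)
    return max(values) if maximizing else min(values)

# an evolved weight vector would come from the tournament-selection loop
# described above; here it is random, just to keep the sketch self-contained
random.seed(0)
board = (1, -1, 0, 1, -1, 1, 0, -1)     # toy 8-square position
weights = [random.uniform(-1, 1) for _ in board]
print("minimax value at depth 3:", round(minimax(board, 3, True, weights), 3))
```

A fuller sketch would wrap this in the selection loop described above (one point for a win, minus two for a loss, survivors seeding the next generation), but the essential combination of tree search with a learned, rather than hand-crafted, evaluation is already visible here.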
Blondie24
Mathematics
463
2,007,949
https://en.wikipedia.org/wiki/Somnology
Somnology is the scientific study of sleep. It includes clinical study and treatment of sleep disorders and irregularities. Sleep medicine is a subset of somnology. Hypnology has a similar meaning but includes hypnotic phenomena. History After the invention of the EEG, the stages of sleep were determined in 1936 by Harvey and Loomis, the first descriptions of delta and theta waves were made by Walter and Dovey, and REM sleep was discovered in 1953. Sleep apnea was identified in 1965. In 1970, the first clinical sleep laboratory was developed at Stanford. The first actigraphy device was made in 1978 by Krupke, and continuous positive airway pressure therapy and uvulopalatopharyngoplasty were created in 1981. The Examination Committee of the Association of Sleep Disorders Centers, which is now the American Academy of Sleep Medicine, was established in 1978 and administered the sleep administration test until 1990. In 1989, the American Board of Sleep Medicine was created to administer the tests and eventually assumed all the duties of the Examination committee in 1991. In the United States, the American Board of Sleep Medicine grants certification for sleep medicine to both physicians and non-physicians. However, the board does not allow one to practice sleep medicine without a medical license. The International Classification of Sleep Disorders Created in 1990 by the American Academy of Sleep Medicine (with assistance from European Sleep Research Society, the Japanese Society of Sleep Research, and the Latin American Sleep Society), the International Classification of Sleep Disorders is the primary reference for scientists and diagnosticians. Sleep disorders are separated into four distinct categories: parasomnias; dyssomnias; sleep disorders associated with mental, neurological, or other medical conditions; and sleep disorders that do not have enough data available to be counted as definitive sleep disorders. The ICSD has created a comprehensive description for each sleep disorder with the following information. Synonyms and Key Words – This section describes the terms and phrases used to describe the disorder and also includes an explanation on the preferred name of the disorder when appropriate. Essential Features – This section describes the main symptoms and features of the disorder. Associated Features – This section describes the features that appear often but not always present. Furthermore, complications that are caused directly by the disorder are listed here. Course – This section describes the clinical course and the outcome of an untreated disorder. Predisposing Factors – This section describes internal and external factors that increase the chances of a patient developing the sleep disorder. Prevalence – This section, if known, describes the proportion of people who have or had this disorder. Age of Onset – This section describes the age range when the clinical features first appear. Sex Ratio – This section describes the relative frequency that the disorder is diagnosed in each sex. Familial Pattern – This section describes whether the disorder is found among family members. Pathology – This section describes the microscopic pathologic features of the disorder. If this is not known, the pathology of the disorder is described instead. Complications – This section describes any possible disorders or complications that can occur because of the disease. Polysomnographic Features – This section describes how the disorder appears under a polysomnograph. 
Other Laboratory Features – This section describes other laboratory tests, such as blood tests and brain imaging. Differential Diagnosis – This section describes disorders with similar symptoms. Diagnostic Criteria – This section gives the criteria that allow a clear-cut diagnosis. Minimal Criteria – This section is used in general clinical practice to make a provisional diagnosis. Severity Criteria – This section provides a three-part classification into “mild,” “moderate,” and “severe” and describes the criteria for each level of severity. Duration Criteria – This section allows a clinician to determine how long a disorder has been present and separates the durations into “acute,” “subacute,” and “chronic.” Bibliography – This section contains the references. Diagnostic tools Somnologists employ various diagnostic tools to determine the nature of a sleep disorder or irregularity. Some of these tools are subjective, such as sleep diaries or sleep questionnaires; other diagnostic tools are used while the patient is asleep, such as the polysomnograph and actigraphy. Sleep diaries A sleep diary is a daily log kept by the patient that contains information about the quality and quantity of sleep. The information includes sleep onset time, sleep latency, number of awakenings in a night, time in bed, daytime napping, sleep quality assessment, use of hypnotic agents, use of alcohol and cigarettes, and unusual events that may influence a person's sleep. Such a log is usually kept for one or two weeks before visiting a somnologist. The sleep diary may be used in conjunction with actigraphy. Sleep questionnaires Sleep questionnaires help determine the presence of a sleep disorder by asking the patient to fill out a questionnaire about a certain aspect of their sleep, such as daytime sleepiness. These questionnaires include the Epworth Sleepiness Scale, the Stanford Sleepiness Scale, and the Sleep Timing Questionnaire. The Epworth Sleepiness Scale measures general sleep propensity by asking the patient to rate their chances of dozing off in eight different situations. The Stanford Sleepiness Scale asks the patient to note their perception of sleepiness on a seven-point scale. The Sleep Timing Questionnaire is a 10-minute self-administered test that can be used in place of a two-week sleep diary; it can be a valid determinant of sleep parameters such as bedtime, wake time, sleep latency, and wake after sleep onset. Actigraphy Actigraphy can assess sleep/wake patterns without confining the patient to a laboratory. The monitors are small, wrist-worn movement recorders that can log activity for up to several weeks. Sleep and wakefulness are determined by an algorithm that analyzes the patient's movements together with the bed and wake times entered in a sleep diary. Physical examination A physical examination can detect other medical conditions that may cause a sleep disorder. Polysomnography Polysomnography involves the continuous monitoring of multiple physiological variables during sleep. These variables include electroencephalography, electrooculography, electromyography, and electrocardiography, as well as airflow, oxygenation, and ventilation measurements. 
Electroencephalography measures the voltage activity of neuronal somas and dendrites within the cortex, electro-oculography measures the potential between the cornea and the retina, electromyography is used to identify REM sleep by measuring the electrical potential of skeletal muscle, and electrocardiography measures cardiac rate and rhythm. It is important to point out that EEG always refers to the collective firing of many neurons, as EEG equipment is not sensitive enough to measure a single neuron. Airflow measurements Airflow measurement can be used to determine indirectly the presence of an apnea; measurements are taken by pneumotachography, nasal pressure, thermal sensors, and expired carbon dioxide. Pneumotachography measures the difference in pressure between inhalation and exhalation, nasal pressure can indicate the presence of airflow similarly to pneumotachography, thermal sensors detect the difference in temperature between inhaled and exhaled air, and expired carbon dioxide monitoring detects the difference in carbon dioxide between inhaled and exhaled air. Oxygenation and ventilation measurements The monitoring of oxygenation and ventilation is important in the assessment of sleep-related breathing disorders. However, because oxygen values can change often during the course of sleep, repeated measurements must be taken to ensure accuracy. Direct measurements of arterial oxygen tension offer only a static glimpse, and repeated measurements from invasive procedures such as arterial blood sampling would disturb the patient's sleep; therefore, noninvasive methods such as pulse oximetry, transcutaneous oxygen monitoring, transcutaneous carbon dioxide monitoring, and pulse transit time are preferred. Pulse oximetry measures the oxygenation in peripheral capillaries (such as in the fingers); however, an article by Bohning states that pulse oximetry may be imprecise for diagnosing obstructive sleep apnea because of differences in signal processing between devices. Transcutaneous oxygen and carbon dioxide monitoring measure the oxygen and carbon dioxide tension at the skin surface, respectively, and pulse transit time measures the transmission time of an arterial pulse wave. Pulse transit time increases when a person is aroused from sleep, which makes it useful in detecting sleep apnea. Snoring Snoring can be detected by a microphone and may be a symptom of obstructive sleep apnea. Multiple Sleep Latency Test The Multiple Sleep Latency Test (MSLT) measures a person's physiological tendency to fall asleep during a quiet period in terms of sleep latency, the amount of time it takes the person to fall asleep. An MSLT is normally performed after a nocturnal polysomnography, both to ensure an adequate duration of sleep and to exclude other sleep disorders. Maintenance of Wakefulness Test The Maintenance of Wakefulness Test (MWT) measures a person's ability to stay awake for a certain period of time, essentially measuring how long one can stay awake during the day. The test isolates the person from factors that can influence sleep, such as temperature, light, and noise. Furthermore, the patient is strongly advised not to take any hypnotics, drink alcohol, or smoke before or during the test. After the patient lies down on the bed, the time between lying down and falling asleep is measured and used to determine daytime sleepiness. 
Treatments Though somnology does not necessarily mean sleep medicine, somnologists can use behavioral, mechanical, or pharmacological means to correct a sleep disorder. Behavioral treatments Behavioral treatments tend to be the most commonly prescribed and the most cost-effective of all treatments; they include exercise, cognitive behavioral therapy, relaxation therapy, meditation, and improving sleep hygiene. Improving sleep hygiene includes encouraging the patient to keep a regular sleep schedule, discouraging daytime naps, and suggesting that they sleep in a different position. Mechanical treatments Mechanical treatments are primarily used to reduce or eliminate snoring and can be either invasive or non-invasive. Surgical procedures for treating snoring include palatal stiffening techniques, uvulopalatopharyngoplasty, and uvulectomy, while non-invasive options include continuous positive airway pressure, mandibular advancement splints, and tongue-retaining devices. Pharmacological treatments Pharmacological treatments are used to chemically treat sleep disturbances such as insomnia or excessive daytime sleepiness. The kinds of drugs used to treat sleep disorders include anticonvulsants, anti-narcoleptics, anti-Parkinsonian drugs, benzodiazepines, non-benzodiazepine hypnotics, and opiates, as well as the hormone melatonin and melatonin receptor agonists. Anticonvulsants, opioids, and anti-Parkinsonian drugs are often used to treat restless legs syndrome. Melatonin, benzodiazepine hypnotics, and non-benzodiazepine hypnotics may be used to treat insomnia, and anti-narcoleptics help treat narcolepsy and excessive daytime sleepiness. Of particular interest are the benzodiazepines, which reduce insomnia by increasing the effectiveness of GABA. GABA decreases the excitability of neurons by raising the firing threshold, and benzodiazepines enhance the binding of GABA to its receptor, allowing the medication to induce sleep. Generally, these treatments are given after behavioral treatment has failed. Drugs such as tranquilizers, though they may work well in treating insomnia, carry a risk of abuse, which is why they are not the first resort. Some sleep disorders, such as narcolepsy, do require pharmacological treatment. See also Sleep disorder Sleep medicine Snoring References External links Sleep disorders
Somnology
Biology
2,462
42,046,875
https://en.wikipedia.org/wiki/Nilpotent%20algebra
In mathematics, specifically in ring theory, a nilpotent algebra over a commutative ring is an algebra over a commutative ring in which, for some positive integer n, every product containing at least n elements of the algebra is zero. The concept of a nilpotent Lie algebra has a different definition, which depends upon the Lie bracket. (There is no Lie bracket for many algebras over commutative rings: a Lie algebra involves its Lie bracket, whereas there is no Lie bracket defined in the general case of an algebra over a commutative ring.) Another possible source of confusion in terminology is the quantum nilpotent algebra, a concept related to quantum groups and Hopf algebras. Formal definition An associative algebra A over a commutative ring is defined to be a nilpotent algebra if and only if there exists some positive integer n such that every product of n elements of A is zero, that is, a1a2···an = 0 for all a1, a2, ..., an in A. The smallest such n is called the index of the algebra A. In the case of a non-associative algebra, the definition is that every different multiplicative association of the n elements is zero. Nil algebra A power associative algebra in which every element of the algebra is nilpotent is called a nil algebra. Nilpotent algebras are trivially nil, whereas nil algebras may not be nilpotent, as each element being nilpotent does not force products of distinct elements to vanish. See also Algebraic structure (a much more general term) nil-Coxeter algebra Lie algebra Example of a non-associative algebra References External links Nilpotent algebra – Encyclopedia of Mathematics Ring theory Properties of binary operations
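As an illustration of the definition above, the algebra of strictly upper-triangular 3 × 3 real matrices is a standard example of a nilpotent algebra of index 3: some products of two elements are nonzero, but every product of three elements vanishes. The following is a minimal sketch assuming Python with NumPy; the helper name strict_upper is illustrative.

```python
# Sketch: strictly upper-triangular 3x3 matrices form a nilpotent algebra
# of index 3 under matrix multiplication.
import numpy as np

rng = np.random.default_rng(0)

def strict_upper(rng):
    """Random strictly upper-triangular 3x3 matrix (zeros on and below the diagonal)."""
    return np.triu(rng.normal(size=(3, 3)), k=1)

a, b, c = (strict_upper(rng) for _ in range(3))

print(np.allclose(a @ b @ c, 0))  # True: every product of three elements is zero
print(np.allclose(a @ b, 0))      # almost surely False: the index is exactly 3
```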
Nilpotent algebra
Mathematics
349
307,623
https://en.wikipedia.org/wiki/Inhaler%20spacer
A spacer is a device used to increase the ease of giving aerosolized medication from a metered-dose inhaler (MDI). It adds space in the form of a tube or "chamber" between the mouth and canister of medication. Most spacers have a one-way valve that allows the person to inhale the medication while inhaling and exhaling normally; these are often referred to as valved holding chambers (VHC). A number of brand names exist including AeroChamber, InspirEase and Volumatic. They can also be made from a 500 mL plastic bottle, which works just as well. Spacers help those unable to breathe deeply, as well as those unable to synchronize their breathing so that they inhale just as the MDI is actuated; the latter is known as poor "hand-lung coordination". Terminology The term spacer is often used to refer to any tube-like MDI add-on device. Some spacers (e.g., InspirEase) utilize a collapsing bag design to provide visual feedback that successful inspiration is taking place. Another type (e.g., Volumatic) is transparent plastic in two vase-shaped parts that come together forming a barrel shape. Valved holding chambers (VHC) are commonly called "spacers" as well. Uses To use an inhaler without a spacer requires coordinating several actions in a set order (pressing down on the inhaler, breathing in deeply as soon as the medication is released, holding your breath, exhaling), and not everyone is able to master this sequence. Use of a spacer, particularly a valved holding chamber, avoids such timing issues. Valved holding chambers are particularly useful for children, people with severe shortness of breath, and those with cognitive impairment. After removing the MDI's cap, the MDI mouthpiece is inserted into the back of the spacer. The front part of the chamber is closed off by either a mouthpiece or a mask that covers the mouth and nose. To administer the medication, the MDI is depressed once, resulting in the release of one dose of medication. The medication from the MDI is then suspended in the spacer's chamber while the person inhales the aerosolized medication by breathing in and out. The exhaled breath exits the device through the valves rather than entering the chamber. Some spacers are equipped with a whistle, which sounds if the person is inhaling too quickly. Spacers slow down the speed of the aerosol coming from the inhaler, meaning that less of the asthma drug impacts on the back of the mouth and somewhat more may get into the lungs. In the case of corticosteroids, less residue in the mouth reduces the risk of developing oral candidiasis, a yeast infection. Rinsing the mouth after application of inhaled steroids has a similar effect. Whereas people with asthma can keep an MDI close-by at all times, the bulkiness of spacers can limit their use outside the home. Some research suggests that homemade spacers, using plastic bottles, may be as effective as commercially made versions, but homemade versions may have more variability. References Spacer. Asthma Medical equipment Assistive technology
Inhaler spacer
Biology
666
33,892,594
https://en.wikipedia.org/wiki/TinKode
Răzvan Manole Cernăianu (born 7 February 1992), nicknamed "TinKode", is a Romanian computer security consultant and hacker, known for gaining unauthorized access to the computer systems of many different organizations and posting proof of his exploits online. He commonly hacked high-profile websites that had SQL injection vulnerabilities, although unknown methods were used in his most recent attacks. Other aliases included sysgh0st. Personal life TinKode is Romanian and claims to have been born in 1992 in the southern part of the country. He states that his hacking skills are the result of extreme curiosity and ambition. His targets are well-known websites and powerful brands with widespread influence, either for the online community or for a particular marketplace. His attacks often involve the numeral seven. TinKode was a fan of social networks, owning a Twitter account and a Facebook account as well as several blogs. Alleged hacking The Royal Navy's website was temporarily unavailable after TinKode claimed to have hacked it. He has also breached the security of servers at NASA, posting screenshots from an FTP server within NASA's Earth Observation System at Goddard Space Flight Center. He claims to have gained access to computers belonging to the European Space Agency. Other info He also claims to have found vulnerabilities in organizations including Sun Microsystems, MySQL, Kaspersky Portugal, the US Army, YouTube, Google, and other websites. TinKode has never been publicly criticized by security experts, mainly because he did not disclose full information about the breached websites; he informed the webmasters before posting his results online, giving them time to fix the vulnerability. TinKode also received a Google Security Reward. Arrest On Tuesday, 31 January 2012, TinKode was arrested by the Romanian authority DIICOT (the agency that investigates organised crime and terrorism), on the charge that he temporarily blocked the information systems of the US Army, the Pentagon, and NASA in association with Casi. TinKode was officially released on 27 April 2012. Petition and support Following his arrest, a petition was started on Wednesday, 8 February 2012, to raise support for TinKode. The petition asked DIICOT and the FBI to give TinKode a reasonable and fair sentence, arguing that the hacker was not malicious and was hacking out of curiosity. He was ultimately released after three months. References 1992 births Hackers Living people Romanian computer scientists
TinKode
Technology
499
399,104
https://en.wikipedia.org/wiki/48%20%28number%29
48 (forty-eight) is the natural number following 47 and preceding 49. It is one third of a gross, or four dozen. In mathematics 48 is a highly composite number, and a Størmer number. By a classical result of Honsberger, the number of incongruent integer-sided triangles of perimeter n is given by the nearest integer to n²/48 for even n, and by the nearest integer to (n + 3)²/48 for odd n. 48 is the order of full octahedral symmetry, which describes the three-dimensional mirror symmetries associated with the regular octahedron and cube; it is the number of symmetries of a cube. In other fields Forty-eight may also refer to: In Chinese numerology, 48 is an auspicious number meaning 'determined to prosper', or simply 'prosperity', which is good for business. '48 is a slang term in Palestinian Arabic for parts of Israel or Palestine not under the control of the State of Palestine. Arab people from those parts are colloquially known as 48-Arabs. References Integers
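The triangle-counting result quoted above can be checked by brute force; the following is a minimal sketch in Python, and the helper names are illustrative.

```python
# Verify Honsberger's formulas: the number of incongruent integer-sided
# triangles of perimeter n is the nearest integer to n**2/48 for even n
# and to (n + 3)**2/48 for odd n.

def count_triangles(n):
    """Count triples a <= b <= c with a + b + c == n and a + b > c."""
    count = 0
    for a in range(1, n + 1):
        for b in range(a, n + 1):
            c = n - a - b
            if c >= b and a + b > c:
                count += 1
    return count

def honsberger(n):
    x = n * n if n % 2 == 0 else (n + 3) ** 2
    return round(x / 48)

assert all(count_triangles(n) == honsberger(n) for n in range(1, 100))
print("formulas agree for n = 1..99")
```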
48 (number)
Mathematics
209
673,057
https://en.wikipedia.org/wiki/Memory%20transfer
Memory transfer was a biological process proposed by James V. McConnell and others in the 1960s. Memory transfer proposes a chemical basis for memory termed memory RNA which can be passed down through flesh instead of an intact nervous system. Since RNA encodes information and living cells produce and modify RNA in reaction to external events, it might also be used in neurons to record stimuli. This explained the results of McConnell's experiments in which planarians retained memory of acquired information after regeneration. Memory transfer through memory RNA is not currently a well-accepted explanation and McConnell's experiments proved to be largely irreproducible. In McConnell's experiments, he classically conditioned planarians to contract their bodies upon exposure to light by pairing it with an electric shock. The planarians retained this acquired information after being sliced and regenerated, even after multiple slicings to produce a planarian where none of the original trained planarian was present. The same held true after the planarians were ground up and fed to untrained cannibalistic planarians, usually Dugesia dorotocephala. As the nervous system was fragmented but the nucleic acids were not, this seemed to indicate the existence of memory RNA but it was later suggested that only sensitization was transferred, or that no transfer occurred and the effect was due to stress hormones in the donor or pheromone trails left on dirty lab glass. However, other experiments seem to support the original findings in that some memories may be stored outside the brain. See also Scotophobin References RNA Molecular neuroscience
Memory transfer
Chemistry,Biology
318
44,847,976
https://en.wikipedia.org/wiki/6005A%20aluminium%20alloy
6005A aluminium alloy is an alloy in the wrought aluminium-magnesium-silicon family (6000 or 6xxx series). It is closely related, but not identical, to 6005 aluminium alloy. Between those two alloys, 6005A is more heavily alloyed, but the difference does not make a marked impact on material properties. It can be formed by extrusion, forging or rolling, but as a wrought alloy it is not used in casting. It cannot be work hardened, but is commonly heat treated to produce tempers with a higher strength at the expense of ductility. Alternate names and designations include AlSiMg(A) and 3.3210. The alloy and its various tempers are covered by the following standards: ASTM B 221: Standard Specification for Aluminum and Aluminum-Alloy Extruded Bars, Rods, Wire, Profiles, and Tubes EN 573-3: Aluminium and aluminium alloys. Chemical composition and form of wrought products. Chemical composition and form of products EN 755-2: Aluminium and aluminium alloys. Extruded rod/bar, tube and profiles. Mechanical properties Chemical composition The alloy composition of 6005A aluminium is: Aluminium: 96.5 to 99.0% Chromium: 0.3% max Copper: 0.3% max Iron: 0.35% max Magnesium: 0.4 to 0.7% Manganese: 0.5% max Silicon: 0.5 to 0.9% Titanium: 0.1% max Zinc: 0.2% max Residuals: 0.15% max Properties Typical material properties for 6005A aluminum alloy include: Density: 2.71 g/cm3, or 169 lb/ft3. Electrical Conductivity: 47 to 50% IACS. Young's modulus: 70 GPa, or 10 Msi. Ultimate tensile strength: 190 to 300 MPa, or 28 to 44 ksi. Yield strength: 100 to 260 MPa, or 15 to 38 ksi. Thermal Conductivity: 180 to 190 W/m-K. Thermal Expansion: 23.3 μm/m-K. References Aluminium alloys Aluminium–magnesium–silicon alloys
6005A aluminium alloy
Chemistry
447
99,491
https://en.wikipedia.org/wiki/Exponentiation
In mathematics, exponentiation, denoted , is an operation involving two numbers: the base, , and the exponent or power, . When is a positive integer, exponentiation corresponds to repeated multiplication of the base: that is, is the product of multiplying bases: In particular, . The exponent is usually shown as a superscript to the right of the base as or in computer code as b^n. This binary operation is often read as "b to the power n"; it may also be called "b raised to the nth power", "the nth power of b", or most briefly "b to the n". The above definition of immediately implies several properties, in particular the multiplication rule: That is, when multiplying a base raised to one power times the same base raised to another power, the powers add. Extending this rule to the power zero gives , and dividing both sides by gives . That is, the multiplication rule implies the definition A similar argument implies the definition for negative integer powers: That is, extending the multiplication rule gives . Dividing both sides by gives . This also implies the definition for fractional powers: For example, , meaning , which is the definition of square root: . The definition of exponentiation can be extended in a natural way (preserving the multiplication rule) to define for any positive real base and any real number exponent . More involved definitions allow complex base and exponent, as well as certain types of matrices as base or exponent. Exponentiation is used extensively in many fields, including economics, biology, chemistry, physics, and computer science, with applications such as compound interest, population growth, chemical reaction kinetics, wave behavior, and public-key cryptography. Etymology The term exponent originates from the Latin exponentem, the present participle of exponere, meaning "to put forth". The term power () is a mistranslation of the ancient Greek δύναμις (dúnamis, here: "amplification") used by the Greek mathematician Euclid for the square of a line, following Hippocrates of Chios. History Antiquity The Sand Reckoner In The Sand Reckoner, Archimedes proved the law of exponents, , necessary to manipulate powers of . He then used powers of to estimate the number of grains of sand that can be contained in the universe. Islamic Golden Age Māl and kaʿbah ("square" and "cube") In the 9th century, the Persian mathematician Al-Khwarizmi used the terms مَال (māl, "possessions", "property") for a square—the Muslims, "like most mathematicians of those and earlier times, thought of a squared number as a depiction of an area, especially of land, hence property"—and كَعْبَة (Kaʿbah, "cube") for a cube, which later Islamic mathematicians represented in mathematical notation as the letters mīm (m) and kāf (k), respectively, by the 15th century, as seen in the work of Abu'l-Hasan ibn Ali al-Qalasadi. 15th–18th century Introducing exponents Nicolas Chuquet used a form of exponential notation in the 15th century, for example to represent . This was later used by Henricus Grammateus and Michael Stifel in the 16th century. In the late 16th century, Jost Bürgi would use Roman numerals for exponents in a way similar to that of Chuquet, for example for . "Exponent"; "square" and "cube" The word exponent was coined in 1544 by Michael Stifel. In the 16th century, Robert Recorde used the terms square, cube, zenzizenzic (fourth power), sursolid (fifth), zenzicube (sixth), second sursolid (seventh), and zenzizenzizenzic (eighth). 
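A minimal numeric illustration of the basic rules introduced at the start of this article (the multiplication rule and the zero, negative, and fractional exponent conventions), using Python's ** operator; the particular numbers are arbitrary.

```python
b, m, n = 3, 2, 4

# Multiplication rule: b**m * b**n == b**(m + n)
assert b**m * b**n == b**(m + n)            # 9 * 81 == 729

# Zero exponent, forced by the multiplication rule: b**0 == 1
assert b**0 == 1

# Negative exponent: b**(-n) is the reciprocal of b**n
assert abs(b**-n - 1 / b**n) < 1e-15

# Fractional exponent: b**(1/2) squared gives back b (up to rounding)
assert abs((b**0.5) ** 2 - b) < 1e-12
```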
Biquadrate has been used to refer to the fourth power as well. Modern exponential notation In 1636, James Hume used in essence modern notation, when in L'algèbre de Viète he wrote for . Early in the 17th century, the first form of our modern exponential notation was introduced by René Descartes in his text titled La Géométrie; there, the notation is introduced in Book I. Some mathematicians (such as Descartes) used exponents only for powers greater than two, preferring to represent squares as repeated multiplication. Thus they would write polynomials, for example, as . "Indices" Samuel Jeake introduced the term indices in 1696. The term involution was used synonymously with the term indices, but had declined in usage and should not be confused with its more common meaning. Variable exponents, non-integer exponents In 1748, Leonhard Euler introduced variable exponents, and, implicitly, non-integer exponents by writing: 20th century As calculation was mechanized, notation was adapted to numerical capacity by conventions in exponential notation. For example Konrad Zuse introduced floating point arithmetic in his 1938 computer Z1. One register contained representation of leading digits, and a second contained representation of the exponent of 10. Earlier Leonardo Torres Quevedo contributed Essays on Automation (1914) which had suggested the floating-point representation of numbers. The more flexible decimal floating-point representation was introduced in 1946 with a Bell Laboratories computer. Eventually educators and engineers adopted scientific notation of numbers, consistent with common reference to order of magnitude in a ratio scale. For instance, in 1961 the School Mathematics Study Group developed the notation in connection with units used in the metric system. Terminology The expression is called "the square of " or " squared", because the area of a square with side-length is . (It is true that it could also be called " to the second power", but "the square of " and " squared" are more traditional) Similarly, the expression is called "the cube of " or " cubed", because the volume of a cube with side-length is . When an exponent is a positive integer, that exponent indicates how many copies of the base are multiplied together. For example, . The base appears times in the multiplication, because the exponent is . Here, is the 5th power of 3, or 3 raised to the 5th power. The word "raised" is usually omitted, and sometimes "power" as well, so can be simply read "3 to the 5th", or "3 to the 5". Integer exponents The exponentiation operation with integer exponents may be defined directly from elementary arithmetic operations. Positive exponents The definition of the exponentiation as an iterated multiplication can be formalized by using induction, and this definition can be used as soon as one has an associative multiplication: The base case is and the recurrence is The associativity of multiplication implies that for any positive integers and , and Zero exponent As mentioned earlier, a (nonzero) number raised to the power is : This value is also obtained by the empty product convention, which may be used in every algebraic structure with a multiplication that has an identity. This way the formula also holds for . The case of is controversial. In contexts where only integer powers are considered, the value is generally assigned to but, otherwise, the choice of whether to assign it a value and what value to assign may depend on context. 
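The inductive definition of positive integer powers given above (base case and recurrence) translates directly into a short recursive sketch; this is an illustration in Python and the function name is mine.

```python
def power(b, n):
    """n-th power of b for a positive integer n: b**1 = b, b**(n+1) = b**n * b."""
    if n == 1:
        return b
    return power(b, n - 1) * b

assert power(2, 10) == 1024
assert power(0.5, 3) == 0.125
```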
Negative exponents Exponentiation with negative exponents is defined by the following identity, which holds for any integer and nonzero : . Raising 0 to a negative exponent is undefined but, in some circumstances, it may be interpreted as infinity (). This definition of exponentiation with negative exponents is the only one that allows extending the identity to negative exponents (consider the case ). The same definition applies to invertible elements in a multiplicative monoid, that is, an algebraic structure, with an associative multiplication and a multiplicative identity denoted (for example, the square matrices of a given dimension). In particular, in such a structure, the inverse of an invertible element is standardly denoted Identities and properties The following identities, often called , hold for all integer exponents, provided that the base is non-zero: Unlike addition and multiplication, exponentiation is not commutative: for example, , but reversing the operands gives the different value . Also unlike addition and multiplication, exponentiation is not associative: for example, , whereas . Without parentheses, the conventional order of operations for serial exponentiation in superscript notation is top-down (or right-associative), not bottom-up (or left-associative). That is, which, in general, is different from Powers of a sum The powers of a sum can normally be computed from the powers of the summands by the binomial formula However, this formula is true only if the summands commute (i.e. that ), which is implied if they belong to a structure that is commutative. Otherwise, if and are, say, square matrices of the same size, this formula cannot be used. It follows that in computer algebra, many algorithms involving integer exponents must be changed when the exponentiation bases do not commute. Some general purpose computer algebra systems use a different notation (sometimes instead of ) for exponentiation with non-commuting bases, which is then called non-commutative exponentiation. Combinatorial interpretation For nonnegative integers and , the value of is the number of functions from a set of elements to a set of elements (see cardinal exponentiation). Such functions can be represented as -tuples from an -element set (or as -letter words from an -letter alphabet). Some examples for particular values of and are given in the following table: Particular bases Powers of ten In the base ten (decimal) number system, integer powers of are written as the digit followed or preceded by a number of zeroes determined by the sign and magnitude of the exponent. For example, and . Exponentiation with base is used in scientific notation to denote large or small numbers. For instance, (the speed of light in vacuum, in metres per second) can be written as and then approximated as . SI prefixes based on powers of are also used to describe small or large quantities. For example, the prefix kilo means , so a kilometre is . Powers of two The first negative powers of have special names: is a half; is a quarter. Powers of appear in set theory, since a set with members has a power set, the set of all of its subsets, which has members. Integer powers of are important in computer science. The positive integer powers give the number of possible values for an -bit integer binary number; for example, a byte may take different values. 
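The combinatorial interpretation above, that a power counts the functions from an m-element set to an n-element set (equivalently, the m-tuples over an n-letter alphabet), can be checked by direct enumeration. A minimal sketch in Python; the helper name is mine.

```python
from itertools import product

def count_functions(m, n):
    """Count the functions {0,...,m-1} -> {0,...,n-1} by listing all m-tuples."""
    return sum(1 for _ in product(range(n), repeat=m))

for m in range(0, 4):
    for n in range(1, 4):
        assert count_functions(m, n) == n**m

# An 8-bit byte is an 8-tuple over {0, 1}, hence 2**8 == 256 possible values.
assert count_functions(8, 2) == 256
```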
The binary number system expresses any number as a sum of powers of , and denotes it as a sequence of and , separated by a binary point, where indicates a power of that appears in the sum; the exponent is determined by the place of this : the nonnegative exponents are the rank of the on the left of the point (starting from ), and the negative exponents are determined by the rank on the right of the point. Powers of one Every power of one equals: . Powers of zero For a positive exponent , the th power of zero is zero: . For a negative\ exponent, is undefined. The expression is either defined as , or it is left undefined. Powers of negative one Since a negative number times another negative is positive, we have:Because of this, powers of are useful for expressing alternating sequences. For a similar discussion of powers of the complex number , see . Large exponents The limit of a sequence of powers of a number greater than one diverges; in other words, the sequence grows without bound: as when This can be read as "b to the power of n tends to +∞ as n tends to infinity when b is greater than one". Powers of a number with absolute value less than one tend to zero: as when Any power of one is always one: for all for Powers of a negative number alternate between positive and negative as alternates between even and odd, and thus do not tend to any limit as grows. If the exponentiated number varies while tending to as the exponent tends to infinity, then the limit is not necessarily one of those above. A particularly important case is as See below. Other limits, in particular those of expressions that take on an indeterminate form, are described in below. Power functions Real functions of the form , where , are sometimes called power functions. When is an integer and , two primary families exist: for even, and for odd. In general for , when is even will tend towards positive infinity with increasing , and also towards positive infinity with decreasing . All graphs from the family of even power functions have the general shape of , flattening more in the middle as increases. Functions with this kind of symmetry are called even functions. When is odd, 's asymptotic behavior reverses from positive to negative . For , will also tend towards positive infinity with increasing , but towards negative infinity with decreasing . All graphs from the family of odd power functions have the general shape of , flattening more in the middle as increases and losing all flatness there in the straight line for . Functions with this kind of symmetry are called odd functions. For , the opposite asymptotic behavior is true in each case. Table of powers of decimal digits Rational exponents If is a nonnegative real number, and is a positive integer, or denotes the unique nonnegative real th root of , that is, the unique nonnegative real number such that If is a positive real number, and is a rational number, with and integers, then is defined as The equality on the right may be derived by setting and writing If is a positive rational number, , by definition. All these definitions are required for extending the identity to rational exponents. On the other hand, there are problems with the extension of these definitions to bases that are not positive real numbers. For example, a negative real number has a real th root, which is negative, if is odd, and no real root if is even. In the latter case, whichever complex th root one chooses for the identity cannot be satisfied. 
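A numeric sketch, assuming Python, of the rational-exponent definition above for a positive real base, together with the caveat about negative bases and odd roots just discussed.

```python
import math

# For positive real b, b**(p/q) agrees with the q-th root of b raised to the p.
b, p, q = 5.0, 3, 4
assert math.isclose(b ** (p / q), (b ** (1 / q)) ** p)

# A negative base with an odd root has a real q-th root, but the principal
# value used by ** is not that real root, so it is taken explicitly here:
x = -8.0
real_cube_root = -((-x) ** (1 / 3))   # -2.0, the real cube root of -8
assert math.isclose(real_cube_root, -2.0)
```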
For example, See and for details on the way these problems may be handled. Real exponents For positive real numbers, exponentiation to real powers can be defined in two equivalent ways, either by extending the rational powers to reals by continuity (, below), or in terms of the logarithm of the base and the exponential function (, below). The result is always a positive real number, and the identities and properties shown above for integer exponents remain true with these definitions for real exponents. The second definition is more commonly used, since it generalizes straightforwardly to complex exponents. On the other hand, exponentiation to a real power of a negative real number is much more difficult to define consistently, as it may be non-real and have several values. One may choose one of these values, called the principal value, but there is no choice of the principal value for which the identity is true; see . Therefore, exponentiation with a basis that is not a positive real number is generally viewed as a multivalued function. Limits of rational exponents Since any irrational number can be expressed as the limit of a sequence of rational numbers, exponentiation of a positive real number with an arbitrary real exponent can be defined by continuity with the rule where the limit is taken over rational values of only. This limit exists for every positive and every real . For example, if , the non-terminating decimal representation and the monotonicity of the rational powers can be used to obtain intervals bounded by rational powers that are as small as desired, and must contain So, the upper bounds and the lower bounds of the intervals form two sequences that have the same limit, denoted This defines for every positive and real as a continuous function of and . See also Well-defined expression. Exponential function The exponential function may be defined as where is Euler's number, but to avoid circular reasoning, this definition cannot be used here. Rather, we give an independent definition of the exponential function and of , relying only on positive integer powers (repeated multiplication). Then we sketch the proof that this agrees with the previous definition: There are many equivalent ways to define the exponential function, one of them being One has and the exponential identity (or multiplication rule) holds as well, since and the second-order term does not affect the limit, yielding . Euler's number can be defined as . It follows from the preceding equations that when is an integer (this results from the repeated-multiplication definition of the exponentiation). If is real, results from the definitions given in preceding sections, by using the exponential identity if is rational, and the continuity of the exponential function otherwise. The limit that defines the exponential function converges for every complex value of , and therefore it can be used to extend the definition of , and thus from the real numbers to any complex argument . This extended exponential function still satisfies the exponential identity, and is commonly used for defining exponentiation for complex base and exponent. Powers via logarithms The definition of as the exponential function allows defining for every positive real numbers , in terms of exponential and logarithm function. Specifically, the fact that the natural logarithm is the inverse of the exponential function means that one has for every . 
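A sketch, assuming Python, of the two equivalent routes above for a positive base: the power computed as the exponential of the exponent times the natural logarithm of the base, and the same value approached through rational exponents obtained by truncating the decimal expansion of the exponent.

```python
import math

b, x = 2.0, math.pi

# Definition via the exponential function and the natural logarithm
assert math.isclose(math.exp(x * math.log(b)), b ** x)

# Approaching the same value through rational exponents p / 10**k -> x
for k in (1, 3, 6, 9):
    p = int(x * 10**k)                # numerator of the rational approximation
    print(k, b ** (p / 10**k))        # tends to 2**pi, about 8.8249778
```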
For preserving the identity one must have So, can be used as an alternative definition of for any positive real . This agrees with the definition given above using rational exponents and continuity, with the advantage to extend straightforwardly to any complex exponent. Complex exponents with a positive real base If is a positive real number, exponentiation with base and complex exponent is defined by means of the exponential function with complex argument (see the end of , above) as where denotes the natural logarithm of . This satisfies the identity In general, is not defined, since is not a real number. If a meaning is given to the exponentiation of a complex number (see , below), one has, in general, unless is real or is an integer. Euler's formula, allows expressing the polar form of in terms of the real and imaginary parts of , namely where the absolute value of the trigonometric factor is one. This results from Non-integer exponents with a complex base In the preceding sections, exponentiation with non-integer exponents has been defined for positive real bases only. For other bases, difficulties appear already with the apparently simple case of th roots, that is, of exponents where is a positive integer. Although the general theory of exponentiation with non-integer exponents applies to th roots, this case deserves to be considered first, since it does not need to use complex logarithms, and is therefore easier to understand. th roots of a complex number Every nonzero complex number may be written in polar form as where is the absolute value of , and is its argument. The argument is defined up to an integer multiple of ; this means that, if is the argument of a complex number, then is also an argument of the same complex number for every integer . The polar form of the product of two complex numbers is obtained by multiplying the absolute values and adding the arguments. It follows that the polar form of an th root of a complex number can be obtained by taking the th root of the absolute value and dividing its argument by : If is added to , the complex number is not changed, but this adds to the argument of the th root, and provides a new th root. This can be done times, and provides the th roots of the complex number. It is usual to choose one of the th root as the principal root. The common choice is to choose the th root for which that is, the th root that has the largest real part, and, if there are two, the one with positive imaginary part. This makes the principal th root a continuous function in the whole complex plane, except for negative real values of the radicand. This function equals the usual th root for positive real radicands. For negative real radicands, and odd exponents, the principal th root is not real, although the usual th root is real. Analytic continuation shows that the principal th root is the unique complex differentiable function that extends the usual th root to the complex plane without the nonpositive real numbers. If the complex number is moved around zero by increasing its argument, after an increment of the complex number comes back to its initial position, and its th roots are permuted circularly (they are multiplied by ). This shows that it is not possible to define a th root function that is continuous in the whole complex plane. Roots of unity The th roots of unity are the complex numbers such that , where is a positive integer. 
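The polar-form recipe above for the n-th roots of a nonzero complex number can be sketched with Python's cmath module; the function name is mine. Since the argument returned by cmath.polar lies in (−π, π], the k = 0 root is the principal root described above.

```python
import cmath, math

def nth_roots(z, n):
    """All n n-th roots of a nonzero complex number z, from its polar form."""
    r, theta = cmath.polar(z)   # absolute value and argument (in (-pi, pi])
    return [(r ** (1 / n)) * cmath.exp(1j * (theta + 2 * math.pi * k) / n)
            for k in range(n)]

roots = nth_roots(-8 + 0j, 3)
for w in roots:
    assert cmath.isclose(w ** 3, -8 + 0j, abs_tol=1e-9)

print(roots[0])   # principal cube root of -8: about 1 + 1.732j, not -2
```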
They arise in various areas of mathematics, such as in discrete Fourier transform or algebraic solutions of algebraic equations (Lagrange resolvent). The th roots of unity are the first powers of , that is The th roots of unity that have this generating property are called primitive th roots of unity; they have the form with coprime with . The unique primitive square root of unity is the primitive fourth roots of unity are and The th roots of unity allow expressing all th roots of a complex number as the products of a given th roots of with a th root of unity. Geometrically, the th roots of unity lie on the unit circle of the complex plane at the vertices of a regular -gon with one vertex on the real number 1. As the number is the primitive th root of unity with the smallest positive argument, it is called the principal primitive th root of unity, sometimes shortened as principal th root of unity, although this terminology can be confused with the principal value of , which is 1. Complex exponentiation Defining exponentiation with complex bases leads to difficulties that are similar to those described in the preceding section, except that there are, in general, infinitely many possible values for . So, either a principal value is defined, which is not continuous for the values of that are real and nonpositive, or is defined as a multivalued function. In all cases, the complex logarithm is used to define complex exponentiation as where is the variant of the complex logarithm that is used, which is a function or a multivalued function such that for every in its domain of definition. Principal value The principal value of the complex logarithm is the unique continuous function, commonly denoted such that, for every nonzero complex number , and the argument of satisfies The principal value of the complex logarithm is not defined for it is discontinuous at negative real values of , and it is holomorphic (that is, complex differentiable) elsewhere. If is real and positive, the principal value of the complex logarithm is the natural logarithm: The principal value of is defined as where is the principal value of the logarithm. The function is holomorphic except in the neighbourhood of the points where is real and nonpositive. If is real and positive, the principal value of equals its usual value defined above. If where is an integer, this principal value is the same as the one defined above. Multivalued function In some contexts, there is a problem with the discontinuity of the principal values of and at the negative real values of . In this case, it is useful to consider these functions as multivalued functions. If denotes one of the values of the multivalued logarithm (typically its principal value), the other values are where is any integer. Similarly, if is one value of the exponentiation, then the other values are given by where is any integer. Different values of give different values of unless is a rational number, that is, there is an integer such that is an integer. This results from the periodicity of the exponential function, more specifically, that if and only if is an integer multiple of If is a rational number with and coprime integers with then has exactly values. In the case these values are the same as those described in § th roots of a complex number. If is an integer, there is only one value that agrees with that of . 
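A sketch, assuming Python's cmath, of the definition above: the principal value of a complex power is the exponential of the exponent times the principal logarithm of the base, and the other values of the multivalued power come from shifting the logarithm by integer multiples of 2πi.

```python
import cmath, math

def principal_power(z, w):
    return cmath.exp(w * cmath.log(z))   # cmath.log is the principal logarithm

z, w = 1j, 1j                            # the classic example i**i
assert cmath.isclose(principal_power(z, w), z ** w)
print(principal_power(z, w))             # exp(-pi/2), about 0.20788, a real value

# Other branches: add 2*pi*i*k to the logarithm before exponentiating.
print([cmath.exp(w * (cmath.log(z) + 2j * math.pi * k)) for k in (-1, 1)])
```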
The multivalued exponentiation is holomorphic for in the sense that its graph consists of several sheets that define each a holomorphic function in the neighborhood of every point. If varies continuously along a circle around , then, after a turn, the value of has changed of sheet. Computation The canonical form of can be computed from the canonical form of and . Although this can be described by a single formula, it is clearer to split the computation in several steps. Polar form of . If is the canonical form of ( and being real), then its polar form is with and , where is the two-argument arctangent function. Logarithm of . The principal value of this logarithm is where denotes the natural logarithm. The other values of the logarithm are obtained by adding for any integer . Canonical form of If with and real, the values of are the principal value corresponding to Final result. Using the identities and one gets with for the principal value. Examples The polar form of is and the values of are thus It follows that So, all values of are real, the principal one being Similarly, the polar form of is So, the above described method gives the values In this case, all the values have the same argument and different absolute values. In both examples, all values of have the same argument. More generally, this is true if and only if the real part of is an integer. Failure of power and logarithm identities Some identities for powers and logarithms for positive real numbers will fail for complex numbers, no matter how complex powers and complex logarithms are defined as single-valued functions. For example: Irrationality and transcendence If is a positive real algebraic number, and is a rational number, then is an algebraic number. This results from the theory of algebraic extensions. This remains true if is any algebraic number, in which case, all values of (as a multivalued function) are algebraic. If is irrational (that is, not rational), and both and are algebraic, Gelfond–Schneider theorem asserts that all values of are transcendental (that is, not algebraic), except if equals or . In other words, if is irrational and then at least one of , and is transcendental. Integer powers in algebra The definition of exponentiation with positive integer exponents as repeated multiplication may apply to any associative operation denoted as a multiplication. The definition of requires further the existence of a multiplicative identity. An algebraic structure consisting of a set together with an associative operation denoted multiplicatively, and a multiplicative identity denoted by is a monoid. In such a monoid, exponentiation of an element is defined inductively by for every nonnegative integer . If is a negative integer, is defined only if has a multiplicative inverse. In this case, the inverse of is denoted , and is defined as Exponentiation with integer exponents obeys the following laws, for and in the algebraic structure, and and integers: These definitions are widely used in many areas of mathematics, notably for groups, rings, fields, square matrices (which form a ring). They apply also to functions from a set to itself, which form a monoid under function composition. This includes, as specific instances, geometric transformations, and endomorphisms of any mathematical structure. When there are several operations that may be repeated, it is common to indicate the repeated operation by placing its symbol in the superscript, before the exponent. 
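The monoid definition above (an associative operation together with an identity element) admits a generic sketch in which the operation is passed in as a parameter; this is an illustration in Python and the names are mine.

```python
from functools import reduce

def monoid_power(x, n, op, identity):
    """n-th power of x (n >= 0) under an associative operation op with identity."""
    return reduce(op, (x for _ in range(n)), identity)

# Numbers under multiplication ...
assert monoid_power(2, 5, lambda a, b: a * b, 1) == 32
# ... and strings under concatenation, a monoid whose identity is the empty string.
assert monoid_power("ab", 3, lambda a, b: a + b, "") == "ababab"
```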
For example, if is a real function whose valued can be multiplied, denotes the exponentiation with respect of multiplication, and may denote exponentiation with respect of function composition. That is, and Commonly, is denoted while is denoted In a group A multiplicative group is a set with as associative operation denoted as multiplication, that has an identity element, and such that every element has an inverse. So, if is a group, is defined for every and every integer . The set of all powers of an element of a group form a subgroup. A group (or subgroup) that consists of all powers of a specific element is the cyclic group generated by . If all the powers of are distinct, the group is isomorphic to the additive group of the integers. Otherwise, the cyclic group is finite (it has a finite number of elements), and its number of elements is the order of . If the order of is , then and the cyclic group generated by consists of the first powers of (starting indifferently from the exponent or ). Order of elements play a fundamental role in group theory. For example, the order of an element in a finite group is always a divisor of the number of elements of the group (the order of the group). The possible orders of group elements are important in the study of the structure of a group (see Sylow theorems), and in the classification of finite simple groups. Superscript notation is also used for conjugation; that is, , where and are elements of a group. This notation cannot be confused with exponentiation, since the superscript is not an integer. The motivation of this notation is that conjugation obeys some of the laws of exponentiation, namely and In a ring In a ring, it may occur that some nonzero elements satisfy for some integer . Such an element is said to be nilpotent. In a commutative ring, the nilpotent elements form an ideal, called the nilradical of the ring. If the nilradical is reduced to the zero ideal (that is, if implies for every positive integer ), the commutative ring is said to be reduced. Reduced rings are important in algebraic geometry, since the coordinate ring of an affine algebraic set is always a reduced ring. More generally, given an ideal in a commutative ring , the set of the elements of that have a power in is an ideal, called the radical of . The nilradical is the radical of the zero ideal. A radical ideal is an ideal that equals its own radical. In a polynomial ring over a field , an ideal is radical if and only if it is the set of all polynomials that are zero on an affine algebraic set (this is a consequence of Hilbert's Nullstellensatz). Matrices and linear operators If is a square matrix, then the product of with itself times is called the matrix power. Also is defined to be the identity matrix, and if is invertible, then . Matrix powers appear often in the context of discrete dynamical systems, where the matrix expresses a transition from a state vector of some system to the next state of the system. This is the standard interpretation of a Markov chain, for example. Then is the state of the system after two time steps, and so forth: is the state of the system after time steps. The matrix power is the transition matrix between the state now and the state at a time steps in the future. So computing matrix powers is equivalent to solving the evolution of the dynamical system. In many cases, matrix powers can be expediently computed by using eigenvalues and eigenvectors. Apart from matrices, more general linear operators can also be exponentiated. 
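A sketch, assuming Python with NumPy, of the matrix-power reading above: powers of a transition matrix advance a Markov chain several steps at once. The particular matrix is an arbitrary two-state example.

```python
import numpy as np

# Column-stochastic transition matrix: entry [i, j] is P(next = i | now = j).
P = np.array([[0.9, 0.5],
              [0.1, 0.5]])
x0 = np.array([1.0, 0.0])                       # start in state 0

x3_stepwise = P @ (P @ (P @ x0))                # three single steps
x3_power = np.linalg.matrix_power(P, 3) @ x0    # one three-step transition
assert np.allclose(x3_stepwise, x3_power)
print(x3_power)                                 # distribution after three steps
```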
An example is the derivative operator of calculus, , which is a linear operator acting on functions to give a new function . The th power of the differentiation operator is the th derivative: These examples are for discrete exponents of linear operators, but in many circumstances it is also desirable to define powers of such operators with continuous exponents. This is the starting point of the mathematical theory of semigroups. Just as computing matrix powers with discrete exponents solves discrete dynamical systems, so does computing matrix powers with continuous exponents solve systems with continuous dynamics. Examples include approaches to solving the heat equation, Schrödinger equation, wave equation, and other partial differential equations including a time evolution. The special case of exponentiating the derivative operator to a non-integer power is called the fractional derivative which, together with the fractional integral, is one of the basic operations of the fractional calculus. Finite fields A field is an algebraic structure in which multiplication, addition, subtraction, and division are defined and satisfy the properties that multiplication is associative and every nonzero element has a multiplicative inverse. This implies that exponentiation with integer exponents is well-defined, except for nonpositive powers of . Common examples are the field of complex numbers, the real numbers and the rational numbers, considered earlier in this article, which are all infinite. A finite field is a field with a finite number of elements. This number of elements is either a prime number or a prime power; that is, it has the form where is a prime number, and is a positive integer. For every such , there are fields with elements. The fields with elements are all isomorphic, which allows, in general, working as if there were only one field with elements, denoted One has for every A primitive element in is an element such that the set of the first powers of (that is, ) equals the set of the nonzero elements of There are primitive elements in where is Euler's totient function. In the freshman's dream identity is true for the exponent . As in It follows that the map is linear over and is a field automorphism, called the Frobenius automorphism. If the field has automorphisms, which are the first powers (under composition) of . In other words, the Galois group of is cyclic of order , generated by the Frobenius automorphism. The Diffie–Hellman key exchange is an application of exponentiation in finite fields that is widely used for secure communications. It uses the fact that exponentiation is computationally inexpensive, whereas the inverse operation, the discrete logarithm, is computationally expensive. More precisely, if is a primitive element in then can be efficiently computed with exponentiation by squaring for any , even if is large, while there is no known computationally practical algorithm that allows retrieving from if is sufficiently large. Powers of sets The Cartesian product of two sets and is the set of the ordered pairs such that and This operation is not properly commutative nor associative, but has these properties up to canonical isomorphisms, that allow identifying, for example, and This allows defining the th power of a set as the set of all -tuples of elements of . When is endowed with some structure, it is frequent that is naturally endowed with a similar structure. 
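Returning to the Diffie–Hellman exchange mentioned above, a toy sketch using Python's three-argument pow, which performs fast modular exponentiation; the tiny prime and exponents are chosen only for illustration, not for security.

```python
p, g = 23, 5            # public: a small prime modulus and a base g
a, b = 6, 15            # private exponents held by the two parties

A = pow(g, a, p)        # first party publishes  g**a mod p
B = pow(g, b, p)        # second party publishes g**b mod p

# Both sides compute the same shared secret g**(a*b) mod p; an eavesdropper
# would need a discrete logarithm to recover a or b from A or B.
assert pow(B, a, p) == pow(A, b, p) == pow(g, a * b, p)
print(pow(B, a, p))
```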
In this case, the term "direct product" is generally used instead of "Cartesian product", and exponentiation denotes product structure. For example (where denotes the real numbers) denotes the Cartesian product of copies of as well as their direct product as vector space, topological spaces, rings, etc. Sets as exponents A -tuple of elements of can be considered as a function from This generalizes to the following notation. Given two sets and , the set of all functions from to is denoted . This exponential notation is justified by the following canonical isomorphisms (for the first one, see Currying): where denotes the Cartesian product, and the disjoint union. One can use sets as exponents for other operations on sets, typically for direct sums of abelian groups, vector spaces, or modules. For distinguishing direct sums from direct products, the exponent of a direct sum is placed between parentheses. For example, denotes the vector space of the infinite sequences of real numbers, and the vector space of those sequences that have a finite number of nonzero elements. The latter has a basis consisting of the sequences with exactly one nonzero element that equals , while the Hamel bases of the former cannot be explicitly described (because their existence involves Zorn's lemma). In this context, can represents the set So, denotes the power set of , that is the set of the functions from to which can be identified with the set of the subsets of , by mapping each function to the inverse image of . This fits in with the exponentiation of cardinal numbers, in the sense that , where is the cardinality of . In category theory In the category of sets, the morphisms between sets and are the functions from to . It results that the set of the functions from to that is denoted in the preceding section can also be denoted The isomorphism can be rewritten This means the functor "exponentiation to the power " is a right adjoint to the functor "direct product with ". This generalizes to the definition of exponentiation in a category in which finite direct products exist: in such a category, the functor is, if it exists, a right adjoint to the functor A category is called a Cartesian closed category, if direct products exist, and the functor has a right adjoint for every . Repeated exponentiation Just as exponentiation of natural numbers is motivated by repeated multiplication, it is possible to define an operation based on repeated exponentiation; this operation is sometimes called hyper-4 or tetration. Iterating tetration leads to another operation, and so on, a concept named hyperoperation. This sequence of operations is expressed by the Ackermann function and Knuth's up-arrow notation. Just as exponentiation grows faster than multiplication, which is faster-growing than addition, tetration is faster-growing than exponentiation. Evaluated at , the functions addition, multiplication, exponentiation, and tetration yield 6, 9, 27, and () respectively. Limits of powers Zero to the power of zero gives a number of examples of limits that are of the indeterminate form 00. The limits in these examples exist, but have different values, showing that the two-variable function has no limit at the point . One may consider at what points this function does have a limit. More precisely, consider the function defined on . 
Then D can be viewed as a subset of R̄² (that is, the set of all pairs (x, y) with x, y belonging to the extended real number line R̄ = [−∞, +∞], endowed with the product topology), which will contain the points at which the function x^y has a limit. In fact, f has a limit at all accumulation points of D, except for (0, 0), (+∞, 0), (1, +∞) and (1, −∞). Accordingly, this allows one to define the powers x^y by continuity whenever 0 ≤ x ≤ +∞ and −∞ ≤ y ≤ +∞, except for 0^0, (+∞)^0, 1^(+∞) and 1^(−∞), which remain indeterminate forms. Under this definition by continuity, we obtain: x^(+∞) = +∞ and x^(−∞) = 0, when 1 < x ≤ +∞. x^(+∞) = 0 and x^(−∞) = +∞, when 0 ≤ x < 1. 0^y = 0 and (+∞)^y = +∞, when 0 < y ≤ +∞. 0^y = +∞ and (+∞)^y = 0, when −∞ ≤ y < 0. These powers are obtained by taking limits of x^y for positive values of x. This method does not permit a definition of x^y when x < 0, since pairs (x, y) with x < 0 are not accumulation points of D. On the other hand, when n is an integer, the power x^n is already meaningful for all values of x, including negative ones. This may make the definition 0^n = +∞ obtained above for negative n problematic when n is odd, since in this case x^n tends to +∞ as x tends to 0 through positive values, but not negative ones. Efficient computation with integer exponents Computing b^n using iterated multiplication requires n − 1 multiplication operations, but it can be computed more efficiently than that, as illustrated by the following example. To compute 2^100, apply Horner's rule to the exponent 100 written in binary: 100 = 2^2 + 2^5 + 2^6 = 2^2(1 + 2^3(1 + 2)). Then compute the following terms in order, reading Horner's rule from right to left: 2^2 = 4; 2^3 = 2^2 · 2 = 8; 2^6 = (2^3)^2; 2^12 = (2^6)^2; 2^24 = (2^12)^2; 2^25 = 2^24 · 2; 2^50 = (2^25)^2; 2^100 = (2^50)^2. This series of steps only requires 8 multiplications instead of 99. In general, the number of multiplication operations required to compute b^n can be reduced to ⌊log₂ n⌋ + ν(n) − 1 by using exponentiation by squaring, where ν(n) denotes the number of 1s in the binary representation of n. For some exponents (100 is not among them), the number of multiplications can be further reduced by computing and using the minimal addition-chain exponentiation. Finding the minimal sequence of multiplications (the minimal-length addition chain for the exponent) for b^n is a difficult problem, for which no efficient algorithms are currently known (see Subset sum problem), but many reasonably efficient heuristic algorithms are available. However, in practical computations, exponentiation by squaring is efficient enough, and much easier to implement. Iterated functions Function composition is a binary operation that is defined on functions such that the codomain of the function written on the right is included in the domain of the function written on the left. It is denoted g ∘ f, and defined as (g ∘ f)(x) = g(f(x)) for every x in the domain of f. If the domain of a function f equals its codomain, one may compose the function with itself an arbitrary number of times, and this defines the nth power of the function under composition, commonly called the nth iterate of the function. Thus f^n denotes generally the nth iterate of f; for example, f^3(x) means f(f(f(x))). When a multiplication is defined on the codomain of the function, this defines a multiplication on functions, the pointwise multiplication, which induces another exponentiation. When using functional notation, the two kinds of exponentiation are generally distinguished by placing the exponent of the functional iteration before the parentheses enclosing the arguments of the function, and placing the exponent of pointwise multiplication after the parentheses. Thus f^2(x) = f(f(x)) and f(x)^2 = f(x) · f(x). When functional notation is not used, disambiguation is often done by placing the composition symbol before the exponent; for example f^∘3 = f ∘ f ∘ f and f^3 = f · f · f. For historical reasons, the exponent of a repeated multiplication is placed before the argument for some specific functions, typically the trigonometric functions.
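The binary (squaring) method described in the efficient-computation passage above can be sketched in a few lines of Python. This is a generic illustration, not code from any particular source; the function name bin_pow and the toy Diffie–Hellman numbers at the end are invented for the example.

```python
def bin_pow(base, exponent, modulus=None):
    """Right-to-left binary exponentiation (exponentiation by squaring).

    Uses about log2(exponent) squarings plus one extra multiplication per
    1-bit of the exponent, instead of exponent - 1 naive multiplications.
    """
    if exponent < 0:
        raise ValueError("negative exponents are not handled in this sketch")
    result = 1
    while exponent:
        if exponent & 1:                 # current binary digit is 1
            result = result * base
            if modulus is not None:
                result %= modulus
        base = base * base               # square for the next binary digit
        if modulus is not None:
            base %= modulus
        exponent >>= 1
    return result

# The 2**100 example from the text: a handful of multiplications instead of 99.
assert bin_pow(2, 100) == 2 ** 100

# Toy Diffie-Hellman exchange over a small prime field (illustrative numbers only).
p, g = 23, 5          # public prime modulus and generator
a, b = 6, 15          # private exponents of the two parties
assert bin_pow(bin_pow(g, a, p), b, p) == bin_pow(bin_pow(g, b, p), a, p)
```

The same squaring loop works unchanged for ordinary integers and for modular arithmetic, which is why it underlies finite-field applications such as the Diffie–Hellman exchange mentioned earlier.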
So, sin^2 x and sin^2(x) both mean sin(x) · sin(x) and not sin(sin x), which, in any case, is rarely considered. Historically, several variants of these notations were used by different authors. In this context, the exponent −1 always denotes the inverse function, if it exists. So sin^−1 x denotes the inverse sine function (the arcsine). For the multiplicative inverse, fractions are generally used, as in 1/sin(x). In programming languages Programming languages generally express exponentiation either as an infix operator or as a function application, as they do not support superscripts. The most common operator symbol for exponentiation is the caret (^). The original version of ASCII included an uparrow symbol (↑), intended for exponentiation, but this was replaced by the caret in 1967, so the caret became usual in programming languages. The notations include: x ^ y: AWK, BASIC, J, MATLAB, Wolfram Language (Mathematica), R, Microsoft Excel, Analytica, TeX (and its derivatives), TI-BASIC, bc (for integer exponents), Haskell (for nonnegative integer exponents), Lua, and most computer algebra systems. x ** y: The Fortran character set did not include lowercase characters or punctuation symbols other than +-*/()&=.,' and so used ** for exponentiation (the initial version used "a xx b" instead). Many other languages followed suit: Ada, Z shell, KornShell, Bash, COBOL, CoffeeScript, Fortran, FoxPro, Gnuplot, Groovy, JavaScript, OCaml, ooRexx, F#, Perl, PHP, PL/I, Python, Rexx, Ruby, SAS, Seed7, Tcl, ABAP, Mercury, Haskell (for floating-point exponents), Turing, and VHDL. x ↑ y: Algol Reference language, Commodore BASIC, TRS-80 Level II/III BASIC. x ^^ y: Haskell (for fractional base, integer exponents), D. x⋆y: APL. In most programming languages with an infix exponentiation operator, it is right-associative, that is, a^b^c is interpreted as a^(b^c). This is because (a^b)^c is equal to a^(b*c) and thus not as useful. In some languages, it is left-associative, notably in Algol, MATLAB, and the Microsoft Excel formula language. Other programming languages use functional notation: (expt x y): Common Lisp. pown x y: F# (for integer base, integer exponent). Still others only provide exponentiation as part of standard libraries: pow(x, y): C, C++ (in math library). Math.Pow(x, y): C#. math:pow(X, Y): Erlang. Math.pow(x, y): Java. [Math]::Pow(x, y): PowerShell. In some statically typed languages that prioritize type safety such as Rust, exponentiation is performed via a multitude of methods: x.pow(y) for x and y as integers x.powf(y) for x and y as floating point numbers x.powi(y) for x as a float and y as an integer See also Double exponential function Exponential decay Exponential field Exponential growth Hyperoperation Tetration Pentation List of exponential topics Modular exponentiation Scientific notation Unicode subscripts and superscripts x^y = y^x Zero to the power of zero Notes References Exponentials Binary operations Unary operations
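As a quick check of the associativity convention noted in the programming-languages discussion above, the following Python lines confirm that the infix operator ** groups to the right, so a left grouping has to be written explicitly:

```python
# ** associates to the right, matching the usual mathematical convention.
assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512
# Left grouping must be parenthesised; (2**3)**2 equals 2**(3*2).
assert (2 ** 3) ** 2 == 2 ** (3 * 2) == 64
```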
Exponentiation
Mathematics
9,523
58,651,153
https://en.wikipedia.org/wiki/Aspergillus%20sulphureoviridis
Aspergillus sulphureoviridis is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 2016. It has been isolated from indoor air in Denmark. Growth and morphology A. sulphureoviridis has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References sulphureoviridis Fungi described in 2016 Fungus species
Aspergillus sulphureoviridis
Biology
125
29,406,238
https://en.wikipedia.org/wiki/YPEL3
Yippee-like 3 (Drosophila) is a protein that in humans is encoded by the YPEL3 gene. YPEL3 has growth inhibitory effects in normal and tumor cell lines. One of five family members (YPEL1-5), YPEL3 was named in reference to its Drosophila melanogaster orthologue. Initially discovered in a gene expression profiling assay of p53 activated MCF7 cells, induction of YPEL3 has been shown to trigger permanent growth arrest or cellular senescence in certain human normal and tumor cell types. DNA methylation of a CpG island near the YPEL3 promoter as well as histone acetylation may represent possible epigenetic mechanisms leading to decreased gene expression in human tumors. Gene location and protein structure Human YPEL3 is located on the short arm of chromosome 16 (p1611.2) and covers 4.62kb from 30015754 to 30011130 on the reverse strand. The Drosophila Yippee protein was identified as a putative zinc finger motif containing protein exhibiting a high degree of conservation among the cysteines and histidines. Zinc fingers function as structural platforms for DNA binding. Nomenclature YPEL3 was first identified as murine SUAP, named for small unstable apoptotic protein because of its apparent role in cellular growth inhibition via apoptosis when studied in myeloid precursor cell lines . SUAP later attained its current designation as YPEL3 (Yippee like three), after it was discovered to be one of five human genes possessing homology with the Drosophila Yippee protein. Discovery The Drosophila Yippee protein was originally discovered in a yeast interaction trap screen when it was found to physically interact with Hyalophora cecropia Hemolin. After subsequent cloning and sequencing experiments Yippee was found to be a conserved gene family of proteins present in a diverse range of eukaryotic organisms, ranging from fungi to humans. When analyzed at the amino acid level, Drosophila melanogaster Yippee and YPEL1 displayed a high level of homology (76%). During later sequence analysis of human chromosome 22, researchers identified a gene family YPEL1-YPEL5, which had high homology with the Drosophila Yippee gene. YPEL3’s role as a novel tumor suppressor and its involvement in cellular proliferation were discovered during experiments to investigate p53 dependent cell cycle arrest. While investigating the p53 tumor suppressor protein, microarray studies which targeted Hdmx and Hdm2, both p53 negative regulators, revealed YPEL3 as a potential p53 regulated gene in MCF7 breast cancer cells. Investigation into its function led to the discovery of YPEL3 being a novel protein whose growth suppressive activity is thought to be mediated through a cellular senescence pathway. Function Regulation by p53 p53 is a tumor suppressor protein encoded by the human gene TP53 whose function is to prevent unregulated cell growth. p53 can be activated in response to a wide variety of cellular stressors, both oncogenic and non-oncogenic. An important checkpoint in a complex pathway, activated p53 has been shown to bind DNA and transcriptionally regulate genes that can mediate a variety of cellular growth processes including DNA repair, growth arrest, cellular senescence and apoptosis. The importance of functioning p53 in the regulation of the cell cycle is evident in that 55% of human cancers exhibit p53 mutations. YPEL3 was discovered to be a possible p53 target after a screen for such genes was performed in MCF7 breast cancer cells following RNAi knockdown of p53 negative inhibitors. 
In both human normal and tumor cell lines, YPEL3 has been shown to be a p53-inducible gene. Two putative p53 binding sites have been identified, one 1.3-Kbp 5' of the YPEL3 promoter and another upstream of the YPEL3 promoter. Cellular senescence As a part of the p53 pathway response and its anti-proliferation role, cellular senescence has gained attention for its working relationship with tumor suppressor genes. Characterized by the limited ability of cultured normal cells to divide, senescence has been shown to be triggered through oncogenic activation( premature senescence) as well as telomere shortening as the result of successive rounds of DNA replication (replicative senescence). Recognized hallmarks of cellular senescence include senescence associated(SA)beta galactosidase staining and the appearance of senescence-associated heterochromatic foci(SAHF) within the nuclei of senescent cells. Although studies in murine myeloid precursor cell lines indicated YPEL3 to have a role in apoptosis, human YPEL3 failed to demonstrate an apoptotic response using sub-G1 or poly ADP ribose polymerase cleavage as accepted indicators of programmed cell death. YPEL3 has been shown to trigger premature senescence when studied in IMR90 primary human fibroblasts. Studies in U2OS osteosarcoma cells and MCF7 breast cancer cells have also demonstrated increased cellular senescence upon YPEL3 induction. As further possible evidence to its function, reduced expression of YPEL3 has been observed in ovarian, lung, and colon tumor cell lines. Epigenetic modification Epigenetics is the study of changes in gene activity that do not involve alterations to genetic code, or DNA. Instead, just above the genome sits various epigenetic markers which serve to provide instructions to activate or inactivate genes to varying degrees. This silencing or activation of genes has been recognized to play an important role in the differentiation of nascent cells and several human disease states including cancer. Unlike genetic mutations, epigenetic changes are considered reversible, although further study is needed. Two common methods of epigenetic modification are DNA methylation and histone modification. Specifically, hypermethylation of CpG islands( guanine and cytosine rich spans of DNA) near the promoters of tumor suppressor genes have been documented in specific tumor cell lines. In the case of the tumor suppressors VHL (associated with von Hippel–Lindau disease), p16, hMLH1, and BRCA1(a gene associated with breast cancer susceptibility), hypermethylation of the CpG-island has been shown to be a method of gene inactivation. Both histone acetylation and DNA methylation have been studied as possible epigenetic means of regulating YPEL3 expression. When studied in Cp70 ovarian carcinoma cells, hypermethylation of a CpG island immediately upstream of the YPEL3 promoter has been seen to down regulate YPEL3 expression. Hypermethylation seen in the promoters of tumor suppressor genes are cancer type specific, allowing each tumor type to be identifiable with an individual pattern. Such discoveries have led researchers to investigate epigenetic markers as potential diagnostic tools, prognostic factors, and indicators for the responsiveness to treatment of human cancers, although continued study is needed. References Human proteins Senescence
YPEL3
Chemistry,Biology
1,491
30,581,850
https://en.wikipedia.org/wiki/JIT%20spraying
JIT spraying is a class of computer security exploit that circumvents the protection of address space layout randomization and data execution prevention by exploiting the behavior of just-in-time compilation. It has been used to exploit the PDF format and Adobe Flash. A just-in-time compiler (JIT) by definition produces code as its data. Since the purpose is to produce executable data, a JIT compiler is one of the few types of programs that cannot be run in a no-executable-data environment. Because of this, JIT compilers are normally exempt from data execution prevention. A JIT spray attack does heap spraying with the generated code. To produce exploit code from JIT, an idea from Dion Blazakis is used. The input program, usually JavaScript or ActionScript, typically contains numerous constant values that can be erroneously executed as code. For example, the XOR operation could be used: var a = (0x11223344^0x44332211^0x44332211^ ...); JIT then will transform bytecode to native x86 code like: 0: b8 44 33 22 11 5: 35 11 22 33 44 a: 35 11 22 33 44 The attacker then uses a suitable bug to redirect code execution into the newly generated code. For example, a buffer overflow or use after free bug could allow the attack to modify a function pointer or return address. This causes the CPU to execute instructions in a way that was unintended by the JIT authors. The attacker is usually not even limited to the expected instruction boundaries; it is possible to jump into the middle of an intended instruction to have the CPU interpret it as something else. As with non-JIT ROP attacks, this may be enough operations to usefully take control of the computer. Continuing the above example, jumping to the second byte of the "mov" instruction results in an "inc" instruction: 1: 44 2: 33 22 4: 11 35 11 22 33 44 a: 35 11 22 33 44 x86 and x86-64 allow jumping into the middle of an instruction, but not fixed-length architectures like ARM. To protect against JIT spraying, the JIT code can be disabled or made less predictable for the attacker. References Computer security exploits
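To make the constant-embedding idea concrete, the following Python sketch packs a list of 32-bit constants the way the article's example describes (one mov-immediate followed by xor-immediates) and prints the resulting byte stream, so the alternative instruction stream starting at offset 1 can be inspected. It is only an illustration of the byte layout, not attacker tooling; the helper name is invented, and the opcode bytes 0xB8 (mov eax, imm32) and 0x35 (xor eax, imm32) are the ones shown in the listing above.

```python
import struct

def jit_like_bytes(constants):
    """Byte pattern produced by compiling c0 ^ c1 ^ c2 ^ ...:
    'mov eax, c0' followed by one 'xor eax, ci' per remaining constant."""
    out = bytearray()
    out += b"\xb8" + struct.pack("<I", constants[0])   # mov eax, imm32
    for c in constants[1:]:
        out += b"\x35" + struct.pack("<I", c)          # xor eax, imm32
    return bytes(out)

code = jit_like_bytes([0x11223344, 0x44332211, 0x44332211])
print(code.hex(" "))      # b8 44 33 22 11 35 11 22 33 44 35 11 22 33 44
print(code[1:].hex(" "))  # the same bytes, as seen when execution starts at offset 1
```

The point of the sketch is simply that the attacker-chosen 32-bit values appear verbatim, little-endian, inside the emitted machine code, which is what gives the unintended instruction stream its content.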
JIT spraying
Technology
485
3,659,844
https://en.wikipedia.org/wiki/Rankine%27s%20method
Rankine's method or tangential angle method is an angular technique for laying out circular curves by a combination of chaining and angles at circumference, fully exploiting the theodolite and making a substantial improvement in accuracy and productivity over existing methods. This method requires access to only one road or path of communication to lay out a curve. Points on the curve are calculated by their angular offset from the path of communication. Rankine's method is named for its discoverer, William John Macquorn Rankine, who devised it at an early stage of his career while working on railways in Ireland, on the construction of the Dublin and Drogheda line. Background This method makes sure that any line drawn from the known tangent to the curve is a chord of the curve by constraining the deflection angle of the line. Since the end points of the chords lie on the curve, these points can be used to approximate the shape of the actual curve. Procedure Let AB be the tangent line (the path of communication) touching the curve at A. Successive points on the curve are obtained by chaining a chord of arbitrary length C_n from the previously fixed point while sighting from A at a total deflection angle of δ_1 + δ_2 + ... + δ_n from AB, where δ_n is the deflection angle contributed by the nth chord, in degrees. For a chord of length C_n on a curve of radius R, δ_n = arcsin(C_n / 2R), which for short chords is approximately 90 C_n / (π R) degrees (about 1718.9 C_n / R minutes of arc). Here R is the radius of the circular curve and C_n is the arbitrarily chosen length of the nth chord. See also Dublin and Drogheda Railway References Surveying Scottish inventions
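A minimal Python sketch of the tangential-angle bookkeeping described above follows; the function name and the sample radius and chord values are illustrative assumptions, not taken from the article.

```python
import math

def cumulative_deflections(radius, chord_lengths):
    """Total deflection angle (degrees, measured from the tangent at the start
    point) for each successive chord point on a circular curve.

    A single chord of length c on a curve of radius R subtends a deflection of
    asin(c / (2R)) from the tangent; the angle used to sight the nth point is
    the running sum of the individual chord deflections.
    """
    total = 0.0
    angles = []
    for c in chord_lengths:
        total += math.degrees(math.asin(c / (2.0 * radius)))
        angles.append(round(total, 4))
    return angles

# Example: a 400 m radius curve set out with four 20 m chords from the tangent point.
print(cumulative_deflections(400.0, [20.0, 20.0, 20.0, 20.0]))
```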
Rankine's method
Engineering
266
8,529,968
https://en.wikipedia.org/wiki/Bayesian%20regret
In stochastic game theory, Bayesian regret is the expected difference ("regret") between the utility of a Bayesian strategy and that of the optimal strategy (the one with the highest expected payoff). The term Bayesian refers to Thomas Bayes (1702–1761), who proved a special case of what is now called Bayes' theorem and who provided the first mathematical treatment of a non-trivial problem of statistical data analysis using what is now known as Bayesian inference. Economics This term has been used to compare a random buy-and-hold strategy to professional traders' records. This same concept has received numerous different names, as the New York Times notes: "In 1957, for example, a statistician named James Hanna called his theorem Bayesian Regret. He had been preceded by David Blackwell, also a statistician, who called his theorem Controlled Random Walks. Other, later papers had titles like 'On Pseudo Games', 'How to Play an Unknown Game', 'Universal Coding' and 'Universal Portfolios'". References Game theory Bayesian estimation Economic theories Machine learning Bayesian statistics Social choice theory
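As a toy illustration of the definition, the following self-contained Python sketch compares the action chosen under a prior (the Bayes strategy) with an oracle that knows the realized state, and reports the difference in expected utility. The prior, the utility table and all names are invented, and this "oracle" reading of the optimal strategy is only one common way of making the comparison concrete.

```python
# Toy decision problem: hidden state, prior over states, utility[action][state].
prior = {"boom": 0.3, "bust": 0.7}
utility = {
    "buy":  {"boom": 10.0, "bust": -4.0},
    "hold": {"boom":  2.0, "bust":  1.0},
}

def expected_utility(action):
    """Expected payoff of an action under the prior."""
    return sum(prior[s] * utility[action][s] for s in prior)

# Bayes strategy: the single action maximizing expected utility under the prior.
bayes_action = max(utility, key=expected_utility)
bayes_value = expected_utility(bayes_action)

# Oracle strategy: the best action for each realized state, averaged over the prior.
oracle_value = sum(prior[s] * max(utility[a][s] for a in utility) for s in prior)

bayesian_regret = oracle_value - bayes_value
print(bayes_action, round(bayes_value, 3), round(oracle_value, 3), round(bayesian_regret, 3))
```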
Bayesian regret
Mathematics,Engineering
233
4,493,880
https://en.wikipedia.org/wiki/SN%202003B
SN 2003B was a type II supernova that was discovered in NGC 1097 by Robert Owen Evans on January 5, 2003. References See also SN 2006X Spiral Galaxy NGC 1097 External links Light curves and spectra on the Open Supernova Catalog Supernova 2003B IAUC Fornax 20030105 Supernova remnants Supernovae
SN 2003B
Chemistry,Astronomy
72
8,868,773
https://en.wikipedia.org/wiki/Authenticated%20Identity%20Body
Authenticated Identity Body or AIB is a method allowing parties in a network to share authenticated identity, thereby increasing the integrity of their SIP communications. AIBs extend other authentication methods like S/MIME to provide a more specific mechanism to introduce integrity to SIP transmissions. Parties transmitting AIBs cryptographically sign a subset of SIP message headers, and such signatures assert the message originator's identity. To meet requirements of reference integrity (for example in defending against replay attacks), additional SIP message headers such as 'Date' and 'Contact' may optionally be included in the AIB. AIB is described and discussed in RFC 3893: "For reasons of end-to-end privacy, it may also be desirable to encrypt AIBs [...]. While encryption of AIBs entails that only the holder of a specific key can decrypt the body, that single key could be distributed throughout a network of hosts that exist under common policies. The security of the AIB is therefore predicated on the secure distribution of the key. However, for some networks (in which there are federations of trusted hosts under a common policy), the widespread distribution of a decryption key could be appropriate. Some telephone networks, for example, might require this model. When an AIB is encrypted, the AIB should be encrypted before it is signed." See also References Computer networks engineering Cryptographic software VoIP protocols VoIP software
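The sketch below (Python, using the third-party cryptography package, version 3.1 or later) only illustrates the general idea of signing a chosen subset of SIP headers; it does not reproduce the S/MIME body format that RFC 3893 actually specifies, and every header value in it is invented.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Subset of headers to be covered by the signature (illustrative values only).
aib_headers = "\r\n".join([
    "From: <sip:alice@example.com>;tag=1928301774",
    "To: <sip:bob@example.net>",
    "Call-ID: a84b4c76e66710",
    "CSeq: 314159 INVITE",
    "Date: Thu, 21 Feb 2002 13:02:03 GMT",      # reference integrity
    "Contact: <sip:alice@pc33.example.com>",    # reference integrity
]).encode()

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signature = key.sign(aib_headers, padding.PKCS1v15(), hashes.SHA256())

# A verifier holding the matching public key checks that the covered headers
# were not altered; verify() raises InvalidSignature if they were.
key.public_key().verify(signature, aib_headers, padding.PKCS1v15(), hashes.SHA256())
```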
Authenticated Identity Body
Mathematics,Technology,Engineering
327
4,047,274
https://en.wikipedia.org/wiki/Gold-containing%20drugs
Gold-containing drugs are pharmaceuticals that contain gold. Sometimes these species are referred to as "gold salts". "Chrysotherapy" and "aurotherapy" are the applications of gold compounds to medicine. Research on the medicinal effects of gold began in 1935, primarily to reduce inflammation and to slow disease progression in patients with rheumatoid arthritis. The use of gold compounds has decreased since the 1980s because of numerous side effects and monitoring requirements, limited efficacy, and very slow onset of action. Most chemical compounds of gold, including some of the drugs discussed below, are not salts, but are examples of metal thiolate complexes. Use in rheumatoid arthritis Investigation of medical applications of gold began at the end of the 19th century, when gold cyanide demonstrated efficacy in treating Mycobacterium tuberculosis in vitro. Indications The use of injected gold compound is indicated for rheumatoid arthritis. Its uses have diminished with the advent of newer compounds such as methotrexate and because of numerous side effects. The efficacy of orally administered gold is more limited than injecting the gold compounds. Mechanism in arthritis The mechanism by which gold drugs affect arthritis is unknown. Administration Gold-containing drugs for rheumatoid arthritis are administered by intramuscular injection but can also be administered orally (although the efficacy is low). Regular urine tests to check for protein, indicating kidney damage, and blood tests are required. Efficacy A 1997 review (Suarez-Almazor ME, et al) reports that treatment with intramuscular gold (parenteral gold) reduces disease activity and joint inflammation. Gold-containing drugs taken by mouth are less effective than by injection. Three to six months are often required before gold treatment noticeably improves symptoms. Side effects Chrysiasis A noticeable side-effect of gold-based therapy is skin discoloration, in shades of mauve to a purplish dark grey when exposed to sunlight. Skin discoloration occurs when gold salts are taken on a regular basis over a long period of time. Excessive intake of gold salts while undergoing chrysotherapy results – through complex redox processes – in the saturation by relatively stable gold compounds of skin tissue and organs (as well as teeth and ocular tissue in extreme cases) in a condition known as chrysiasis. This condition is similar to argyria, which is caused by exposure to silver salts and colloidal silver. Chrysiasis can ultimately lead to acute kidney injury (such as tubular necrosis, nephrosis, glomerulitis), severe heart conditions, and hematologic complications (leukopenia, anemia). While some effects can be healed with moderate success, the skin discoloration is considered permanent. Other side effects Other side effects of gold-containing drugs include kidney damage, itching rash, and ulcerations of the mouth, tongue, and pharynx. Approximately 35% of patients discontinue the use of gold salts because of these side effects. Kidney function must be monitored continuously while taking gold compounds. Types Disodium aurothiomalate Sodium aurothiosulfate (Gold sodium thiosulfate) Sodium aurothiomalate (Gold sodium thiomalate) (UK) Auranofin (UK & US) Aurothioglucose (Gold thioglucose) (US) References External links "Gold salts for juvenile rheumatoid arthritis". BCHealthGuide.org "Gold salts information". 
DiseasesDatabase.com "HMS researchers find how gold fights arthritis: Sheds light on how medicinal metal function against rheumatoid arthritis and other autoimmune diseases." Harvard University Gazette (2006) "Aurothioglucose is a gold salt used in treating inflammatory arthritis". MedicineNet.com "About gold treatment: What is it? Gold treatment includes different forms of gold salts used to treat arthritis." Washington.edu University of Washington (December 30, 2004) Gold compounds Hepatotoxins Antirheumatic products Coordination complexes Nephrotoxins
Gold-containing drugs
Chemistry
854
10,531,718
https://en.wikipedia.org/wiki/Graph%20cuts%20in%20computer%20vision
As applied in the field of computer vision, graph cut optimization can be employed to efficiently solve a wide variety of low-level computer vision problems (early vision), such as image smoothing, the stereo correspondence problem, image segmentation, object co-segmentation, and many other computer vision problems that can be formulated in terms of energy minimization. Many of these energy minimization problems can be approximated by solving a maximum flow problem in a graph (and thus, by the max-flow min-cut theorem, define a minimal cut of the graph). Under most formulations of such problems in computer vision, the minimum energy solution corresponds to the maximum a posteriori estimate of a solution. Although many computer vision algorithms involve cutting a graph (e.g., normalized cuts), the term "graph cuts" is applied specifically to those models which employ a max-flow/min-cut optimization (other graph cutting algorithms may be considered as graph partitioning algorithms). "Binary" problems (such as denoising a binary image) can be solved exactly using this approach; problems where pixels can be labeled with more than two different labels (such as stereo correspondence, or denoising of a grayscale image) cannot be solved exactly, but solutions produced are usually near the global optimum. History The foundational theory of graph cuts was first applied in computer vision in the seminal paper by Greig, Porteous and Seheult of Durham University. Allan Seheult and Bruce Porteous were members of Durham's lauded statistics group of the time, led by Julian Besag and Peter Green, with the optimisation expert Margaret Greig notable as the first ever female member of staff of the Durham Mathematical Sciences Department. In the Bayesian statistical context of smoothing noisy (or corrupted) images, they showed how the maximum a posteriori estimate of a binary image can be obtained exactly by maximizing the flow through an associated image network, involving the introduction of a source and sink. The problem was therefore shown to be efficiently solvable. Prior to this result, approximate techniques such as simulated annealing (as proposed by the Geman brothers), or iterated conditional modes (a type of greedy algorithm suggested by Julian Besag) were used to solve such image smoothing problems. Although the general k-colour problem remains NP hard for k > 2, the approach of Greig, Porteous and Seheult has turned out to have wide applicability in general computer vision problems. For general problems, Greig, Porteous and Seheult's approach is often applied iteratively to sequences of related binary problems, usually yielding near optimal solutions. In 2011, C. Couprie et al. proposed a general image segmentation framework, called the "Power Watershed", that minimized a real-valued indicator function from [0,1] over a graph, constrained by user seeds (or unary terms) set to 0 or 1, in which the minimization of the indicator function over the graph is optimized with respect to an exponent. Depending on the choice of this exponent, the Power Watershed is optimized by graph cuts, by shortest paths, by the random walker algorithm, or by the watershed algorithm, each of these classical algorithms arising as a particular setting or limit of the exponent. In this way, the Power Watershed may be viewed as a generalization of graph cuts that provides a straightforward connection with other energy optimization segmentation/clustering algorithms. Binary segmentation of images Notation Image: the observed pixel data. Output: Segmentation (also called opacity) S, taking values in [0, 1] (soft segmentation).
For hard segmentation Energy function: where C is the color parameter and λ is the coherence parameter. Optimization: The segmentation can be estimated as a global minimum over S: Existing methods Standard Graph cuts: optimize energy function over the segmentation (unknown S value). Iterated Graph cuts: First step optimizes over the color parameters using K-means. Second step performs the usual graph cuts algorithm. These 2 steps are repeated recursively until convergence. Dynamic graph cuts:Allows to re-run the algorithm much faster after modifying the problem (e.g. after new seeds have been added by a user). Energy function where the energy is composed of two different models ( and ): Likelihood / Color model / Regional term — unary term describing the likelihood of each color. This term can be modeled using different local (e.g. ) or global (e.g. histograms, GMMs, Adaboost likelihood) approaches that are described below. Histogram We use intensities of pixels marked as seeds to get histograms for object (foreground) and background intensity distributions: P(I|O) and P(I|B). Then, we use these histograms to set the regional penalties as negative log-likelihoods. GMM (Gaussian mixture model) We usually use two distributions: one for background modelling and another for foreground pixels. Use a Gaussian mixture model (with 5–8 components) to model those 2 distributions. Goal: Try to pull apart those two distributions. Texon A (or ) is a set of pixels that has certain characteristics and is repeated in an image. Steps: Determine a good natural scale for the texture elements. Compute non-parametric statistics of the model-interior , either on intensity or on Gabor filter responses. Examples: Deformable-model based Textured Object Segmentation Contour and Texture Analysis for Image Segmentation Prior / Coherence model / Boundary term — binary term describing the coherence between neighborhood pixels. In practice, pixels are defined as neighbors if they are adjacent either horizontally, vertically or diagonally (4 way connectivity or 8 way connectivity for 2D images). Costs can be based on local intensity gradient, Laplacian zero-crossing, gradient direction, color mixture model,... Different energy functions have been defined: Standard Markov random field: Associate a penalty to disagreeing pixels by evaluating the difference between their segmentation label (crude measure of the length of the boundaries). See Boykov and Kolmogorov ICCV 2003 Conditional random field: If the color is very different, it might be a good place to put a boundary. See Lafferty et al. 2001; Kumar and Hebert 2003 Criticism Graph cuts methods have become popular alternatives to the level set-based approaches for optimizing the location of a contour (see for an extensive comparison). However, graph cut approaches have been criticized in the literature for several issues: Metrication artifacts: When an image is represented by a 4-connected lattice, graph cuts methods can exhibit unwanted "blockiness" artifacts. Various methods have been proposed for addressing this issue, such as using additional edges or by formulating the max-flow problem in continuous space. Shrinking bias: Since graph cuts finds a minimum cut, the algorithm can be biased toward producing a small contour. For example, the algorithm is not well-suited for segmentation of thin objects like blood vessels (see for a proposed fix). 
Multiple labels: Graph cuts is only able to find a global optimum for binary labeling (i.e., two labels) problems, such as foreground/background image segmentation. Extensions have been proposed that can find approximate solutions for multilabel graph cuts problems. Memory: the memory usage of graph cuts increases quickly as the image size increases. As an illustration, the Boykov-Kolmogorov max-flow algorithm v2.2 allocates bytes ( and are respectively the number of nodes and edges in the graph). Nevertheless, some amount of work has been recently done in this direction for reducing the graphs before the maximum-flow computation. Algorithm Minimization is done using a standard minimum cut algorithm. Due to the max-flow min-cut theorem we can solve energy minimization by maximizing the flow over the network. The max-flow problem consists of a directed graph with edges labeled with capacities, and there are two distinct nodes: the source and the sink. Intuitively, it is easy to see that the maximum flow is determined by the bottleneck. Implementation (exact) The Boykov-Kolmogorov algorithm is an efficient way to compute the max-flow for computer vision-related graphs. Implementation (approximation) The Sim Cut algorithm approximates the minimum graph cut. The algorithm implements a solution by simulation of an electrical network. This is the approach suggested by Cederbaum's maximum flow theorem. Acceleration of the algorithm is possible through parallel computing. Software http://pub.ist.ac.at/~vnk/software.html — An implementation of the maxflow algorithm described in "An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Computer Vision" by Vladimir Kolmogorov http://vision.csd.uwo.ca/code/ — some graph cut libraries and MATLAB wrappers http://gridcut.com/ — fast multi-core max-flow/min-cut solver optimized for grid-like graphs http://virtualscalpel.com/ — An implementation of the Sim Cut; an algorithm for computing an approximate solution of the minimum s-t cut in a massively parallel manner. References Bayesian statistics Computer vision Computational problems in graph theory Image segmentation
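To show the binary (two-label) case mentioned at the start of the article in runnable form, here is a small, hedged sketch of binary image denoising via an s–t minimum cut, using NetworkX's min-cut routine; the weights, the helper name and the toy image are illustrative assumptions rather than anything prescribed above.

```python
import networkx as nx
import numpy as np

def graph_cut_denoise(noisy, data_weight=1.0, smooth_weight=0.6):
    """Binary MRF denoising: minimize sum_p D_p(x_p) + smooth_weight * sum_{p~q} [x_p != x_q].

    Source side of the minimum cut = label 0, sink side = label 1:
      capacity(s -> p) = penalty for labeling p as 1 (paid when p falls on the sink side),
      capacity(p -> t) = penalty for labeling p as 0,
      neighbor edges of capacity smooth_weight are cut when the two labels differ.
    """
    h, w = noisy.shape
    G = nx.DiGraph()
    for i in range(h):
        for j in range(w):
            p = (i, j)
            cost_label1 = data_weight if noisy[i, j] == 0 else 0.0  # disagrees with observation
            cost_label0 = data_weight if noisy[i, j] == 1 else 0.0
            G.add_edge("s", p, capacity=cost_label1)
            G.add_edge(p, "t", capacity=cost_label0)
            for q in ((i + 1, j), (i, j + 1)):                      # 4-connectivity
                if q[0] < h and q[1] < w:
                    G.add_edge(p, q, capacity=smooth_weight)
                    G.add_edge(q, p, capacity=smooth_weight)
    _, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
    labels = np.zeros_like(noisy)
    for node in sink_side:
        if node != "t":
            labels[node] = 1
    return labels

noisy = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [1, 1, 0, 1]])
print(graph_cut_denoise(noisy))
```

Raising smooth_weight relative to data_weight makes the coherence term dominate and smooths away isolated pixels, which is exactly the trade-off the energy function above expresses.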
Graph cuts in computer vision
Mathematics,Engineering
1,925
19,085,993
https://en.wikipedia.org/wiki/Alexander%20Davydov
Alexander Sergeevich Davydov (, ) (26 December 1912 – 19 February 1993) was a Soviet and Ukrainian physicist. Davydov graduated from Moscow State University in 1939. In 1963-1990 he was Director of Institute for Theoretical Physics of the Ukrainian Academy of Sciences. His main contributions were in theory of absorption, scattering and dispersion of the light in molecular crystals. In 1948, he predicted the phenomenon that is known as Davydov splitting or factor-group splitting, "the splitting of bands in the electronic or vibrational spectra of crystals due to the presence of more than one (interacting) equivalent molecular entity in the unit cell." In the period 1958–1960 he developed the theory of collective excited states in spherical and non-spherical nuclei, known as Davydov-Filippov Model and Davydov-Chaban Model. In 1973, Davydov applied the concept of molecular solitons in order to explain the mechanism of muscle contraction in animals. He studied theoretically the interaction of intramolecular excitations or excess electrons with autolocal breaking of the translational symmetry. These excitations are now known as Davydov solitons. In 1979, Davydov published the first textbook on quantum biology entitled "Biology and Quantum Mechanics" in Russian, which was then translated in English three years later. Publications Theory of Absorption of Light by Molecular Crystals, Naukova Dumka, Kiev (1951) Theory of Atomic Nuclei, Nauka, Moscow (1958) Theory of Molecular Excitons, McGraw-Hill, New York (1962) Quantum Mechanics, Pergamon Press (1965) Theory of Molecular Excitons, Plenum Press, New York (1971) Theory of Solids, Nauka, Moscow (1980) Biology and Quantum Mechanics, Pergamon Press (1982) Solitons in Molecular Systems, D. Reidel (1985) Solitons in Bioenergetics, Naukova Dumka, Kiev (1986) The Theoretical Investigation of High-Temperature Superconductivity, Physics Reports, vol. 190, no. 4–5, pp. 191–306 (1990) High-Temperature Superconducitvity, Naukova Dumka, Kiev (1990). See also Metal–semiconductor junction References 1912 births 1993 deaths People from Yevpatoria People from Yevpatoriysky Uyezd 20th-century Ukrainian physicists Quantum biology Members of the National Academy of Sciences of Ukraine Heroes of Socialist Labour Recipients of the Order of Lenin Burials at Baikove Cemetery
Alexander Davydov
Physics,Biology
524
13,089,065
https://en.wikipedia.org/wiki/RGS4
Regulator of G protein signaling 4 also known as RGP4 is a protein that in humans is encoded by the RGS4 gene. RGP4 regulates G protein signaling. Function Regulator of G protein signalling (RGS) family members are regulatory molecules that act as GTPase activating proteins (GAPs) for G alpha subunits of heterotrimeric G proteins. RGS proteins are able to deactivate G protein subunits of the Gi alpha, Go alpha and Gq alpha subtypes. They drive G proteins into their inactive GDP-bound forms. Regulator of G protein signaling 4 belongs to this family. All RGS proteins share a conserved 120-amino acid sequence termed the RGS domain which conveys GAP activity. Regulator of G protein signaling 4 protein is 37% identical to RGS1 and 97% identical to rat Rgs4. This protein negatively regulates signaling upstream or at the level of the heterotrimeric G protein and is localized in the cytoplasm. Clinical significance A number of studies associate the RGS4 gene with schizophrenia, while some fail to detect an association. RGS4 is also of interest as one of the three main RGS proteins (along with RGS9 and RGS17) involved in terminating signalling by the mu opioid receptor, and may be important in the development of tolerance to opioid drugs. Inhibitors cyclic peptides CCG-4986 Interactions RGS4 has been shown to interact with: COPB2, ERBB3, and GNAQ. References Further reading Proteins
RGS4
Chemistry
318
68,572,993
https://en.wikipedia.org/wiki/Pirellulales
Pirellulales is an order of bacteria. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI) See also List of bacterial orders List of bacteria genera References Planctomycetota Bacteria orders Taxa described in 2020
Pirellulales
Biology
73
20,381,835
https://en.wikipedia.org/wiki/Educational%20inequality
Educational Inequality is the unequal distribution of academic resources, including but not limited to school funding, qualified and experienced teachers, books, physical facilities and technologies, to socially excluded communities. These communities tend to be historically disadvantaged and oppressed. Individuals belonging to these marginalized groups are often denied access to schools with adequate resources and those that can be accessed are so distant from these communities. Inequality leads to major differences in the educational success or efficiency of these individuals and ultimately suppresses social and economic mobility. Inequality in education is broken down into different types: regional inequality, inequality by sex, inequality by social stratification, inequality by parental income, inequality by parent occupation, and many more. Measuring educational efficacy varies by country and even provinces/states within the country. Generally, grades, GPA test scores, other scores, dropout rates, college entrance statistics, and college completion rates are used to measure educational success and what can be achieved by the individual. These are measures of an individual's academic performance ability. When determining what should be measured in terms of an individual's educational success, many scholars and academics suggest that GPA, test scores, and other measures of performance ability are not the only useful tools in determining efficacy. In addition to academic performance, attainment of learning objectives, acquisition of desired skills and competencies, satisfaction, persistence, and post-college performance should all be measured and accounted for when determining the educational success of individuals. Scholars argue that academic achievement is only the direct result of attaining learning objectives and acquiring desired skills and competencies. To accurately measure educational efficacy, it is imperative to separate academic achievement because it captures only a student's performance ability and not necessarily their learning or ability to effectively use what they have learned. Much of educational inequality is attributed to economic disparities that often fall along racial lines, and much modern conversation about educational equity conflates the two, showing how they are inseparable from residential location and, more recently, language. In many countries, there exists a hierarchy or a main group of people who benefit more than the minority people groups or lower systems in that area, such as with India's caste system for example. In a study about education inequality in India, authors, Majumbar, Manadi, and Jos Mooij stated "social class impinges on the educational system, educational processes and educational outcomes" (Majumdar, Manabi and Jos Mooij). Sometimes race, religion and ethnicity can decide a child's future and opportunities in education and further. For girls who are already disadvantaged, having school available only for the higher classes or the majority of people group in a diverse place like South Asia can influence the systems into catering for one kind of person, leaving everyone else out. This is the case for many groups in South Asia. In an article about education inequality being affected by people groups, the organization Action Education claims that "being born into an ethnic minority group or linguistic minority group can seriously affect a child's chance of being in school and what they learn while there" (Action Education). 
We see more and more resources only being made for certain girls, predominantly who speak the language of the city. In contrast, more girls from rural communities in South Asia are left out and thus not involved with school. Educational inequality between white students and minority students continues to perpetuate social and economic inequality. Another leading factor is housing instability, which has been shown to increase abuse, trauma, speech, and developmental delays, leading to decreased academic achievement. Along with housing instability, food insecurity is also linked with reduced academic achievement, specifically in math and reading. Having no classrooms and limited learning materials negatively impacts the learning process for children. In many parts of the world, old and worn textbooks are often shared by six or more students at a time. Throughout the world, there have been continuous attempts to reform education at all levels. With different causes that are deeply rooted in history, society, and culture, this inequality is difficult to eradicate. Although difficult, education is vital to society's movement forward. It promotes "citizenship, identity, equality of opportunity and social inclusion, social cohesion, as well as economic growth and employment," and equality is widely promoted for these reasons. Global educational inequality is clear in the ongoing learning crisis, where over 91% of children across the world are enrolled in primary schooling; however, a large proportion of them are not learning. A World Bank study found that "53 percent of children in low- and middle-income countries cannot read and understand a simple story by the end of primary school." The recognition of global educational inequality has led to the adoption of the United Nations Sustainable Development Goal 4 which promotes inclusive and equitable quality education for all. Unequal educational outcomes are attributed to several variables, including family of origin, gender, and social class. Achievement, earnings, health status, and political participation also contribute to educational inequality within the United States and other countries. The ripple effect of this inequality are quite disastrous, they make education in Africa more of a theoretical rather than a practical experience majorly due to the lack of certain technological equipment that should accompany their education. Family background In Harvard's "Civil Rights Project," Lee and Orfield identify family background as the most influential factor in student achievement. A correlation exists between the academic success of parents with the academic success of their children. Only 11% of children from the bottom fifth earn a college degree, while well over half of the top fifth earn one. Linked with resources, White students tend to have more educated parents than students from minority families. This translates to a home-life that is more supportive of educational success. This often leads to them receiving more at-home help, having more books in their home, attending more libraries, and engaging in more intellectually intensive conversations. Children, then, enter school at different levels. Poor students are behind in verbal memory, vocabulary, math, and reading achievement and have more behavior problems. This leads to their placement in different level classes that track them. These courses almost always demand less from their students, creating a group that is conditioned to lack educational drive. 
These courses are generally non-college bound and are taught by less-qualified teachers. Also, family background influences cultural knowledge and perceptions. Middle-class knowledge of norms and customs allows students with this background to navigate the school system better. Parents from this class and above also have social networks that are more beneficial than those based in lower classes. These connections may help students gain access to the right schools, activities, etc. Additionally, children from poorer families, who are often minorities, come from families that distrust institutions. America's history of racism and discrimination has created a perceived and/or existent ceiling on opportunities for many poor and minority citizens. This ceiling muffles academic aspirations and growth. Jonathan Wai concluded that during the COVID-19 pandemic, Harvard often provided more lip service than concrete measures to fully support students from minority and low-income communities. He also noted that poor students often faced greater disadvantages than wealthy students during the pandemic, exacerbating the existing economic inequalities at Harvard. The recent and drastic increase of Latino immigrants has created another major factor in educational inequality. As more and more students come from families where English is not spoken at home, they often struggle to overcome a language barrier and simply learn subjects. They more frequently lack assistance at home because it is common for the parents not to understand the work that is in English. Furthermore, research reveals the summer months as a crucial time for the educational development of children. Students from disadvantaged families experience greater losses in skills during summer vacation. Students from lower socioeconomic classes come disproportionately from single-parent homes and dangerous neighborhoods. 15% of White children are raised in single-parent homes and 10% of Asian children are. 27% of Latinos are raised in single-parent homes and 54% of African-American children are. Fewer resources, less parental attention, and more stress all influence children's performance in school. A broad range of factors contributes to the emergence of socioeconomic achievement gaps. The interaction of different aspects of socialization is outlined in the model of mediating mechanisms between social background and learning outcomes. The model describes a multi-step mediation process. Socially privileged families have more economic, personal, and social resources available than socially disadvantaged families. Differences in family resources result in differences in the learning environments experienced by children. Children with various social backgrounds experience different home learning environments, attend different early childhood facilities, schools, school-related facilities, and recreational facilities, and have different peer groups. Due to these differences in learning environments, children with various social backgrounds carry out different learning activities and develop different learning prerequisites. Gender Throughout the world, educational achievement varies by gender. The exact relationship differs across cultural and national contexts. Female disadvantage Obstacles preventing females' ability to receive a quality education include traditional attitudes towards gender roles, poverty, geographical isolation, gender-based violence, early marriage and pregnancy.
Throughout the world, there is an estimated 7 million more girls than boys out of school. This "girls gap" is concentrated in several countries including Somalia, Afghanistan, Togo, the Central African Republic, and Democratic Republic of the Congo. In the Democratic Republic of the Congo, girls are outnumbered two to one. The gender constructs of Southeast Asia run deep into history and affect all spheres of the future lives of young women. Traditional gender roles placed upon girls results in the drop of women from school and the trend of less educated older women in Southeast Asia. In a journal about the women of the Devanga community in India, Pooja Haridarshan says that "70% [of] women in South Asia are married at a young age, which is coupled with early childbearing and a lack of decision-making abilities within the traditional family structures, further enhancing their "disadvantaged" position in society" (Haridarshan). The women are expected to marry young, bear and raise children, leaving little to no room for them to receive an education, encouraging youngers girls to also follow in their footsteps. But the scary thing is that less educated women could become poor because of their lack of resources. This is an unjust situation where there is an evident divide between men's educational success and women's education success. This is where our one brainstorms a solution. In an article about the wellbeing of children in South Asia, authors Jativa Ximena and Michelle Mills states that "in societies and communities where girls' mobility is restricted, more opportunities need to be provided for girls to continue education and skills training" (Ximena and Mills). Socialized gender roles affect females' access to education. For example, in Nigeria, children are socialized into their specific gender roles as soon as their parents know their gender. Men are the preferred gender and are encouraged to engage in computer and scientific learning while women learn domestic skills. These gender roles are deep-rooted within the state, however, with the increase of westernized education within Nigeria, there has been a recent increase in women's ability to receive an equal education. There is still much to be changed, though. Nigeria still needs policies that encourage educational attainment for men and women based on merit, rather than gender. Females are shown to be at risk of being attacked in at least 15 countries. Attacks can occur because individuals within those countries do not believe women should receive an education. Attacks include kidnappings, bombings, torture, rape, and murder. In Somalia, girls have been abducted. In Colombia, the Democratic Republic of the Congo, and Libya students were reported to have been raped and harassed. In Pakistan and Afghanistan, schools and busses have been bombed and gassed. Early marriage affects females' ability to receive an education. "The gap separating men and women in the job market remains wide in many countries, whether in the North or the South. With marginal variables between most countries, women have a lower employment rate, are unemployed longer, are paid less, and have less secure jobs." "Young women, particularly suffer double discrimination. First for being young, in the difficult phase of transition between training and working life, in an age group that has, on an average, twice the jobless rate or older workers and are at the mercy of employers who exploit them under the pretext of enabling them to acquire professional experience. 
Secondly, they are discriminated against for being women and are more likely to be offered low paying or low-status jobs." "Discrimination is still very much in evidence and education and training policies especially targeting young women are needed to restore a balance." "Although young women are increasingly choosing typically 'male' professions, they remain over-represented in traditionally female jobs, such as secretaries, nurses, and underrepresented in jobs with responsibility and the professions." In early grades, boys and girls perform equally in mathematics and science, but boys score higher on advanced mathematics assessments such as the SAT college entrance examination. Girls are also less likely to participate in class discussions and more likely to be silent in the classroom. Some believe that females have a way of thinking and learning differently from males. Belenky and colleagues (1986) conducted research that found an inconsistency between the kind of knowledge appealing to women and the kind of knowledge being taught in most educational institutions. Another researcher, Gilligan (1982), found that the knowledge appealing to females was caring, interconnection, and sensitivity to the needs of others, while males found separation and individualism appealing. Females are more field-dependent, or group-oriented than males, which could explain why they may experience problems in schools that primarily teach using an individualistic learning environment. As Teresa Rees finds, the variance of women in mathematics and science fields can be explained by the lack of attention paid to the gender dimension in science. Regarding gender differences in academic performance, Buchmann, DiPrete, and McDaniel claim that gender-based accomplishments on standardized tests show the continuation of the "growing male advantage in mathematics scores and growing female advantage in reading scores as they move through school". Ceci, Williams and Barnett's research about women's underrepresentation in science reinforces this claim by saying that women experience "stereotype threat [which] impedes working memory" and as a result receive lower grades in standardized or mathematics tests. Nonetheless, Buchmann, DiPrete and McDaniel claim that the decline of traditional gender roles, alongside the positive changes in the labor market that now allow women to get "better-paid positions in occupational sectors" may be the cause for a general incline in women's educational attainment. Male disadvantage In 51 countries, girls are enrolled at higher rates than boys. Particularly in Latin America, the difference is attributed to the prominence of gangs and violence attracting male youth. The gangs pull the males in, distracting them from school and causing them to drop out. In some countries, female high school and graduation rates are higher than for males. In the United States, for example, 33% more bachelor's degrees were conferred on females than males in 2010–2011. This gap is projected to increase to 37% by 2021–2022 and is over 50% for masters and associate degrees. Dropout rates for males have also increased over the years in all racial groups, especially in African Americans. They have exceeded the number of high school and college dropout rates than any other racial ethnicity for the past 30 years. Most of the research found that males were primarily the most "left behind" in education because of higher graduation dropout rates, lower test scores, and failing grades. 
They found that as males get older, primarily from ages 9 to 17, they are less likely to be labeled "proficient" in reading and mathematics than girls were. In general, males arrive in kindergarten much less ready and prepared for schooling than females. This creates a gap that continually increases over time into middle and high school. Nationally, there are 113 boys in 9th grade for every 100 girls, and among African-American males, there are 123 boys for every 100 girls. States have discovered that 9th grade has become one of the biggest dropout years. Whitmire and Bailey continued their research and looked at the potential for any gender gap change when males and females were faced with the decision of potentially going to college. Females were more likely to go to college and receive bachelor's degrees than males were. From 1971 to about 1981, women were the less fortunate and had lower reported numbers of bachelor's degrees. However, since 1981, males have been at a larger disadvantage, and the gap between males and females keeps increasing. Boys are more likely to be disciplined than girls, and are also more likely to be classified as learning disabled. Males of color, especially African-American males, experience a high rate of disciplinary actions and suspensions. In 2012, one in five African-American males received an out of school suspension. In Asia, males are expected to be the main financial contributor of the family. So many of them go to work right after they become adults physically, which means at the age of around 15 to 17. This is the age they should obtain a high school education. Males get worse grades than females do regardless of year or country examined in most subjects. In the U.S. women are more likely to have earned a bachelor's degree than men by the age of 29. Female students graduate high school at a higher rate than male students. In the U.S. in 2003, 72 percent of female students graduated, compared with 65 percent of male students. The gender gap in graduation rates is particularly large for minority students. Men are under-represented among both graduate students and those who successfully complete masters and doctoral degrees in the U.S. Proposed causes include boys having worse self-regulation skills than girls and being more sensitive to school-quality and home environment than girls. Boys perceiving education as feminine and lacking educated male role-models may also contribute to males being less likely to complete college. It has been suggested that male students in the U.S. perform worse on reading tests and read less than their female counterparts in part because males are more physically active, more aggressive, less compliant, and because school reading curricula do not match their interests. It has also been suggested that teacher bias in grading may account for up to 21% of the male deficit in grades. One study found that male disadvantage in education is independent of inequality in social and economic participation. Race In the United States During the early 18th century, African-American students and Mexican-American students were barred from attending schools with white students in most states. This was due to the court case Plessy v. Ferguson (1896), in which it was decided that educational facilities were allowed to segregate white students from students of color as long as the educational facilities were considered equal. 
Educational facilities did not follow the federal mandate: a study of Southern states' per-pupil expenditures on instruction covering the period 1890 to 1950 found that, on average, white students received 17 to 70 percent more educational expenditures than their Black counterparts. The first federal legal challenge of these unequal segregated educational systems occurred in California – Mendez v. Westminster in 1947, followed by Brown v. Board of Education in 1954. The decision in Brown v. Board of Education led to the desegregation of schools by federal law, but decades of inferior education, disparities in household incomes between whites and people of color, and racial wealth gaps have left people of color at a disadvantage. According to the EdBuild report from 2019, non-white school districts receive 23 billion dollars less than white school districts, even though they serve the same number of students. School districts rely heavily on local taxes, so districts in white communities, which tend to be wealthier, receive more money per student than nonwhite districts: $13,908 per student, compared to $11,682 per student, respectively. Differences in academic skills among children of different races start at an early age. According to the National Assessment of Educational Progress, there is a persistent gap in which Black and Latino children are less likely to demonstrate cognitive proficiency than their Asian and White counterparts. In the data, 89 percent of Asian and White children presented the ability to understand written and spoken words, while only 79 and 78 percent of Black and Latino children, respectively, were able to comprehend written and spoken words, a trend that continued into ages 4–6. Studies exploring the U.S. education system's racial achievement disparities typically investigate factors like where students live, where they go to school, family socioeconomic status (SES), and broader influences like structural racism. Genetic and cultural explanations for social outcome disparities between racial groups are not supported, are increasingly disputed by educators, and may indirectly contribute to inequitable outcomes by impacting expectations for students of color or distracting from policy-addressable issues by "blaming the victim." For example, "debunked" theories attributing achievement disparities to "fear of acting white" may undermine policy support for addressing systemic issues such as economic inequality, implicit racial bias, and school discipline disparities. Immigration status The Immigrant paradox states that "immigrants, who are disadvantaged by inequality, may use their disadvantages as a source of motivation". A study based in New York suggested that children of immigrant descent outperformed their native-born student counterparts. The paradox explains that the gratefulness of immigrant children allows them to enjoy academic advantages that may not have been accessible at one time. This, in turn, allows for more effort and better outcomes from these students. This was also evident in the National Education Longitudinal Study, which showed that immigrant children often achieved higher scores on math and science tests. It has been reported that "evidence of the immigrant advantage was stronger from Asian immigrant families than for youth from Latin American", which may cause some inequality in itself. This may vary depending on differences between pre- and post-migration conditions. 
In 2010, researchers from Brown University published their results on how immigrant children are thriving in school. Some of their conclusions were that first-generation immigrant children show lower levels of delinquency and bad behavior than later generations. This implies that first-generation immigrant children often start behind American-born children in school, but they progress quickly and have elevated rates of learning growth. In the U.S., having more immigrant peers appears to increase U.S.-born students' chances of high school completion. Low-skilled immigration, in particular, is strongly associated with more years of schooling and improved academic performance by third-plus generation students. Many people assume that immigrant children will be given enough life skills to succeed. This is not always true, as there is more to life than just getting through high school. The International Student Services Association (ISSA) has a goal to help foreign-born students succeed. It does this by providing two different programs within school hours, which can be adapted to accommodate each school and individual. These programs are called The Career Readiness Program and The College Readiness Program. The author Haowen Ge mentions, "Since their beginning in 2019, both programs have been extremely successful with 90% of ISSA students continuing to certification programs, college and/or internships." Just because these students have begun their enrollment in the education system does not mean they will remain there. According to SOS Children's Villages, "68 million people worldwide have fled their homes because of conflict, unrest or disaster. Children account for more than half of this total. Child refugees face incredible risks and dangers – including disease, malnutrition, violence, labor exploitation and trafficking." People flee their homes because of anti-immigrant policies, which take a toll on the national school system of the United States. A national study's results show that "Ninety percent of administrators in this study observed behavioral or emotional problems in their immigrant students. And 1 in 4 said it was extensive." This suggests that immigration policies within the United States take a toll on immigrant children in the education system. Latino students and college preparedness Latino migration In the United States, Latinos are the fastest-growing population. As of 1 July 2016, Latinos make up 17.8 percent of the U.S. population, making them the largest minority. People from Latin America migrate to the United States because of their inability to obtain stability, whether financial stability or safety as refugees. Their home countries are either dealing with an economic crisis or involved in a war. The United States capitalizes on the migration of Latin American migrants. Taking advantage of their precarious legal status, American businesses employ them and pay them an extremely low wage. As of 2013, 87% of undocumented men and 57% of undocumented women were part of the U.S. economy. Diaspora plays a role in Latinos migrating to the United States. Diaspora is the dispersion of any group from their original homeland. New York City holds a substantial share of the Latino population. More than 2.4 million Latinos inhabit New York City, the largest group being Puerto Ricans, followed by Dominicans. This large Latino population contributes to the statistic that at least four million United States-born children have at least one immigrant parent. 
Children of immigrant origin are the fastest growing population in the United States. One in every four children comes from an immigrant family. Many Latino communities are built around immigrant origins, which play a big part in their society. The growth in the number of children of immigrant parents has not gone unnoticed, and in some ways society and the government accommodate it. For example, many undocumented immigrants can file taxes, children who attend college can use their parents' information to obtain financial aid, parents may be eligible for government help through the child, and so on. Yet a lack of knowledge about financial help for post-secondary education widens the gap, discouraging Latino children from obtaining higher education. Education In New York City, Mayor de Blasio implemented 3-K for All, under which every child can attend pre-school at the age of three, free of charge. Although education is free from kindergarten through 12th grade, many children with immigrant parents do not take advantage of all the primary education benefits. Children who come from a household that contains at least one immigrant parent are less likely to attend early childhood or preschool programs. College preparation Preparing for college access is a complex process for American-born children of immigrant parents in Latino communities. From the beginning of junior year through senior year of high school, students prepare for college research and the application process. Government help toward college tuition, such as federal financial aid and TAP, requires parents' or guardians' personal information, and this is where doubt and apprehension arise. The majority of immigrant parents and guardians do not have most of the qualifications required for the application. The focus is on how immigrants and their American-born children navigate the education system to attain a college education. Due to the growth of the Latino population, the number of Latino high school graduates has increased as well. Latino students are mainly represented in two-year rather than four-year institutions. This can occur for two reasons: the lower cost of attending a two-year institution and its closer proximity to home. Young teens with a desire to obtain a higher education run into limitations related to their parents' or guardians' personal information. Many children miss out on public assistance because their parents' limited English proficiency makes it difficult to fill out forms and applications, or simply because of the parents' fear of giving personal information that could identify their status; the same applies to Federal Student Aid. Federal Student Aid comes from the federal government and helps a student pay for college expenses in three possible forms: grants, work-study, and loans. One step of the federal aid application requires personal as well as financial information from one or both parents or guardians. This may halt the application because of the fear of providing personal information. The chances of young teens entering college are reduced when parents' personal information is not provided. Many young teens with immigrant parents belong to minority groups whose income is not sufficient to pay college tuition or repay loans with interest. The perception of college as highly expensive makes Latino students less likely to attend a four-year institution or even pursue postsecondary education. 
Approximately 50% of Latinos received financial aid in 2003–2004, but they remained the minority group receiving the lowest average federal award. In addition, loans are not typically granted to them. Standardized tests In addition to financial constraints, standardized tests are required when applying to a four-year postsecondary institution. In the United States, the two examinations that are normally taken are the SAT and the ACT. Latino students do generally take these exams, and from 2011 to 2015 there was a 50% increase in the number of Latino students taking the ACT. As for the SAT, in 2017, 24% of test takers identified as Latino/Hispanic. Out of that percentage, only 31 percent met the college-readiness benchmark for both portions of the test (ERW and Math). Native American students and higher education Economic disparity and representation Economic disparity is a significant issue faced by Native American students that influences their placement in high-poverty and rural elementary and high schools, resulting in disadvantageous conditions for accessing higher education. This disadvantage is further exacerbated by the underrepresentation of Native American students in gifted and talented programs, with lower identification rates compared to their White counterparts. The scarcity of usable data on Native American students in gifted programming also mirrors a broader underrepresentation of this demographic within educational research. This issue has been extensively scrutinized through peer-reviewed research, with an emphasis on its prevalence within various scholarly articles. Smith et al.'s (2014) study concentrated on the representation of Native American students in STEM (Science, Technology, Engineering, and Mathematics) disciplines. Their research unearthed a notable underrepresentation of these students within STEM fields, contributing to both personal and societal disadvantages. Cultural values, identity, and support programs Further insights emerge from Smith et al.'s (2014) study, highlighting the strong ties that many Native American students maintain with their tribal cultures and communities, along with their high regard for education's instrumental significance. This finding suggests that Native American students exhibit a proclivity towards endorsing individualistic goals, a potential asset for supporting their academic and career aspirations. Moreover, specialized support programs have been shown to effectively address challenges faced by Native American students. These programs foster cultural identity, create a sense of community, and mitigate the negative impacts of racism experienced by these students. By enhancing belonging and reducing the racial/ethnic achievement gap, these initiatives play a vital role in promoting the academic success of Native American students in STEM fields. Cultural identity and academic persistence Jackson et al. (2003) conducted a separate study exploring factors that influence the academic persistence of Native American college students. Their research highlighted the pivotal role of confidence in academic success and persistence. Confidence and competence emerged as key motivating factors for Native American students striving for academic achievement. The study also emphasized the importance of accommodating Native American culture within educational institutions and addressing instances of racism, as these factors significantly impact students' persistence in higher education. 
Qualitative interviews with successful Native American college students identified themes related to their persistence in college, including dealing with racism and developing independence and assertiveness. Lack of academic persistence among Native American students has been attributed to colleges' failure to accommodate Native American culture. Furthermore, the personal experience of racism has been found to negatively impact Native American students' persistence in higher education. Early education racial inequality Racial inequality affects students from a young age. High-quality early childhood education (ECE) programs are offered to children to help them enter kindergarten with a good understanding of how to succeed throughout school. There has been a noticeable difference in the quality of education, with Black or Hispanic groups being provided with less effective preschool learning programs than White non-Hispanic groups in the preschool setting. This causes White children to achieve a higher level of education than Black or Hispanic children. White children are more likely to enter higher-level ECE programs than Black or Hispanic children, with the latter being in cheaper and less effective education programs. The American Psychological Association said that "Research shows that compared with white students, black students are more likely to be suspended or expelled, less likely to be placed in gifted programs and subject to lower expectations from their teachers." In 2001–2004, eleven states conducted a study on the education quality gap between races in ECE programs and found that Black children were more likely to attend lower-quality programs than Whites. A study of Black children entering kindergarten in 2016 found that they were behind in math and English by up to nine months, compared to White children. Kids who are behind in kindergarten are projected to stay behind throughout most of their academic careers. The 2016 study found that there still is a gap between races in ECE programs. "Strikingly, minority students are about half as likely to be assigned to the most effective teachers and twice as likely to be assigned to the least effective." As of 2016, 24% of White children are enrolled in high-quality early education, whereas only 15% of Black children fall into that category. Tests run in 2016 indicated that if Black and Hispanic children were to attend high-quality early education for one year, the education gap in English between them and White children would nearly disappear, and the gap in math would drop to around five months going into kindergarten. Rural and inner-city education There are large-scale systemic inequalities within rural and inner-city education systems. The study of these differences, especially within rural areas, is relatively new and distinct from the study of educational inequality, which focuses on individuals within an educational system. Rural and inner-city students in the United States under-perform academically compared to their suburban peers. Factors that influence this under-performance include funding, classroom environment, and the lessons taught. Inner-city and rural students are more likely to live in low-income households and attend schools with fewer resources compared to suburban students. They have also been shown to have a less favorable view of education, which stems from the values held in their communities and families regarding school, work, and success. 
When compared to suburban students, rural and inner-city students face similar achievement issues. Teacher-student interactions, the lessons taught, and knowledge about the surrounding community have been shown to be important factors in helping offset the deficits faced in inner-city and rural schools. However, drop-out rates are still high within both communities, as a more substantial number of minority students, who often live in these areas, drop out of high school. A study of inner-city high school students showed that academic competency during freshman year has a positive impact on graduation rates, meaning that a student's early high school performance can be an indicator of how successful they will be in high school and whether they will graduate. With the correct knowledge and understanding of the issues faced by these students, the deficits they face can be overcome. Standardized tests Achievement in the United States is often measured using standardized tests. Studies have shown that low performance on standardized tests can have a negative effect on the funding a school receives from the government, and low-income students have been shown to underperform on standardized tests at higher rates than their peers. A study looking at how low test performance affected schools found that schools that perform below average and are in low-income areas can face repercussions that affect school funding and resources. The study also found that the material taught to students is affected by test performance, as schools that have low test scores will often change their curriculum to teach to the test. School resources In the same way that some regions of the world experience so-called "brain drain", or the loss of wealthy, skilled, and educated individuals and their families to other countries through emigration, rural and inner-city regions of the United States experience brain drain to suburban regions. It has been shown that people become more likely to leave rural areas as their education level increases and less likely as they increase in age. Urban inner-city areas have been decentralizing since the 1950s, losing their human capital. This flight of human capital leaves only the poor and disadvantaged behind to contribute to school funding, resulting in school systems that have very limited resources and financial difficulty. The American public school system is one in which the amount of wealth in a school district shapes the quality of the school because schools are primarily funded by local property taxes. As a school system's funding decreases, it is forced to do more with less. This frequently results in higher student-faculty ratios and increased class sizes. Many schools are also forced to cut funding for the arts and enrichment programs, which may be vital to academic success. Additionally, with decreased budgets, access to specialty and advanced classes for students who show high potential frequently decreases. A less obvious consequence of financial difficulty is difficulty in attracting new teachers and staff, especially those who are experienced. According to an article written in The Washington Post, students reportedly take 112 standardized tests over the course of K-12, with tenth graders taking the most per grade, on average 11 standardized tests over one school year. 
This became such a problem that in 2015 and 2016, the Department of Education put in place action plans that would reduce the number of standardized tests that can be given, as well as cap the percentage of class time that can be dedicated to standardized tests at 2%. This amount of testing is still more than in countries like Finland, which has fewer standardized tests, but far less than in countries like South Korea, which not only has more standardized tests but also tests that are considered more rigorous. Family resources It has been shown that the socioeconomic status of the family has a large correlation with both the academic achievement and attainment of the student. "The income deficits for inner-city students is approximately $14,000 per year and $10,000 per year for the families of those living in the respective areas compared to the average income of families in suburban areas." We see more and more girls being taken out of school in South Asia to provide for their families through work. A frightening statistic is that "over 12% of children in South Asia are engaged in child labor" (UNICEF). Sadly, many children are out of school and uneducated but working for money to bring back to their families. This also segues into the rise of child slavery and sex trafficking in Asia. The economy of certain areas may prove to be the reason more children are in or out of school, and better-off communities are also favored in the form of educational resources. Employing children takes them out of school and destroys their future opportunities and the skills they would attain for adult life, leaving them vulnerable to poverty and other poverty-related issues. Money can also affect whether a child finishes high school. Data from the NCES show that 20% of students considered low-income drop out before graduation, while only 5% of middle-income students and only 3% of high-income students do. Better-off suburban families can afford to spend money on their children's education in forms such as private schools, private tutoring, home lessons, and increased access to educational materials such as computers, books, educational toys, shows, and literature. Kids from poorer families are shown to have lower average SAT scores, with a difference of almost 400 points when comparing families with an annual income of $40,000 to families with an annual income of $200,000. Suburban families are also frequently able to provide larger amounts of social capital to their kids, such as increased use of "proper English", exposure to plays and museums, and familiarity with music, dance, and other such programs. Furthermore, inner-city students are more likely to come from single-parent homes, and rural students are more likely to have siblings, than their suburban peers, decreasing the amount of investment per child their families are able to afford. This notion is called resource dilution, which posits that families have finite levels of resources, such as time, energy, and money. When sibship (the number of siblings) increases, resources for each child become diluted. In college, the family's resources are even more important. In a study done by the National Center for Education Statistics, students were found to be more likely to attend college within three years of leaving high school simply if they believed their family could support it financially. 
The same study asked a large group of eleventh-grade students if they wanted to go to college, and 32% of students agreed that even if they did get accepted to college, they would not go because their family could not afford it. Family values The investment a family puts into their child's education is largely reflective of the value the parents place on education. The value placed on education is largely a combination of the parents' education level and the visible returns on education in the community the family lives in. Suburban families tend to have parents with much more education than families in rural and inner-city locations. This allows suburban parents to have personal experience with returns on education as well as familiarity with educational systems and processes. In addition, parents can invest and transmit their own cultural capital to their children by taking them to museums, enrolling them in extracurricular activities, or even having educational items in the house. In contrast, parents from rural and urban areas tend to have less education and little personal experience with its returns. The areas they live in also put very little value on education and reduce the incentive to gain it. This leads families that could afford to invest greater resources in their children's education not to do so. Gifted and talented education There is a disproportionate percentage of middle- and upper-class White students labeled as gifted and talented compared to lower-class, minority students. Similarly, Asian American students have been over-represented in gifted education programs. In 1992, African Americans were underrepresented in gifted education by 41%, Hispanic American students by 42%, and American Indians by 50%. Conversely, White students were over-represented in gifted education programs by 17%, as were Asian American students, but research shows that there is a growing achievement gap between White students and non-Asian students of color. There is also a growing gap between gifted students from low-income backgrounds and higher-income backgrounds. The under-representation of African-American, Hispanic-American, and American-Indian students in gifted and talented programs can be explained by recruitment, screening, and identification issues, as well as personnel issues. Most states use a standardized achievement and aptitude test, which minority students have a history of performing poorly on, to screen and identify gifted and talented students. Arguments against standardized tests claim that they are culturally biased, favoring White students, require a certain mastery of the English language, and can lack cultural sensitivity in terms of format and presentation. Regarding personnel issues, forty-six states use teacher nominations, but many teachers are not trained in identifying or teaching gifted students. Teachers also tend to have lower expectations of minority students, even if they are identified as gifted. Forty-five states allow for parental nominations, but the nomination form is not sensitive to cultural differences, and minority parents can have difficulty understanding the form. Forty-two states allow self-nomination, but minority students tend not to self-nominate because of social-emotional variables like peer pressure or feeling isolated or rejected by peers. 
Additionally, some students are identified as gifted and talented simply because they have parents with the knowledge, political skills, and power to require schools to classify their child as gifted and talented, thereby providing their child with special instruction and enrichment. Special education In addition to the unbalanced scale of gender disproportionality in formal education, students with "special needs" comprise yet another facet of educational inequality. Prior to the 1975 passing of the Education for All Handicapped Children Act (currently known as the Individuals with Disabilities Education Act (IDEA)), approximately 2 million children with special needs were not receiving sufficient public education. Of those that were within the academic system, many were reduced to lower standards of teaching, isolated conditions, or even removal from school buildings altogether and relocation out of peer circulation. The passing of this bill effectively changed the lives of millions of special-needs students, ensuring that they have free access to quality public education facilities and services. And while many have benefited from the turning of this academic tide, there are still many students (most of whom are minority students with disabilities) who find themselves facing learning hardship due to the unbalanced distribution of special education funding. In 1998, 1.5 million minority children were identified with special learning needs in the US; of those, 876,000 were African American or Native American. African-American students were three times as likely to be labeled as having special needs as Caucasian students. Students who are both in special education and members of a minority group face unequal chances of receiving a quality education that meets their individual needs. Special education referrals are, in most cases, in the hands of the general education teacher; this process is subjective, and because of differences, disabilities can be overlooked or unrecognized. Poorly trained teachers at minority schools, poor school relationships, and poor parent-to-teacher relationships play a role in this inequality. With these factors, minority students are at a disadvantage because they are not given the appropriate resources that would in turn benefit their educational needs. US Department of Education data show that in 2000–2001, in at least 13 states, more than 2.75% of African-American students enrolled in public schools were given the label of "mental retardation". At that time, the national average for Caucasian students labeled with the same moniker was 0.75%. During this period, no individual state rose above 2.32% for Caucasian students labeled with special needs. According to Tom Parrish, a senior research analyst with the American Institutes for Research, African-American children are 2.88 times more likely to be labeled as "mentally retarded", and 1.92 times more likely to be labeled as emotionally disturbed, than Caucasian children. This information was calculated from data gathered by the US Department of Education. Researchers Edward Fierros and James Conroy, in their study of district-level data regarding the issue of minority over-representation, have suggested that many states may be mistaken in their current projections and that disturbing minority-based trends may be hidden within the numbers. 
According to the Individuals with Disabilities Education Act, students with special needs are entitled to facilities and support that cater to their individual needs; they should not be automatically isolated from their peers or from the benefits of general education. However, according to Fierros and Conroy, once minority children such as African Americans and Latinos are labeled as students with special needs, they are far less likely than Caucasians to be placed in settings of inclusive learning and often receive less desirable treatment overall. History of educational oppression United States The historical relationships in the United States between privileged and marginalized communities play a major role in the administering of unequal and inadequate education to these socially excluded communities. The belief that certain communities in the United States were inferior in comparison to others has allowed these disadvantages to grow into the great magnitude of educational inequality that is apparent today. For African Americans, deliberate systematic educational oppression dates back to enslavement, more specifically to 1740, when South Carolina passed legislation that prohibited slave education. While the original legislation prohibited African Americans from being taught how to write, as other states adopted their own versions of the law, southern anti-literacy laws banned far more than just writing. Various Southern laws prohibited African Americans from learning to read or write and from assembling without the presence of slave owners. Many states went as far as requiring free African Americans to leave, for fear that they would educate their enslaved brethren. By 1836, the public education of all African-Americans was strictly prohibited. The enslavement of African Americans removed access to education for generations. Once slavery was legally abolished, racial stigma remained. Social, economic, and political barriers held Blacks in a position of subordination. Although African Americans now legally had the ability to learn how to read and write, they were often prohibited from attending schools with White students. This form of segregation is often referred to as de jure segregation. The schools that allowed African-American students to attend often lacked financial support, thus providing an inadequate education for their students. Freedmen's schools existed, but they focused on maintaining African Americans in servitude, not enriching academic prosperity. The United States then experienced legal separation in schools between Whites and Blacks. Schools were supposed to receive equal resources, but there was an undoubted inequality. It was not until 1968 that Black students in the South had universal secondary education. Research reveals that there was a shrinking of inequality between racial groups from 1970 to 1988, but since then the gap has grown again. Latinos and American Indians experienced similar educational repression in the past, whose effects are evident now. Latinos have been systematically shut out of educational opportunities at all levels. Evidence suggests that Latinos have experienced this educational repression in the United States as far back as 1848. Despite the fact that it is illegal to reject students based on their race, religion, or ethnicity, in the Southwest of the United States Latinos were often segregated through the deliberate practices of school and public officials. 
This form of segregation is referred to as de facto segregation. American Indians experienced the enforcement of missionary schools that emphasized assimilation into White culture and society. Even after "successful" assimilation, those American Indians experienced discrimination in White society and were often rejected by their tribes. This created a group that could not truly benefit even if they gained an equal education. American universities are separated into various classes, with a few institutions, such as the Ivy League schools, much more exclusive than the others. Among these exclusive institutions, educational inequality is extreme, with only 6% and 3% of their students coming from the bottom two income quintiles. Resources Access to resources plays an important role in educational inequality. In addition to the resources from the family mentioned earlier, access to proper nutrition and health care influences the cognitive development of children. Children who come from poor families experience this inequality, which puts them at a disadvantage from the start. Not only are the resources students may or may not receive from their families important, but schools themselves also vary greatly in the resources they give their students. On 2 December 2011, the U.S. Department of Education reported that school districts are unevenly distributing funds, disproportionately underfunding low-income students and holding back money from the schools in greatest need. High-poverty schools have less-qualified teachers with a much higher turnover rate. In every subject area, students in high-poverty schools are more likely than other students to be taught by teachers without even a minor in their subject matter. Better resources allow for the reduction of class size, which research has shown improves test scores. They also increase the number of after-school and summer programs; these are very beneficial to poor children because they not only combat the loss of skills over the summer but also keep children out of unsafe neighborhoods and reduce the drop-out rate. There is also a difference in the classes offered to students, specifically advanced mathematics and science courses. In 2012, Algebra II was offered at 82% of the schools (in diverse districts) serving the fewest Hispanic and African-American students, while only 65% of the schools serving the most African-American and Hispanic students offered the same course. Physics was offered at 66% of the schools serving the fewest Hispanic and African-American students, compared to 40% of those serving the most. Calculus was offered at 55% of the schools serving the fewest Hispanic and African-American students, compared to 29% of those serving the most. This lack of resources is directly linked to ethnicity and race. Black and Latino students are three times more likely than Whites to be in high-poverty schools and twelve times as likely to be in schools that are predominantly poor. Also, in schools composed of 90% or more minority students, only one half of the teachers are certified in the subjects they teach. As the number of White students increases in a school, funding tends to increase as well. Teachers in elementary schools serving the most Hispanic and African-American students are paid on average $2,250 less per year than their colleagues in the same district working at schools serving the fewest Hispanic and African-American students. 
On the family resources side, 10% of White children are raised in poverty, compared with 37% of Latino children and 42% of African-American children. Research indicates that when resources are equal, Black students are more likely to continue their education into college than their White counterparts. State conflicts Within fragile states, children may be subject to inadequate education. The poor educational quality within these states is believed to be a result of four main challenges. These challenges include coordination gaps between governmental actors, policy makers' low prioritization of education, limited financing, and lack of educational quality. Measurement In the last decade, various tests have been administered throughout the world to gather information about students, the schools they attend, and their educational achievements. These tests include the Organisation for Economic Co-operation and Development's Programme for International Student Assessment and the International Association for the Evaluation of Educational Achievement's Trends in International Mathematics and Science Study. To estimate the different test parameters in each country and produce a standard score, the scores of these tests are put through item response theory models. Once the scores are standardized, analysts can begin looking at education through the lens of achievement rather than attainment. By looking at achievement, analysts can objectively examine educational inequality throughout the globe. Besides the use of achievement, analysts are able to use a few other methods, including but not limited to the Living Standards Measurement Study and the Index of Regional Education Advantage. China implemented the IREA over a period of three years to better understand regional differences from east to west across the country, while Albania looked at individual households to better understand its educational differences. Albania Educational Inequality Study In an effort to better understand household welfare, the World Bank created the Living Standards Measurement Study (LSMS) program, which helps analyze poverty in developing countries and allows for use in various empirical analysis studies. A study by Nathalie Picard and Francois Wolff utilized the LSMS framework to investigate educational inequalities within Albania. With the data from the LSMS models, Picard and Wolff were able to use empirical methods to determine that nearly 40% of educational differences in the country were due to household differences between families. The LSMS framework is composed of household and individual well-being questions that assess the welfare of the household. Data for various regions can be viewed on its public website. Within the survey are questions related to educational background that help analysts relate household conditions and region to existing education levels. Through the Albania study, Picard and Wolff were able to utilize these statistics to illustrate differences in educational levels across households and income levels in Albania. This practice can be implemented in many locations worldwide. China Regional Educational Inequality Study China adopted an index called the Index of Regional Education Advantage, or IREA, to help analyze the effects that new policies had on the country's education system across its various regions. 
This multidimensional index includes a more comprehensive list of dimensions related to education than the Gini index and therefore brings a greater understanding of the disparities in education. Founded on three core values within the education system, provision, enrollment, and attainment, the IREA index uses conversion factors to create a capability set for diagnosing the education levels of different regions. Since the three are not independent variables, it is critical to use measures like geometric means when calculating values such as enrollment and attainment. Because the IREA is a composite summary index, weighted indicators are commonly used to reflect the importance of each core value. When evaluating the final regional scores, China produced spatial pattern diagrams with data spanning three years to reflect any changes over the associated time period. The scores give a visual picture of regional inequalities in education; a change to a darker color indicated that an area's education score had worsened. This comprehensive IREA score reflects the true condition of the area in question. Effects Social mobility Social mobility refers to the movement in class status from one generation to another. It is related to the "rags to riches" notion that anyone, with hard work and determination, has the ability to move upward no matter what background they come from. Contrary to that notion, however, sociologists and economists have concluded that, although exceptions are heard of, social mobility has remained stagnant and even decreased over the past thirty years. From 1979 through 2007, wage income for lower- and middle-class citizens rose by less than 17 percent, while that of the top one percent grew by approximately 156 percent, sharply contrasting with the "postwar period up through the 1970s when income growth was broadly shared". Some of the decrease in social mobility may be explained by the stratified educational system. Research has shown that since 1973, men and women with at least a college degree have seen an increase in hourly wages, while the wages for those with less than a college degree have remained stagnant or have decreased during the same period of time. Since the educational system forces low-income families to place their children into less-than-ideal school systems, those children are typically not presented with the same opportunities and educational motivation as are students from well-off families, resulting in patterns of repeated intergenerational educational choices for parent and child, also known as decreased or stagnant social mobility. Remedies There are a variety of efforts by countries to assist in increasing the availability of quality education for all children. Assessment Based on input from more than 1,700 individuals in 118 countries, UNESCO and the Center for Universal Education at the Brookings Institution have co-convened the Learning Metrics Task Force. The task force aims to shift the focus from access to access plus learning. They found that, through assessment, the learning and progress of students in individual countries can be measured. Through the testing, governments can assess the quality of their education programs, refine the areas that need improvement, and ultimately increase their students' success. Education for All Act The Education for All (EFA) act is a global commitment to provide quality basic education for all children, youth, and adults. 
In 2000, 164 governments pledged to achieve education for all at the World Education Forum. Six agreed-upon goals were designed to achieve Education for All by 2015. The entities working together to achieve these goals include governments, multilateral and development agencies, civil society, and the private sector. UNESCO is responsible for coordinating the partnerships. Although progress has been made, some countries are providing more support than others. Also, there is a need to strengthen overall political commitment as well as the needed resources. Global Partnership for Education Global Partnership for Education, or GPE, functions to create a global effort to reduce educational inequality with a focus on the poorest countries. GPE is the only international effort with a particular focus on supporting countries' efforts to educate their youth from primary through secondary education. The main goals of the partnership include providing educational access to each child, ensuring each child masters basic numeracy and literacy skills, increasing the ability of governments to provide quality education for all, and providing a safe space for all children to learn in. It is a partnership of donor and developing countries, but the developing countries shape their own educational strategies based upon their own priorities. When constructing these priorities, GPE serves to support and facilitate access to financial and technical resources. Successes of GPE include helping nearly 22 million children get to school, equipping 52,600 classrooms, and training 300,000 teachers. Massive online classes There is a growing shift away from traditional higher education institutions to massive open online courses (MOOCs). These classes are run through content sharing, videos, online forums, and exams. MOOCs are free, which allows many more students to take part in the classes; however, the programs are created by Global North countries, inhibiting individuals in the Global South from creating their own innovations. Trauma-informed education Trauma-informed education is a pedagogical approach that acknowledges the impacts of adverse childhood experiences (ACEs) on a child's learning and behavior. The efficacy of trauma-informed approaches has been studied in a variety of settings, including communities in areas that have experienced natural disasters, terrorism or political instability, students of refugee or asylum status, and students who are marginalized as a result of language, ethnicity or culture. ACEs are associated with poorer school attendance, lower educational attainment, and worse mental health outcomes. Trauma-informed education has been termed a social justice imperative by some academics owing to the disproportionate impact of childhood trauma on marginalized communities, including low-income communities, communities of color, sexual and gender minorities, and immigrants. The expansion of the definition of trauma to encompass interpersonal forms of violence and perceived threat or harm, especially in the experiences of vulnerable and marginalized communities, was formally recognized by the U.S.-based Center for Substance Abuse Treatment in 2014. Thereafter, the adoption of trauma-informed approaches in public service provision, including education, has led to the development of practices and policies that take trauma histories into consideration. 
In 2016, the American Institutes for Research published a trauma-informed care curriculum centered around five domains: supporting staff development, creating a safe and supportive environment, assessing needs and planning services, involving consumers, and adapting practices. Similarly, the National Child Traumatic Stress Network defines a trauma-sensitive approach as: "Realizing the widespread impact of trauma and pathways to recovery Recognizing trauma's signs and symptoms Responding by integrating knowledge about trauma into all facets of the system Resisting re-traumatization of trauma-impacted individuals by decreasing the occurrence of unnecessary triggers (i.e., trauma and loss reminders) and by implementing trauma-informed policies, procedures, and practices." A number of barriers to the implementation of trauma-informed approaches have been identified, including communication gaps between providers and parents, stigmatization of mental health concerns, lack of supportive school environments, and competing teacher responsibilities. Policy implications With the knowledge that early educational intervention programs, such as extended childcare during preschool years, can significantly prepare low-income students for educational and life successes, comes a certain degree of responsibility. One policy change that seems necessary is to make quality child care available to every child in the United States at an affordable rate. This has been shown to push students into college, and thus increase social mobility. The ultimate result of such a reality would be that the widely stratified educational system that exists in the U.S. today would begin to equalize, so that every child born, regardless of socioeconomic status, would have the same opportunity to succeed. Many European countries already operate such educational systems successfully. Based on historical evidence, an increase in general schooling not only improves numeracy and literacy skills in the population overall, but also tends to result in a narrowing of the educational gender gap. Global evidence Albania Household income in Albania is very low. Many families are unable to provide a college education for their children with the money they make. Albania is one of the poorest countries in Europe, with a large population of people under the age of 25. This population of students needs a path to higher education. Nothing is being done for all the young adults who are smart enough to go to college but cannot afford to. Bangladesh The Bangladesh education system includes more than 100,000 schools run by public, private, NGO, and religious providers. The schools are overseen by a national ministry. The system is centralized and also overseen by sub-districts known as upazilas. During the past two decades, the system expanded through new national policies and pro-poor spending. The gross enrollment rate in the poorest quintile of upazilas is 101 percent. Also, spending per child in the poorest quintile was 30 percent higher than in the wealthiest quintile. Educational inequalities continue despite the increased spending. Learning outcomes are not consistent across the upazilas. In almost two-thirds of upazilas, the dropout rate is over 30 percent. The system has difficulty acquiring quality teachers, and 97 percent of preprimary and primary students are in overcrowded classrooms. India According to the 2011 census of India, the literacy rate for males was 82%, as compared to the literacy rate of 65% for females. 
Despite the provisions of the Right to Education, 40% of girls between the ages of 15 and 18 are out of school, primarily either to supplement family income in the informal sector or to work within the household. It is estimated that up to 23% of girls leave school at the onset of puberty due to the stigmatization of menstruation and lack of access to menstrual products and sanitation. Menstrual inequity is also a leading cause of absenteeism. The Samagra Shiksha Abhiyan, set up by the Indian central government in 2021 to improve access to education, saw an increase in the Gross Enrollment Ratio among girls across all school levels. This scheme has sanctioned 5,627 Kasturba Gandhi Balika Vidyalayas, residential schools for girls from disadvantaged communities. Urban areas have historically reported higher rates of literacy. In 2018, rural areas had a literacy rate of 73.5%, as compared to the urban literacy rate of 87.7%. Although 83% of the total schools are located in rural India, learning outcomes remain poor and dropout rates remain disproportionately high. This has been attributed to high rural poverty rates and lack of quality teaching. Educational inequalities are also exacerbated by the caste system. In the 2011 census, Scheduled Castes had an average literacy rate of 66.1 per cent, compared with an all-India literacy rate of 73 per cent. Under the National Education Policy 2020, marginalized gender identities, sociocultural identities, geographical identities, disabilities, and socioeconomic conditions have been grouped under Socio-Economically Disadvantaged Groups (SEDGs). Specific provisions have been recommended for SEDGs, including targeted scholarships, conditional cash transfers to parents, and providing bicycles for transportation. South Africa Inequality in higher education Africa, in general, has suffered from decreased spending on higher education programs. As a result, institutions are unable to obtain moderate to high enrollment, and there is minimal research output. Within South Africa, there are numerous factors that affect the quality of tertiary education. The country inherited class, race, and gender inequality in the social, political, and economic spheres during apartheid. The 1994 constitution emphasizes higher education as useful for human resource development and of great importance to any economic and social transitions. However, the country is still fighting to overcome colonialism and racism in intellectual spaces. Government funding plays a major role in the quality of education received. As a result of declining government support, the average class size in South Africa is growing. The increased class size limits student–teacher interactions, further hindering students with weak problem-solving and critical-thinking skills. In an article by Meenal Shrivastava and Sanjiv Shrivastava, the argument is made that large class sizes "have ramifications for developing countries where higher education is a core element in the economic and societal development". These ramifications are shown to include lower student performance and information retention. United Kingdom Evidence from the British birth cohort studies has illustrated the powerful influence of family socioeconomic background on children's educational attainment. These differences emerge early in childhood, and continue to grow throughout the school years. The educational gap in the U.K. is shown by the difference in graduation rates between private universities and the most deprived quintiles. 
In a study reported by The Conversation, 70% of people at private universities had graduated by 26, while only 17% of the lowest quintile had graduated by 26. The same pattern applies at even younger ages. In the same study, children who were eligible for free school meals, which are only given to the 15% of students with the lowest incomes, are shown to have attainment as much as 25% below what is considered the baseline for students in the U.K. Sudan Republic The earliest educational system of Sudan was established by the British during the first half of the 20th century. The government of Sudan recognizes education as a right for every citizen and guarantees access to free basic education. The educational structure of the Republic of Sudan consists of pre-primary, primary, secondary, and higher education. The Sudanese education system includes more than 3,646 schools run by public, private, and religious providers; the schools are overseen by the High Ministry of Education. However, Sudan's simmering wars, a lack of awareness about the importance of education, and chronic under-development all contribute to the poor schooling of girls in Sudan. In addition, cultural pressures and traditional views of the role of women mean fewer girls attend and remain in school. The inability to pay fees, even though school is free according to government policy, is a major reason; some poor families cannot afford the stationery and clothes. The government cannot provide for all the students' needs because of the economic situation and poverty. However, the government has raised awareness of the importance of educating females and has created universities exclusively for women. The first one is Al Ahfad University for Women, located in Omdurman, created in 1907 by Sheikh Babikr Bedri. Now the percentage of educated females is increasing; the last survey estimates that 60.8% of females in Sudan can read and write. United States Property tax dilemma In the United States, schools are funded by local property taxes. Because of this, the more affluent a neighborhood, the higher the funding for that school district. Although this situation seems favorable, the problem emerges when the equation is reversed. In neighborhoods inhabited by predominantly working- and lower-class families, properties are less expensive, and so property taxes are much lower than those in affluent neighborhoods. Consequently, funding for the school districts to which working- and lower-class children are assigned is also significantly lower than the funding for the school districts to which children of affluent families are assigned. Thus, students in working- and lower-class schools do not receive the same quality of education and access to resources as do students from affluent families. The reality of the situation is that the distribution of resources for schools is based on the socioeconomic status of the parents of the students. As a result, the U.S. educational system significantly aids in widening the gap between the rich and the poor. This gap has increased, rather than decreased, over the past few decades due in part to a lack of social mobility. International comparisons Compared to other nations, the United States is among the highest spenders on education per student, behind only Switzerland and Norway. Per-pupil spending has even increased in recent years, but the academic achievement of students has remained stagnant. 
The Swedish educational system is one such system that attempts to equalize students and make sure every child has an equal chance to learn. One way that Sweden is accomplishing these goals is by making sure every child can go to daycare affordably. Of the total cost of childcare, parents pay no more than 18% for their child; the remaining 82% is paid for by various government agencies and municipalities. In 2002, a "maximum-fee" system was introduced in Sweden that states that costs for childcare may be no greater than 3% of one's income for the first child, 2% for the second child, 1% for the third child, and free of charge for the fourth child in pre-school. 97.5% of children age 1–5 attend these public daycare centers. Also, a new law was recently introduced that states that all four and five-year-old children can attend daycare for free. Since practically all students, no matter what their socioeconomic background, attend the same daycare centers, equalization alongside educational development begins early and in the public sphere. Furthermore, parental leave consists of 12 months paid leave (80% of wage) whereas one month is awarded solely to the father in the form of "use it or lose it". This results in the privilege and affordability of staying home and bonding with one's child for the first year of life. Due to this affordability, less than 200 children in the entire country of Sweden under the age of 1 are placed in child care. Stratification in the educational system is further diminished by providing all Swedish citizens and legal residents with the option of choosing which school they want their children to be placed in, regardless of what neighborhood they reside in or what property taxes they pay. Additionally, the Swedish government not only provides its citizens with a free college education but also with an actual monthly allowance for attending school and college. Together, these privileges allow for all Swedish children to have access to the same resources. A similar system can be found in France, where free, full-day child care centers known as "écoles maternelles" enroll close to 100% of French children ages 3–5 years old. In Denmark, children from birth to age six are enrolled in childcare programs that are available at one-fifth of the total costs, where the rest is covered by public funding. See also British birth cohort studies Class stratification Conflict theory Educational psychology Hidden curriculum Educational Inequality in the United States List of standardized tests in the United States Social inequality Socioeconomic status Structural inequality in education Working class education References External links OECD's Education GPS, a review of education policy analysis and statistics: Equity Education issues Discrimination Social inequality Race and education Education in Africa Inequality
Educational inequality
Biology
15,207
12,082,283
https://en.wikipedia.org/wiki/Human%20vestigiality
In the context of human evolution, vestigiality involves those traits occurring in humans that have lost all or most of their original function through evolution. Although structures called vestigial often appear functionless, they may retain lesser functions or develop minor new ones. In some cases, structures once identified as vestigial simply had an unrecognized function. Vestigial organs are sometimes called rudimentary organs. Many human characteristics are also vestigial in other primates and related animals. History Charles Darwin listed a number of putative human vestigial features, which he termed rudimentary, in The Descent of Man (1871). These included the muscles of the ear; wisdom teeth; the appendix; the tail bone; body hair; and the semilunar fold in the corner of the eye. Darwin also commented on the sporadic nature of many vestigial features, particularly musculature. Making reference to the work of the anatomist William Turner, Darwin highlighted a number of sporadic muscles that he identified as vestigial remnants of the panniculus carnosus, particularly the sternalis muscle. In 1893, Robert Wiedersheim published The Structure of Man, a book on human anatomy and its relevance to evolutionary history. This book contains a list of 86 human organs he considered vestigial, which he called "wholly or in part functionless, some appearing in the Embryo alone, others present during Life constantly or inconstantly. For the greater part Organs which may be rightly termed Vestigial." His list of supposedly vestigial organs included many of the examples on this page as well as others then mistakenly believed to be purely vestigial, such as the pineal gland, the thymus gland, and the pituitary gland. Some of these organs that had lost their obvious, original functions later turned out to have retained functions that had gone unrecognized before the discovery of hormones or many of the functions and tissues of the immune system. Examples included: the role of the pineal in the regulation of the circadian rhythm (neither the function nor even the existence of melatonin was yet known); discovery of the role of the thymus in the immune system lay many decades in the future; it remained a mystery until the mid-20th century; the pituitary and hypothalamus, with their many and varied hormones, were far from understood, let alone the complexity of their interrelationships. Historically, there was a trend not only to dismiss the appendix as being uselessly vestigial, but an anatomical hazard liable to dangerous inflammation. As late as the mid-20th century, many reputable authorities conceded it no beneficial function. This was a view supported, or perhaps inspired, by Darwin himself in the 1874 edition of his book The Descent of Man, and Selection in Relation to Sex. The organ's patent liability to appendicitis and poorly understood role left it open to blame for a number of possibly unrelated conditions. For example, in 1916, a surgeon claimed that removal of the appendix had cured several cases of trifacial neuralgia and other nerve pain about the head and face, even though he said the evidence for appendicitis in those patients was inconclusive. The discovery of hormones and hormonal principles, notably by Bayliss and Starling, argued against these views, but in the early 20th century, a great deal of fundamental research remained to be done on the functions of large parts of the digestive tract. 
In 1916, an author found it necessary to argue against the idea that the colon had no important function and that "the ultimate disappearance of the appendix is a coordinate action and not necessarily associated with such frequent inflammations as we are witnessing in the human". There had been a long history of doubt about such dismissive views. Around 1920, the surgeon Kenelm Hutchinson Digby documented previous observations, going back more than 30 years, that suggested lymphatic tissues, such as the tonsils and appendix, might have substantial immunological functions. Anatomical Appendix The appendix was once believed to be a vestige of a redundant organ that in ancestral species had digestive functions, much as it still does in extant species in which intestinal flora hydrolyze cellulose and similar indigestible plant materials. This view has changed in recent decades, with research suggesting that the appendix may serve an important purpose. In particular, it may serve as a reservoir for beneficial gut bacteria, possibly to allow the bacteria to reestablish in the colon during recovery from diarrhea or other illnesses. Some herbivorous animals, such as rabbits, have a terminal vermiform appendix and cecum that apparently bear patches of tissue with immune functions and that may also be important in maintaining the composition of intestinal flora. It does not seem to have much digestive function, if any, and is not present in all herbivores, even those with large caeca. As shown in the accompanying pictures, the human appendix typically is about comparable to that of the rabbit's in size, though the caecum is reduced to a single bulge where the ileum empties into the colon. Some carnivorous animals have appendices too, but few have more than vestigial caeca. In line with the possibility that vestigial organs develop new functions, some research suggests that the appendix may guard against the loss of symbiotic bacteria that aid in digestion, though that is unlikely to be a novel function, given the presence of vermiform appendices in many herbivores. Intestinal bacterial populations entrenched in the appendix may support quick reestablishment of the flora of the large intestine after an illness, poisoning, or after an antibiotic treatment depletes or otherwise causes harmful changes to the bacterial population of the colon. A 2013 study refutes the idea of an inverse relationship between cecum size and appendix size and presence. It is widely present in Euarchontoglires (a superorder of mammals that includes rodents, lagomorphs and primates) and has also evolved independently in the diprotodont marsupials and monotremes, and is highly diverse in size and shape, which could suggest it is not vestigial. Researchers deduce that the appendix has the ability to protect good bacteria in the gut: when the gut is affected by diarrhea or another illness that cleans out the intestines, the good bacteria in the appendix can repopulate the digestive system and keep the person healthy. Coccyx The coccyx, or tailbone, is the remnant of a lost tail. All mammals have a tail at some point in their development; in humans, it is present for a period of 4 weeks, during stages 14 to 22 of human embryogenesis. This tail is most prominent in human embryos 31–35 days old. 
The tailbone, at the end of the spine, has lost its original function in assisting balance and mobility, though it still serves some secondary functions, such as being an attachment point for muscles, which explains why it has not degraded further. In rare cases, congenital defect results in a short tail-like structure being present at birth. Twenty-three cases of human babies born with such a structure have been reported in the medical literature since 1884. In these cases, the spine and skull were determined to be entirely normal. The only abnormality was that of a tail approximately 12 centimeters long. These tails, though of no deleterious effect, were almost always surgically removed. Wisdom teeth Wisdom teeth are vestigial third molars that human ancestors used to help in grinding down plant tissue. The common postulation is that their skulls had larger jaws with more teeth, which were possibly used to help chew down foliage to compensate for a lack of ability to efficiently digest the cellulose that makes up a plant cell wall. As human diets changed, smaller jaws were naturally selected, but the third molars, or "wisdom teeth", still commonly develop in human mouths. Agenesis (failure to develop) of wisdom teeth in human populations ranges from zero in Tasmanian Aboriginals to nearly 100% in indigenous Mexicans. The difference is related to the PAX9 gene (and perhaps other genes). Vomeronasal organ In some animals, the vomeronasal organ (VNO) is part of a second, completely separate sense of smell, known as the accessory olfactory system. Many studies have been performed to find if there is an actual presence of a VNO in adult human beings. Trotier et al. estimate that around 92% of their subjects who had not had septal surgery had at least one intact VNO. Kjaer and Fisher Hansen, on the other hand, found that the VNO structure disappeared during fetal development as it does for some primates. Smith and Bhatnagar (2000) asserted that Kjaer and Fisher Hansen simply missed the structure in older fetuses. Won (2000) found evidence of a VNO in 13 of his 22 cadavers (59.1%) and in 22 of his 78 living patients (28.2%). Given these findings, some scientists have argued that there is a VNO in adult human beings. Most have sought to identify the opening of the vomeronasal organ in humans, rather than identify the tubular epithelial structure itself. Thus it has been argued that such studies, employing macroscopic observational methods, have sometimes missed or even misidentified the vomeronasal organ. Among studies that use microanatomical methods, there is no reported evidence that human beings have active sensory neurons like those in other animals' working vomeronasal systems. Furthermore, no evidence suggests there are nerve and axon connections between any existing sensory receptor cells in the adult human VNO and the brain. Likewise, there is no evidence of any accessory olfactory bulb in adult human beings, and the key genes involved in other mammals' VNO function have become pseudogenes in human beings. Therefore, while the presence of a structure in adult human beings is debated, a review of the scientific literature by Tristram Wyatt concluded, "most in the field ... are sceptical about the likelihood of a functional VNO in adult human beings on current evidence." Ear The ears of a macaque monkey and most other monkeys have far more developed muscles than those of humans, and therefore have the capability to move their ears to better hear potential threats. 
Humans and other primates such as the orangutan and chimpanzee however have ear muscles that are minimally developed and non-functional, yet still large enough to be identifiable. A muscle attached to the ear that cannot move the ear, for whatever reason, can no longer be said to have any biological function. In humans there is variability in these muscles, such that some people are able to move their ears in various directions, and it can be possible for others to gain such movement by repeated trials. In such primates, the inability to move the ear is compensated mainly by the ability to turn the head on a horizontal plane, an ability which is not common to most monkeys—a function once provided by one structure is now replaced by another. The outer structure of the ear also shows some vestigial features, such as the node or point on the helix of the ear known as Darwin's tubercle which is found in around 10% of the population. Eye The plica semilunaris is a small fold of tissue on the inside corner of the eye. It is the vestigial remnant of the nictitating membrane, i.e., third eyelid, an organ that is fully functional in some other species of mammals. Its associated muscles are also vestigial. Only one species of primate, the Calabar angwantibo, is known to have a functioning nictitating membrane. The orbitalis muscle is a vestigial or rudimentary nonstriated muscle (smooth muscle) of the eye that crosses from the infraorbital groove and sphenomaxillary fissure and is intimately united with the periosteum of the orbit. It was described by Johannes Peter Müller and is often called Müller's muscle. The muscle forms an important part of the lateral orbital wall in some animals, but in humans it is not known to have any significant function. Reproductive system Genitalia In the internal genitalia of each human sex, there are some residual organs of mesonephric and paramesonephric ducts during embryonic development: Gartner's duct Epoophoron Vesicular appendages of epoophoron Paroophoron Human vestigial structures also include leftover embryological remnants that once served a function during development, such as the belly button, and analogous structures between biological sexes. For example, men are also born with two nipples, which are not known to serve a function compared to women. In regards to genitourinary development, both internal and external genitalia of male and female fetuses have the ability to fully or partially form their analogous phenotype of the opposite biological sex if exposed to a lack/overabundance of androgens or the SRY gene during fetal development. Examples of vestigial remnants of genitourinary development include the hymen, which is a membrane that surrounds or partially covers the external vaginal opening that derives from the sinus tubercle during fetal development and is homologous to the male seminal colliculus. Some researchers have hypothesized that the persistence of the hymen may be to provide temporary protection from infection, as it separates the vaginal lumen from the urogenital sinus cavity during development. Other examples include the glans penis and the clitoris, the labia minora and the ventral penis, and the ovarian follicles and the seminiferous tubules. In modern times, there is controversy regarding whether the foreskin is a vital or vestigial structure. In 1949, British physician Douglas Gairdner noted that the foreskin plays an important protective role in newborns. 
He wrote, "It is often stated that the prepuce is a vestigial structure devoid of function ... However, it seems to be no accident that during the years when the child is incontinent the glans is completely clothed by the prepuce, for, deprived of this protection, the glans becomes susceptible to injury from contact with sodden clothes or napkin." During the physical act of sex, the foreskin reduces friction, which can reduce the need for additional sources of lubrication. "Some medical researchers, however, claim circumcised men enjoy sex just fine and that, in view of recent research on HIV transmission, the foreskin causes more trouble than it's worth." The area of the outer foreskin measures between 7 and 100 cm, and the inner foreskin measures between 18 and 68 cm, which is a wide range. Regarding vestigial structures, Charles Darwin wrote, "An organ, when rendered useless, may well be variable, for its variations cannot be checked by natural selection." Musculature A number of muscles in the human body are thought to be vestigial, either by virtue of being greatly reduced in size compared to homologous muscles in other species, by having become principally tendonous, or by being highly variable in their frequency within or between populations. Head The occipitalis minor is a muscle in the back of the head which normally joins to the auricular muscles of the ear. This muscle is very sporadic in frequency—always present in Malays, present in 56% of Africans, 50% of Japanese, and 36% of Europeans, and nonexistent in the Khoikhoi people of southwestern Africa and in Melanesians. Other small muscles in the head associated with the occipital region and the post-auricular muscle complex are often variable in their frequency. The platysma, a quadrangular (four sides) muscle in a sheet-like configuration, is a vestigial remnant of the panniculous carnosus of animals. In horses, it is the muscle that allows it to flick a fly off its back. Face In many animals, the upper lip and sinus area is associated with whiskers or vibrissae which serve a sensory function. In humans, these whiskers do not exist but there are still sporadic cases where elements of the associated vibrissal capsular muscles or sinus hair muscles can be found. Based on histological studies of the upper lips of 20 cadavers, Tamatsu et al. found that structures resembling such muscles were present in 35% (7/20) of their specimens. Arm The palmaris longus muscle is seen as a small tendon between the flexor carpi radialis and the flexor carpi ulnaris, although it is not always present. The muscle is absent in about 14% of the population, however this varies greatly with ethnicity. It is believed that this muscle actively participated in the arboreal locomotion of primates, but currently has no function, because it does not provide more grip strength. One study has shown the prevalence of palmaris longus agenesis in 500 Indian patients to be 17.2% (8% bilateral and 9.2% unilateral). The palmaris is a popular source of tendon material for grafts and this has prompted studies which have shown the absence of the palmaris does not have any appreciable effect on grip strength. The levator claviculae muscle in the posterior triangle of the neck is a supernumerary muscle present in only 2–3% of all people but nearly always present in most mammalian species, including gibbons and orangutans. Torso The pyramidalis muscle of the abdomen is a small and triangular muscle, anterior to the rectus abdominis, and contained in the rectus sheath. 
It is absent in 20% of humans and when absent, the lower end of the rectus then becomes proportionately increased in size. Anatomical studies suggest that the forces generated by the pyramidalis muscles are relatively small. The latissimus dorsi muscle of the back has several sporadic variations. One particular variant is the existence of the dorsoepitrochlearis or latissimocondyloideus muscle which is a muscle passing from the tendon of the latissimus dorsi to the long head of the triceps brachii. It is notable due to its well developed character in other apes and monkeys, where it is an important climbing muscle, namely the dorsoepitrochlearis brachii. This muscle is found in ≈5% of humans. Leg The plantaris muscle is composed of a thin muscle belly and a long thin tendon. The muscle belly is approximately long, and is absent in 7–10% of the human population. It has some weak functionality in moving the knee and ankle but is generally considered redundant and is often used as a source of tendon for grafts. The long, thin tendon of the plantaris is humorously called "the freshman's nerve", as it is often mistaken for a nerve by new medical students. Tongue Another example of human vestigiality occurs in the tongue, specifically the chondroglossus muscle. In a morphological study of 100 Japanese cadavers, it was found that 86% of fibers identified were solid and bundled in the appropriate way to facilitate speech and mastication. The other 14% of fibers were short, thin and sparse – nearly useless, and thus concluded to be of vestigial origin. Breasts Extra nipples or breasts sometimes appear along the mammary lines of humans, appearing as a remnant of mammalian ancestors who possessed more than two nipples or breasts. One 2021 report demonstrated that all healthy young men and women who participated in an anatomic study of the front surface of the body exhibited 8 pairs of focal fat mounds running along the embryological mammary ridges from axillae to the upper inner thighs. These were always located in the same relative anatomic sites – analogous to the loci of breasts in other placental mammals – and often had nipple-like moles or extra hairs located atop the mounds. Therefore, focal fatty prominences on the fronts of human torsos likely represent chains of vestigial breasts composed of primordial breast fat. Behavioral Humans also bear some vestigial behaviors and reflexes. Goose bumps The formation of goose bumps in humans under stress is a vestigial reflex; a possible function in the distant evolutionary ancestors of humanity was to raise the body's hair, making the ancestor appear larger and scaring off predators. Raising the hair is also used to trap an extra layer of air, keeping an animal warm. Due to the diminished amount of hair in humans, the reflex formation of goose bumps when cold is also vestigial. Palmar grasp reflex The palmar grasp reflex is thought to be a vestigial behavior in human infants. When placing a finger or object to the palm of an infant, it will securely grasp it. This grasp is found to be rather strong. Some infants—37% according to a 1932 study—are able to support their own weight from a rod, although there is no way they can cling to their mother. The grasp is also evident in the feet. When a baby is sitting down, its prehensile feet assume a curled-in posture, similar to that observed in an adult chimp. 
An ancestral primate would have had sufficient body hair to which an infant could cling, unlike modern humans, thus allowing its mother to escape from danger, such as climbing up a tree in the presence of a predator, without having to occupy her hands holding her baby. Hiccup It has been proposed that the hiccup is an evolutionary remnant of earlier amphibian respiration. Amphibians such as tadpoles gulp air and water across their gills via a rather simple motor reflex akin to mammalian hiccuping. The motor pathways that enable hiccuping form early during fetal development, before the motor pathways that enable normal lung ventilation form. Additionally, hiccups and amphibian gulping are inhibited by elevated CO2 and may be stopped by GABA-B receptor agonists, illustrating a possible shared physiology and evolutionary heritage. These proposals may explain why premature infants spend 2.5% of their time hiccuping, possibly gulping like amphibians, as their lungs are not yet fully formed. Fetal intrauterine hiccups are of two types. The physiological type occurs before 28 weeks after conception and tends to last five to ten minutes. These hiccups are part of fetal development and are associated with the myelination of the phrenic nerve, which primarily controls the thoracic diaphragm. The phylogeny hypothesis explains how the hiccup reflex might have evolved, and, in the absence of another explanation, it may account for hiccups as an evolutionary remnant held over from our amphibian ancestors. This hypothesis has been questioned because of the existence of the afferent loop of the reflex, the fact that it does not explain the reason for glottic closure, and because the very short contraction of the hiccup is unlikely to have a significant strengthening effect on the slow-twitch muscles of respiration. Pseudogenes There are many pseudogenes present in the human genome. One example of this is L-gulonolactone oxidase, a gene that is functional in most other mammals and produces an enzyme that synthesizes vitamin C. In humans and other members of the suborder Haplorrhini, a mutation disabled the gene and made it unable to produce the enzyme. However, the remains of the gene are still present in the human genome. See also Color blindness Deprecation Myopia References Further reading Evolutionary biology concepts Human anatomy Human evolution Human physiology
Human vestigiality
Biology
4,968
20,446,429
https://en.wikipedia.org/wiki/Rally%20%27round%20the%20flag%20effect
The rally 'round the flag effect, also referred to as the rally 'round the flag syndrome, is a concept used in political science and international relations to explain increased short-run popular support of a country's government or political leaders during periods of international crisis or war. Because the effect can reduce criticism of governmental policies, it can be seen as a factor of diversionary foreign policy. Mueller's definition Political scientist John Mueller suggested the effect in 1970, in a paper called "Presidential Popularity from Truman to Johnson". He defined it as coming from an event with three qualities: "Is international" "Involves the United States and particularly the President directly" "Specific, dramatic, and sharply focused" In addition, Mueller created five categories of rallies. Mueller's five categories are: Sudden US military intervention (e.g., Korean War, Bay of Pigs Invasion) Major diplomatic actions (e.g., Truman Doctrine) Dramatic technological developments (e.g., Sputnik) US-Soviet summit meetings (e.g., Potsdam Conference) Major military developments in ongoing wars (e.g., Tet Offensive) These categories are considered dated by modern political scientists, as they rely heavily on Cold War events. Causes and duration Since Mueller's original theories, two schools of thought have emerged to explain the causes of the effect. The first, "The Patriotism School of Thought" holds that in times of crisis, the American public sees the President as the embodiment of national unity. The second, "The Opinion Leadership School" believes that the rally emerges from a lack of criticism from members of the opposition party, most often in the United States Congress. If opposition party members appear to support the president, the media has no conflict to report, thus it appears to the public that all is well with the performance of the president. The two theories have both been criticized, but it is generally accepted that the Patriotism School of thought is better to explain causes of rallies, while the Opinion Leadership School of thought is better to explain duration of rallies. It is also believed that the lower the presidential approval rating before the crisis, the larger the increase will be in terms of percentage points because it leaves the president more room for improvement. For example, Franklin D. Roosevelt only had a 12pp increase in approval from 72% to 84% following the Attack on Pearl Harbor, whereas George W. Bush had a 39pp increase from 51% to 90% following the September 11 attacks. Another theory about the cause of the effect is believed to be embedded in the US Constitution. Unlike in other countries, the constitution makes the President both head of government and head of state. Because of this, the president receives a temporary boost in popularity because his Head of State role gives him symbolic importance to the American people. However, as time goes on his duties as Head of Government require partisan decisions that polarize opposition parties and diminish popularity. This theory falls in line more with the Opinion Leadership School. Due to the highly statistical nature of presidential polls, University of Alabama political scientist John O'Neal has approached the study of rally 'round the flag using mathematics. O'Neal has postulated that the Opinion Leadership School is the more accurate of the two using mathematical equations. 
These equations are based on quantified factors such as the number of headlines from The New York Times about the crisis, the presence of bipartisan support or hostility, and prior popularity of the president. Political Scientist from The University of California Los Angeles, Matthew A. Baum found that the source of a rally 'round the flag effect is from independents and members of the opposition party shifting their support behind the President after the rallying effect. Baum also found that when the country is more divided or in a worse economic state then the rally effect is larger. This is because more people who are against the president before the rallying event switch to support him afterwards. When the country is divided before the rallying event there is a higher potential increase in support for the President after the rallying event. In a study by Political Scientist Terrence L. Chapman and Dan Reiter, rallies in Presidential approval ratings were found to be bigger when there was U.N. Security Council supported Militarized interstate disputes (MIDs). Having U.N. Security Council support was found to increase the rally effect in presidential approval by 8 to 9 points compared to when there was not U.N. Security Council support. According to a 2019 study of ten countries in the period 1990–2014, there is evidence of a rally-around-the-flag effect early on in an intervention with military casualties (in at least the first year) but voters begin to punish the governing parties after 4.5 years. A 2021 study found weak effects for the rally-around-the-flag effect. A 2023 study found that militarized interstate disputes, on average, decrease public support for national leaders rather than increase it. A 2022 study applies the same logic of rally effects to crisis termination instead of just onset. Using all available public presidential polling and crisis data from 1953 to 2016, the researchers found that a president received a three point increase to their approval rating, on average, when terminating an international crisis. They suggest that the surge in approvals is as much related to a proof of a president's foreign affairs competency, as it is related to a mutual camaraderie in defense of the nation. Additionally, the suggestion that a president can achieve approval boosts via ending conflict instead of initiating conflict makes less cynical assumptions about the options within a president's toolkit and provide an additional avenue for inquiry into diversionary war theories. Historical examples The effect has been examined within the context of nearly every major foreign policy crisis since World War II. Some notable examples: United States Cuban Missile Crisis: According to Gallup polls, President John F. Kennedy's approval rating in early October 1962 was at 61%. By November, after the crisis had passed, Kennedy's approval rose to 74%. The spike in approval peaked in December 1962 at 76%. Kennedy's approval rating slowly decreased again until it reached the pre-crisis level of 61% in June 1963. Iran hostage crisis: According to Gallup polls, President Jimmy Carter saw his approval rating surge to 61%, up 23 points from his pre-crisis rating, following the initial seizure of the U.S. embassy in Tehran in November 1979. However, Carter's handling of the crisis caused popular support to decrease, and by November 1980 Carter had returned to his pre-crisis approval rating. Operation Desert Storm (Persian Gulf War): According to Gallup polls, President George H. W. 
Bush was rated at 59% approval in January 1991, but following the success of Operation Desert Storm, Bush enjoyed a peak 89% approval rating in February 1991. From there, Bush's approval rating slowly decreased, returning to near pre-crisis levels (61%) by October 1991. Following the September 11 attacks in 2001, President George W. Bush received an unprecedented increase in his approval rating. On September 10, Bush had a Gallup Poll rating of 51%. By September 15, his approval rate had increased by 34 percentage points to 85%. Just a week later, Bush was at 90%, the highest presidential approval rating ever. Over a year after the attacks occurred, Bush still received higher approval than he did before 9/11 (68% in November 2002). Both the size and duration of Bush's popularity after 9/11 are believed to be the largest of any post-crisis boost. Many people believe that this popularity gave Bush a mandate and eventually the political leverage to begin the War in Iraq. Killing of Osama bin Laden: According to Gallup polls, President Barack Obama received a slight uptick in his approval ratings following the May 2, 2011 killing of bin Laden, jumping from 45% in late April to 53% after bin Laden's death was announced. The rally effect did not last long, as Obama's approval ratings were back down to 45% by July 15. Sandy Hook Elementary School shooting: Following the shooting in December 2012, President Obama received another slight uptick in approval according to Gallup, increasing from 50% before the shooting to 56% shortly after. The rally effect was over by January 17, 2013. Killing of Ayman al-Zawahiri: According to Gallup polls, President Joe Biden received a small uptick in approval (from 38% to 44%) shortly after ordering a drone strike that killed al-Zawahiri on July 31, 2022. Afterwards, his approval ratings declined for the remainder of 2022. World War I During World War I, most belligerents saw a lessening of partisanship. Most socialist parties abandoned their pledges to oppose wars and endorsed their governments, leading to the break-up of the Second International. In the German Empire, the Social Democratic Party enabled the German entry into the war by voting in favor of war credits at the Reichstag and supported the government for most of the war. Kaiser Wilhelm II subsequently declared a Burgfrieden in which normal partisan politics would be suspended. The French Third Republic declared a similar Union sacrée, which was embraced by most socialist parties. Jean Jaurès, leader of the French Section of the Workers' International, was assassinated while preparing to give what the French nationalist assassin Raoul Villain believed would be a pacifist speech. The United Kingdom of Great Britain and Ireland also saw a lessening of political instability and militancy as Irish nationalist, suffragette, and trade unionist dissidents supported the war. In particular, the Curragh mutiny, in which many Ulster Protestants and some officers of the British Armed Forces with the support of the Conservative Party refused to recognize the impending implementation of Irish Home Rule and threatened civil war, immediately ended with the British entry into the war and an upswell of support for the Liberal government. Later, the Coalition Liberal-Conservative government of David Lloyd George used its wartime support to call a "coupon election" in 1918 in which candidates endorsed by the government with a "Coalition Coupon" won a massive landslide and decimated the Liberal Party.
World War II During World War II, British Prime Minister Winston Churchill enjoyed massive popularity in the United Kingdom for opposing the National Government's foreign policy of appeasement towards Nazi Germany before the war and for saving the country from near-defeat in the Battle of Britain afterwards. All major parties in the British Parliament, including the opposition Labour Party, joined Churchill's war ministry. His approval ratings never declined below 78 percent for his entire wartime premiership. Churchill's personal popularity did not extend to the Conservative Party, which opposed the welfare state advocated by Labour and remained associated with appeasement and interwar unemployment and poverty. As a result the Conservatives heavily lost the 1945 general election to Labour. Russo-Ukrainian War The Russo-Ukrainian War caused an increase of support for President Vladimir Putin in Russia. His approval rating rose 10 percent to 71.6 percent after he announced the Russian annexation of Crimea. His approval rating also rose to 69 percent, as did the support for Prime Minister Mikhail Mishustin, during the build-up to the 2022 Russian invasion of Ukraine. The war caused an even larger boost of both domestic and international support for Ukrainian President Volodymyr Zelenskyy, with his approval rating rising from 30 percent to 90 percent in Ukraine and also rising to 70 percent in the United States. In a pandemic The outbreak of the COVID-19 pandemic in 2020 briefly resulted in popularity spikes for several world leaders. President Donald Trump's approval rating saw a slight increase during the outbreak in early 2020. In addition to Trump, other heads of government in Europe also gained in popularity. French President Emmanuel Macron, Italian Prime Minister Giuseppe Conte, Dutch Prime Minister Mark Rutte, and British Prime Minister Boris Johnson became "very popular" in the weeks following the pandemic hitting their respective nations. Johnson, in particular, who "became seriously ill himself" from COVID-19, led his government to become "the most popular in decades." It was uncertain how long their increase in the approval polls would last, but former NATO secretary general George Robertson opined, "People do rally around, but it evaporates fast." Controversy and fears of misuse There are fears that a leader will abuse the rally 'round the flag effect. These fears come from the diversionary theory of war in which a leader creates an international crisis in order to distract from domestic affairs and to increase their approval ratings through a rally 'round the flag effect. The fear associated with this theory is that a leader can create international crises to avoid dealing with serious domestic issues or to increase their approval rating when it begins to drop. In popular culture Wag the Dog is a movie released by coincidence a month just before the Clinton–Lewinsky scandal with high likenesses to the events of the affair. To divert attention on a sex scandal, the American president fabricates a war with terrorists in Albania. The movie was also remade two years later. See also "Battle Cry of Freedom", an American Civil War song that urged union supporters to "rally 'round the flag", that was also used for Abraham Lincoln's re-election campaign. United States presidential approval rating Flag-waving References Conformity Diversionary tactics Opinion polling in the United States Legal terminology from popular culture Politics of the United States Propaganda
Rally 'round the flag effect
Biology
2,744
61,400,489
https://en.wikipedia.org/wiki/Phaeton%20complex
The Phaeton complex is a psychological condition described by Maryse Choisy as a "painful combination of thoughts and emotions caused by the absence, loss, coldness, or traumatizing behavior of one or both parents, resulting in frustration and aggression". The theory was devised by Lucille Iremonger, who in 1970 studied the 24 British prime ministers who held office from 1809 to 1940, and found that 62% of these men had lost one or both parents by age 15, compared to a national average of 10-15% in those times. Hugh Berrington expanded on the theory in 1974, finding sufferers of the Phaeton complex to be less sociable, flexible or tolerant, instead being ambitious, vain, sensitive, lonely and shy. Micha Popper, though, disputes that an unhappy childhood always leads to obsessive urges, citing Winston Churchill as an example where childhood unhappiness had positive results. The name derives from the Greek myth of Phaeton, a child of the sun god, who demands to drive his father's chariot and in doing so, falls to earth and scorches the Sahara Desert. Examples Neville Chamberlain, UK prime minister 1937–40, having lost his mother by age six, is said to have displayed 'all the characteristics of the damaged Phaeton - immature, sensitive, cold, secretive and depressed' when in office, according to Harry Davis. Zulfikar Ali Bhutto is described by Shamim Ahmad as a neglected child, 'having a sense of insecurity that drove him to prove himself worthy'. In a discussion of the Phaeton complex, Tom McTague lists Boris Johnson, Theresa May, Bill Clinton and Tony Blair as examples of ambitious, isolated, detached politicians who suffered a 'deprivation of love' in childhood. See also Complex (psychology) Ideocracy Napoleon complex References Complex (psychology) Power (social and political) theories Mental health Narcissism
Phaeton complex
Biology
401
78,164,918
https://en.wikipedia.org/wiki/Etamicastat
Etamicastat (; developmental code name BIA 5-453) is a peripherally selective dopamine β-hydroxylase (DBH) inhibitor which was under development for the treatment of hypertension (high blood pressure) and heart failure but was never marketed. The peripheral selectivity of etamicastat is in contrast to the earlier DBH inhibitor nepicastat, which is centrally active and produced associated side effects. Etamicastat was found to reduce blood pressure but not affect heart rate in clinical trials. The development of etamicastat was discontinued by August 2016. See also Nepicastat Zamicastat References Abandoned drugs Amines Chromanes Dopamine beta hydroxylase inhibitors Fluoroarenes Imidazoles Peripherally selective drugs
Etamicastat
Chemistry
163
22,356,796
https://en.wikipedia.org/wiki/Potting%20and%20stamping
Potting and stamping is a modern name for one of the 18th-century processes for refining pig iron without the use of charcoal. Inventors The process was devised by Charles Wood of Lowmill, Egremont in Cumberland and his brother John Wood of Wednesbury and patented by them in 1761 and 1763. The process was improved by John Wright and Joseph Jesson of West Bromwich, who also obtained a patent. Process The process involved the melting of pig iron in an oxidising atmosphere. The metal was then allowed to cool, broken up by stamping, and washed. The granulated iron was then heated in pots in a reverberatory furnace. The resultant bloom was then drawn out under a forge hammer in the usual way. Adoption During the 14-year term of the patents, the process was little used except by the inventors. However, from c.1785, shortly before Wright & Jesson's process came out of patent, it seems to have been adopted by many ironmasters in the West Midlands. Professor Charles Hyde argues that the potting and stamping process was largely responsible for a 70% rise in wrought iron production from 1750 to 1788. Ultimately, the process was replaced by puddling, though it remains unclear how quickly. References Steelmaking
Potting and stamping
Chemistry,Engineering
259
5,824,808
https://en.wikipedia.org/wiki/Bounded%20quantifier
In the study of formal theories in mathematical logic, bounded quantifiers (a.k.a. restricted quantifiers) are often included in a formal language in addition to the standard quantifiers "∀" and "∃". Bounded quantifiers differ from "∀" and "∃" in that bounded quantifiers restrict the range of the quantified variable. The study of bounded quantifiers is motivated by the fact that determining whether a sentence with only bounded quantifiers is true is often not as difficult as determining whether an arbitrary sentence is true. Examples Examples of bounded quantifiers in the context of real analysis include: - ∀x > 0: for all x where x is larger than 0 - ∃y < 0: there exists a y where y is less than 0 - ∀x ∈ ℝ: for all x where x is a real number - ∀x > 0 ∃y < 0 (x = y²): every positive number is the square of a negative number Bounded quantifiers in arithmetic Suppose that L is the language of Peano arithmetic (the language of second-order arithmetic or arithmetic in all finite types would work as well). There are two types of bounded quantifiers: ∀n < t and ∃n < t. These quantifiers bind the number variable n using a numeric term t not containing n but which may have other free variables. ("Numeric terms" here means terms such as "1 + 1", "2", "2 × 3", "m + 3", etc.) These quantifiers are defined by the following rules (φ denotes formulas): ∀n < t φ stands for ∀n (n < t → φ), and ∃n < t φ stands for ∃n (n < t ∧ φ). There are several motivations for these quantifiers. In applications of the language to recursion theory, such as the arithmetical hierarchy, bounded quantifiers add no complexity. If φ is a decidable predicate then ∀n < t φ and ∃n < t φ are decidable as well. In applications to the study of Peano arithmetic, the fact that a particular set can be defined with only bounded quantifiers can have consequences for the computability of the set. For example, there is a definition of primality using only bounded quantifiers: a number n is prime if and only if there are not two numbers strictly less than n whose product is n. There is no quantifier-free definition of primality in the language L, however. The fact that there is a bounded quantifier formula defining primality shows that the primality of each number can be computably decided (a short computational sketch of this point is given at the end of this entry). In general, a relation on natural numbers is definable by a bounded formula if and only if it is computable in the linear-time hierarchy, which is defined similarly to the polynomial hierarchy, but with linear time bounds instead of polynomial. Consequently, all predicates definable by a bounded formula are Kalmár elementary, context-sensitive, and primitive recursive. In the arithmetical hierarchy, an arithmetical formula that contains only bounded quantifiers is called Σ⁰₀, Π⁰₀, and Δ⁰₀. The superscript 0 is sometimes omitted. Bounded quantifiers in set theory Suppose that L is the language ⟨∈, …⟩ of Zermelo–Fraenkel set theory, where the ellipsis may be replaced by term-forming operations such as a symbol for the powerset operation. There are two bounded quantifiers: ∀x ∈ t and ∃x ∈ t. These quantifiers bind the set variable x and contain a term t which may not mention x but which may have other free variables. The semantics of these quantifiers is determined by the following rules: ∀x ∈ t φ stands for ∀x (x ∈ t → φ), and ∃x ∈ t φ stands for ∃x (x ∈ t ∧ φ). A ZF formula that contains only bounded quantifiers is called Σ₀, Π₀, and Δ₀. This forms the basis of the Lévy hierarchy, which is defined analogously with the arithmetical hierarchy. Bounded quantifiers are important in Kripke–Platek set theory and constructive set theory, where only Δ0 separation is included. That is, it includes separation for formulas with only bounded quantifiers, but not separation for other formulas.
In KP the motivation is the fact that whether a set x satisfies a bounded quantifier formula only depends on the collection of sets that are close in rank to x (as the powerset operation can only be applied finitely many times to form a term). In constructive set theory, it is motivated on predicative grounds. See also Subtyping — bounded quantification in type theory System F<: — a polymorphic typed lambda calculus with bounded quantification References Quantifier (logic) Proof theory Computability theory
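As promised above, here is a short computational sketch of the decidability point: a formula built only from bounded quantifiers can be evaluated by finite search, because each bounded quantifier ranges over only finitely many values. The sketch is not part of the original article; the helper names bounded_forall, bounded_exists and is_prime are illustrative, and the code simply transcribes the article's bounded-quantifier definition of primality into Python.

def bounded_forall(bound, pred):
    # "for all n < bound, pred(n)": a finite conjunction, checked by a loop.
    return all(pred(n) for n in range(bound))

def bounded_exists(bound, pred):
    # "there exists n < bound with pred(n)": a finite disjunction.
    return any(pred(n) for n in range(bound))

def is_prime(n):
    # The article's definition: n is prime iff there are no two numbers
    # strictly less than n whose product is n (the guard n > 1 excludes the
    # trivial cases 0 and 1, which the informal statement glosses over).
    return n > 1 and not bounded_exists(
        n, lambda a: bounded_exists(n, lambda b: a * b == n)
    )

print([m for m in range(30) if is_prime(m)])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

Because every quantifier here is bounded by the number being tested, the search always terminates, mirroring the claim that predicates defined by bounded formulas are computable (indeed primitive recursive).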
Bounded quantifier
Mathematics
915
4,387,316
https://en.wikipedia.org/wiki/Optical%20mineralogy
Optical mineralogy is the study of minerals and rocks by measuring their optical properties. Most commonly, rock and mineral samples are prepared as thin sections or grain mounts for study in the laboratory with a petrographic microscope. Optical mineralogy is used to identify the mineralogical composition of geological materials in order to help reveal their origin and evolution. Some of the properties and techniques used include: Refractive index Birefringence Michel-Lévy Interference colour chart Pleochroism Extinction angle Conoscopic interference pattern (Interference figure) Becke line test Optical relief Sign of elongation (Length fast vs. length slow) Wave plate History William Nicol, whose name is associated with the creation of the Nicol prism, is likely the first to prepare thin slices of mineral substances, and his methods were applied by Henry Thronton Maire Witham (1831) to the study of plant petrifactions. This method, of significant importance in petrology, was not at once made use of for the systematic investigation of rocks, and it was not until 1858 that Henry Clifton Sorby pointed out its value. Meanwhile, the optical study of sections of crystals had been advanced by Sir David Brewster and other physicists and mineralogists and it only remained to apply their methods to the minerals visible in rock sections. Sections A rock-section should be about one-thousandth of an inch (30 micrometres) in thickness, and is relatively easy to make. A thin splinter of the rock, about 1 centimetre may be taken; it should be as fresh as possible and free from obvious cracks. By grinding it on a plate of planed steel or cast iron with a little fine carborundum it is soon rendered flat on one side, and is then transferred to a sheet of plate glass and smoothed with the finest grained emery until all roughness and pits are removed, and the surface is a uniform plane. The rock chip is then washed, and placed on a copper or iron plate which is heated by a spirit or gas lamp. A microscopic glass slip is also warmed on this plate with a drop of viscous natural Canada balsam on its surface. The more volatile ingredients of the balsam are dispelled by the heat, and when that is accomplished the smooth, dry, warm rock is pressed firmly into contact with the glass plate so that the film of balsam intervening may be as thin as possible and free from air bubbles. The preparation is allowed to cool, and the rock chip is again ground down as before, first with carborundum and, when it becomes transparent, with fine emery until the desired thickness is obtained. It is then cleaned, again heated with an additional small amount of balsam, and covered with a cover glass. The labor of grinding the first surface may be avoided by cutting off a smooth slice with an iron disk armed with crushed diamond powder. A second application of the slitter after the first face is smoothed and cemented to the glass will, in expert hands, leave a section of rock so thin as to be transparent. In this way the preparation of a section may require only twenty minutes. Microscope The microscope employed is usually one which is provided with a rotating stage beneath which there is a polarizer, while above the objective or eyepiece an analyzer is mounted; alternatively the stage may be fixed, and the polarizing and analyzing prisms may be capable of simultaneous rotation by means of toothed wheels and a connecting rod. 
If ordinary light and not polarized light is desired, both prisms may be withdrawn from the axis of the instrument; if the polarizer only is inserted the light transmitted is plane polarized; with both prisms in position the slide is viewed in cross-polarized light, also known as "crossed nicols". A microscopic rock-section in ordinary light, if a suitable magnification (e.g. around 30x) be employed, is seen to consist of grains or crystals varying in color, size, and shape. Characteristics of minerals Color Some minerals are colorless and transparent (quartz, calcite, feldspar, muscovite, etc.), while others are yellow or brown (rutile, tourmaline, biotite), green (diopside, hornblende, chlorite), blue (glaucophane). Many minerals may present a variety of colors, in the same or different rocks, or even multiple colours in a single mineral specimen called colour zonation. For example, the mineral tourmaline may have concentric zones of colour ranging from brown, yellow, pink, blue, green, violet, or grey, to colorless. Every mineral has one or more, most common tints. Habit & Cleavage The shapes of the crystals determine in a general way the outlines of the sections of them presented on the slides. If the mineral has one or more good cleavages, they will be indicated by sets of similarly oriented planes called cleavage planes. The orientation of cleavage planes is determined by the crystal structure of a mineral and form preferentially through planes along which the weakest bonds lie, thus the orientation of cleavage planes can be used in optical mineralogy to identify minerals. Refractive Index & Birefringence Information regarding the refractive index of a mineral can be observed by making comparisons with the surrounding materials. This could be other minerals or the medium in which a grain is mounted. The greater the difference in Optical relief the greater the difference in refractive index between the media. The material with a lower refractive index and thus lower relief will appear to sink into the slide or mount, while a material with higher refractive index will have higher relief and appear to pop out. The Becke line test can also be used to compare the refractive index of two media. Pleochroism Further information is obtained by inserting the lower polarizer and rotating the section. The light vibrates in only one plane, and in passing through doubly refracting crystals in the slide, is, speaking generally, broken up into rays, which vibrate at right angles to one another. In many colored minerals such as biotite, hornblende, tourmaline, chlorite, these two rays have different colors, and when a section containing any of these minerals is rotated the change of color is often clearly noticeable. This property, known as "pleochroism" is of great value in the determination of mineral composition. Pleochroism is often especially intense in small spots which surround minute enclosures of other minerals, such as zircon and epidote. These are known as "pleochroic halos". Alteration Products Some minerals decompose readily and become turbid and semi-transparent (e.g. feldspar); others remain always perfectly fresh and clear (e.g. quartz), while others yield characteristic secondary products (such as green chlorite after biotite). The inclusions in the crystals (both solid and fluid) are of great interest; one mineral may enclose another, or may contain spaces occupied by glass, by fluids or by gases. 
Microstructure The structure of the rock - the relation of its components to one another - is usually clearly indicated, whether it is fragmented or massive; the presence of glassy matter in contradistinction to a completely crystalline or "holo-crystalline" condition; the nature and origin of organic fragments; banding, foliation or lamination; the pumiceous or porous structure of many lavas. These and many other characters, though often not visible in the hand specimens of a rock, are rendered obvious by the examination of a microscopic section. Various methods of detailed observation may be applied, such as the measurement of the size of the elements of the rock by the help of micrometers, their relative proportions by means of a glass plate ruled in small squares, the angles between cleavages or faces seen in section by the use of the rotating graduated stage, and the estimation of the refractive index of the mineral by comparison with those of different mounting media. Double refraction If the analyzer is inserted in such a position that it is crossed relatively to the polarizer, the field of view will be dark where there are no minerals or where the light passes through isotropic substances such as glass, liquids and cubic crystals. All other crystalline bodies, being doubly refracting, will appear bright in some position as the stage is rotated. The only exception to this rule is provided by sections which are perpendicular to the optic axes of birefringent crystals, which remain dark or nearly dark during a whole rotation, the investigation of which is frequently important. Extinction Doubly refracting mineral sections will in all cases appear black in certain positions as the stage is rotated. They are said to "go extinct" when this takes place. The angle between these and any cleavages can be measured by rotating the stage and recording these positions. These angles are characteristic of the system to which the mineral belongs, and often of the mineral species itself (see Crystallography). To facilitate measurement of extinction angles, various types of eyepieces have been devised, some having a stereoscopic calcite plate, others with two or four plates of quartz cemented together. These are often found to give more precise results than are obtained by observing only the position in which the mineral section is most completely dark between crossed nicols. The mineral sections when not extinguished are not only bright, but are colored, and the colors they show depend on several factors, the most important of which is the strength of the double refraction. If all the sections are of the same thickness, as is nearly true of well-made slides, the minerals with strongest double refraction yield the highest polarization colors. The order in which the colors are arranged is expressed in what is known as Newton's scale, the lowest being dark grey, then grey, white, yellow, orange, red, purple, blue, and so on. The difference between the refractive indexes of the ordinary and the extraordinary ray in quartz is .009, and in a rock-section about 1/500 of an inch thick, this mineral gives grey and white polarization colors; nepheline with weaker double refraction gives dark grey; augite on the other hand will give red and blue, while calcite with the stronger double refraction will appear pinkish or greenish white. 
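The dependence of the polarization color on section thickness and on the strength of the double refraction described above can be put into numbers: the retardation between the two rays is simply the thickness multiplied by the birefringence (the difference between the two refractive indexes), and the retardation fixes the position on Newton's scale. The following sketch is an added illustration, not part of the original article; the 30-micrometre thickness comes from the section-preparation discussion earlier, the quartz value 0.009 from this paragraph, and the calcite birefringence of about 0.172 is a standard textbook figure assumed here rather than one given in the text.

def retardation_nm(thickness_um, birefringence):
    # Retardation in nanometres = thickness (converted to nm) x birefringence.
    return thickness_um * 1000.0 * birefringence

quartz_delta = retardation_nm(30, 0.009)    # about 270 nm: first-order grey to white
calcite_delta = retardation_nm(30, 0.172)   # about 5160 nm: high-order, washed-out tints
print(round(quartz_delta), round(calcite_delta))

At roughly 270 nm quartz sits among the first-order greys and whites, while calcite's far larger retardation pushes it into the high orders, whose overlapping colors appear as the pale pinkish or greenish white mentioned in the text.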
All sections of the same mineral, however, will not have the same color: sections perpendicular to an optic axis will be nearly black, and, in general, the more nearly any section approaches this direction the lower its polarization colors will be. By taking the average, or the highest color given by any mineral, the relative value of its double refraction can be estimated, or if the thickness of the section be precisely known the difference between the two refractive indexes can be ascertained. If the slides are thick the colors will be on the whole higher than on thin slides. It is often important to find out which of the two axes of elasticity (or vibration traces) in the section is that of greater elasticity (or lesser refractive index). The quartz wedge or selenite plate enables this to be determined. Suppose a doubly refracting mineral section so placed that it is "extinguished"; if it is now rotated through 45 degrees it will be brightly illuminated. If the quartz wedge be passed across it so that the long axis of the wedge is parallel to the axis of elasticity in the section the polarization colors will rise or fall. If they rise the axes of greater elasticity in the two minerals are parallel; if they sink the axis of greater elasticity in the one is parallel to that of lesser elasticity in the other. In the latter case by pushing the wedge sufficiently far complete darkness or compensation will result. Selenite wedges, selenite plates, mica wedges and mica plates are also used for this purpose. A quartz wedge also may be calibrated by determining the amount of double refraction in all parts of its length. If now it be used to produce compensation or complete extinction in any doubly refracting mineral section, we can ascertain what is the strength of the double refraction of the section because it is obviously equal and opposite to that of a known part of the quartz wedge. A further refinement of microscopic methods consists of the use of strongly convergent polarized light (conoscopic methods). This is obtained by a wide-angled achromatic condenser above the polarizer, and a high power microscopic objective. Those sections are most useful which are perpendicular to an optic axis, and consequently remain dark on rotation. If they belong to uniaxial crystals they show a dark cross in convergent light between crossed nicols, the bars of which remain parallel to the wires in the field of the eyepiece. Sections perpendicular to an optic axis of a biaxial mineral under the same conditions show a dark bar which on rotation becomes curved to a hyperbolic shape. If the section is perpendicular to a "bisectrix" (see Crystallography) a black cross is seen which on rotation opens out to form two hyperbolas, the apices of which are turned towards one another. The optic axes emerge at the apices of the hyperbolas and may be surrounded by colored rings, though owing to the thinness of minerals in rock sections these are only seen when the double refraction of the mineral is strong. The distance between the axes as seen in the field of the microscope depends partly on the axial angle of the crystal and partly on the numerical aperture of the objective. If it is measured by means of an eye-piece micrometer, the optic axial angle of the mineral can be found by a simple calculation. The quartz wedge, quarter-wave mica plate or selenite plate permits the determination of the positive or negative character of the crystal by the changes in the color or shape of the figures observed in the field. 
These operations are similar to those employed by the mineralogist in the examination of plates cut from crystals. Examination of rock powders Although rocks are now studied principally in microscopic sections the investigation of fine crushed rock powders, which was the first branch of microscopic petrology to receive attention, is still actively used. The modern optical methods are readily applicable to transparent mineral fragments of any kind. Minerals are almost as easily determined in powder as in section, but it is otherwise with rocks, as the structure or relation of the components to one another. This is an element of great importance in the study of the history and classification of rocks, and is almost completely destroyed by grinding them to powder. References External links Video atlas of minerals in thin section Name that Mineral Datatable for comparing observable properties of minerals in thin sections, under transmitted or reflected light. Polarization (waves)
Optical mineralogy
Physics
3,053
69,319,155
https://en.wikipedia.org/wiki/TOLIMAN
The TOLIMAN (Telescope for Orbit Locus Interferometric Monitoring of our Astronomical Neighbourhood) space telescope is a low-cost mission concept aimed at detecting exoplanets via the astrometry method, specifically targeting the Alpha Centauri system. TOLIMAN will focus on stars within 10 parsecs (32.6 light years) of the Sun. The telescope, still under construction, is expected to be launched into low Earth orbit no earlier than 2024. The mission will involve scientists from the University of Sydney, Saber Astronautics in Australia, Breakthrough Initiatives, and NASA's Jet Propulsion Laboratory. According to mission leader Peter Tuthill, the launch is expected no earlier than the end of 2024. TOLIMAN will explore all three components of the Alpha Centauri system in search of planets in the habitable zone. References External links About the project at Breakthrough Initiatives Exoplanet search projects Proposed NASA space probes Space astrometry missions
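To give a feel for how demanding astrometric detection of a habitable-zone planet around Alpha Centauri is, the sketch below estimates the star's reflex "wobble" on the sky. The stellar mass, orbital radius and distance are approximate values assumed here for illustration, not figures taken from the mission.

```python
# Rough, illustrative estimate of the astrometric signal an Earth-like planet would produce.
# All numerical values below are approximate assumptions, not mission figures.

M_PLANET_EARTH_MASSES = 1.0
M_STAR_SOLAR_MASSES = 1.1          # assumed mass of Alpha Centauri A
ORBIT_AU = 1.2                     # assumed habitable-zone orbital radius
DISTANCE_PC = 1.34                 # assumed distance to Alpha Centauri
EARTH_MASS_IN_SOLAR = 3.003e-6     # Earth mass in solar masses
AU_PER_PARSEC = 206_264.8          # 1 pc = 206,265 AU (definition of the parsec)
ARCSEC_PER_RADIAN = 206_264.8

def astrometric_wobble_microarcsec(m_planet_earth, m_star_solar, a_au, d_pc):
    """Angular size of the star's reflex orbit: alpha = (m_planet / M_star) * a / d."""
    mass_ratio = (m_planet_earth * EARTH_MASS_IN_SOLAR) / m_star_solar
    angle_rad = mass_ratio * a_au / (d_pc * AU_PER_PARSEC)
    return angle_rad * ARCSEC_PER_RADIAN * 1e6  # radians -> microarcseconds

wobble = astrometric_wobble_microarcsec(
    M_PLANET_EARTH_MASSES, M_STAR_SOLAR_MASSES, ORBIT_AU, DISTANCE_PC)
print(f"~{wobble:.1f} microarcseconds")   # a few microarcseconds with these assumptions
```

With these assumed inputs the signal is of order a couple of microarcseconds, which is why the mission concept relies on very precise differential astrometry between the two bright components of the pair.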
TOLIMAN
Astronomy
199
10,826
https://en.wikipedia.org/wiki/Fax
Fax (short for facsimile), sometimes called telecopying or telefax (short for telefacsimile), is the telephonic transmission of scanned printed material (both text and images), normally to a telephone number connected to a printer or other output device. The original document is scanned with a fax machine (or a telecopier), which processes the contents (text or images) as a single fixed graphic image, converting it into a bitmap, and then transmitting it through the telephone system in the form of audio-frequency tones. The receiving fax machine interprets the tones and reconstructs the image, printing a paper copy. Early systems used direct conversions of image darkness to audio tone in a continuous or analog manner. Since the 1980s, most machines transmit an audio-encoded digital representation of the page, using data compression to transmit areas that are all-white or all-black, more quickly. Initially a niche product, fax machines became ubiquitous in offices in the 1980s and 1990s. However, they have largely been rendered obsolete by Internet-based technologies such as email and the World Wide Web, but are still used in some medical administration and law enforcement settings. History Wire transmission Scottish inventor Alexander Bain worked on chemical-mechanical fax-type devices and in 1846 Bain was able to reproduce graphic signs in laboratory experiments. He received British patent 9745 on May 27, 1843, for his "Electric Printing Telegraph". Frederick Bakewell made several improvements on Bain's design and demonstrated a telefax machine. The Pantelegraph was invented by the Italian physicist Giovanni Caselli. He introduced the first commercial telefax service between Paris and Lyon in 1865, some 11 years before the invention of the telephone. In 1880, English inventor Shelford Bidwell constructed the scanning phototelegraph that was the first telefax machine to scan any two-dimensional original, not requiring manual plotting or drawing. An account of Henry Sutton's "telephane" was published in 1896. Around 1900, German physicist Arthur Korn invented the Bildtelegraph, widespread in continental Europe especially following a widely noticed transmission of a wanted-person photograph from Paris to London in 1908, used until the wider distribution of the radiofax. Its main competitors were the Bélinographe by Édouard Belin first, then since the 1930s the Hellschreiber, invented in 1929 by German inventor Rudolf Hell, a pioneer in mechanical image scanning and transmission. The 1888 invention of the telautograph by Elisha Gray marked a further development in fax technology, allowing users to send signatures over long distances, thus allowing the verification of identification or ownership over long distances. On May 19, 1924, scientists of the AT&T Corporation "by a new process of transmitting pictures by electricity" sent 15 photographs by telephone from Cleveland to New York City, such photos being suitable for newspaper reproduction. Previously, photographs had been sent over the radio using this process. The Western Union "Deskfax" fax machine, announced in 1948, was a compact machine that fit comfortably on a desktop, using special spark printer paper. Wireless transmission As a designer for the Radio Corporation of America (RCA), in 1924, Richard H. Ranger invented the wireless photoradiogram, or transoceanic radio facsimile, the forerunner of today's "fax" machines. 
A photograph of President Calvin Coolidge sent from New York to London on November 29, 1924, became the first photo picture reproduced by transoceanic radio facsimile. Commercial use of Ranger's product began two years later. Also in 1924, Herbert E. Ives of AT&T transmitted and reconstructed the first color facsimile, a natural-color photograph of silent film star Rudolph Valentino in period costume, using red, green and blue color separations. Beginning in the late 1930s, the Finch Facsimile system was used to transmit a "radio newspaper" to private homes via commercial AM radio stations and ordinary radio receivers equipped with Finch's printer, which used thermal paper. Sensing a new and potentially golden opportunity, competitors soon entered the field, but the printer and special paper were expensive luxuries, AM radio transmission was very slow and vulnerable to static, and the newspaper was too small. After more than ten years of repeated attempts by Finch and others to establish such a service as a viable business, the public, apparently quite content with its cheaper and much more substantial home-delivered daily newspapers, and with conventional spoken radio bulletins to provide any "hot" news, still showed only a passing curiosity about the new medium. By the late 1940s, radiofax receivers were sufficiently miniaturized to be fitted beneath the dashboard of Western Union's "Telecar" telegram delivery vehicles. In the 1960s, the United States Army transmitted the first photograph via satellite facsimile to Puerto Rico from the Deal Test Site using the Courier satellite. Radio fax is still in limited use today for transmitting weather charts and information to ships at sea. The closely related technology of slow-scan television is still used by amateur radio operators. Telephone transmission In 1964, Xerox Corporation introduced (and patented) what many consider to be the first commercialized version of the modern fax machine, under the name (LDX) or Long Distance Xerography. This model was superseded two years later with a unit that would set the standard for fax machines for years to come. Up until this point facsimile machines were very expensive and hard to operate. In 1966, Xerox released the Magnafax Telecopiers, a smaller, facsimile machine. This unit was far easier to operate and could be connected to any standard telephone line. This machine was capable of transmitting a letter-sized document in about six minutes. The first sub-minute, digital fax machine was developed by Dacom, which built on digital data compression technology originally developed at Lockheed for satellite communication. By the late 1970s, many companies around the world (especially Japanese firms) had entered the fax market. Very shortly after this, a new wave of more compact, faster and efficient fax machines would hit the market. Xerox continued to refine the fax machine for years after their ground-breaking first machine. In later years it would be combined with copier equipment to create the hybrid machines we have today that copy, scan and fax. Some of the lesser known capabilities of the Xerox fax technologies included their Ethernet enabled Fax Services on their 8000 workstations in the early 1980s. Prior to the introduction of the ubiquitous fax machine, one of the first being the Exxon Qwip in the mid-1970s, facsimile machines worked by optical scanning of a document or drawing spinning on a drum. 
The reflected light, varying in intensity according to the light and dark areas of the document, was focused on a photocell so that the current in a circuit varied with the amount of light. This current was used to control a tone generator (a modulator), the current determining the frequency of the tone produced. This audio tone was then transmitted using an acoustic coupler (a speaker, in this case) attached to the microphone of a common telephone handset. At the receiving end, a handset's speaker was attached to an acoustic coupler (a microphone), and a demodulator converted the varying tone into a variable current that controlled the mechanical movement of a pen or pencil to reproduce the image on a blank sheet of paper on an identical drum rotating at the same rate. Computer facsimile interface In 1985, Hank Magnuski, founder of GammaLink, produced the first computer fax board, called GammaFax. Such boards could provide voice telephony via the Analog Expansion Bus. In the 21st century Although businesses usually maintain some kind of fax capability, the technology has faced increasing competition from Internet-based alternatives. In some countries, because electronic signatures on contracts are not yet recognized by law, while faxed contracts with copies of signatures are, fax machines enjoy continuing support in business. In Japan, faxes are still used extensively as of September 2020, partly for cultural reasons. They are available for sending to both domestic and international recipients from over 81% of all convenience stores nationwide. Convenience-store fax machines commonly print the slightly re-sized content of the sent fax in the electronic confirmation-slip, in A4 paper size. Use of fax machines for reporting cases during the COVID-19 pandemic has been criticised in Japan for introducing data errors and delays in reporting, slowing response efforts to contain the spread of infections and hindering the transition to remote work. In many corporate environments, freestanding fax machines have been replaced by fax servers and other computerized systems capable of receiving and storing incoming faxes electronically, and then routing them to users on paper or via an email (which may be secured). Such systems have the advantage of reducing costs by eliminating unnecessary printouts and reducing the number of inbound analog phone lines needed by an office. The once ubiquitous fax machine has also begun to disappear from small office and home office environments. Remotely hosted fax-server services are widely available from VoIP and e-mail providers allowing users to send and receive faxes using their existing e-mail accounts without the need for any hardware or dedicated fax lines. Personal computers have also long been able to handle incoming and outgoing faxes using analog modems or ISDN, eliminating the need for a stand-alone fax machine. These solutions are often ideally suited for users who only very occasionally need to use fax services. In July 2017 the United Kingdom's National Health Service was said to be the world's largest purchaser of fax machines because the digital revolution had largely bypassed it. In June 2018 the Labour Party said that the NHS had at least 11,620 fax machines in operation and in December the Department of Health and Social Care said that no more fax machines could be bought from 2019 and that the existing ones must be replaced by secure email by March 31, 2020. 
Leeds Teaching Hospitals NHS Trust, generally viewed as digitally advanced in the NHS, was engaged in a process of removing its fax machines in early 2019. This involved quite a lot of e-fax solutions because of the need to communicate with pharmacies and nursing homes which may not have access to the NHS email system and may need something in their paper records. In 2018 two-thirds of Canadian doctors reported that they primarily used fax machines to communicate with other doctors. Faxes are still seen as safer and more secure and electronic systems are often unable to communicate with each other. Hospitals are the leading users for fax machines in the United States where some doctors prefer fax machines over emails, often due to concerns about accidentally violating HIPAA. Capabilities There are several indicators of fax capabilities: group, class, data transmission rate, and conformance with ITU-T (formerly CCITT) recommendations. Since the 1968 Carterfone decision, most fax machines have been designed to connect to standard PSTN lines and telephone numbers. Group Analog Group 1 and 2 faxes are sent in the same manner as a frame of analog television, with each scanned line transmitted as a continuous analog signal. Horizontal resolution depended upon the quality of the scanner, transmission line, and the printer. Analog fax machines are obsolete and no longer manufactured. ITU-T Recommendations T.2 and T.3 were withdrawn as obsolete in July 1996. Group 1 faxes conform to the ITU-T Recommendation T.2. Group 1 faxes take six minutes to transmit a single page, with a vertical resolution of 96 scan lines per inch. Group 1 fax machines are obsolete and no longer manufactured. Group 2 faxes conform to the ITU-T Recommendations T.3 and T.30. Group 2 faxes take three minutes to transmit a single page, with a vertical resolution of 96 scan lines per inch. Group 2 fax machines are almost obsolete, and are no longer manufactured. Group 2 fax machines can interoperate with Group 3 fax machines. Digital A major breakthrough in the development of the modern facsimile system was the result of digital technology, where the analog signal from scanners was digitized and then compressed, resulting in the ability to transmit high rates of data across standard phone lines. The first digital fax machine was the Dacom Rapidfax first sold in late 1960s, which incorporated digital data compression technology developed by Lockheed for transmission of images from satellites. Group 3 and 4 faxes are digital formats and take advantage of digital compression methods to greatly reduce transmission times. Group 3 faxes conform to the ITU-T Recommendations T.30 and T.4. Group 3 faxes take between 6 and 15 seconds to transmit a single page (not including the initial time for the fax machines to handshake and synchronize). The horizontal and vertical resolutions are allowed by the T.4 standard to vary among a set of fixed resolutions: Horizontal: 100 scan lines per inch Vertical: 100 scan lines per inch ("Basic") Horizontal: 200 or 204 scan lines per inch Vertical: 100 or 98 scan lines per inch ("Standard") Vertical: 200 or 196 scan lines per inch ("Fine") Vertical: 400 or 391 (note not 392) scan lines per inch ("Superfine") Horizontal: 300 scan lines per inch Vertical: 300 scan lines per inch Horizontal: 400 or 408 scan lines per inch Vertical: 400 or 391 scan lines per inch ("Ultrafine") Group 4 faxes are designed to operate over 64 kbit/s digital ISDN circuits. 
They conform to the ITU-T Recommendations T.563 (Terminal characteristics for Group 4 facsimile apparatus), T.503 (Document application profile for the interchange of Group 4 facsimile documents), T.521 (Communication application profile BT0 for document bulk transfer based on the session service), T.6 (Facsimile coding schemes and coding control functions for Group 4 facsimile apparatus) specifying resolutions, a superset of the resolutions from T.4 , T.62 (Control procedures for teletex and Group 4 facsimile services), T.70 (Network-independent basic transport service for the telematic services), and T.411 to T.417 (concerned with aspects of the Open Document Architecture). Fax Over IP (FoIP) can transmit and receive pre-digitized documents at near-realtime speeds using ITU-T recommendation T.38 to send digitised images over an IP network using JPEG compression. T.38 is designed to work with VoIP services and often supported by analog telephone adapters used by legacy fax machines that need to connect through a VoIP service. Scanned documents are limited to the amount of time the user takes to load the document in a scanner and for the device to process a digital file. The resolution can vary from as little as 150 DPI to 9600 DPI or more. This type of faxing is not related to the e-mail–to–fax service that still uses fax modems at least one way. Class Computer modems are often designated by a particular fax class, which indicates how much processing is offloaded from the computer's CPU to the fax modem. Class 1 (also known as Class 1.0) fax devices do fax data transfer, while the T.4/T.6 data compression and T.30 session management are performed by software on a controlling computer. This is described in ITU-T recommendation T.31. What is commonly known as "Class 2" is an unofficial class of fax devices that perform T.30 session management themselves, but the T.4/T.6 data compression is performed by software on a controlling computer. Implementations of this "class" are based on draft versions of the standard that eventually significantly evolved to become Class 2.0. All implementations of "Class 2" are manufacturer-specific. Class 2.0 is the official ITU-T version of Class 2 and is commonly known as Class 2.0 to differentiate it from many manufacturer-specific implementations of what is commonly known as "Class 2". It uses a different but standardized command set than the various manufacturer-specific implementations of "Class 2". The relevant ITU-T recommendation is T.32. Class 2.1 is an improvement of Class 2.0 that implements faxing over V.34 (33.6 kbit/s), which boosts faxing speed from fax classes "2" and 2.0, which are limited to 14.4 kbit/s. The relevant ITU-T recommendation is T.32 Amendment 1. Class 2.1 fax devices are referred to as "super G3". Data transmission rate Several different telephone-line modulation techniques are used by fax machines. They are negotiated during the fax-modem handshake, and the fax devices will use the highest data rate that both fax devices support, usually a minimum of 14.4 kbit/s for Group 3 fax. {| class="wikitable" !ITU standard !Released date !Data rates (bit/s) !Modulation method |- |V.27 |1988 |4800, 2400 |PSK |- |V.29 |1988 |9600, 7200, 4800 |QAM |- |V.17 |1991 |, , 9600, 7200 |TCM |- |V.34 |1994 | |QAM |- |V.34bis |1998 | |QAM |- |ISDN |1986 | |4B3T / 2B1Q (line coding) |} "Super Group 3" faxes use V.34bis modulation that allows a data rate of up to 33.6 kbit/s. 
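As a practical aside, the fax class a modem supports can be queried over its serial command channel with the standard AT+FCLASS command family defined alongside ITU-T T.31/T.32. The sketch below shows one way to do this with the pyserial library; the serial port name and baud rate are assumptions and will differ per system.

```python
# Query which fax service classes a modem reports (AT commands per ITU-T T.31/T.32 usage).
# Assumes the pyserial package is installed and that the modem appears at the port below.
import serial

PORT = "/dev/ttyUSB0"   # assumed device name; on Windows it might be "COM3"

with serial.Serial(PORT, baudrate=19200, timeout=2) as modem:
    modem.write(b"AT+FCLASS=?\r")       # ask the modem to list the classes it supports
    reply = modem.read(256).decode(errors="replace")
    print("Supported fax classes:", reply.strip())   # e.g. "0,1,2,2.0"

    # Selecting Class 1 would hand T.30 session management and T.4/T.6 compression
    # to host software, as described above:
    # modem.write(b"AT+FCLASS=1\r")
```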
Compression As well as specifying the resolution (and allowable physical size) of the image being faxed, the ITU-T T.4 recommendation specifies two compression methods for decreasing the amount of data that needs to be transmitted between the fax machines to transfer the image. The two methods defined in T.4 are: Modified Huffman (MH). Modified READ (MR) (Relative Element Address Designate), optional. An additional method is specified in T.6: Modified Modified READ (MMR). Later, other compression techniques were added as options to ITU-T recommendation T.30, such as the more efficient JBIG (T.82, T.85) for bi-level content, and JPEG (T.81), T.43, MRC (T.44), and T.45 for grayscale, palette, and colour content. Fax machines can negotiate at the start of the T.30 session to use the best technique implemented on both sides. Modified Huffman Modified Huffman (MH), specified in T.4 as the one-dimensional coding scheme, is a codebook-based run-length encoding scheme optimised to efficiently compress whitespace. As most faxes consist mostly of white space, this minimises the transmission time of most faxes. Each line scanned is compressed independently of its predecessor and successor. Modified READ Modified READ, specified as an optional two-dimensional coding scheme in T.4, encodes the first scanned line using MH. The next line is compared to the first, the differences determined, and then the differences are encoded and transmitted. This is effective, as most lines differ little from their predecessor. This is not continued to the end of the fax transmission, but only for a limited number of lines until the process is reset, and a new "first line" encoded with MH is produced. This limited number of lines is to prevent errors propagating throughout the whole fax, as the standard does not provide for error correction. This is an optional facility, and some fax machines do not use MR in order to minimise the amount of computation required by the machine. The limited number of lines is 2 for "Standard"-resolution faxes, and 4 for "Fine"-resolution faxes. Modified Modified READ The ITU-T T.6 recommendation adds a further compression type of Modified Modified READ (MMR), which simply allows a greater number of lines to be coded by MR than in T.4. This is because T.6 makes the assumption that the transmission is over a circuit with a low number of line errors, such as digital ISDN. In this case, the number of lines for which the differences are encoded is not limited. JBIG In 1999, ITU-T recommendation T.30 added JBIG (ITU-T T.82) as another lossless bi-level compression algorithm, or more precisely a "fax profile" subset of JBIG (ITU-T T.85). JBIG-compressed pages result in 20% to 50% faster transmission than MMR-compressed pages, and up to 30 times faster transmission if the page includes halftone images. JBIG performs adaptive compression, that is, both the encoder and decoder collect statistical information about the transmitted image from the pixels transmitted so far, in order to predict the probability for each next pixel being either black or white. For each new pixel, JBIG looks at ten nearby, previously transmitted pixels. It counts, how often in the past the next pixel has been black or white in the same neighborhood, and estimates from that the probability distribution of the next pixel. This is fed into an arithmetic coder, which adds only a small fraction of a bit to the output sequence if the more probable pixel is then encountered. 
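To make the run-length idea behind Modified Huffman concrete, here is a minimal sketch that extracts the alternating white/black run lengths of one scan line. It deliberately stops short of applying the actual MH codebook, which then assigns shorter bit codes to the more common run lengths.

```python
# Minimal illustration of the run-length step underlying Modified Huffman coding.
# A scan line is reduced to the lengths of its alternating white (0) and black (1) runs;
# T.4 then replaces each run length with a variable-length code word (not shown here).

def run_lengths(scan_line):
    """Return (first_pixel, [run lengths]) for a list of 0s and 1s."""
    runs = []
    prev = None
    for pixel in scan_line:
        if pixel == prev:
            runs[-1] += 1
        else:
            runs.append(1)
            prev = pixel
    return (scan_line[0] if scan_line else 0), runs

# A mostly white line compresses to a handful of numbers instead of one value per pixel.
line = [0] * 100 + [1] * 5 + [0] * 60 + [1] * 2 + [0] * 33   # 200 pixels
first, runs = run_lengths(line)
print("first pixel:", first, "runs:", runs)   # -> runs: [100, 5, 60, 2, 33]
```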
The ITU-T T.85 "fax profile" constrains some optional features of the full JBIG standard, such that codecs do not have to keep data about more than the last three pixel rows of an image in memory at any time. This allows the streaming of "endless" images, where the height of the image may not be known until the last row is transmitted. ITU-T T.30 allows fax machines to negotiate one of two options of the T.85 "fax profile": In "basic mode", the JBIG encoder must split the image into horizontal stripes of 128 lines (parameter L0 = 128) and restart the arithmetic encoder for each stripe. In "option mode", there is no such constraint. Matsushita Whiteline Skip A proprietary compression scheme employed on Panasonic fax machines is Matsushita Whiteline Skip (MWS). It can be overlaid on the other compression schemes, but is operative only when two Panasonic machines are communicating with one another. This system detects the blank scanned areas between lines of text, and then compresses several blank scan lines into the data space of a single character. (JBIG implements a similar technique called "typical prediction", if header flag TPBON is set to 1.) Typical characteristics Group 3 fax machines transfer one or a few printed or handwritten pages per minute in black-and-white (bitonal) at a resolution of 204×98 (normal) or 204×196 (fine) dots per square inch. The transfer rate is 14.4 kbit/s or higher for modems and some fax machines, but fax machines support speeds beginning with 2400 bit/s and typically operate at 9600 bit/s. The transferred image formats are called ITU-T (formerly CCITT) fax group 3 or 4. Group 3 faxes have the suffix .g3 and the MIME type image/g3fax. The most basic fax mode transfers in black and white only. The original page is scanned in a resolution of 1728 pixels/line and 1145 lines/page (for A4). The resulting raw data is compressed using a modified Huffman code optimized for written text, achieving average compression factors of around 20. Typically a page needs 10 s for transmission, instead of about three minutes for the same uncompressed raw data of 1728×1145 bits at a speed of 9600 bit/s. The compression method uses a Huffman codebook for run lengths of black and white runs in a single scanned line, and it can also use the fact that two adjacent scanlines are usually quite similar, saving bandwidth by encoding only the differences. Fax classes denote the way fax programs interact with fax hardware. Available classes include Class 1, Class 2, Class 2.0 and 2.1, and Intel CAS. Many modems support at least class 1 and often either Class 2 or Class 2.0. Which is preferable to use depends on factors such as hardware, software, modem firmware, and expected use. Printing process Fax machines from the 1970s to the 1990s often used direct thermal printers with rolls of thermal paper as their printing technology, but since the mid-1990s there has been a transition towards plain-paper faxes: thermal transfer printers, inkjet printers and laser printers. One of the advantages of inkjet printing is that inkjets can affordably print in color; therefore, many of the inkjet-based fax machines claim to have color fax capability. There is a standard called ITU-T30e (formally ITU-T Recommendation T.30 Annex E ) for faxing in color; however, it is not widely supported, so many of the color fax machines can only fax in color to machines from the same manufacturer. 
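The timing figures quoted above (about 10 s per page compressed, versus roughly three minutes uncompressed at 9600 bit/s) can be checked with a quick back-of-the-envelope sketch; the compression factor of about 20 is the one given in the text.

```python
# Back-of-the-envelope check of Group 3 page-transmission times from the figures above.

PIXELS_PER_LINE = 1728
LINES_PER_PAGE = 1145
BITRATE_BPS = 9600
COMPRESSION_FACTOR = 20   # typical MH compression for written text, per the text above

raw_bits = PIXELS_PER_LINE * LINES_PER_PAGE          # one bit per bitonal pixel
uncompressed_s = raw_bits / BITRATE_BPS
compressed_s = uncompressed_s / COMPRESSION_FACTOR

print(f"raw page: {raw_bits:,} bits")                                   # 1,978,560 bits
print(f"uncompressed at 9600 bit/s: ~{uncompressed_s/60:.1f} minutes")  # ~3.4 minutes
print(f"with ~20x compression: ~{compressed_s:.0f} seconds")            # ~10 seconds
```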
Stroke speed Stroke speed in facsimile systems is the rate at which a fixed line perpendicular to the direction of scanning is crossed in one direction by a scanning or recording spot. Stroke speed is usually expressed as a number of strokes per minute. When the fax system scans in both directions, the stroke speed is twice this number. In most conventional 20th century mechanical systems, the stroke speed is equivalent to drum speed. Fax paper As a precaution, thermal fax paper is typically not accepted in archives or as documentary evidence in some courts of law unless photocopied. This is because the image-forming coating is eradicable and brittle, and it tends to detach from the medium after a long time in storage. Fax tone A CNG tone is an 1100 Hz tone transmitted by a fax machine when it calls another fax machine. Fax tones can cause complications when implementing fax over IP. Internet fax One popular alternative is to subscribe to an Internet fax service, allowing users to send and receive faxes from their personal computers using an existing email account. No software, fax server or fax machine is needed. Faxes are received as attached TIFF or PDF files, or in proprietary formats that require the use of the service provider's software. Faxes can be sent or retrieved from anywhere at any time that a user can get Internet access. Some services offer secure faxing to comply with stringent HIPAA and Gramm–Leach–Bliley Act requirements to keep medical information and financial information private and secure. Utilizing a fax service provider does not require paper, a dedicated fax line, or consumable resources. Another alternative to a physical fax machine is to make use of computer software which allows people to send and receive faxes using their own computers, utilizing fax servers and unified messaging. A virtual (email) fax can be printed out and then signed and scanned back to computer before being emailed. Also the sender can attach a digital signature to the document file. With the surging popularity of mobile phones, virtual fax machines can now be downloaded as applications for Android and iOS. These applications make use of the phone's internal camera to scan fax documents for upload or they can import from various cloud services. Related standards T.4 is the umbrella specification for fax. It specifies the standard image sizes, two forms of image-data compression (encoding), the image-data format, and references, T.30 and the various modem standards. T.6 specifies a compression scheme that reduces the time required to transmit an image by roughly 50-percent. T.30 specifies the procedures that a sending and receiving terminal use to set up a fax call, determine the image size, encoding, and transfer speed, the demarcation between pages, and the termination of the call. T.30 also references the various modem standards. V.21, V.27ter, V.29, V.17, V.34: ITU modem standards used in facsimile. The first three were ratified prior to 1980, and were specified in the original T.4 and T.30 standards. V.34 was published for fax in 1994. T.37 The ITU standard for sending a fax-image file via e-mail to the intended recipient of a fax. T.38 The ITU standard for sending Fax over IP (FoIP). G.711 pass through - this is where the T.30 fax call is carried in a VoIP call encoded as audio. This is sensitive to network packet loss, jitter and clock synchronization. 
When using voice high-compression encoding techniques such as, but not limited to, G.729, some fax tonal signals may not be correctly transported across the packet network. image/t38 MIME-type SSL Fax An emerging standard that allows a telephone based fax session to negotiate a fax transfer over the internet, but only if both sides support the standard. The standard is partially based on T.30 and is being developed by Hylafax+ developers. See also 3D Fax Black fax Called subscriber identification (CSID) Error correction mode (ECM) Fax art Fax demodulator Fax modem Fax server Faxlore Fultograph Image scanner Internet fax Junk fax Radiofax—image transmission over HF radio Slow-scan television T.38 Fax-over-IP Telautograph Telex Teletex Transmitting Subscriber Identification (TSID) Wirephoto References Further reading Coopersmith, Jonathan, Faxed: The Rise and Fall of the Fax Machine (Johns Hopkins University Press, 2015) 308 pp. "Transmitting Photographs by Telegraph", Scientific American article, 12 May 1877, p. 297 External links Group 3 Facsimile Communication—A '97 essay with technical details on compression and error codes, and call establishment and release. ITU T.30 Recommendation American inventions Computer peripherals English inventions German inventions Italian inventions ITU-T recommendations Japanese inventions Office equipment Scottish inventions Telecommunications equipment
Fax
Technology
6,387
56,452,747
https://en.wikipedia.org/wiki/Cryogenic%20electron%20microscopy
Cryogenic electron microscopy (cryo-EM) is a cryomicroscopy technique applied to samples cooled to cryogenic temperatures. For biological specimens, the structure is preserved by embedding in an environment of vitreous ice. An aqueous sample solution is applied to a grid-mesh and plunge-frozen in liquid ethane or a mixture of liquid ethane and propane. While development of the technique began in the 1970s, recent advances in detector technology and software algorithms have allowed for the determination of biomolecular structures at near-atomic resolution. This has attracted wide attention to the approach as an alternative to X-ray crystallography or NMR spectroscopy for macromolecular structure determination without the need for crystallization. In 2017, the Nobel Prize in Chemistry was awarded to Jacques Dubochet, Joachim Frank, and Richard Henderson "for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution." Nature Methods also named cryo-EM as the "Method of the Year" in 2015. History Early development In the 1960s, the use of transmission electron microscopy for structure determination methods was limited because of the radiation damage due to high energy electron beams. Scientists hypothesized that examining specimens at low temperatures would reduce beam-induced radiation damage. Both liquid helium (−269 °C or 4 K or −452.2 °F) and liquid nitrogen (−195.79 °C or 77 K or −320 °F) were considered as cryogens. In 1980, Erwin Knapek and Jacques Dubochet published comments on beam damage at cryogenic temperatures sharing observations that: Thin crystals mounted on carbon film were found to be from 30 to 300 times more beam-resistant at 4 K than at room temperature... Most of our results can be explained by assuming that cryoprotection in the region of 4 K is strongly dependent on the temperature. However, these results were not reproducible and amendments were published in Nature just two years later reporting that the beam resistance was less significant than initially anticipated. The protection gained at 4 K was closer to "tenfold for standard samples of L-valine", than what was previously stated. In 1981, Alasdair McDowall and Jacques Dubochet, scientists at the European Molecular Biology Laboratory, reported the first successful implementation of cryo-EM. McDowall and Dubochet vitrified pure water in a thin film by spraying it onto a hydrophilic carbon film that was rapidly plunged into cryogen (liquid propane or liquid ethane cooled to 77 K). The thin layer of amorphous ice was less than 1 μm thick and an electron diffraction pattern confirmed the presence of amorphous/vitreous ice. In 1984, Dubochet's group demonstrated the power of cryo-EM in structural biology with analysis of vitrified adenovirus type 2, T4 bacteriophage, Semliki Forest virus, Bacteriophage CbK, and Vesicular-Stomatitis-Virus. Recent advancements The 2010s were marked by dramatic advances in electron cameras. Notably, the improvements made to direct electron detectors have led to a "resolution revolution" pushing the resolution barrier beneath the crucial ~2-3 Å limit to resolve amino acid position and orientation. Henderson (MRC Laboratory of Molecular Biology, Cambridge, UK) formed a consortium with engineers at the Rutherford Appleton Laboratory and scientists at the Max Planck Society to fund and develop a first prototype. The consortium then joined forces with the electron microscope manufacturer FEI to roll out and market the new design. 
At about the same time, Gatan Inc. of Pleasanton, California came out with a similar detector designed by Peter Denes (Lawrence Berkeley National Laboratory) and David Agard (University of California, San Francisco). A third type of camera was developed by Nguyen-Huu Xuong at the Direct Electron company (San Diego, California). More recently, advancements in the use of protein-based imaging scaffolds are helping to solve the problems of sample orientation bias and size limit. Proteins smaller than ~50 kDa generally have too low a signal-to-noise ratio (SNR) to be able to resolve protein particles in the image, making 3D reconstruction difficult or impossible. The SNR of smaller proteins can be improved by binding them to an imaging scaffold. The Yeates group at UCLA was able to create a clearer image of three variants of KRAS (roughly 19 kDa in size) by utilising a rigid imaging scaffold, and using DARPins as modular binding domains between the scaffold and the protein of interest. 2017 Nobel Prize in Chemistry In recognition of the impact cryo-EM has had on biochemistry, three scientists, Jacques Dubochet, Joachim Frank and Richard Henderson, were awarded the Nobel Prize in Chemistry "for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution." Comparisons to X-ray crystallography Traditionally, X-ray crystallography has been the most popular technique for determining the 3D structures of biological molecules. However, the aforementioned improvements in cryo-EM have increased its popularity as a tool for examining the details of biological molecules. Since 2010, yearly cryo-EM structure deposits have outpaced X-ray crystallography. Though X-ray crystallography has drastically more total deposits due to a decades-longer history, the total number of cryo-EM deposits is projected to overtake that of X-ray crystallography around 2035. The resolution of X-ray crystallography is limited by crystal homogeneity, and coaxing biological molecules with unknown ideal crystallization conditions into a crystalline state can be very time-consuming, in extreme cases taking months or even years. By contrast, sample preparation in cryo-EM may require several rounds of screening and optimization to overcome issues such as protein aggregation and preferred orientations, but it does not require the sample to form a crystal; rather, samples for cryo-EM are flash-frozen and examined in their near-native states. According to Proteopedia, the median resolution achieved by X-ray crystallography (as of May 19, 2019) on the Protein Data Bank is 2.05 Å, and the highest resolution achieved on record (as of September 30, 2022) is 0.48 Å. As of 2020, the majority of the protein structures determined by cryo-EM are at a lower resolution of 3–4 Å. However, as of 2020, the best cryo-EM resolution has been recorded at 1.22 Å, making it a competitor in resolution in some cases. Correlative light cryo-TEM and cryo-ET In 2019, correlative light cryo-TEM and cryo-ET were used to observe tunnelling nanotubes (TNTs) in neuronal cells. Scanning electron cryomicroscopy Scanning electron cryomicroscopy (cryoSEM) is a scanning electron microscopy technique with a scanning electron microscope's cold stage in a cryogenic chamber. Cryogenic transmission electron microscopy Cryogenic transmission electron microscopy (cryo-TEM) is a transmission electron microscopy technique that is used in structural biology and materials science. 
Colloquially, the term "cryogenic electron microscopy" or its shortening "cryo-EM" refers to cryogenic transmission electron microscopy by default, as the vast majority of cryo-EM is done in transmission electron microscopes, rather than scanning electron microscopes. Centers The Federal Institute of Technology, the University of Lausanne and the University of Geneva opened the Dubochet Center For Imaging (DCI) at the end of November 2021, for the purposes of applying and further developing cryo-EM. Less than a month after the first identification of the SARS-CoV-2 Omicron variant, researchers at the DCI were able to define its structure, identify the crucial mutations to circumvent individual vaccines and provide insights for new therapeutic approaches. The Danish National cryo-EM Facility also known as EMBION was inaugurated on December 1, 2016. EMBION is a cryo-EM consortium between Danish Universities (Aarhus University host and University of Copenhagen co-host). Advanced methods Cryogenic electron tomography (cryo-ET), a specialized application where many images are taken of individual samples at various tilt angles, resulting in a 3D reconstruction of a single sample. Electron crystallography, method to determine the arrangement of atoms in solids using a TEM MicroED, method to determine the structure of proteins, peptides, organic molecules, and inorganic compounds using electron diffraction from 3D crystals Single particle analysis cryo-EM, an averaging method to determine protein structure from monodisperse samples. See also Cryofixation Cryo bio-crystallography Electron tomography (ET) References Electron microscopy techniques Cell biology Protein structure Scientific techniques
Cryogenic electron microscopy
Chemistry,Biology
1,834
9,026,091
https://en.wikipedia.org/wiki/Batteryless%20radio
A batteryless radio is a type of radio receiver that does not require the use of a battery to provide it with electrical power. Originally this referred to units which could be powered directly by the AC mains supply (mains radio); it can also refer to units which do not require a power source at all, except for the power that they receive from an ambient radio source, such as radio waves. History Most early vacuum tube radios were powered by batteries until the mid to late 1920s. The line-operated vacuum tube receiver was invented in 1925 by Edward S. Rogers, Sr. The unit operated with five Rogers AC vacuum tubes and the Rogers Battery-Eliminator Power Unit (power supply). This unit was later marketed for $120 as "Type 120". He established the Toronto station CFRB (an abbreviation of Canada's First Rogers Batteryless) to promote sales of the product. Batteryless radios were not introduced into the United States until May 1926 and then into Europe in 1927, and the industry did not widely produce batteryless radios until RCA's AC tube in late 1927. Crystal radio receivers are a very simple kind of batteryless radio receiver. They do not need a battery or power source, except for the power that they receive from radio waves using their long outdoor wire antenna. Sharp Electronics' first electrical product was a batteryless crystal radio introduced in 1925. It was Japan's first, and it sold extremely well. Thermoelectricity was widely used in the remote parts of the Soviet Union from the 1920s to power radios. The equipment comprised some bi-metal rods (thermocouples), one end of which could be inserted into the fireplace to get hot with the other end left out in the cold. After the Second World War, kerosene radios were made in Moscow for use in rural areas. These all-wave radios were powered by the kerosene lamp hanging above them. A group of thermocouples was heated internally by the flame, while fins cooled the outer ends. The temperature differential generated enough current to operate the low-drain receiver. Foot-operated radio or pedal radio was once used in Australia. Other ways of achieving the same function are clockwork radio, hand crank radio and solar radio, especially for the Royal Flying Doctor Service and School of the Air. As part of an energy harvesting electronics system, some batteryless radios store the harvested electricity in storage capacitors. These capacitors hold the electricity as static charge on layers of dielectric rather than through chemical changes, supplying energy much as batteries do but without a battery. This can be quite effective: storage capacitors can be recharged millions of times, they are relatively cheap and fairly insensitive to temperature, and they never need replacing, which is why they are usually soldered in place. Storage capacitors are therefore an integral part of an energy-autarkic or energy-harvesting batteryless radio, bridging lean energy periods as a battery would, but in a more sustainable way. About 15 billion batteries are consumed every year worldwide. Carrier-powered radio A carrier-powered radio is a batteryless radio which "leeches" its power from the incoming electromagnetic wave. A simple circuit (very similar to a crystal set) rectifies the incoming signal and this DC current is then used to power a small transistor amplifier. Typically a strong local station is tuned in to provide power, leaving the listener free to listen to weaker and more distant stations. 
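As a rough illustration of the thermoelectric sets described above, the sketch below estimates the open-circuit voltage of a thermocouple stack from the Seebeck effect. The number of junctions, the Seebeck coefficient and the temperatures are assumed values chosen only for illustration, not specifications of the Soviet kerosene-lamp radios.

```python
# Illustrative Seebeck-effect estimate for a lamp-heated thermocouple stack.
# All numbers are assumptions for illustration, not figures from the article.

N_JUNCTIONS = 60          # assumed number of thermocouple junctions wired in series
SEEBECK_V_PER_K = 40e-6   # assumed ~40 microvolts/K (typical of metal thermocouples)
HOT_SIDE_K = 550.0        # assumed temperature of the flame-heated side
COLD_SIDE_K = 320.0       # assumed temperature of the finned, air-cooled side

def open_circuit_voltage(n, seebeck, t_hot, t_cold):
    """Open-circuit voltage of n identical thermocouples in series: V = n * S * dT."""
    return n * seebeck * (t_hot - t_cold)

v = open_circuit_voltage(N_JUNCTIONS, SEEBECK_V_PER_K, HOT_SIDE_K, COLD_SIDE_K)
print(f"Estimated open-circuit voltage: {v:.2f} V")   # ~0.55 V with these assumptions
```

The point, as in the article, is that a modest temperature differential maintained across many junctions can supply enough power for a low-drain receiver.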
See also Antique radio Battery eliminator Crystal radio Human-powered equipment Invention of radio Pyroelectric effect Radio receiver Solar powered radio Thermogenerator Windup radio Wireless light switch Ambient backscatter RFID References External links Museum of thermoelectric generators Thermodynamics Electricity Types of radios
Batteryless radio
Physics,Chemistry,Mathematics
790
44,038,801
https://en.wikipedia.org/wiki/Smart%20inorganic%20polymer
Smart inorganic polymers (SIPs) are hybrid or fully inorganic polymers with tunable (smart) properties such as stimuli responsive physical properties (shape, conductivity, rheology, bioactivity, self-repair, sensing etc.). While organic polymers are often petrol-based, the backbones of SIPs are made from elements other than carbon which can lessen the burden on scarce non-renewable resources and provide more sustainable alternatives. Common backbones utilized in SIPs include polysiloxanes, polyphosphates, and polyphosphazenes, to name a few. SIPs have the potential for broad applicability in diverse fields spanning from drug delivery and tissue regeneration to coatings and electronics. As compared to organic polymers, inorganic polymers in general possess improved performance and environmental compatibility (no need for plasticizers, intrinsically flame-retardant properties). The unique properties of different SIPs can additionally make them useful in a diverse range of technologically novel applications, such as solid polymer electrolytes for consumer electronics, molecular electronics with non-metal elements to replace metal-based conductors, electrochromic materials, self-healing coatings, biosensors, and self-assembling materials. Role of COST action CM1302 COST action 1302 is a European Community "Cooperation in Science and Technology" research network initiative that supported 62 scientific projects in the area of smart inorganic polymers resulting in 70 publications between 2014 and 2018, with the mission of establishing a framework with which to rationally design new smart inorganic polymers. This represents a large share of the total body of work on SIPs. The results of this work are reviewed in the 2019 book, Smart Inorganic Polymers: Synthesis, Properties, and Emerging Applications in Materials and Life Sciences. Smart polysiloxanes Polysiloxane, commonly known as silicone, is the most commonly commercially available inorganic polymer. The large body of existing work on polysiloxane has made it a readily available platform for functionalization to create smart polymers, with a variety of approaches reported which generally center around the addition of metal oxides to a commercially available polysiloxane or the inclusion of functional side-chains on the polysiloxane backbone. The applications of smart polysiloxanes vary greatly, ranging from drug delivery, to smart coatings, to electrochromics. Drug delivery Synthesis of smart stimuli responsive polysiloxanes through the addition of a polysiloxane amine to an α,β-unsaturated carbonyl via aza-Michael addition to create a polysiloxane with N-isopropyl amide side-chains has been reported. This polysiloxane was shown to be able to load ibuprofen (a hydrophobic NSAID) and then release it in response to changes in temperature, showing it to be a promising candidate for smart drug delivery of hydrophobic drugs. This action was attributed to the polymer's ability to retain the ibuprofen above the lower critical solution temperature (LCST), and conversely, to dissolve below the LCST, thus releasing the loaded ibuprofen at a given, known temperature. Coatings Commercial polysiloxane coatings are readily commercially available and capable of protecting surfaces from damaging pollutants, but the addition of TiO2 gives them the smart ability to degrade pollutants stuck to their surface in the presence of sunlight. This particular phenomena is promising in the field of monument preservation. 
Similar hybrid textile coatings made of amino-functionalized polysiloxane with TiO2 and silver nanoparticles have been reported to have smart stain-repellent yet hydrophilic properties, making them unique in comparison to typical hydrophobic stain-repellant coatings. Smart properties have also been reported for polysiloxane coatings without metal oxides, namely, a polysiloxane/polyethylenimine coating designed to protect magnesium from corrosion that was found to be capable of self-healing small scratches. Poly-(ε-caprolactone)/siloxane Poly-(ε-caprolactone)/siloxane is an inorganic-organic hybrid material which, when used as a solid electrolyte matrix with a lithium perchlorate electrolyte, paired to a W2O3 film, responds to a change in electrical potential by changing transparency. This makes it a potentially useful electrochromic smart glass. Smart phosphorus polymers There exist a sizable number of phosphorus polymers with backbones ranging from primarily phosphorus to primarily organic with phosphorus subunits. Some of these have been shown to possess smart properties, and are largely of-interest due to the biocompatibility of phosphorus for biological applications like drug delivery, tissue engineering, and tissue repair. Polyphosphates Polyphosphate (PolyP) is an inorganic polymer made from phosphate subunits. It typically exists in its deprotonated form, and can form salts with physiological metal cations like Ca2+, Sr2+, and Mg2+. When salted to these metals, it can selectively induce bone regeneration (Ca-PolyP), bone hardening (Sr-PolyP), or cartilage regeneration (Mg-PolyP) depending on the metal to which it is salted. This smart ability to attenuate the kind of tissue regenerated in response to different metal cations makes it a promising polymer for biomedical applications. Polyphosphazenes Polyphosphazene is an inorganic polymer with a backbone consisting of phosphorus and nitrogen, which can also form inorganic-organic hybrid polymers with the addition of organic substituents. Some polyphosphazenes have been designed through the addition of amino acid ester side chains such that their LCST is near body temperature and thus they can form a gel in situ upon injection into a person, making them potentially useful for drug delivery. They biodegrade into a near-neutral pH mixture of phosphates and ammonia that has been shown to be non-toxic, and the rate of their biodegradation can be tuned with the addition of different substituents from full decomposition within days with glyceryl derivatives, to biostable with fluoroalkoxy substituents. Poly-ProDOT-Me2 Poly-ProDOT-Me2 is a phosphorus-based inorganic-organic hybrid polymer, which, when paired to a V2O5 film, provides a material that changes color upon application of an electrical current. This 'smart glass' is capable of reducing light transmission from 57% to 28% in under 1 second, a much faster transformation than that of commercially available photochromic lenses. Smart metalloid and metal containing polymers While metals are not typically associated with polymeric structures, the inclusion of metal atoms either throughout the backbone of, or as pendant structures on a polymer can provide unique smart properties, especially in relation to their redox and electronic properties. These desirable properties can range from self-repair of oxidation, to sensing, to smart material self-assembly, as discussed below. 
Polystannanes Polystannane, a unique polymer class with a tin backbone, is the only known polymer to possess a completely organometallic backbone. It is especially unique in the way that the conductive tin backbone is surrounded by organic substituents, making it act as an atomic-scale insulated wire. Some polystannanes such as (SnBu2)n and (SnOct2)n have shown the smart ability to align themselves with external stimuli, which could see them become useful for pico electronics. However, polystannane is very unstable to light, so any such advancement would require a method for stabilizing it against light degradation. Icosahedral boron polymers Icosahedral boron is a geometrically unusual allotrope of boron, which can be either added as side chains to a polymer or co-polymerized into the backbone. Icosahedral boron side chains on polypyrrole have been shown to allow the polypyrrole to self-repair when overoxidized because the icosahedral boron acts as a doping agent, enabling overoxidation to be reversed. Polyferrocenylsilane Polyferrocenylsilanes are a group of common organosilicon metallopolymer with backbones consisting of silicon and ferrocene. Variants of polyferroceylsilanes have been found to exhibit smart self-assembly in response to oxidation and subsequent smart self-disassembly upon reduction, as well as variants which can respond to electrochemical stimulation. One such example is a thin film of a polystyrene-polyferrocenylsilane inorganic-organic hybrid copolymer that was found to be able to adsorb and release ferritin with the application of an electrical potential. Ferrocene biosensing A number of ferrocene-organic inorganic-organic hybrid polymers have been reported to have smart properties that make them useful for application in biosensing. Multiple polymers with ferrocene side-chains cross-linked with glucose oxidase have shown oxidation activity which results in electrical potential in the presence of glucose, making them useful as glucose biosensors. This sort of activity is not limited to glucose, as other enzymes can be crosslinked to allow for sensing of their corresponding molecules, like a poly(vinylferrocene)/carboxylated multiwall carbon nanotube/gelatin composite that was bound to uricase, giving it the ability to act as a biosensor for uric acid. See also Coatings Coordination Polymers Drug Delivery Electrochromism Inorganic Polymers Smart Materials References Inorganic polymers
Smart inorganic polymer
Chemistry
2,008
73,535,300
https://en.wikipedia.org/wiki/Infertility%20and%20childlessness%20stigmas
Infertility and childlessness stigmas are social and cultural codes that identify the inability to have children as a disgraceful state of being. Broadly speaking, in many cultures, "Demonstrating fertility is necessary to be considered a full adult, a real man or woman, and to leave a legacy after death," and thus the failure to make this demonstration is penalized. Both male infertility and female infertility can be stigmatized, however, in many traditional cultures, women are held responsible for child-rearing and thus for pregnancy or the lack thereof. Infertility and childlessness stigmas are related to disability or physical-deformity stigmas and violation-of-group-norm stigmas. Infertility is a "deeply intimate matter, often deemed as taboo to discuss publicly." Infertility and childlessness can have negative social, psychological and economic consequences, including "discrimination, social exclusion, and abandonment." Adults without children may be subject to derisive language, intrusive questioning, shaming, ostracism, and physical abuse. Other negative consequences of the infertility–childlessness stigma, especially for women, may include depression, low self-esteem, and even suicidal ideation or suicide. People with infertility living in societies where it is a stigmatized condition may suffer from anxiety, may choose to self-isolate, and may become secretive or withdrawn. In pro-natalist societies, voluntary childlessness is often considered a deviant behavior. Stigmas may be particularly acute in communities that organize themselves collectively and thus place a high value on clan, lineage and perpetuation of family legacy. In these cultures, childlessness may be viewed as a "tragedy for the whole community" beyond the personal significance for infertile or childless individuals. However, even in a prototypically individualistically organized society, 42 percent of women tested in a study of the emotional consequences of infertility were found to have "global distress levels in the clinically significant range," in part due to social norms that judge women without children to be "unnatural and selfish." Most academic study of infertility addresses expensive treatment technologies, rather than the "anthropological and public health" effects. In more-developed countries, the widespread availability of assisted reproductive technologies has "transformed infertility from an acute, private agony that was accepted as fate, into a chronic, public stigma from which there were costly, and often unfulfilled hopes, of deliverance." In some cultures, biomedical explanations for infertility may be disregarded in favor of traditional beliefs that past wrong choices have resulted in the placement of an infertility curse, thus accelerating the vicious cycle of stigma. Blame may assigned, variously, to having offended gods or ancestors, abortions in a past life, practicing witchcraft, past promiscuity, use of birth control, wrong living generally, etc. Exclusion of the infertile or childlessness from social events is known, enacted as a means of quarantine to prevent the "contagion" or "toxin" of non-reproduction from spreading within the community. Infertile people are also viewed as sad people who may bring sadness with them and "spoil" celebrations. As one scholar put it, "Like leprosy and epilepsy, infertility bears an ancient social stigma. An archaic term for the condition of female infertility, present in the Old Testament, is barren woman. 
There have been three traditional means of addressing infertility: Medical interventions or quasi-medical treatments; the ancient Greeks called childless women ateknos, and possible causes and treatments for infertility were considered in Hippocratic texts. Spiritual recourse (prayer for fecundity, or alternately, submission to the will of a deity or power) Realignment of social relationships, including divorce, polygamy, adultery, or promiscuity. One study showed that infertility in Ghana led to "increased risk of precarious sexual behaviour of both men and women...trying out different partners, attempting to prove that they are not the source of the infecundity." In traditional Chinese family structure (called the Dishu system in English), "The first of the seven conditions under which a wife may be repudiated is infecundity." In some societies, women with children are allowed access to certain community resources and privileges from which childless women may be excluded, thus children act as a sort of universal passport to humanity. In some cultures, funeral practices for childless women are different from those for women who successfully conceived and bore offspring. Notably, "In the Hindu religion, a woman without a child, particularly a son, can't go to heaven. Sons perform death rituals." In Catholicism, there is a limbo of infants for stillborn babies (as baptism is a sacrament available only to the living), thus women unable to bring a pregnancy to term would be told they would not encounter their children's souls in an afterlife. The original doctrine was that these fetuses or babies were consigned to hell, resulting in a latter-day practice called respite sanctuaries. In the traditional Vietnamese belief system, childlessness risks destroying "the entire âm realm of one's ancestors and consequently scatters all ancestral linh hon into wandering ghosts and demons (ma qüy)." An individual's ability to deflect or resist stigma may depend on array of intersecting age, gender, class, economic, and/or psychological factors. A study of infertility experiences in Zambia concluded: See also Reproductive privilege Reproductive loss Son preference Third-party reproduction Stratified reproduction Shunning Ableism Status symbol Cultural variations in adoption Human reproductive ecology Fertility and religion Fertility rite Fertility in art References External links Books To Read If You're Struggling With Infertility Or Pregnancy Loss Kristyn Hodgdon Mar 9, 2021 9 Books That Helped Me Through My Infertility Alexandra Kimball May 10, 2019 Infertility Social stigma Cultural anthropology Kinship and descent Human reproduction
Infertility and childlessness stigmas
Biology
1,246
1,030,916
https://en.wikipedia.org/wiki/Nod%20factor
Nod factors (nodulation factors or NF), are signaling molecules produced by soil bacteria known as rhizobia in response to flavonoid exudation from plants under nitrogen limited conditions. Nod factors initiate the establishment of a symbiotic relationship between legumes and rhizobia by inducing nodulation. Nod factors produce the differentiation of plant tissue in root hairs into nodules where the bacteria reside and are able to fix nitrogen from the atmosphere for the plant in exchange for photosynthates and the appropriate environment for nitrogen fixation. One of the most important features provided by the plant in this symbiosis is the production of leghemoglobin, which maintains the oxygen concentration low and prevents the inhibition of nitrogenase activity. Chemical Structure Nod factors structurally are lipochitooligosaccharides (LCOs) that consist of an N-acetyl-D-glucosamine chain linked through β-1,4 linkage with a fatty acid of variable identity attached to a non reducing nitrogen in the backbone with various functional group substitutions at the terminal or non-terminal residues. Nod factors are produced in complex mixtures differing in the following characteristics: Length of the chain can vary from three to six units of N-acetyl-D-glucosamine with the exception of M. loti which can produce Nod factors with two unit only. Presence or absence of strain-specific substitutions along the chain Identity of the fatty acid component Presence or absence of unsaturated fatty acids Nod gene expression is induced by the presence of certain flavonoids in the soil, which are secreted by the plant and act as an attractant to bacteria and induce Nod factor production. Flavonoids activate NodD, a LysR family transcription factor, which binds to the nod box and initiates the transcription of the nod genes which encode the proteins necessary for the production of a wide range of LCOs. Function Nod factors are potentially recognized by plant receptors made of two histidine kinases with extracellular LysM domain, which have been identified in L. japonicus, soybean, and M. truncatula . Binding of Nod factors to these receptors depolarizes the plasma membrane of root hairs via an influx of Ca+2 which induce the expression of early nodulin (ENOD) genes and swelling of the root hairs. In M. truncatula, the signal transduction initiates by the activation of dmi1, dmi2, and dmi3 which lead to the deformation of root hairs, early nodulin expression, cortical cell division and bacterial infection. Additionally, nsp and hcl genes are recruited later and aid in the process of early nodulation expression, cortical cell division, and infection. Genes dmi1, dmi2, and dmi3 have also been found to aid in the establishment of interactions between M. truncatula and arbuscular mycorrhiza, indicating that the two very different symbioses may share some common mechanisms. The end result is the nodule, the structure in which nitrogen is fixed. Nod factors act by inducing changes in gene expression in the legume, most notable the nodulin genes, which are needed for nodule organogenesis. Nodulation Rhizobia bind to host specific lectins present in root hairs which together with Nod factors lead to the formation of nodulation. Nod factors are recognized by a specific class of receptor kinases that have LysM domains in their extracellular domains. The two LysM (lysin motif) receptor kinases (NFR1 and NFR5) that appear to make up the Nod factor receptor were first isolated in the model legume Lotus japonicus in 2003. 
They have now also been isolated from soybean and the model legume Medicago truncatula. NFR5 lacks the classical activation loop in the kinase domain. The NFR5 gene lacks introns. First the cell membrane is depolarized, the root hairs start to swell, and cell division stops. Nod factors cause the fragmentation and rearrangement of the actin network, which, coupled with the resumption of cell growth, leads to the curling of the root hair around the bacteria. This is followed by the localized breakdown of the cell wall and the invagination of the plant cell membrane, allowing the bacterium to form an infection thread. As the infection thread grows, the rhizobia travel down its length towards the site of the nodule. During this process the pericycle cells of the plant become activated, and cells in the inner cortex start growing and become the nodule primordium, where the rhizobia infect, differentiate into bacteroids and fix nitrogen. Activation of adjacent middle cortex cells leads to the formation of the nodule meristem. See also ENOD40 Notes Fabaceae Oligosaccharides Signal transduction Plant physiology
Nod factor
Chemistry,Biology
1,025
208,155
https://en.wikipedia.org/wiki/12%20%28number%29
12 (twelve) is the natural number following 11 and preceding 13. Twelve is the 3rd superior highly composite number, the 3rd colossally abundant number, the 5th highly composite number, and is divisible by the numbers from 1 to 4, and 6, a large number of divisors comparatively. It is central to many systems of timekeeping, including the Western calendar and units of time of day, and frequently appears in the world's major religions. Name Twelve is the largest number with a single-syllable name in English. Early Germanic numbers have been theorized to have been non-decimal: evidence includes the unusual phrasing of eleven and twelve, the former use of "hundred" to refer to groups of 120, and the presence of glosses such as "tentywise" or "ten-count" in medieval texts showing that writers could not presume their readers would normally understand them that way. Such uses gradually disappeared with the introduction of Arabic numerals during the 12th-century Renaissance. Derived from Old English, and are first attested in the 10th-century Lindisfarne Gospels' Book of John. It has cognates in every Germanic language (e.g. German ), whose Proto-Germanic ancestor has been reconstructed as , from ("two") and suffix or of uncertain meaning. It is sometimes compared with the Lithuanian , although is used as the suffix for all numbers from 11 to 19 (analogous to "-teen"). Every other Indo-European language instead uses a form of "two"+"ten", such as the Latin . The usual ordinal form is "twelfth" but "dozenth" or "duodecimal" (from the Latin word) is also used in some contexts, particularly base-12 numeration. Similarly, a group of twelve things is usually a "dozen" but may also be referred to as a "dodecad" or "duodecad". The adjective referring to a group of twelve is "duodecuple". As with eleven, the earliest forms of twelve are often considered to be connected with Proto-Germanic or ("to leave"), with the implicit meaning that "two is left" after having already counted to ten. The Lithuanian suffix is also considered to share a similar development. The suffix has also been connected with reconstructions of the Proto-Germanic for ten. As mentioned above, 12 has its own name in Germanic languages such as English (dozen), Dutch (), German (), and Swedish (), all derived from Old French . It is a compound number in many other languages, e.g. Italian (but in Spanish and Portuguese, 16, and in French, 17 is the first compound number), Japanese 十二 jūni. Written representation In prose writing, twelve, being the last single-syllable numeral, is sometimes taken as the last number to be written as a word, and 13 the first to be written using digits. This is not a binding rule, and in English language tradition, it is sometimes recommended to spell out numbers up to and including either nine, ten or twelve, or even ninety-nine or one hundred. Another system spells out all numbers written in one or two words (sixteen, twenty-seven, fifteen thousand, but 372 or 15,001). In German orthography, there used to be the widely followed (but unofficial) rule of spelling out numbers up to twelve (zwölf). The Duden (the German standard dictionary) mentions this rule as outdated. In mathematics 12 is a composite number, the smallest abundant number, a semiperfect number, a highly composite number, a refactorable number, and a Pell number. It is the smallest of two known sublime numbers, numbers that have a perfect number of divisors whose sum is also perfect. 
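The divisor-based claims above (smallest abundant number, semiperfect number) can be checked directly. The following is a minimal illustrative sketch in Python, not drawn from the article's sources, that lists the proper divisors of 12 and verifies both properties.

```python
from itertools import combinations

def proper_divisors(n):
    """Return the proper divisors of n (divisors smaller than n)."""
    return [d for d in range(1, n) if n % d == 0]

n = 12
divs = proper_divisors(n)          # [1, 2, 3, 4, 6]
print(divs, sum(divs))             # the sum is 16 > 12, so 12 is abundant

# 12 is semiperfect: some subset of its proper divisors sums to exactly 12.
semiperfect = any(
    sum(c) == n
    for r in range(1, len(divs) + 1)
    for c in combinations(divs, r)
)
print(semiperfect)                 # True, e.g. 2 + 4 + 6 = 12
```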
There are twelve Jacobian elliptic functions and twelve cubic distance-transitive graphs. A twelve-sided polygon is a dodecagon. In its regular form, it is the largest polygon that can uniformly tile the plane alongside other regular polygons, as with the truncated hexagonal tiling or the truncated trihexagonal tiling. A regular dodecahedron has twelve pentagonal faces. Regular cubes and octahedrons both have 12 edges, while regular icosahedrons have 12 vertices. The densest three-dimensional lattice sphere packing has each sphere touching twelve other spheres, and this is almost certainly true for any arrangement of spheres (the Kepler conjecture). Twelve is also the kissing number in three dimensions. There are twelve complex apeirotopes in dimensions five and higher, which include van Oss polytopes in the form of complex -orthoplexes. There are also twelve paracompact hyperbolic Coxeter groups of uniform polytopes in five-dimensional space. Bring's curve is a Riemann surface of genus four, with a domain that is a regular hyperbolic 20-sided icosagon. By the Gauss-Bonnet theorem, the area of this fundamental polygon is equal to . Twelve is the smallest weight for which a cusp form exists. This cusp form is the discriminant whose Fourier coefficients are given by the Ramanujan -function and which is (up to a constant multiplier) the 24th power of the Dedekind eta function: This fact is related to a constellation of interesting appearances of the number twelve in mathematics ranging from the fact that the abelianization of special linear group has twelve elements, to the value of the Riemann zeta function at being , which stems from the Ramanujan summation Although the series is divergent, methods such as Ramanujan summation can assign finite values to divergent series. List of basic calculations In other bases The duodecimal system (1210 [twelve] = 1012), which is the use of 12 as a division factor for many ancient and medieval weights and measures, including hours, probably originates from Mesopotamia. Religion The number twelve carries religious, mythological and magical symbolism; since antiquity, the number has generally represented perfection, entirety, or cosmic order. Judaism and Christianity The significance is especially pronounced in the Hebrew Bible. Ishmael – the first-born son of Abraham – has 12 sons/princes (Genesis 25:16), and Jacob also has 12 sons, who are the progenitors of the Twelve Tribes of Israel. This is reflected in Christian tradition, notably in the twelve Apostles. When Judas Iscariot is disgraced, a meeting is held (Acts) to add Saint Matthias to complete the number twelve once more. The Book of Revelation contains much numerical symbolism, and many of the numbers mentioned have 12 as a divisor. mentions a woman—interpreted as the people of Israel, the Church and the Virgin Mary—wearing a crown of twelve stars (representing each of the twelve tribes of Israel). Furthermore, there are 12,000 people sealed from each of the twelve tribes of Israel (the Tribe of Dan is omitted while Manasseh is mentioned), making a total of 144,000 (which is the square of 12 multiplied by a thousand). According to the New Testament, Jesus had twelve Apostles. The "Twelve Days of Christmas" count the interval between Christmas and Epiphany. Eastern Orthodoxy observes twelve Great Feasts. 12 was the only number considered to be religiously divine in the 1600s causing many Catholics to wear 12 buttons to church every Sunday. 
Some extremely devout Catholics would always wear this number of buttons to any occasion on any type of clothing. Timekeeping The lunar year is 12 lunar months. Adding 11 or 12 days completes the solar year. Most calendar systems – solar or lunar – have twelve months in a year. The Chinese use a 12-year cycle for time-reckoning called Earthly Branches. There are twelve hours in a half day, numbered one to twelve for both the ante meridiem (a.m.) and the post meridiem (p.m.). 12:00 p.m. is midday or noon, and 12:00 a.m. is midnight. The basic units of time (60 seconds, 60 minutes, 24 hours) are evenly divisible by twelve into smaller units. In numeral systems In science Force 12 on the Beaufort wind force scale corresponds to the maximum wind speed of a hurricane. In sports In both soccer and American football, the number 12 can be a symbolic reference to the fans because of the support they give to the 11 players on the field. Texas A&M University reserves the number 12 jersey for a walk-on player who represents the original "12th Man", a fan who was asked to play when the team's reserves were low in a college American football game in 1922. Similarly, Bayern Munich, Hammarby, Feyenoord, Atlético Mineiro, Flamengo, Seattle Seahawks, Portsmouth and Cork City do not allow field players to wear the number 12 on their jersey because it is reserved for their supporters. In rugby league, one of the starting second-row forwards wears the number 12 jersey in most competitions. An exception is in the Super League, which uses static squad numbering. In rugby union, one of the starting centres, most often but not always the inside centre, wears the 12 shirt. In technology ASCII and Unicode code point for form feed. Music Music theory Twelve is the number of pitch classes in an octave, not counting the duplicated (octave) pitch. Also, the total number of major keys, (not counting enharmonic equivalents) and the total number of minor keys (also not counting equivalents). This applies only to twelve tone equal temperament, the most common tuning used today in western influenced music. The twelfth is the interval of an octave and a fifth. Instruments such as the clarinet which behave as a stopped cylindrical pipe overblow at the twelfth. The twelve-tone technique (also dodecaphony) is a method of musical composition devised by Arnold Schoenberg. Music using the technique is called twelve-tone music. The twelve-bar blues is one of the most prominent chord progressions in popular music. Art theory There are twelve basic hues in the color wheel: three primary colors (red, yellow, blue), three secondary colors (orange, green, purple) and six tertiary colors (names for these vary, but are intermediates between the primaries and secondaries). In other fields There are 12 troy ounces in a troy pound (used for precious metals). Twelve of something is called a dozen. In the former British currency system, there were twelve pence in a shilling. In English, twelve is the number of greatest magnitude that has just one syllable. 12 is the last number featured on the analogue clock, and also the starting point of the transition from A.M. to P.M. hours or vice-versa. There are twelve months within a year, with the last one being December. 12 inches in a foot. 12 is slang for Police officers because of the 10-12 Police radio code. Notes References Sources . Further reading Books Journal articles External links Integers Numerology
12 (number)
Mathematics
2,289
64,649,745
https://en.wikipedia.org/wiki/NGC%204701
NGC 4701 is an unbarred spiral galaxy located in the constellation Virgo. Its velocity with respect to the cosmic microwave background is 1054 ± 24 km/s, which corresponds to a Hubble distance of . However, 10 non-redshift measurements give a greater distance of . It was discovered by the German-British astronomer William Herschel on 30 April 1786 using a 47.5 cm (18.7 inch) diameter mirror type telescope. It is a member of the Virgo II Groups, a series of galaxies and galaxy clusters strung out from the southern edge of the Virgo Supercluster. NGC 4701 is a member of the M49 Group (also known as LGG 292). This group contains at least 127 galaxies, including 63 galaxies from the New General Catalogue and 20 galaxies from the Index Catalogue. See also List of NGC objects (4001–5000) References External links 07975 4701 Virgo (constellation) Unbarred spiral galaxies 043331 +01-33-015 17860430 Discoveries by William Herschel 12466+0339
NGC 4701
Astronomy
225
706,728
https://en.wikipedia.org/wiki/Robert%20Abbott%20%28game%20designer%29
Robert Abbott (March 2, 1933February 20, 2018) was an American game inventor, sometimes referred to by fans as "The Official Grand Old Man of Card Games". Though early in his life he worked as a computer programmer with the IBM 360 assembly language, he began designing games in the 1950s. Two of his more popular creations include the chess variant Baroque chess (also known as Ultima) and Crossings, which later became Epaminondas. Eleusis was also successful, appearing in several card game collections, such as Hoyle's Rules of Games and New Rules for Classic Games, among others. In 1963, Abbott himself released a publication, Abbott's New Card Games, which included instructions for all of his card games, in addition to Baroque chess. Abbott also invented logic mazes, the first of which appeared in Martin Gardner's Mathematical Games column in the October 1962 issue of Scientific American. One of the more prominent of these is Theseus and the Minotaur, which was originally published in the book Mad Mazes. His game Confusion was named "Best New Abstract Strategy Game" for 2012 by GAMES Magazine. Biography Abbott was born in St. Louis, Missouri, and attended St. Louis Country Day School. Abbott went to Yale for two years, then attended the University of Colorado for another two, but never graduated. Soon after, Abbott moved to New York, where he and his games were discovered by Martin Gardner. In 1963, after Abbott's book, Abbott's New Card Games, received only moderate success, he "got tired of being poor" and moved back to St. Louis. There, he became a computer programmer at the Washington University in St. Louis Computer Research Laboratory. In 1965, he moved back to New York, where he continued to work as a computer programmer, mostly with the IBM 360 assembly language. Abbott created all of his card games during the 1950s, starting with Babel in 1951, and ending with Auction in 1956. Soon after, he moved to New York City, where the rules for his game Eleusis were first published by Martin Gardner in his Mathematical Games column. Motivated by the article, Abbott self-published the rules for four of his card games in the book Four New Card Games in 1962, which he sold by mail. In 1963, the book Abbott's New Card Games was published by Sol Stein of Stein and Day, containing the rules for all eight of his card games and the rules for his chess variant, Baroque chess. In 1968, the publisher Funk & Wagnalls published a paperback edition of Abbott's New Card Games, in which Abbott slightly modified the rules of Baroque chess, but these changes never became popular. Around the same time that Abbott's New Card Games was published, Abbott sent his maze, Traffic Maze in Floyd's Knob, to Martin Gardner. This was the first logic maze to be published, appearing in Gardner's Mathematical Games column. After that time, Abbott created various mazes, most of which appeared in the books SuperMazes and Mad Mazes. In 2008, RBA Libros published a Spanish version of his book Abbott's New Card Games, under the title Diez juegos que no se parecen a nada, which translates to Ten games that do not resemble anything. This version was not just a Spanish translation of the original, however; the most up-to-date rules for the various games were used; in addition, the rules for Eleusis Express and Confusion were included. In 2010, his Where are the Cows? maze was published by the Oxford University Press in Ian Stewart's book Cows in the Maze. In 2011, his game Confusion was published by Stronghold Games. 
The game was named "Best New Abstract Strategy Game" for 2012 by GAMES Magazine. Logic mazes Abbott was the inventor of a style of maze called logic mazes. A logic maze has a set of rules, ranging from the basic (such as "you cannot make left turns") to the extremely complicated. These mazes are also called "Multi-State mazes". The reason for this name is that sometimes you can return to a position you were in before, but be traveling in a different direction. That change in direction can put you in a different state and open up different choices for you. One example, from the book SuperMazes, would be a rolling-die maze. Where you can move from a particular square depends on what number is facing up on the die. If you return to that same square, the die may be in a different state, with a different number on top. Thus, you would have different options than the first time. Traffic Maze in Floyd's Knob The first logic maze ever published, Traffic Maze in Floyd's Knob, appeared in the October 1962 issue of Scientific American in the Mathematical Games column. The maze looks like a street grid, with arrows pointing down various roads at each intersection. When one comes to an intersection, only arrows leading from the road you are on to another road can be followed. One must continue in this fashion, following the arrows at the intersection, until the end is reached. When you come to an intersection from one direction, you have different options for which road to take than you would coming into the intersection from another direction; therefore, this can be defined as a "multi-state" or "logic" maze. Where Are the Cows? Where are the Cows? was one of Abbott's most difficult mazes. It first appeared in his book SuperMazes. Abbott warns readers that it "may be too difficult for anyone to solve." Since then, it has also appeared as the titular maze in the book Cows in the Maze. The complexity in Where are the Cows? includes self-reference, changing rules, and flow charts. It is also worded so as to provoke confusion between an object (such as red text), a reference to an object (such as the word "red"), and even more subtle references (the word "word"). The maze ends up being so complicated that it can even be difficult to work out the next move, let alone the end. In this maze, you have to use two hands, each starting at a different place. The instructions in one box might have to do with the box that the other hand is in, boxes you have already left, or complex combinations of the two. Theseus and the Minotaur Theseus and the Minotaur is another of Abbott's better-known mazes. It first appeared in his book Mad Mazes. Like Where are the Cows? in SuperMazes, Abbott said that this "is the hardest maze in the book; in fact, it is possible that no one will solve it." Since then, several different versions of it have appeared, made by others, following the same theme, both on paper and in electronic forms. Games Abbott has created several games, including card games, board games, and one equipment game. As a whole, his games are not of particular fame, although they have some unique elements that set them apart from mainstream games. For instance, the card game Metamorphosis is a complex trick-taking game. As you play the game, the rules change three times, so it is as if you are playing four different games that are threaded together. Baroque chess Baroque chess, or Ultima, was the only board game in the book Abbott's New Card Games. 
Abbott's reasoning for including this non-card game in a card game book was that chess pieces are as plentiful as playing cards, and in this book, he wanted to introduce new games that did not require special equipment. Abbott's friends, once he started teaching it to them, began to call the game "Abbott's Ultima," which he did not like at all. However, the publisher, Sol Stein, preferred the name "Ultima," so that is the title that was used in the book. Eleusis Eleusis is probably Abbott's most prominent game, due to its metaphors and its suitability for use as a teaching tool. He invented it in 1956, and it appeared in his self-published book Four New Card Games. It was also published in the book Abbott's New Card Games a year later. Martin Gardner wrote about it in his Mathematical Games column in the June 1959 issue of Scientific American. Basically, the gameplay consists of the dealer choosing a secret rule dictating how cards are to be played, and the players playing cards in an attempt to figure out the rule through inductive reasoning. In 1973, Abbott decided to improve Eleusis; the result was considered to be far better than the original, with various improvements to the layouts and gameplay making it work quite a bit better. Martin Gardner wrote about this version in the October 1977 issue of Scientific American. Abbott also self-published a pamphlet in 1977 with the rules for the improved version, titled The New Eleusis. It has appeared in several card game collections, such as Hoyle's Rules of Games and New Rules for Classic Games, among others. Confusion Abbott initially created the game Confusion in the 1970s, and had it in finished form by 1980. The game was published in Germany by Franjos in 1992; Abbott was not satisfied with this version, however, due to several flaws in it. The rules were published in the Spanish translation of his book Abbott's New Card Games in 2008, but the game did not get published in North America until 2011. This Stronghold Games version was named "Best New Abstract Strategy Game" for 2012 by GAMES Magazine. The game is based on the idea of not knowing what your pieces are or what they do at the beginning of the game. His game Eleusis uses a similar idea, in that you do not know how cards are to be played at the beginning; George Brancaccio, someone Abbott worked with at the Bank of New York, commented on this, saying "In your game Eleusis, you don't know what cards can be played. Why don't you make a board game where you don't know how pieces move?" This is what gave Abbott the idea, and he began work on it soon after. Published work Four New Card Games (1962) Abbott's New Card Games (1963, again in paperback in 1968) The New Eleusis (1977) Mad Mazes (1990) SuperMazes (1997) Auction 2002 and Eleusis (2001) Diez juegos que no se parecen a nada [Ten games that do not resemble anything] (2008, translated by Marc Figueras) Notes References External links Robert Abbott's website Remembering Robert Abbott Gathering4Gardner Chess Variants interview Theseus and the Minotaur mazes Sliding Door maze Recreational mathematicians American board game designers Chess variant inventors American computer programmers Artists from St. Louis 1933 births 2018 deaths Washington University in St. Louis staff
Robert Abbott (game designer)
Mathematics
2,254
58,947,971
https://en.wikipedia.org/wiki/NGC%203558
NGC 3558 is an elliptical or a lenticular galaxy located 440 million light-years away in the constellation Ursa Major. It was discovered by the astronomer Heinrich d'Arrest on April 15, 1866. It is a member of the galaxy cluster Abell 1185 and is classified as a LINER galaxy. See also List of NGC objects (3001–4000) NGC 3561 References External links 3558 33960 Ursa Major Astronomical objects discovered in 1866 Lenticular galaxies LINER galaxies Elliptical galaxies Abell 1185 Discoveries by Heinrich Louis d'Arrest Markarian galaxies
NGC 3558
Astronomy
114
4,084,112
https://en.wikipedia.org/wiki/Division%20%28horticulture%29
Division, in horticulture and gardening, is a method of asexual plant propagation, where the plant (usually an herbaceous perennial) is broken up into two or more parts. Each part has an intact root and crown. The technique is of ancient origin, and has long been used to propagate bulbs such as garlic and saffron. Another type of division is through plant tissue culture. In this method the meristem (a type of plant tissue) is divided. Overview Division is one of the three main methods used by gardeners to increase stocks of plants (the other two are seed-sowing and cuttings). Division is usually applied to mature perennial plants, but may also be used for shrubs with suckering roots, such as gaultheria, kerria and sarcococca. Annual and biennial plants do not lend themselves to this procedure, as their lifespan is too short. Practice Most perennials should be divided and replanted every few years to keep them healthy. Plants that do not have enough space between them will start to compete for resources. Additionally, plants that are too close together will stay damp longer due to poor air circulation. This can cause the leaves to develop a fungal disease. Most perennials bloom during the fall or during the spring/summer. The best time to divide a perennial is when it is not blooming. Perennials that bloom in the fall should be divided in the spring, and perennials that bloom in the spring/summer should be divided in the fall. The ideal day to divide a plant is when it is cool and there is rain in the forecast. Start by digging a circle around the plant about 4-6 inches from the base. Next, dig underneath the plant and lift it out of the hole. Use a shovel, gardening shears, or knife to physically divide the plant into multiple "divisions". This is also a good time to remove any bare patches or old growth. Each division should have a good number of healthy leaves and roots. If the division is not being replanted immediately, it should be watered and kept in a shady place. The new hole should be the same depth as the original hole. After the hole has been filled in, firmly press down on the soil around the base of the plant. This helps remove air pockets and makes the plant more stable. Plants that are divided in late fall when the ground is freezing should also be mulched. The division will have trouble staying rooted if the ground is freezing and thawing frequently. Continue to water the division(s) daily until they have established themselves. Table of when to divide common perennials The frequency with which a plant should be divided is a general guideline. A plant should be divided when it starts producing fewer flowers, has a lot of dead growth in the center (crown), or cannot support its own weight. See also Root cutting Bare root References Horticulture Asexual reproduction
Division (horticulture)
Biology
597
37,584,586
https://en.wikipedia.org/wiki/Abductin
Abductin is a naturally occurring elastomeric protein found in the hinge ligament of bivalve mollusks. It is unique as it is the only natural elastomer with compressible elasticity, as compared to resilin, spider silk, and elastin. Its name was proposed from the fact that it functions as the abductor of the valves of bivalve mollusks. The properties of abductin vary across species of bivalves due to the specific use case of the species or the environment the species is found in. In spite of these differences, abductin serves the same general function of acting in opposition to the adductor muscles, forcing the shells into an open configuration. Though patents for specific protein sequences of abductin were approved by the United States Patent and Trademark Office, there are no large-scale commercial uses for abductin as of April 2022. Structure Amino acid composition The amino acid composition of the protein within the inner hinge ligament of bivalve mollusks was first determined by Robert E. Kelly and Robert V. Rice in 1967, who subsequently proposed the protein's name as abductin. This was derived from its function as the abductor of the shells of bivalve mollusks. Kelly and Rice discovered that the protein lacked hydroxyproline and hydroxylysine, the amino acids indicative of the common protein collagen. Further analysis showed that abductin is made of three prominent amino acids: glycine, methionine, and phenylalanine, which are arranged in multiple repeating sequences throughout the molecule. This was found in Placopecten magellanicus. Abductin is similar to elastin and resilin, but differs mainly in having high concentrations of glycine and methionine. The glycine and methionine, and other amino acid residues, vary in concentration with different species. In Argopecten irradians, for example, glycine and methionine make up 57.3% and 14.3% of the protein, respectively. The high concentration of methionine found in abductin makes it unique, because this is not a common occurrence in natural elastomeric proteins. Protein structure Peptide sequences such as MGGG, FGGMG, FGGMGGG, GGFGGMGGG, and FGGMGGGNAG are repeated throughout the peptide chain. Notably, these peptide sequences all contain glycine. Additionally, in Argopecten irradians, the pentapeptide FGGMG is repeated throughout the molecule. The main peptide sequence feature of abductin is the presence of many repeating sequences, all of which contain glycine residues. This is similar to the structure of elastin. Abductin is lightly cross-linked, which gives it its high elasticity. The source of cross-linking has been researched, but no concrete explanation has been devised. The lack of tyrosine in the peptide chain suggests that cross-links are not formed through dityrosine links, as they are in resilin. Hypotheses about the mechanism of cross-linking have been proposed by various researchers. One potential source of cross-linking is the presence of a methionine dimer, ½ cystine in some species, or other similar amino acids that contain a disulfide bridge, which creates the cross-link between peptide chains. Another study found that 3,3'-methylene-bistyrosine could be responsible for the cross-linkage in abductin, similar to how tyrosine and lysine residues are responsible for the cross-linking in resilin and elastin. Abductin is acellular and amorphous in structure, as shown by microscopy and x-ray diffraction, respectively.
Since abductin is insoluble and its isolation from the hinge ligament is difficult, there is a lack of research concerning its structure at the protein level, such as secondary and hierarchical structures. More recent research on synthetic peptides derived from abductin found that they have a polyproline II helix structure in aqueous solutions and a type II β-turn structure in hydrophobic solvents. Combinations of both structures can also be observed for longer abductin-like peptide chains. Biological function The use of abductin varies among the different species of mollusks in the world. Some, like scallops and file shells, are able to swim using a repetitive motion of opening and closing the shell, a motion that rapidly takes in and expels water. In other species of mollusks, the abductin is usually located where the two shells come together to form a hinge. Unlike scallops, which need efficient energy return for the purpose of movement, species like Aplysia find it necessary to reduce energy return in favor of stability in the opening and closing of the shells. Abductin can be found within the resilium structure, which is used to store mechanical energy for this purpose. The effectiveness of abductin is highly influenced by the morphological aspects of the mollusk's shell, such as its size and shape. Other influences on the performance of abductin in mollusks are temperature, with performance decreasing as the temperature of the surrounding environment decreases, and the presence of octopine, which acts as an analogue of lactic acid in mammals. The resilium structure of the clam can be modeled as an oscillatory system in which it works against the adductor muscle to open the shell of the organism; the resilium forces the shell open while the adductor muscle controls the shell's closure. Material properties Little data exist on the structure and function of compressible elastomeric proteins such as abductin. An understanding of the underlying structural features of these proteins may lead to the development of a new class of highly tailored "compressible" hydrogels. Gaining knowledge of the underlying structural and functional features of compressible natural elastomers, such as abductin, can lead to novel compressible bioelastomers with tailored material properties. Solubility By interpreting Hurst exponents as Flory exponents, water is found to be a poor solvent for the abductin peptides. Predicting the functional solvent environment for insoluble proteins like abductin is particularly difficult because the protein's hydrophobicity and the probable cross-linked nature suggest a less polar internal environment than the surrounding solvent. Conformation The presence of both extended conformations (PPII) and folded conformations (β-turns) in equilibrium to describe abductin has been previously suggested. Circular Dichroism (CD) spectra revealed that AMP1 (a 25 amino acid abductin sequence) adopts a dominant unordered conformation at 25 °C and a polyproline II (PPII) conformation at 0 °C and 45 °C, with a possible minor amount of type II β-turn conformers. This observation indicates that AMP1 undergoes an inverse temperature transition in that it goes from a dominant unordered conformation to a periodic, extended PPII conformation with increasing temperature. The secondary structure of abductin was also investigated by Nuclear Magnetic Resonance (NMR) and CD studies of several synthetic peptides.
Most synthetic abductin-based peptides adopted polyproline II (PPII) structures, which are left-handed helices, in aqueous solution, whereas they had type II β-turns in trifluoroethanol (TFE), which is a more hydrophobic (less polar) solvent. The coexistence of PPII and type II β-turns and temperature-induced multiconformational transitions were observed with longer synthetic abductin-like peptides such as (FGGMGGGNAG)4 in hexafluoroisopropanol (HFIP). The secondary structure of AB12 was qualitatively analyzed by comparing the CD spectra to other peptides with known secondary structures. The CD spectra of aqueous solutions of AB12 shows a strong negative peak at 200 nm and a tendency toward positive values at ~218 nm, which are characteristics of PPII helices. An isodichroic point at ~208 nm suggests an equilibrium exists between the PPII structure and other conformations. In addition, because the peak at 218 nm never exceeds zero, the spectra suggest the coexistence of unordered structures and PPII helices. A small negative band can be observed at ~225 nm, which likely results from the aromatic residue, phenylalanine, in the sequence. Temperature The effect of temperature on the secondary structure was studied. With increasing temperature, the magnitude of both peaks in the CD spectra at 200 and 218 nm decreased, which is typical for PPII helix conformations. In addition, the change in structure because of temperature was fully reversible and did not display any hysteresis. The PPII conformation, which is widely present in elastomeric proteins such as elastin and titin, is believed to play an important role in determining the elasticity of these proteins. The abductin-based protein possessed reversible Upper Critical Solution Temperature (UCST) behavior and formed a gel-like structure. At high temperatures, it displayed irreversible aggregation behavior. Thermal responsiveness is a useful property for engineering drug delivery systems because the encapsulation and release of drugs can easily be controlled via temperature change. Cytocompatibility The abductin-based protein was cytocompatible, and cells spread slowly when first seeded on the abductin-based protein. A LIVE/DEAD assay revealed that human umbilical vein endothelial cells had a viability of 98 ± 4% after being cultured for two days on the abductin-based protein. Initial cell spreading on the abductin-based protein was similar to that on bovine serum albumin. These studies thus demonstrate the potential of abductin-based proteins in tissue engineering and drug delivery applications due to the cytocompatibility and its response to temperature. Tensile and compressive moduli Natural abductin has a tensile modulus of 1.25 MPa, which is higher than elastin (0.3−0.6 MPa) but on the same order of magnitude as resilin (0.6−2 MPa). It has a compressive modulus of 4 MPa, which is higher than resilin (0.6−0.7 MPa). The superior mechanical properties of natural abductin offer the potential for designing protein-based biomaterials that can be utilized in a broader number of applications. Hydrodynamic volume and temperature relationship A solution of AB12 (10 mg/mL in Milli-Q water) was visually observed to turn from transparent to opaque when cooled from room temperature to lower temperatures (incubated on ice). Dynamic Light Scattering (DLS) was used to further investigate the temperature responsiveness of AB12. An abrupt decrease in the hydrodynamic diameter (DH) of AB12 was observed when the protein solution was heated from 2 to 5 °C. 
This phenomenon is indicative of Upper Critical Solution Temperature (UCST) behavior. The change in DH at low temperatures was reversible and displayed some hysteresis. A moderate increase in DH was observed from 35 °C, and a sharper increase in DH occurred starting at 57 °C (aggregation temperature). Compared to the reversible UCST behavior, the transition that occurred at the aggregation temperature was irreversible. Extended-folded behavior In the case of abductin, on compression, the equilibrium extended ⇄ folded should be shifted to the folded structures, decreasing the entropy. The uncompressed, multi-conformational state is recovered by a simple increase in entropy after the removal of the compression force. This is opposite to elastin’s behavior. Engineering applications The first patent that is dedicated to the usage and implementation of abductin was accepted by the United States Patent and Trademark Office on October 3, 2000 (Patent No. 6,127,166). The patent in question details the specific protein sequence of abductin to be manufactured through biological means and the possible applications of the polymer, suggesting possible uses as a copolymer for other naturally occurring polymers, a fabric material, or a material that binds with antibodies. As of April 2022, there hasn’t been large-scale production, nor application, of polymers derived from the abductin or related polymeric sequences. References Molluscan proteins Biomaterials
Abductin
Physics,Biology
2,658
3,352,095
https://en.wikipedia.org/wiki/SN%202005cs
SN 2005cs was a supernova in the spiral galaxy M51, known as the Whirlpool Galaxy. It was a type II-P core-collapse supernova, discovered June 28, 2005 by Wolfgang Kloehr, a German amateur astronomer. The event was positioned at an offset of west and south of the galactic nucleus of M51. Based on the data, the explosion was inferred to occur 2.8 days before discovery. It was considered under-luminous for a supernova of its type, releasing an estimated in energy. The progenitor star was identified from a Hubble Space Telescope image taken January 20–21, 2005. It was a red supergiant with a spectral type in the mid-K to late-M type range and an estimated initial (ZAMS) mass of . A higher mass star enshrouded in a cocoon of dust has been ruled out. References External links Light curves and spectra on the Open Supernova Catalog Supernovae Canes Venatici 20050628 2005 in science
SN 2005cs
Chemistry,Astronomy
216
46,886,823
https://en.wikipedia.org/wiki/Trichaptum%20imbricatum
Trichaptum imbricatum is a species of fungus in the family Polyporaceae. It is distinguished by its imbricate basidiocarps, white to cream hymenophores, small and regular pores, and scattered and thin-walled cystidia. It was first isolated from China. References Further reading Dai, Yu-Cheng, et al. "Wood-inhabiting fungi in southern China. 4. Polypores from Hainan Province." Annales Botanici Fennici. Vol. 48. No. 3. Finnish Zoological and Botanical Publishing Board, 2011. Mihál, Ivan. "Species diversity, abundance and dominance of macromycetes in beech forest stands with different intensity of shelterwood cutting interventions." (2008). Hibbett, David S., and Manfred Binder. "Evolution of complex fruiting–body morphologies in homobasidiomycetes." Proceedings of the Royal Society of London B: Biological Sciences 269.1504 (2002): 1963–1969. External links Polyporaceae Fungi of China Fungi described in 2009 Taxa named by Yu-Cheng Dai Taxa named by Bao-Kai Cui Fungus species
Trichaptum imbricatum
Biology
246
40,075,528
https://en.wikipedia.org/wiki/Sphere%20packing%20in%20a%20sphere
Sphere packing in a sphere is a three-dimensional packing problem with the objective of packing a given number of equal spheres inside a unit sphere. It is the three-dimensional equivalent of the circle packing in a circle problem in two dimensions. References Spheres Packing problems
Sphere packing in a sphere
Mathematics
53
53,572,933
https://en.wikipedia.org/wiki/Boron%20nitride%20aerogel
Boron nitride aerogel is an aerogel made of highly porous boron nitride (BN). It typically consists of a mixture of deformed boron nitride nanotubes and nanosheets. It can have a density as low as 0.6 mg/cm3 and a specific surface area as high as 1050 m2/g, and therefore has potential applications as an absorbent, catalyst support and gas storage medium. BN aerogels are highly hydrophobic and can absorb up to 160 times their mass in oil. They are resistant to oxidation in air at temperatures up to 1200 °C, and hence can be reused after the absorbed oil is burned out by flame. BN aerogels can be prepared by template-assisted chemical vapor deposition at a temperature ~900 °C using borazine as the feed gas. Alternatively it can be produced by ball milling h-BN powder, ultrasonically dispersing it in water, and freeze-drying the dispersion. References Nanomaterials Boron compounds Aerogels Boron–nitrogen compounds
Boron nitride aerogel
Chemistry,Materials_science
223
16,750
https://en.wikipedia.org/wiki/Kleene%20star
In mathematical logic and computer science, the Kleene star (or Kleene operator or Kleene closure) is a unary operation, either on sets of strings or on sets of symbols or characters. In mathematics, it is more commonly known as the free monoid construction. The application of the Kleene star to a set is written as . It is widely used for regular expressions, which is the context in which it was introduced by Stephen Kleene to characterize certain automata, where it means "zero or more repetitions". If is a set of strings, then is defined as the smallest superset of that contains the empty string and is closed under the string concatenation operation. If is a set of symbols or characters, then is the set of all strings over symbols in , including the empty string . The set can also be described as the set containing the empty string and all finite-length strings that can be generated by concatenating arbitrary elements of , allowing the use of the same element multiple times. If is either the empty set ∅ or the singleton set , then ; if is any other finite set or countably infinite set, then is a countably infinite set. As a consequence, each formal language over a finite or countably infinite alphabet is countable, since it is a subset of the countably infinite set . The operators are used in rewrite rules for generative grammars. Definition and notation Given a set , define (the language consisting only of the empty string), , and define recursively the set for each . If is a formal language, then , the -th power of the set , is a shorthand for the concatenation of set with itself times. That is, can be understood to be the set of all strings that can be represented as the concatenation of strings in . The definition of Kleene star on is This means that the Kleene star operator is an idempotent unary operator: for any set of strings or characters, as for every . Kleene plus In some formal language studies, (e.g. AFL theory) a variation on the Kleene star operation called the Kleene plus is used. The Kleene plus omits the term in the above union. In other words, the Kleene plus on is or Examples Example of Kleene star applied to set of strings: {"ab","c"}* = { ε, "ab", "c", "abab", "abc", "cab", "cc", "ababab", "ababc", "abcab", "abcc", "cabab", "cabc", "ccab", "ccc", ...}. Example of Kleene plus applied to set of characters: {"a", "b", "c"}+ = { "a", "b", "c", "aa", "ab", "ac", "ba", "bb", "bc", "ca", "cb", "cc", "aaa", "aab", ...}. Kleene star applied to the same character set: {"a", "b", "c"}* = { ε, "a", "b", "c", "aa", "ab", "ac", "ba", "bb", "bc", "ca", "cb", "cc", "aaa", "aab", ...}. Example of Kleene star applied to the empty set: ∅* = {ε}. Example of Kleene plus applied to the empty set: ∅+ = ∅ ∅* = { } = ∅, where concatenation is an associative and noncommutative product. Example of Kleene plus and Kleene star applied to the singleton set containing the empty string: If , then also for each , hence . Generalization Strings form a monoid with concatenation as the binary operation and ε the identity element. The Kleene star is defined for any monoid, not just strings. More precisely, let (M, ⋅) be a monoid, and S ⊆ M. Then S* is the smallest submonoid of M containing S; that is, S* contains the neutral element of M, the set S, and is such that if x,y ∈ S*, then x⋅y ∈ S*. Furthermore, the Kleene star is generalized by including the *-operation (and the union) in the algebraic structure itself by the notion of complete star semiring. 
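As a concrete illustration of the definitions above, the following minimal Python sketch (illustrative only, not part of the original article) enumerates the elements of V* and V+ up to a bounded string length for a finite set of strings V.

```python
def kleene_star(V, max_len):
    """All strings in V* whose length is at most max_len."""
    result = {""}                      # V^0 contains only the empty string
    frontier = {""}
    while frontier:
        next_frontier = set()
        for s in frontier:
            for v in V:
                t = s + v              # concatenate one more element of V
                if len(t) <= max_len and t not in result:
                    result.add(t)
                    next_frontier.add(t)
        frontier = next_frontier
    return result

def kleene_plus(V, max_len):
    """V+ omits the empty string, unless the empty string is itself in V."""
    star = kleene_star(V, max_len)
    return star if "" in V else star - {""}

# Matches the first example above: includes '', 'ab', 'c', 'abab', 'abc', 'cab', 'cc', ...
print(sorted(kleene_star({"ab", "c"}, 4)))
print(sorted(kleene_plus({"a", "b", "c"}, 2)))
```

The truncation by max_len is only there to keep the enumeration finite; the sets V* and V+ themselves are infinite whenever V contains a nonempty string.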
See also Wildcard character Glob (programming) References Further reading Formal languages Grammar Natural language processing
Kleene star
Mathematics,Technology
1,011
33,084,698
https://en.wikipedia.org/wiki/Yusif%20Mammadaliyev
Yusif Haydar oglu Mammadaliyev (; December 31, 1905 – December 15, 1961) was an Azerbaijani and Soviet chemist. He was a Doctor of Chemistry, an academician of the National Academy of Sciences of the Azerbaijan SSR, and served as president of the National Academy of Sciences of the Azerbaijan SSR. Biography He was born on December 31, 1905, in Ordubad. In 1923, he entered the higher pedagogical institute of Baku. In 1926, after graduating from the institute, he taught at a secondary school for 3 years. In 1929, he became a second-year student of the chemistry faculty of MSU, from which he graduated in 1932. He was a student of Nikolay Zelinsky and Aleksei Balandin and one of the first senior staff of the organic chemistry laboratory of the chemistry faculty's organic chemistry department, specializing in organocatalysis. After graduating from MSU he worked in Moscow at chemical plant No. 1, and was then transferred to Azerbaijan, where he first headed the department of organic chemistry of the agricultural college of Azerbaijan. He then worked (1933–1945) at the Azerbaijan Research Institute of Oil, where he became head of a laboratory. His work was devoted to scientific problems of petrochemistry and organocatalysis and was closely connected with the development of the domestic oil-refining and petrochemical industry. Some of his developments were adopted as the basis of new industrial processes. Starting in 1934, he carried out extensive teaching work at Azerbaijan University named after S.M. Kirov, successively holding the positions of associate professor, professor, head of department and rector (1954–1958). In 1933, the degree of Candidate of Chemistry was conferred on Yusif Mammadaliyev without the defense of a dissertation. In 1942, he became a Doctor of Chemistry and in 1943, a professor; in 1945, he became an academician of the Academy of Sciences of the Azerbaijan SSR (from the establishment of the academy). He was the director of the Oil Academy of the Azerbaijan SSR. In 1946, he was appointed to the Ministry of Oil Industry, where he became chairman of the scientific-technical council of the ministry. In 1951–1954, he was the academician-secretary of the physics, chemistry and oil departments of the Academy of Sciences of the Azerbaijan SSR, and in 1954–1958, the rector of Azerbaijan State University. In 1947–1951 and 1958–1961 Mammadaliyev was elected president of the Academy of Sciences of the Azerbaijan SSR. The Institute of Petrochemical Processes was established in Baku on Mammadaliyev's initiative. In 1958, Mammadaliyev was elected a corresponding member of the Academy of Sciences of the USSR. Mammadaliyev died in 1961. Scientific effort The main scientific works of Yusif Mammadaliyev are related to the catalytic processing of oil and fuel oil. He is the founder of petrochemistry in Azerbaijan. He proposed new methods of chlorination and bromination of various hydrocarbons with the participation of catalysts, and in particular showed ways of obtaining carbon tetrachloride, chloromethane, dichloromethane and other valuable products by the chlorination of methane, initially over a fixed catalyst bed and later in a fluidized bed. Research in the field of catalytic alkylation of aromatic, paraffinic and naphthenic hydrocarbons with unsaturated hydrocarbons enabled the synthesis of components of aviation fuels on an industrial scale.
His major works were carried out in the fields of catalytic aromatization of the gasoline fractions of Baku oil, the production of detergents and organosilicon compounds, the production of plastics from pyrolysis products, and the analysis of the mechanism of action of Naftalan oil. He repeatedly represented Azerbaijan at congresses, conventions and symposiums held in the USSR, United States, Italy, France, England, Moldavia, Poland and other countries. Awards Order of Lenin Order of the Red Banner of Labour Order of the Badge of Honour Stalin Prize See also Movsum bey Khanlarov References Literature Мир-Бабаев М.Ф. Научный подвиг гения (к 100-летию со дня рождения Ю.Г. Мамедалиева) – «Consulting & Business», 2005, No.8, с.8–12. Mir-Babayev M.F. The role of Azerbaijan in the World's oil industry – “Oil-Industry History” (USA), 2011, v. 12, no. 1, p. 109–123. Mir-Babayev M.F. Formula of Victory (Yusif Mamedaliyev) - "SOCAR plus", 2012, Autumn, p. 100–111. External links Memories of my father 1905 births 1961 deaths People from Ordubad Academic staff of Azerbaijan State Oil and Industry University Moscow State University alumni Fifth convocation members of the Supreme Soviet of the Soviet Union Members of the Central Committee of the 20th Congress of the Communist Party of the Soviet Union Members of the Central Committee of the 22nd Congress of the Communist Party of the Soviet Union Recipients of the Order of the Badge of Honour Recipients of the Order of Lenin Recipients of the Order of the Red Banner of Labour Recipients of the Stalin Prize Organic chemists Azerbaijani chemists Soviet chemists Rectors of Baku State University
Yusif Mammadaliyev
Chemistry
1,145
14,662,101
https://en.wikipedia.org/wiki/Biositemap
A Biositemap is a way for a biomedical research institution or organisation to show how biological information is distributed throughout its Information Technology systems and networks. This information may be shared with other organisations and researchers. The Biositemap enables web browsers, crawlers and robots to easily access and process the information to use in other systems, media and computational formats. Biositemaps protocols provide clues for the Biositemap web harvesters, allowing them to find resources and content across the whole interlinked Biositemap system. This means that human or machine users can access any relevant information on any topic across all organisations throughout the Biositemap system and bring it to their own systems for assimilation or analysis. File framework The information is normally stored in a biositemap.rdf or biositemap.xml file which contains lists of information about the data, software, tools, materials and services provided or held by that organisation. Information is presented in metafields and can be created online through sites such as the biositemaps online editor. The information is a blend of sitemaps and RSS feeds and is created using the Information Model (IM) and Biomedical Resource Ontology (BRO). The IM is responsible for defining the data held in the metafields and the BRO controls the terminology of the data held in the resource_type field. The BRO is critical in enabling both other organisations and third parties to search the data and refine those searches. Data formats The Biositemaps Protocol allows scientists, engineers, centers and institutions engaged in modeling, software tool development and analysis of biomedical and informatics data to broadcast and disseminate to the world the information about their latest computational biology resources (data, software tools and web services). The biositemap concept is based on ideas from Efficient, Automated Web Resource Harvesting and Crawler-friendly Web Servers, and it integrates the features of sitemaps and RSS feeds into a decentralized mechanism for computational biologists and bio-informaticians to openly broadcast and retrieve meta-data about biomedical resources. These site-, institution-, or investigator-specific biositemap descriptions are published in RDF format online and are searched, parsed, monitored and interpreted by web search engines, web applications specific to biositemaps and ontologies, and other applications interested in discovering updated or novel resources for bioinformatics and biomedical research investigations. The biositemap mechanism separates the providers of biomedical resources (investigators or institutions) from the consumers of resource content (researchers, clinicians, news media, funding agencies, educational and research initiatives). A Biositemap is an RDF file that lists the biomedical and bioinformatics resources for a specific research group or consortium. It allows developers of biomedical resources to describe the functionality and usability of each of their software tools, databases or web-services. Biositemaps supplement and do not replace the existing frameworks for dissemination of data, tools and services. Using a biositemap does not guarantee that resources will be included in search indexes nor does it influence the way that tools are ranked or perceived by the community.
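To make the file framework concrete, the sketch below shows how a single resource entry might be assembled and serialised into a biositemap RDF file using Python's rdflib library. The namespace URI, the Resource class and the property names (name, description, resourceType, keyword) are illustrative assumptions standing in for the official Information Model and Biomedical Resource Ontology terms, so this is a minimal sketch of the general shape of such a file rather than a definitive implementation.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Hypothetical namespace standing in for the Biositemaps/BRO vocabulary.
BIO = Namespace("http://example.org/biositemap#")

def build_biositemap(resources):
    """Assemble a biositemap graph from a list of resource dictionaries."""
    g = Graph()
    g.bind("bio", BIO)
    for res in resources:
        node = URIRef(res["url"])
        g.add((node, RDF.type, BIO.Resource))
        g.add((node, BIO.name, Literal(res["name"])))
        g.add((node, BIO.description, Literal(res["description"])))
        g.add((node, BIO.resourceType, Literal(res["resource_type"])))
        for keyword in res.get("keywords", []):
            g.add((node, BIO.keyword, Literal(keyword)))
    return g

if __name__ == "__main__":
    graph = build_biositemap([{
        "url": "http://example.org/tools/segmenter",
        "name": "Example Image Segmenter",
        "description": "Hypothetical tool for segmenting biomedical images.",
        "resource_type": "Software",
        "keywords": ["imaging", "segmentation"],
    }])
    # Serialise as RDF/XML, the format in which biositemap.rdf files are published.
    print(graph.serialize(format="xml"))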
What the Biositemaps protocol will do is provide clues, information and directives to all Biositemap web harvesters that point to the existence and content of biomedical resources at different sites. Biositemap Information Model The Biositemap protocol relies on an extensible information model that includes specific properties that are commonly used and necessary for characterizing biomedical resources: Name Description URL Stage of development Organization Resource Ontology Label Keywords License Up-to-date documentation on the information model is available at the Biositemaps website. See also Information visualization ITools Resourceome Sitemaps References External links Biomedical Resource Ontology Biositemaps online editor Domain-specific knowledge representation languages Biological techniques and tools Bioinformatics
Biositemap
Engineering,Biology
798
51,408,006
https://en.wikipedia.org/wiki/Haze%20%28optics%29
There are two different types of haze that can occur in materials: Reflection haze occurs when light is reflected from a material. Transmission haze occurs when light passes through a material. The measurement and control of both types during manufacture are essential to ensure optimum quality, acceptability and suitability for purpose of the product. For instance, in automotive manufacturing, a high quality reflective appearance is desirable with low reflection haze and high contrast, whilst in packaging, clear, low-haze, highly transmissive films are required so that the contents, foods etc., can be clearly observed. Reflection Haze Reflection haze is an optical phenomenon usually associated with high gloss surfaces; it is a common surface problem that can affect appearance quality. The reflection from an ideal high gloss surface should be clear and radiant; however, due to scattering at imperfections in the surface caused by microscopic structures or textures (≈ 0.01 mm wavelength), the reflection can appear milky or hazy, reducing the quality of its overall visual appearance. This can be due to a number of factors – Poor dispersion Method of applying the coating Variations in drying, curing or baking Types of materials used in the formulation Polishing or abrasion A high gloss surface with haze exhibits a milky finish with low reflective contrast – reflected highlights and lowlights are less pronounced. On surfaces with haze, halos are visible around the reflections of strong light sources. Measurement Measurement of reflection haze is primarily defined under three international test standards: ASTM E430 ASTM E430 comprises three test methods: Test method A specifies a 30° angle for specular gloss measurement, 28° or 32° for narrow-angle reflection haze measurement and 25° or 35° for wide-angle reflection haze measurement. Test method B specifies a 20° angle for specular gloss measurement and 18.1° and 21.9° for narrow-angle reflection haze measurement. Test method C specifies a 30° angle for specular gloss measurement, 28° or 32° for narrow-angle reflection haze measurement and 15° for wide-angle reflection haze measurement. ASTM D4039 The test method specifies gloss measurements to be made at 20° and 60°; the haze index is then calculated as the difference between the 60° and 20° measurements. ISO 13803 The test method specifies a 20° angle for specular gloss measurement and 18.1° and 21.9° for narrow-angle reflection haze measurement. All test methods specify that measurements should be made with visible light according to the CIE spectral luminous efficiency function V(λ) in the CIE 1931 standard observer and CIE standard illuminant C. As most commercially available glossmeters have gloss measurement angles of 20°, 60° and 85°, haze measurement is incorporated at either 20° (ISO 13803 / ASTM E430 method B) or at 20° and 60° (ASTM D4039). Some manufacturers, however, offer glossmeters with a measurement angle of 30° and haze measurement in accordance with ASTM E430 Methods A and C, but these are fewer in number; therefore, for the purposes of detailing haze measurement theory, only the first three methods will be included. ISO 13803 / ASTM E430 method B Both test methods measure specular gloss and haze together at 20°, meaning light is transmitted and received at an equal but opposite angle of 20°. Specular gloss is measured over an angular range that is limited by aperture dimensions as defined in ASTM Test Method D523.
The angular measurement range for this at 20° is ±0.9° (19.1° - 20.9°). For haze measurement, additional sensors are used on either side of this range, at 18.1° and 21.9°, to measure the intensity of the scattered light. Both solid colours and those containing metallics can be measured using this method provided haze compensation is used (as detailed later). ASTM D4039 This method can only be used on nonmetallic materials having a 60° specular gloss value greater than 70 in accordance with ASTM Test Method D523 / ISO 2813. The haze index is calculated from gloss measurements made at 20 and 60 degrees as the difference between the two measurements (HI = G60 - G20). As measurements of specular gloss depend largely on the refractive index of the material being measured, 20° gloss will change more noticeably than 60° gloss; therefore, as the haze index is calculated from these two measurements, it too will be affected by the refractive index of the material. Evaluations of reflection haze using this test method are therefore confined to samples of roughly the same refractive index. Haze compensation It is important to note that the colour (luminous reflectance) of a material can greatly influence the measurement of reflection haze. As colour and haze are both components of scattered light (diffuse reflectance), they must be separated so that only the haze value is quantified; this is also true for metallics or coatings containing metallic pigments, where higher scattering exists. As test method ASTM D4039 is only suitable for nonmetallic materials of more or less the same refractive index, separation of the colour and haze components is not detailed. Haze index calculations and measurements using this test method will therefore produce higher haze results on brighter coloured materials than on darker ones with the same level of haze present. Both ISO 13803 and ASTM E430 method B require a separate measurement of luminous reflectance, Y, to calculate compensated haze. The tri-stimulus value Y gives a measure of the lightness of the material as defined in ISO 7724-2, requiring a 45°/0° geometry to be used with standard illuminant C and a 2° observer (although it is mentioned that slightly different conditions will not result in significant errors). Luminous reflectance measurements, Y, are required on both the sample material and a reference white; ISO 13803 details the use of a BaSO4 standard - barium sulphate, a white crystalline solid having a white opaque appearance and high density, as this material is a good substitute for a perfectly reflecting diffusor as defined under ISO 7724-2. Compensated haze can then be calculated as H Comp = H Linear – (Y Sample / Y BaSO4). Using the ISO/ASTM method to measure luminous reflectance therefore produces a reliable measurement of Y for non-metallic surfaces, as the diffuse component is Lambertian, i.e. equal in amplitude at all angles in relation to the sample surface. However, for metallic coatings and those containing speciality pigments, as the particles within the coating reflect the light directionally around the specular angle, little or no metallic reflection is present at the angle at which the luminosity is measured; therefore these types of coatings give an unexpectedly high haze reading.
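As a worked illustration of the two calculations just described, the short Python sketch below computes the ASTM D4039 haze index from 20° and 60° gloss readings and the colour-compensated haze from a linear haze reading together with luminous reflectance measurements of the sample and a BaSO4 reference white. The function names and the example numbers are illustrative assumptions only; commercial instruments apply these formulas internally to calibrated readings.
def haze_index_astm_d4039(gloss_60: float, gloss_20: float) -> float:
    """Haze index as the difference between 60 degree and 20 degree specular gloss (HI = G60 - G20)."""
    return gloss_60 - gloss_20

def compensated_haze(haze_linear: float, y_sample: float, y_baso4: float) -> float:
    """Colour-compensated haze: subtract the luminous reflectance ratio of the sample
    against the BaSO4 reference white from the linear haze reading
    (H Comp = H Linear - Y Sample / Y BaSO4)."""
    return haze_linear - (y_sample / y_baso4)

if __name__ == "__main__":
    # Illustrative readings only, not taken from any standard or instrument.
    print(haze_index_astm_d4039(gloss_60=88.0, gloss_20=75.5))            # 12.5
    print(compensated_haze(haze_linear=14.2, y_sample=62.0, y_baso4=93.0))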
Using a measurement angle which is closer to the region adjacent to the haze angle has proven successful in providing compatible readings on solid colours and also in compensating for directional reflection from metallic coatings and speciality pigments. Applications Generally, measurement of reflection haze is confined to high gloss paints and coatings and highly polished metals. Although there has been some degree of success using this measurement method for films, it has proven unreliable due to variability caused by changes in the film thickness (internal refraction variations) and the background colour on which the film sample is placed. Generally, haze measurement of films is performed using a transmission-type hazemeter as described hereafter. Transmission Haze Light and transparent materials When light strikes the surface of a transparent material, the following interactions occur – • Light is reflected from the front surface of the material • Some light is refracted within the material (depending on thickness) and reflected from the second surface • Light passes through the material at an angle which is determined by the refractive index of the material and the angle of illumination. The light that passes through the transparent material can be affected by irregularities within it; these can include poorly dispersed particles, contaminants (e.g. dust particles) and/or air spaces. This causes the light to scatter in directions away from the normal, the degree of which is related to the size and number of irregularities present. Small irregularities cause the light to scatter, or diffuse, in all directions, whilst large ones cause the light to be scattered forward in a narrow cone shape. These two types of scattering behaviour are known as Wide Angle Scattering, which causes haze due to the loss of transmissive contrast, and Narrow Angle Scattering, a measure of clarity or the "see-through quality" of the material, based on a reduction of sharpness. These factors are therefore important for defining the transmitting properties of a transparent material – Transmission – The amount of light that passes through the material without being scattered Haze – The amount of light that is subject to Wide Angle Scattering (at an angle greater than 2.5° from normal (ASTM D1003)) Clarity – The amount of light that is subject to Narrow Angle Scattering (at an angle less than 2.5° from normal) Measurement Measurement of these factors is defined in two international test standards – ASTM D1003 ASTM D1003 comprises two test methods: Procedure A – using a Hazemeter Procedure B – using a Spectrophotometer BS EN ISO 13468 Parts 1 and 2 Part 1 – Using a single beam Hazemeter Part 2 – Using a dual beam Hazemeter The test methods specify the use of a hazemeter as described below – A collimated beam of light from a light source (ASTM D1003 - Illuminant C, BS EN ISO 13468 Parts 1 and 2 - Illuminant D65) passes through a sample mounted on the entrance port of an integrating sphere. The light, which is uniformly distributed by a matte white highly reflective coating on the sphere walls, is measured by a photodetector positioned at 90° from the entrance port. A baffle mounted between the photodetector and the entrance port prevents direct exposure from the port. The exit port immediately opposite the entrance port contains a light trap to absorb all light from the light source when no sample is present.
A shutter in this exit port, coated with the same coating as the sphere walls, allows the port to be opened and closed as required. Total transmittance is measured with the exit port closed. Transmittance haze is measured with the exit port open. Commercially available hazemeters of this type perform both measurements automatically, the only operator interaction being the placement of the sample material on the measurement (entrance) port of the device. See also Gloss (paint) Visual appearance Distinctness of image Transparency meter References External links Reflection Haze measurement theory Optical phenomena
Haze (optics)
Physics
2,173
20,936,370
https://en.wikipedia.org/wiki/Timeline%20of%20carbon%20capture%20and%20storage
The milestones for carbon capture and storage show the lack of commercial scale development and implementation of CCS over the years since the first carbon tax was imposed. The timeline of carbon capture and storage announcements and developments follows: 1972: Since 1972 over 175 million metric tons of carbon dioxide (CO2) have been injected into the SACROC oil field to enhance oil recovery (EOR). 2023 In this year, two projects, among others, were reported on by the ICSC in its webinar: Northern Lights JV PORTHOS 2009 Global Carbon Capture and Storage Institute 10 July 2009 major economies forum meeting on climate change: Australian Prime Minister Mr Rudd, who shared the stage with US President Barack Obama, said the Global Carbon Capture and Storage Institute (CCS) would now be an international initiative led by Australia - which will act as a clearing house for research into new technologies, legislation to pave their path and as a vehicle to streamline funding. "The practical challenge we face...is what do we do about the problem, the challenge, of coal. There are practically no large carbon capture and storage projects under construction now," Mr Rudd said. "Australia in the last 12 months has decided to work with other major economies, and all the major energy companies, on the establishment of a Global Carbon Capture and Storage Institute. That is what we are here launching today." He said carbon capture and storage, which captures CO2 and seeks to inject it into safe stores deep underground, is an important potential future weapon in the battle against global warming. Electricity sourced from fossil fuels accounts for more than 40 per cent of the world's energy-related emissions. A further 25 per cent comes from large-scale industrial processes such as iron and steel production, cement making, natural gas processing and petroleum refining. With world energy demand projected to grow by more than 40 per cent over the next two decades, reducing emissions is a significant challenge that the nation of Australia has a particular interest in helping to solve. Australia is the world's biggest coal exporter, and the Australian economy is heavily dependent upon coal—its biggest export. Renewable energy technology continues to develop, but fossil fuels, in particular coal and gas, will continue to provide the bulk of the world's energy for the foreseeable future. However, there is a way to harness fossil fuels while significantly reducing emissions. The answer lies in the capture and storage of carbon dioxide and other climate-influencing greenhouse gases. GreenGen Project led by China Huaneng Group (CHNG) will begin construction in early 2009 The GreenGen Project aims to research, develop and demonstrate a coal-based power generation system with hydrogen production through coal gasification, power generation from a combined cycle gas turbine and fuel cells, and efficient treatment of pollutants and CO2. Thus, the efficiency of the coal-based power generation will be greatly improved, and the pollutants and CO2 emissions will be near zero. 2008 Reality campaign launched 3 December 2008: This Is Reality.org, a coalition of US environmental organisations, launches a campaign to highlight that no fossil fuel power station in the USA captures anything other than token amounts of the greenhouse gas that the Intergovernmental Panel on Climate Change (IPCC) and all other significant scientific organisations acknowledge are causing climate change, global warming, Arctic shrinkage and sea level rise.
CCS inclusion in CDM postponed Plans to allow organizations to earn credits for CCS under the Clean Development Mechanism (CDM) were dropped at the climate change talks in Poznan - the 14th session of the United Nations Framework Convention on Climate Change (UNFCCC). The proposal, led by Australia, was supported by a majority of representatives and the IEA, but Brazil and a few other countries blocked the move. The decision was therefore postponed until the following year. Carbon assessment software developed Researchers at Massachusetts Institute of Technology (MIT) designed new software that will help developers of clean coal technology to accurately measure how much CO2 they can store underground. BP and CAS established the Clean Energy Commercialization Center BP and the Chinese Academy of Sciences (CAS) have agreed to establish the Clean Energy Commercialization Center (CECC), a joint venture in Shanghai investing some $73 million to commercialize Chinese clean energy technologies. Subject to final government approvals, the CECC venture is expected to be established by early 2009. The CECC will serve as an international platform for further collaboration among research institutes, enterprises and other institutions to improve indigenous Chinese innovation capabilities and market applications in areas such as clean coal conversion, zero emission technologies, and carbon capture and storage. UK Energy Bill received Royal Assent On 26 November 2008, the UK Energy Bill received Royal Assent and was passed into law. The bill provides for a regulatory framework to enable private sector investment in carbon capture and storage projects, as CCS has the potential to reduce the carbon emissions from fossil fuel power stations by up to 90%. The Global Carbon Capture and Storage Institute planned The Australian government in September committed A$100 million annually to help establish an international institute to accelerate development of clean-coal technologies, including the long-term aim of storing carbon dioxide permanently underground. Following a two-day preparatory meeting in London, the institute now has eight foundation industry members: Shell International Petroleum; Rio Tinto Ltd.; Mitsubishi Corp.; Anglo American; Xstrata Coal; Services Petroliers Schlumberger; Alstom; and The Climate Group. Clean-coal debut in Germany The world's first coal-fired power plant designed to capture and store the carbon dioxide that it produces began operations in Spremberg, Germany. Built alongside the 1,600MW Schwarze Pumpe power plant in north Germany, the demonstration experiment will capture up to 100,000 tonnes of CO2 a year, compress it and bury it 3,000m below the surface of the depleted Altmark gas field, about 200 km from the site. "It's a very important and tangible step forward," said Stuart Haszeldine, a geologist and CCS expert at the University of Edinburgh. "It is the first full-chain demonstration of oxyfuel as a carbon capture technology. It connects all that for the first time in a working system." Irish Energy Group launches carbon capture scheme Providence Resources, an Irish oil and gas group, is launching a project that could lead to the first carbon capture & storage (CCS) scheme in the British Isles. Providence Resources is working with Star Energy Group, a UK gas storage company owned by Petronas of Malaysia, on the Ulysses Project.
The scheme will evaluate the Kish Bank Basin in the Irish Sea to decide whether its underground saline reservoirs can be used for carbon sequestration and natural gas storage. New organic liquids to Capture CO2 Carbon dioxide-binding organic liquids (BOLs) can hold more than twice as much CO2 as current capture agents. The liquids could be used in coal power plants to capture the greenhouse gas from combustion exhaust. The CSIRO and its Chinese partners launched a PCC pilot plant in Beijing The CSIRO (Australia's Commonwealth Scientific and Industrial Research Organisation) and its Chinese partners Huaneng Group and the Thermal Power Research Institute (TPRI) have officially launched a post-combustion capture (PCC) pilot plant in Beijing that strips carbon dioxide from power station flue gases in an effort to stem climate change. The project represents another first for the CSIRO PCC program - the first capture of carbon dioxide in China using a PCC pilot plant. It begins the process of applying the technology to Chinese conditions and evaluating its effectiveness. The US government called for proposals to elicit commercial involvement in FutureGen project In June 2008, the US government announced a call for proposals to elicit commercial involvement in the restructuring of FutureGen. The U.S. Department of Energy announced plans to restructure FutureGen The United States Department of Energy announced a restructuring of the FutureGen project, which it said was necessary due to rising costs. Carbon Capture Journal published the first issue of its print magazine Carbon Capture Journal is the world's first and only news service, which has led to a print magazine, specifically about developments in carbon capture and geological storage technology. The print magazine is published six times a year (bimonthly). It informs people of developments around the world in both power station carbon capture and geological storage, with news about the major projects and developments in government policy. It is produced by the team behind Digital Energy Journal, one of the world's leading magazines and news services for information technology in the oil and gas industry. Digital Energy Journal Ltd, based in central London, was founded in December 2005 by the successful team behind Digital Ship magazine. Research and debate on Carbon Capture Ready (CCR) plants become active Research and debate on capture-ready plants became active in 2008, as evidenced by the IChemE (Institution of Chemical Engineers) Capture Ready Report, the IEA Carbon Capture Ready Report, and the CAPPCCO project. The Chinese Advanced Power Plant Carbon Capture Options (CAPPCCO) project is based on a five-year program at Imperial College London; its objective is to research how to make new power plants in China ‘capture ready’. 2007 Australia and China signed a partnership agreement On September 6, 2007, Australia and China signed a partnership agreement that will pave the way for the installation of low-emission coal energy technology in Beijing in 2008. Signed by CSIRO Chief Executive, Dr Geoff Garrett, and Mr Li Xiaopeng, the President of China's state-owned energy enterprise, the China Huaneng Group, the agreement will see a post combustion capture pilot plant installed at the Huaneng Beijing Co-generation Power Plant. The North American Carbon Capture & Storage Association (NACCSA) Founded in September 2007, the non-profit association supports the development of a commercial CCS industry in the United States and Canada.
It is the first of its kind organization in North America to advocate for policies that support the development of a commercial CCS industry. Founding member companies include: BlueSource, Halliburton, International Paper, Keener Oil & Gas Company, Kinder Morgan, MissionPoint Capital Partners, Occidental Petroleum Corporation, Peabody Energy, PetroSource Energy Company, Ramgen Power Systems, Schlumberger, and Shell. BP and Chinese Academy of Sciences signed the MoU to establish the Clean Energy Commercialization Centre BP and Chinese Academy of Sciences held a ceremony in Shanghai to celebrate the signing of a Memorandum of Understanding, announcing their intent to establish the Clean Energy Commercialization Centre. CECC aims to accelerate the development in China of clean coal conversion technologies and the creation of associated value chain investment opportunities through the commercialization of key technologies and coordinated management of large scale demonstration projects which primarily use coal as feedstock for fuel production, chemicals manufacturing and power generation. Website of Carbon Capture Journal launched Carbon Capture Journal focuses on industrial carbon capture from power stations and industrial plants and carbon dioxide underground storage, including new technology, policy and projects. Since launching the website in May 2007, it has been actively sought out by the carbon capture world's most influential figures, from power companies, oil and gas companies, government, financiers /banks, researchers, consultants, academics, NGOs, lawyers, engineering companies and technology suppliers. 2006 Coach Project - cooperation action within CCS China-EU The launch meeting for the new European Coach project was held in Beijing on November 21 and 22, 2006. An integral part of the European Commission's FP6, this project falls within the scope of the partnership agreement signed at the start of 2006 between the European Union and China focusing on ways of tackling climate change. The aim of Coach is to provide the technical recommendations required to design a coal-fired power station incorporating CO2 capture and storage technologies, to be constructed in China by 2010. The power station is expected to be industrially operational by 2015. Mature hydrocarbon fields located in Beijing have been identified as potential storage sites. UK Government faces tough choices on future power Public consultation on Britain's future energy needs ends with divided camps leaving the government with tough choices on power supplies. Bound by pledges to slash greenhouse gas emissions, the UK government must decide the shape of the country's electricity supply network for coming decades as demand booms and North Sea oil and gas run out. On one side of the debate is the so-called "big power" lobby promoting coal and nuclear generation. On the other, the green alternative advocating a wider mix of power sources including those coming from individuals' own efforts. 2005 Carbon Capture and Storage Association (CCSA) established Established in October 2005, CCSA encourages the development of carbon capture and storage (CCS) in UK, and plays an important role in delivering CCS technologies and projects worldwide. 11 founding companies consist of BP, Mitsui Babcock, Schlumberger, Scottish & Southern Energy, Shell, etc. 
NZEC agreement (Near Zero Emissions Coal Project) The EU-China NZEC agreement was signed at the EU-China Summit under the UK's presidency of the EU in September 2005 as part of the EU-China Partnership on Climate Change. The agreement has the objective of demonstrating advanced, near zero emissions coal technology through CCS in China and the EU by 2020. The Sino-UK bilateral NZEC initiative was developed in support of this wider agreement. The joint Sino-UK Near Zero Emissions Coal initiative is sponsored by the UK's Department for the Environment, Food and Rural Affairs (Defra), together with the Department for Business, Enterprise and Regulatory Reform (BERR), and the Ministry of Science and Technology (MOST) of the People's Republic of China. The Scottish Centre for Carbon Storage established The Scottish Centre for Carbon Storage was set up in September 2005. This collaboration between the University of Edinburgh and Heriot-Watt University, as well as the British Geological Survey, builds on and extends established expertise. The Centre comprises experimental and analytical facilities, expertise in field studies and modelling, and key academic and research personnel to stimulate the development of innovative solutions to carbon capture and geological storage. The new Norwegian government aims to make Norway the forerunner in CCS The new Norwegian government that came into power in the autumn of 2005 aims to make Norway the forerunner in CO2 capture and storage. It has also made a commitment to ensure that gas-fired power plants will be equipped with CO2 capture technology, and has allocated a total of €19 million to related R&D to be distributed through a new organization called Gassnova. The country has also intensified its international collaboration in the field, e.g., with the UK. The UK Carbon Capture and Storage Consortium (UKCCSC) started UKCCSC is a consortium of engineering, technological, natural, environmental, social and economic scientists, formed as a way to expand UK research capacity in carbon capture and storage. Its CCS objectives consist of: • Assess the impact of future energy supply/demand scenarios on overall costs/emissions of non-CCS and CCS fossil generation • Explore the role of CCS in the update of the UK's energy infrastructure • Investigate potential impacts of CO2 leakage during capture and storage, and compare these to environmental impacts of non-intervention EU Emission Trading Scheme commenced operation The European Union Greenhouse Gas Emission Trading Scheme (EU ETS) commenced operation in January 2005 as the largest multi-country, multi-sector greenhouse gas emission trading scheme worldwide. CCS technology was integrated into Chinese national development plan In 2005, CCS technology was integrated into the Chinese National Medium- and Long-term Science and Technology Development Plan towards 2020 (2006–2020). 2004 CO2 Capture Project Phase II (CCP2): 2004-2008 The targets of Phase II consist of: Achieve significant progress for each technology: successfully scaling up operation by at least one order of magnitude. Addressing and solving critical issues identified in Phase I Confirm or improve the economic evaluations of Phase I. At least one technology ready for field demonstration 2003 The U.S. Department of Energy named the seven regional partnerships on CCS The U.S.
Department of Energy on August 18, 2003, named the seven partnerships of state agencies, universities, and private companies that will form the core of a nationwide network to help determine the best approaches for capturing and permanently storing gases that can contribute to global climate change. The Regional Sequestration Partnership Program supports region-specific studies to determine the most suitable CCS technologies, regulations, and infrastructure. The seven partnerships consist of Big Sky Regional Carbon Sequestration Partnership, Midwest Geological Sequestration Consortium (Illinois Basin), Midwest Regional Carbon Sequestration Partnership, Southeast Regional Carbon Sequestration Partnership, Southwest Regional Partnership for Carbon Sequestration, Plains CO2 Reduction Partnership, and West Coast Regional Carbon Sequestration Partnership. Tsinghua-BP Clean Energy Research and Education Centre was launched Under the "Clean Energy: Facing the Future" Programme, the Tsinghua-BP Clean Energy Research and Education Centre was launched in July 2003. It aims to combine the strengths to create a "world-leading institute for energy strategy study for China". It has attracted a broad range of important players in various aspects of energy, industry and environment to serve on the advisory board of the centre. The inaugural meeting of the Carbon Sequestration Leadership Forum (CSLF) was held. The inaugural meeting of CSLF was held in Tysons Corner, Virginia, USA in June 2003. Thirteen countries and the European Commission signed the CSLF charter as part of this ministerial summit. The Charter committed each country to participate in the CSLF process, and specifically to provide both policy and technical expertise through a formal working group structure. CSLF is an international climate change initiative focusing on the development of carbon capture and storage technologies as a means to accomplishing long-term stabilization of GHG levels in the atmosphere. It is designed to improve CCS technologies through coordinated research and development with international partners and private industry. The US federal government announced FutureGen Project On February 27, 2003, the US federal government announced FutureGen, a $1 billion initiative to create a coal-based power plant focused on demonstrating a revolutionary clean coal technology that would produce hydrogen and electricity and mitigate greenhouse gas emissions. The FutureGen project was initiated in response to the National Energy Policy of May 2001, which emphasized the need for diverse and secure energy sources that could largely be provided by America's most abundant domestic energy resource, coal. U.S. Department of Energy budget for carbon capture and storage research One sign of the increased seriousness with which policymakers view the potential for CCS is the budget devoted by the U.S. Department of Energy to research on CCS, which has increased from about $1 million in 1998 to a 2003 budget request of $54 million, just five years later. (1998- $1mln, 1999- $6mln, 2000- $9mln, 2001- $18mln, 2002- $32mln, 2003- $54mln) 2002 IPCC decided to hold a workshop to do a literature on CCS In April 2002, at its 19th Session in Geneva, the IPCC decided to hold a workshop, which took place in November 2002 in Regina, Canada. The results of this workshop were a first assessment of literature on capture and storage, and a proposal for a Special Report. 
At its 20th Session in 2003 in Paris, France, the IPCC endorsed this proposal and agreed on the outline and timetable for the special report. 2001 ‘Clean Energy: Facing the Future’ Programme In November 2001, BP established the "Clean Energy: Facing the Future" Programme in China with the Chinese Academy of Sciences and Tsinghua University to create a partnership within China to address the issues and opportunities of clean energy. BP is providing $10 million over a ten-year period to fund research in new clean energy technologies. This programme aims to develop and prove new clean options for China and the rest of the world. The programme includes several projects at CAS's Dalian Institute of Chemical Physics and Shenyang Institute of Metals Research. The RECOPOL project started on November 1, 2001. RECOPOL stands for: 'Reduction of CO2 emission by means of storage in coal seams in the Silesian Coal Basin of Poland'. It is an EU co-funded combined research and demonstration project to investigate the possibility of permanent subsurface storage of CO2 in coal. At a selected location in Poland, a pilot installation is being developed for methane gas production from coal beds while simultaneously storing CO2 underground. The produced methane could become an alternative fuel that can be locally produced in Silesia. This installation is the very first of its kind in Europe, and at the moment the only one operational worldwide. The UNFCCC invited IPCC to prepare a special report on CCS technologies The United Nations Framework Convention on Climate Change (UNFCCC) at its seventh Conference of Parties (COP7) in 2001 invited the Intergovernmental Panel on Climate Change (IPCC) to prepare a special report on carbon capture and storage technologies. 2000 The Carbon Sequestration Initiative launched As a major component of the Carbon Capture and Sequestration Technologies Program at MIT, the Carbon Sequestration Initiative (CSI) was launched in July 2000. CSI is an industrial consortium formed to investigate CCS technologies; it aims to: Provide an objective source of assessment and information about carbon sequestration. Establish a members' information network to provide timely updates on relevant activities and new findings. Explore the societal and technical aspects of carbon sequestration. Educate a wider audience on the possibilities of carbon sequestration. Link industry to expanding government activities on these topics. Stimulate and seed new research ideas. Create an annual forum for strategic thinking and identification of new business opportunities. CCP Phase I: April 2000-December 2003 The CO2 Capture Project (CCP) is a partnership of eight of the world's leading energy companies and three government organizations undertaking research and developing technologies to help make CCS a practical reality for reducing global CO2 emissions and tackling climate change. April 2000-August 2000: Review & Evaluation/ over 200 ideas reviewed August 2000-September 2001: 30 Capture and 50 Storage Techs Screened, 50 Techs Pass Stage Gate/ tech teams screen tech options and recommend detailed evaluation of promising candidates September 2001-December 2002: Broad Tech Development December 2002-December 2003: Focused Tech Development CO2 injection began in Weyburn Delivery of the first CO2 from Dakota Gasification Company to Weyburn commenced in September 2000. In late 2000, CO2 injection was initiated at an initial injection rate of 2.69 million m3/d (or 5000 t/d) into 19 patterns.
Before 2000 1998 EnCana announced plans to implement Weyburn EOR project In 1998, a Canadian oil and gas corporation (PanCanadian Petroleum Limited, now EnCana Corporation) announced plans to implement a large scale EOR project in an oilfield near Weyburn, Saskatchewan, Canada, using CO2 captured from Dakota Gasification Company. The Weyburn-Midale Carbon Dioxide Project provided a chance to demonstrate and study a large-scale geological storage project and to provide the data to evaluate the safety, technical and economic feasibility of such storage. 1997 DGC agreed to send all of the waste gas (96% CO2) to the Weyburn oil field The Weyburn oil field is situated in Canada, near the USA border. The carbon dioxide for the Weyburn EOR project is produced in the Great Plains Synfuels Plant in Beulah, North Dakota, USA, which is operated by the Dakota Gasification Company. In 1997, DGC agreed to send all of the waste gas (96% CO2) from its Great Plains Synfuels Plant through a pipeline to the Weyburn oil field. 1996 Sleipner — a carbon dioxide storage project The Sleipner gas field is in the North Sea, about 250 km west of Stavanger, Norway. It is operated by Norway's largest oil company, Statoil. The Sleipner field produces natural gas and light oil from the Heimdal sandstones, which are about 2,500 m below sea level. The natural gas produced at Sleipner contains unusually high levels (about 9%) of carbon dioxide, but the customers require less than 2.5%. This means that the CO2 that is stored at Sleipner is a by-product of gas purification, rather than CO2 captured from a point source. As such, the Sleipner project is more precisely described as a carbon storage project. To encourage companies to cut their carbon emissions, the Norwegian government imposes a carbon tax equivalent to about $50 per ton of CO2 released into the atmosphere. To avoid paying the tax, and as a test of alternative technology, all of the CO2 extracted since 1996, when gas production started at Sleipner, has been pumped back deep underground. The Sleipner project is the first commercial example of CO2 storage in a deep saline aquifer, so there is a lot of interest from around the world in its success. In particular, scientists want to know how the CO2 moves inside the aquifer and if there is a risk that it could escape back to the surface. 1991 The Norwegian government instituted a tax on CO2 emissions In 1991, the Norwegian government instituted a tax on CO2 emissions, which motivated Statoil to capture the CO2 emitted from its Sleipner oil and gas field in the North Sea and inject it into an underground aquifer. 1989 The Carbon Capture and Sequestration Technologies Program at MIT initiated Initiated in 1989, the Carbon Capture and Sequestration Technologies Program at MIT conducts research into technologies to capture, utilize, and store CO2 from large stationary sources. It is globally recognized as a leader in the field of carbon capture and storage research. Early 1970s The use of CO2 for commercial EOR began in the US in the early 1970s. The use of CO2 for commercial enhanced oil recovery started in the USA in the early 1970s. There are currently about 120 registered CO2 floods worldwide, almost 85% of which are in the United States and Canada. References Timeline Environmental timelines
Timeline of carbon capture and storage
Engineering
5,299
2,870,720
https://en.wikipedia.org/wiki/Millefleur
Millefleur, millefleurs or mille-fleur (French mille-fleurs, literally "thousand flowers") refers to a background style of many different small flowers and plants, usually shown on a green ground, as though growing in grass. It is essentially restricted to European tapestry during the late Middle Ages and early Renaissance, from about 1400 to 1550, but mainly about 1480–1520. The style had a notable revival by Morris & Co. in 19th century England, being used on original tapestry designs, as well as illustrations from Morris's Kelmscott Press publications. The millefleur style differs from many other styles of floral decoration, such as the arabesque, in that many different sorts of individual plants are shown, and there is no regular pattern. The plants fill the field without connecting or significantly overlapping. In that it also differs from the plant and floral decoration of Gothic page borders in illuminated manuscripts. There is also a rather different style known as millefleur in Indian carpets from about 1650 to 1800. In the 15th century, an elaborate glass making technique was developed; see Millefiori. Murano glass and other glassmakers make pieces, particularly paperweights, that use the motif. Tapestries In the millefleur style the plants are dispersed across the field on a green background representing grass, to give the impression of a flowery meadow, and evenly cover the whole decorated field. At the time they were called verdures in French. They are mostly flowering plants shown as a whole, and in flower, with the coloration of the flowers of a distinct brightness compared to the usually darker background. Many are recognizable as specific species, with varying degrees of realism, but accuracy does not seem to be the point of the depiction. Neither are the flowering plants used to create perspective or depth of field. There are very often animals and sometimes human figures dispersed around the field, often rather small in relation to the plants, and at a similar size to each other, whatever their relative sizes in reality. The tapestries usually include large figures whose meaning is not always apparent, which seems to derive from the division of labour under the guild system, so that the weavers were obliged to repeat figure designs by members of the painters' guild, but could design the backgrounds themselves. Such was the case in Brussels at any rate, after a lawsuit between the two groups in 1476. The subjects are generally secular, but there are some religious survivals. Millefleur style was most popular in late 15th and early 16th century French and Flemish tapestry, with the best known examples including The Lady and the Unicorn and The Hunt of the Unicorn. These are from what has been called the "classic" period, where each "bouquet" or plant is individually designed, improvised by the weavers as they worked, while later tapestries, probably mostly made in Brussels, usually have mirror images of plants on the right and left sides of the piece, suggesting a cartoon re-used twice. The precise origin of the pieces has been much argued about, but the only surviving example whose original payment can be traced was a large heraldic millefleur carpet made for Duke Charles the Bold of Burgundy in Brussels, part of which is now in the Bern Historical Museum. The beginnings of the style may be seen in earlier tapestries.
The famous Apocalypse Tapestry series (Paris, 1377–82) has several backgrounds covered in vegetal motifs, but these are springing from tendrils in the way of illuminated manuscript borders. In fact most of the very large sets do not fully use the style of a meadow of flowers extending right to the top of the picture space. The early Devonshire Hunting Tapestries (1420s) have naturalistic landscape backgrounds, seen from a somewhat elevated viewpoint, so that the lower two-thirds or so of each scene has a millefleur background, but this gives way to forest or sea and sky at the top of the tapestry. The Justice of Trajan and Herkinbald (about 1450) and most of The Hunt of the Unicorn set (about 1500) are similar. From the main period, each tapestry in The Lady and the Unicorn set has three distinct zones of millefleur background: the island containing the figures, where the plants are densely arranged, an upper background zone where they are arranged in vertical bands, and accompany animals at very varied scales, and a lower zone where a single row of plants have slight gaps between them. During the 1800s, the millefleur style was revived and incorporated into numerous tapestry designs by Morris & Co. The company's Pomona (1885) and The Achievement of the Grail (1895–96) tapestries demonstrate an adherence to the medieval millefleur style. Other tapestries such as their The Adoration of the Magi (1890) and The Failure of Sir Gawain (c. 1890s) use the style more liberally, borrowing the flowers' often flat, splayed appearance, but overlapping them and using them as part of a landscape and not as a purely decorative backdrop. The Adoration of the Magi was one of the company's most popular designs, with ten versions woven between 1890 and 1907. Indian carpets The term is also used to describe north Indian carpets, originally of the late Mughal era in the late 17th and 18th century. However, these have large numbers of small flowers in repeating units, often either springing unrealistically from long-ranging twisting stems, or arranged geometrically in repeating bunches or clusters. In this they are essentially different from the irregularly arranged whole plant style of European tapestries, and closer to arabesque styles. The flowers springing from the same stem may be of completely different colours and types. There are two broad groups, one directional and more likely to show whole plants, and one not directional and often just showing stems and flowers. They appear to have been manufactured in Kashmir and present-day Pakistan. They reflect a combination of European influences and underlying Persian-Mughal decorative tradition, and a trend for smaller elements in designs. The style, or styles, were later adopted by Persian weavers, especially for prayer rugs, up to about 1900. Millefiori decoration uses the Italian version of the same word, but is a different style, restricted to glass. Other appearances The millefleur style is sometimes used liberally in Sir Edward Burne-Jones' illustrations for the Kelmscott Press publications, such as in his frontispiece to The Wood Beyond the World (1894). Millefleurs are used in artist Leon Coward's mural The Happy Garden of Life which appeared in the 2016 sci-fi movie 2BR02B: To Be or Naught to Be. The flowers in the mural were adapted and redesigned from those in The Unicorn in Captivity from The Hunt of the Unicorn tapestry series, as part of the mural's religious allusions.
See also Millefiori Notes References Souchal, Geneviève (ed.), Masterpieces of Tapestry from the Fourteenth to the Sixteenth Century: An Exhibition at the Metropolitan Museum of Art, 1974, Metropolitan Museum of Art (New York, N.Y.), Galeries nationales du Grand Palais (France), , 9780870990861, google books Further reading Cavallo, Adolph S., Medieval Tapestries in the Metropolitan Museum of Art, 1993, Metropolitan Museum of Art (New York, N.Y.), , 9780870996443 External links Carpet in the Ashmolean museum Tapestries Visual motifs Ornaments Rugs and carpets
Millefleur
Mathematics
1,566
4,459,380
https://en.wikipedia.org/wiki/Geometric%20topology%20%28object%29
In mathematics, the geometric topology is a topology one can put on the set H of hyperbolic 3-manifolds of finite volume. Use Convergence in this topology is a crucial ingredient of hyperbolic Dehn surgery, a fundamental tool in the theory of hyperbolic 3-manifolds. Definition The following is a definition due to Troels Jorgensen: A sequence {M_i} in H converges to M in H if there are a sequence of positive real numbers ε_i converging to 0, and a sequence of (1+ε_i)-bi-Lipschitz diffeomorphisms f_i, where the domains and ranges of the maps are the ε_i-thick parts of either the M_i's or M. Alternate definition There is an alternate definition due to Mikhail Gromov. Gromov's topology utilizes the Gromov-Hausdorff metric and is defined on pointed hyperbolic 3-manifolds. One essentially considers better and better bi-Lipschitz homeomorphisms on larger and larger balls. This results in the same notion of convergence as above, as the thick part is always connected; thus, a large ball will eventually encompass all of the thick part. On framed manifolds As a further refinement, Gromov's metric can also be defined on framed hyperbolic 3-manifolds. This gives nothing new, but this space can be explicitly identified with torsion-free Kleinian groups with the Chabauty topology. See also Algebraic topology (object) References William Thurston, The geometry and topology of 3-manifolds, Princeton lecture notes (1978-1981). Canary, R. D.; Epstein, D. B. A.; Green, P., Notes on notes of Thurston. Analytical and geometric aspects of hyperbolic space (Coventry/Durham, 1984), 3--92, London Math. Soc. Lecture Note Ser., 111, Cambridge Univ. Press, Cambridge, 1987. 3-manifolds Hyperbolic geometry Topological spaces
Geometric topology (object)
Mathematics
400
48,827,462
https://en.wikipedia.org/wiki/Flore%20du%20Cambodge%2C%20du%20Laos%20et%20du%20Vi%C3%AAtnam
Flore du Cambodge, du Laos et du Viêtnam is a multi-volume flora describing the vascular plants of Cambodia, Laos, and Vietnam, published by the National Museum of Natural History in Paris since the 1960s. It currently consists of 35 volumes. Volumes Volume 35 (2014): Solanaceae Volume 34 (2014): Polygalaceae Volume 33 (2014): Apocynaceae Volume 32 (2004): Myrsinaceae Volume 31 (2003): Gentianaceae Volume 30 (2001): Leguminosae - Papilionoideae - Millettieae Volume 29 (1997): Leguminoseuses, Papilionoidees, Dalbergiees Volume 28 (1995): Gymnospermae Volume 27 (1994): Legumineuses - Desmodiees Volume 26 (1992): Rhoipteleacees, Juglandacees, Thymelaeacees, Proteacees, Primulacees, Styracacees Volume 25 (1990): Dipterocarpacees Volume 24 (1989): Caryophyllales Volume 23 (1987): Legumineuses-Papilionoidees Volume 22 (1985): Bignoniacees Volume 21 (1985): Scrophulariaceae Volume 20 (1983): Pandanaceae, Sparganiaceae, Ruppiaceae, Aponogetonaceae, Smilaceae, Philydraceae, Hanuanaceae, Flagellariaceae, Restionaceae, Centrolepidaceae, Lowiaceae, Xyridaceae Volume 19 (1981): Leguminosae, Mimosoideae Volume 18 (1980): Leguminosae, Cesalpiniodeae Volume 17 (1979): Leguminosae, Phaseoleae Volume 16 (1977): Symplocaceae Volume 15 (1975): Cucurbitaceae Volume 14 (1973): Ochnaceae, Onagraceae, Trapaceae, Balanophoraceae, Rafflesiaceae, Podostemacea, Tristichaceae Volume 13 (1972): Loganiaceae, Buddlejaceae Volume 12 (1970): Hernandiaceae Volume 11 (1970): Flacourtiaceae, Bixaceae, Cochlospermaceae Volume 10 (1969): Combretaceae: O.Lecompte Volume 9 (1969): Campanulaceae Volume 8 (1968): Nyssaceae, Cornaceae, Alangiaceae Volume 7 (1968): Rosaceae (2) Volume 6 (1968): Rosaceae (1) Volume 5 (1967): Umbelliferae, Aizoaceae, Molluginaceae, Passifloraceae Volume 4 (1965): Saxifragaceae, Crypteroniaceae, Droseraceae, Hamamelidaceae, Haloragaceae, Rhizophoraceae, Sonneratiaceae, Punicaceae Volume 3 (1963): Sapotacees Volume 2 (1962): Anacardiaceae, Moringaceae, Connaraceae Volume 1 (1960): Sabiaceae Meetings 1st International symposium on the Flora of Cambodia, Laos and Vietnam (2008) - Phnom Penh, Cambodia. 2nd International symposium on the Flora of Cambodia, Laos and Vietnam (2010) - Hanoi, Vietnam. 3rd International symposium on the Flora of Cambodia, Laos and Vietnam (2015) - Vientiane, Laos. See also Flora of Thailand Flora Malesiana References Florae (publication) Botany in Asia
Flore du Cambodge, du Laos et du Viêtnam
Biology
735
174,292
https://en.wikipedia.org/wiki/Pierre%20Jean%20Georges%20Cabanis
Pierre Jean Georges Cabanis (; 5 June 1757 – 5 May 1808) was a French physiologist, freemason and materialist philosopher. Life Cabanis was born at Cosnac (Corrèze), the son of Jean Baptiste Cabanis (1723–1786), a lawyer and agronomist. At the age of ten, he attended the college of Brives, where he showed great aptitude for study, but his independence of spirit was so great that he was almost constantly in a state of rebellion against his teachers and was finally expelled. He was then taken to Paris by his father and left to carry on his studies at his own discretion for two years. From 1773 to 1775 he travelled in Poland and Germany, and on his return to Paris he devoted himself mainly to poetry. About this time he sent to the Académie française a translation of the passage from Homer proposed for their prize, and, though he did not win, he received so much encouragement from his friends that he contemplated translating the whole of the Iliad. At his father's wish, he gave up writing and decided to engage in a more settled profession, selecting medicine. In 1789 his Observations sur les hôpitaux (Observations on hospitals, 1790) procured him an appointment as administrator of hospitals in Paris, and in 1795 he became professor of hygiene at the medical school of Paris, a post which he exchanged for the chair of legal medicine and the history of medicine in 1799. Partly because of his poor health, he tended not to practise as a physician, his interests lying in the deeper problems of medical and physiological science. During the last two years of Honoré Mirabeau's life, Cabanis was intimately connected with him; Cabanis wrote the four papers on public education which were found among Mirabeau's papers at his death (and Cabanis edited them soon afterwards in 1791). During the illness which terminated his life Mirabeau trusted entirely to Cabanis' professional skills. Of the death of Mirabeau, Cabanis drew up a detailed narrative, intended as a justification of his treatment of the case. He was enthusiastic about the French Revolution and became a member of the Council of Five Hundred and then of the Senate, and the dissolution of the Directory was the result of a motion which he made to that effect. His political career was brief. Hostile to the policy of Napoleon Bonaparte, he rejected every offer of a place under his government. He died at Meulan. His body is buried in the Pantheon and his heart in Auteuil Cemetery in Paris. Works A complete edition of Cabanis's works was begun in 1825, and five volumes were published. His principal work, Rapports du physique et du moral de l'homme (On the relations between the physical and moral aspects of man, 1802), consists in part of memoirs, read in 1796 and 1797 to the institute, and is a sketch of physiological psychology. Psychology is with Cabanis directly linked on to biology, for sensibility, the fundamental fact, is the highest grade of life and the lowest of intelligence. All the intellectual processes are evolved from sensibility, and sensibility itself is a property of the nervous system. The soul is not an entity, but a faculty; thought is the function of the brain. Just as the stomach and intestines receive food and digest it, so the brain receives impressions, digests them, and has as its organic secretion, thought. Alongside this materialism, Cabanis held another principle. He belonged in biology to the vitalistic school of G.E. 
Stahl, and in the posthumous work, Lettre sur les causes premières (1824), the consequences of this opinion became clear. Life is something added to the organism: over and above the universally diffused sensibility there is some living and productive power to which we give the name of Nature. It is impossible to avoid ascribing both intelligence and will to this power. This living power constitutes the ego, which is truly immaterial and immortal. Cabanis did not think that these results were out of harmony with his earlier theory. His work was highly appreciated by the philosopher Arthur Schopenhauer, who called his work "excellent". He was a member of the masonic lodge Les Neuf Sœurs from 1778. In 1786, Cabanis was elected an international member of the American Philosophical Society in Philadelphia. Evolution Cabanis was an early proponent of evolution. In the Encyclopedia of Philosophy it is stated that he "believed in spontaneous generation. Species have evolved through chance mutations ("fortuitous changes") and planned mutation ("man's experimental attempts") which change the structures of heredity." He influenced the work of Jean-Baptiste Lamarck, who referred to Cabanis in his Philosophie Zoologique. Cabanis was an advocate of the inheritance of acquired characteristics; he also developed his own theory of instinct. Cabanis made a statement that recognized a basic understanding of natural selection. Historian Martin S. Staum has written that: In a simple statement of adaptation and selection theory, Cabanis argued that species that have escaped extinction "have had successively to bend and conform to sequences of circumstances, from which apparently were born, in each particular circumstance, other entirely new species, better adjusted to the new order of things." Notes References (This article has the mistake "Pierre Jean George Cabanis" instead of "Pierre Jean Georges Cabanis".) Further reading (This article has the mistake "Pierre Jean George Cabanis" instead of "Pierre Jean Georges Cabanis".) (This article has the mistake "Pierre Jean George Cabanis" instead of "Pierre Jean Georges Cabanis".) 1757 births 1808 deaths People from Corrèze French materialists Members of the Académie Française Members of the Council of Five Hundred Burials at the Panthéon, Paris Les Neuf Sœurs French physiologists French Freemasons Proto-evolutionary biologists Members of the American Philosophical Society
Pierre Jean Georges Cabanis
Biology
1,246
52,740,555
https://en.wikipedia.org/wiki/4-Methoxyestradiol
4-Methoxyestradiol (4-ME2) is an endogenous, naturally occurring methoxylated catechol estrogen and metabolite of estradiol that is formed by catechol O-methyltransferase via the intermediate 4-hydroxyestradiol. It has estrogenic activity similar to that of estrone and 4-hydroxyestrone. See also 2-Methoxyestradiol 2-Methoxyestriol 2-Methoxyestrone 4-Methoxyestrone References External links Metabocard for 4-Methoxyestradiol - Human Metabolome Database Secondary alcohols Estranes Estrogens Ethers Human metabolites Steroid hormones
4-Methoxyestradiol
Chemistry
155
24,200,775
https://en.wikipedia.org/wiki/C15H20O
The molecular formula C15H20O (molar mass: 216.319 g/mol, exact mass: 216.1514 u) may refer to: Curzerene Hexyl cinnamaldehyde Mutisianthol Molecular formulas
C15H20O
Physics,Chemistry
68
5,679,969
https://en.wikipedia.org/wiki/Block%20and%20bleed%20manifold
A Block and bleed manifold is a hydraulic manifold that combines one or more block/isolate valves, usually ball valves, and one or more bleed/vent valves, usually ball or needle valves, into one component for interface with other components (pressure measurement transmitters, gauges, switches, etc.) of a hydraulic (fluid) system. The purpose of the block and bleed manifold is to isolate or block the flow of fluid in the system so the fluid from upstream of the manifold does not reach other components of the system that are downstream. Then they bleed off or vent the remaining fluid from the system on the downstream side of the manifold. For example, a block and bleed manifold would be used to stop the flow of fluids to some component, then vent the fluid from that component’s side of the manifold, in order to effect some kind of work (maintenance/repair/replacement) on that component. Types of valves Block and Bleed A block and bleed manifold with one block valve and one bleed valve is also known as an isolation valve or block and bleed valve; a block and bleed manifold with multiple valves is also known as an isolation manifold. This valve is used in combustible gas trains in many industrial applications. Block and bleed needle valves are used in hydraulic and pneumatic systems because the needle valve allows for precise flow regulation when there is low flow in a non-hazardous environment. Double Block and Bleed (DBB Valves) These valves replace existing traditional techniques employed by pipeline engineers to generate a double block and bleed configuration in the pipeline. Two block valves and a bleed valve are as a unit, or manifold, to be installed for positive isolation. Used for critical process service, DBB valves are for high pressure systems or toxic/hazardous fluid processes. Applications that use DBB valves include instrument drain, chemical injection connection, chemical seal isolation, and gauge isolation. DBB valves do the work of three separate valves (2 isolations and 1 drain) and require less space and have less weight. Cartridge Type Standard Length DBB This type of Double Block and Bleed Valves have a patented design which incorporates two ball valves and a bleed valve into one compact cartridge type unit with ANSI B16.5 tapped flanged connections. The major benefit of this design configuration is that the valve has the same face-to-face dimension as a single block ball valve (as specified in API 6D and ANSI B16.10), which means the valve can easily be installed into an existing pipeline without the need for any pipeline re-working. Three Piece Non Standard Length DBB This type of Double Block and Bleed Valves (DBB Valves) feature the traditional style of flange-by-flange type valve and is available with ANSI B16.5 flanges, hub connections and welded ends to suit the pipeline system it is to be installed in. It features all the benefits of the single unit DBB valve, with the added benefit of a bespoke face-to-face dimension if required. Single Unit DBB This design also has operational advantages, there are significantly fewer potential leak paths within the double block and bleed section of the pipeline. Because the valves are full bore with an uninterrupted flow orifice they have got a negligible pressure drop across the unit. The pipelines where these valves are installed can also be pigged without any problems. There are several advantages in using a Double Block and Bleed Valve. 
Significantly, because all the valve components are housed in a single unit, the space required for the installation is dramatically reduced thus freeing up room for other pieces of essential equipment. Considering the operations and procedures executed before an operator can intervene, the Double Block and Bleed manifold offers further advantages over the traditional hook up. Due to the volume of the cavity between the two balls being so small, the operator is afforded the opportunity to evacuate this space efficiently thereby quickly establishing a safe working environment. References Fluid mechanics Hydraulics Mechanical engineering
Block and bleed manifold
Physics,Chemistry,Engineering
800
68,763,182
https://en.wikipedia.org/wiki/Pendell%C3%B6sung
The Pendellösung effect, or Pendellösung phenomenon, is seen in diffraction when there is a beating in the intensity of electromagnetic waves travelling within a crystal lattice. It was predicted by P. P. Ewald in 1916 and first observed in electron diffraction of magnesium oxide in 1942 by Robert D. Heidenreich, and in X-ray diffraction by Norio Kato and Andrew Richard Lang in 1959. At the exit surface of a photonic crystal (PhC), the intensity of the diffracted wave can be periodically modulated, showing a maximum in either the "positive" (forward-diffracted) or the "negative" (diffracted) direction, depending on the thickness of the crystal slab. The Pendellösung effect in photonic crystals can be understood as a beating phenomenon due to the phase modulation between coexisting plane-wave components propagating in the same direction. This thickness dependence is a direct result of the so-called Pendellösung phenomenon, the periodic exchange of energy inside the crystal between the direct and diffracted beams. The Pendellösung interference effect was predicted by dynamical diffraction theory and also by analogous theories developed for visible light. References Condensed matter physics Metamaterials Photonics
Pendellösung
Physics,Chemistry,Materials_science,Engineering
265
47,622,276
https://en.wikipedia.org/wiki/Knotted%20polymers
Single chain cyclized/knotted polymers are a new class of polymer architecture with a general structure consisting of multiple intramolecular cyclization units within a single polymer chain. Such a structure was synthesized via the controlled polymerization of multivinyl monomers, which was first reported in Dr. Wenxin Wang's research lab. These multiple intramolecularly cyclized/knotted units mimic the characteristics of the complex knots found in proteins and DNA, which provide some elasticity to these structures. Of note, 85% of the elasticity of natural rubber is due to knot-like structures within its molecular chain. An intramolecular cyclization reaction is one in which the growing polymer chain reacts with a vinyl functional group on its own chain, rather than with another growing chain in the reaction system. In this way the growing polymer chain covalently links to itself in a fashion similar to that of a knot in a piece of string. As such, single chain cyclized/knotted polymers consist of many of these links (intramolecularly cyclized), as opposed to other polymer architectures, including branched and crosslinked polymers, that are formed by two or more polymer chains in combination. Linear polymers can also fold into knotted topologies via non-covalent linkages. Knots and slipknots have been identified in naturally evolved polymers such as proteins as well. Circuit topology and knot theory formalise and classify such molecular conformations. Synthesis Deactivation enhanced ATRP A simple modification to atom transfer radical polymerization (ATRP) was introduced in 2007 to kinetically control the polymerization by increasing the ratio of inactive copper(II) catalyst to active copper(I) catalyst. This modified strategy is termed deactivation enhanced ATRP, whereby different ratios of copper(II)/copper(I) are added. Alternatively, a copper(II) catalyst may be used in the presence of small amounts of a reducing agent, such as ascorbic acid, to produce low percentages of copper(I) in situ and to control the ratio of copper(II)/copper(I). Deactivation enhanced ATRP features a decrease of the instantaneous kinetic chain length ν, i.e. the average number of monomer units added to a propagating chain end during each activation/deactivation cycle. The resulting chain growth rate is slowed down enough to allow sufficient control over the reaction, thus greatly increasing the percentage of multivinyl monomers that can be used in the reaction system (even up to 100 percent, i.e. homopolymerization). Polymerization process Typically, single chain cyclized/knotted polymers are synthesized by deactivation enhanced ATRP of multivinyl monomers via a kinetically controlled strategy. There are several main reactions during this polymerization process: initiation, activation, deactivation, chain propagation, intramolecular cyclization and intermolecular crosslinking. The polymerization process is explained in Figure 2. In a similar way to normal ATRP, the polymerization is started by initiation to produce a free radical, followed by chain propagation and a reversible activation/deactivation equilibrium. Unlike the polymerization of single vinyl monomers, in the polymerization of multivinyl monomers the chain propagation occurs between the active centres and one of the vinyl groups of the free monomers. Therefore, multiple unreacted pendent vinyl groups are introduced into the linear primary polymer chains, resulting in a high local/spatial vinyl concentration.
As the chain grows, the propagating centre reacts with its own pendent vinyl groups to form intramolecularly cyclized rings (i.e. intramolecular cyclization). This unique alternating chain propagation/intramolecular cyclization process eventually leads to the single chain cyclized/knotted polymer architecture. Intramolecular cyclization or intermolecular crosslinking It is worth noting that, due to the multiple reactive sites of the multivinyl monomers, many unreacted pendent vinyl groups are introduced into the linear primary polymer chains. These pendent vinyl groups have the potential to react with propagating active centres either from their own polymer chain or from others. Therefore, both intramolecular cyclization and intermolecular crosslinking may occur in this process. Using the deactivation enhanced strategy, a relatively small instantaneous kinetic chain length limits the number of vinyl groups that can be added to a propagating chain end during each activation/deactivation cycle and thus keeps the polymer chains growing in a limited space. In this way, unlike what happens in free radical polymerization (FRP), the formation of huge polymer chains and large-scale combinations at early reaction stages is avoided. Therefore, a small instantaneous kinetic chain length is the prerequisite for further manipulation of intramolecular cyclization or intermolecular crosslinking. Given a small instantaneous kinetic chain length, regulation of the chain dimensions and concentrations leads to distinct reaction types. A low ratio of initiator to monomer results in the formation of longer chains at a lower chain concentration. This scenario no doubt increases the chances of intramolecular cyclization, due to the high local/spatial vinyl concentration within the growth boundary. Although the opportunity for intermolecular reactions can increase as the polymer chains grow, the likelihood of this occurring at the early stage of the reaction is minimal due to the low chain concentration, which is why single chain cyclized/knotted polymers can form. In contrast, a high initiator concentration not only diminishes the chain dimension during the linear-growth phase, thus suppressing intramolecular cyclization, but also increases the chain concentration within the system, so that pendent vinyl groups in one chain are more likely to fall into the growth boundary of another chain. Once the monomers are converted to short chains, the intermolecular combination increases and allows the formation of hyperbranched structures with a high density of branching and vinyl functional groups. Note The monomer concentration is important for the synthesis of single chain cyclized/knotted polymers, but the kinetic chain length is the key determining factor for synthesis. Applications Single chain cyclized polymers consist of multiple cyclized rings which afford them some unique properties, including high density, low intrinsic viscosity, low translational friction coefficients, high glass transition temperatures, and excellent elasticity of the formed network. In particular, an abundance of internal space makes single chain cyclized polymers ideal candidates as efficient cargo carriers. Gene delivery It is well established that the macromolecular structure of nonviral gene delivery vectors alters their transfection efficacy and cytotoxicity.
The cyclized structure has been proven to reduce cytotoxicity and increase circulation time for drug and gene delivery applications. The unique structure of cyclizing chains provides the single chain cyclized polymers a different method of interaction between the polymer and plasmid DNA, and results in a general trend of higher transfection capabilities than branched polymers. Moreover, due to the nature of the single chain structure, this cyclized polymer can “untie” to a linear chain under reducing conditions. Transfection profiles on astrocytes comparing 25 kDa-PEI, SuperFect® and Lipofectamine®2000 and cyclized polymer showed greater efficiency and cell viability whilst maintaining neural cell viability above 80% four days post transfections. See also Polymer architecture Branching (polymer chemistry) Molecular knot Knotted protein References Polymer chemistry
Knotted polymers
Chemistry,Materials_science,Engineering
1,555
4,088,703
https://en.wikipedia.org/wiki/Digital%20handheld%20refractometer
A digital handheld refractometer is an instrument for measuring the refractive index of materials. Principle of operation Most operate on the same general critical-angle principle as a traditional handheld refractometer. The difference is that light from an LED light source is focused on the underside of a prism element. When a liquid sample is applied to the measuring surface of the prism, some of the light is transmitted through the solution and lost, while the remaining light is reflected onto a linear array of photodiodes, creating a shadow line. The refractive index is directly related to the position of the shadow line on the photodiodes. Once the position of the shadow line has been automatically determined by the instrument, the internal software correlates the position to refractive index, or to another unit of measure related to refractive index, and displays a digital readout on an LCD or LED scale. The more elements there are in the photodiode array, the more precise the readings will be, and the easier it will be to obtain readings for emulsions and other difficult-to-read fluids that form fuzzy shadow lines. Digital handheld refractometers are generally more precise than traditional handheld refractometers, but less precise than most benchtop refractometers. They may also require a slightly larger amount of sample to read from, since the sample is not spread thinly against the prism. The result may be displayed in one of various units of measurement: Brix, freezing point, boiling point, concentration, etc. Nearly all digital refractometers feature Automatic Temperature Compensation (for Brix at least). Most have a metal sample well around the prism, which makes it easier to clean sticky samples, and some instruments offer software to prevent extreme ambient light from interfering with readings (the prism area can also be shaded to prevent this). Some instruments are available with multiple scales, or with the ability to input a special scale using known conversion information. There are some digital handheld refractometers that are IP65 (IP Code) water-resistant, and thus washable under a running faucet. See also Refractometer Types Traditional handheld refractometer Abbe refractometer Inline process refractometer References Refractometers
Digital handheld refractometer
Technology,Engineering
466
44,064,649
https://en.wikipedia.org/wiki/Table%20computer
A table computer, or a table PC, or a tabletop is a device class of a full-featured large-display portable all-in-one computer with an internal battery. It can either be used on a table's top, hence the name, or carried around the house. Table computers feature an 18-inch or larger multi-touch touchscreen display, a battery capable of at least 2 hours of autonomous work and a full-featured desktop operating system, such as Windows 10. They are typically shipped with pre-installed multi-user touch-enabled casual games and apps, and typically marketed as family entertainment devices. Manufacturers of some table computers provide a specialized graphical user interface to simplify a simultaneous interaction of multiple users, one example is Aura interface, which is installed in Lenovo IdeaCentre Horizon tabletop. A number of manufacturers released their own versions of tabletops, some prominent examples are HP Envy Rove 20, Dell XPS 18 and Sony VAIO Tap 20. See also Surface computer References Classes of computers Portable computers All-in-one computers
Table computer
Technology
218
32,683,147
https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Zurbenko%20filter
Within statistics, the Kolmogorov–Zurbenko (KZ) filter was first proposed by A. N. Kolmogorov and formally defined by Zurbenko. It is a series of iterations of a moving average filter of length m, where m is a positive, odd integer. The KZ filter belongs to the class of low-pass filters. The KZ filter has two parameters, the length m of the moving average window and the number of iterations k of the moving average itself. It can also be considered as a special window function designed to eliminate spectral leakage. Background A. N. Kolmogorov had the original idea for the KZ filter during a study of turbulence in the Pacific Ocean. Kolmogorov had just received the International Balzan Prize for his 5/3 law in the energy spectra of turbulence. Surprisingly, the 5/3 law was not obeyed in the Pacific Ocean, causing great concern. The standard fast Fourier transform (FFT) was completely fooled by the noisy and non-stationary ocean environment. KZ filtration resolved the problem and enabled proof of Kolmogorov's law in that domain. The filter construction relied on the main concepts of the continuous Fourier transform and their discrete analogues. The algorithm of the KZ filter came from the definition of higher-order derivatives for discrete functions as higher-order differences. Believing that infinite smoothness in the Gaussian window was a beautiful but unrealistic approximation of a truly discrete world, Kolmogorov chose a finitely differentiable tapering window with finite support, and created this mathematical construction for the discrete case. The KZ filter is robust and nearly optimal. Because its operation is a simple moving average (MA), the KZ filter performs well in a missing-data environment, especially in multidimensional time series where missing-data problems arise from spatial sparseness. Another nice feature of the KZ filter is that its two parameters have clear interpretations, so that it can be easily adopted by specialists in different areas. A few software packages for time series, longitudinal and spatial data have been developed in the popular statistical software R, which facilitate the use of the KZ filter and its extensions in different areas. I. Zurbenko's postdoctoral position at UC Berkeley with Jerzy Neyman and Elizabeth Scott provided many ideas for applications, supported by contacts with Murray Rosenblatt, Robert Shumway, Harald Cramér, David Brillinger, Herbert Robbins, Wilfrid Dixon and Emanuel Parzen. Definition KZ Filter Let X(t) be a real-valued time series; the KZ filter with parameters m and k is defined as KZ_{m,k}[X](t) = \sum_{s=-k(m-1)/2}^{k(m-1)/2} a_s^{m,k} X(t+s), where the coefficients a_s^{m,k} = C_s^{m,k} / m^k are given by the polynomial coefficients C_r^{m,k} obtained from the equation (1 + z + \cdots + z^{m-1})^k = \sum_{r=0}^{k(m-1)} z^r C_{r-k(m-1)/2}^{m,k}. From another point of view, the KZ filter with parameters m and k can be defined as k time iterations of a moving average (MA) filter of m points. It can be obtained through iterations: the first iteration applies an MA filter of m points to the process X(t); the second iteration applies the same MA operation to the result of the first iteration; generally, the kth iteration is an application of the MA filter to the (k − 1)th iteration. The iteration of a simple MA operation is very convenient computationally. Properties The impulse response function of a product of filters is the convolution of their impulse responses. The coefficients of the KZ filter, a_s^{m,k}, can be interpreted as a distribution obtained by the convolution of k uniform discrete distributions on the interval [−(m − 1)/2, (m − 1)/2], where m is an odd integer. Therefore, the coefficients form a tapering window which has finite support of k(m − 1) + 1 points.
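To make the iterated moving-average construction concrete, the following is a minimal NumPy sketch; the function names, the NaN-based edge handling and the example data are illustrative assumptions and are not taken from any of the packages cited in this article:

```python
import numpy as np

def moving_average(x, m):
    """Centered moving average of odd length m that simply ignores NaNs,
    mimicking the KZ filter's 'average the available points' handling of
    missing data described in the article."""
    half = m // 2
    padded = np.concatenate([np.full(half, np.nan), x, np.full(half, np.nan)])
    windows = np.lib.stride_tricks.sliding_window_view(padded, m)
    return np.nanmean(windows, axis=1)

def kz_filter(x, m, k):
    """Kolmogorov-Zurbenko filter KZ(m, k): k iterations of a length-m moving average."""
    y = np.asarray(x, dtype=float)
    for _ in range(k):
        y = moving_average(y, m)
    return y

# Example: smooth a noisy low-frequency signal with m = 11 and k = 3.
t = np.arange(1000)
x = np.sin(2 * np.pi * t / 200) + np.random.normal(0, 1, t.size)
smoothed = kz_filter(x, m=11, k=3)
```

With k = 1 this reduces to a plain moving average; increasing k tapers the effective window and suppresses the side lobes discussed in the Properties section.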
The KZ filter has its main weight concentrated on a length of m\sqrt{k}, with weights vanishing to zero outside. The impulse response function of the KZ filter has k − 2 continuous derivatives and is asymptotically Gaussian. The vanishing derivatives at the edges of the impulse response make it a sharply declining function, which results in high frequency resolution. The energy transfer function of the KZ filter is |KZ_{m,k}(\nu)|^2 = \left[\sin(\pi m \nu)/(m \sin(\pi \nu))\right]^{2k}. It is a low-pass filter whose cut-off frequency decreases as m and k increase. Compared to an MA filter, the KZ filter has much better performance in terms of attenuating the frequency components above the cut-off frequency. The KZ filter is essentially a repetitive MA filter. It is easy to compute and allows for a straightforward way to deal with missing data. The main piece of this procedure is a simple average of the available information within the interval of m points, disregarding the missing observations within the interval. The same idea can be easily extended to spatial data analysis. It has been shown that missing values have very little effect on the transfer function of the KZ filter. Iterating k times raises the MA transfer function to the kth power and reduces the side-lobe values accordingly, making the filter behave nearly as an ideal low-pass filter. For practical purposes, a choice of k within the range 3 to 5 is usually sufficient, whereas a regular MA (k = 1) provides strong spectral leakage of about 5%. Optimality The KZ filter is robust and nearly optimal. Because its operation is a simple moving average, the KZ filter performs well in a missing-data environment, especially in multidimensional time and space where missing data can cause problems arising from spatial sparseness. Another nice feature of the KZ filter is that its two parameters each have clear interpretations, so that it can be easily adopted by specialists in different areas. Software implementations for time series, longitudinal and spatial data have been developed in the popular statistical package R, which facilitate the use of the KZ filter and its extensions in different areas. The KZ filter can be used to smooth the periodogram. For a class of stochastic processes, Zurbenko considered the worst-case scenario in which the only information available about a process is its spectral density and its smoothness, quantified by a Hölder condition. He derived the optimal bandwidth of the spectral window, which depends upon the underlying smoothness of the spectral density. Zurbenko compared the performance of the Kolmogorov–Zurbenko (KZ) window to other commonly used spectral windows, including the Bartlett window, Parzen window, Tukey–Hamming window and uniform window, and showed that the result from the KZ window is closest to optimal. Developed as an abstract discrete construction, KZ filtration is robust and statistically nearly optimal. At the same time, because of its natural form, it has computational advantages, permitting the analysis of space/time problems with data that have as much as 90% of observations missing and that represent a messy combination of several different physical phenomena. Clear answers can often be found for "unsolvable" problems. Unlike some mathematical developments, the KZ filter is adaptable by specialists in different areas because it has a clear physical interpretation behind it. Extensions Extensions of the KZ filter include the KZ adaptive (KZA) filter, the spatial KZ filter and the KZ Fourier transform (KZFT). Yang and Zurbenko provided a detailed review of the KZ filter and its extensions.
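As a numerical illustration of the transfer-function behaviour just described, the squared frequency response can be computed directly from the filter coefficients. This sketch is independent of the R packages mentioned below and uses only NumPy; the helper names are illustrative:

```python
import numpy as np

def kz_coefficients(m, k):
    """Coefficients of KZ(m, k): the length-m uniform window convolved with itself k times."""
    window = np.ones(m) / m
    coeffs = window.copy()
    for _ in range(k - 1):
        coeffs = np.convolve(coeffs, window)
    return coeffs  # length k*(m - 1) + 1, sums to 1

def kz_power_transfer(m, k, n_freq=2048):
    """Squared magnitude of the KZ(m, k) frequency response on [0, 0.5] cycles per sample."""
    coeffs = kz_coefficients(m, k)
    n = np.arange(coeffs.size)
    freqs = np.linspace(0.0, 0.5, n_freq)
    response = np.abs(np.exp(-2j * np.pi * np.outer(freqs, n)) @ coeffs)
    return freqs, response ** 2

# Side lobes shrink rapidly with k: compare a plain moving average (k = 1) with KZ(11, 3).
freqs, p_ma = kz_power_transfer(m=11, k=1)
_, p_kz = kz_power_transfer(m=11, k=3)
```

Plotting p_ma against p_kz should show the few-percent side lobes of the plain moving average collapsing to negligible levels for k = 3, consistent with the leakage figures quoted above.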
R packages are also available to implement KZ filtration. KZFT The KZFT filter is designed for the reconstruction of periodic signals or seasonality covered by heavy noise. Seasonality is one of the key forms of nonstationarity often seen in time series; it is usually defined as the periodic components within the time series. Spectral analysis is a powerful tool for analyzing time series with seasonality. If a process is stationary, its spectrum is continuous, and it can be treated parametrically for simplicity of prediction. If a spectrum contains lines, it indicates that the process is not stationary and contains periodicities. In this situation, parametric fitting generally results in seasonal residuals with reduced energies; this is due to the season-to-season variations. To avoid this problem, nonparametric approaches, including band-pass filters, are recommended. The Kolmogorov–Zurbenko Fourier transform (KZFT) is one such filter. The purpose of many applications is to reconstruct a high-resolution wavelet from a noisy environment. It has been proven that the KZFT provides the best possible resolution in the spectral domain. It permits the separation of two signals at the theoretically smallest possible distance, or the reconstruction of periodic signals covered by heavy noise and observed irregularly in time. Because of this, the KZFT provides a unique opportunity for various applications. A computer algorithm implementing the KZFT has been provided in the R software. The KZFT is essentially a band-pass filter that belongs to the category of short-time Fourier transforms (STFT) with a unique time window. The KZFT readily uncovers small deviations from the constant spectral density of white noise produced by a computer random-number generator; such computer-generated random sequences become predictable in the long run. Kolmogorov complexity provides the opportunity to generate unpredictable sequences of random numbers. Formally, given a process X(t), the KZFT filter with parameters m and k, computed at frequency \nu_0, produces the output process KZFT_{m,k,\nu_0}[X](t) = \sum_{s=-k(m-1)/2}^{k(m-1)/2} a_s^{m,k} X(t+s) e^{-2\pi i \nu_0 s}, where the coefficients a_s^{m,k} are the same polynomial coefficients as in the KZ filter. The KZFT_{m,k,\nu_0} filter is thus equivalent, up to a phase factor, to the application of the KZ_{m,k} filter to the process X(t) e^{-2\pi i \nu_0 t}. Similarly, the KZFT filter can be obtained through iterations in the same way as the KZ filter. Averaging the square of the KZFT in time over S periods of the frequency \nu_0 provides an estimate of the squared amplitude of the wave at frequency \nu_0, or the KZ periodogram (KZP), based on 2Sp_0 observations (p_0 = 1/\nu_0) around moment t. The transfer function of the KZFT, provided in Figure 2, has a very sharp frequency resolution, with a bandwidth limited by the parameters m and k. For a complex-valued process the outcome is unchanged; for a real-valued process, the energy is distributed evenly over the real and imaginary parts. In other words, the filter reconstructs a cosine or sine wave at the same frequency \nu_0, and it follows that it correctly reconstructs the amplitude and phase of an unknown wave with frequency \nu_0. The figure below provides the power transfer function of KZFT filtration. It clearly displays that the filter captures the frequency of interest \nu_0 = 0.4 and provides practically no spectral leakage from side lobes, which are controlled by the parameter k of the filtration. For practical purposes, a choice of k within the range 3–5 is usually sufficient, whereas a regular FFT (k = 1) provides strong leakage of about 5%. Example: a simulated signal with normal random noise N(0,16) was used to test the KZFT algorithm's ability to accurately determine spectra of datasets with missing values.
For practical considerations, a missing-value percentage of p = 70% was used to determine whether the spectrum could continue to capture the dominant frequencies. Using a wide window length of m = 600 and k = 3 iterations, the adaptively smoothed KZP algorithm was used to determine the spectrum for the simulated longitudinal dataset. It is apparent in Figure 3 that the dominant frequencies of 0.08 and 0.10 cycles per unit time are identifiable as the signal's inherent frequencies. KZFT reconstruction of the original signal embedded in the high noise of longitudinal observations (missing rate 60%). The KZFT filter in the KZA package of the R software has a parameter f = frequency. By defining this parameter for each of the known dominant frequencies found in the spectrum, the KZFT filter with parameters m = 300 and k = 3 was used to reconstruct the signal about each frequency (0.08 and 0.10 cycles per unit time). The reconstructed signal was determined by applying the KZFT filter twice (once about each dominant frequency) and then summing the results of each filter. The correlation between the true signal and the reconstructed signal was 96.4%, as displayed in Figure 4. The original observations give no hint of the complex, hidden periodicity, which was perfectly reconstructed by the algorithm. Raw data frequently contain hidden frequencies. Combinations of a few fixed-frequency waves can complicate the recognition of the mixture of signals, but such mixtures still remain predictable over time. Publications show that atmospheric pressure contains hidden periodicities resulting from the gravitational force of the moon and the daily period of the sun. The reconstruction of these periodic signals of atmospheric tidal waves allows for an explanation and prediction of many anomalies present in extreme weather. Similar tidal waves must exist on the sun, resulting from the gravitational force of the planets. The rotation of the sun around its axis will cause a current, similar to the equatorial current on the earth. Perturbations or eddies around the current will cause anomalies on the surface of the sun. Horizontal rotational eddies in highly magnetic plasma will create a vertical explosion which will transport deeper, hotter plasma to above the surface of the sun. Each planet creates a tidal wave with a specific frequency on the sun. At times any two of the waves will occur in phase, and at other times out of phase. The resulting amplitude will oscillate with a difference frequency. The estimation of the spectra of sunspot data using the DZ algorithm provides two sharp frequency lines with periodicities close to 9.9 and 11.7 years. These frequency lines can be considered as difference frequencies caused by Jupiter and Saturn (9.9) and by Venus and Earth (11.7). The difference frequency between 9.9 and 11.7 yields a frequency with a 64-year period. All of these periods are identifiable in sunspot data. The 64-year period component is currently in a declining mode; this decline may cause a cooling effect on the earth in the near future. An examination of the joint effect of multiple planets may reveal some long periods in sun activity and help explain climate fluctuations on earth. KZA The adaptive version of the KZ filter, called the KZ adaptive (KZA) filter, was developed to search for breaks in nonparametric signals covered by heavy noise. The KZA filter first identifies potential time intervals when a break occurs.
It then examines these time intervals more carefully by reducing the window size, so that the resolution of the smoothed outcome increases. As an example of break-point detection, we simulate a long-term trend containing a break buried in seasonality and noise. Figure 2 is a plot of a seasonal sine wave with an amplitude of 1 unit, normally distributed noise, and a base signal with a break. To make things more challenging, the base signal contains an overall downward trend of 1 unit and an upward break of 0.5 units. The downward trend and break are hardly visible in the original data. The base signal is a step function. The application of a low-pass smoothing filter to the original data results in over-smoothing of the break, as shown in Figure 6; the position of the break is no longer obvious. The application of the adaptive version of the KZ filter (KZA) finds the break, as shown in Figure 5b. The construction of KZA is based on an adaptive version of the iterated smoothing filter KZ. The idea is to change the size of the filtering window based on the trends found with KZ. This causes the filter to zoom in on the areas where the data are changing; the more rapid the change, the tighter the zoom. The first step in constructing the KZA is to apply the KZ filter, where k is the number of iterations and q is the filter length; the result is an iterated moving average of the original data. This result is used to build an adaptive version of the filter. The filter is composed of a head and a tail (qf and qb respectively, with f for head and b for tail) that adjust in size in response to the data, effectively zooming in on regions where the data are changing rapidly. The head qf shrinks in response to the break in the data. A difference vector built from the KZ output is used to find the discrete equivalent of the derivative. This result determines the sizes of the head and the tail (qf and qb respectively) of the filtering window. If the slope is positive, the head will shrink and the tail will expand to full size; if the slope is negative, the head of the window will be full sized while the tail will shrink. Detailed code for KZA is available. The KZA algorithm has all of the typical advantages of a nonparametric approach; it does not require any specific model of the time series under investigation. It searches for sudden changes in a low-frequency signal of any nature covered by heavy noise. KZA shows very high sensitivity for break detection, even with a very low signal-to-noise ratio, and the accuracy of the detection of the time of the break is also very high. The KZA algorithm can be applied to restore noisy two-dimensional images. This could be a two-level function f(x,y), such as a black-and-white picture damaged by strong noise, or a multilevel color picture. KZA can be applied line by line to detect the break (change of color); the break points at different lines are then smoothed by the regular KZ filter. A demonstration of spatial KZA is provided in Figure 7. Sharp frequency lines in spectra can be determined by an adaptively smoothed periodogram. The central idea of this algorithm is to adaptively smooth the logarithm of a KZ periodogram. The range of smoothing is provided by some fixed percentage of conditional entropy out of the total entropy. Roughly speaking, the algorithm operates uniformly on an information scale rather than a frequency scale.
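Returning to the KZA break-detection mechanism described above, the sketch below mimics its basic idea only loosely: the averaging window shrinks on the side facing a rapid change in the KZ-smoothed signal. The exact shrinking rule and all names here are illustrative assumptions and differ from the actual kza R package implementation:

```python
import numpy as np

def kz_smooth(x, m, k):
    """KZ(m, k) smoothing via convolution with the k-times iterated uniform window."""
    window = np.ones(m) / m
    coeffs = window.copy()
    for _ in range(k - 1):
        coeffs = np.convolve(coeffs, window)
    return np.convolve(x, coeffs, mode="same")

def kza_sketch(x, q, k=3):
    """Loose sketch of the KZA idea: the head/tail of the averaging window contract
    where the KZ-smoothed signal changes rapidly, 'zooming in' on a possible break."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = kz_smooth(x, 2 * q + 1, k)
    d = np.zeros(n)
    d[q:n - q] = np.abs(z[2 * q:] - z[:n - 2 * q])   # local change over the window span
    d_max = d.max() if d.max() > 0 else 1.0
    slope = np.gradient(d)                            # discrete analogue of the derivative
    out = np.empty(n)
    for t in range(n):
        shrunk = max(1, int(round(q * (1.0 - d[t] / d_max))))
        head = shrunk if slope[t] > 0 else q          # head contracts when the change is growing
        tail = shrunk if slope[t] < 0 else q          # tail contracts when the change is fading
        out[t] = x[max(0, t - tail):min(n, t + head + 1)].mean()
    return out

# Example: a small upward break of 0.5 hidden in noise, as in the article's simulation.
t = np.arange(2000)
signal = np.where(t < 1000, 0.0, 0.5) + np.random.normal(0, 1, t.size)
detected = kza_sketch(signal, q=150, k=3)
```

Plotting detected against t should show the step near t = 1000 rendered more sharply than a plain KZ smoothing with the same window would give, which is the behaviour the adaptive window is meant to achieve.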
For parameter k = 1 in the KZP, this adaptively smoothed periodogram algorithm is also known as the DiRienzo–Zurbenko algorithm and is provided in software. Spatial KZ filter The spatial KZ filter can be applied to a variable recorded in time and space. The parameters of the filter can be chosen separately in time and space; usually, physical reasoning suggests what scale of averaging is reasonable in space and what scale is reasonable in time. The parameter k controls the sharpness of the resolution of the filter, or the suppression of frequency leakage. Algorithms for the spatial KZ filter are available in R software. The outcome time parameter can be treated as virtual time, and images of the results of filtration in space can then be displayed as a "movie" in virtual time. As a demonstration, a 3D spatial KZ filter can be applied to world temperature records T(t, x, y) as a function of time t, longitude x and latitude y. To select the global climate-fluctuation component, parameters of 25 months for time t and 3° for longitude and latitude were chosen for KZ filtration, and the parameter k was chosen equal to 5 to accommodate the resolution of the scales. A single slide of the outcome "movie" is provided in Figure 8 below. The standard average cosine-square temperature distribution law along latitudes was subtracted to identify fluctuations of climate in time and space. Anomalies of temperature fluctuations from the cosine-square law over the globe can be seen for 2007; the temperature anomalies are displayed over the globe using the scale provided on the right of the figure. The figure displays a very high positive anomaly over Europe and North Africa, which has been extending over the last 100 years. The absolute humidity variable is responsible for major regional climate changes, as displayed recently by Igor Zurbenko and Devin Smith in Kolmogorov–Zurbenko filters in spatiotemporal analysis. These anomalies change slowly in time in the outcome "movie" of KZ filtration, and a slow intensification of the observed anomalies was identified over time. Fluctuations at different scales, such as the El Niño scale and others, can also be identified by spatial KZ filtration; a high-definition "movie" of those scales is provided over North America. Different scales can be selected by KZ filtration for different variables, and the corresponding multivariate analysis can provide highly efficient results for investigating an outcome variable over other covariates. The resolution of the KZ filter performs exceptionally well compared to conventional methods and is in fact computationally optimal. Implementations W. Yang and I. Zurbenko. kzft: Kolmogorov–Zurbenko Fourier transform and application. R package, 2006. B. Close and I. Zurbenko. kza: Kolmogorov–Zurbenko adaptive algorithm for image detection. R package, 2016 (https://cran.r-project.org/web/packages/kza/) KZ and KZA Java implementation for 1-dimensional arrays by Andreas Weiler and Michael Grossniklaus (University of Konstanz, Germany) (https://web.archive.org/web/20140914054417/http://dbis.uni-konstanz.de/research/social-media-stream-analysis/) Python implementation of KZ, KZFT and KZP by Mathieu Schopfer (University of Lausanne, Switzerland) (https://github.com/MathieuSchopfer/kolmogorov-zurbenko-filter) References Statistical signal processing Filter theory
Kolmogorov–Zurbenko filter
Engineering
4,290
565,536
https://en.wikipedia.org/wiki/Bathypelagic%20zone
The bathypelagic zone or bathyal zone (from Greek βαθύς (bathýs), deep) is the part of the open ocean that extends from a depth of below the ocean surface. It lies between the mesopelagic above and the abyssopelagic below. The bathypelagic is also known as the midnight zone because of the lack of sunlight; this feature does not allow for photosynthesis-driven primary production, preventing growth of phytoplankton or aquatic plants. Although larger by volume than the photic zone, human knowledge of the bathypelagic zone remains limited by ability to explore the deep ocean. Physical characteristics The bathypelagic zone is characterized by a nearly constant temperature of approximately and a salinity range of 33-35 g/kg. This region has little to no light because sunlight does not reach this deep in the ocean and bioluminescence is limited. The hydrostatic pressure in this zone ranges from 100-400 atmospheres (atm) due to the increase of 1 atm for every 10 m depth. It is believed that these conditions have been consistent for the past 8000 years. This ocean depth spans from the edge of the continental shelf down to the top of the abyssal zone, and along continental slope depths. The bathymetry of the bathypelagic zone consists of limited areas where the seafloor is in this depth range along the deepest parts of the continental margins, as well as seamounts and mid-ocean ridges. The continental slopes are mostly made up of accumulated sediment, while seamounts and mid-ocean ridges contain large areas of hard substrate that provide habitats for bathypelagic fishes and benthic invertebrates. Although currents at these depths are very slow, the topography of seamounts interrupts the currents and creates eddies that retain plankton in the seamount region, thus increasing fauna nearby as well Hydrothermal vents are also a common feature in some areas of the bathypelagic zone and are primarily formed from the spreading of Earth's tectonic plates at mid-ocean ridges. As the bathypelagic region lacks light, these vents play an important role in global ocean chemical processes, thus supporting unique ecosystems that have adapted to utilize chemicals as energy, via chemoautotrophy, instead of sunlight, to sustain themselves. In addition, hydrothermal vents facilitate precipitation of minerals on the seafloor, making them regions of interest for deep-sea mining. Biogeochemistry Many of the biogeochemical processes in the bathypelagic region are dependent upon the input of organic matter from the overlying epipelagic and mesopelagic zones. This organic material, sometimes called marine snow, sinks in the water column or is transported within downward convected water masses such as the Thermohaline Circulation. Hydrothermal vents also deliver heat and chemicals such as sulfide and methane. These chemicals can be utilized to sustain metabolism by organisms in the region. Our understanding of these biogeochemical processes has historically been limited due to the difficulty and cost of collecting samples from these ocean depths. Other technological challenges, such as measuring microbial activity under the pressure conditions experienced in the bathypelagic zone, have also restricted our knowledge of the region. Although scientific advancements have increased our understanding over the past several decades, many aspects remain a mystery. One of the major areas of current research is focused on understanding carbon remineralization rates in the region. 
Prior studies have struggled to quantify the rates at which prokaryotes in this region remineralize carbon because previously developed techniques may not be adequate for this region, and indicate remineralization rates much higher than expected. Further work is needed to explore this question, and may require revisions to our understanding of the global carbon cycle. Particulate organic matter Organic material from primary production in the epipelagic zone, and to a far lesser extent, organic inputs from terrestrial sources, make up a majority of the Particulate Organic Matter (POM) in the ocean. POM is delivered to the bathypelagic zone via sinking copepod fecal pellets and dead organisms; these parcels of organic matter fall through the water column and deliver organic carbon, nitrogen, and phosphorus, to organisms that live below the photic zone. These parcels are sometimes referred to as marine snow or ocean dandruff. This is also the dominant delivery mechanism of food to organisms in the bathypelagic zone because there is no sunlight for photosynthesis, with chemoautotrophy playing a more minor role as far as we know. As POM sinks through the water column, it is consumed by organisms which deplete it of nutrients. The size and density of these particles affect their likelihood of reaching organisms in the bathypelagic zone. Smaller parcels of POM often become aggregated together as they fall, which quickens their descent and prohibits their consumption by other organisms, increasing their likelihood of reaching lower depths. The density of these particles may be increased in some regions where minerals associated with some forms of phytoplankton, such as biogenic silica and calcium carbonate "ballast" resulting in more rapid transport to deeper depth. Carbon A majority of organic carbon is produced in the epipelagic zone, with a small portion transported deeper into the ocean interior. This process, known as the biological pump, plays a large role in the sequestration of carbon from the atmosphere into the ocean. Organic carbon is primarily exported to the bathypelagic zone in the form of particulate organic carbon (POC) and dissolved organic carbon (DOC). POC is the largest component of organic carbon delivered to the bathypelagic zone; it primarily takes the form of fecal pellets and dead organisms that sink out of the surface waters and fall toward the ocean floor. Regions with higher primary productivity where particles are able to sink quickly, such as equatorial upwelling zones and the Arabian Sea, have the greatest amount of POC delivery to the bathypelagic zone. The vertical mixing of DOC-rich surface waters is also a process that delivers carbon to the bathypelagic zone, however, it constitutes a substantially smaller portion of overall transport than POC delivery. DOC transport occurs most readily in regions with high rates of ventilation or ocean turnover, such as the interior of gyres or deep water formation sites along the thermohaline circulation. Calcium carbonate dissolution The region in the water column at which calcite dissolution begins to occur rapidly, known as the lysocline, is typically located near the base bathypelagic zone at approximately 3,500 m depth, but varies among ocean basins. The lysocline lies below the saturation depth (the transition to undersaturated conditions with respect to calcium carbonate) and above the carbonate compensation depth (below which there is no calcium carbonate preservation). 
In a supersaturated environment, the tests of calcite-forming organisms are preserved as they sink toward the sea floor, resulting in sediments with relatively high amounts of CaCO3. However, as depth and pressure increase and temperature decreases, the solubility of calcium carbonate also increases, which results in more dissolution and less net transport to the deeper, underlying seafloor. As a result of this rapid change in dissolution rates, sediments in the bathypelagic region vary widely in CaCO3 content and burial. Ecology The ecology of the bathypelagic ecosystem is constrained by its lack of sunlight and primary producers, with limited production of microbial biomass via autotrophy. The trophic networks in this region rely on particulate organic matter (POM) that sinks from the epipelagic and mesopelagic water, and oxygen inputs from the thermohaline circulation. Despite these limitations, this open-ocean ecosystem is home to microbial organisms, fish, and nekton. Microbial ecology A comprehensive understanding of the inputs driving the microbial ecology in the bathypelagic zone is lacking due to limited observational data, but has been improving with advancements in deep-sea technology. A majority of our knowledge of ocean microbial activity comes from studies of the shallower regions of the ocean because it is easier to access, and it was previously assumed that deeper water did not have suitable physical conditions for diverse microbial communities. The bathypelagic zone receives inputs of organic material and POM from the surface ocean on the order of 1-3.6 Pg C/year. Prokaryote biomass in the bathypelagic is dependent and thus correlated with the amount of sinking POM and organic carbon availability. These essential organic carbon inputs for microbes typically decrease with depth as they are utilized while sinking to the bathypelagic. Microbial production varies over six orders of magnitude based on resource availability in a given area. Prokaryote abundance can range from 0.03-2.3x105 cells ml−1, and have population turnover times that can range from 0.1–30 years. Archaea make up a larger portion of the total prokaryote cell abundance, and different groups have different growth needs, with some archaea groups for example utilizing amino acid groups more readily than others. Some archaea like Crenarchaeota have Crenarchaeota 16S rRNA and archaeal amoA gene abundances correlated to dissolved inorganic carbon (DIC) fixation. The utilization of DIC is thought to be fueled by the oxidation of ammonium and is one form of chemoautotrophy. Based on regional variation and differences in prokaryote abundance, heterotrophic prokaryote production, and particulate organic carbon (POC) inputs to the bathypelagic zone. Research to quantify bacterial-consuming grazers, like heterotrophic eukaryotes, has been limited by difficulties in sampling. Oftentimes organisms do not survive being brought to the surface due to experiencing drastic pressure changes in a short amount of time. Work is underway to quantify cell abundance and biomass, but due to poor survival, it is difficult to get accurate counts. In more recent years there has been an effort to categorize the diversity of the eukaryotic assemblages in the bathypelagic zone using methods to assess the genetic compositions of microbial communities based on supergroups, which is a way to classify organisms that have common ancestry. 
Some important groups of bacterial grazers include Rhizaria, Alveolata, Fungi, Stramenopiles, Amoebozoa, and Excavata (listed from most to least abundant), with the remaining composition classified as uncertain or other. Viruses influence biogeochemical cycling through the role they play in marine food webs. Their overall abundance can be up to two orders of magnitude lower than the mesopelagic zone, however, there is often high viral abundance found around deep-sea hydrothermal vents. The magnitude of their impacts on biological systems is demonstrated by the varying range of viral-to-prokaryote abundance ratios ranging from 1-223, this indicates that there are the same amount or more viruses than prokaryotes. Fauna Fish ecology Despite the lack of light, vision plays a role in life within the bathypelagic with bioluminescence a trait among both nektonic and planktonic organisms. In contrast to organisms in the water column, benthic organisms in this region tend to have limited to no bioluminescence. The bathypelagic zone contains sharks, squid, octopuses, and many species of fish, including deep-water anglerfish, gulper eel, amphipods, and dragonfish. The fish are characterized by weak muscles, soft skin, and slimy bodies. The adaptations of some of the fish that live there include small eyes and transparent skin. However, this zone is difficult for fish to live in since food is scarce; resulting in species evolving slow metabolic rates in order to conserve energy. Occasionally, large sources of organic matter from decaying organisms, such as whale falls, create a brief burst of activity by attracting organisms from different bathypelagic communities. Diel vertical migration Some bathypelagic species undergo vertical migration, which differs from the diel vertical migration of mesopelagic species in that it is not driven by sunlight. Instead, the migration of bathypelagic organisms is driven by other factors, most of which remain unknown. Some research suggests the movement of species within the overlying pelagic region could prompt individual bathypelagic species to migrate, such as Sthenoteuthis sp., a species of squid. In this particular example, Sthenoteuthis sp. appears to migrate individually over the course of ~4–5 hours towards the surface and then form into groups. While in most regions migration patterns can be driven by predation, in this particular region, the migration patterns are not believed to result solely from predator-prey relations. Instead, these relations are commensalistic, with the species who remain in the bathypelagic benefitting from the POM mixing caused by the upward movement of another species. In addition, the vertical migrating species' timing bathypelagic appears linked to the lunar cycle. However, the exact indicators causing this timing are still unknown. Research and exploration This region is understudied due to a lack of data/observations and difficulty of access (i.e. cost, remote locations, extreme pressure). Historically in oceanography, continental margins were the most sampled and researched due to their relatively easy access. However, more recently locations further offshore and at greater depths, such as ocean ridges and seamounts, are being increasingly studied due to advances in technology and laboratory methods, as well as collaboration with industry. 
The first discovery of communities subsisting off of the chemical energy in hydrothermal vents was aboard an expedition in 1977 led by Jack Corliss, an oceanographer from Oregon State University. More recent advancements include remotely operated vehicles (ROVs), autonomous underwater vehicles (AUVs), and independent gliders and floats. Specific technologies and research projects SERPENT Project Ocean Twilight Zone (OTZ) Project DEEP SEARCH Project DEEPEND Project AUV Sentry ROV Jason Hybrid ROV Nereus AUV Autosub Long Range Climate change The oceans act as a buffer for anthropogenic climate change due to their ability to take up atmospheric CO2 and absorb heat from the atmosphere. However, the ocean's ability to do so will be negatively affected as atmospheric CO2 concentrations continue to rise and global temperatures continue to warm. This will lead to changes such as deoxygenation, ocean acidification, temperature increase, and carbon sequestration decrease, among other physical and chemical alterations. These perturbations may have significant impacts on the organisms that dwell in the bathypelagic region and the properties that deliver organic carbon to the deep sea. Carbon storage The bathypelagic zone currently acts as a significant reservoir for carbon because of its sheer volume and the century to millennial timescales these waters are isolated from the atmosphere, this ocean zone plays an important role in moderating the effects of anthropogenic climate change. The burial of particulate organic carbon (POC) in the underlying sediments via the biological carbon pump, and the solubility pump of dissolved inorganic carbon (DIC) into the ocean interior via the thermohaline conveyor are key processes for removing excess atmospheric carbon. However, as atmospheric CO2 concentrations and global temperatures continue to rise, the efficiency at which the bathypelagic will store and bury the influx of carbon will most likely decrease. While some regions may experience an increase in POC input, such as Arctic regions where increased periods of minimal sea ice coverage will increase the downward flux of carbon from the surface oceans, overall, there will likely be less carbon sequestered to the bathypelagic region. References External links Woods Hole Oceanographic Institution - Midnight Zone Oregon Coast Aquarium OceanScape - Midnight Zone Oceanography
Bathypelagic zone
Physics,Environmental_science
3,302
59,737,151
https://en.wikipedia.org/wiki/Laccaria%20proxima
Laccaria proxima is a species of edible mushroom in the genus Laccaria found in the conifer forests of California, as well as in eastern and northern North America. References External links Edible fungi proxima Fungus species
Laccaria proxima
Biology
47
55,486,277
https://en.wikipedia.org/wiki/NGC%202032
NGC 2032 (also known as ESO 56-EN160 and the Seagull Nebula) is an emission nebula in the constellation Dorado, near the supershell LMC-4; together with NGC 2029, NGC 2035, and NGC 2040 it forms part of a larger nebular complex. It was first discovered by James Dunlop on 27 September 1826, and John Herschel recorded it again on 2 November 1834. NGC 2032 is located in the Large Magellanic Cloud. References ESO objects 2032 Emission nebulae Supernova remnants Astronomical objects discovered in 1826 Large Magellanic Cloud Dorado
NGC 2032
Astronomy
121
48,499,549
https://en.wikipedia.org/wiki/Schoeller-Bleckmann%20Oilfield%20Equipment
Schoeller-Bleckmann Oilfield Equipment AG is an Austrian engineering company that specializes in the production of high-precision components and equipment for the oil and gas industry. SBO products include drill bits, downhole tools, and other specialized equipment used in the exploration and production of oil and gas. Background The company is a former subdivision of the Schoeller-Bleckmann group, a manufacturer primarily of steel products that was nationalised in 1946; it was spun off in the period 1993–1997. The company is based in Ternitz, south-west of Vienna. As of October 2015, it is a member of the Austrian Traded Index, the index of the twenty largest companies traded on the Vienna stock exchange. A subdivision is named Schoeller-Bleckmann Oilfield Technology. Criticism Schoeller-Bleckmann has faced criticism for continuing its operations in Russia despite the country's ongoing invasion of Ukraine. This decision has drawn attention due to international sanctions imposed on Russia and widespread condemnation of the war, which has resulted in significant civilian casualties and destruction in Ukraine. Critics argue that the company's presence in Russia undermines global efforts to isolate the aggressor state economically and diplomatically. See also List of oilfield service companies References External links Company website SBOT website (English) SBOT website (German) Vienna Stock Exchange: Market Data Schoeller-Bleckmann AG Oil and gas companies of Austria Manufacturing companies of Austria Companies listed on the Wiener Börse Companies in the Austrian Traded Index Industrial machine manufacturers Austrian brands Economy of Lower Austria Schoeller family
Schoeller-Bleckmann Oilfield Equipment
Engineering
328
44,860,314
https://en.wikipedia.org/wiki/Donitriptan
Donitriptan (INN) (code name F-11356) is a triptan drug which was investigated as an antimigraine agent but ultimately was never marketed. It acts as a high-affinity, high-efficacy/near-full agonist of the 5-HT1B (pKi = 9.4–10.1; IA = 94%) and 5-HT1D receptors (pKi = 9.3–10.2; IA = 97%), and is among the most potent of the triptan series of drugs. Donitriptan was being developed in France by bioMérieux-Pierre Fabre and made it to phase II clinical trials in Europe before development was discontinued. References 5-HT1B agonists 5-HT1D agonists Indole ethers at the benzene ring Aromatic nitriles Phenylpiperazines Triptans Tryptamines Abandoned drugs
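For context (a standard pharmacological conversion, not something stated in the article text above): the pKi values quoted for donitriptan imply sub-nanomolar binding affinity, since K_i = 10^{-pK_i}, so pK_i = 9.4–10.1 corresponds to K_i ≈ 4×10⁻¹⁰–8×10⁻¹¹ M, i.e. roughly 0.4–0.08 nM.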
Donitriptan
Chemistry
199
371,659
https://en.wikipedia.org/wiki/Ras%20GTPase
Ras, from "Rat sarcoma virus", is a family of related proteins that are expressed in all animal cell lineages and organs. All Ras protein family members belong to a class of protein called small GTPase, and are involved in transmitting signals within cells (cellular signal transduction). Ras is the prototypical member of the Ras superfamily of proteins, which are all related in three-dimensional structure and regulate diverse cell behaviours. When Ras is 'switched on' by incoming signals, it subsequently switches on other proteins, which ultimately turn on genes involved in cell growth, differentiation, and survival. Mutations in Ras genes can lead to the production of permanently activated Ras proteins, which can cause unintended and overactive signaling inside the cell, even in the absence of incoming signals. Because these signals result in cell growth and division, overactive Ras signaling can ultimately lead to cancer. The three Ras genes in humans (HRAS, KRAS, and NRAS) are the most common oncogenes in human cancer; mutations that permanently activate Ras are found in 20 to 25% of all human tumors and up to 90% in certain types of cancer (e.g., pancreatic cancer). For this reason, Ras inhibitors are being studied as a treatment for cancer and other diseases with Ras overexpression. History The first two Ras genes, HRAS and KRAS, were identified from studies of two cancer-causing viruses, the Harvey sarcoma virus and Kirsten sarcoma virus, by Edward M. Scolnick and colleagues at the National Institutes of Health (NIH). These viruses were discovered originally in rats during the 1960s by Jennifer Harvey and Werner H. Kirsten, respectively, hence the name Rat sarcoma. In 1982, activated and transforming human ras genes were discovered in human cancer cells by Geoffrey M. Cooper at Harvard, Mariano Barbacid and Stuart A. Aaronson at the NIH, Robert Weinberg at MIT, and Michael Wigler at Cold Spring Harbor Laboratory. A third ras gene was subsequently discovered by researchers in the group of Robin Weiss at the Institute of Cancer Research, and Michael Wigler at Cold Spring Harbor Laboratory, named NRAS, for its initial identification in human neuroblastoma cells. The three human ras genes encode extremely similar proteins made up of chains of 188 to 189 amino acids. Their gene symbols are HRAS, NRAS and KRAS, the latter of which produces the K-Ras4A and K-Ras4B isoforms from alternative splicing. Structure Ras contains six beta strands and five alpha helices. It consists of two domains: a G domain of 166 amino acids (about 20 kDa) that binds guanosine nucleotides, and a C-terminal membrane targeting region (CAAX-COOH, also known as CAAX box), which is lipid-modified by farnesyl transferase, RCE1, and ICMT. The G domain contains five G motifs that bind GDP/GTP directly. The G1 motif, or the P-loop, binds the beta phosphate of GDP and GTP. The G2 motif, also called Switch I or SW1, contains threonine35, which binds the terminal phosphate (γ-phosphate) of GTP and the divalent magnesium ion bound in the active site. The G3 motif, also called Switch II or SW2, has a DXXGQ motif. The D is aspartate57, which is specific for guanine versus adenine binding, and Q is glutamine61, the crucial residue that activates a catalytic water molecule for hydrolysis of GTP to GDP. The G4 motif contains a LVGNKxDL motif, and provides specific interaction to guanine. The G5 motif contains a SAK consensus sequence. The A is alanine146, which provides specificity for guanine rather than adenine. 
The two switch motifs, G2 (SW1) and G3 (SW2), are the main parts of the protein that move when GTP is hydrolyzed into GDP. This conformational change by the two switch motifs is what mediates the basic functionality as a molecular switch protein. This GTP-bound state of Ras is the "on" state, and the GDP-bound state is the "off" state. The two switch motifs have a number of conformations when binding GTP or GDP or no nucleotide (when bound to SOS1, which releases the nucleotide). Ras also binds a magnesium ion which helps to coordinate nucleotide binding. Function Ras proteins function as binary molecular switches that control intracellular signaling networks. Ras-regulated signal pathways control such processes as actin cytoskeletal integrity, cell proliferation, cell differentiation, cell adhesion, apoptosis, and cell migration. Ras and Ras-related proteins are often deregulated in cancers, leading to increased invasion and metastasis, and decreased apoptosis. Ras activates several pathways, of which the mitogen-activated protein (MAP) kinase cascade has been well-studied. This cascade transmits signals downstream and results in the transcription of genes involved in cell growth and division. Another Ras-activated signaling pathway is the PI3K/AKT/mTOR pathway, which stimulates protein synthesis, cellular migration and growth, and inhibits apoptosis. Activation and deactivation Ras is a guanosine-nucleotide-binding protein. Specifically, it is a single-subunit small GTPase, which is related in structure to the Gα subunit of heterotrimeric G proteins (large GTPases). G proteins function as binary signaling switches with "on" and "off" states. In the "off" state it is bound to the nucleotide guanosine diphosphate (GDP), while in the "on" state, Ras is bound to guanosine triphosphate (GTP), which has an extra phosphate group as compared to GDP. This extra phosphate holds the two switch regions in a "loaded-spring" configuration (specifically the Thr-35 and Gly-60). When released, the switch regions relax which causes a conformational change into the inactive state. Hence, activation and deactivation of Ras and other small G proteins are controlled by cycling between the active GTP-bound and inactive GDP-bound forms. The process of exchanging the bound nucleotide is facilitated by guanine nucleotide exchange factors (GEFs) and GTPase activating proteins (GAPs). As per its classification, Ras has an intrinsic GTPase activity, which means that the protein on its own will hydrolyze a bound GTP molecule into GDP. However this process is too slow for efficient function, and hence the GAP for Ras, RasGAP, may bind to and stabilize the catalytic machinery of Ras, supplying additional catalytic residues ("arginine finger") such that a water molecule is optimally positioned for nucleophilic attack on the gamma-phosphate of GTP. An inorganic phosphate is released and the Ras molecule is now bound to a GDP. Since the GDP-bound form is "off" or "inactive" for signaling, GTPase Activating Protein inactivates Ras by activating its GTPase activity. Thus, GAPs accelerate Ras inactivation. GEFs catalyze a "push and pull" reaction which releases GDP from Ras. They insert close to the P-loop and magnesium cation binding site and inhibit the interaction of these with the gamma phosphate anion. Acidic (negative) residues in switch II "pull" a lysine in the P-loop away from the GDP which "pushes" switch I away from the guanine. The contacts holding GDP in place are broken and it is released into the cytoplasm. 
Because intracellular GTP is abundant relative to GDP (approximately 10-fold more), GTP predominantly re-enters the nucleotide binding pocket of Ras and reloads the spring. Thus GEFs facilitate Ras activation. Well-known GEFs include Son of Sevenless (Sos) and cdc25, which contain the RasGEF domain. The balance between GEF and GAP activity determines the guanine nucleotide status of Ras, thereby regulating Ras activity. In the GTP-bound conformation, Ras has a high affinity for numerous effectors which allow it to carry out its functions. These include PI3K. Other small GTPases may bind adaptors such as arfaptin or second messenger systems such as adenylyl cyclase. The Ras binding domain is found in many effectors and invariably binds to one of the switch regions, because these change conformation between the active and inactive forms. However, they may also bind to the rest of the protein surface. Other proteins exist that may change the activity of Ras family proteins. One example is GDI (GDP Dissociation Inhibitor). These function by slowing the exchange of GDP for GTP, thus prolonging the inactive state of Ras family members. Other proteins that augment this cycle may exist. Membrane attachment Ras is attached to the cell membrane owing to its prenylation and palmitoylation (HRAS and NRAS) or the combination of prenylation and a polybasic sequence adjacent to the prenylation site (KRAS). The C-terminal CaaX box of Ras first gets farnesylated at its Cys residue in the cytosol, allowing Ras to loosely insert into the membrane of the endoplasmic reticulum and other cellular membranes. The tripeptide (aaX) is then cleaved from the C-terminus by a prenyl-protein-specific endoprotease and the new C-terminus is methylated by a methyltransferase. KRas processing is completed at this stage. Dynamic electrostatic interactions between its positively charged basic sequence and negative charges at the inner leaflet of the plasma membrane account for its predominant localization at the cell surface at steady-state. NRAS and HRAS are further processed on the surface of the Golgi apparatus by palmitoylation of one or two Cys residues, respectively, adjacent to the CaaX box. The proteins thereby become stably membrane-anchored (in lipid rafts) and are transported to the plasma membrane on vesicles of the secretory pathway. Depalmitoylation by acyl-protein thioesterases eventually releases the proteins from the membrane, allowing them to enter another cycle of palmitoylation and depalmitoylation. This cycle is believed to prevent the leakage of NRAS and HRAS to other membranes over time and to maintain their steady-state localization along the Golgi apparatus, secretory pathway, plasma membrane and inter-linked endocytosis pathway. Members The most clinically notable members of the Ras subfamily are HRAS, KRAS and NRAS, mainly for being implicated in many types of cancer. However, there are many other members of this subfamily as well: DIRAS1; DIRAS2; DIRAS3; ERAS; GEM; MRAS; NKIRAS1; NKIRAS2; RALA; RALB; RAP1A; RAP1B; RAP2A; RAP2B; RAP2C; RASD1; RASD2; RASL10A; RASL10B; RASL11A; RASL11B; RASL12; REM1; REM2; RERG; RERGL; RRAD; RRAS; RRAS2 Ras in cancer Mutations in the Ras family of proto-oncogenes (comprising H-Ras, N-Ras and K-Ras) are very common, being found in 20% to 30% of all human tumors. It is reasonable to speculate that a pharmacological approach that curtails Ras activity may represent a possible method to inhibit certain cancer types.
Ras point mutations are the single most common abnormality of human proto-oncogenes. Ras inhibitor trans-farnesylthiosalicylic acid (FTS, Salirasib) exhibits profound anti-oncogenic effects in many cancer cell lines. Inappropriate activation Inappropriate activation of the gene has been shown to play a key role in improper signal transduction, proliferation and malignant transformation. Mutations in a number of different genes as well as RAS itself can have this effect. Oncogenes such as p210BCR-ABL or the growth receptor erbB are upstream of Ras, so if they are constitutively activated their signals will transduce through Ras. The tumour suppressor gene NF1 encodes a Ras-GAP – its mutation in neurofibromatosis will mean that Ras is less likely to be inactivated. Ras can also be amplified, although this only occurs occasionally in tumours. Finally, Ras oncogenes can be activated by point mutations so that the GTPase reaction can no longer be stimulated by GAP – this increases the half life of active Ras-GTP mutants. Constitutively active Ras Constitutively active Ras (RasD) is one which contains mutations that prevent GTP hydrolysis, thus locking Ras in a permanently 'On' state. The most common mutations are found at residue G12 in the P-loop and the catalytic residue Q61. The glycine to valine mutation at residue 12 renders the GTPase domain of Ras insensitive to inactivation by GAP and thus stuck in the "on state". Ras requires a GAP for inactivation as it is a relatively poor catalyst on its own, as opposed to other G-domain-containing proteins such as the alpha subunit of heterotrimeric G proteins. Residue 61 is responsible for stabilizing the transition state for GTP hydrolysis. Because enzyme catalysis in general is achieved by lowering the energy barrier between substrate and product, mutation of Q61 to K (Glutamine to Lysine) necessarily reduces the rate of intrinsic Ras GTP hydrolysis to physiologically meaningless levels. See also "dominant negative" mutants such as S17N and D119N. Ras-targeted cancer treatments Reovirus was noted to be a potential cancer therapeutic when studies suggested it reproduces well in certain cancer cell lines. It replicates specifically in cells that have an activated Ras pathway (a cellular signaling pathway that is involved in cell growth and differentiation). Reovirus replicates in and eventually kills Ras-activated tumour cells and as cell death occurs, progeny virus particles are free to infect surrounding cancer cells. This cycle of infection, replication and cell death is believed to be repeated until all tumour cells carrying an activated Ras pathway are destroyed. Another tumor-lysing virus that specifically targets tumor cells with an activated Ras pathway is a type II herpes simplex virus (HSV-2) based agent, designated FusOn-H2. Activating mutations of the Ras protein and upstream elements of the Ras protein may play a role in more than two-thirds of all human cancers, including most metastatic disease. Reolysin, a formulation of reovirus, and FusOn-H2 are currently in clinical trials or under development for the treatment of various cancers. In addition, a treatment based on siRNA anti-mutated K-RAS (G12D) called siG12D LODER is currently in clinical trials for the treatment of locally advanced pancreatic cancer (NCT01188785, NCT01676259). In glioblastoma mouse models SHP2 levels were heightened in cancerous brain cells. Inhibiting SHP2 in turn inhibited Ras dephosphorylation. 
This reduced tumor sizes and was accompanied by a rise in survival rates. Other strategies have attempted to manipulate the regulation of the above-mentioned localization of Ras. Farnesyltransferase inhibitors have been developed to stop the farnesylation of Ras and therefore weaken its affinity to membranes. Other inhibitors target the palmitoylation cycle of Ras by inhibiting depalmitoylation by acyl-protein thioesterases, potentially leading to a destabilization of the Ras cycle. A novel strategy for finding inhibitors of mutated Ras molecules has also been described. Ras mutations at residue 12 inhibit the binding of the regulatory GAP molecule to the mutated Ras, causing uncontrolled cell growth. The novel strategy proposes finding small glue molecules, which attach the mutated Ras to the GAP, preventing uncontrolled cell growth and restoring normal function. To this end, a theoretical Ras-GAP conformation was designed with a gap of several Å between the molecules, and high-throughput in silico docking was performed to find gluing agents. As a proof of concept, two novel molecules were described with satisfactory biological activity. In other species In most cell types of most species, most Ras is in the GDP-bound form. This is true for Xenopus oocytes and mouse fibroblasts. Xenopus laevis As mentioned above, most Xenopus oocyte Ras is GDP-bound. Mammalian Ras induces meiosis in X. laevis oocytes almost certainly by potentiating insulin-induced meiosis, but not progesterone-induced meiosis. Protein synthesis does not seem to be a part of this step. Injection increases synthesis of diacylglycerol from phosphatidylcholine. Some meiosis effects are antagonized by rap1 (and by a Ras modified to dock incorrectly). Both rap1 and the modified Ras are co-antagonists with p120Ras GAP in this pathway. Drosophila melanogaster Ras is expressed in all tissues of Drosophila melanogaster, but mostly in neural cells. Overexpression is somewhat lethal and, during development, produces eye and wing abnormalities. (This parallels - and may be the reason for - similar abnormalities due to mutated receptor tyrosine kinases.) The Drosophila counterparts of the mammalian ras genes likewise produce abnormalities. Aplysia Most expression in Aplysia spp. is in neural cells. Caenorhabditis elegans The corresponding gene in C. elegans is let-60. It also appears to play a role in receptor tyrosine kinase signaling in this model. Overexpression yields multivulval development due to the gene's involvement in that region's normal development; overexpression in effector sites is lethal. Dictyostelium discoideum Ras is essential in Dictyostelium discoideum. This is evidenced by severe developmental failure when ras expression is deficient, and by significant impairment of various cellular activities when it is artificially overexpressed, such as increased concentrations of inositol phosphates, a likely reduction of cAMP binding to chemotaxis receptors, and, likely as a consequence, impaired cGMP synthesis. Adenylate cyclase activity is unaffected by ras. References Further reading External links "Brain tumour findings offer hope of new strategy Canadian Cancer Society says" at ncic.cancer.ca "Novel cancer treatment gets NCI support" at arstechnica.com Drosophila Ras oncogene at 85D - The Interactive Fly "Animation of ras activation by EGFR" "Rascore: A tool for analyzing RAS protein structures" G proteins Oncogenes
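As a purely illustrative aside (not part of the original article), the molecular-switch behaviour described in the Activation and deactivation section above can be caricatured as a two-state kinetic cycle in which the fraction of GTP-loaded ("on") Ras is set by the balance between GEF-driven nucleotide exchange and GAP-stimulated plus intrinsic hydrolysis. The Python sketch below uses arbitrary placeholder rate constants, chosen only to show the qualitative effect of shifting that balance, including the loss of GAP sensitivity seen in oncogenic mutants.

def ras_gtp_fraction(k_gef, k_gap, k_intrinsic=0.01):
    """Steady-state fraction of Ras in the GTP-bound ('on') state for a
    simple two-state cycle:
        Ras-GDP --k_gef--> Ras-GTP --(k_gap + k_intrinsic)--> Ras-GDP
    All rate constants are illustrative placeholders (arbitrary units)."""
    return k_gef / (k_gef + k_gap + k_intrinsic)

# GAP activity dominates: Ras is mostly 'off'
print(ras_gtp_fraction(k_gef=0.1, k_gap=1.0))   # ~0.09
# GEF activity dominates (e.g. strong upstream receptor signalling): mostly 'on'
print(ras_gtp_fraction(k_gef=1.0, k_gap=0.1))   # ~0.90
# GAP-insensitive mutant: only slow intrinsic hydrolysis remains, so Ras stays 'on'
print(ras_gtp_fraction(k_gef=0.1, k_gap=0.0))   # ~0.91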
Ras GTPase
Chemistry
4,029
5,995,612
https://en.wikipedia.org/wiki/Molecular%20Biology%20of%20the%20Cell
Molecular Biology of the Cell is a biweekly peer-reviewed scientific journal published by the American Society for Cell Biology. It covers research on the molecular basis of cell structure and function. According to the Journal Citation Reports, the journal has a 2012 impact factor of 4.803. It was originally established as Cell Regulation in 1989. The Editor-in-Chief is Matthew Welch (University of California, Berkeley). Previous Editors-in-Chief include Erkki Ruoslahti (of Cell Regulation), David Botstein and Keith Yamamoto (of MBoC), and their successors Sandra Schmid and David Drubin. References External links American Society for Cell Biology Molecular and cellular biology journals English-language journals Biweekly journals Academic journals established in 1989 Academic journals published by learned and professional societies of the United States 1991 establishments in the United States
Molecular Biology of the Cell
Chemistry
174
40,128,229
https://en.wikipedia.org/wiki/Kurt%20Kosanke
Kurt Kosanke (born ca. 1945) is a German engineer, retired IBM manager, director of the AMICE Consortium and consultant, known for his work in the fields of enterprise engineering, enterprise integration and CIMOSA. Life and work Kosanke obtained his engineering degree from the Physikalisch Technische Lehranstalt in Lübeck, Germany, nowadays the Private Berufsfachschule PTL Wedel. He started his career in research and development at IBM Deutschland in Böblingen, working on instrument development for optical printers and large-scale displays. During four years at IBM USA he worked on production control and material logistics, and back in Germany he focussed on manufacturing research and simulation. In 1984, Kosanke started participating in the ESPRIT AMICE project as IBM Deutschland representative, focussing on enterprise modelling and the CIM Open Systems Architecture. After retiring from IBM, Kosanke kept working as an independent consultant, directed the AMICE project of the AMICE Consortium, and became Director of the CIMOSA Association. Kosanke took part in the IFIP IFAC Task Force on Architectures for Enterprise Integration, and contributed to the development of GERAM. Publications Kosanke has published numerous papers in the fields of enterprise engineering, enterprise integration and CIMOSA since the late 1980s. Books: 1997. Enterprise engineering and integration : building international consensus : proceedings of ICEIMT ʾ97, International Conference on Enterprise Integration and Modeling Technology, Torino, Italy, October 28–30, 1997. Edited with James G. Nell. Springer. 2002. Enterprise Inter- and Intra-Organizational Integration: Building International Consensus. Springer, 30 November 2002. Articles, a selection: 1995. "CIMOSA - Overview and status." Computers in Industry Vol 27. p. 101-109 1995. "The CIMOSA business modelling process". With Martin Zelm and François Vernadat in Computers in Industry. Vol 27. p. 123-142 1999. "CIMOSA: enterprise engineering and integration." With Martin Zelm and François Vernadat in Computers in Industry Vol 40 (2). p. 83-97 2001. "A Modelling Language for User oriented Enterprise Modelling", With Martin Zelm in: MOSIM'01, Troyes, France 2006. "ISO Standards for Interoperability: a comparison." Interoperability of Enterprise Software and Applications. Springer London, p. 55-64. References Living people Engineers from Schleswig-Holstein Enterprise modelling experts Systems engineers 1940s births IBM employees
Kurt Kosanke
Engineering
521
27,498,772
https://en.wikipedia.org/wiki/Vibrational%20temperature
The vibrational temperature is commonly used in thermodynamics to simplify certain equations. It has units of temperature and is defined as \theta_{\text{vib}} = \frac{h\nu}{k_\text{B}} = \frac{hc\tilde{\nu}}{k_\text{B}}, where k_\text{B} is the Boltzmann constant, h is the Planck constant, c is the speed of light, \tilde{\nu} is the wavenumber, and \nu (Greek letter nu) is the characteristic frequency of the oscillator. The vibrational temperature is commonly used when evaluating the vibrational partition function. References Statistical Thermodynamics, University of Arizona See also Rotational temperature Rotational spectroscopy Vibrational spectroscopy Infrared spectroscopy Spectroscopy Atomic physics Molecular physics
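As a worked illustration of the definition above, this short Python sketch converts a vibrational wavenumber into a vibrational temperature and evaluates the harmonic-oscillator vibrational partition function q_vib = 1/(1 - exp(-theta_vib/T)) (with the energy zero at the vibrational ground state). The N2 wavenumber of roughly 2359 cm^-1 is used purely as an example input.

import math

# Physical constants (CODATA values; c is given in cm/s so that
# wavenumbers can be supplied directly in cm^-1)
H = 6.62607015e-34      # Planck constant, J s
C_CM = 2.99792458e10    # speed of light, cm/s
K_B = 1.380649e-23      # Boltzmann constant, J/K

def vibrational_temperature(wavenumber_cm):
    """theta_vib = h * c * nu_tilde / k_B for a wavenumber in cm^-1."""
    return H * C_CM * wavenumber_cm / K_B

def vibrational_partition_function(theta_vib, temperature):
    """Harmonic-oscillator vibrational partition function,
    q_vib = 1 / (1 - exp(-theta_vib / T))."""
    return 1.0 / (1.0 - math.exp(-theta_vib / temperature))

# Example: N2, with a vibrational wavenumber of roughly 2359 cm^-1
theta = vibrational_temperature(2359.0)
print(f"theta_vib ~ {theta:.0f} K")                                             # about 3394 K
print(f"q_vib at 300 K ~ {vibrational_partition_function(theta, 300.0):.6f}")   # about 1.000012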
Vibrational temperature
Physics,Chemistry
108