Check dam
https://en.wikipedia.org/wiki/Check%20dam

A check dam is a small, sometimes temporary, dam constructed across a swale, drainage ditch, or waterway to counteract erosion by reducing water flow velocity. Check dams are not a new technology; they are an ancient technique dating from the second century AD. Check dams are typically, though not always, implemented as a system of several dams situated at regular intervals across the area of interest.
Function
A check dam placed in the ditch, swale, or channel interrupts the flow of water and flattens the gradient of the channel, thereby reducing the velocity. In turn, this obstruction induces infiltration and reduces erosion. Check dams can be used not only to slow flow velocity but also to distribute flows across a swale, avoiding preferential paths and guiding flows toward vegetation. Although some sedimentation may result behind the dam, check dams do not primarily function as sediment-trapping devices.
For instance, on the Graliwdo River in Ethiopia, increased hydraulic roughness from check dams, together with water transmission losses into the deposited sediments, delays runoff from reaching the lower parts of the river channels. The reduction of peak runoff discharge was larger in the river segment with check dams and vegetation (minus 12%) than in the segment without treatment (minus 5.5%). The reduction of total runoff volume was also larger in the river with check dams than in the untreated river. The implementation of check dams combined with vegetation reduced peak flow discharge and total runoff volume, as large parts of the runoff infiltrated into the sediments deposited behind the check dams. As gully check dams are implemented over large areas of northern Ethiopia, this contributes to groundwater recharge and increased river base flow.
Applications
Grade control mechanism
Check dams have traditionally been implemented in two environments: across channel bottoms and on hilly slopes. Check dams are used primarily to control water velocity, conserve soil, and improve land. They are used when other flow-control practices, such as lining the channel or creating bioswales, are impractical. Accordingly, they are commonly used in degrading temporary channels, where the short service life makes permanent stabilization impractical to fund and resource. They are also used when construction delays and weather conditions prevent timely installation of other erosion control practices. This is typically seen during the construction of large-scale permanent dams or erosion control works. As such, check dams serve as temporary grade-control mechanisms along waterways until permanent stabilization is established, or along permanent swales that need protection prior to the installation of a non-erodible lining.
Water quality control mechanism
Many check dams tend to form stream pools. Under low-flow circumstances, water either infiltrates into the ground, evaporates, or seeps through or under the dam. Under high flow – flood – conditions, water flows over or through the structure. Coarse and medium-grained sediment from runoff tends to be deposited behind check dams, while finer grains flow through. Floating garbage is also trapped by check dams, increasing their effectiveness as water quality control measures.
Arid regions
In arid areas, check dams are often built to increase groundwater recharge in a process called managed aquifer recharge. Winter runoff thus can be stored in aquifers, from which the water can be withdrawn during the dry season for irrigation, livestock watering, and drinking water. This is particularly useful for small settlements located far from a large urban center as check dams require less reliance on machinery, funding, or advanced knowledge compared to large-scale dam implementation.
Check dams can be used in combination with limans to stop and collect surface runoff water.
Mountainous regions
As a strategy to stabilize mountain streams, the construction of check dams has a long tradition in many mountainous regions dating back to the 19th century in Europe. Steep slopes impede access by heavy construction machinery to mountain streams, so check dams have been built in place of larger dams. Because the typical high slope causes high flow velocity, a terraced system of multiple closely spaced check dams is typically necessary to reduce velocity and thereby counteract erosion. Such consolidation check dams, built in terraces, attempt to prevent both headward and downward cutting into channel beds while also stabilizing adjacent hill slopes. They are further used to mitigate flood and debris flow hazards.
Temporary Test Dams (TTDs)
In the UK, planning laws, applications, and restrictions delay flood mitigation work. This can be counteracted by setting up Temporary Test Dams in watercourses, which can then be monitored and evaluated. This does, however, require the landowner's support. TTDs have proven to be an effective way to get rapid action after a flood event and to involve communities in the defence against future flooding.
Design considerations
Site
Before installing a check dam, engineers inspect the site. Standard practices call for the drainage area to be ten acres or less. The waterway should be on a slope of no more than 50% and should have a minimum depth to bedrock of . Check dams are often used in natural or constructed channels or swales. They should never be placed in live streams unless approved by appropriate local, state and/or federal authorities.
Materials
Check dams are made of a variety of materials. Because they are typically used as temporary structures, they are often made of cheap and accessible materials such as rocks, gravel, logs, hay bales, and sandbags. Of these, log and rock check dams are usually permanent or semi-permanent, while sandbag check dams are built primarily for temporary purposes. There are also check dams constructed with rockfill or wooden boards. These dams are usually implemented only in small, open channels that drain or less, and usually do not exceed high. Woven wire can be used to construct check dams that hold fine material in a gully. It is typically used where the gully has a moderate slope (less than 10%) and a small drainage area, and in regions where flood flows do not typically carry large rocks or boulders. In nearly all instances, erosion control blankets, which are biodegradable open-weave blankets, are used in conjunction with check dams. These blankets help encourage vegetation growth on the slopes, shorelines, and ditch bottoms.
Size
Check dams are usually less than high, and the center of the dam should be at least lower than its edges. This criterion induces a weir effect, resulting in an increased water surface level upstream under some, if not all, flow conditions.
Spacing
In order to effectively slow water velocity, reduce erosion, and protect the channel between dams in a larger system, spacing must be designed properly. Check dams should be spaced such that the toe of the upstream dam is at the same elevation as the crest of the downstream dam. This allows water to pond between dams and substantially slows the flow's velocity.
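As a rough illustration of this rule, the sketch below computes spacing under the simplifying assumption of a uniform channel slope; the function and parameter names are our own, not from any design standard.

```python
def check_dam_layout(channel_length_m, slope, dam_height_m):
    """Place dams so each downstream crest sits at the elevation of the
    upstream dam's toe: horizontal spacing = dam height / slope (rise/run)."""
    spacing_m = dam_height_m / slope
    n_dams = int(channel_length_m // spacing_m) + 1
    return spacing_m, n_dams

# Example: 0.6 m high dams in a 300 m channel on a 2% grade
# -> one dam every 30 m, 11 dams in total.
print(check_dam_layout(300.0, 0.02, 0.6))
```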
Advantages
Check dams are a highly effective practice for reducing flow velocities in channels and waterways. In contrast to large dams, check dams can be implemented faster, are cost-effective, and are smaller in scope. Because of this, their implementation does not typically displace people and communities, nor does it destroy natural resources if designed correctly. Moreover, the dams are simple to construct and do not rely on advanced technologies, allowing their use in rural communities with fewer resources or less access to technical expertise, as has long been the case in India's drylands.
Limitations
Check dams still require maintenance and sediment removal. They become more difficult to implement on steep slopes, as velocity is higher and the distance between dams must be shortened. Depending on the material used, check dams can have a limited life span, but if implemented correctly they can be considered permanent.
Maintenance
Check dams require regular maintenance, as they are typically temporary structures not designed to withstand long-term use. Dams should be inspected every week and after heavy rainfall. It is important that rubble, litter, and leaves are removed from the upstream side of the dam. Accumulated sediment is typically removed once it has reached one-half the original height of the dam.
When the site is permanently stabilized and the check dam is no longer needed, it is fully removed, including components washed downstream, and bare spots are stabilized.
See also
Water conservation structures
Drop structure
Gabion
Groyne
Flexible debris-resisting barrier
References
External links
Trap Function of Bed Load by Steel-Slit Dam, Sabo Gakkaishi, Vol. 45 (1992–1993), No. 4, pp. 22–29
Ecological restoration
Environmental engineering
Hydrology and urban planning
Landscape
Water conservation
Waste treatment technology
Water pollution
Dams by type
Desert greening
Wiles's proof of Fermat's Last Theorem
https://en.wikipedia.org/wiki/Wiles%27s%20proof%20of%20Fermat%27s%20Last%20Theorem

Wiles's proof of Fermat's Last Theorem is a proof by British mathematician Sir Andrew Wiles of a special case of the modularity theorem for elliptic curves. Together with Ribet's theorem, it provides a proof of Fermat's Last Theorem. At the time, almost all living mathematicians believed that both Fermat's Last Theorem and the modularity theorem were impossible to prove using existing knowledge.
Wiles first announced his proof on 23 June 1993 at a lecture in Cambridge entitled "Modular Forms, Elliptic Curves and Galois Representations". However, in September 1993 the proof was found to contain an error. One year later on 19 September 1994, in what he would call "the most important moment of [his] working life", Wiles stumbled upon a revelation that allowed him to correct the proof to the satisfaction of the mathematical community. The corrected proof was published in 1995.
Wiles's proof uses many techniques from algebraic geometry and number theory and has many ramifications in these branches of mathematics. It also uses standard constructions of modern algebraic geometry such as the category of schemes, significant number theoretic ideas from Iwasawa theory, and other 20th-century techniques which were not available to Fermat. The proof's method of identification of a deformation ring with a Hecke algebra (now referred to as an R=T theorem) to prove modularity lifting theorems has been an influential development in algebraic number theory.
Together, the two papers which contain the proof are 129 pages long and consumed over seven years of Wiles's research time. John Coates described the proof as one of the highest achievements of number theory, and John Conway called it "the proof of the [20th] century." Wiles's path to proving Fermat's Last Theorem, by way of proving the modularity theorem for the special case of semistable elliptic curves, established powerful modularity lifting techniques and opened up entire new approaches to numerous other problems. For proving Fermat's Last Theorem, he was knighted, and received other honours such as the 2016 Abel Prize. When announcing that Wiles had won the Abel Prize, the Norwegian Academy of Science and Letters described his achievement as a "stunning proof".
Precursors to Wiles's proof
Fermat's Last Theorem and progress prior to 1980
Fermat's Last Theorem, formulated in 1637, states that no three positive integers a, b, and c can satisfy the equation

a^n + b^n = c^n

if n is an integer greater than two (n > 2).
Over time, this simple assertion became one of the most famous unproved claims in mathematics. Between its publication and Andrew Wiles's eventual solution over 350 years later, many mathematicians and amateurs attempted to prove this statement, either for all values of n > 2 or for specific cases. It spurred the development of entire new areas within number theory. Proofs were eventually found for all values of n up to around 4 million, first by hand, and later by computer. However, no general proof was found that would be valid for all possible values of n, nor even a hint of how such a proof could be undertaken.
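To give a flavour of the computer-search side of those early efforts, here is a minimal sketch (our own illustration, not the historical programs) that exhaustively checks small cases using exact integer arithmetic:

```python
def fermat_counterexamples(n, bound):
    """Search for positive integers a <= b < c with a**n + b**n == c**n.

    A floating-point root gives a candidate c, which is then verified
    with exact integer arithmetic, so rounding cannot cause a false positive.
    """
    hits = []
    for a in range(1, bound + 1):
        for b in range(a, bound + 1):
            s = a**n + b**n
            c = round(s ** (1.0 / n))  # estimate, then verify exactly
            for cand in (c - 1, c, c + 1):
                if cand > b and cand**n == s:
                    hits.append((a, b, cand))
    return hits

# Consistent with the theorem: no solutions in these (tiny) ranges.
assert fermat_counterexamples(3, 100) == []
assert fermat_counterexamples(4, 50) == []
```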
The Taniyama–Shimura–Weil conjecture
Separately from anything related to Fermat's Last Theorem, in the 1950s and 1960s Japanese mathematician Goro Shimura, drawing on ideas posed by Yutaka Taniyama, conjectured that a connection might exist between elliptic curves and modular forms. These were mathematical objects with no known connection between them. Taniyama and Shimura posed the question whether, unknown to mathematicians, the two kinds of object were actually identical mathematical objects, just seen in different ways.
They conjectured that every rational elliptic curve is also modular. This became known as the Taniyama–Shimura conjecture. In the West, this conjecture became well known through a 1967 paper by André Weil, who gave conceptual evidence for it; thus, it is sometimes called the Taniyama–Shimura–Weil conjecture.
By around 1980, much evidence had been accumulated to form conjectures about elliptic curves, and many papers had been written which examined the consequences if the conjecture were true, but the actual conjecture itself was unproven and generally considered inaccessible—meaning that mathematicians believed a proof of the conjecture was probably impossible using current knowledge.
For decades, the conjecture remained an important but unsolved problem in mathematics. Around 50 years after first being proposed, the conjecture was finally proven and renamed the modularity theorem, largely as a result of Andrew Wiles's work described below.
Frey's curve
On yet another separate branch of development, in the late 1960s, Yves Hellegouarch came up with the idea of associating hypothetical solutions (a, b, c) of Fermat's equation with a completely different mathematical object: an elliptic curve. The curve consists of all points in the plane whose coordinates (x, y) satisfy the relation

y^2 = x(x − a^n)(x + b^n)

Such an elliptic curve would enjoy very special properties due to the appearance of high powers of integers in its equation and the fact that a^n + b^n = c^n would be an nth power as well.
In 1982–1985, Gerhard Frey called attention to the unusual properties of this same curve, now called a Frey curve. He showed that it was likely that the curve could link Fermat and Taniyama, since any counterexample to Fermat's Last Theorem would probably also imply that an elliptic curve existed that was not modular. Frey showed that there were good reasons to believe that any set of numbers (a, b, c, n) capable of disproving Fermat's Last Theorem could also probably be used to disprove the Taniyama–Shimura–Weil conjecture. Therefore, if the Taniyama–Shimura–Weil conjecture were true, no set of numbers capable of disproving Fermat could exist, so Fermat's Last Theorem would have to be true as well.
The conjecture says that each elliptic curve with rational coefficients can be constructed in an entirely different way, not by giving its equation but by using modular functions to parametrise coordinates x and y of the points on it. Thus, according to the conjecture, any elliptic curve over Q would have to be a modular elliptic curve, yet if a solution to Fermat's equation with non-zero a, b, c and n greater than 2 existed, the corresponding curve would not be modular, resulting in a contradiction. If the link identified by Frey could be proven, then in turn, it would mean that a disproof of Fermat's Last Theorem would disprove the Taniyama–Shimura–Weil conjecture, or by contraposition, a proof of the latter would prove the former as well.
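In symbols, a hedged sketch of this parametrisation (N here denotes the conductor of the curve, an identification the conjecture makes precise): modularity of an elliptic curve E over Q means there exist modular functions x(τ), y(τ) for the congruence subgroup Γ₀(N) with

\[
\tau \longmapsto \bigl(x(\tau),\, y(\tau)\bigr) \in E, \qquad \tau \in \mathbf{H},
\]

or, equivalently, a surjective morphism X₀(N) → E defined over Q.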
Ribet's theorem
To complete this link, it was necessary to show that Frey's intuition was correct: that a Frey curve, if it existed, could not be modular. In 1985, Jean-Pierre Serre provided a partial proof that a Frey curve could not be modular. Serre did not provide a complete proof of his proposal; the missing part (which Serre had noticed early on) became known as the epsilon conjecture (sometimes written ε-conjecture; now known as Ribet's theorem). Serre's main interest was in an even more ambitious conjecture, Serre's conjecture on modular Galois representations, which would imply the Taniyama–Shimura–Weil conjecture. However his partial proof came close to confirming the link between Fermat and Taniyama.
In the summer of 1986, Ken Ribet succeeded in proving the epsilon conjecture, now known as Ribet's theorem. His article was published in 1990. In doing so, Ribet finally proved the link between the two theorems by confirming, as Frey had suggested, that a proof of the Taniyama–Shimura–Weil conjecture for the kinds of elliptic curves Frey had identified, together with Ribet's theorem, would also prove Fermat's Last Theorem.
In mathematical terms, Ribet's theorem showed that if the Galois representation associated with an elliptic curve has certain properties (which Frey's curve has), then that curve cannot be modular, in the sense that there cannot exist a modular form which gives rise to the same Galois representation.
Situation prior to Wiles's proof
Following the developments related to the Frey curve, and its link to both Fermat and Taniyama, a proof of Fermat's Last Theorem would follow from a proof of the Taniyama–Shimura–Weil conjecture—or at least a proof of the conjecture for the kinds of elliptic curves that included Frey's equation (known as semistable elliptic curves).
From Ribet's theorem and the Frey curve, any four numbers (a, b, c, n) able to be used to disprove Fermat's Last Theorem could also be used to make a semistable elliptic curve (a Frey curve) that could never be modular;
But if the Taniyama–Shimura–Weil conjecture were also true for semistable elliptic curves, then by definition every Frey's curve that existed must be modular.
The contradiction could have only one answer: if Ribet's theorem and the Taniyama–Shimura–Weil conjecture for semistable curves were both true, then it would mean there could not be any solutions to Fermat's equation—because then there would be no Frey curves at all, meaning no contradictions would exist. This would finally prove Fermat's Last Theorem.
However, despite the progress made by Serre and Ribet, this approach to Fermat was widely considered unusable as well, since almost all mathematicians saw the Taniyama–Shimura–Weil conjecture itself as completely inaccessible to proof with current knowledge. For example, Wiles's ex-supervisor John Coates stated that it seemed "impossible to actually prove", and Ken Ribet considered himself "one of the vast majority of people who believed [it] was completely inaccessible".
Andrew Wiles
Hearing of Ribet's 1986 proof of the epsilon conjecture, English mathematician Andrew Wiles, who had studied elliptic curves and had a childhood fascination with Fermat, decided to begin working in secret towards a proof of the Taniyama–Shimura–Weil conjecture, since it was now professionally justifiable, as well as because of the enticing goal of proving such a long-standing problem.
Ribet later commented that "Andrew Wiles was probably one of the few people on earth who had the audacity to dream that you can actually go and prove [it]."
Announcement and subsequent developments
Wiles initially presented his proof in 1993. It was finally accepted as correct, and published, in 1995, following the correction of a subtle error in one part of his original paper. His work was extended to a full proof of the modularity theorem over the following six years by others, who built on Wiles's work.
Announcement and final proof (1993–1995)
During 21–23 June 1993, Wiles announced and presented his proof of the Taniyama–Shimura conjecture for semistable elliptic curves, and hence of Fermat's Last Theorem, over the course of three lectures delivered at the Isaac Newton Institute for Mathematical Sciences in Cambridge, England. There was a relatively large amount of press coverage afterwards.
After the announcement, Nick Katz was appointed as one of the referees to review Wiles's manuscript. In the course of his review, he asked Wiles a series of clarifying questions that led Wiles to recognise that the proof contained a gap. There was an error in one critical portion of the proof which gave a bound for the order of a particular group: the Euler system used to extend Kolyvagin and Flach's method was incomplete. The error would not have rendered his work worthless—each part of Wiles's work was highly significant and innovative by itself, as were the many developments and techniques he had created in the course of his work, and only one part was affected. Without this part proved, however, there was no actual proof of Fermat's Last Theorem.
Wiles spent almost a year trying to repair his proof, initially by himself and then in collaboration with his former student Richard Taylor, without success. By the end of 1993, rumours had spread that under scrutiny, Wiles's proof had failed, but how seriously was not known. Mathematicians were beginning to pressure Wiles to disclose his work whether or not complete, so that the wider community could explore and use whatever he had managed to accomplish. Instead of being fixed, the problem, which had originally seemed minor, now seemed very significant, far more serious, and less easy to resolve.
Wiles states that on the morning of 19 September 1994, he was on the verge of giving up and was almost resigned to accepting that he had failed, and to publishing his work so that others could build on it and find the error. He states that he was having a final look to try to understand the fundamental reasons why his approach could not be made to work, when he had a sudden insight: the specific reason why the Kolyvagin–Flach approach would not work directly also meant that his original attempt using Iwasawa theory could be made to work if he strengthened it using experience gained from the Kolyvagin–Flach approach. Each approach was inadequate by itself, but fixing one with tools from the other would resolve the issue and produce a class number formula (CNF) valid for all cases not already proven by his refereed paper.
On 6 October Wiles asked three colleagues (including Gerd Faltings) to review his new proof, and on 24 October 1994 Wiles submitted two manuscripts, "Modular elliptic curves and Fermat's Last Theorem" and "Ring theoretic properties of certain Hecke algebras", the second of which Wiles had written with Taylor and proved that certain conditions were met which were needed to justify the corrected step in the main paper.
The two papers were vetted and finally published as the entirety of the May 1995 issue of the Annals of Mathematics. The new proof was widely analysed and became accepted as likely correct in its major components. These papers established the modularity theorem for semistable elliptic curves, the last step in proving Fermat's Last Theorem, 358 years after it was conjectured.
Subsequent developments
Fermat claimed to "... have discovered a truly marvelous proof of this, which this margin is too narrow to contain". Wiles's proof is very complex, and incorporates the work of so many other specialists that it was suggested in 1994 that only a small number of people were capable of fully understanding at that time all the details of what he had done. The complexity of Wiles's proof motivated a 10-day conference at Boston University; the resulting book of conference proceedings aimed to make the full range of required topics accessible to graduate students in number theory.
As noted above, Wiles proved the Taniyama–Shimura–Weil conjecture for the special case of semistable elliptic curves, rather than for all elliptic curves. Over the following years, Christophe Breuil, Brian Conrad, Fred Diamond, and Richard Taylor (sometimes abbreviated as "BCDT") carried the work further, ultimately proving the Taniyama–Shimura–Weil conjecture for all elliptic curves in a 2001 paper. Now proven, the conjecture became known as the modularity theorem.
In 2005, Dutch computer scientist Jan Bergstra posed the problem of formalizing Wiles's proof in such a way that it could be verified by computer.
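For a sense of what such a formalization target looks like, here is the statement (only the statement, not a proof) written in Lean 4 with Mathlib-style notation; this is an illustrative sketch, not Bergstra's formulation:

```lean
-- Fermat's Last Theorem as a formal statement. A machine-checked
-- proof would replace `sorry`.
theorem fermat_last_theorem :
    ∀ a b c n : ℕ, 0 < a → 0 < b → 0 < c → 2 < n →
      a ^ n + b ^ n ≠ c ^ n := by
  sorry
```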
Summary of Wiles's proof
Wiles proved the modularity theorem for semistable elliptic curves, from which Fermat's Last Theorem follows by proof by contradiction. In this proof method, one assumes the opposite of what is to be proved and shows that, if that were true, it would create a contradiction. The contradiction shows that the assumption must have been incorrect, requiring the conclusion to hold.
The proof falls roughly in two parts: In the first part, Wiles proves a general result about "lifts", known as the "modularity lifting theorem". This first part allows him to prove results about elliptic curves by converting them to problems about Galois representations of elliptic curves. He then uses this result to prove that all semistable curves are modular, by proving that the Galois representations of these curves are modular.
Mathematical detail of Wiles's proof
Overview
Wiles opted to attempt to match elliptic curves to a countable set of modular forms. He found that this direct approach was not working, so he transformed the problem by instead matching the Galois representations of the elliptic curves to modular forms. Wiles denotes this matching (or mapping) that, more specifically, is a ring homomorphism:

R → T

where R is a deformation ring and T is a Hecke ring.

Wiles had the insight that in many cases this ring homomorphism could be a ring isomorphism (Conjecture 2.16 in Chapter 2, §3 of the 1995 paper). He realised that the map between R and T is an isomorphism if and only if two abelian groups occurring in the theory are finite and have the same cardinality. This is sometimes referred to as the "numerical criterion". Given this result, Fermat's Last Theorem is reduced to the statement that two groups have the same order. Much of the text of the proof leads into topics and theorems related to ring theory and commutative algebra. Wiles's goal was to verify that the map R → T is an isomorphism and ultimately that R = T. In treating deformations, Wiles defined four cases, with the flat deformation case requiring more effort to prove; it is treated in a separate article in the same volume entitled "Ring-theoretic properties of certain Hecke algebras".
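A hedged paraphrase of the numerical criterion in symbols (the notation follows standard expositions of the proof rather than Wiles's paper verbatim): writing 𝔭 for the kernel of the composite map R → T → O and η for the congruence ideal of T, the surjection R → T is an isomorphism of complete intersections precisely when

\[
\#\bigl(\mathfrak{p}/\mathfrak{p}^{2}\bigr) \;\le\; \#\bigl(\mathcal{O}/\eta\bigr) \;<\; \infty ,
\]

the two sides being the orders of the two finite abelian groups mentioned above.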
Gerd Faltings, in his bulletin, gives a commutative diagram (p. 745) summarising these maps; the diagram itself is not reproduced here. Its conclusion is that R = T, indicating a complete intersection. Since Wiles could not show R = T directly, he did so via lifts.
In order to perform this matching, Wiles had to create a class number formula (CNF). He first attempted to use horizontal Iwasawa theory but that part of his work had an unresolved issue such that he could not create a CNF. At the end of the summer of 1991, he learned about an Euler system recently developed by Victor Kolyvagin and Matthias Flach that seemed "tailor made" for the inductive part of his proof, which could be used to create a CNF, and so Wiles set his Iwasawa work aside and began working to extend Kolyvagin and Flach's work instead, in order to create the CNF his proof would require. By the spring of 1993, his work had covered all but a few families of elliptic curves, and in early 1993, Wiles was confident enough of his nearing success to let one trusted colleague into his secret. Since his work relied extensively on using the Kolyvagin–Flach approach, which was new to mathematics and to Wiles, and which he had also extended, in January 1993 he asked his Princeton colleague, Nick Katz, to help him review his work for subtle errors. Their conclusion at the time was that the techniques Wiles used seemed to work correctly.
Wiles's use of Kolyvagin–Flach would later be found to be the point of failure in the original proof submission, and he eventually had to revert to Iwasawa theory and a collaboration with Richard Taylor to fix it. In May 1993, while reading a paper by Mazur, Wiles had the insight that the 3/5 switch would resolve the final issues and would then cover all elliptic curves.
General approach and strategy
Given an elliptic curve E over the field of rational numbers Q, for every prime power ℓ^n, there exists a homomorphism from the absolute Galois group

Gal(Q̄/Q)

to

GL2(Z/ℓ^nZ),

the group of invertible 2 by 2 matrices whose entries are integers modulo ℓ^n. This is because E(Q̄), the points of E over Q̄, form an abelian group on which Gal(Q̄/Q) acts; the subgroup of elements x such that ℓ^n x = 0 is just (Z/ℓ^nZ)^2, and an automorphism of this group is a matrix of the type described.
Less obvious is that given a modular form of a certain special type, a Hecke eigenform with eigenvalues in Q, one also gets a homomorphism from the absolute Galois group

Gal(Q̄/Q) → GL2(Z/ℓ^nZ).

This goes back to Eichler and Shimura. The idea is that the Galois group acts first on the modular curve on which the modular form is defined, thence on the Jacobian variety of the curve, and finally on the points of ℓ-power order on that Jacobian. The resulting representation is not usually 2-dimensional, but the Hecke operators cut out a 2-dimensional piece. It is easy to demonstrate that these representations come from some elliptic curve, but the converse is the difficult part to prove.
Instead of trying to go directly from the elliptic curve to the modular form, one can first pass to the mod ℓ^n representation for some ℓ and n, and from that to the modular form. In the case ℓ = 3 and n = 1, results of the Langlands–Tunnell theorem show that the mod 3 representation of any elliptic curve over Q comes from a modular form. The basic strategy is to use induction on n to show that this is true for ℓ = 3 and any n, so that ultimately there is a single modular form that works for all n. To do this, one uses a counting argument, comparing the number of ways in which one can lift a mod ℓ^n Galois representation to mod ℓ^(n+1) and the number of ways in which one can lift a mod ℓ^n modular form. An essential point is to impose a sufficient set of conditions on the Galois representation; otherwise, there will be too many lifts and most will not be modular. These conditions should be satisfied for the representations coming from modular forms and those coming from elliptic curves.
3–5 trick
If the original representation has an image which is too small, one runs into trouble with the lifting argument, and in this case, there is a final trick which has since been studied in greater generality in the subsequent work on the Serre modularity conjecture. The idea involves the interplay between the mod 3 and mod 5 representations. In particular, if the mod-5 Galois representation associated to a semistable elliptic curve E over Q is irreducible, then there is another semistable elliptic curve E′ over Q such that its associated mod-5 Galois representation is isomorphic to that of E and such that its associated mod-3 Galois representation is irreducible (and therefore modular by Langlands–Tunnell).
Structure of Wiles's proof
In his 108-page article published in 1995, Wiles divides the subject matter up into the following chapters (preceded here by page numbers):
Introduction
443
Chapter 1
455 1. Deformations of Galois representations
472 2. Some computations of cohomology groups
475 3. Some results on subgroups of GL2(k)
Chapter 2
479 1. The Gorenstein property
489 2. Congruences between Hecke rings
503 3. The main conjectures
Chapter 3
517 Estimates for the Selmer group
Chapter 4
525 1. The ordinary CM case
533 2. Calculation of η
Chapter 5
541 Application to elliptic curves
Appendix
545 Gorenstein rings and local complete intersections
Gerd Faltings subsequently provided some simplifications to the 1995 proof, primarily in switching from geometric constructions to rather simpler algebraic ones. The book of the Cornell conference also contained simplifications to the original proof.
Overviews available in the literature
Wiles's paper is over 100 pages long and often uses the specialised symbols and notations of group theory, algebraic geometry, commutative algebra, and Galois theory. The mathematicians who helped to lay the groundwork for Wiles often created new specialised concepts and technical jargon.
Among the introductory presentations are an email which Ribet sent in 1993; Hesselink's quick review of top-level issues, which gives just the elementary algebra and avoids abstract algebra; or Daney's web page, which provides a set of his own notes and lists the current books available on the subject. Weston attempts to provide a handy map of some of the relationships between the subjects. F. Q. Gouvêa's 1994 article "A Marvelous Proof", which reviews some of the required topics, won a Lester R. Ford award from the Mathematical Association of America. Faltings' 5-page technical bulletin on the matter is a quick and technical review of the proof for the non-specialist. For those in search of a commercially available book to guide them, he recommended that those familiar with abstract algebra read Hellegouarch, then read the Cornell book, which is claimed to be accessible to "a graduate student in number theory". The Cornell book does not cover the entirety of the Wiles proof.
See also
Abstract algebra
p-adic number
Semistable curves
References
Bibliography
(Cornell, et al.)
See review
See also
Simon Singh, edited version of a ~2,000-word essay published in Prometheus magazine, describing Andrew Wiles's successful journey.
External links
The Proof, an edition of the PBS television series NOVA about Andrew Wiles's effort to prove Fermat's Last Theorem; it was also broadcast on BBC Horizon and UTV/Documentary as Fermat's Last Theorem.
Wiles, Ribet, Shimura–Taniyama–Weil and Fermat's Last Theorem
Are mathematicians finally satisfied with Andrew Wiles's proof of Fermat's Last Theorem? Why has this theorem been so difficult to prove?, Scientific American, 21 October 1999
Explanations of the proof (varying levels)
Overview of Wiles's proof, accessible to non-experts, by Henri Darmon
Very short summary of the proof by Charles Daney
140-page student work-through of the proof, with exercises, by Nigel Boston
Galois theory
Fermat's Last Theorem
1995 in science
Mathematical proofs
Phenolates
https://en.wikipedia.org/wiki/Phenolates

Phenolates (also called phenoxides) are anions, salts, and esters of phenols, containing the phenolate ion. They may be formed by reaction of phenols with a strong base.
Properties
Alkali metal phenolates, such as sodium phenolate, hydrolyze in aqueous solution to form basic solutions. At pH = 10, phenol and phenolate are present in approximately 1:1 proportions.
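That 1:1 point reflects the pKa of phenol, which is approximately 10. A standard Henderson–Hasselbalch calculation (assuming ideal dilute-solution behaviour) gives the ratio at any pH:

\[
\frac{[\mathrm{C_6H_5O^-}]}{[\mathrm{C_6H_5OH}]} = 10^{\,\mathrm{pH}-\mathrm{p}K_\mathrm{a}} ,
\]

so at pH = pKa ≈ 10 the ratio is 10^0 = 1.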
The phenoxide anion (also called phenolate) is a strong nucleophile, with a nucleophilicity comparable to that of carbanions or tertiary amines. Generally, oxygen attack of phenoxide anions is kinetically favored, while carbon attack is thermodynamically preferred (see thermodynamic versus kinetic reaction control). Mixed oxygen/carbon attack, and with it a loss of selectivity, is usually observed if the reaction rate reaches diffusion control.
Uses
Alkyl aryl ethers can be synthesized through the Williamson ether synthesis by treating sodium phenolate with an alkyl halide or a dialkyl sulfate:
C6H5ONa + CH3I → C6H5OCH3 + NaI
C6H5ONa + (CH3O)2SO2 → C6H5OCH3 + (CH3O)SO3Na
Production of salicylic acid
Salicylic acid is produced in the Kolbe–Schmitt reaction between carbon dioxide and sodium phenolate.
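A hedged sketch of the overall transformation, in the same notation as the ether syntheses above (the carboxylation is run under CO2 pressure at elevated temperature; the acid shown in the work-up step is one common choice, not the only one):

C6H5ONa + CO2 → C6H4(OH)(COONa)
C6H4(OH)(COONa) + H2SO4 → C6H4(OH)(COOH) + NaHSO4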
See also
Sodium phenolate
References
RadExPro seismic software
https://en.wikipedia.org/wiki/RadExPro%20seismic%20software

RadExPro is a Windows-based seismic processing software system produced by RadExPro Seismic Software LLC, based in Georgia. It is suitable for in-field QC (both online and offline) and processing of 3D and 2D marine and on-land seismic data, advanced processing of HR/UHR offshore seismic data, and onshore near-surface seismic reflection, refraction, MASW, and VSP processing.
Marine data collected with a broad range of parameters and equipment (single- or multi-channel; boomer, sparker, or airgun; 2D or 3D), and high-resolution marine data in particular, benefits from in-depth processing in RadExPro, which can reveal more detail and extract more geologic information for presentation.
References
External links
Seismology measurement
Geophysics
Geology software
Aircraft maintenance engineer (Canada)
https://en.wikipedia.org/wiki/Aircraft%20maintenance%20engineer%20%28Canada%29

In Canada, an aircraft maintenance engineer (AME) is a person who is responsible for signing the maintenance release of certified aircraft and who is licensed to do so by the national airworthiness authority, Transport Canada (TC). Their job is to ensure that aircraft are maintained in a safe condition.
The applicant for an AME licence must be at least 21 years old. Aircraft maintenance engineers must complete a training course at a TC approved training organization (ATO), which are mostly Canadian vocational colleges. There are also accepted distance learning courses. A period of apprenticeship prior to writing the licensing examinations is required. Upon successful completion they are granted an AME licence, which is valid for ten years and may be renewed.
AMEs retain their recency by completing maintenance or related work. The Canadian Aviation Regulations require that once the holder's licence is more than two years old that they complete six months worth of work in the previous two years performing or supervising aircraft maintenance, act in an executive capacity in a maintenance organization, or teach or supervise teaching of aviation maintenance at an approved training organization.
Responsibilities and licensing
Under Canadian federal law, the release of maintenance work performed on aircraft in Canada – especially "transport category" fixed-wing aircraft or turbine-powered helicopters – must be accomplished by a person with specific training and licensing. These persons are individually licensed by the Canadian federal government through TC and are known as aircraft maintenance engineers, or AMEs. While the term AME is not recognized by the provincial engineering associations, AMEs act on behalf of the Minister of Transport to ensure the safety of the Canadian public with regard to the work performed during maintenance of certified aircraft. The AME is not required to actually conduct the maintenance work they sign for, but must supervise non-licensed personnel or inspect the work to be signed for, to the extent necessary to satisfy themselves that the work was completed correctly.
Not all aircraft maintenance releases require the signature of an AME in Canada. For work on Canadian aircraft conducted outside Canada, a person licensed by another country that has a bilateral agreement with TC may sign. In the case of amateur-built aircraft the owner may sign and for owner-maintenance category aircraft the owner may sign if they are also a licensed pilot. In Canada ultralight aircraft, hang gliders and paragliders as well as model aircraft do not require signatures for release to return to flight. In the case of aircraft parts maintained on the bench (i.e. while removed from the aircraft) persons authorized by an Approved Maintenance Organization (AMO) may also sign the release, whether they hold an AME licence or not. But the maintenance release for the subsequent installation of such parts into an aircraft may only be made by the holder of an AME licence.
Canada has no legal system that requires the person who performs aircraft maintenance to hold a licence. Canada requires that the person who certifies the work has inspected it for accuracy and correct completion. The Canadian AME licence allows the holder to both perform AND to certify their own maintenance work, or to certify the maintenance work performed by an unlicensed person. This type of licence is unlike the system used in the United States of America, whereby the FAA issues licences to persons who perform maintenance work on aircraft as technicians by way of "ratings" (the "airframe" or "powerplant" rating or the combined "airframe and powerplant" rating) and a separate licence to accomplish the "Inspection" certification – the I.A certificate. The US has focused on "the performance of work", while Canada and the other Commonwealth countries make a distinction between actually working on the machines and inspecting them for safety.
Ratings
The AME licence may be endorsed with one or more ratings. These are:
M1
Non-turbojet aircraft, under max takeoff weight, and a passenger capacity of 19 or less.
M2
All aircraft not included in M1, excluding balloons, but including all airframes, engines, propellers, components, structures and systems of those aircraft.
Note: Holders of either an M1 or M2 rated AME licence also have maintenance release privileges for all turbine powered helicopters and SFAR 41C aeroplanes, including their associated variants and derivatives.
E
Aircraft electronic systems, including communication, pulse, navigation, auto flight, flight path computation, instruments and the electrical elements of other aircraft systems, and any structural work directly associated with the maintenance of those systems
S
Aircraft structures, including all airframe structures.
Note: Holders of BOTH M1 and M2 licence also have the privileges of both the "E" and "S" licences. Some systems (AFCS, HUD, etc.) require specialized training and the licence holder must be current and familiar with the system being signed for.
Balloons
Issuance
Upon request of licence issuance, applicants shall include proof of age, training, knowledge, experience and skill as follows:
Age
Prior to licence issue, the applicant shall have attained the age of 21 years. As proof of age, the following documents are acceptable:
Canadian citizenship certificates;
Birth or baptismal certificates;
Passports; or
Any Federal or provincial identifying document showing the applicant's birth date.
Where proof of age cannot be provided by means of a document referred to in any of subparagraphs (i) to (iv), a declaration of age may be accepted in lieu.
Training
Applicable training may be obtained by means of distance learning courses or traditional college. In any case, the organization has to be approved by Transport Canada.
With some exceptions, an applicant shall successfully complete basic training applicable to the rating. As proof of training, the applicant shall provide a certificate of successful completion of an acceptable aircraft maintenance training course. Where the applicant is seeking experience credit for the training, the certificate shall be issued by an Approved Training Organizations (ATO).
Knowledge
Transport Canada approved training courses include technical examinations on the subjects covered by the course. Applicants shall successfully complete all the applicable examinations for the subjects concerned, conducted by the ATO in accordance with its approved procedures. As proof, the applicants shall submit a certificate or letter, issued by the ATO, attesting to the successful completion of the examinations.
Experience
Applicants shall have acquired the applicable amount of total, specialty, and civil aviation maintenance experience set forth in Appendix A. As proof of experience, the applicants shall submit a personal log book or equivalent document signed by the persons responsible for the maintenance release of the work items recorded. At the time of application, the applicants shall have acquired all but six months of the required total experience. Credit toward the total aviation maintenance experience requirement shall be granted for time spent in approved basic training, in the ratio of one month's credit for each 100 hours of training, up to a maximum of:
24 months for M or E rating applicants.
18 months for S rating applicants.
Therefore, a graduate from an ATO with a curriculum of 1,800 hours would qualify for 18 months of credit.
Experience requirements expressed in months are predicated upon full-time employment of 1800 working hours per year. Applicants with part-time experience acquired at a lower rate than this may convert their actual working hours to months at the rate of one month for each 150 working hours, but in no case can a higher rate of work be used to obtain more than one month's credit for each actual calendar month worked.
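A small sketch of the credit arithmetic described above (the function names and input handling are our own illustration, not regulatory text):

```python
def training_credit_months(training_hours, rating):
    """One month of experience credit per 100 hours of approved basic
    training, capped at 24 months (M or E ratings) or 18 months (S)."""
    cap = 18 if rating == "S" else 24
    return min(training_hours // 100, cap)

def part_time_credit_months(working_hours, calendar_months_worked):
    """Part-time work converts at one month per 150 working hours, but
    never more than one month per actual calendar month worked."""
    return min(working_hours // 150, calendar_months_worked)

# The ATO graduate example from the text: 1,800 hours -> 18 months.
assert training_credit_months(1800, "M") == 18
assert part_time_credit_months(900, 12) == 6
```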
Maintenance of military aircraft, or parts intended for installation on military aircraft, may be counted toward the total and specialty experience requirements, but not toward the civil aviation experience requirement. Maintenance of ultra-light, advanced ultra-light, amateur built, or owner maintained aircraft, does not qualify for any experience credit.
Skill
Applicants shall have performed a representative selection of eligible maintenance tasks, over the full range of applicable systems and structures. These tasks must cover at least 70 percent of the items listed in Appendix B that are applicable to the rating sought and to the aircraft, systems or components for which the experience is claimed.
Proof of having completed aircraft maintenance tasks shall take the form of a certification by the AME, or equivalent person who supervised the work. The certification statement shall include the date, aircraft type, registration mark, or component serial number as applicable.
Validity and Recency
Validity period
Unless surrendered, suspended, or cancelled, an AME licence remains valid until the date indicated on the licence. Transport Canada Advisory Circular (AC) No. 566-003 indicates that the licence is valid for ten years, calculated after the applicant's last birthday; the Canadian Aviation Regulations, however, still indicate a six-year validity period.
Recency requirements
No person shall exercise the privileges of an AME licence unless, within the preceding 24 months; they have successfully completed the regulatory requirements examination, or have, for at least six months:
Performed aircraft maintenance;
Supervised the performance of maintenance, either directly or in an executive capacity; or
Provided aviation maintenance instruction within an ATO, or an approved training program in an AMO or directly supervised the delivery of such instructions.
An AME who attempts the regulatory requirements examination as required by subsection 566.05(1) and fails will not be entitled to renewal until the examination has been successfully completed.
Professional status
The issue of the divided loyalty that is inherent upon the Canadian AME as both a private sector work performer and a Ministerial delegated certifier, of their own work and of others', was the focus of a 1988 report, which noted that in contrast to some other countries which had done so, "...there is a need in Canada to develop our own perspectives on the phenomenon of inspection."
The Aircraft Maintenance Engineers of Canada/Techniciens D'Entretien 'D'Aeronefs du Canada (AMEC/TEAC) is the professional association for AMEs at national level. AMEC/TEAC is financed by the six regional associations; Atlantic, Quebec, Ontario, Central, Western and Pacific. It changed its name in 2019 from the Canadian Federation of Aircraft Maintenance Engineers Associations, (CFAMEA).
The aircraft maintenance engineer qualification and licence were instituted in 1920, when the Canadian Air Regulations introduced Air Engineers and Air Engineer certificates into Canadian aeronautics law. The first AME licence issued in Canada, Air Engineer Licence #1, was issued on 20 April 1920 to Mr. Robert A. McCombie by the Canadian Air Board.
See also
Aircraft maintenance
Aircraft maintenance engineer
Aircraft maintenance technician
Groundcrew
References
External links
Aircraft Maintenance Engineers of Canada (AMEC/TEAC) website.
Aircraft maintenance
Aviation licenses and certifications
Aviation in Canada
Byssonectria fusispora
https://en.wikipedia.org/wiki/Byssonectria%20fusispora

Byssonectria fusispora is a species of apothecial fungus belonging to the family Pyronemataceae.
This is a European species appearing as bright yellow-orange discs up to 3 mm in diameter thickly clustered on soil and rotting plant material, often at fire sites.
References
Byssonectria fusispora at Index Fungorum
Pezizales
Fungi described in 1846
Fungus species
Pseudospectral knotting method
https://en.wikipedia.org/wiki/Pseudospectral%20knotting%20method

In applied mathematics, the pseudospectral knotting method is a generalization and enhancement of the standard pseudospectral method for optimal control. Introduced by I. Michael Ross and F. Fahroo in 2004, it forms part of the collection of the Ross–Fahroo pseudospectral methods.
Definition
According to Ross and Fahroo, a pseudospectral (PS) knot is a double Lobatto point; i.e. two boundary points coinciding. At this point, information (such as discontinuities, jumps, dimension changes etc.) is exchanged between two standard PS methods. This information exchange is used to solve some of the most difficult problems in optimal control, known as hybrid optimal control problems.
In a hybrid optimal control problem, an optimal control problem is intertwined with a graph problem. A standard pseudospectral optimal control method is incapable of solving such problems; however, through the use of pseudospectral knots, the graph information can be encoded at the double Lobatto points, thereby allowing a hybrid optimal control problem to be discretized and solved using powerful software such as DIDO.
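A minimal numerical sketch of what a PS knot looks like on a grid (our own illustration using Legendre–Gauss–Lobatto points; it is not DIDO's implementation):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def lgl_nodes(n):
    """Legendre-Gauss-Lobatto nodes on [-1, 1]: the two endpoints plus
    the roots of P_n', the derivative of the degree-n Legendre polynomial."""
    interior = Legendre.basis(n).deriv().roots().real
    return np.concatenate(([-1.0], interior, [1.0]))

def knotted_grid(n, t0, t_knot, tf):
    """Two PS segments [t0, t_knot] and [t_knot, tf]. The knot t_knot is a
    Lobatto endpoint of both segments -- a 'double Lobatto point' where
    the two discretizations exchange boundary information."""
    x = lgl_nodes(n)
    to_interval = lambda a, b: a + (b - a) * (x + 1.0) / 2.0
    return to_interval(t0, t_knot), to_interval(t_knot, tf)

left, right = knotted_grid(8, 0.0, 2.0, 5.0)
assert left[-1] == right[0] == 2.0  # the same point belongs to both segments
```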
Applications
PS knots have found applications in various aerospace problems such as the ascent guidance of launch vehicles and advancing the Aldrin Cycler using solar sails.
PS knots have also been used for anti-aliasing of PS optimal control solutions and for capturing critical information in switches when solving bang-bang-type optimal control problems.
Software
The PS knotting method was first implemented in the MATLAB optimal control software package, DIDO.
See also
Legendre pseudospectral method
Chebyshev pseudospectral method
Ross–Fahroo lemma
Ross' π lemma
Ross–Fahroo pseudospectral methods
References
Optimal control
Numerical analysis
Control theory
Nephelescope
https://en.wikipedia.org/wiki/Nephelescope

A nephelescope is a device invented by James Pollard Espy to measure the drop in temperature of a gas from a reduction in pressure; it was originally used to explore the formation of clouds.
Original design
The original design consisted of an air compression pump (a), a vessel (b), and a barometer (c).
Air is pumped into the vessel until a desired pressure is reached; the stopcock is then closed and the temperature allowed to equilibrate. The stopcock is then opened, allowing the pressure in the container to equilibrate with the atmosphere, and then closed again.
The air inside the container is now colder. As it warms back up, the pressure inside the container rises above atmospheric pressure again. This increase in pressure can be used to work out the number of degrees by which the container had been cooled.
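A minimal reconstruction of that calculation, assuming ideal-gas behaviour at constant volume (Gay-Lussac's law; the notation is ours, not Espy's): with the stopcock closed, the cold air at atmospheric pressure p_atm warms back to ambient temperature T_ambient, reaching a final pressure p_final, so

\[
\frac{p_{\text{atm}}}{T_{\text{cold}}} = \frac{p_{\text{final}}}{T_{\text{ambient}}}
\quad\Longrightarrow\quad
\Delta T = T_{\text{ambient}} - T_{\text{cold}} = T_{\text{ambient}}\!\left(1 - \frac{p_{\text{atm}}}{p_{\text{final}}}\right),
\]

with temperatures on an absolute scale.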
Later developments
A later design consisted of an air pump receiver (a) connected to a flask (c) by an intervening stopcock (b). Air was pumped out of the receiver, then the stopcock was opened. One advantage of using negative pressure was that a glass vessel could be used, which allowed the observation of condensation and droplets resulting from the drop in temperature. To observe this in a dry atmosphere, the air first had to be moistened by exposure to water.
Historical significance
The nephelescope enabled Espy to predict the change in heat of air as water vapor became cloud. He showed that when dry air was used instead of moist air, the temperature was reduced by about twice as much. In other words, latent heat released by the condensation of water offset some of the cooling from the expansion of moist air. Since moist air is already lighter than dry air, the warmer and lighter moist air in clouds would continue to rise and cool, forcing more vapor to condense, which had consequences for meteorological theories of the time.
The nephelescope has been described as an "early cloud-chamber".
References
Temperature
TRACE (computer program)
https://en.wikipedia.org/wiki/TRACE%20%28computer%20program%29

TRACE is a high-precision orbit determination and orbit propagation program. It was developed by The Aerospace Corporation in El Segundo, California. An early version ran on the IBM 7090 computer in 1964. The Fortran source code can be compiled for any platform with a Fortran compiler.
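To give a flavour of the core computation such a tool performs, here is a minimal two-body propagator sketch (a toy illustration; TRACE's actual force models, integrators, and estimation machinery are far more elaborate and are not shown here):

```python
import numpy as np

MU_EARTH = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def two_body_accel(r):
    """Point-mass gravity: a = -mu * r / |r|^3."""
    return -MU_EARTH * r / np.linalg.norm(r) ** 3

def propagate(r0, v0, dt, steps):
    """Fixed-step RK4 integration of the two-body equations of motion.
    State y = [position (km), velocity (km/s)]."""
    y = np.concatenate([r0, v0])
    f = lambda y: np.concatenate([y[3:], two_body_accel(y[:3])])
    for _ in range(steps):
        k1 = f(y)
        k2 = f(y + 0.5 * dt * k1)
        k3 = f(y + 0.5 * dt * k2)
        k4 = f(y + dt * k3)
        y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[:3], y[3:]

# Near-circular low orbit: after ~one period the radius should still be ~7000 km.
r, v = propagate(np.array([7000.0, 0.0, 0.0]), np.array([0.0, 7.5461, 0.0]), 10.0, 583)
```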
When Satellite Tool Kit's high-precision orbit propagator and parameter and coordinate frame transformations underwent an Independent Verification and Validation effort in 2000, TRACE v2.4.9 was the standard against which STK was compared.
As of 2013, TRACE is still used by the U.S. Government and some of its technical contractors.
References
Astrophysics
Mathematical software
Physics software
Open Identity Exchange
https://en.wikipedia.org/wiki/Open%20Identity%20Exchange

The Open Identity Exchange (OIX) is a non-profit organization that works to accelerate the adoption of digital identity services based on open standards. It is technology-agnostic and operates collaboratively across both the private and public sectors.
History
Genesis
Shortly after coming into office, the Obama administration asked the General Services Administration (GSA) how to leverage open identity technologies to help the American public interact more easily and efficiently with federal websites, such as those of the National Institutes of Health (NIH), the Social Security Administration (SSA), and the Internal Revenue Service (IRS).
At the 2009 RSA Conference, the GSA sought to build a public/private partnership with the OpenID Foundation (OIDF) and the Information Card Foundation (ICF) to craft a workable identity information framework that would establish the legal and policy precedents needed to establish trust for Open ID transactions.
This partnership eventually developed a trust framework model. Further meetings were held at the Internet Identity Workshop in November 2009, resulting in OIDF and ICF forming a joint steering committee. The committee's task was to study the best implementation options for the newly created framework.
Foundation
The US Chief Information Officer recommended the formation of a non-profit corporation, the Open Identity Exchange (OIX). In January 2010, the OIDF and ICF approved grants to fund the creation of the Open Identity Exchange. Booz Allen Hamilton, CA Technologies, Equifax, Google, PayPal, Verisign, and Verizon were all members of either OIDF or ICF, and agreed to become founding members of OIX.
Trust
To trust that the Identity Provider is delivering accurate data, the following should be considered:
- Identity Providers must ensure that the Relying Party is legitimate (i.e., not a hacker or phisher).
- While direct trust agreements between relying parties and identity providers are a common solution, they become unmanageable at the scale of the Internet.
OIXnet
In 2014, OIX established the OIXnet trust registry, a global authoritative registry of business, legal, and technical requirements needed to ensure market adoption and global interoperability.
In 2015, the OIDF also announced plans to register all companies self-certifying conformance to OpenID Connect via the OpenID Certification Program on OIXnet.
Purpose
OIXnet is an official, online, and publicly accessible repository of documents and information relating to identity systems and participants, referred to as a “registry”. It functions as an official and centralised source of such documents and information, much like a government-operated recorder of deeds. Individuals and entities can register documents and information with the OIXnet registry to provide notice of their contents to the public. Members of the public seeking access to such documents or information can go to that single authoritative location to find them.
The OIXnet registry is designed to provide a single, comprehensive and authoritative location where documents and information relating to a specific purpose, such as identity systems, can be safely stored to notify others of certain facts. From this location, such documents and information can be accessed by interested stakeholders seeking such information.
Early participants
OIXnet was launched in 2015. The OpenID Foundation was the first registrant, registering the initial set of organisations, including Google, ForgeRock, Microsoft, NRI, PayPal and Ping Identity, certifying conformance to OpenID Connect. Additional registrations were added to OIXnet throughout 2015 and 2016, with 10 trusted identity services currently registered.
Status
The OIXnet registry was in a pilot phase as of 2016, registering new and diverse trust frameworks and communities of interest.
International Chapters
OIX developed a chapters policy in 2015 that allows regional OIX chapters to be established. In 2016, the OIX United Kingdom Chapter was approved by the OIX board and launched.
Leadership
The OIX board represents leaders in online identity in the internet, telecom, and data aggregation industries, concerned with both market expansion and information security.
Government relations
The OIX board met with Howard Schmidt in 2011 to discuss the public–private partnership envisioned in the NSTIC strategy.
The UK government's Cabinet Office joined the OIX at the board level, as it began the work on its Identity Assurance Programme, which is now GOV.UK Verify.
In 2015, the States of Jersey commissioned an OIX Discovery project to explore how the knowledge, expertise, and components of one of these models, the UK’s GOV.UK Verify identity assurance scheme, could be leveraged to provide a cost-effective solution to meet Jersey’s requirements.
Membership
The Open Identity Exchange currently has five executive members and over 50 general members.
Executive Members
Barclays
International Airlines Group
LexisNexis
Mastercard
NatWest Group
OIX UK Europe Chapter
At the beginning of 2015, the Cabinet Office requested Open Identity Exchange to begin exploring the legal, business, and pragmatic considerations of creating a self-sustaining UK ‘chapter’ of the Open Identity Exchange. Up until that point, OIX UK had operated as an independent UK entity, able to administer ‘directed funding’ from member organisations. It had received a series of grants from the UK Cabinet Office that were used for collaboratively funded projects.
An ad hoc board of advisers was formed of independent, experienced, public and private sector leaders who addressed policy considerations during this transition process. In addition to considering the role of OIX UK in the future, this board of advisers considered the private sector's needs for identity services, resulting in an ongoing OIX project.
The Open Identity Exchange board of directors approved an OIX chapters policy at the end of 2015, allowing the formation of individual chapters affiliated with OIX in various local markets. In April 2016 the OIX UK Europe Chapter appointed its board of directors.
White Papers
The OIX White Papers deliver joint research examining a wide range of challenges facing the open identity market and proposing possible solutions. They are written by experts in the field of technology, particularly open identity.
OIX
OIX: An Open Market Solution for Online Identity Assurance
Trust Frameworks
Trust Framework Requirements and Guidelines
The Personal Network: A New Trust Model and Business Model for Personal Data
Federated Online Attribute Exchange Initiatives
Personal Levels of Assurance (PLOA)
The Three Pillars of Trust
UK Identity Assurance Programme (IDAP)
Overview of Legal Liability in the IDAP (In development)
US National Strategy for Trusted Identities in Cyberspace (NSTIC)
Comments on US NSTIC Steering Group Draft Charter and Related Governance Issues
United States National Strategy for Trusted Identities in Cyberspace Identity Ecosystem Steering Committee Plenary and Governing Board Charter
OIX Response to "Models for a Governance Structure for the National Strategy for Trusted Identity in Cyberspace"
White Papers Published in 2016
Open Identity Exchange (OIX) White Papers focus on current issues and opportunities in emerging identity markets. OIX White Papers are intended to deliver value to the identity ecosystem and take one of two perspectives: a retrospective report on the outcome of a given project or pilot or a prospective discussion on a current issue or opportunity. OIX White Papers are authored by independent domain experts and are intended as summaries for a general business audience.
Recent published whitepapers include:
- Use of online activity as part of identity verification
- UK private sector needs for identity assurance
- Use of digital identity in the peer-to-peer economy
- Shared signals proof of concept
- Creating a digital identity in Jersey
- Just Giving and GOV.UK Verify
- Creating a pensions dashboard
- Could digital identities help transform consumers' attitudes and behavior towards savings?
- Digital identity across borders: opening a bank account in another EU country
- Generating Revenue and Subscriber Benefits: An Analysis of the ARPU of Identity
Projects
OIX projects deliver joint research to examine a wide range of challenges facing the open identity market and to provide possible solutions.
States of Jersey: Creating a Digital ID
The hypothesis was that the UK Government identity assurance model could be adapted for Jersey, with the support of certified UK IdPs and potential identity assurance hub providers, to meet the requirements of the States of Jersey (SoJ). The hypothesis also considered that this would create an attractive market opportunity in Jersey for one or more of these providers.
LIGHTest Project
This is a 3-year project that started in September 2016 and is partially funded from the European Union's Horizon 2020 research and innovation programme under G.A. No. 700321. The LIGHTest consortium consists of 14 partners from 9 European countries and is coordinated by Fraunhofer-Gesellschaft. The project looks to reach out beyond Europe, to build a global community.
LIGHTest (Lightweight Infrastructure for Global Heterogeneous Trust management in support of an open Ecosystem of Stakeholders and Trust schemes)
The objective of LIGHTest is to create a global cross-domain trust infrastructure that renders it transparent and easy for verifiers to evaluate electronic transactions. By querying different trust authorities worldwide and combining trust aspects related to identity, business, reputation, etc., it will become possible to conduct domain-specific trust decisions.
This is achieved by reusing existing governance, organization, infrastructure, standards, software, community, and know-how of the existing Domain Name System, combined with new innovative building blocks. This approach allows an efficient global rollout of a solution that assists decision-makers in their trust decisions. By integrating mobile identities into the scheme, LIGHTest also enables domain-specific assessments on Levels of Assurance for these identities.
GOV.UK Verify
The UK Government's Cabinet Office joined the OIX at board level as it began the work on its Identity Assurance Programme (IDAP). Through the OIX Directed Funding programme, a considerable number of projects continue to be carried out under OIX governance, the results of which have helped with the ongoing development of GOV.UK Verify. Work continues as GDS looks at how digital identities can be used in both the public and private sector.
GOV.UK Verify is built and maintained by the Government Digital Service (GDS), part of the Cabinet Office. The UK Government is committed to expanding GOV.UK Verify and helping to grow a market for identity assurance that can meet user needs in relation to central government services, as well as local, health, and private sector services. GOV.UK Verify uses certified companies to verify a user's identity to government; a certified company is a private company that works to high industry and government standards when verifying identities.
References
External links
OIXnet
Cloud standards
Password authentication
Federated identity
Identity management initiative
Computational trust
Information technology organisations based in the United Kingdom
Organisations based in the City of Westminster | Open Identity Exchange | [
"Technology",
"Engineering"
] | 2,141 | [
"Computer standards",
"Computational trust",
"Cloud standards",
"Cybersecurity engineering"
] |
40,362,619 | https://en.wikipedia.org/wiki/Panama%20Civil%20Defense%20Seismic%20Network | The Panama Civil Defense Seismic Network collects and studies ground motion from about 60 seismometers throughout Panama. These stations monitor volcanoes, tectonic activity, rivers, and tsunamis to give fast, real-time information and warnings about these potential hazards.
Earthquake and seismic risk mitigation
Geology of Panama
Seismological observatories, organisations and projects | Panama Civil Defense Seismic Network | [
"Engineering"
] | 73 | [
"Structural engineering",
"Earthquake and seismic risk mitigation"
] |
40,364,155 | https://en.wikipedia.org/wiki/Quarter%205-cubic%20honeycomb | In five-dimensional Euclidean geometry, the quarter 5-cubic honeycomb is a uniform space-filling tessellation (or honeycomb). It has half the vertices of the 5-demicubic honeycomb, and a quarter of the vertices of a 5-cube honeycomb. Its facets are 5-demicubes and runcinated 5-demicubes.
Related honeycombs
See also
Regular and uniform honeycombs in 5-space:
5-cube honeycomb
5-demicube honeycomb
5-simplex honeycomb
Truncated 5-simplex honeycomb
Omnitruncated 5-simplex honeycomb
Notes
References
Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] See p318
x3o3o x3o3o *b3*e - spaquinoh
Honeycombs (geometry)
6-polytopes | Quarter 5-cubic honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 255 | [
"Tessellation",
"Crystallography",
"Honeycombs (geometry)",
"Symmetry"
] |
40,364,178 | https://en.wikipedia.org/wiki/PS%20Power%20and%20Sample%20Size | PS is an interactive computer program for performing statistical power and sample size calculations.
Program description
The PS program can be used for studies with dichotomous, continuous, or survival response measures. The user specifies the alternative hypothesis in terms of differing response rates, means, survival times, relative risks, or odds ratios. Matched or independent study designs may be used. Power, sample size, and the detectable alternative hypothesis are interrelated. The user specifies any two of these three quantities and the program derives the third. A description of each calculation, written in English, is generated and may be copied into the user's documents. Interactive help is available. The program provides methods that are appropriate for matched and independent t-tests, survival analysis, matched and unmatched studies of dichotomous events, the Mantel-Haenszel test, and linear regression.
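As an illustration of the power/sample-size/detectable-alternative interrelation described above, here is a sketch using the standard normal-approximation formulas for comparing two independent means. This is not PS's internal method, and all inputs (effect size, standard deviation, alpha, power) are made up.

```python
# Power/sample-size interrelation for comparing two independent means,
# via the standard normal-approximation formulas.
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Subjects per group needed to detect a mean difference delta."""
    z_a = norm.ppf(1 - alpha / 2)          # two-sided test
    z_b = norm.ppf(power)
    return 2 * ((z_a + z_b) * sigma / delta) ** 2

def achieved_power(n, delta, sigma, alpha=0.05):
    """Power achieved with n subjects per group."""
    z_a = norm.ppf(1 - alpha / 2)
    return norm.cdf(delta / (sigma * (2 / n) ** 0.5) - z_a)

n = n_per_group(delta=5.0, sigma=10.0)     # specify alpha + power -> n
print(f"n per group: {n:.0f}")             # ~63
print(f"power at that n: {achieved_power(63, 5.0, 10.0):.3f}")
```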
The program can generate graphs of the relationships between power, sample size and the detectable alternative hypothesis. It can plot graphs of any two of these variables while holding the third constant. Linear or logarithmic axes may be used and multiple curves can be plotted on each graph. Graphs may be copied and pasted into other documents or programs for further editing.
Reviews
Reviews of this program have been published by McCrum-Gardner, Thomas and Krebs, Stawicki and Pezzullo.
Web version
A web-based version of the program is also available at https://statcomp2.app.vumc.org/ps/.
References
External links
PS Webpage
P3G : Public Population Project in Genomics and Society
CTSpedia
UCSF Biostatistics
Software Informer
Statistical software | PS Power and Sample Size | [
"Mathematics"
] | 346 | [
"Statistical software",
"Mathematical software"
] |
40,365,268 | https://en.wikipedia.org/wiki/Quarter%206-cubic%20honeycomb | In six-dimensional Euclidean geometry, the quarter 6-cubic honeycomb is a uniform space-filling tessellation (or honeycomb). It has half the vertices of the 6-demicubic honeycomb, and a quarter of the vertices of a 6-cube honeycomb. Its facets are 6-demicubes, stericated 6-demicubes, and {3,3}×{3,3} duoprisms.
Related honeycombs
See also
Regular and uniform honeycombs in 6-space:
6-cube honeycomb
6-demicube honeycomb
6-simplex honeycomb
Truncated 6-simplex honeycomb
Omnitruncated 6-simplex honeycomb
Notes
References
Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] See p318
Honeycombs (geometry)
7-polytopes | Quarter 6-cubic honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 252 | [
"Tessellation",
"Crystallography",
"Honeycombs (geometry)",
"Symmetry"
] |
40,366,664 | https://en.wikipedia.org/wiki/Quarter%208-cubic%20honeycomb | In eight-dimensional Euclidean geometry, the quarter 8-cubic honeycomb is a uniform space-filling tessellation (or honeycomb). It has half the vertices of the 8-demicubic honeycomb, and a quarter of the vertices of an 8-cube honeycomb. Its facets are 8-demicubes h{4,36}, pentic 8-cubes h6{4,36}, {3,3}×{32,1,1} and {31,1,1}×{31,1,1} duoprisms.
See also
Regular and uniform honeycombs in 8-space:
8-cube honeycomb
8-demicube honeycomb
8-simplex honeycomb
Truncated 8-simplex honeycomb
Omnitruncated 8-simplex honeycomb
Notes
References
Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] See p318
Honeycombs (geometry)
9-polytopes | Quarter 8-cubic honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 276 | [
"Tessellation",
"Crystallography",
"Honeycombs (geometry)",
"Symmetry"
] |
40,367,176 | https://en.wikipedia.org/wiki/Bottom-blown%20oxygen%20converter | The Bottom-blown Oxygen Converter or BBOC is a smelting furnace developed by the staff at Britannia Refined Metals Limited (“BRM”), a British subsidiary of MIM Holdings Limited (which is now part of the Glencore group of companies). The furnace is currently marketed by Glencore Technology. It is a sealed, flat-bottomed furnace mounted on a tilting frame that is used in the recovery of precious metals. A key feature is the use of a shrouded lance to inject oxygen through the bottom of the furnace, directly into the precious metals contained in the furnace, to oxidize base metals or other impurities as part of their removal as slag.
Introduction
Ores mined for their base metal content often contain precious metals, usually gold and silver. These have to be removed from the base metals as part of the refining processes used to purify the metals. In the case of copper electrolytic refining, the gold and silver fall to the bottom of the electrolytic refining cell as “slimes” that are subsequently treated to recover gold and silver as byproducts. In the case of lead refining, silver, gold, and other precious metals are typically removed using the Parkes process, in which zinc is added to the impure lead bullion to collect the silver, gold and other precious metals.
The BRM lead refinery at Northfleet in England uses the Parkes process followed by liquation and a vacuum induction retort to recover precious metals. The product of this process is a feed for the BBOC consisting of a mixture of lead, silver (60–75%), zinc (2–3%) and copper (2–3%), with trace amounts of gold. Prior to the development of the BBOC, BRM used cupellation in a 15 tonne (“t”) reverberatory cupellation furnace to recover the precious metals from this mixture. Three of these furnaces were used to produce 450 t of silver per year.
Cupellation works by exposing the mixture at high temperature to air or oxygen. The base metals, being less noble than silver and gold, react with the oxygen to form their oxides, which separate from the noble metals to form a slag that floats on the top of the residual metals (or “doré”). At BRM, the doré contains 99.7% silver.
To maximize the oxygen transfer from the blast air in the reverberatory furnace, a shallow bath is used, thus increasing the surface-area-to-volume ratio of the furnace.
A problem with using reverberatory furnaces for cupellation is that the zinc oxidizes first, forming a crust across the top of the molten material. This crust prevents the penetration of oxygen to the rest of the material, and so it has to be manually broken up and removed using a rabble bar. This is both labor-intensive and also results in the loss of some of the silver. Similarly, the oxidized lead slag has to be removed when it forms to maintain the operation, and its removal also results in loss of silver.
The BBOC was developed by BRM personnel as a way of reducing these and other problems, such as low energy efficiency and low oxygen utilization, associated with the reverberatory cupellation process.
Description
The BBOC furnace is a cylindrical steel vessel with a protective internal lining of refractory bricks. It is mounted on a tilting frame that allows it to be held at different angles at different stages of its operating cycle (see Figure 2). A hood is fixed over the top of the furnace, providing a seal that prevents lead and other fumes from escaping during the furnace’s operation (see Figure 1).
The key feature of the BBOC is the shrouded lance that passes through the refractory bricks at the bottom of the furnace. This lance allows oxygen to be injected directly into the molten metal contained in the furnace, away from the refractory lining. Doing so allows the region of high reaction rates to be removed from the vicinity of the lining, thus reducing its wear.
Because the oxygen is injected directly into the bath, rather than blown on top (as in the reverberatory cupellation furnace or top-blown rotary converters), the oxygen transfer is not impeded by the presence of the slag layer. This results in an oxygen utilization efficiency approaching 100%.
The lack of interference in the oxygen transfer by the slag layer has a couple of key benefits. The first is that the increased certainty in the estimation of oxygen utilization efficiency means that it is easier to calculate the endpoint of the process, making process control much easier. The second is that a thicker slag layer can be tolerated (because the oxygen does not have to pass through it), and this means that the losses of silver to the slag are reduced (because it is the silver at the interface between the metal and slag that becomes entrained during the removal of the slag and the thicker the slag layer, the smaller the silver content of the removed slag). BRM reported a decrease in the silver content of the BBOC slag compared to the reverberatory furnace slag of 50%.
BRM found that the reaction rate of the BBOC was 10–20 times that of its reverberatory cupellation furnace.
Refractory wear in the BBOC is largely confined to the slag line, at the top of the metal, where attack by litharge (lead oxide) is greatest. This is combated by using fused-grain, direct-bonded magnesite-chrome bricks to line the inside of the furnace shell.
Operation
Figure 2 shows the positions of the BBOC at various stages of the operating cycle.
The BBOC is held in an upright position during the charging stage. A solid or liquid charge is added using an overhead crane. The furnace is then tilted forward so that the lance is above the charge, and the charge is melted using an oil or natural gas burner that is inserted near the top of the furnace. Once the charge has been melted, the furnace is tilted back into the blowing position and oxygen is blown into the bath. Slag formed from the oxidation of lead and zinc is removed periodically by tilting the furnace forward again and pouring it off.
The oxygen flow rate during blowing for a three tonne capacity furnace is 20–30 Nm3/h. Zinc is initially oxidized to form a zinc oxide dross on the surface of the charge, but as lead oxide subsequently forms, a fluid slag of zinc and lead oxides is created. Most of the copper is removed at the same time as the lead. The final removal of copper to a level of 0.04% is undertaken at the end of the process by further additions of lead to collect the copper.
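As a rough plausibility check on these figures, the sketch below estimates the stoichiometric oxygen demand of a charge, taking the near-100% oxygen utilization described earlier. The 3 t charge composition and the assumption of one-half mole of O2 per mole of metal (simple MO oxides) are illustrative, not plant data.

```python
# Order-of-magnitude check on the quoted 20-30 Nm3/h oxygen flow,
# assuming near-100% oxygen utilization and an illustrative charge.
V_MOLAR = 22.414  # litres per mole of gas at normal conditions

charge_kg = {"Pb": 750.0, "Zn": 75.0, "Cu": 75.0}     # rest mostly silver
molar_mass = {"Pb": 207.2, "Zn": 65.38, "Cu": 63.55}  # g/mol

o2_mol = sum(0.5 * kg * 1000.0 / molar_mass[m]        # M + 1/2 O2 -> MO
             for m, kg in charge_kg.items())
o2_nm3 = o2_mol * V_MOLAR / 1000.0
print(f"O2 demand: {o2_nm3:.0f} Nm3")                 # ~67 Nm3
print(f"Blowing time at 25 Nm3/h: {o2_nm3 / 25:.1f} h")
```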
If the lance needs to be replaced at any time during operation, this is done by tilting the furnace forward until the lance is above the surface of the bath, where it can be removed and replaced without the contents of the furnace draining through the hole in the furnace lining.
The cupellation process finishes when the silver is about 99.7% pure. At this point, the silver is poured from the furnace and transferred to another furnace, where a flux is added to upgrade and remove the oxygen from the silver to produce market bullion of 99.9% purity.
History
Early development at BRM
Staff at BRM began work on an alternative to the conventional reverberatory cupellation furnace in the early 1980s. This included a review of the available technology, including the top-blown rotary converter ("TBRC"), on which test work was undertaken.
One of the first areas investigated was the use of oxygen-enriched blast air in the reverberatory furnace. This was “found to be of marginal benefit and not economically viable."
The BRM staff subsequently tried to increase the oxygen transfer rate by using lances submerged in the bath of the reverberatory furnace and found that there was some benefit in doing this. However, the wear rate of the lances was excessive and it was realized that the basic design of the furnace, with its shallow bath, was not conducive to the development of a high-intensity reactor.
The concept then evolved into a new furnace design, one that had a deep bath, in contrast to the reverberatory furnace design.
Initial tests of the bottom injection of oxygen were carried out on a small scale at Imperial College, London, using a nitrogen-shrouded tuyere. These showed that under certain conditions a protective accretion would form at the tip of the injector, and that oxygen utilization was high, with the oxidation reactions generating sufficient heat to keep the furnace hot until the final stages of refining when the impurity levels were low.
Additionally, the test work on the TBRC had shown that it had a high rate of refractory wear, due to the washing action of the slag caused by the rotation of the furnace, which provided additional pressure to develop an alternate process. The TBRC test work also resulted in low oxygen utilization (about 60%).
Based on the success of the small-scale tests, and with calculations indicating that the new design would have significant energy savings over the reverberatory furnace, the BRM staff built a 1.5 t pilot plant with a working volume of 150 liters (“L”). The oxygen injector was a fixed tuyere, located at the junction of the base and the side wall, with an annular nitrogen shroud.
The initial pilot plant tests showed that it was difficult to maintain the protective accretion that had been generated in the small-scale tests, due to the variation in temperature and bullion composition that occurred throughout the cupelling cycle. Without the accretion, the nitrogen shroud could not provide sufficient protection to the injector, and it burned back to the level of the refractory lining, which resulted in damage to the lining.
The solution eventually developed was the concept of the moveable lance system in place of the fixed tuyere that had been used initially. The lance was pushed further into the furnace as its tip was worn away.
The initial lance advancing system was manual, but the current automated system was subsequently developed.
Once a sustainable system had been developed in the pilot plant, and after three years of pilot plant development, a commercial, 3 t-scale BBOC was commissioned at BRM in 1986. Its use reduced the fuel consumption per tonne of silver by 85%, from 30 gigajoules per tonne (“GJ/t”) to 4.5 GJ/t and the exhaust gas volume from 32 000 Nm3/h to 7500 Nm3/h.
Commercialization
After the successful operation of the BBOC at BRM, MIM Holdings Limited (“MIM”) decided to license the technology to other smelter and refinery operators. Early adopters included Hindustan Zinc Limited, which by 1995 had two 1 t BBOC plants operating in India, and ASARCO Inc., which was operating a 3 t BBOC furnace at its Omaha, Nebraska, refinery.
Rand Refinery
The South African company Rand Refinery Limited rebuilt its smelter in 1986, incorporating two 1.5 t TBRCs and a small static reverberatory furnace for cupellation to produce doré bullion containing gold and silver. The original concept was to produce doré bullion directly from the TBRCs, but this proved impossible, as the oxidation stage could not be taken to completion while maintaining temperatures at which the doré would remain molten. Consequently, the reverberatory cupellation furnace was necessary to complete the process.
In January 1993, the management team of Rand Refinery decided to review alternate technologies to replace the TBRC–reverberatory furnace circuit, with the objective of having cupellation undertaken in a single stage. After evaluating the possibility of modifying the existing TBRCs by replacing the existing lance–burner combination with a separate lance and burner, and considering complete replacement of the TBRCs with an Ausmelt top-submerged lance furnace, Rand Refinery decided to replace one of the TBRCs with a 4 t BBOC. The remaining TBRC is used to treat litharge slag to recover the lead for sale.
The Rand Refinery BBOC was commissioned in 1994. The operators reported a 28% reduction in the operating costs when the BBOC’s costs were compared with those of the TBRC–reverberatory furnace combination. This included a 45% reduction in bulk oxygen costs and halving the number of operators required to run the plant. The BBOC’s refractory life was 13 weeks, compared to an average refractory life of 2 weeks for the TBRCs. Other maintenance costs also fell.
Broken Hill Associated Smelters
The Broken Hill Associated Smelters Proprietary Limited (“BHAS”) lead smelter, now owned by Nyrstar NV, has been the world’s largest lead smelter. Its staff was responsible for many significant technical developments in the lead smelting industry, including the updraft sinter plant and continuous lead refining.
Until 1990, BHAS recovered silver in a two-stage reverberatory cupellation process. This process suffered from low recoveries (80–83%), a long cycle time (4–5 days) that caused large in-process inventories, inefficient use of labor and energy, and poor workplace hygiene. After a test work program undertaken at Ausmelt’s facilities in Melbourne, BHAS switched to using a process based on the Sirosmelt top-submerged lance in June 1990.
The change to the lance-based furnace increased oxygen utilization to 95% and the cycle time was reduced to a little less than eight hours, “but the grade of the doré which could be economically produced was poor.” The doré from the new furnace still contained about 0.8% lead and 0.4% copper. It was also found impractical to cast anode plates of doré directly from the Sirosmelt furnace, so the Sirosmelt doré had to undergo a further refining step in a reverberatory furnace, together with a sodium nitrate flux.
Then, in 1996, BHAS decided to modernize the refining circuit and replaced the Sirosmelt silver refining furnace with a BBOC furnace. Commissioning of the modernized refining circuit was completed in 1999, and the lead throughput was increased by 11%, with the silver refining capacity increasing to over 400 t/y.
The BBOC process proved to be “generally successful”, although it did suffer some problems with the lance jamming that were attributed to higher than expected levels of zinc in the feed, due to problems removing the zinc in earlier stages of the refinery circuit. The higher levels of zinc also caused higher than expected refractory wear and excessive lance consumption, because the heat generated by oxidizing the zinc was greater than that of oxidizing lead.
The BBOC furnace proved to be capable of producing doré containing as little as 0.01% lead and less than 0.1% copper at a temperature around 1050 °C, but BHAS wanted to cast the doré directly into anode plates using an existing doré casting conveyor. Casting using the existing conveyor proved impossible at an operating temperature of 1050 °C, because the high thermal conductivity of the silver resulted in it freezing before it reached the molds. Consequently, BHAS decided to increase the operating temperature to 1100–1150 °C so that the silver remained liquid until cast into the anode molds. A side effect of this is that the lead and copper content of the product doré are higher than if the furnace is operated at 1050 °C, at 0.2% lead and 0.6% copper. Thermodynamic calculations have shown that this is unavoidable at this higher operating temperature.
Other lead smelters
Besides the smelters named so far, the BBOC has been licensed to the operators of the Trail smelter in British Columbia, the Belledune smelter in New Brunswick, the Noyelles Godault smelter in France, the Korea Zinc zinc smelter in Onsan, South Korea, and the lead smelter at Chanderiya in India.
Other applications
In addition to its use in recovering silver in lead refineries, the BBOC has been used to treat anode slimes from copper electrolytic refineries.
Anode slimes are composed of solid particles that do not dissolve in the electrolyte in the refining cells. These include the gold and silver present in the copper anodes being refined. As with silver recovery in lead smelting, reverberatory furnaces are often used in the copper refining industry for the purification and recovery of gold and silver from anode slimes. However, reverberatory furnaces suffer from similar disadvantages in copper anode doré production as in lead refineries, including a large inventory of gold held in the system. Other furnace types used include top-blown rotary converters and short rotary furnaces.
ASARCO Amarillo copper refinery
The ASARCO Amarillo copper refinery switched in 1991 from reverberatory furnace treatment of anode slimes to a BBOC to reduce the gold inventory. The original reverberatory furnace had a 15 t capacity. The production cycle of the reverberatory furnace was typically 7–10 days, with the final doré production being about 8 t per cycle.
A single 3 t capacity BBOC was installed, and it was found to increase rejection of selenium from the slimes, with a reduction in fluxing requirements of about 80%.
Sumitomo Metal Mining Niihama refinery
In the 1990s, the Niihama copper refinery, owned by Sumitomo Metal Mining Company Limited (“Sumitomo”), treated copper anode slimes generated in-house, together with anode slimes from Sumitomo’s Toyo refinery and lead refinery slime from the Harima Imperial Smelting Process smelter. A total of 1200 tonnes per year (“t/y”) of anode slimes and 400 t/y of lead refinery slimes were treated using a process flow sheet that included a chloridizing step to separate the lead as lead chloride (PbCl2) and a reverberatory-type doré furnace. It produced about 200 t of silver, 22 t of gold, 1.5 t of palladium, 300 kilograms (“kg”) of platinum and 40 kg of rhodium, as well as 60 t of selenium, 50 t of bismuth, 900 kg of tellurium and 150 t of antimony alloy annually.
The gold production doubled during the decade to 1996, as its concentration in anode slimes and the quantity of anode slimes increased. To enable this, Sumitomo decided in 1990 to upgrade the refinery, and as part of that upgrade, installed a 3.5 t-capacity BBOC to replace its reverberatory doré furnace in October 1992.
Sumitomo reported that, while the old oil-fired reverberatory furnace had served it well for many years, it had the following drawbacks:
its operation was labor-intensive
it had a low fuel efficiency
there was a high waste gas volume
the reaction rate was low.
Sumitomo investigated both the TBRC and BBOC furnaces before making a selection. It chose the BBOC over the TBRC technology because of the ease of control of the bath temperature, its high oxygen efficiency and its simple maintenance.
Sumitomo found that the impurity contents of BBOC doré anodes were high when the furnace was first commissioned, because the endpoint of the oxidation reactions, on which anode quality depends, was difficult to determine. It found that the endpoint could be established by measuring the oxygen content of the off-gas using oxygen sensors based on stabilized zirconia with an Fe/FeO reference electrode.
Sumitomo subsequently adapted the BBOC to allow the chloridizing step to be undertaken in the furnace, thus eliminating the need for a separate chloridizing furnace for lead chloride production. This was done in February 1994 and was reported to be “giving very good results.”
Takehara copper refinery
The Takehara copper refinery of the Mitsui Mining & Smelting Company Limited of Japan commissioned a BBOC in its precious metals department in 1993.
Prior to the installation of the BBOC, the Takehara refinery refined a mixture of copper and lead anode slimes in three reverberatory furnaces (two operating and one being rebricked) in a process that had a cycle time of 104 hours for refining 6 t of bullion.
The reverberatory furnaces were replaced with a single BBOC with a charge capacity of 6 t of feed. The cycle time was reduced to 50 hours. The use of the BBOC reduced the energy consumption from 74 GJ/t to 27 GJ/t and also had better bismuth elimination than the reverberatory furnaces.
Advantages
The following advantages have been reported for the BBOC:
very high oxygen efficiency – the injection of oxygen directly into the reaction zone within the furnace results in much greater oxygen efficiency (close to 100%) than with reverberatory furnaces (8% for the Niihama furnace) or top-blown rotary converters (about 30%)
reduced off-gas volume – the use of industrial oxygen and the high oxygen efficiency of the process means that excess air is not required to achieve the results. This reduces the off-gas volume and thus the cost of the off-gas train and handling equipment. Rand Refinery reported that the off-gas volume of the BBOC was about 75% of that of a TBRC with a special lance conversion and only 19% of that of top-submerged lance smelting. Niihama refinery reported that its BBOC had 15% of the off-gas volume of its reverberatory furnace while producing 1.8 times the product
higher reaction rates – by injecting the oxygen directly into the reaction zone, the reaction rates are much higher than in reverberatory furnaces where the oxygen has first to penetrate the slag layer. BRM reported a reaction rate per unit of furnace volume of 10–20 times that of the reverberatory furnace
lower refractory wear – Rand Refinery reported that the refractory linings of its TBRC furnaces needed replacing after approximately two weeks, while the linings of its BBOC furnace lasted about 14 weeks
lower precious metal inventories – a consequence of the higher reaction rates is that smaller furnace volumes are required and there are smaller cycle times. This results in lower precious metal inventories. In lead slimes bullion processing, the silver inventory was reduced from 4.5 t to 1.25 t after replacing a reverberatory furnace with a BBOC and at BRM the silver inventory fell from 11.5 t to 3.1 t with the introduction of the BBOC furnace
better energy efficiency – a supplementary burner is needed only during heating the charge and doré casting operations. During cupellation, the oxidation reactions provide sufficient heat to maintain temperature. There was a 92% reduction in fuel consumption per tonne of doré treated reported for the BBOC at the Niihama refinery
better product quality – BHAS reported that lead and copper levels in silver produced from the BBOC of 0.01% and 0.1% respectively were possible when the furnace was operating under design conditions, compared to 0.04% and 0.2% for the old reverberatory furnace, and 0.8% and 0.4% for the Sirosmelt furnace. Rand Refinery reported that a doré bullion of 99.2% was achievable. BRM reported that its doré is 99.7% silver
higher recoveries of precious metals – due to changes in the way the BBOC is operated compared to reverberatory furnaces, notably the ability to use deeper layers of slag, there is an increase in the recovery of precious metals compared to the reverberatory furnaces. Replacement of reverberatory furnaces with BBOC furnaces saw the direct silver recovery increase from 92.5% to 97.5% at BRM and from 70% to over 95% at Niihama
simple vessel design – the BBOC has a relatively simple vessel design, without the complex moving parts of TBRCs
good process control – the high oxygen utilization allows good process control, particularly when combined with an oxygen sensor in the off-gas system
lower labor requirements – the BBOC has a lower labor requirement than reverberatory furnaces, top-submerged lance furnaces and TBRCs
lower operating costs – lower labor requirements, lower fuel requirements and longer refractory life contributed to a 28.3% reduction in overall operating costs when the BBOC was installed at the Rand Refinery
lower capital cost – the BBOC is a simpler furnace than TBRC or top-submerged lance furnaces. Rand Refinery reported a capital cost comparison indicating that its BBOC option was 67% of the cost of a top-submerged lance option.
References
Metallurgy
Smelting
Metallurgical processes
Industrial furnaces | Bottom-blown oxygen converter | [
"Chemistry",
"Materials_science",
"Engineering"
] | 5,269 | [
"Smelting",
"Metallurgical processes",
"Metallurgy",
"Materials science",
"Industrial furnaces",
"nan"
] |
33,533,216 | https://en.wikipedia.org/wiki/Phytotechnology | Phytotechnology implements solutions to scientific and engineering problems in the form of plants. It is distinct from ecotechnology and biotechnology as these fields encompass the use and study of ecosystems and living beings, respectively. Current study of this field has mostly been directed into contaminant removal (phytoremediation), storage (phytosequestration) and accumulation (see hyperaccumulators). Plant-based technologies have become alternatives to traditional cleanup procedures because of their low capital costs, high success rates, low maintenance requirements, end-use value, and aesthetic nature.
Overview
Phytotechnology is the application of plants to engineering and science problems. Phytotechnology uses ecosystem services to provide a specifically engineered solution to a problem. Ecosystem services, broadly defined, fall into four categories: provisioning (i.e., production of food and water), regulating (i.e., the control of climate and disease), supporting (i.e., nutrient cycles and crop pollination), and cultural (i.e., spiritual and recreational benefits). Often only one of these ecosystem services is maximized in the design of a space. For instance, a constructed wetland may attempt to maximize the cooling properties of the system to treat water from a wastewater treatment facility before introduction to a river. The designed benefit is a reduction of water temperature for the river system, while the constructed wetland itself provides habitat and food for wildlife as well as walking trails for recreation. Most phytotechnology has focused on the abilities of plants to remove pollutants from the environment. Other technologies, such as green roofs, green walls, and bioswales, are generally considered phytotechnology. Taking a broad view, even parks and landscaping could be viewed as phytotechnology.
However, there is very little consensus over a definition of phytotechnology even within the field. The Phytotechnology Technical and Regulatory Guidance and Decision Trees, Revised defines phytotechnology as, "Phytotechnologies are a set of technologies using plants to remediate or contain contaminants in soil, groundwater, surface water, or sediments." The United Nations Environment Programme defines phytotechnology as, "the application of science and engineering to study problems and provide solutions involving plants." A third definition from the Department of Environmental Engineering Indonesia, gives it as, "a technology which is based on the application of plants as solar driven and living technology for improving environmental sanitation and conservation problems."
Rationale for use
In phytotechnology, the naturally existing properties of plants are used to accomplish defined outcomes through ecosystem services in a designed environment. A phytotechnologic system uses these properties, broadly the degradation or uptake of chemicals in the environment and the transport and storage of water, to change the output of the system. These mechanisms have evolved since the emergence of angiosperms and have become quite effective. The diversity of plants also gives versatility to a phytotechnologic system: plants from the native environment can handle many applications, while non-natives suit more specific projects (such as hyperaccumulators for heavy-metal removal). Ancillary benefits are also a factor: community use, educational use, tax credits, habitat creation, increased sustainability, and aesthetics are all benefits of phytotechnology.
The cost of the system is also lower than that of traditional remediation technologies in many cases, as there are no pumping systems, electricity costs, or other infrastructure costs. Even if the initial investment is higher in some cases (notably green roofs), the costs over the lifetime of the project will be lower.
Cautions against use
Plants will not tolerate certain conditions. Too much pollution, water, or salt, among other variables, can kill the plants in the system. The water solubility of the pollutants also affects the system. Plants have mechanisms to halt the uptake of substances and may not remove a contaminant completely within an acceptable time frame. The length of time in which the project must be completed is another limiting factor: many phytotechnologies take at least two years to reach maturity, and some could be designed as legacy projects with lifespans of 100 years or more. In more temperate climates the systems may become inactive or much less active in the winter months, and they may not be usable at all in arctic environments.
Mechanisms of action
There are many physiological properties of plants which can be used in phytotechnology. The mechanisms work synergistically to achieve the goals set by a project.
Phytosequestration
Phytosequestration is the ability of plants to sequester certain contaminants in the root zone. This is accomplished through several of the plant's physiological mechanisms. Phytochemicals exuded in the root zone can immobilize or precipitate the target contaminant. The transport proteins associated with the root can also irreversibly bind and stabilize target contaminants. Contaminants can also be taken up by the root and sequestered in the vacuoles of the root system.
Phytohydraulics
Phytohydraulics is the ability of plants to capture, transport and transpire water from the environment. This action in turn contains pollutants and controls the hydrology of the environment. This mechanism does not degrade the contaminant.
Rhizodegradation
Rhizodegradation is the enhancement of microbial degradation of contaminants in the rhizosphere. The presence of a contaminant in the soil naturally provides an environment for bacteria and fungi that can use the contaminant as a source of energy. The root systems of plants, in most cases, form a symbiotic relationship with the organisms in the soil. The oxygen and water transported by the roots allow for greater growth of beneficial soil microorganisms, enabling greater breakdown of the contaminant and quicker remediation. This is the primary means through which organic contaminants are remediated.
Rhizofiltration
Rhizofiltration is the adsorption onto, or absorption into, plant roots of contaminants in the solution surrounding the root zone.
Phytoextraction
Phytoextraction is the ability to take contaminants into the plant. The plant material is then removed and safely stored or destroyed.
Phytovolatilization
Phytovolatilization is the ability of plants to take up contaminants in the transpiration stream and then transpire them in volatile form; the contaminant is remediated by removal through the plant.
Phytodegradation
Phytodegradation is the ability of plants to take up and degrade the contaminants. Contaminants are degraded through internal enzymatic activity and photosynthetic oxidation/reduction.
References
United Nations. United Nations Environment Programme. Phytotechnologies: A Technical Approach in Environmental Management. 2003. Web. <http://wedocs.unep.org/bitstream/handle/20.500.11822/9159/fito.pdf?sequence=1&isAllowed=y>.
ITRC (Interstate Technology & Regulatory Council). 2009. Phytotechnology Technical and Regulatory Guidance and Decision Trees, Revised. PHYTO-3. Washington, D.C.: Interstate Technology & Regulatory Council, Phytotechnologies Team, Tech Reg Update. www.itrcweb.org
Trihadiningrum, Y., H. Basri, M. Mukhlisin, D. Listiyanawati, and N.A. Jalil. "Phytotechnology, a Nature Based Approach for Sustainable Sanitation and Conservation." Water Environment Partnership Asia. WEPA, n.d. Web. 26 Oct 2011. <http://www.wepa-db.net/pdf/0810forum/presentation07.pdf>.
Biotechnology | Phytotechnology | [
"Biology"
] | 1,665 | [
"nan",
"Biotechnology"
] |
33,538,356 | https://en.wikipedia.org/wiki/Solid-state%20fermentation | Solid state fermentation (SSF) is a biomolecule manufacturing process used in the food, pharmaceutical, cosmetic, fuel and textile industries. These biomolecules are mostly metabolites generated by microorganisms grown on a solid support selected for this purpose. This technology for the culture of microorganisms is an alternative to liquid or submerged fermentation, used predominantly for industrial purposes.
Processes
This process consists of depositing a solid culture substrate, such as rice or wheat bran, on flatbeds after seeding it with microorganisms; the substrate is then left in a temperature-controlled room for several days.
Liquid state fermentation is performed in tanks, which can reach considerable volumes at an industrial scale. Liquid culture is ideal for growing unicellular organisms such as bacteria or yeasts.
To achieve liquid aerobic fermentation, it is necessary to constantly supply the microorganism with oxygen, which is generally done by stirring the fermentation medium. Accurately managing the synthesis of the desired metabolites requires regulating temperature, dissolved oxygen, ionic strength, pH, and nutrients.
Applying this growing technique to filamentous fungi leads to difficulties. The fungus develops in its vegetative form, generating hyphae: multicellular, branching filaments whose cells are separated by septa. As this mycelium develops in a liquid environment, it makes the growing medium highly viscous, reducing oxygen solubility, while stirring disrupts the cell network and increases cell mortality.
In nature, filamentous fungi grow on the ground, decomposing vegetal compounds under naturally ventilated conditions. Therefore, solid state fermentation enables the optimal development of filamentous fungi, allowing the mycelium to spread on the surface of solid compounds among which air can flow.
Solid state fermentation uses culture substrates with low water levels (reduced water activity), which is particularly appropriate for mould. The methods used to grow filamentous fungi using solid state fermentation allow the best reproduction of their natural environment. The medium is saturated with water but little of it is free-flowing. The solid medium comprises both the substrate and the solid support on which the fermentation takes place. The substrate used is generally composed of vegetal byproducts such as beet pulp or wheat bran.
At the beginning of the growth process, the substrates and solid culture compounds are insoluble materials composed of very large, biochemically complex molecules that the fungus must break down to obtain essential carbon and nitrogen nutrients. To exploit its natural substrate, the fungal organism deploys its entire genetic potential to produce the metabolites necessary for its growth. The composition of the growth medium guides the microorganism's metabolism towards the production of enzymes that cleave macromolecules and release bio-available single molecules such as sugars or amino acids. Therefore, by selecting the components of the growth medium, it is possible to guide the cells towards the production of the desired metabolite(s), mainly enzymes that transform polymers (cellulose, hemicellulose, pectins, proteins) into single moieties in a very efficient and cost-effective manner.
Compared to submerged fermentation processes, solid state fermentation is more cost-effective: smaller vessels, lower water consumption, reduced wastewater treatment costs and lower energy consumption (no need to heat up water, poor mechanical energy input due to smooth stirring).
Cultivating on heterogeneous substrates requires expertise to maintain optimal growth conditions. Air flow monitoring is key because it affects temperature, oxygen supply, and moisture. To maintain sufficient moisture for the growth of the filamentous fungus, water-saturated air is used, and further addition of water may be required. In most cases, solid state fermentation does not require a completely sterile environment, as the initial sterilization of the fermentation substrate, combined with the rapid colonization of the substrate by the fungal microorganism, limits the development of the autochthonous flora.
Uses
Traditional food production
Traditionally, SSF has been used in Asian countries to produce Koji using rice to manufacture alcoholic beverages such as Sake or Koji using soybean seeds. The latter produces sauces such as soy sauce or other foods. In Western countries, the traditional manufacturing process of many foods uses SSF. Examples include fermented bakery products such as bread or for the maturing of cheese. SSF is also widely used to prepare raw materials such as chocolate and coffee; typically cacao bean fermentation and coffee bean skin removal are SSF processes carried out under natural tropical conditions.
Enzyme production
Solid state fermentation produces enzymes and enzymatic complexes able to break down difficult-to-transform macromolecules such as cellulose, hemicelluloses, pectins, and proteins. It is well suited to the production of enzymatic complexes composed of multiple enzymes. Enzymatic compounds generated by SSF find outlets in all sectors where digestibility, solubility, or viscosity control is needed.
This is why SSF enzymes are widely used in the following industries:
fruit and vegetable transformation (pectinases)
baking (hemicellulases)
animal feeding (hemicellulases and cellulases)
bio ethanol (cellulases and hemicellulases)
brewing and distilling (hemicellulases)
Outlook
Liquid (submerged) and solid state fermentation are age-old techniques used for the preservation and manufacture of foods. During the second half of the twentieth century, liquid state fermentation was developed on an industrial scale to manufacture vital metabolites such as antibiotics.
Economic changes and growing environmental awareness generate new perspectives for solid state fermentation. SSF adds value to insoluble agricultural byproducts thanks to its higher energy efficiency and reduced water consumption.
The renewal of SSF is now possible thanks to engineering firms, mainly from Asia, that have developed a new generation of equipment. Fujiwara makes vessels able to process large quantities of substrate for the production of soy sauce or sake. Other companies use solid state fermentation for enzyme complexes. In France, Lyven has manufactured pectinases and hemicellulases on beet pulp and wheat bran since 1980. The company (now part of the Soufflet Group) is now involved in a global R&D programme focusing on SSF technology.
See also
Aspergillus oryzae
Micro-organism
Notes
References
External links
Fermentation | Solid-state fermentation | [
"Chemistry",
"Biology"
] | 1,346 | [
"Biochemistry",
"Cellular respiration",
"Fermentation"
] |
23,429,522 | https://en.wikipedia.org/wiki/Corner-point%20grid | In geometry, a corner-point grid is a tessellation of a Euclidean 3D volume, where the base cell has 6 faces (hexahedron).
A set of straight lines defined by their end points define the pillars of the corner-point grid. The pillars have a lexicographical ordering that determines neighbouring pillars. On each pillar, a constant number of nodes (corner-points) is defined. A corner-point cell is now the volume between 4 neighbouring pillars and two neighbouring points on each pillar.
Each cell can be identified by integer coordinates (i, j, k), where the k coordinate runs along the pillars, and i and j span each layer. The cells are ordered naturally, where the index i runs the fastest and k the slowest.
Data within the interior of such cells can be computed by trilinear interpolation from the boundary values at the 8 corners, 12 edges, and 6 faces.
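A minimal sketch of the two conventions just described, the natural cell ordering and trilinear interpolation from the eight corner values; the grid dimensions and corner data are invented for illustration.

```python
# Natural cell ordering (i fastest, k slowest) and trilinear
# interpolation inside a hexahedral cell.
import numpy as np

def natural_index(i, j, k, ni, nj):
    """Flat array position of cell (i, j, k); i runs fastest, k slowest."""
    return i + j * ni + k * ni * nj

def trilinear(corner_values, u, v, w):
    """Interpolate inside a cell from its 8 corner values.

    corner_values[a, b, c] is the value at the corner with local
    coordinates (a, b, c) in {0, 1}; (u, v, w) lie in [0, 1]^3.
    """
    c = np.asarray(corner_values, dtype=float)
    c = c[0] * (1 - u) + c[1] * u        # collapse first local axis
    c = c[0] * (1 - v) + c[1] * v        # then the second
    return c[0] * (1 - w) + c[1] * w     # then the third

print(natural_index(2, 1, 0, ni=4, nj=3))   # -> 6
corners = np.arange(8).reshape(2, 2, 2)     # toy corner data
print(trilinear(corners, 0.5, 0.5, 0.5))    # -> 3.5 (cell centre)
```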
In the special case of all pillars being vertical, the top and bottom face of each corner-point cell are described by bilinear surfaces and the side faces are planes.
Corner-point grids are supported by most reservoir simulation software and have become an industry standard.
Degeneracy
A main feature of the format is the ability to define erosion surfaces in geological modelling, effectively done by collapsing nodes along each pillar. This means that the corner-point cells degenerate and may have less than 6 faces.
For the corner-point grids, non-neighboring connections are supported, meaning that grid cells that are not neighboring in ijk-space can be defined as neighboring. This feature allows for representation of faults with significant throw/displacement. Moreover, the neighboring grid cells do not need to have matching cell faces (just overlap).
References
Corner Point Grid. Open Porous Media Initiative
Aarnes J, Krogstad S and Lie KA (2006). Multiscale Mixed/Mimetic Methods on Corner Point Grids. SINTEF ICT, Dept. Applied Mathematics
Tessellation
Geometry | Corner-point grid | [
"Physics",
"Mathematics"
] | 404 | [
"Tessellation",
"Euclidean plane geometry",
"Geometry",
"Geometry stubs",
"Planes (geometry)",
"Symmetry"
] |
23,432,071 | https://en.wikipedia.org/wiki/Purity%20%28quantum%20mechanics%29 | In quantum mechanics, and especially quantum information theory, the purity of a normalized quantum state is a scalar defined as
$\gamma \equiv \operatorname{tr}(\rho^2),$
where $\rho$ is the density matrix of the state and $\operatorname{tr}$ is the trace operation. The purity defines a measure on quantum states, giving information on how much a state is mixed.
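A quick numerical check of the definition for a single qubit (d = 2); the three example states are arbitrary.

```python
# purity = tr(rho^2) for a pure state, the completely mixed state,
# and a convex combination of the two (single qubit, d = 2).
import numpy as np

def purity(rho):
    return np.trace(rho @ rho).real

pure = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
mixed = np.eye(2) / 2                              # I/2, completely mixed
partly = 0.75 * pure + 0.25 * mixed                # convex combination

for name, rho in [("pure", pure), ("completely mixed", mixed),
                  ("partly mixed", partly)]:
    print(f"{name:17s} tr(rho^2) = {purity(rho):.4f}")
# pure -> 1.0, completely mixed -> 0.5 (= 1/d), partly mixed in between
```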
Mathematical properties
The purity of a normalized quantum state satisfies $\tfrac{1}{d} \le \gamma \le 1$, where $d$ is the dimension of the Hilbert space upon which the state is defined. The upper bound is obtained by $\operatorname{tr}(\rho) = 1$ and $\operatorname{tr}(\rho^2) \le \operatorname{tr}(\rho)$ (see trace).
If $\rho$ is a projection, which defines a pure state, then the upper bound is saturated: $\operatorname{tr}(\rho^2) = \operatorname{tr}(\rho) = 1$ (see projections). The lower bound is obtained by the completely mixed state, represented by the matrix $\tfrac{1}{d} I_d$.
The purity of a quantum state is conserved under unitary transformations acting on the density matrix in the form ρ → UρU†, where U is a unitary matrix. Specifically, it is conserved under the time evolution operator U(t) = e^(−iHt/ħ), where H is the Hamiltonian operator.
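As a quick numerical illustration of the bounds and of the unitary invariance (a minimal sketch, not from the article; the qubit example and the choice of the Hadamard gate as the unitary are arbitrary):

```python
import numpy as np

def purity(rho):
    """gamma = tr(rho^2); the real part discards numerical round-off noise."""
    return np.real(np.trace(rho @ rho))

rho_pure = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|, a projection
rho_mixed = np.eye(2, dtype=complex) / 2               # completely mixed state I/2

print(purity(rho_pure))    # 1.0  (upper bound)
print(purity(rho_mixed))   # 0.5  (lower bound 1/d with d = 2)

# Invariance under a unitary U: tr((U rho U^dagger)^2) = tr(rho^2)
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
print(purity(U @ rho_pure @ U.conj().T))                     # still 1.0
```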
Physical meaning
A pure quantum state can be represented as a single vector |ψ⟩ in the Hilbert space. In the density matrix formulation, a pure state is represented by the matrix ρ = |ψ⟩⟨ψ|.
However, a mixed state cannot be represented this way, and instead is represented by a convex combination of pure states ρ = Σᵢ pᵢ |ψᵢ⟩⟨ψᵢ|,
with pᵢ ≥ 0 and Σᵢ pᵢ = 1 for normalization. The purity parameter is related to the coefficients: if only one coefficient is equal to 1, the state is pure. Indeed, the purity is 1/d when the state is completely mixed, i.e.
ρ = (1/d) Σⱼ |j⟩⟨j|, where the |j⟩ are d orthonormal vectors that constitute a basis of the Hilbert space.
Geometrical representation
On the Bloch sphere, pure states are represented by a point on the surface of the sphere, whereas mixed states are represented by an interior point. Thus, the purity of a state can be visualized as the degree to which the point is close to the surface of the sphere.
For example, the completely mixed state of a single qubit is represented by the center of the sphere, by symmetry.
A graphical intuition of purity may be gained by looking at the relation between the density matrix and the Bloch sphere, ρ = (1/2)(I + a·σ),
where a is the vector representing the quantum state (on or inside the sphere), and σ = (σx, σy, σz) is the vector of the Pauli matrices.
Since Pauli matrices are traceless, it still holds that tr(ρ) = 1. However, by virtue of (a·σ)² = |a|² I, it follows that tr(ρ²) = (1 + |a|²)/2,
hence γ = (1 + |a|²)/2,
which agrees with the fact that only states on the surface of the sphere itself are pure (i.e. |a| = 1).
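A minimal numerical check of the relation γ = (1 + |a|²)/2 (a sketch, not from the article; the Bloch vector value is arbitrary):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

a = np.array([0.3, 0.0, 0.4])   # |a| < 1: an interior (mixed) point
rho = 0.5 * (np.eye(2) + a[0]*sx + a[1]*sy + a[2]*sz)

gamma = np.real(np.trace(rho @ rho))
print(gamma, (1 + a @ a) / 2)   # both print 0.625
```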
Relation to other concepts
Linear entropy
Purity is trivially related to the linear entropy S_L of a state by S_L = 1 − γ.
The linear entropy is a lower approximation to the von Neumann entropy S, which is defined as S = −tr(ρ ln ρ).
The linear entropy is then obtained by expanding ln ρ = ln(1 − (1 − ρ)) around a pure state; that is, expanding in terms of the non-negative matrix 1 − ρ in the formal Mercator series for the logarithm, ln ρ = −Σₙ (1 − ρ)ⁿ/n,
and retaining just the leading term, which gives S ≈ tr(ρ(1 − ρ)) = 1 − γ. Both the linear and the von Neumann entropy measure the degree of mixing of a state, although the linear entropy is easier to calculate, as it does not require diagonalization of the density matrix. Some authors define linear entropy with a different normalization, S_L = (d/(d − 1))(1 − γ),
which ensures that the quantity ranges from zero to unity.
Entanglement
A 2-qubit pure state can be written (using Schmidt decomposition) as |ψ⟩ = √λ₁ |1ᴬ⟩⊗|1ᴮ⟩ + √λ₂ |2ᴬ⟩⊗|2ᴮ⟩, where {|iᴬ⟩}, {|iᴮ⟩} are the bases of the subsystems' Hilbert spaces Hᴬ and Hᴮ respectively, and λ₁ + λ₂ = 1 with λᵢ ≥ 0. Its density matrix is ρ = |ψ⟩⟨ψ|. The degree in which it is entangled is related to the purity of the states of its subsystems, ρᴬ = tr_B ρ, and similarly for ρᴮ (see partial trace). If this initial state is separable (i.e. there's only a single λᵢ ≠ 0), then ρᴬ and ρᴮ are both pure. Otherwise, this state is entangled, and ρᴬ and ρᴮ are both mixed. For example, if |ψ⟩ = (|00⟩ + |11⟩)/√2, which is a maximally entangled state, then ρᴬ and ρᴮ are both completely mixed.
For 2-qubit (pure or mixed) states, the Schmidt number (number of Schmidt coefficients) is at most 2. Using this and the Peres–Horodecki criterion (for 2 qubits), a state is entangled if its partial transpose has at least one negative eigenvalue. Using the Schmidt coefficients from above, the negative eigenvalue is −√(λ₁λ₂). The negativity of this eigenvalue is also used as a measure of entanglement – the state is more entangled as this eigenvalue is more negative (down to −1/2 for Bell states). For the state of subsystem A (similarly for B), it holds that ρᴬ = tr_B ρ = λ₁ |1ᴬ⟩⟨1ᴬ| + λ₂ |2ᴬ⟩⟨2ᴬ|.
And the purity is γᴬ = λ₁² + λ₂² = 1 − 2λ₁λ₂.
One can see that the more entangled the composite state is (i.e. the more negative the eigenvalue −√(λ₁λ₂)), the less pure the subsystem state.
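The following sketch (not from the article; the Schmidt coefficients are arbitrary) computes the subsystem purity by partial trace and compares it with the closed form λ₁² + λ₂²:

```python
import numpy as np

l1, l2 = 0.5, 0.5                          # Schmidt coefficients; a Bell state here
psi = np.zeros(4)
psi[0], psi[3] = np.sqrt(l1), np.sqrt(l2)  # sqrt(l1)|00> + sqrt(l2)|11>

rho = np.outer(psi, psi)                   # density matrix of the pure 2-qubit state
# Partial trace over subsystem B: reshape to indices (a, b, a', b'), trace b = b'
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(np.trace(rho_A @ rho_A))             # 0.5: completely mixed subsystem (d = 2)
print(l1**2 + l2**2)                       # same value from the Schmidt coefficients
```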
Inverse Participation Ratio (IPR)
In the context of localization, a quantity closely related to the purity, the so-called inverse participation ratio (IPR), turns out to be useful. It is defined as the integral (or sum for finite system size) over the square of the density in some space, e.g., real space, momentum space, or even phase space, where the densities would be the square of the real-space wave function |ψ(x)|², the square of the momentum-space wave function |ψ̃(k)|², or some phase-space density like the Husimi distribution, respectively.
The smallest value of the IPR corresponds to a fully delocalized state: for a system of size N, a uniform density |ψᵢ|² = 1/N yields IPR = 1/N. Values of the IPR close to 1 correspond to localized states (pure states in the analogy), as can be seen with a state perfectly localized on a single site, for which the IPR yields 1. In one dimension the IPR is directly proportional to the inverse of the localization length, i.e., the size of the region over which a state is localized. Localized and delocalized (extended) states in the framework of condensed matter physics then correspond to insulating and metallic states, respectively, if one imagines an electron on a lattice not being able to move in the crystal (localized wave function, IPR close to one) or being able to move (extended state, IPR close to zero).
In the context of localization, it is often not necessary to know the wave function itself; it often suffices to know the localization properties. This is why the IPR is useful in this context. The IPR basically takes the full information about a quantum system (the wave function; for a d-dimensional Hilbert space one would have to store d values, the components of the wave function) and compresses it into one single number that then only contains some information about the localization properties of the state. Even though these two examples of a perfectly localized and a perfectly delocalized state were only shown for the real-space wave function and correspondingly for the real-space IPR, one could obviously extend the idea to momentum space and even phase space; the IPR then gives some information about the localization in the space under consideration. For example, a plane wave would be strongly delocalized in real space, but its Fourier transform is strongly localized, so here the real-space IPR would be close to zero and the momentum-space IPR would be close to one.
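A minimal numerical sketch of the two limiting cases (not from the article; the lattice size is arbitrary):

```python
import numpy as np

N = 100  # number of lattice sites

delocalized = np.full(N, 1 / np.sqrt(N))     # uniform state: |psi_i|^2 = 1/N
localized = np.zeros(N); localized[0] = 1.0  # all weight on a single site

def ipr(psi):
    """Inverse participation ratio: sum_i |psi_i|^4 for a normalized state."""
    return np.sum(np.abs(psi) ** 4)

print(ipr(delocalized))  # 0.01 = 1/N  (fully delocalized)
print(ipr(localized))    # 1.0         (perfectly localized)
```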
References
Quantum mechanics | Purity (quantum mechanics) | [
"Physics"
] | 1,430 | [
"Theoretical physics",
"Quantum mechanics"
] |
23,437,336 | https://en.wikipedia.org/wiki/Inverse%20magnetostrictive%20effect | The inverse magnetostrictive effect, magnetoelastic effect or Villari effect, after its discoverer Emilio Villari, is the change of the magnetic susceptibility of a material when subjected to a mechanical stress.
Explanation
The magnetostriction characterizes the shape change of a ferromagnetic material during magnetization, whereas the inverse magnetostrictive effect characterizes the change of sample magnetization (for a given magnetizing field strength H) when mechanical stresses are applied to the sample.
Qualitative explanation of magnetoelastic effect
Under a given uni-axial mechanical stress σ, the flux density B for a given magnetizing field strength H may increase or decrease. The way in which a material responds to stresses depends on its saturation magnetostriction λs. For this analysis, compressive stresses are considered as negative, whereas tensile stresses are positive.
According to Le Chatelier's principle, (∂B/∂σ)_H = (∂λ/∂H)_σ.
This means that when the product λs·σ is positive, the flux density increases under stress. On the other hand, when the product λs·σ is negative, the flux density decreases under stress. This effect was confirmed experimentally.
Quantitative explanation of magnetoelastic effect
In the case of a single stress σ acting upon a single magnetic domain, the magnetic strain energy density can be expressed as E = −(3/2) λs σ cos²θ,
where λs is the magnetostrictive expansion at saturation, and θ is the angle between the saturation magnetization and the stress's direction.
When λs and σ are both positive (like in iron under tension), the energy is minimal for θ = 0, i.e. when tension is aligned with the saturation magnetization. Consequently, the magnetization is increased by tension.
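A small numerical sketch of this energy expression (illustrative only; the values of λs and σ below are assumptions, not from the article):

```python
import numpy as np

lambda_s = 21e-6   # assumed saturation magnetostriction (order of magnitude for Fe [100])
sigma = 100e6      # assumed tensile stress in Pa (positive = tension)

theta = np.linspace(0.0, np.pi, 181)            # angle between M_s and the stress axis
E = -1.5 * lambda_s * sigma * np.cos(theta)**2  # magnetoelastic energy density

# With lambda_s * sigma > 0 the minimum lies at theta = 0, i.e. magnetization
# aligned with the tension, as stated above.
print(theta[np.argmin(E)])  # 0.0
```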
Magnetoelastic effect in a single crystal
In fact, magnetostriction is more complex and depends on the direction of the crystal axes. In iron, the [100] axes are the directions of easy magnetization, while there is little magnetization along the [111] directions (unless the magnetization becomes close to the saturation magnetization, leading to the change of the domain orientation from [111] to [100]). This magnetic anisotropy pushed authors to define two independent longitudinal magnetostrictions λ100 and λ111.
In cubic materials, the magnetostriction along any axis can be defined by a known linear combination of these two constants. For instance, the elongation along [110] is a linear combination of λ100 and λ111.
Under the assumption of isotropic magnetostriction (i.e. the domain magnetostriction is the same in any crystallographic direction), λ100 = λ111 = λs and the linear dependence between the elastic energy and the stress is conserved, E = −(3/2) λs σ (α₁γ₁ + α₂γ₂ + α₃γ₃)². Here, α₁, α₂ and α₃ are the direction cosines of the domain magnetization, and γ₁, γ₂ and γ₃ those of the bond directions, towards the crystallographic directions.
Method of testing the magnetoelastic properties of magnetic materials
A method suitable for effective testing of the magnetoelastic effect in magnetic materials should fulfill the following requirements:
the magnetic circuit of the tested sample should be closed. An open magnetic circuit causes demagnetization, which reduces the magnetoelastic effect and complicates its analysis.
the distribution of stresses should be uniform. The value and direction of stresses should be known.
there should be the possibility of making the magnetizing and sensing windings on the sample - these are necessary to measure the magnetic hysteresis loop under mechanical stresses.
The following testing methods were developed:
tensile stresses applied to a strip of magnetic material in the shape of a ribbon. Disadvantage: open magnetic circuit of the tested sample.
tensile or compressive stresses applied to a frame-shaped sample. Disadvantage: only bulk materials may be tested; there are no stresses in the joints of the sample columns.
compressive stresses applied to a ring core in the sideways direction. Disadvantage: non-uniform stress distribution in the core.
tensile or compressive stresses applied axially to a ring sample. Disadvantage: stresses are perpendicular to the magnetizing field.
Applications of magnetoelastic effect
Magnetoelastic effect can be used in development of force sensors. This effect was used for sensors:
in civil engineering.
for monitoring of large diesel engines in locomotives.
for monitoring of ball valves.
for biomedical monitoring.
Inverse magnetoelastic effects also have to be considered as a side effect of accidental or intentional application of mechanical stresses to the magnetic core of an inductive component, e.g. fluxgates, or generator/motor stators when installed with interference fits.
References
See also
Magnetostriction
Magnetocrystalline anisotropy
Magnetism
Magnetic ordering | Inverse magnetostrictive effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 905 | [
"Magnetic ordering",
"Condensed matter physics",
"Electric and magnetic fields in matter",
"Materials science"
] |
39,058,868 | https://en.wikipedia.org/wiki/Bioresorbable%20metal | Bioresorbable (also called biodegradable or bioabsorbable) metals are metals or their alloys that degrade safely within the body. The primary metals in this category are magnesium-based and iron-based alloys, although recently zinc has also been investigated. Currently, the primary uses of bioresorbable metals are as stents for blood vessels (for example bioresorbable stents) and other internal ducts.
Background
Although bioabsorbable polymers and other materials have come into widespread use in recent years, degradable metals have not yet had the same success in the medical industry.
Driving force for development
The driving force behind the development of bioresorbable metals is primarily due to their ability to provide metal-like mechanical properties while degrading safely in the body. This is especially relevant in orthopaedic applications, where although many surgeries only require implants to provide temporary support (allowing the surrounding tissue to heal), the majority of current bio-metals are permanent (e.g. stainless steel, titanium). Degradation of the implant means that intervention or secondary surgery will not be necessary to remove the material at the end of its functional life, providing significant savings in both cost and time for the patient and health care system. In addition, the corrosion products of current bio-metals (which will still corrode in the body to some degree) can generally not be considered biocompatible.
Potential applications
There are a number of applications for biodegradable metals, including cardiovascular implants (i.e. stents) and orthopedics. It is in this latter category where these materials offer the greatest potential. Bioresorbable metals are able to withstand loads that would destroy any currently available polymers, and offer much greater plasticity than bioceramics, which are brittle and prone to fracture. A well-designed implant could provide the exact mechanical support needed for different areas (through alloying and metal working), and load would be transferred to the surrounding tissue over time, letting it heal and reducing the effects of stress shielding. A summary of the primary benefits and drawbacks of magnesium biomaterials has been provided by Kirkland.
Considerations and issues facing bioresorbable metal development
Changing shape over time
It is the same advantage that bioresorbable metals possess over current non-degradable materials, their biodegradability, that poses the greatest challenges to their development and wider use. The degradable nature of any implant means that its shape, and thus its mechanical properties, will change throughout its lifetime. This means that lifecycle analysis must be performed on any implant, especially one designed for orthopedic applications, where failure could result in death.
Lack of standards
Current standards for corrosion of metals have been found to not apply well to bioresorbable metals during in vitro testing. This is a significant problem as the majority of tests performed in the research community are a mix of other standards from both the biomedical and the engineering (e.g. corrosion) communities, often making comparison between results difficult.
Corrosion Product Toxicity
Even though all elements in a bioresorbable metal may themselves be considered biocompatible, the morphology and elemental makeup (or combination of elements) of the degradation products may cause adverse reactions in the body. In addition, the rapid evolution of hydrogen gas that is concomitant with Mg-alloy degradation may cause additional problems in vivo. It is therefore crucial to understand in detail the corrosion of each implant and the products that are released, in light of their toxicity and the likelihood of inflammation. The majority of studies in the literature have focused on elements that are known to be biocompatible or abundant in the body, such as calcium and zinc.
Potential bioresorbable metal candidates
Although all metals will degrade and eventually disappear inside the body through the processes of corrosion and wear, true bioresorbable metals must have an appreciable degradation rate to allow the implant to be absorbed in a practical amount of time in reference to their application. Also, any degradation product would have to be safely metabolized or excreted by the body to avoid toxicity and inflammation.
Magnesium
Perhaps the most widely investigated material in this category, magnesium was originally investigated as a potential biomaterial in 1878, when it was used by physician Edward C. Huse in wire form as a ligature to stop bleeding. Development continued into the 1920s, after which Mg-based biomaterials fell out of general investigation due to their poor performance (likely due to impurities in the alloys drastically increasing corrosion). It was not until the late 1990s that interest started to pick up again. Mg has a density close to that of bone and is absorbed by the body. Mg is of interest for orthopedic applications due to its relatively low cost, high specific strength, and near-bone elastic modulus, which avoids stress shielding and allows uniform distribution of tissue stress.
Currently, most research on Mg is focused on reducing and controlling the rate of degradation, with many alloys corroding too rapidly (in vitro) for any practical application.
Iron
The majority of iron-based alloy research has been focused on cardiovascular applications, such as stents. However this area receives much less interest in the research community than Mg-based alloys.
Zinc
To date little work has been published on the use of a primarily zinc-based biomaterial, with corrosion rates found to be very low and zinc within a tolerable toxicity range. Furthermore, pure Zn has poor mechanical behavior, with a tensile strength of around 100–150 MPa and an elongation of 0.3–2%, which is far from the strength required of an orthopedic implant material (tensile strength more than 300 MPa, elongation more than 15%). Alloy and composite fabrication have proven to be excellent ways to improve the mechanical performance of Zn.
Biodegradable bulk metallic glasses
Although strictly speaking a side category, a related and relatively new area of interest is the investigation of bioabsorbable metallic glasses, with a group at UNSW currently investigating these novel materials.
References
Biomaterials | Bioresorbable metal | [
"Physics",
"Biology"
] | 1,268 | [
"Biomaterials",
"Materials",
"Matter",
"Medical technology"
] |
39,059,578 | https://en.wikipedia.org/wiki/Computer%20Atlas%20of%20Surface%20Topography%20of%20Proteins | Computer Atlas of Surface Topography of Proteins (CASTp) aims to provide comprehensive and detailed quantitative characterization of the topographic features of proteins, and is now updated to version 3.0. Since its release in 2006, the CASTp server has received ≈45000 visits and fulfilled ≈33000 calculation requests annually. CASTp has proven to be a reliable tool for a wide range of research, including investigations of signaling receptors, discovery of cancer therapeutics, understanding of the mechanisms of drug action, studies of immune disorders, analysis of protein–nanoparticle interactions, inference of protein function, and development of high-throughput computational tools. The server is maintained by Jie Liang's lab at the University of Illinois at Chicago.
Geometric Modeling Principles
CASTp computes protein pockets with the alpha-shape and discrete-flow methods, and measures pocket size with the CAST program, developed by Liang et al. in 1998 and updated by Tian et al. in 2018. CAST first identifies the atoms that form a protein pocket, then calculates its volume and area, identifies the atoms forming the rims of the pocket mouths, computes the number of mouth openings for each pocket, predicts the area and circumference of the mouth openings, and finally locates cavities and calculates their sizes. Secondary structures are calculated by DSSP. Single amino-acid annotations are fetched from the UniProt database and then mapped to PDB structures following residue-level information from the SIFTS database.
Instructions for Protein Pocket Calculation
Input
Protein structures in PDB format, and a probe radius.
Searching
Users can either search for pre-computed results by 4-letter PDB ID, or upload their own PDB file for customized computation. The core algorithm finds pockets or cavities capable of housing a solvent probe with a default or adjusted diameter.
Output
CASTp identifies all surface pockets, interior cavities and cross channels, and provides a detailed delineation of all atoms participating in their formation. It reports the area and volume of each pocket or void, as well as the number of mouth openings of a particular pocket, using both the solvent-accessible surface model (Richards' surface) and the molecular surface model (Connolly surface), all calculated analytically. The core algorithm finds pockets or cavities capable of housing a solvent probe with a diameter of 1.4 Å. The online tool also supports PyMOL and UCSF Chimera plugins for molecular visualization.
Why CASTp is useful
Protein science: from amino acids to sequences and structures
Proteins are large, complex molecules that play critical roles in maintaining the normal functioning of the human body. They are essential not just for the structure and function of the body's tissues and organs, but also for their regulation. Proteins are made up of hundreds of smaller units called amino acids that are attached to one another by peptide bonds, forming a long chain.
Protein active sites
Usually, the active site of a protein is located at its center of action and is the key to its function. The first step is the detection of active sites on the protein surface and an exact description of their features and boundaries. These specifications are vital inputs for subsequent target druggability prediction or target comparison. Most algorithms for active site detection are based on geometric modeling or on calculations of energetic features.
The role of protein pockets
The shape and properties of the protein surface determine what interactions are possible with ligands and other macromolecules. Pockets are an important yet ambiguous feature of this surface. During the drug discovery process, the first step in screening for lead compounds and potential drug molecules is usually a selection based on the shape of the binding pocket. Shape plays a role in many computational pharmacological methods. Based on existing results, most features important for predicting drug binding depend on the size and shape of the binding pocket, with chemical properties of secondary importance. The surface shape is also important for interactions between protein and water. However, defining discrete pockets or possible interaction sites remains difficult, because the shape and location of nearby pockets affect the promiscuity and diversity of binding sites. Since most pockets are open to solvent, defining the border of a pocket is the primary difficulty. Pockets closed to solvent are referred to as buried cavities. With the benefit of a well-defined extent, area and volume, buried cavities are more straightforward to locate. In contrast, the border of an open pocket defines its mouth, and it provides the cut-off for determining the pocket's surface area and volume. Even defining the pocket as a set of residues does not define the volume or the mouth of the pocket.
Druggability prediction
In the pharmaceutical industry, the current priority strategy for target assessment is high-throughput screening (HTS). NMR screens are applied against large compound datasets, and the chemical characteristics of compounds binding to specific targets are measured, so how well the compound sets cover the relevant chemical space determines the binding efficiency. The success rate of virtually docking drug-like ligands into the active sites of target proteins is used for prioritization, and most active sites are located in pockets.
With the benefit of large amounts of structural data, computational methods for druggability prediction from different perspectives have been introduced during the last 30 years with positive results, as vital instruments to accelerate and broaden access to such predictions. Many of these methods have since been integrated into drug discovery pipelines.
New Features in CASTp 3.0
Pre-computed results for biological assemblies
For many proteins deposited in the Protein Data Bank, the asymmetric unit may differ from the biological unit, which can make computational results biologically irrelevant. The new CASTp 3.0 therefore computes topological features for biological assemblies, overcoming the discrepancy between asymmetric units and biological assemblies.
Imprints of negative volumes of topological features
In the first release of the CASTp server in 2006, only the geometric and topological features of the surface atoms participating in the formation of protein pockets, cavities, and channels were reported. The new CASTp adds the "negative volume" of the space, referring to the space encompassed by the atoms that form these geometric and topological features.
Comprehensive annotation on single amino-acid polymorphism
The latest CASTp integrates protein annotations aligned with the sequence, including brief features, positions, descriptions, and references for domains, motifs, and single amino-acid polymorphisms.
Improved user interface & convenient visualization
The new CASTp now incorporates 3Dmol.js for structural visualization, enabling users to browse and interact with the protein 3D model and to examine computational results in current web browsers including Chrome, Firefox, and Safari. Users can pick their own representation style for the atoms that form each topographic feature and edit the colors according to their preferences.
References
Bioinformatics
Proteomics
Structural biology
Computational biology | Computer Atlas of Surface Topography of Proteins | [
"Chemistry",
"Engineering",
"Biology"
] | 1,401 | [
"Biological engineering",
"Bioinformatics",
"Structural biology",
"Computational biology",
"Biochemistry"
] |
39,062,059 | https://en.wikipedia.org/wiki/Data%20Plane%20Development%20Kit | The Data Plane Development Kit (DPDK) is an open source software project managed by the Linux Foundation. It provides a set of data plane libraries and network interface controller polling-mode drivers for offloading TCP packet processing from the operating system kernel to processes running in user space. This offloading achieves higher computing efficiency and higher packet throughput than is possible using the interrupt-driven processing provided in the kernel.
DPDK provides a programming framework for x86, ARM, and PowerPC processors and enables faster development of high speed data packet networking applications. It scales from mobile processors, such as Intel Atom, to server-grade processors, such as Intel Xeon. It supports instruction set architectures such as Intel, IBM POWER8, EZchip, and ARM. It is provided and supported under the open-source BSD license.
DPDK was created by Intel engineer Venky Venkatesan, who is affectionately known as "The Father of DPDK." He died in 2018 after a long battle with cancer.
Overview
The DPDK framework creates a set of libraries for specific hardware/software environments through the creation of an Environment Abstraction Layer (EAL). The EAL hides the environment specifics and provides a standard programming interface to libraries, available hardware accelerators and other hardware and operating system (Linux, FreeBSD) elements. Once the EAL is created for a specific environment, developers link to the library to create their applications. For instance, EAL provides the frameworks to support Linux, FreeBSD, Intel IA-32 or 64-bit, IBM POWER9 and ARM 32- or 64-bit.
The EAL also provides additional services including time references, generic bus access, trace and debug functions and alarm operations.
Using DPDK libraries, one can implement a low-overhead run-to-completion, pipeline (staged), event-driven, or hybrid model completely in user space, eliminating kernel processing and kernel-to-user copies. Hardware assists from NICs, regex engines and accelerators, libraries enhanced to make use of Intelligent Storage Acceleration (ISA-L) for bulk performance, and accessing devices via polling help to eliminate the performance overhead of interrupts as well. Hugepages are used for large memory pool allocation, to decrease the amount of lookups and page management.
The DPDK also includes software examples that highlight best practices for software architecture, tips for data structure design and storage, application profiling and performance tuning utilities and tips that address common network performance deficits.
Libraries
The DPDK includes data plane libraries and optimized network interface controller (NIC) drivers for the following:
A queue manager implements lockless queues
A buffer manager pre-allocates fixed size buffers
A memory manager allocates pools of objects in memory and uses a ring to store free objects; ensures that objects are spread equally on all DRAM channels
Poll mode drivers (PMD) are designed to work without asynchronous notifications, reducing overhead
A packet framework – a set of libraries that are helpers to develop packet processing
All libraries are stored in the dpdk/lib/librte_* directories
Plugins
The DPDK includes drivers for many hardware types. There have been some additional out-of-tree plugin drivers in the past, which are now considered deprecated:
one provided a PMD Ethernet layer supporting the VMXNET3 paravirtualized NIC; it was superseded by full VMXNET3 support in native DPDK.
another provided a virtual PMD Ethernet layer through shared memory, based on two memory copies of packets.
Environment
The DPDK was originally designed to run in a bare-metal mode, which is now deprecated. DPDK's EAL provides support for Linux and FreeBSD userland applications.
The EAL can be extended to support other processors.
Ecosystem
DPDK is now an open-source project under the Linux Foundation, supported by many companies. DPDK is governed by a Governing Board. The technical activities are overseen by a Technical Board. Besides Intel, which is a contributor to the DPDK, several other vendors also support the DPDK within their products, and some offer additional training, support and professional services. The list of vendors who have announced DPDK support includes: 6WIND, ALTEN Calsoft Labs, Advantech, Brocade, Big Switch Networks, Mellanox Technologies, Radisys, Tieto, Wind River, Lanner Inc. and NXP.
Projects
The pfSense project published a road map on 25 February 2015, in which developer Jim Thompson announced the rewriting of the pfSense core—including pf, network packet forwarding and shaping, link bonding, IPsec—using DPDK: "We have a goal of being able to forward, with packet filtering at rates of at least 14.88 Mpps. This is 'line rate' on a 10 Gbps interface. There is simply no way to use today's FreeBSD (or Linux) in-kernel stacks for this type of load."
Open vSwitch (OVS) has a limited set of features running in userland that can be leveraged to bypass the Linux-kernel OVS processing. This use case of OVS with the DPDK userland is usually named OVS-DPDK. It is mostly deployed with OpenStack Neutron, but it assumes that many features and software-defined networking (SDN) capabilities of OpenStack are disabled. For instance, when OVS-DPDK is used, Neutron provides a lower level of security than when the kernel OVS is used (no stateful firewalling, fewer security-group features).
The FD.IO VPP platform is an extensible framework that provides out-of-the-box production quality switch/router functionality. It is the open source version of Cisco's Vector Packet Processing (VPP) technology: a high performance, packet-processing stack that can run on commodity CPUs, and can leverage the Poll Mode Drivers for both NICs and cryptographic acceleration hardware and libraries. VPP supports and uses the DPDK library.
TRex is an open source traffic generator using DPDK. It generates L4–7 traffic based on pre-processing and smart replay of real traffic templates. TRex amplifies both client and server side traffic and can scale to 200 Gbit/s with one UCS using Intel XL710. TRex also supports multiple streams, ability to change any packet field and provides per stream statistics, latency and jitter.
DTS (DPDK Test Suite) is a Python-based framework for functional tests and benchmarks. It is an open-source project, started in 2014, and is hosted on dpdk.org. It supports both software traffic generators like Scapy and dpdk-pktgen, and a hardware traffic generator like Ixia.
DPDK has support for several SR-IOV network drivers, enabling the creation of a PF (Physical Function) and VFs (Virtual Functions), and also the launching of VMs (such as QEMU VMs) with VFs assigned to them using PCI passthrough.
DDP (Dynamic Device Personalization) is one of the new advanced features implemented with DPDK. It allows firmware for a device to be loaded dynamically, without resetting the host.
See also
Express Data Path
References
Free routing software
Networking hardware
Linux Foundation projects
Software using the BSD license | Data Plane Development Kit | [
"Engineering"
] | 1,530 | [
"Computer networks engineering",
"Networking hardware"
] |
39,062,357 | https://en.wikipedia.org/wiki/Tellurium%20monoxide | The diatomic molecule tellurium monoxide has been found as a transient species. Previous work that claimed the existence of TeO solid has not been substantiated. The coating on DVDs called tellurium suboxide may be a mixture of tellurium dioxide and tellurium metal.
History
Tellurium monoxide was first reported in 1883 by E. Divers and M. Shimose. It was supposedly created by the thermal decomposition of tellurium sulfoxide in a vacuum, and was shown to react with hydrogen chloride in a 1913 report. Later work has not substantiated the claim that this was a pure solid compound. By 1984, the company Panasonic was working on an erasable optical disk drive containing "tellurium monoxide" (really a mixture of Te and TeO2).
See also
Tellurium dioxide
Tellurium trioxide
Lead carbide – originally thought to be a pure compound, but now considered more likely to be a mixture of carbon and lead
Iodine pentabromide – originally thought to be a pure compound, but now considered to probably be a mixture of iodine monobromide and excess unreacted bromine
References
Tellurium(II) compounds
Oxides
Interchalcogens
Hypothetical chemical compounds | Tellurium monoxide | [
"Chemistry"
] | 262 | [
"Oxides",
"Hypotheses in chemistry",
"Salts",
"Theoretical chemistry",
"Hypothetical chemical compounds"
] |
39,062,473 | https://en.wikipedia.org/wiki/Dirac%E2%80%93von%20Neumann%20axioms | In mathematical physics, the Dirac–von Neumann axioms give a mathematical formulation of quantum mechanics in terms of operators on a Hilbert space. They were introduced by Paul Dirac in 1930 and John von Neumann in 1932.
Hilbert space formulation
The space H is a fixed complex Hilbert space of countably infinite dimension.
The observables of a quantum system are defined to be the (possibly unbounded) self-adjoint operators A on H.
A state ψ of the quantum system is a unit vector of H, up to scalar multiples; or equivalently, a ray of the Hilbert space H.
The expectation value of an observable A for a system in a state ψ is given by the inner product ⟨ψ, Aψ⟩.
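A minimal numerical illustration of the last axiom (a sketch, not from the article; the observable and the state are arbitrary choices):

```python
import numpy as np

A = np.array([[1, 0], [0, -1]], dtype=complex)      # sigma_z: a self-adjoint observable
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # a unit vector (state)

# Expectation value <A> = <psi, A psi>; np.vdot conjugates its first argument
expval = np.vdot(psi, A @ psi).real
print(expval)  # 0.0 for this state
```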
Operator algebra formulation
The Dirac–von Neumann axioms can be formulated in terms of a C*-algebra as follows.
The bounded observables of the quantum mechanical system are defined to be the self-adjoint elements of the C*-algebra.
The states of the quantum mechanical system are defined to be the states of the C*-algebra (in other words, the normalized positive linear functionals ω).
The value ω(A) of a state ω on an element A is the expectation value of the observable A if the quantum system is in the state ω.
Example
If the C*-algebra is the algebra of all bounded operators on a Hilbert space H, then the bounded observables are just the bounded self-adjoint operators on H. If ψ is a unit vector of H, then ω(A) = ⟨ψ, Aψ⟩ is a state on the C*-algebra, meaning the unit vectors (up to scalar multiplication) give the states of the system. This is similar to Dirac's formulation of quantum mechanics, though Dirac also allowed unbounded operators, and did not distinguish clearly between self-adjoint and Hermitian operators.
See also
Axiomatic quantum field theory
References
Operator algebras
Mathematical quantization
Axioms | Dirac–von Neumann axioms | [
"Physics"
] | 385 | [
"Mathematical quantization",
"Quantum mechanics"
] |
39,063,460 | https://en.wikipedia.org/wiki/KeyMod | KeyMod is a universal interface system for firearm accessory components. The concept was first created by VLTOR Weapon Systems of Tucson, Arizona, and released through Noveske Rifleworks of Grants Pass, Oregon, before being published as open source in the public domain for adoption by the entire firearms accessory industry. The name "KeyMod" was coined by Eric Kincel (then working for VLTOR Weapon Systems) following the naming trend of other VLTOR accessories with the suffix "Mod" meaning modular, and "Key" being a reference to the key-hole profile of the mounting slots.
History
VLTOR Weapon Systems had previously pursued a design which was the basis for the KeyMod system. While developing the first prototype systems, Eric Kincel of VLTOR Weapon Systems was approached by John Noveske of Noveske Rifleworks with a design for a universal accessory attachment system. After a short collaboration, during which Todd Krawczyk of Noveske Rifleworks suggested an improvement to the accessory lock/anti-rotation nut, John Noveske decided to adopt what became the KeyMod system for the NSR series of hand guards and accessories. Kincel's design was developed simultaneously, without knowledge of the independently developed keyhole-slot system by Accuracy International, which only became apparent after KeyMod's release.
The specifications for the KeyMod system were first published in July 2012. The current revision was released in October 2012.
In 2017, a summary report of testing conducted by NSWC-Crane for USSOCOM indicated that, while comparable in endurance and rough handling testing, M-LOK greatly outperformed KeyMod in repeatability, drop testing and failure load testing.
Description
KeyMod is an open-source design released for use and distribution in the public domain in an effort to standardize universal attachment systems in the firearm accessories market. The KeyMod system is intended to be used as a direct attachment method for firearm accessories such as flashlight mounts, laser modules, sights, scope mounts, vertical grips, rail panels, hand stops, barricade supports, and many others.
The goal is to eliminate the need to outfit the entire handguard with MIL-STD-1913 rails. The KeyMod system allows a user to place MIL-STD-1913 rail sections wherever needed, including in the 45° positions. The KeyMod system consists of two parts: the KeyMod slot and the KeyMod nut. The slot is distinctive, with a larger-diameter through-hole combined with a narrow slot. The slot is chamfered on the backside, while the through-hole is sized for clearance of a quick-detach sling swivel (approximately 3/8" diameter).
The nut is stepped, and the larger-diameter end is chamfered around 270 degrees of its circumference. The angled face created is meant to interface with the chamfer on the backside of the KeyMod slot. The full diameter is left intact to create two flats on the nut, which align the nut to the slot and allow it to be indexed to the accessory as well as to the KeyMod slot. This eliminates the need to align the nuts to the holes prior to accessory installation. In most accessories, the screw is swaged after assembly to ensure that it cannot be backed out of the nut. This prevents loss of small parts (screws, nuts or other small parts used in the assembly of the accessory). The spacing of the holes is critical and is based on MIL-STD-1913 spacing to allow the greatest modularity with existing accessories.
The KeyMod specifications call out a "recoil lug" on the accessories, which is intended to interface with the larger through-hole portion and resist slippage of accessories during counter-recoil. The combination of the angled interface of the nut to the KeyMod slot and the recoil lug to the through-hole makes for a very strong attachment point which will not slip under harsh recoil or counter-recoil. It also provides excellent return-to-zero when removed and re-installed.
Technical specifications
The specifications for the KeyMod system were initially released by VLTOR Weapon Systems as an open-source set of drawings. Since then, some manufacturers have added their own variations to the system, such as using the through-hole portion as a sling-swivel attachment point. The critical interface dimensions, however, still follow the specifications. In an effort to ensure interface dimensions are kept consistent and repeatable, Bravo Company MFG released a set of drawings for KeyMod gauges in January 2014 that allow for expedient inspection of the 100° chamfer feature.
See also
Rail Integration System, generic term for a system for attaching accessories to small firearms
Weaver rail mount, early system used for scope mounts, still has some popularity in the civilian market
Picatinny rail (MIL-STD-1913), improved and standardized version of the Weaver mount. Used both for scope mounts and for accessories (such as extra sling mounts, vertical grips, bipods etc.). Major popularity in the civilian market.
NATO Accessory Rail- further development from the MIL-STD-1913
UIT rail, an older standard used for mounting slings particularly on competition firearms
M-LOK - free licensed competing standard to KeyMod
Zeiss rail, a ringless scope mounting standard
References
Firearm components
Mechanical standards | KeyMod | [
"Technology",
"Engineering"
] | 1,108 | [
"Firearm components",
"Mechanical standards",
"Components",
"Mechanical engineering"
] |
39,067,533 | https://en.wikipedia.org/wiki/Autowave%20reverberator | In the theory of autowave phenomena an autowave reverberator is an autowave vortex in a two-dimensional active medium.
A reverberator appears as a result of a rupture in the front of a plane autowave. Such a rupture may occur, for example, via collision of the front with a nonexcitable obstacle. In this case, depending on the conditions, either of two phenomena may arise: a spiral wave, which rotates around the obstacle, or an autowave reverberator, which rotates with its tip free.
Introduction
The reverberator was one of the first autowave solutions that researchers found and, because of this historical context, it remains to this day the most studied autowave object.
Up until the late 20th century, the term "autowave reverberator" was used very actively and widely in the scientific literature written by Soviet authors, because these investigations were being actively developed in the USSR (for more details, see "A brief history of autowave researches" in Autowave). And, since the Soviet scientific literature was very often republished in English translation (see e.g.), the term "autowave reverberator" became known also in English-speaking countries.
The reverberator is often confused with another, similar state of the active medium: the spiral wave. Indeed, at a superficial glance, these two autowave solutions look almost identical. Moreover, the situation is further complicated by the fact that the spiral wave may under certain circumstances become a reverberator, and the reverberator may, on the contrary, become a spiral wave!
However, it must be remembered that many features of rotating autowaves were studied quite thoroughly as long ago as the 1970s, and already at that time some significant differences in the properties of a spiral wave and a reverberator were revealed. Unfortunately, this detailed knowledge from those years remains scattered across different publications of the 1970s-1990s, which are now little known even to new generations of researchers, not to mention people far from this research topic. Perhaps the only book in which basic information about autowaves known at the time of its publication was more or less completely brought together, in the form of abstracts, remains the proceedings "Autowave processes in systems with diffusion", which was published in 1981 and has already become a rare bibliographic edition; its content was partially reiterated in another book in 2009.
The differences between a reverberator and a spiral wave are considered below in detail. But to begin with, it is useful to illustrate these differences with a simple analogy. Everyone knows the seasons of the year well... Under some conditions, winter can turn into summer, and summer, on the contrary, into winter; moreover, these miraculous transformations occur quite regularly! However, though winter and summer are similar, for example, in the regular alternation of day and night, one would not say that winter and summer are the same thing. It is nearly the same with reverberators and spiral waves; they should not be confused.
It is also useful to keep in mind that, in addition to rotating waves, quite a number of other autowave solutions are known now, and every year their number grows with increasing speed. Because of this (or as a result of it), it was found during the 21st century that many of the conclusions about the properties of autowaves, which were widely known among readers of the early papers on the subject and widely discussed in the press of that time, unfortunately proved to be somewhat hasty generalizations.
Basic information
"Historical" definition
On the question of terminology
Types of reverberator behaviour
The "classical" regimes
Various autowave regimes, such as plane waves or spiral waves, can exist in an active medium, but only under certain conditions on the medium properties. Using the FitzHugh-Nagumo model for a generic active medium, Winfree constructed a diagram depicting the regions of parameter space in which the principal phenomena may be observed. Such diagrams are a common way of presenting the different dynamical regimes observed in both experimental and theoretical settings. They are sometimes called flower gardens, since the paths traced by autowave tips may often resemble the petals of a flower. A flower garden for the FitzHugh-Nagumo model is shown to the right. It contains: the line ∂P, which confines the range of the model parameters under which impulses can propagate through a one-dimensional medium, and plane autowaves can spread in a two-dimensional medium; the "rotor boundary" ∂R, which confines the range of the parameters under which reverberators can rotate around fixed cores (i.e. perform uniform circular rotation); the meander boundary ∂M and the hyper-meander boundary ∂C, which confine the areas where two-period and more complex (possibly chaotic) regimes can exist. Rotating autowaves with large cores exist only in the areas with parameters close to the boundary ∂R.
Similar autowave regimes were also obtained for other models: the Beeler-Reuter model, the Barkley model, the Aliev-Panfilov model, the Fenton-Karma model, etc.
It was also shown that these simple autowave regimes should be common to all active media, because a system of differential equations of any complexity describing a particular active medium can always be simplified to two equations.
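As an illustration of such a two-variable description, the following minimal Python sketch integrates a FitzHugh-Nagumo-type activator-inhibitor model on a 2D grid; all parameter values, the initial condition, and the explicit integration scheme are illustrative assumptions, not taken from the text. Breaking a plane front on one half of the domain leaves a free wave end that can curl up into a rotating wave:

```python
import numpy as np

n, dt, dx = 200, 0.05, 1.0       # grid size, time step, space step (assumed)
a, b, eps = 0.1, 0.5, 0.02       # excitability / recovery parameters (assumed)

u = np.zeros((n, n))             # activator (excitation variable)
v = np.zeros((n, n))             # inhibitor (recovery variable)
u[:, :10] = 1.0                  # a plane wave front ...
v[n // 2:, :20] = 0.4            # ... blocked on the lower half: the free end
                                 # of the broken front can curl into a rotor

def laplacian(f):
    """Five-point Laplacian with no-flux (reflecting) boundaries."""
    fp = np.pad(f, 1, mode='edge')
    return (fp[:-2, 1:-1] + fp[2:, 1:-1] +
            fp[1:-1, :-2] + fp[1:-1, 2:] - 4 * f) / dx**2

for _ in range(5000):            # explicit Euler time stepping
    u += dt * (laplacian(u) + u * (1 - u) * (u - a) - v)
    v += dt * eps * (b * u - v)  # slow recovery dynamics
# u now holds a snapshot of the medium; inspect it e.g. with matplotlib's imshow.
```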
In the simplest case without drift (i.e., the regime of uniform circular rotation), the tip of a reverberator rotates around a fixed point, along the circumference of a certain radius (the circular motion of the reverberator tip). The autowave cannot penetrate into the circle bounded by this circumference. As it approaches the centre of the reverberator rotation, the amplitude of the excitation pulse is reduced, and, at a relatively low excitability of the medium, there is a region of finite size in the centre of the reverberator where the amplitude of the excitation pulse is zero (recall that we speak now about a homogeneous medium, each point of which has the same properties). This area of low amplitude in the centre of the reverberator is usually called the core of the reverberator. The existence of such a region in the centre of the reverberator seems, at first glance, quite incomprehensible, as it borders excited sites all the time. A detailed investigation of this phenomenon showed that the resting area in the centre of the reverberator retains its normal excitability, and that the existence of a quiescent region in the centre of the reverberator is related to the phenomenon of critical curvature. In the case of an "infinite" homogeneous medium, the core radius and the speed of rotor rotation are determined only by the properties of the medium itself, rather than by the initial conditions. The shape of the front of the rotating spiral wave far from the centre of rotation is close to the involute of the circumference that bounds its core. The finite size of the reverberator core is conditioned by the fact that the excitation wave circulating in a closed path must completely fit into this path without bumping into its own refractory tail.
The critical size of the reverberator is understood as the minimum size of a homogeneous medium in which the reverberator can exist indefinitely. To assess the critical size of the reverberator, the size of its core is sometimes used, assuming that the region of the medium adjacent to the core should be sufficient for the existence of sustainable re-entry. However, a quantitative study of the dependence of the reverberator behaviour on the conductivity of the rapid transmembrane current (which characterizes the excitability of the medium) found that the critical size of the reverberator and the size of its core are different characteristics, and that the critical size of the reverberator is in many cases much greater than the size of its core (i.e. the reverberator dies even if its core fits easily within the boundaries of the medium and its drift is absent).
Regimes of induced drift
At meander and hyper-meander, the displacement of the center of autowave rotation (i.e. its drift) is influenced by the forces generated by the very same rotating autowave.
However, scientific study of rotating autowaves has also identified a number of external conditions that force reverberator drift. These can be, for example, heterogeneity of the active medium in any parameter. It is perhaps in the works of Biktashev that the different types of reverberator drift are currently represented most completely (although there are other authors who are also involved in the study of drift of the autowave reverberator).
In particular, Biktashev proposes to distinguish the following types of reverberator drift in an active medium:
Resonant drift.
Inhomogeneity induced drift.
Anisotropy induced drift.
Boundary induced drift (see also).
Interaction of spirals.
High frequency induced drift.
Note that even for such a simple question as what should be called a drift of autowaves, and what should not, there is still no agreement among researchers. Some researchers (mostly mathematicians) tend to regard as reverberator drift only those displacements which occur under the influence of external events (and this view is determined precisely by the peculiarity of the mathematical approach to the study of autowaves). Other researchers find no significant difference between the spontaneous displacement of the reverberator as a result of events generated by the reverberator itself and its displacement as a result of external influences; these researchers therefore tend to believe that meander and hyper-meander are also variants of drift, namely the spontaneous drift of the reverberator. There has been no debate on this question of terminology in the scientific literature, but these differences in how the same phenomena are described by different authors can easily be found.
Autowave lacet
In a numerical study of the reverberator using the Aliev-Panfilov model, the phenomenon of bifurcation memory was revealed, in which the reverberator spontaneously changes its behaviour from meander to uniform circular rotation; this new regime was named the autowave lacet.
Briefly, during the autowave lacet a spontaneous deceleration of the reverberator drift occurs under the forces generated by the reverberator itself, with the velocity of its drift gradually decreasing down to zero as a result. The meander regime thus degenerates into simple uniform circular rotation. As already mentioned, this unusual process is related to the phenomenon of bifurcation memory.
When the autowave lacet was discovered, the first question arose: does meander really exist, or can a halt of the reverberator drift be observed in every case called meander, if the observation is sufficiently long? A comparative quantitative analysis of the drift velocity of the reverberator in the meander and lacet regimes revealed a clear difference between these two types of evolution of the reverberator: while the drift velocity quickly approaches a stationary value during meander, a steady decrease in the drift velocity of the vortex is observed during the lacet, in which a phase of slow deceleration and a phase of rapid deceleration of the drift velocity can be clearly identified.
The discovery of the autowave lacet may be important for cardiology. It is known that reverberators show remarkable stability of their properties; they behave "at their own discretion", and their behaviour can be significantly affected only by events that occur near the tip of the reverberator. The fact that the behaviour of the reverberator can be significantly affected only by events occurring near its core results, for example, in the tip of the rotating wave "sticking" to a small non-excitable heterogeneity (e.g. a small myocardial scar) when it meets one, so that the reverberator begins to rotate around this stationary non-excitable obstacle. A transition from polymorphic to monomorphic tachycardia is observed on the ECG in such cases. This phenomenon is called the "anchoring" of the spiral wave.
However, it was found in simulations that a spontaneous transition from polymorphic to monomorphic tachycardia can also be observed on the ECG during the autowave lacet; in other words, the lacet may be another mechanism of transformation of polymorphic ventricular tachycardia into a monomorphic one. Thus, autowave theory predicts the existence of a special type of ventricular arrhythmia, conditionally called "lacetic", which cardiologists do not yet distinguish in diagnostics.
The reasons for distinguishing between variants of rotating autowaves
Recall that from the 1970s to the present time it has been customary to distinguish three variants of rotating autowaves:
wave in the ring,
spiral wave,
autowave reverberator.
The dimensions of the core of the reverberator are usually less than the minimal critical size of a circular circulation path, which is associated with the phenomenon of critical curvature. In addition, the refractory period appears to be longer for waves with non-zero curvature (the reverberator and the spiral wave), and, as the excitability of the medium decreases, it begins to increase earlier than the refractory period for plane waves (in the case of circular rotation). These and other significant differences between the reverberator and the circular rotation of an excitation wave make us distinguish these two regimes of re-entry.
The figure shows the differences found in the behaviour of a plane autowave circulating in a ring and of a reverberator. One can see that, for the same local characteristics of the excitable medium (excitability, refractoriness, etc., given by the nonlinear term), there are significant quantitative differences between the dependencies of the reverberator characteristics and the characteristics of the regime of one-dimensional rotation of an impulse, although the respective dependencies match qualitatively.
Notes
References
Books
Papers
External links
Several simple classical models of autowaves (JS + WebGL), that can be run directly in your web browser; developed by Evgeny Demidov.
Biophysics
Computational science
Biomedical cybernetics
Nonlinear systems
Mathematical modeling
Parabolic partial differential equations | Autowave reverberator | [
"Physics",
"Mathematics",
"Biology"
] | 3,061 | [
"Mathematical modeling",
"Applied and interdisciplinary physics",
"Applied mathematics",
"Computational science",
"Nonlinear systems",
"Biophysics",
"Dynamical systems"
] |
39,070,884 | https://en.wikipedia.org/wiki/C7H10O7 | {{DISPLAYTITLE:C7H10O7}}
The molecular formula C7H10O7 (molar mass: 206.15 g/mol, exact mass: 206.0427 u) may refer to:
Homocitric acid
Homoisocitric acid
Molecular formulas | C7H10O7 | [
"Physics",
"Chemistry"
] | 62 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
47,910,741 | https://en.wikipedia.org/wiki/Nektar%2B%2B | Nektar++ is a spectral/hp element framework designed to support the construction of efficient high-performance scalable solvers for a wide range of partial differential equations (PDE). The code is released as open-source under the MIT license. Although primarily driven by application-based research, it has been designed as a platform to support the development of novel numerical techniques in the area of high-order finite element methods.
Nektar++ is modern object-oriented code written in C++ and is being actively developed by members of the SherwinLab at Imperial College London (UK) and Kirby's group at the University of Utah (US).
Capabilities
Nektar++ includes the following capabilities:
One-, two- and three-dimensional problems;
Multiple and mixed element types, i.e. triangles, quadrilaterals, tetrahedra, prisms and hexahedra;
Both hierarchical and nodal expansion bases with variable and heterogeneous polynomial order between elements;
Continuous Galerkin, discontinuous Galerkin, hybridizable discontinuous Galerkin and flux reconstruction operators;
Multiple implementations of finite element operators for efficient execution on a wide range of CPU architectures;
Comprehensive range of explicit, implicit and implicit-explicit (IMEX) time-integration schemes;
Preconditioners tailored to high-order finite element methods;
Numerical stabilization techniques such as dealiasing and spectral vanishing viscosity;
Parallel execution and scalable to thousands of processor cores;
Pre-processing tools to generate meshes, or manipulate and convert meshes generated with third-party software into a Nektar++-readable format;
Extensive post-processing capabilities for manipulating output data;
Cross platform support for Linux, Mac OS X and Windows;
Support for running jobs on cloud computing platforms via the prototype Nekkloud interface from the libhpc project;
Wide user community, support and annual workshop.
Stable versions of the software are released monthly, and it is supported by an extensive testing framework which ensures correctness across a range of platforms and architectures.
Other capabilities currently under active development include p-adaption, r-adaption and support for accelerators (GPGPU, Intel Xeon Phi).
Application domains
The development of the Nektar++ framework is driven by a number of aerodynamics and biomedical engineering applications and consequently the software package includes a number of pre-written solvers for these areas.
Incompressible flow
This solver time-integrates the incompressible Navier-Stokes equations for performing large-scale direct numerical simulation (DNS) in complex geometries. It also supports the linearised and adjoint forms of the Navier-Stokes equations for evaluating hydrodynamic stability of flows.
Compressible flow
External aerodynamics simulations of high-speed compressible flows are supported through solution of the compressible Euler or Navier-Stokes equations.
Cardiac Electrophysiology
This solver supports the solution of the monodomain model and bidomain model of action potential propagation through myocardium.
Other application areas
shallow water equations;
reaction-diffusion-advection problems;
pulse wave propagation solver for modelling arterial networks;
acoustic perturbation equations;
linear elasticity equations.
License
Nektar++ is free and open source software, released under the MIT license.
Alternative software
Free and open-source software
Nek5000 (BSD)
Advanced Simulation Library (AGPL)
Code Saturne (GPL)
FEATool Multiphysics
Gerris Flow Solver (GPL)
OpenFOAM (GPL)
SU2 code (LGPL)
PyFR
Proprietary software
ADINA CFD
ANSYS CFX
ANSYS Fluent
COMSOL Multiphysics
Pumplinx
Simcenter STAR-CCM+
KIVA (software)
RELAP5-3D
References
External links
Official resources
Nektar++ home page
Nektar++ Gitlab repository
Computational fluid dynamics
Free science software
Free computer-aided design software
Scientific simulation software | Nektar++ | [
"Physics",
"Chemistry"
] | 835 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
47,915,081 | https://en.wikipedia.org/wiki/Nitrogen%20clathrate | Nitrogen clathrate or nitrogen hydrate is a clathrate consisting of ice with regular crystalline cavities that contain nitrogen molecules. Nitrogen clathrate is a variety of air hydrates. It occurs naturally in ice caps on Earth, and is believed to be important in the outer Solar System on moons such as Titan and Triton which have a cold nitrogen atmosphere.
Properties
Nitrogen clathrate hydrate has a density in the range 0.95 to 1.00 g·cm−3, varying with how full the cavities are with nitrogen, so it may either float or sink in water. Its thermal conductivity is 0.5 W·m−1·K−1, about a quarter that of ice. The linear thermal expansion and heat capacity are similar to those of ice. The clathrate is much more resistant to shear stresses than pure water ice, yet its Young's modulus is about the same.
At 0.6 °C a pressure of at least 171.3 bars is required to start forming nitrogen clathrate in water. At -29.1 °C, the pressure required reduces to 71.5 bars.
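Between the two equilibrium points quoted above, the formation pressure can be estimated by assuming a linear relation between ln P and 1/T, a common empirical form for hydrate equilibria; the following sketch is an interpolation exercise, not a published correlation:

```python
import math

# Two equilibrium points for nitrogen clathrate in water (from above):
T1, P1 = 273.15 + 0.6, 171.3    # K, bar
T2, P2 = 273.15 - 29.1, 71.5    # K, bar

# Empirical van 't Hoff-type form: ln P = A - B / T
B = math.log(P1 / P2) / (1.0 / T2 - 1.0 / T1)
A = math.log(P1) + B / T1

def p_formation(T):
    """Interpolated clathrate formation pressure (bar) at T (kelvin)."""
    return math.exp(A - B / T)

print(round(p_formation(273.15 - 15.0), 1))  # rough estimate near -15 degC
```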
Additional molecules can allow a mixed nitrogen clathrate to form at lower pressures. For example, carbon disulfide only needs a third the pressure, and with cyclohexane only a quarter pressure is required.
The Raman spectrum of nitrogen clathrate shows a N-N stretching frequency at 2322.4 cm−1, this is smaller than for nitrogen dissolved in water (2325.0 cm−1) and gaseous nitrogen (2327.7 cm−1). It has an O-H stretching vibration at 3092.1 cm−1, which compares to 3125.3 cm−1 in ice.
Structure
The lowest pressure structure of nitrogen clathrate is called clathrate structure II, or CS-II. It is a cubic crystal structure with a unit cell edge of 17.3 Å. The clathrate has two kinds of cavity that can contain the guest nitrogen molecules. Each unit cell has eight large and 16 small cavities, along with 136 water molecules. The large cavity has twelve pentagonal faces and four hexagonal faces, with a cavity radius of 4.73 Å; it is called the hexadecahedral cavity, and its symbol is 51264. The small pentagonal dodecahedral cavities have twelve pentagonal faces and a radius of 3.91 Å; their symbol is 512. The large cavities can contain two nitrogen molecules and the small cavities one molecule each. The dissociation pressure of nitrogen increases with increasing temperature: at 300 K it is 2.06 kbar and at 285.6 K it is 0.55 kbar.
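The cavity counts above fix an upper bound on the gas content of the CS-II phase. Assuming full occupancy, with two nitrogen molecules in each large cage as stated, a quick arithmetic check gives the limiting composition:

```python
# CS-II unit cell, numbers taken from the text above:
water_per_cell = 136
large_cages, small_cages = 8, 16
n2_per_large, n2_per_small = 2, 1   # maximum occupancies

max_n2 = large_cages * n2_per_large + small_cages * n2_per_small
print(max_n2)                    # 32 N2 molecules per unit cell
print(water_per_cell / max_n2)   # 4.25 H2O per N2 at full filling
```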
There are four different nitrogen clathrate phases depending on pressure. At higher pressures the CS-II phase changes to a hexagonal structure termed SH. The SH unit cell contains 34 water molecules, 20 small cavities (512), 20 medium cavities (435663) and 36 large cavities (51268). At still higher pressures a tetragonal form (termed ST) (425864) exists. At even higher pressures a phase called a filled ice structure (FIS) is formed. This has alternate layers of water and nitrogen molecules.
The quadruple points in the phase diagram are where four phases, from among nitrogen gas, liquid water, ice, and the solid clathrate phases, are in equilibrium. One quadruple point is at 143 bars and −1.3 °C, where ice, clathrate hydrate, water and nitrogen gas are all present. At 6,500 bars and 41.5 °C there are two different clathrates, the low-pressure hydrate and hydrate-1. At 12,500 bars and 46.5 °C hydrate-1 and hydrate-2 are in equilibrium, and at 15,250 bars and 52.5 °C lies the point above which there is no liquid water, but rather ice VI.
Production
Nitrogen hydrate clathrate can be made by applying high nitrogen gas pressures to water. Crystals can take weeks to grow. Another way to produce it, without applied pressure, is to first make amorphous solid water by condensing water vapour at 77 K. This absorbs nitrogen gas at a pressure of 1 atmosphere. When the temperature is raised to 113 K the amorphous phase changes to a crystalline form, and the trapped nitrogen converts some of the ice into clathrate.
Applications
One way to perform carbon capture from combustion products is to compress it with water to try to form a carbon dioxide clathrate. Since the air for burning also contains nitrogen, the fumes from combustion contain mostly nitrogen, and so nitrogen clathrate formation also comes into effect. A pressure of 77 bars is required to start forming clathrate from 17% carbon dioxide – 83% nitrogen mix at 0.6 °C. The clathrate formed contains much more carbon dioxide than nitrogen, and so can separate out carbon dioxide to leave behind nitrogen. Using tetrahydrofuran at 1 molar concentration allows a mixed THF-carbon dioxide-nitrogen clathrate to form at much lower pressures (3.45 bars), but much less gas is consumed and it is much slower.
Nitrogen clathrate has been studied as a route to achieving a low pressure hydrogen clathrate for hydrogen storage. Forming hydrogen clathrate hydrate requires very high pressures, but by starting with nitrogen clathrate, multiple hydrogen molecules can substitute for nitrogen in the large cavities. However this is inefficient, also yielding a lot of ice.
Occurrence
On the Earth nitrogen clathrate is found in ice caps at a depth of 1000 m or more. Air bubbles that have been trapped are pressurised at this depth to 100 bars, and the nitrogen can combine with the cold ice to form a clathrate; however, this can be contaminated with dioxygen, forming an air clathrate.
On the Saturnian moon Titan, nitrogen clathrate is predicted to be stable and exist along with ice on the surface, and deeper into the crust. It may also exist as a solid layer beneath the interior ocean. Nitrogen is the predominant component of the atmosphere. The clathrate may serve as a reservoir for nitrogen, and clathrates may also store methane, hydrogen sulfide, krypton and xenon. Clathrates formed at −178 °C are predicted to be predominantly nitrogen clathrate, with a smaller proportion of methane clathrate. Propane and ethane only form minute constituents.
In the protosolar nebula, nitrogen clathrate is predicted to condense in a significant amount, about one percent, at temperatures below 45 K. However, carbon dioxide and carbon monoxide clathrates are expected to be more common. This would affect the composition of comets. In the gases coming out of comet 67P/Churyumov–Gerasimenko, the ROSINA instrument on the Rosetta spacecraft detected molecular nitrogen. N2 coming out of the comet could result from decomposing nitrogen clathrate or from nitrogen trapped in amorphous ice. The ratio to carbon monoxide (30 times more CO) suggests that the comet condensed at a temperature of 30 K.
On Mars the nitrogen pressure is far too low to produce nitrogen clathrate itself, but nitrogen likely makes up a small fraction of carbon dioxide clathrate which condenses at the poles. At 138 K it is predicted to make up 0.015% and at 161 K 0.032%. This proportion is smaller than that of argon, which is four times more abundant in the clathrate. 99.8% or more of the clathrate gas is carbon dioxide.
References
Further reading
Raman spectrum, has info on multi nitrogen molecules per cavity
Nitrogen compounds
Clathrate hydrates | Nitrogen clathrate | [
"Chemistry"
] | 1,624 | [
"Clathrates",
"Hydrates",
"Clathrate hydrates"
] |
47,919,963 | https://en.wikipedia.org/wiki/Copper-based%20reversible-deactivation%20radical%20polymerization | Copper-based reversible-deactivation radical polymerization (Cu-based RDRP) is a member of the class of reversible-deactivation radical polymerization. In this system, various copper species are employed as the transition-metal catalyst for reversible activation/deactivation of the propagating chains responsible for uniform polymer chain growth.
History of Copper-catalyzed RDRP
Although copper complexes (in combination with relevant ligands) have long been used as catalysts for organic reactions such as atom transfer radical addition (ATRA) and copper(I)-catalyzed alkyne-azide cycloaddition (CuAAC), copper complex catalyzed RDRP was not reported until 1995, when Jin-Shan Wang and Krzysztof Matyjaszewski introduced it as atom transfer radical polymerization (ATRP). ATRP with copper as catalyst quickly became one of the most robust and commonly used RDRP techniques for designing and synthesizing polymers with well-defined composition, functionality, and architecture. Due to some inherent drawbacks, such as the persistent radical effect (PRE), several advanced ATRP techniques have been developed, including activators regenerated by electron transfer (ARGET) ATRP and initiators for continuous activator regeneration (ICAR) ATRP.
One intriguing catalyst, metallic copper, has also been applied to these modified ATRP systems. Polymerization using Cu(0) and suitable ligands was first introduced by Krzysztof Matyjaszewski in 1997. In 2006, however, the Cu(0)-mediated RDRP of MA (in combination with tris(2-(dimethylamino)ethyl)amine (Me6TREN) as ligand in polar solvents) was reported with a very different postulated mechanism, single electron transfer living radical polymerization (SET-LRP), proposed by Virgil Percec. Prompted by this mechanistic disagreement, many research articles were published in recent years aiming to shed light on this specific polymerization reaction, and the discussion of the mechanisms has been a very striking episode in the field of polymer science.
Discussion of the mechanism
Supplemental activator and reducing agent atom-transfer radical polymerization (SARA ATRP)
In the case of RDRP reactions in the presence of Cu(0), one of the mechanistic models proposed in the literature is called the supplemental activator and reducing agent atom-transfer radical polymerization (SARA ATRP). The SARA ATRP is characterized by the traditional ATRP reactions of activation by Cu(I) and deactivation by Cu(II) at the core of the process, with Cu(0) acting primarily as a supplemental activator of alkyl halides and a reducing agent for the Cu(II) through comproportionation. There is minimal kinetic contribution of disproportionation because Cu(I) primarily activates alkyl halides and activation of all alkyl halides occurs by inner sphere electron transfer (ISET).
Single electron transfer living radical polymerization (SET-LRP)
Another model is called single-electron transfer living radical polymerization (SET-LRP), where Cu(0) is the exclusive activator of alkyl halides – a process that occurs by outer sphere electron transfer (OSET). The generated Cu(I) disproportionates ‘spontaneously’ into highly reactive ‘nascent’ Cu(0) and Cu(II) species, instead of participating in the activation of alkyl halides, and there is minimal comproportionation.
Copper-based reversible-deactivation radical polymerization (Cu-based RDRP)
A unique experimental phenomenon in Cu(0)-mediated RDRP systems with Me6TREN/DMSO as ligand/solvent is the existence of an apparent induction period in the early stage; this induction period disappears when extra Cu(II) is added to the reaction system or when PMDETA is employed as ligand. This intriguing phenomenon can be explained neither by SARA ATRP nor by SET-LRP; therefore, another mechanism was proposed by Wenxin Wang: copper-based reversible-deactivation radical polymerization (Cu-based RDRP), previously called Cu(0)-mediated RDRP.
The Cu-based RDRP mechanism showed that the induction period originates from the accumulation of soluble copper species during this initial unstable stage. It was demonstrated that Cu(I) acts as a powerful activator even under conditions favoring its disproportionation (in the Me6TREN/DMSO system), while Cu(0) is a supplemental activator and reducing agent, with disproportionation and comproportionation coexisting. Cu(II) can be consumed by both comproportionation and the deactivation reaction, the relative extent of which depends on the reactivity of the monomers and initiators.
In Cu-based RDRP systems, two coexisting equilibria must be considered simultaneously: the polymerization equilibrium (chain propagation, activation/deactivation, chain termination) and the copper conversion equilibrium (disproportionation/comproportionation). Different polymerization parameters (initiator, ligand, solvent, etc.) affect the polymerization process by synergistically influencing these two equilibria. For different monomers, the characteristics of these two equilibria differ accordingly, requiring a reconsideration of the combination of polymerization parameters. Based on a proper understanding of the kinetic control mechanism of Cu-based RDRP, long-standing challenges in the controlled polymerization of numerous monomers (low monomer conversion, poorly controlled MWs/MWDs, complex or multi-step reaction procedures, etc.) were overcome, allowing the successful controlled polymerization of various vinyl monomers, ranging from the highly active AM, NIPAM, DMA, MA, etc. to the less active MMA, St, etc. A minimal kinetic sketch of the core polymerization equilibrium is given below.
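The interplay of activation/deactivation, propagation, and termination discussed above can be made concrete with a minimal kinetic model. The sketch below integrates the textbook ATRP rate equations; all rate constants and initial concentrations are illustrative placeholders, and the model deliberately omits the Cu(0) comproportionation/disproportionation steps that distinguish the mechanisms debated above.

```python
from scipy.integrate import solve_ivp

# Minimal ATRP scheme (illustrative rate constants, L mol^-1 s^-1):
#   R-X + Cu(I)  -> R*  + Cu(II)   (activation,   ka)
#   R*  + Cu(II) -> R-X + Cu(I)    (deactivation, kd)
#   R*  + M      -> R*             (propagation,  kp)
#   2 R*         -> dead chains    (termination,  kt)
ka, kd, kp, kt = 1.0, 1.0e6, 1.0e3, 1.0e8

def rates(t, y):
    RX, CuI, CuII, R, M = y
    act, deact, term = ka * RX * CuI, kd * R * CuII, kt * R * R
    return [deact - act,             # dormant chains R-X
            deact - act,             # Cu(I)
            act - deact,             # Cu(II)
            act - deact - 2 * term,  # radicals R*
            -kp * R * M]             # monomer

y0 = [0.01, 0.01, 0.0, 0.0, 1.0]     # mol/L, illustrative
sol = solve_ivp(rates, (0.0, 2000.0), y0, method="LSODA",
                rtol=1e-8, atol=1e-12)

conversion = 1.0 - sol.y[4][-1] / y0[4]
print(f"monomer conversion after {sol.t[-1]:.0f} s: {conversion:.3f}")
```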
See also
Reversible-deactivation radical polymerization
Atom-transfer radical-polymerization
References
Polymerization reactions | Copper-based reversible-deactivation radical polymerization | [
"Chemistry",
"Materials_science"
] | 1,306 | [
"Polymerization reactions",
"Polymer chemistry"
] |
37,596,615 | https://en.wikipedia.org/wiki/Asymptotic%20safety%20in%20quantum%20gravity | Asymptotic safety (sometimes also referred to as nonperturbative renormalizability) is a concept in quantum field theory which aims at finding a consistent and predictive quantum theory of the gravitational field. Its key ingredient is a nontrivial fixed point of the theory's renormalization group flow which controls the behavior of the coupling constants in the ultraviolet (UV) regime and renders physical quantities safe from divergences. Although originally proposed by Steven Weinberg to find a theory of quantum gravity, the idea of a nontrivial fixed point providing a possible UV completion can be applied also to other field theories, in particular to perturbatively nonrenormalizable ones. In this respect, it is similar to quantum triviality.
The essence of asymptotic safety is the observation that nontrivial renormalization group fixed points can be used to generalize the procedure of perturbative renormalization. In an asymptotically safe theory the couplings do not need to be small or tend to zero in the high energy limit but rather tend to finite values: they approach a nontrivial UV fixed point. The running of the coupling constants, i.e. their scale dependence described by the renormalization group (RG), is thus special in its UV limit in the sense that all their dimensionless combinations remain finite. This suffices to avoid unphysical divergences, e.g. in scattering amplitudes. The requirement of a UV fixed point restricts the form of the bare action and the values of the bare coupling constants, which become predictions of the asymptotic safety program rather than inputs.
As for gravity, the standard procedure of perturbative renormalization fails since Newton's constant, the relevant expansion parameter, has negative mass dimension rendering general relativity perturbatively nonrenormalizable. This has driven the search for nonperturbative frameworks describing quantum gravity, including asymptotic safety which in contrast to other approaches is characterized by its use of quantum field theory methods, without depending on perturbative techniques, however. At the present time, there is accumulating evidence for a fixed point suitable for asymptotic safety, while a rigorous proof of its existence is still lacking.
Motivation
Gravity, at the classical level, is described by Einstein's field equations of general relativity, G_{μν} = 8πG T_{μν}. These equations combine the spacetime geometry, encoded in the metric g_{μν}, with the matter content, comprised in the energy–momentum tensor T_{μν}. The quantum nature of matter has been tested experimentally; for instance, quantum electrodynamics is by now one of the most accurately confirmed theories in physics. For this reason quantization of gravity seems plausible, too. Unfortunately the quantization cannot be performed in the standard way (perturbative renormalization): already a simple power-counting consideration signals the perturbative nonrenormalizability, since the mass dimension of Newton's constant in four spacetime dimensions is −2. The problem occurs as follows. According to the traditional point of view, renormalization is implemented via the introduction of counterterms that should cancel divergent expressions appearing in loop integrals. Applying this method to gravity, however, the counterterms required to eliminate all divergences proliferate to an infinite number. As this inevitably leads to an infinite number of free parameters to be measured in experiments, the program is unlikely to have predictive power beyond its use as a low energy effective theory.
It turns out that the first divergences in the quantization of general relativity which cannot be absorbed in counterterms consistently (i.e. without the necessity of introducing new parameters) appear already at one-loop level in the presence of matter fields.
At two-loop level the problematic divergences arise even in pure gravity.
In order to overcome this conceptual difficulty the development of nonperturbative techniques was required, providing various candidate theories of quantum gravity.
For a long time the prevailing view has been that the very concept of quantum field theory even though remarkably successful in the case of the other fundamental interactions is doomed to failure for gravity. By way of contrast, the idea of asymptotic safety retains quantum fields as the theoretical arena and instead abandons only the traditional program of perturbative renormalization.
History
After having realized the perturbative nonrenormalizability of gravity, physicists tried to employ alternative techniques to cure the divergence problem, for instance resummation or extended theories with suitable matter fields and symmetries, all of which come with their own drawbacks. In 1976, Steven Weinberg proposed a generalized version of the condition of renormalizability, based on a nontrivial fixed point of the underlying renormalization group (RG) flow for gravity.
This was called asymptotic safety.
The idea of a UV completion by means of a nontrivial fixed point of the renormalization groups had been proposed earlier by Kenneth G. Wilson and Giorgio Parisi in scalar field theory
(see also Quantum triviality).
The applicability to perturbatively nonrenormalizable theories was first demonstrated explicitly for the non-linear sigma model and for a variant of the Gross–Neveu model.
As for gravity, the first studies concerning this new concept were performed in d = 2 + ε spacetime dimensions in the late seventies. In exactly two dimensions there is a theory of pure gravity that is renormalizable according to the old point of view. (In order to render the Einstein–Hilbert action dimensionless, Newton's constant must have mass dimension zero.) For small but finite ε, perturbation theory is still applicable, and one can expand the beta-function (β-function) describing the renormalization group running of Newton's constant as a power series in ε. Indeed, in this spirit it was possible to prove that it displays a nontrivial fixed point.
However, it was not clear how to do a continuation from d = 2 + ε to d = 4 dimensions, as the calculations relied on the smallness of the expansion parameter ε. The computational methods for a nonperturbative treatment were not at hand at that time. For this reason the idea of asymptotic safety in quantum gravity was put aside for some years. Only in the early 90s were aspects of (2 + ε)-dimensional gravity revisited in various works, still without continuing the dimension to four.
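In schematic form, the ε-expansion referred to above gives a one-loop beta function for the dimensionless Newton coupling g of the following type (the positive coefficient b depends on the scheme and matter content and is left unspecified here; the structure, not the coefficient, is the point):

```latex
% One-loop running of the dimensionless Newton coupling in d = 2 + epsilon
% (schematic; the coefficient b > 0 is scheme- and matter-dependent):
\beta(g) \equiv k\,\frac{\mathrm{d}g}{\mathrm{d}k}
        = \varepsilon\, g - b\, g^{2}
\qquad\Longrightarrow\qquad
g_{*} = \frac{\varepsilon}{b} .
% Linearizing: beta'(g_*) = -epsilon < 0, so g -> g_* as k -> infinity,
% i.e. the nontrivial fixed point is UV-attractive for small epsilon.
```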
As for calculations beyond perturbation theory, the situation improved with the advent of new functional renormalization group methods, in particular the so-called effective average action (a scale dependent version of the effective action). Introduced in 1993 by Christof Wetterich and Tim R Morris for scalar theories, and by Martin Reuter and Christof Wetterich for general gauge theories (on flat Euclidean space), it is similar to a Wilsonian action (coarse grained free energy) and although it is argued to differ at a deeper level, it is in fact related by a Legendre transform. The cutoff scale dependence of this functional is governed by a functional flow equation which, in contrast to earlier attempts, can easily be applied in the presence of local gauge symmetries also.
In 1996, Martin Reuter constructed a similar effective average action and the associated flow equation for the gravitational field.
It complies with the requirement of background independence, one of the fundamental tenets of quantum gravity. This work can be considered an essential breakthrough in asymptotic safety related studies on quantum gravity as it provides the possibility of nonperturbative computations for arbitrary spacetime dimensions. It was shown that at least for the Einstein–Hilbert truncation, the simplest ansatz for the effective average action, a nontrivial fixed point is indeed present.
These results mark the starting point for many calculations that followed. Since it was not clear in the pioneer work by Martin Reuter to what extent the findings depended on the truncation ansatz considered, the next obvious step consisted in enlarging the truncation. This process was initiated by Roberto Percacci and collaborators, starting with the inclusion of matter fields.
Up to the present, many different works by a continuously growing community – including, e.g., higher-curvature and Weyl tensor squared truncations – have confirmed independently that the asymptotic safety scenario is actually possible: the existence of a nontrivial fixed point was shown within each truncation studied so far. Although a final proof is still lacking, there is mounting evidence that the asymptotic safety program can ultimately lead to a consistent and predictive quantum theory of gravity within the general framework of quantum field theory.
Main ideas
Theory space
The asymptotic safety program adopts a modern Wilsonian viewpoint on quantum field theory. Here the basic input data to be fixed at the beginning are, firstly, the kinds of quantum fields carrying the theory's degrees of freedom and, secondly, the underlying symmetries. For any theory considered, these data determine the stage the renormalization group dynamics takes place on, the so-called theory space. It consists of all possible action functionals depending on the fields selected and respecting the prescribed symmetry principles. Each point in this theory space thus represents one possible action. Often one may think of the space as spanned by all suitable field monomials. In this sense any action in theory space is a linear combination of field monomials, where the corresponding coefficients are the coupling constants {g_α}. (Here all couplings are assumed to be dimensionless; couplings can always be made dimensionless by multiplication with a suitable power of the RG scale k.)
Renormalization group flow
The renormalization group (RG) describes the change of a physical system due to smoothing or averaging out microscopic details when going to a lower resolution. This brings into play a notion of scale dependence for the action functionals of interest. Infinitesimal RG transformations map actions to nearby ones, thus giving rise to a vector field on theory space. The scale dependence of an action is encoded in a "running" of the coupling constants parametrizing this action, g_α = g_α(k), with the RG scale k. This gives rise to a trajectory in theory space (RG trajectory), describing the evolution of an action functional with respect to the scale. Which of all possible trajectories is realized in Nature has to be determined by measurements.
Taking the UV limit
The construction of a quantum field theory amounts to finding an RG trajectory which is infinitely extended in the sense that the action functional described by g_α(k) is well-behaved for all values of the momentum scale parameter k, including the infrared limit k → 0 and the ultraviolet (UV) limit k → ∞. Asymptotic safety is a way of dealing with the latter limit. Its fundamental requirement is the existence of a fixed point of the RG flow. By definition this is a point {g_α*} in the theory space where the running of all couplings stops, or, in other words, a zero of all beta-functions: β_γ({g_α*}) = 0 for all γ. In addition that fixed point must have at least one UV-attractive direction. This ensures that there are one or more RG trajectories which run into the fixed point for increasing scale. The set of all points in the theory space that are "pulled" into the UV fixed point by going to larger scales is referred to as the UV critical surface. Thus the UV critical surface consists of all those trajectories which are safe from UV divergences in the sense that all couplings approach finite fixed point values as k → ∞. The key hypothesis underlying asymptotic safety is that only trajectories running entirely within the UV critical surface of an appropriate fixed point can be infinitely extended and thus define a fundamental quantum field theory. It is obvious that such trajectories are well-behaved in the UV limit as the existence of a fixed point allows them to "stay at a point" for an infinitely long RG "time".
With regard to the fixed point, UV-attractive directions are called relevant, UV-repulsive ones irrelevant, since the corresponding scaling fields increase and decrease, respectively, when the scale is lowered. Therefore, the dimensionality of the UV critical surface equals the number of relevant couplings. An asymptotically safe theory is thus the more predictive the smaller is the dimensionality of the corresponding UV critical surface.
For instance, if the UV critical surface has the finite dimension n, it is sufficient to perform only n measurements in order to uniquely identify Nature's RG trajectory. Once the n relevant couplings are measured, the requirement of asymptotic safety fixes all other couplings, since the latter have to be adjusted in such a way that the RG trajectory lies within the UV critical surface. In this spirit the theory is highly predictive, as infinitely many parameters are fixed by a finite number of measurements.
In contrast to other approaches, a bare action which should be promoted to a quantum theory is not needed as an input here. It is the theory space and the RG flow equations that determine possible UV fixed points. Since such a fixed point, in turn, corresponds to a bare action, one can consider the bare action a prediction in the asymptotic safety program. This may be thought of as a systematic search strategy among theories that are already "quantum" which identifies the "islands" of physically acceptable theories in the "sea" of unacceptable ones plagued by short distance singularities.
Gaussian and non-Gaussian fixed points
A fixed point is called Gaussian if it corresponds to a free theory. Its critical exponents agree with the canonical mass dimensions of the corresponding operators, which usually amounts to the trivial fixed point values g_α* = 0 for all essential couplings g_α. Thus standard perturbation theory is applicable only in the vicinity of a Gaussian fixed point. In this regard, asymptotic safety at the Gaussian fixed point is equivalent to perturbative renormalizability plus asymptotic freedom. Due to the arguments presented in the introductory sections, however, this possibility is ruled out for gravity.
In contrast, a nontrivial fixed point, that is, a fixed point whose critical exponents differ from the canonical ones, is referred to as non-Gaussian. Usually this requires g_α* ≠ 0 for at least one essential coupling g_α. It is such a non-Gaussian fixed point that provides a possible scenario for quantum gravity. As yet, studies on this subject have thus mainly focused on establishing its existence.
Quantum Einstein gravity (QEG)
Quantum Einstein gravity (QEG) is the generic name for any quantum field theory of gravity that (regardless of its bare action) takes the spacetime metric as the dynamical field variable and whose symmetry is given by diffeomorphism invariance. This fixes the theory space and an RG flow of the effective average action defined over it, but it does not single out a priori any specific action functional. However, the flow equation determines a vector field on that theory space which can be investigated. If it displays a non-Gaussian fixed point by means of which the UV limit can be taken in the "asymptotically safe" way, this point acquires the status of the bare action.
Quantum quadratic gravity (QQG)
A specific realisation of QEG is quantum quadratic gravity (QQG). This is a quantum extension of general relativity obtained by adding all local quadratic-in-curvature terms to the Einstein–Hilbert Lagrangian. QQG, besides being renormalizable, has also been shown to feature a UV fixed point (even in the presence of realistic matter sectors). It can, therefore, be regarded as a concrete realisation of asymptotic safety.
Implementation via the effective average action
Exact functional renormalization group equation
The primary tool for investigating the gravitational RG flow with respect to the energy scale k at the nonperturbative level is the effective average action Γ_k for gravity. It is the scale dependent version of the effective action where, in the underlying functional integral, field modes with covariant momenta below k are suppressed while only the remaining ones are integrated out. For a given theory space, let Φ and Φ̄ denote the set of dynamical and background fields, respectively. Then Γ_k satisfies the following Wetterich–Morris-type functional RG equation (FRGE):

∂_t Γ_k[Φ, Φ̄] = (1/2) STr[ (Γ_k^(2) + R_k)^−1 ∂_t R_k ],   with t = ln k.

Here Γ_k^(2) is the second functional derivative of Γ_k with respect to the quantum fields Φ at fixed Φ̄. The mode suppression operator R_k provides a k-dependent mass-term for fluctuations with covariant momenta p² ≪ k² and vanishes for p² ≫ k².
Its appearance in the numerator and denominator renders the supertrace both infrared and UV finite, peaking at momenta p² ≈ k². The FRGE is an exact equation without any perturbative approximations. Given an initial condition, it determines Γ_k for all scales uniquely.
The solutions of the FRGE interpolate between the bare (microscopic) action at k → ∞ and the effective action at k = 0. They can be visualized as trajectories in the underlying theory space. Note that the FRGE itself is independent of the bare action. In the case of an asymptotically safe theory, the bare action is determined by the fixed point functional Γ_*.
Truncations of the theory space
Let us assume there is a set of basis functionals {P_α[·]} spanning the theory space under consideration, so that any action functional, i.e. any point of this theory space, can be written as a linear combination of the P_α's. Then solutions of the FRGE have expansions of the form

Γ_k[Φ, Φ̄] = Σ_{α=1}^∞ g_α(k) P_α[Φ, Φ̄].

Inserting this expansion into the FRGE and expanding the trace on its right-hand side in order to extract the beta-functions, one obtains the exact RG equation in component form: ∂_t g_α(k) = β_α(g_1, g_2, ...). Together with the corresponding initial conditions, these equations fix the evolution of the running couplings g_α(k) and thus determine Γ_k completely. As one can see, the FRGE gives rise to a system of infinitely many coupled differential equations, since there are infinitely many couplings and the β-functions can depend on all of them. This makes it very hard to solve the system in general.
A possible way out is to restrict the analysis to a finite-dimensional subspace as an approximation of the full theory space. In other words, such a truncation of the theory space sets all but a finite number of couplings to zero, considering only the reduced basis {P_α} with α = 1, ..., N. This amounts to the ansatz

Γ_k[Φ, Φ̄] ≈ Σ_{α=1}^N g_α(k) P_α[Φ, Φ̄],

leading to a system of finitely many coupled differential equations, ∂_t g_α(k) = β_α(g_1, ..., g_N) for α = 1, ..., N, which can now be solved employing analytical or numerical techniques.
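As an illustration of how such a finite system is treated in practice, the sketch below integrates a two-coupling flow with purely schematic beta functions, chosen only so that a non-Gaussian fixed point exists; they are not the beta functions of any gravitational truncation.

```python
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Schematic two-coupling flow dg_i/dt = beta_i(g), with t = ln k.
# Toy beta functions with a non-Gaussian fixed point; these are NOT
# the Einstein-Hilbert beta functions.
def beta(t, g):
    g1, g2 = g
    return [2.0 * g1 - 8.0 * g1**2 + g1 * g2,
            -2.0 * g2 + 4.0 * g1]

# Locate the non-Gaussian fixed point, beta(g*) = 0 with g* != 0:
g_star = fsolve(lambda g: beta(0.0, g), x0=[0.3, 0.3])
print("fixed point:", g_star)        # approx. [0.333, 0.667]

# Flowing towards the UV (increasing t) from a nearby starting point,
# the couplings run into the fixed point, i.e. the trajectory is "safe":
sol = solve_ivp(beta, (0.0, 10.0), [0.2, 0.2])
print("couplings at t = 10:", sol.y[:, -1])
```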
Clearly a truncation should be chosen such that it incorporates as many features of the exact flow as possible. Although it is an approximation, the truncated flow still exhibits the nonperturbative character of the FRGE, and the β-functions can contain contributions from all powers of the couplings.
Evidence from truncated flow equations
Einstein–Hilbert truncation
As described in the previous section, the FRGE lends itself to a systematic construction of nonperturbative approximations to the gravitational beta-functions by projecting the exact RG flow onto subspaces spanned by a suitable ansatz for Γ_k. In its simplest form, such an ansatz is given by the Einstein–Hilbert action, where Newton's constant G_k and the cosmological constant Λ_k depend on the RG scale k. Let g_{μν} and ḡ_{μν} denote the dynamical and the background metric, respectively. Then Γ_k reads, for arbitrary spacetime dimension d,

Γ_k[g, ḡ] = (16π G_k)^−1 ∫ d^d x √(det g) ( −R(g) + 2Λ_k ) + S_gf + S_gh .

Here R(g) is the scalar curvature constructed from the metric g_{μν}. Furthermore, S_gf denotes the gauge fixing action, and S_gh the ghost action with the ghost fields C^μ and C̄_μ.
The corresponding β-functions, describing the evolution of the dimensionless Newton constant g_k = k^{d−2} G_k and the dimensionless cosmological constant λ_k = Λ_k / k², were derived for the first time in the original reference for any value of the spacetime dimensionality, including the cases below and above four dimensions. In particular, in d = 4 they give rise to the RG flow diagram shown on the left-hand side. The most important result is the existence of a non-Gaussian fixed point suitable for asymptotic safety. It is UV-attractive both in the g- and in the λ-direction.
This fixed point is related to the one found in d = 2 + ε dimensions by perturbative methods, in the sense that it is recovered in the nonperturbative approach presented here by inserting d = 2 + ε into the β-functions and expanding in powers of ε. Since the β-functions were shown to exist and were explicitly computed for any real, i.e., not necessarily integer, value of d, no analytic continuation is involved here. The fixed point in d = 4 dimensions, too, is a direct result of the nonperturbative flow equations, and, in contrast to the earlier attempts, no extrapolation in ε is required.
Extended truncations
Subsequently, the existence of the fixed point found within the Einstein–Hilbert truncation has been confirmed in subspaces of successively increasing complexity. The next step in this development was the inclusion of an R²-term in the truncation ansatz.
This has been extended further by taking into account polynomials of the scalar curvature R (so-called f(R)-truncations),
and the square of the Weyl curvature tensor.
Also, f(R) theories have been investigated in the local potential approximation, finding nonperturbative fixed points in support of the asymptotic safety scenario and leading to the so-called Benedetti–Caravelli (BC) fixed point. In the BC formulation, the differential equation for the Ricci scalar R is overconstrained, but some of these constraints can be removed via the resolution of movable singularities.
Moreover, the impact of various kinds of matter fields has been investigated.
Also computations based on a field reparametrization invariant effective average action seem to recover the crucial fixed point.
In combination these results constitute strong evidence that gravity in four dimensions is a nonperturbatively renormalizable quantum field theory, indeed with a UV critical surface of reduced dimensionality, coordinatized by only a few relevant couplings.
Microscopic structure of spacetime
Results of asymptotic safety related investigations indicate that the effective spacetimes of QEG have fractal-like properties on microscopic scales. It is possible to determine, for instance, their spectral dimension and argue that they undergo a dimensional reduction from 4 dimensions at macroscopic distances to 2 dimensions microscopically.
In this context it might be possible to draw the connection to other approaches to quantum gravity, e.g. to causal dynamical triangulations, and compare the results.
Physics applications
Phenomenological consequences of the asymptotic safety scenario have been investigated in many areas of gravitational physics. As an example, asymptotic safety in combination with the Standard Model allows a statement about the mass of the Higgs boson and the value of the fine-structure constant.
Furthermore, it provides possible explanations for particular phenomena in cosmology and astrophysics, concerning black holes or inflation, for instance. These different studies take advantage of the possibility that the requirement of asymptotic safety can give rise to new predictions and conclusions for the models considered, often without depending on additional, possibly unobserved, assumptions.
Criticism
Some researchers argued that the current implementations of the asymptotic safety program for gravity have unphysical features, such as the running of the Newton constant. Others argued that the very concept of asymptotic safety is a misnomer, as it suggests a novel feature compared to the Wilsonian RG paradigm, while there is none (at least in the quantum field theory context, where this term is also used).
See also
Asymptotic freedom
Causal dynamical triangulation
Causal sets
Critical phenomena
Euclidean quantum gravity
Fractal cosmology
Functional renormalization group
Loop quantum gravity
Perturbative renormalization
Planck scale
Physics applications of asymptotically safe gravity
Regge Calculus
Quantum gravity
Renormalization group
Ultraviolet fixed point
References
Further reading
External links
The Asymptotic Safety FAQs A collection of questions and answers about asymptotic safety and a comprehensive list of references.
Asymptotic Safety in quantum gravity A Scholarpedia article about the same topic with some more details on the gravitational effective average action.
The Quantum Theory of Fields: Effective or Fundamental? A talk by Steven Weinberg at CERN on July 7, 2009.
Asymptotic Safety - 30 Years Later All talks of the workshop held at the Perimeter Institute on November 5 – 8, 2009.
Four radical routes to a theory of everything An article by Amanda Gefter on quantum gravity, published 2008 in New Scientist (Physics & Math).
(From 1:11:28 to 1:18:10 in the video, Weinberg gives a brief discussion of asymptotic safety. Also see Weinberg's answer to Cecilia Jarlskog's question at the end of the lecture. The 2009 Källén lecture was recorded on February 13, 2009.)
Concepts in physics
Quantum gravity
Quantum field theory
Renormalization group
Fixed points (mathematics)
Scaling symmetries
Physical cosmology | Asymptotic safety in quantum gravity | [
"Physics",
"Astronomy",
"Mathematics"
] | 5,105 | [
"Physical phenomena",
"Unsolved problems in physics",
"Quantum mechanics",
"Topology",
"Statistical mechanics",
"Physics beyond the Standard Model",
"Astronomical sub-disciplines",
"Dynamical systems",
"Mathematical analysis",
"Scaling symmetries",
"Theoretical physics",
"Critical phenomena",
... |
37,605,500 | https://en.wikipedia.org/wiki/Colloidal%20probe%20technique | The colloidal probe technique is commonly used to measure interaction forces acting between colloidal particles and/or planar surfaces in air or in solution. This technique relies on the use of an atomic force microscope (AFM). However, instead of a cantilever with a sharp AFM tip, one uses the colloidal probe. The colloidal probe consists of a colloidal particle of few micrometers in diameter that is attached to an AFM cantilever. The colloidal probe technique can be used in the sphere-plane or sphere-sphere geometries (see figure). One typically achieves a force resolution between 1 and 100 pN and a distance resolution between 0.5 and 2 nm.
The colloidal probe technique was developed in 1991, independently by Ducker and Butt. Since its development this tool has gained wide popularity in numerous research laboratories, and numerous reviews are available in the scientific literature.
Alternative techniques to measure forces between surfaces include the surface forces apparatus, total internal reflection microscopy, and optical tweezers techniques combined with video microscopy.
Purpose
The possibility to measure forces involving particles and surfaces directly is essential, since such forces are relevant in a variety of processes involving colloidal and polymeric systems. Examples include particle aggregation, suspension rheology, particle deposition, and adhesion processes. One can equally study biological phenomena, such as the deposition of bacteria or the infection of cells by viruses. Force measurements are also highly informative for investigating the mechanical properties of interfaces, bubbles, capsules, membranes, or cell walls. Such measurements permit conclusions about the elastic or plastic deformation, or eventual rupture, of such systems.
The colloidal probe technique provides a versatile tool to measure such forces between a colloidal particle and a planar substrate or between two colloidal particles (see figure above). The particles used in such experiments typically have a diameter of 1–10 μm. Typical applications involve measurements of electrical double layer forces and the corresponding surface potentials or surface charge, van der Waals forces, or forces induced by adsorbed polymers.
Principle
The colloidal probe technique uses a standard AFM for the force measurements. But instead of the AFM cantilever with an attached sharp tip, one uses the colloidal probe. By recording the deflection of the cantilever as a function of the vertical displacement of the AFM scanner, one can extract the force acting between the probe and the surface as a function of the surface separation. This type of AFM operation is referred to as the force mode. With this probe, one can study interactions between various surfaces and probe particles in the sphere-plane geometry. It is also possible to study forces between colloidal particles by attaching another particle to the substrate and performing the measurement in the sphere-sphere geometry, see figure above.
The force mode used in the colloidal probe technique is illustrated in the figure on the left. The scanner is fabricated from piezoelectric crystals, which enable its positioning with a precision better than 0.1 nm. The scanner is lifted towards the probe and thereby one records the scanner displacement D. At the same time, the deflection of the cantilever ξ is monitored as well, typically with a comparable precision. One measures the deflection by focusing a light beam originating from a non-coherent laser diode to the back of the cantilever and detecting the reflected beam with a split photodiode. The lever signal S represents the difference in the photocurrents originating from the two halves of the diode. The lever signal is therefore proportional to the deflection ξ.
During an approach-retraction cycle, one records the lever signal S as a function of the vertical displacement D of the scanner. Suppose for the moment that the probe and the substrate are hard and non-deformable objects and that no forces are acting between them when they are not in contact. In such a situation, one refers to a hard-core repulsion. The cantilever will thus not deform as long not being in contact with the substrate. When the cantilever touches the substrate, its deflection will be the same as the displacement of the substrate. This response is referred to as the constant compliance or contact region. The lever signal S as a function of the scanner displacement D is shown in the figure below. This graph consists of two straight lines resembling a hockey-stick. When the surfaces are not in contact, the lever signal will be denoted as S0. This value corresponds to the non-deformed lever. In the constant compliance region, the lever signal is simply a linear function of the displacement, and can be represented as a straight line
S = a D + b
The parameters a and b can be obtained from a least-squares fit of the constant compliance region. The inverse slope a−1 is also referred to as the optical lever sensitivity. By inverting this relation for the lever signal S0, which corresponds to the non-deformed lever, one can accurately obtain the contact point from D0 = (S0 − b)/a. Depending on the substrate, the precision in determining this contact point is between 0.5–2 nm. In the constant compliance region, the lever deformation is given by
ξ = (S − S0)/a
In this fashion, one can detect deflections of the cantilever with typical resolution of better than 0.1 nm.
Let us now consider the relevant situation where the probe and the substrate interact. Let us denote by F(h) the force between the probe and the substrate. This force depends on the surface separation h.
In equilibrium, this force is compensated by the restoring force of the spring, which is given by Hooke's law
F = k ξ
where k is the spring constant of the cantilever. Typical spring constants of AFM cantilevers are in the range of 0.1−10 N/m. Since the deflection is monitored with a precision better 0.1 nm, one typically obtains a force resolution of 1−100 pN. The separation distance can be obtained from the displacement of the scanner and the cantilever deflection
h = ξ + D − D0
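The three relations above are all that is needed to convert a raw approach curve into a force–distance curve. A minimal sketch follows (the array names and calibration inputs are assumptions for illustration, and sign conventions vary between instruments):

```python
import numpy as np

def force_curve(D, S, a, b, k):
    """Convert scanner displacement D and lever signal S (arrays ordered
    along the approach, starting far from the surface) into a
    force-versus-separation curve.

    a, b : slope and intercept of the constant-compliance line S = a D + b
    k    : cantilever spring constant (N/m)
    """
    S0 = S[0]              # signal of the undeformed lever, far from contact
    D0 = (S0 - b) / a      # contact point
    xi = (S - S0) / a      # cantilever deflection
    F = k * xi             # Hooke's law
    h = xi + D - D0        # surface separation
    return h, F

# Usage (placeholder calibration numbers):
# h, F = force_curve(D_raw, S_raw, a=2.0e7, b=-0.1, k=0.3)
```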
The figure below illustrates how the cantilever responds to different force profiles. In the case of a soft repulsive force, the cantilever is repelled from the surface and approaches the constant compliance region only slowly. In such situations, it might actually be difficult to identify this region correctly. When the force is attractive, the cantilever is attracted to the surface and may become unstable. From stability considerations one finds that the cantilever will be unstable provided
dF/dh > k
This instability is illustrated in the right panel of the figure on the right. As the cantilever approaches the surface, the slope of the force curve increases. When the slope of the force curve exceeds the spring constant of the cantilever, the cantilever jumps into contact. Upon retraction, the same phenomenon happens, but the point where the cantilever jumps out of contact is reached at a smaller separation. Upon approach and retraction, the system thus shows a hysteresis. In such situations, a part of the force profile cannot be probed. However, this problem can be avoided by using a stiffer cantilever, albeit at the expense of an inferior force resolution.
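As a concrete example of this criterion, consider a non-retarded van der Waals attraction between sphere and plane, F(h) = −HR/(6h²), with H the Hamaker constant and R the probe radius; then dF/dh = HR/(3h³), and the jump-in occurs at h = (HR/3k)^(1/3). The numbers below are typical orders of magnitude, not measured values:

```python
# Jump-to-contact distance for a van der Waals sphere-plane force
# F(h) = -H R / (6 h^2): instability where dF/dh = H R / (3 h^3) = k.
H = 1e-20    # Hamaker constant, J (order of magnitude)
R = 2.5e-6   # probe radius, m (a 5 um particle)
k = 0.3      # cantilever spring constant, N/m

h_jump = (H * R / (3.0 * k)) ** (1.0 / 3.0)
print(f"jump-in at roughly {h_jump * 1e9:.1f} nm")   # ~3 nm here
```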
Extensions
The colloidal probes are normally fabricated by gluing a colloidal particle to a tip-less cantilever with a micromanipulator in air. The subsequent rewetting of the probe may lead to the formation of nanosized bubbles on the probe surface. This problem can be avoided by attaching the colloidal particles under wet conditions in the AFM fluid cell to appropriately functionalized cantilevers. While the colloidal probe technique is mostly used in the sphere-plane geometry, it can also be used in the sphere-sphere geometry. The latter geometry further requires a lateral centering of the two particles, which can be achieved either with an optical microscope or with an AFM scan. The results obtained in these two different geometries can be related with the Derjaguin approximation.
The force measurements rely on an accurate value of the spring constant of the cantilever. This spring constant can be measured by different techniques. The thermal noise method is the simplest to use, as it is implemented on most AFMs. This approach relies on the determination of the mean square amplitude of the cantilever displacement due to spontaneous thermal fluctuations; this quantity is related to the spring constant by means of the equipartition theorem. In the added mass method, one attaches a series of metal beads to the cantilever and in each case determines the resonance frequency. By exploiting the harmonic oscillator relation between the resonance frequency and the added mass, one can evaluate the spring constant as well. The frictional force method relies on the measurement of approach and retract curves of the cantilever through a viscous fluid. Since the hydrodynamic drag of a sphere close to a planar substrate is known theoretically, the spring constant of the cantilever can be deduced. The geometrical method exploits relations between the geometry of the cantilever and its elastic properties.
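The thermal noise method mentioned first reduces to the equipartition theorem, (1/2) k ⟨ξ²⟩ = (1/2) kB T, so the spring constant follows from the mean-square thermal deflection. A minimal sketch (the deflection record is an assumed input; in practice an instrument-dependent correction factor of order unity for the cantilever mode shape is also applied):

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def spring_constant_thermal(xi, T=298.0):
    """Spring constant from equipartition: k = kB T / <xi^2>.

    xi : thermal deflection fluctuations in metres, drift removed.
    """
    return kB * T / np.var(xi)

# Example with synthetic data: 0.1 nm rms thermal motion at room
# temperature corresponds to k of roughly 0.4 N/m.
xi = np.random.normal(0.0, 1e-10, 100_000)
print(f"k = {spring_constant_thermal(xi):.3f} N/m")
```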
The separation is normally measured from the onset of the constant compliance region. While the relative surface separation can be determined with a resolution of 0.1 nm or better, the absolute surface separation is obtained from the onset of the constant compliance region. For solid samples this onset can be determined with a precision of 0.5–2 nm, but its location can be problematic for soft repulsive interactions and for deformable surfaces. For this reason, techniques have been developed to measure the surface separation independently (e.g., total internal reflection microscopy, reflection interference contrast microscopy).
Scanning the sample laterally with the colloidal probe permits the exploitation of friction forces between the probe and the substrate. Since this technique exploits the torsion of the cantilever, the torsional spring constant of the cantilever must be determined to obtain quantitative data.
A related technique involving a similar type of force measurement with the AFM is single-molecule force spectroscopy. However, this technique uses a regular AFM tip to which a single polymer molecule is attached. From the retraction part of the force curve, one can obtain information about the stretching of the polymer or its peeling from the surface.
See also
Surface forces
References
Chemistry
Materials science
Colloidal chemistry | Colloidal probe technique | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,202 | [
"Colloidal chemistry",
"Applied and interdisciplinary physics",
"Materials science",
"Colloids",
"Surface science",
"nan"
] |
46,414,675 | https://en.wikipedia.org/wiki/Quantum%20mechanical%20scattering%20of%20photon%20and%20nucleus | In pair production, a photon creates an electron positron pair. In the process of photons scattering in air (e.g. in lightning discharges), the most important interaction is the scattering of photons at the nuclei of atoms or molecules. The full quantum mechanical process of pair production can be described by the quadruply differential cross section given here:
with
This expression can be derived by using a quantum mechanical symmetry between pair production and Bremsstrahlung. Z is the atomic number, α the fine structure constant, ħ the reduced Planck constant and c the speed of light. The kinetic energies of the positron and electron relate to their total energies E± and momenta p± via E_kin,± = E± − m_e c², with E± = √(p±² c² + m_e² c⁴).
Conservation of energy yields E_γ = E₊ + E₋, where E_γ is the energy of the incident photon and the recoil energy of the heavy nucleus is neglected. The momentum q of the virtual photon between incident photon and nucleus follows from momentum conservation, q = p_γ − p₊ − p₋, where p_γ is the momentum of the incident photon and the directions of the positron and electron momenta p₊ and p₋ are specified by polar and azimuthal angles measured relative to p_γ.
In order to analyse the relation between the photon energy and the emission angle between photon and positron, Köhn and Ebert integrated the quadruply differential cross section over the remaining angular variables. The resulting double differential cross section is again too lengthy to reproduce here.
This cross section can be applied in Monte Carlo simulations. An analysis of this expression shows that positrons are mainly emitted in the direction of the incident photon.
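In such Monte Carlo simulations, the positron emission angle can be drawn from the double differential cross section by standard rejection sampling. The sketch below assumes a user-supplied callable d2sigma(E, theta) implementing the Köhn–Ebert expression; the envelope bound is found numerically on a grid, an illustrative shortcut rather than an optimized sampler. Most accepted samples will lie at small theta, reflecting the forward emission noted above.

```python
import numpy as np

def sample_positron_angle(d2sigma, E, n_grid=2000, rng=None):
    """Draw a positron emission angle theta in [0, pi] for photon
    energy E by rejection sampling.

    d2sigma : vectorized callable giving the distribution in theta
              (including the sin(theta) Jacobian if it was originally
              a density per solid angle).
    """
    rng = rng or np.random.default_rng()
    grid = np.linspace(0.0, np.pi, n_grid)
    fmax = 1.05 * d2sigma(E, grid).max()   # numerical envelope bound
    while True:
        theta = rng.uniform(0.0, np.pi)
        if rng.uniform(0.0, fmax) < d2sigma(E, theta):
            return theta
```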
References
Quantum mechanics
Particle physics | Quantum mechanical scattering of photon and nucleus | [
"Physics"
] | 254 | [
"Theoretical physics",
"Quantum mechanics",
"Particle physics"
] |
24,909,382 | https://en.wikipedia.org/wiki/EN%20206%2BA2 | EN 206+A2 Concrete – Part 1: Specification, performance, production and conformity (formerly EN 206 and EN 206-1 and EN 206+A1) is a European standard elaborated by the CEN/TC 104 "Concrete and related products" technical committee.
See also
List of EN standards
European Committee for Standardization
EN 197-1: Cement – Part 1 : Composition, specifications and conformity criteria for common cements
EN 197-2: Cement – Part 2 : Conformity evaluation
EN 1992 Eurocode 2: Design of concrete structures
References
External links
European Committee for Standardization
Cement
Concrete
Construction standards | EN 206+A2 | [
"Physics",
"Engineering"
] | 125 | [
"Structural engineering",
"Construction standards",
"Materials stubs",
"Construction",
"Materials",
"Concrete",
"Matter"
] |
24,910,628 | https://en.wikipedia.org/wiki/Post%20in%20ground | A post in ground construction, also called earthfast or hole-set posts, is a type of construction in which vertical, roof-bearing timbers, called posts, are in direct contact with the ground. They may be placed into excavated postholes, driven into the ground, or on sills which are set on the ground without a foundation. Earthfast construction is common from the Neolithic period to the present and is used worldwide. Post-in-the-ground construction is sometimes called an "impermanent" form, used for houses which are expected to last a decade or two before a better quality structure can be built.
Post in ground construction can also include sill on grade, wood-lined cellars, and pit houses. Most pre-historic and medieval wooden dwellings worldwide were built post in ground.
History
This type of construction is often believed to be an intermediate form between palisade construction and stave construction. Because the postholes are easily detected in archaeological surveys, post in ground buildings can be distinguished from the other two types.
Post in ground was one of the timber construction methods used for French colonial structures in New France; it was called poteaux-en-terre.
The Japanese also used a type of earthfast construction until the eighteenth century, which they call Hottate-bashira (literally "embedded pillars").
The Dogon people in Africa use post in ground construction for their toguna, community gathering places typically located in the center of villages for official and informal meetings.
Poteaux-en-terre
In the historical region of New France in North America, poteaux-en-terre was a historic style of earthfast timber framing. This method is similar to poteaux-sur-sol, but the boulin (hewn posts) are planted in the ground rather than landing on a sill plate. The spaces between the boulin are filled with bousillage (reinforced mud) or pierrotage (stones and mud). Surviving examples of both types of structures can be found at Ste. Genevieve, Missouri.
Gallery of poteaux-en-terre
See also
French colonization of the Americas
Old Spanish Fort (Pascagoula, Mississippi), the La Pointe-Krebs House
Pit-house
Post church
Ste. Genevieve, Missouri
Stilt house
References
External links
Earthfast Architecture in Early Maine
Earthfast Architecture at the Association for the Preservation of Virginia Antiquities
Building engineering
History of construction
New France
French-Canadian culture in the United States
French-American culture in Missouri
Missouri culture
French colonial architecture
Foundations (buildings and structures)
ja:掘立柱 | Post in ground | [
"Engineering"
] | 522 | [
"Structural engineering",
"Building engineering",
"History of construction",
"Foundations (buildings and structures)",
"Construction",
"Civil engineering",
"Architecture"
] |
24,912,841 | https://en.wikipedia.org/wiki/Granulation | Granulation is the process of forming grains or granules from a powdery or solid substance, producing a granular material. It is applied in several technological processes in the chemical and pharmaceutical industries. Typically, granulation involves agglomeration of fine particles into larger granules, generally between 0.2 and 4.0 mm in size depending on their subsequent use. Less commonly, it involves shredding or grinding solid material into finer granules or pellets.
From powder
The granulation process combines one or more powder particles and forms a granule that will allow tableting to be within required limits. It is the process of collecting particles together by creating bonds between them. Bonds are formed by compression or by using a binding agent. Granulation is extensively used in the pharmaceutical industry for the manufacture of tablets and pellets. In this way a predictable and repeatable process is possible, and granules of consistent quality can be produced.
Granulation is carried out for various reasons, one of which is to prevent the segregation of the constituents of powder mix. Segregation is due to differences in the size or density of the components of the mix. Normally, the smaller and/or denser particles tend to concentrate at the base of the container with the larger and/or less dense ones on the top. An ideal granulation will contain all the constituents of the mix in the correct proportion in each granule and segregation of granules will not occur.
Many powders, because of their small size, irregular shape or surface characteristics, are cohesive and do not flow well. Granules produced from such a cohesive system will be larger and more isodiametric (roughly spherical), both factors contributing to improved flow properties.
Some powders are difficult to compact even if a readily compactable adhesive is included in the mix, but granules of the same powders are often more easily compacted. This is associated with the distribution of the adhesive within the granule and is a function of the method employed to produce the granule.
For example, if one were to make tablets from granulated sugar versus powdered sugar, powdered sugar would be difficult to compress into a tablet and granulated sugar would be easy to compress. Powdered sugar’s small particles have poor flow and compression characteristics. These small particles would have to be compressed very slowly for a long period of time to make a worthwhile tablet. Unless the powdered sugar is granulated, it could not efficiently be made into a tablet that has good tablet characteristics such as uniform content or consistent hardness.
Two types of granulation technologies are employed: wet granulation and dry granulation.
Wet granulation
In wet granulation, granules are formed by the addition of a granulation liquid onto a powder bed which is under the influence of an impeller (in a high-shear granulator), screws (in a twin screw granulator) or air (in a fluidized bed granulator). The agitation resulting in the system along with the wetting of the components within the formulation results in the aggregation of the primary powder particles to produce wet granules. The granulation liquid (fluid) contains a solvent or carrier material which must be volatile so that it can be removed by drying, and depending on the intended application, be non-toxic. Typical liquids include water, ethanol and isopropanol either alone or in combination. The liquid solution can be either aqueous based or solvent-based. Aqueous solutions have the advantage of being safer to deal with than other solvents.
Water mixed into the powders can form bonds between powder particles that are strong enough to lock them together. However, once the water dries, the powders may fall apart, so water alone may not be strong enough to create and hold a bond. The binding of the particles together with the use of liquid is a combination of capillary and clinging forces until more permanent bonding is established.
Several states of liquid saturation can exist in granules. In the pendular state, the particles are held together by liquid bridges at their contact points. The capillary state occurs once the granule is fully saturated: all voids are filled with liquid and surface liquid is drawn back into the pores. The funicular state is intermediate between the pendular and capillary states, in which the voids are not fully saturated with liquid. The liquid helps bind the particles as they are agitated, for instance in a tumbling drum. In such instances, a liquid solution that includes a binder (pharmaceutical glue) is required. Povidone, a polyvinylpyrrolidone (PVP), is one of the most commonly used pharmaceutical binders. PVP is dissolved in water or solvent and added to the process. When PVP and a solvent/water are mixed with powders, PVP forms a bond with the powders during the process, and the solvent/water evaporates (dries). Once the solvent/water has dried and the powders have formed a more densely held mass, the granulation is milled. This process results in the formation of granules.
The process can be very simple or very complex depending on the characteristics of the powders, the final objective of tablet making, and the equipment that is available. In the traditional wet granulation method the wet mass is forced through a sieve to produce wet granules which are subsequently dried.
Wet granulation is traditionally a batch process in pharmaceutical production; however, batch wet granulation is expected to be replaced more and more by continuous wet granulation in the pharmaceutical industry. The shift from batch to continuous technologies has been recommended by the Food and Drug Administration. Continuous wet granulation can be carried out on a twin-screw extruder into which solid materials and water can be fed at various points. In the extruder the materials are mixed and granulated by the intermeshing of the screws, especially at the kneading elements.
Dry granulation
The dry granulation process is used to form granules without a liquid solution because the product granulated may be sensitive to moisture and heat. Forming granules without moisture requires compacting and densifying the powders. In this process the primary powder particles are aggregated under high pressure. A swaying granulator or a roll compactor can be used for the dry granulation.
Dry granulation can be conducted under two processes; either a large tablet (slug) is produced in a heavy duty tabletting press or the powder is squeezed between two counter-rotating rollers to produce a continuous sheet or ribbon of material.
When a tablet press is used for dry granulation, the powders may not possess enough natural flow to feed the product uniformly into the die cavity, resulting in varying density. The roller compactor (granulator-compactor) uses an auger-feed system that will consistently deliver powder uniformly between two pressure rollers. The powders are compacted into a ribbon or small pellets between these rollers and milled through a low-shear mill. When the product is compacted properly, then it can be passed through a mill and final blend before tablet compression.
Typical roller compaction processes consist of the following steps: convey the powdered material to the compaction area, normally with a screw feeder; compact the powder between two counter-rotating rolls under applied force; and mill the resulting compact to the desired particle size distribution. Roller-compacted particles are typically dense, with sharp-edged profiles.
From solids
In plastic recycling, granulation is the process of shredding plastic objects to be recycled into flakes or pellets suitable for later reuse in plastics extrusion. In the first stage, the plastic objects are fed to an electric motor-powered cutting chamber, which continually cuts the material using one of several types of cutting systems: some use a scissor-like cutting motion, others chevron (V-type) rotors, helical rotors or fly knives. The material is ground into smaller and smaller flakes until they become fine enough to fall through a mesh screen. In wet-granulation lines, water is continually sprayed into the cutting chamber to remove debris and impurities and to act as a lubricant for the steel blades; in dry-granulation lines, water is not present, but such technology generally produces output of lower quality than the wet technology. While the process is relatively simple, it must be carefully parametrized, as the high temperatures resulting from friction can damage the material and affect its plasticity. Regular maintenance and sharpening of the blades are essential, as is close monitoring of the process due to potential clogging and jamming.
In many cases, granulation may be the only step required before the plastics can be reused for manufacturing new products. In other cases, the new or recycled plastic material must be remade into pellets: the material is melted and extruded into thin rods, which are then cooled in a water tank and finely chopped into small cylindrical pellets.
Fertilizers
Granulation is significant for fertilizers, since granulated product is more economical to ship and store, as well as easier to apply.
See also
Aggregate (composite)
Grain size
Particle size
Particle-size distribution
Granulation tissue
References
Sources
Handbook of Pharmaceutical Granulation - 3rd Edition, Editor - Dilip M. Parikh
Pharmaceutics - The science of dosage form design - M. E. Aulton 2nd EDT
Pharmaceutical dosage forms and drug delivery system - Loyd V. Allen, Nicholas G. Popovich & Howard C. Ansel 8th EDT
Lachman leon, Industrial pharmacy, special Indian edition, CBS publishers
External links
The Granulation Process 101: Basic Technologies for Tablet Making by Michael D. Tousey
Granularity of materials
Plastic recycling
Drug manufacturing | Granulation | [
"Physics",
"Chemistry"
] | 2,011 | [
"Particle technology",
"Materials",
"Granularity of materials",
"Matter"
] |
24,917,031 | https://en.wikipedia.org/wiki/Williams%E2%80%93Landel%E2%80%93Ferry%20equation | The Williams–Landel–Ferry equation (or WLF equation) is an empirical equation associated with time–temperature superposition.
The WLF equation has the form
$$\log_{10} a_T = \frac{-C_1 (T - T_r)}{C_2 + (T - T_r)}$$
where $\log_{10} a_T$ is the decadic logarithm of the WLF shift factor, $T$ is the temperature, $T_r$ is a reference temperature chosen to construct the compliance master curve and $C_1$, $C_2$ are empirical constants adjusted to fit the values of the superposition parameter $a_T$.
The equation can be used to fit (regress) discrete values of the shift factor aT vs. temperature. Here, values of shift factor aT are obtained by horizontal shift log(aT) of creep compliance data plotted vs. time or frequency in double logarithmic scale so that a data set obtained experimentally at temperature T superposes with the data set at temperature Tr. A minimum of three values of aT are needed to obtain C1, C2, and typically more than three are used.
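As a concrete illustration of this regression step, the sketch below fits $C_1$ and $C_2$ with scipy.optimize.curve_fit. The temperatures and shift-factor values are invented (generated from the "universal" constants discussed later) purely to show the mechanics, and Tr is assumed to be 373 K.

```python
import numpy as np
from scipy.optimize import curve_fit

Tr = 373.0  # reference temperature, K (assumed)
T = np.array([363.0, 373.0, 383.0, 393.0, 403.0])    # test temperatures
log_aT = np.array([4.19, 0.0, -2.83, -4.87, -6.41])  # measured shift factors

def wlf(T, C1, C2):
    # log10(aT) = -C1 (T - Tr) / (C2 + (T - Tr))
    return -C1 * (T - Tr) / (C2 + (T - Tr))

(C1, C2), _ = curve_fit(wlf, T, log_aT, p0=(17.0, 50.0))
print(f"C1 = {C1:.2f}, C2 = {C2:.1f} K")

# Estimate the shift factor at a temperature that was not tested:
print("log10(aT) at 398 K:", wlf(398.0, C1, C2))
```

The fitted constants then let the master curve be applied at temperatures other than those tested, as described above.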
Once constructed, the WLF equation allows for the estimation of the temperature shift factor for temperatures other than those for which the material was tested. In this way, the master curve can be applied to other temperatures. However, when the constants are obtained with data at temperatures above the glass transition temperature (Tg), the WLF equation is applicable to temperatures at or above Tg only; the constants are positive and represent Arrhenius behavior. Extrapolation to temperatures below Tg is erroneous. When the constants are obtained with data at temperatures below Tg, negative values of C1, C2 are obtained, which are not applicable above Tg and do not represent Arrhenius behavior. Therefore, the constants obtained above Tg are not useful for predicting the response of the polymer for structural applications, which necessarily must operate at temperatures below Tg.
The WLF equation is a consequence of time–temperature superposition (TTSP), which mathematically is an application of Boltzmann's superposition principle. It is TTSP, not WLF, that allows the assembly of a compliance master curve that spans more time, or frequency, than afforded by the time available for experimentation or the frequency range of the instrumentation, such as a dynamic mechanical analyzer (DMA).
While the time span of a TTSP master curve is broad, according to Struik it is valid only if the data sets did not suffer from ageing effects during the test time. Even then, the master curve represents a hypothetical material that does not age, so effective time theory needs to be used to obtain useful predictions for long-term behavior.
Having data above Tg, it is possible to predict the behavior (compliance, storage modulus, etc.) of viscoelastic materials for temperatures T > Tg, and/or for times/frequencies longer/slower than those available for experimentation. With the master curve and the associated WLF equation it is possible to predict the mechanical properties of the polymer outside the time scale of the machine (typically to Hz), thus extrapolating the results of multi-frequency analysis to a broader range than the measurement range of the machine.
Predicting the Effect of Temperature on Viscosity by the WLF Equation
The Williams-Landel-Ferry model, or WLF for short, is usually used for polymer melts or other fluids that have a glass transition temperature.
The model is:
$$\eta(T) = \eta_0 \, 10^{\frac{-C_1 (T - T_r)}{C_2 + (T - T_r)}}$$
where $T$ is the temperature and $\eta_0$, $C_1$, $C_2$, $T_r$ are empirical parameters (only three of them are independent from each other).
If one selects the reference temperature $T_r$ based on the glass transition temperature, then the parameters $C_1$, $C_2$ become very similar for a wide class of polymers. Typically, if $T_r$ is set to match the glass transition temperature $T_g$, we get $C_1 \approx 17.44$ and $C_2 \approx 51.6$ K. Van Krevelen recommends choosing $T_r = T_g + 43$ K; then $C_1 = 8.86$ and $C_2 = 101.6$ K.
Using such universal parameters allows one to estimate the temperature dependence of a polymer's viscosity from a measurement at a single temperature.
In reality the universal parameters are not that universal, and it is much better to fit the WLF parameters to experimental data within the temperature range of interest.
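A minimal sketch of that first-guess use of the universal constants follows; the reference viscosity, Tg and the example numbers are hypothetical.

```python
def wlf_viscosity(eta_ref, T, Tg, C1=17.44, C2=51.6):
    """Estimate viscosity at T from a single value eta_ref at Tr = Tg,
    using the 'universal' WLF constants (a rough guess only; valid for
    T above roughly Tg - C2)."""
    dT = T - Tg
    return eta_ref * 10.0 ** (-C1 * dT / (C2 + dT))

# One known point: eta ~ 1e12 Pa*s at Tg = 373 K (a typical magnitude near Tg)
print(wlf_viscosity(1e12, 393.0, 373.0))  # estimated viscosity 20 K above Tg
```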
Further reading
Williams-Landel-Ferry model
Time–temperature superposition
Viscoelasticity
References
Polymers | Williams–Landel–Ferry equation | [
"Chemistry",
"Materials_science"
] | 849 | [
"Polymers",
"Polymer chemistry"
] |
24,917,912 | https://en.wikipedia.org/wiki/Wine%20to%20Water | Wine To Water is a non-profit organization concerned with clean water distribution and sanitation training. The organization was founded by Doc Hendley in 2007.
History
The idea for Wine To Water was born with the first fundraiser, held in Raleigh, North Carolina in early 2004. The founder, Doc Hendley, was a bartender in Raleigh and had a strong desire to utilize the bar and nightclub industry as a way to bring about positive change in the world. After becoming aware of the world's water crisis, Hendley held fundraisers at wine tastings and bars in the Raleigh area. The funds from the events would be used to implement clean water projects around the world. The projects financed include digging and repairing wells, supplying areas with filtration systems and storage containers, and educating locals on how to maintain fresh water supplies.
Nearly one billion people worldwide lack access to clean water, and roughly 3.5 million people die each year because of water-related issues. Almost half of these deaths are attributed to diarrhea. Wine To Water has expanded greatly in recent years, and is currently working on water projects in Sudan, India, Cambodia, Uganda, Ethiopia, Peru, and Kenya.
Wine To Water is based in Boone, North Carolina, and has clean water projects running in multiple international locations. Wine To Water has received a great deal of media attention on the local, national, and international levels. Doc Hendley was selected by CNN as a top ten finalist for the 2012 CNN Heroes. As of 2009, Wine To Water had implemented sustainable drinking water initiatives for over 25,000 individuals. By October 2014, Wine To Water had expanded to include projects in 18 countries on four continents, supplying clean water to over 300,000 individuals.
Projects
Sudan
Wine To Water's work in Sudan includes rehabilitating wells, delivering water to war-torn regions, and installing a water system constructed for an orphanage located in the capital. Hendley and his organization have provided relief for locations in Sudan that have been deemed unsafe for humanitarian work.
India
A leper colony located on the outskirts of New Delhi lacked access to clean water. WTW has installed a new running water system for this colony.
Cambodia
Cambodia is a country surrounded by water, yet most of the citizens lack access to clean water. Roughly of the deaths in Cambodia can be attributed to a lack of clean water.
One focus of Wine To Water in Phnom Penh is to supply Bio Sand filters to residences and communities that do not have access to clean water. The primary focus, however, is the drilling of new water wells for communities. As of October 2009, WTW had drilled over 70 wells for the people of Cambodia. Using local supplies and hand pumps, WTW is able to cut the cost of drilling a well to one fifth the original cost.
Uganda
Work in Uganda is focused on several areas: first, the distribution of Bio-Sand filters; second, the training of local people to manufacture their own filters; and lastly, the formation of training centers around the country to educate citizens on how to clean water and the importance of this process.
Peru
Wine To Water's work in Peru is currently centered in Trujillo. Water projects underway as of 2009 include hand digging wells, as well as supplying households, orphanages and daycares with a water pumping and storage system to ensure clean water for the community. Individual filtration systems are being distributed in additional areas exposed to contaminated water sources.
Haiti
In response to the 2010 Haiti earthquake, Wine To Water partnered with Filter Pure to distribute 500 ceramic water filters. Each filter provides clean water for a family of 10 for up to five years. Along with Filter Pure, Wine To Water has also begun to build a Haitian-run ceramic filter factory to ensure clean water will continue to reach those who need it most.
References
External links
Wine to Water discussed in Congress
Articles.cnn.com
CNN
Doc Hendley CNN Hero
Charities based in North Carolina
Development charities based in the United States
Organizations established in 2003
2003 establishments in North Carolina
Water supply | Wine to Water | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 820 | [
"Hydrology",
"Water supply",
"Environmental engineering"
] |
43,283,728 | https://en.wikipedia.org/wiki/The%20Tor%20Project | The Tor Project, Inc. is a 501(c)(3) research-education nonprofit organization based in Winchester, Massachusetts. It was founded by computer scientists Roger Dingledine, Nick Mathewson, and five others. The Tor Project is primarily responsible for maintaining software for the Tor anonymity network.
History
The Tor Project, Inc. was founded on December 22, 2006 by computer scientists Roger Dingledine, Nick Mathewson and five others. The Electronic Frontier Foundation (EFF) acted as the Tor Project's fiscal sponsor in its early years, and early financial supporters of the Tor Project included the U.S. International Broadcasting Bureau, Internews, Human Rights Watch, the University of Cambridge, Google, and Netherlands-based Stichting NLnet.
In October 2014, the Tor Project hired the public relations firm Thomson Communications in order to improve its public image (particularly regarding the terms "Dark Net" and "hidden services") and to educate journalists about the technical aspects of Tor.
In May 2015, the Tor Project ended the Tor Cloud Service.
In December 2015, the Tor Project announced that it had hired Shari Steele, former executive director of the Electronic Frontier Foundation, as its new executive director. Roger Dingledine, who had been acting as interim executive director since May 2015, remained at the Tor Project as a director and board member. Later that month, the Tor Project announced that the Open Technology Fund would be sponsoring a bug bounty program that was coordinated by HackerOne. The program was initially invite-only and focuses on finding vulnerabilities that are specific to the Tor Project's applications.
On May 25, 2016, Tor Project employee Jacob Appelbaum stepped down from his position; this was announced on June 2 in a two-line statement by Tor. Over the following days, allegations of sexual mistreatment were made public by several people.
On July 13, 2016, the complete board of the Tor Project – Meredith Hoban Dunn, Ian Goldberg, Julius Mittenzwei, Rabbi Rob Thomas, Wendy Seltzer, Roger Dingledine and Nick Mathewson – was replaced with Matt Blaze, Cindy Cohn, Gabriella Coleman, Linus Nordberg, Megan Price and Bruce Schneier. A new anti-harassment policy has been approved by the new board, as well as a conflicts of interest policy, procedures for submitting complaints, and an internal complaint review process. The affair continues to be controversial, with considerable dissent within the Tor community.
In 2020, due to the COVID-19 pandemic, the Tor project's core team let go of 13 employees, leaving a working staff of 22 people.
In 2023, the Tails Project approached the Tor Project to merge operations. The merger was completed on September 26, 2024, stating that, "By joining forces, the Tails team can now focus on their core mission of maintaining and improving Tails OS, exploring more and complementary use cases while benefiting from the larger organizational structure of The Tor Project."
Funding
Some 80% of the Tor Project's $2 million annual budget came from the United States government, with the U.S. State Department, the Broadcasting Board of Governors, and the National Science Foundation as major contributors, "to aid democracy advocates in authoritarian states". The Swedish government and other organizations provided the other 20%, including NGOs and thousands of individual sponsors. Dingledine said that the United States Department of Defense funds were more similar to a research grant than a procurement contract. Tor executive director Andrew Lewman said that even though it accepts funds from the U.S. federal government, the Tor service did not collaborate with the NSA to reveal identities of users.
In June 2016, the Tor Project received an award from Mozilla's Open Source Support program (MOSS). The award was "to significantly enhance the Tor network's metrics infrastructure so that the performance and stability of the network can be monitored and improvements made as appropriate."
Tools
Metrics Portal
Analytics for the Tor network, including graphs of its available bandwidth and estimated user base; a resource for researchers interested in detailed statistics about Tor.
Nyx
a terminal (command line) application for monitoring and configuring Tor, intended for command-line enthusiasts and ssh connections. It functions much like top does for system usage, providing real-time information on Tor's resource utilization and state.
Onionoo
Web-based protocol to learn about currently running Tor relays and bridges (see the sketch after this list).
OnionShare
An open source tool that allows users to securely and anonymously share a file of any size.
Open Observatory of Network Interference (OONI)
a global observation network, monitoring network censorship, which aims to collect high-quality data using open methodologies, using Free and Open Source Software (FL/OSS) to share observations and data about the various types, methods, and amounts of network tampering in the world.
Orbot
Tor for Android and iOS devices, developed and maintained in collaboration with the Guardian Project.
Orlib
a library for use by any Android application to route Internet traffic through Orbot/Tor.
Pluggable Transports (PT)
helps circumvent censorship. Transforms the Tor traffic flow between the client and the bridge. This way, censors who monitor traffic between the client and the bridge will see innocent-looking transformed traffic instead of the actual Tor traffic.
Relay Search
Site providing an overview of the Tor network.
Shadow
a discrete-event network simulator that runs the real Tor software as a plug-in. Shadow is open-source software that enables accurate, efficient, controlled, and repeatable Tor experimentation.
Stem
Python library for writing scripts and applications that interact with Tor (see the sketch after this list).
Tails (The Amnesic Incognito Live System)
a live CD/USB distribution pre-configured so that everything is safely routed through Tor and leaves no trace on the local system.
Tor
free software and an open network that helps a user defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security. The organization has also implemented the software in Rust named Arti.
Tor Browser
a customization of Mozilla Firefox which uses a Tor circuit for browsing anonymously and with other features consistent with the Tor mission.
Tor Phone
A phone that routes its network traffic through the Tor network. Initially based on a CopperheadOS custom ROM prototype, using Tor with Orbot and Tor Browser are supported by custom Android operating systems CalyxOS and DivestOS. GrapheneOS encourages the use of Orbot VPN, but doesn't recommend the Tor Browser because of security concerns.
TorBirdy
Extension for Thunderbird and related *bird forks to route connections through the Tor network.
txtorcon
Python and Twisted event-based implementation of the Tor control protocol. Unit-tests, state and configuration abstractions, documentation. It is available on PyPI and in Debian.
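To make two of the entries above concrete, here is a minimal Python sketch of querying Onionoo and of driving a local tor daemon with Stem. The Onionoo query follows the public protocol served at onionoo.torproject.org; the Stem part assumes a local tor instance with ControlPort 9051 enabled in torrc. The chosen fields, limits and port are illustrative.

```python
import requests

# Onionoo: ask the public instance for three currently running relays.
resp = requests.get(
    "https://onionoo.torproject.org/details",
    params={"type": "relay", "running": "true", "limit": 3},
    timeout=10,
)
resp.raise_for_status()
for relay in resp.json()["relays"]:
    print(relay["nickname"], relay.get("advertised_bandwidth"))
```

```python
from stem.control import Controller

# Stem: connect to the local control port and read basic information.
with Controller.from_port(port=9051) as controller:
    controller.authenticate()  # cookie or password auth, per torrc
    print("tor version:", controller.get_version())
    print("bytes read:", controller.get_info("traffic/read"))
```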
Recognition
In March 2011, the Tor Project received the Free Software Foundation's 2010 Award for Projects of Social Benefit. The citation read, "Using free software, Tor has enabled roughly 36 million people around the world to experience freedom of access and expression on the Internet while keeping them in control of their privacy and anonymity. Its network has proved pivotal in dissident movements in both Iran and more recently Egypt."
In September 2012, the Tor Project received the 2012 EFF Pioneer Award, along with Jérémie Zimmermann and Andrew Huang.
In November 2012, Foreign Policy magazine named Dingledine, Mathewson, and Syverson among its Top 100 Global Thinkers "for making the web safe for whistleblowers".
In 2014, Roger Dingledine, Nick Mathewson and Paul Syverson received the USENIX Test of Time Award for their paper titled "Tor: The Second-Generation Onion Router", which was published in the Proceedings of the 13th USENIX Security Symposium, August 2004.
In 2021, the Tor Project was awarded the Levchin Prize for real-world cryptography.
See also
Privacy software
References
External links
2006 establishments in Massachusetts
501(c)(3) organizations
Computer science organizations
Computer security organizations
Internet privacy organizations
Organizations based in Cambridge, Massachusetts
Organizations based in Seattle
Non-profit organizations based in Massachusetts
Scientific organizations established in 2006
Science and technology in Massachusetts
Tor onion services | The Tor Project | [
"Technology"
] | 1,699 | [
"Computer science",
"Computer science organizations"
] |
43,286,284 | https://en.wikipedia.org/wiki/Kaolin%20clotting%20time | Kaolin clotting time (KCT) is a sensitive test to detect lupus anticoagulants. There is evidence that suggests it is the most sensitive test for detecting lupus anticoagulants. It can also detect factor VIII inhibitors but is sensitive to unfractionated heparin as well.
The KCT on whole blood is known as the activated clotting time (ACT) and is widely used in various instruments during procedures such as cardiac bypass surgery to monitor heparin.
History
KCT was first described by Dr. Joel Margolis in 1958. Later on, it was found to be very sensitive to lupus anticoagulants but was only reliable when test plasmas were mixed with normal plasma in various proportions. It became the preferred method for lupus anticoagulant testing after Dr. Wilhelm Lubbe showed it to be a good marker for recurrent fetal loss.
Principle
KCT is similar to the activated partial thromboplastin time test, except it does not use exogenous phospholipid. Thus, a confirmatory test that uses excess phospholipid is needed to validate the presence of lupus anticoagulants. Otherwise, diluting the test plasma in normal plasma before testing provides characteristic mixing patterns.
Kaolin is the surface activator, and the test also requires small amounts of cell fragments and plasma lipids to provide the phospholipid surface required for coagulation. Therefore, the sample quality is important for the validity of the screening test.
Method
The test combines a test plasma with kaolin, and after a brief pre-incubation and the addition of calcium chloride, the time to clot (in seconds) is measured. Mixes of patient plasma with normal plasma are recommended for testing.
Interpretation
A KCT test/control ratio of 1.2 or greater indicates that a defect is present. If the test/control ratio is between 1.1 and 1.2, the result is equivocal.
A good way of expressing the result using mixes is to calculate the Rosner index. If A is the KCT of normal plasma, B that of the 1:1 mix and C that of the patient plasma, then the Rosner index is 100 × (B − A) / C. Values above 15 indicate a positive result, but in most cases labs set their own cutoff values.
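The index is simple enough to compute directly; a small sketch follows, with hypothetical clotting times in seconds.

```python
def rosner_index(kct_normal, kct_mix, kct_patient):
    """Rosner index: 100 * (B - A) / C, where A is the KCT of normal
    plasma, B that of the 1:1 mix and C that of the patient plasma."""
    return 100.0 * (kct_mix - kct_normal) / kct_patient

idx = rosner_index(kct_normal=70.0, kct_mix=95.0, kct_patient=120.0)
print(round(idx, 1), "positive" if idx > 15 else "negative")
```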
If the KCT is less than 60 seconds, this suggests that the test plasma is contaminated with platelet fragments; therefore, the test is not valid.
See also
Dilute Russell's viper venom time
Partial thromboplastin time
Activated clotting time
References
Blood tests | Kaolin clotting time | [
"Chemistry"
] | 559 | [
"Blood tests",
"Chemical pathology"
] |
43,287,600 | https://en.wikipedia.org/wiki/K%C3%B4di%20Husimi | Kōji Husimi (June 29, 1909 – May 8, 2008) was a Japanese theoretical physicist who served as the president of the Science Council of Japan. Husimi trees in graph theory, the Husimi Q representation in quantum mechanics, and Husimi's theorem in the mathematics of paper folding are named after him.
Education and career
Husimi studied at the University of Tokyo, graduating in 1933. He spent a year there as an assistant, and then moved to Osaka University in 1934, where he soon began working with Seishi Kikuchi. At Osaka, he became Dean of the Faculty of Science. He moved to Nagoya University in 1961, and directed the plasma institute there. He retired in 1973, and became a professor emeritus of both Nagoya and Osaka.
Contributions
Physics
A 1940 paper by Husimi introduced the Husimi Q representation in quantum mechanics. Husimi also gave the name to the kagome lattice, frequently used in statistical mechanics.
Graph theory
In the mathematical area of graph theory, the name "Husimi tree" has come to refer to two different kinds of graphs: cactus graphs (the graphs in which each edge belongs to at most one cycle) and block graphs (the graphs in which, for every cycle, all diagonals of the cycle are edges). Husimi studied cactus graphs in a 1950 paper, and the name "Husimi trees" was given to these graphs in a later paper by Frank Harary and George Eugene Uhlenbeck. Due to an error by later researchers, the name came to be applied to block graphs as well, causing it to become ambiguous and fall into disuse.
Pacifism and world affairs
Husimi was an early member of the Science Council of Japan, joining it in 1949, and it was largely through his efforts that the Science Council in 1954 issued a statement proposing principles for the peaceful use of nuclear power and opposing the continued existence of nuclear weapons. This statement, in turn, led to the Japanese law outlawing military uses of nuclear technology.
Later, he served as president of the Science Council of Japan from 1977 to 1982. He was also a frequent participant in the Pugwash Conferences on Science and World Affairs and a leader of the Committee of Seven for World Peace.
Recreational mathematics
Husimi's recreational interests included origami; he designed several variations of the traditional orizuru (paper crane), folded on paper shaped as a rhombus instead of the usual square, and studied the properties of the bird base that allow it to be varied within a continuous family of deformations.
With his wife, Mitsue Husimi, he wrote a book on the mathematics of origami, which included a theorem characterizing the folding patterns with four folds meeting at a single vertex that may be folded flat. The generalization of this theorem to arbitrary numbers of folds at a single vertex is sometimes called Husimi's theorem.
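The single-vertex flat-foldability condition referred to here is usually stated as: the sector angles between consecutive creases must have a vanishing alternating sum (equivalently, each alternating set of angles sums to 180°). A minimal sketch of that check, assuming the input lists the sector angles in order around the vertex:

```python
import math

def folds_flat(angles_deg):
    """Alternating-angle test for flat-foldability at a single vertex:
    an even number of sector angles summing to 360 degrees, with a
    vanishing alternating sum."""
    if len(angles_deg) % 2 != 0:
        return False
    if not math.isclose(sum(angles_deg), 360.0):
        return False
    alternating = sum(a if i % 2 == 0 else -a for i, a in enumerate(angles_deg))
    return math.isclose(alternating, 0.0, abs_tol=1e-9)

print(folds_flat([90, 90, 90, 90]))   # True: flat-foldable four-crease vertex
print(folds_flat([100, 80, 90, 90]))  # False: alternating sums are 190 and 170
```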
References
Further reading
1909 births
2008 deaths
Japanese physicists
Graph theorists
Quantum physicists
University of Tokyo alumni
Academic staff of Osaka University
Academic staff of Nagoya University
Presidents of the Physical Society of Japan | Kôdi Husimi | [
"Physics",
"Mathematics"
] | 640 | [
"Quantum physicists",
"Quantum mechanics",
"Graph theory",
"Mathematical relations",
"Graph theorists"
] |
43,291,417 | https://en.wikipedia.org/wiki/Dry%20low%20emission | Dry low emission (abbreviated DLE) is a technology that reduces the NOx emissions in the exhaust of gas-fired turbines.
The amount of NOx produced depends on the combustion temperature: when combustion takes place at a lower temperature, NOx emissions are reduced.
Gas turbines with DLE technology were developed to achieve lower emissions without using water or steam to reduce the combustion temperature, as in wet low emission (WLE) technology.
WLE technology demands treatment of large amounts of water, is heavy, takes more space and can be difficult to install offshore.
A DLE combustor uses the principle of lean premixed combustion and is similar to the single annular combustor (SAC), with some exceptions.
A DLE combustor takes up more space than a SAC, and if the turbine is replaced it cannot be connected directly to existing equipment without considerable changes in the positioning of that equipment.
The SAC has one single concentric ring, whereas the DLE combustor has two or three rings with premixers, depending on the gas turbine type.
DLE technology demands an advanced control system managing a large number of burners.
DLE results in lower NOx emissions because the fuel is premixed with excess air before ignition, so combustion takes place at a lower temperature.
Background for the technology
In the mid-1970s, increased focus on environmental issues led to increased research on new and better gas turbines with water- and steam-cooling methods.
By 1980 the best technology could reduce NOx emissions to 42 ppm, and this was later reduced to 25 ppm.
In the 1990s ammonia injection and catalysts were tested, while in the late 1980s the turbine producers had started to develop dry low emission (DLE) technology to get around methods that demanded water or steam injection.
During the next ten years the DLE technology was developed and installed in many places, reducing NOx emissions to below 25 ppm.
It is difficult to achieve less than 9 ppm NOx emissions with DLE turbines.
To achieve a reduction from 25 ppm to 9 ppm more than 6 percent air must pass through the premixer.
Newer generations of DLE burners have an extra injection leading to better control.
Additional systems like "selective catalytic reduction" (SCR) are necessary to achieve emissions lower than 2.5 ppm.
Technologies using water or steam (wet low emission, WLE) can achieve approximately the same level of NOx emissions (25–42 ppm) by lowering the combustion temperature.
Use of DLE-technology in the world
This is not a complete list, but gives examples of where and when DLE technology has been implemented or is planned to be implemented.
In Europe
DLE turbines were introduced offshore in Norway in 1998.
All gas turbines installed offshore in Norway after year 2000 which uses only gas as fuel are DLE-turbines.
The Kårstø facilities were planned with DLE technology.
DLE technology is used by Statoil ASA, Hammerfest LNG.
The Norwegian Storting has decided that the Gina Krog platform shall not be equipped with DLE technology, because all of Utsirahøyden will have electrical power supplied from the mainland.
30 April 2012 a gas turbine generator with DLE technology was opened in Waidhaus in Bavaria in Germany.
In North America
In Alberta, Canada, there were plans in 2003 to use DLE technology in the power supply.
In Oceania
DLE is planned to be used in Australia.
In Asia
The Malampaya field in the Republic of the Philippines was awarded in August 1998; construction of the topside began in June 1999 and was completed on 28 March 2001.
The topside was equipped with the world's first RB211 with DLE technology.
References
Petroleum technology | Dry low emission | [
"Chemistry",
"Engineering"
] | 778 | [
"Petroleum engineering",
"Petroleum technology"
] |
43,292,838 | https://en.wikipedia.org/wiki/Dengue%20vaccine | Dengue vaccine is a vaccine used to prevent dengue fever in humans. Development of dengue vaccines began in the 1920s but was hindered by the need to create immunity against all four dengue serotypes. As of 2023, there are two commercially available vaccines, sold under the brand names Dengvaxia and Qdenga.
Dengvaxia is only recommended for those who have previously had dengue fever, or for populations in which most people have been previously infected, due to a phenomenon known as antibody-dependent enhancement. The value of Dengvaxia is limited by the fact that it may increase the risk of severe dengue in those who have not previously been infected. In 2017, more than 733,000 children and more than 50,000 adult volunteers were vaccinated with Dengvaxia regardless of serostatus, which led to a controversy. Qdenga is designated for people not previously infected.
There are other vaccine candidates in development including live attenuated, inactivated, DNA and subunit vaccines.
History
In December 2018, Dengvaxia was approved in the European Union.
In May 2019, Dengvaxia was approved in the United States as the first vaccine approved for the prevention of dengue disease caused by all dengue virus serotypes (1, 2, 3, and 4) in people ages nine through 16 who have laboratory-confirmed previous dengue infection and who live in endemic areas. Dengue is endemic in the US territories of American Samoa, Guam, Puerto Rico, and the US Virgin Islands.
The safety and effectiveness of the vaccine were determined in three randomized, placebo-controlled studies involving approximately 35,000 individuals in dengue-endemic areas, including Puerto Rico, Latin America, and the Asia Pacific region. The vaccine was determined to be approximately 76 percent effective in preventing symptomatic, laboratory-confirmed dengue disease in individuals 9 through 16 years of age who previously had laboratory-confirmed dengue disease.
In March 2021, the European Medicines Agency accepted the filing package for TAK-003 (Qdenga) intended for markets outside of the EU.
In August 2022, the Indonesian FDA approved Qdenga for use in individuals six years to 45 years of age and became the first authority in the world to approve Qdenga. Qdenga was approved in the European Union in December 2022.
CYD-TDV (Dengvaxia)
CYD-TDV, sold under the brand name Dengvaxia and made by Sanofi Pasteur, is a live attenuated tetravalent vaccine that is administered as three separate injections, with the initial dose followed by two additional shots given six and twelve months later. The US Food and Drug Administration (FDA) granted the application for Dengvaxia priority review designation and a tropical disease priority review voucher. The approval of Dengvaxia was granted to Sanofi Pasteur.
The vaccine has been approved in 19 countries and the European Union, but it is not approved in the US for use in individuals not previously infected by any dengue virus serotype or for whom this information is unknown.
Dengvaxia is a chimeric vaccine made using recombinant DNA technology by replacing the PrM (pre-membrane) and E (envelope) structural genes of the yellow fever attenuated 17D strain vaccine with those from the four dengue serotypes. Evidence indicates that CYD-TDV is partially effective in preventing infection, but may lead to a higher risk of severe disease in those who have not been previously infected and who then go on to contract the disease. It is not clear why the vaccinated seronegative population had more serious adverse outcomes. A plausible hypothesis is the phenomenon of antibody-dependent enhancement (ADE). American virologist Scott Halstead was one of the first researchers to identify the ADE phenomenon. Halstead and his colleague Phillip Russell proposed that the vaccine only be used after antibody testing, to check for prior dengue exposure and avoid vaccination of sero-negative individuals.
Common side effects include headache, pain at the site of injection, and general muscle pains. Severe side effects may include anaphylaxis. Use is not recommended in people with poor immune function. Safety of use during pregnancy is unclear. Dengvaxia is a weakened but live vaccine and works by triggering an immune response against four types of dengue virus.
Dengvaxia became commercially available in 2016 in 11 countries: Mexico, the Philippines, Indonesia, Brazil, El Salvador, Costa Rica, Paraguay, Guatemala, Peru, Thailand, and Singapore. In 2019 it was approved for medical use in the United States. It is on the World Health Organization's List of Essential Medicines.
In 2017, the manufacturer recommended that the vaccine only be used in people who have previously had a dengue infection, as outcomes may be worsened in those who have not been previously infected due to antibody-dependent enhancement. This led to a controversy in the Philippines where more than 733,000 children and more than 50,000 adult volunteers were vaccinated regardless of serostatus.
The World Health Organization (WHO) recommends that countries should consider vaccination with the dengue vaccine CYD-TDV only if the risk of severe dengue in seronegative individuals can be minimized either through pre-vaccination screening or recent documentation of high seroprevalence rates in the area (at least 80% by age nine years).
The WHO updated its recommendations regarding the use of Dengvaxia in 2018, based on long-term safety data stratified by serostatus released on 29 November 2017. Seronegative vaccine recipients have an excess risk of severe dengue compared to unvaccinated seronegative individuals: for every 13 hospitalizations prevented in seropositive vaccinees, there would be 1 excess hospitalization in seronegative vaccinees per 1,000 vaccinees. The WHO therefore recommends serological testing for past dengue infection.
The manufacturer's 2017 recommendation followed evidence that the vaccine may worsen subsequent infections in those without prior dengue exposure. The initial trial protocol did not require baseline blood samples before vaccination, so the increased risk of severe dengue in participants who had not been previously exposed was not understood at the outset. In November 2017, Sanofi acknowledged that some participants were put at risk of severe dengue if they had no prior exposure to the infection; subsequently, the Philippine government suspended the mass immunization program with the backing of the WHO, which began a review of the safety data.
Phase III trials in Latin America and Asia involved over 31,000 children between the ages of two and 14 years. In the first reports from the trials, vaccine efficacy was 56.5% in the Asian study and 64.7% in the Latin American study in patients who received at least one injection of the vaccine. Efficacy varied by serotype. In both trials vaccine reduced by about 80% the number of severe dengue cases. An analysis of both the Latin American and Asian studies at the 3rd year of follow-up showed that the efficacy of the vaccine was 65.6% in preventing hospitalization in children older than nine years of age, but considerably greater (81.9%) for seropositive children (indicating previous dengue infection) at baseline. The vaccination series consists of three injections at 0, 6 and 12 months.
The vaccine was approved in Mexico, the Philippines, and Brazil in December 2015, and in El Salvador, Costa Rica, Paraguay, Guatemala, Peru, Indonesia, Thailand, and Singapore in 2016. Under the brand name Dengvaxia, it is approved for use for those aged nine years of age and older and can prevent all four serotypes.
TAK-003 (Qdenga)
TAK-003 or DENVax, sold under the brand name Qdenga and made by Takeda, is a recombinant chimeric attenuated vaccine with DENV1, DENV3, and DENV4 components on a dengue virus type 2 (DENV2) backbone, originally developed at Mahidol University in Bangkok and subsequently funded by Inviragen (DENVax) and Takeda (TAK-003). Phase I and II trials were conducted in the United States, Colombia, Puerto Rico, Singapore and Thailand. The 18-month data, published in the journal Lancet Infectious Diseases, indicate that TAK-003 produced sustained antibody responses against all four virus strains, regardless of previous dengue exposure and dosing schedule.
Data from the phase III trial, which began in September 2016, show that TAK-003 was efficacious against symptomatic dengue. Unlike CYD-TDV, TAK-003 does not appear to lose efficacy in seronegative people or to put them at risk of harm. The data appear to show only moderate efficacy against dengue serotypes other than DENV2.
Qdenga received approval for use in the European Union in 2022 for people aged 4 and above, and is also approved in the United Kingdom, Brazil, Argentina, Indonesia, and Thailand. Takeda voluntarily withdrew their application for the vaccination's approval in the United States in July 2023 after the FDA sought further data from the firm, which the company stated could not be provided during the current review cycle.
In development
TV-003/005
TV-003/005 is a tetravalent admixture of monovalent vaccines, that was developed by NIAID, that were tested separately for safety and immunogenicity. The vaccine passed phase I trials and phase II studies in the US, Thailand, Bangladesh, India, and Brazil.
The National Institutes of Health has conducted phase I and phase II studies in over 1,000 participants in the US. It has also conducted human challenge studies while having conducted NHP model studies successfully.
NIH has licensed their technology for further development and commercial scale manufacturing to Panacea Biotec, Serum Institute of India, Instituto Butantan, Vabiotech, Merck, and Medigen.
In Brazil, phase III studies are being conducted by Instituto Butantan in collaboration with NIH. Panacea Biotec has conducted phase II clinical studies in India.
A company in Vietnam (Vabiotech) is conducting safety tests and developing a clinical trial plan. All four companies are involved in studies of a TetraVax-DV vaccine in conjunction with the US NIH.
TDENV PIV
TDENV PIV (tetravalent dengue virus purified inactivated vaccine) is undergoing phase I trials as part of a collaboration between GlaxoSmithKline (GSK) and the Walter Reed Army Institute of Research (WRAIR). A synergistic formulation with another live attenuated candidate vaccine (prime-boost strategy) is also being evaluated in a phase II study. In prime-boosting, one type of vaccine is followed by a boost with another type in an attempt to improve immunogenicity.
V180
Merck is studying recombinant subunit vaccines expressed in Drosophila cells. It has completed the phase I stage and found V180 formulations to be generally well tolerated.
DNA vaccines
In 2011, the Naval Medical Research Center attempted to develop a monovalent DNA plasmid vaccine, but early results showed it to be only moderately immunogenic.
Society and culture
Legal status
On 13 October 2022, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Qdenga, intended for prophylaxis against dengue disease. The applicant for this medicinal product is Takeda GmbH. The active substance of Qdenga is dengue tetravalent vaccine (live, attenuated), a viral vaccine containing live attenuated dengue viruses which replicate locally and elicit humoral and cellular immune responses against the four dengue virus serotypes. Qdenga was approved for medical use in the European Union in December 2022.
In February 2023, Qdenga was approved by the UK Medicines and Healthcare products Regulatory Agency (MHRA) for people aged four years and older.
In April 2023, the Administración Nacional de Medicamentos, Alimentos y Tecnología Médica (ANMAT) gave the green light to the use of the tetravalent vaccine TAK-003, known as Qdenga, developed by the Japanese laboratory Takeda Pharmaceutical Company, making it, to date, the only vaccine approved to combat dengue in Argentina. It has been used in the 2024 dengue epidemic.
In July 2023, Takeda withdrew its application for Qdenga before the FDA, citing the FDA's requirement for additional data not captured in the phase III studies.
Economics
In Indonesia, Dengvaxia costs about for the recommended three doses as of 2016. Indonesia was the first country to approve Qdenga, in late 2022.
Controversies
Philippines
The 2017 dengue vaccine controversy in the Philippines involved a vaccination program run by the Philippines Department of Health (DOH). The DOH vaccinated schoolchildren with Sanofi Pasteur's CYD-TDV (Dengvaxia) dengue vaccine. Some of the children who received the vaccine had never been infected by the dengue virus before. The program was stopped when Sanofi Pasteur advised the government that the vaccine could put previously uninfected people at a somewhat higher risk of a severe case of dengue fever. A political controversy erupted over whether the program was run with sufficient care and who should be held responsible for the alleged harm to the vaccinated children.
References
External links
Dengue fever
Vaccines
World Health Organization essential medicines (vaccines)
Wikipedia medicine articles ready to translate | Dengue vaccine | [
"Biology"
] | 2,814 | [
"Vaccination",
"Vaccines"
] |
29,559,759 | https://en.wikipedia.org/wiki/Liquid%20metal | A liquid metal is a metal or a metal alloy which is liquid at or near room temperature.
The only stable liquid elemental metal at room temperature is mercury (Hg), which is molten above −38.8 °C (234.3 K, −37.9 °F). Three more stable elemental metals melt just above room temperature: caesium (Cs), which has a melting point of 28.5 °C (83.3 °F); gallium (Ga) (30 °C [86 °F]); and rubidium (Rb) (39 °C [102 °F]). The radioactive metal francium (Fr) is probably liquid close to room temperature as well. Calculations predict that the radioactive metals copernicium (Cn) and flerovium (Fl) should also be liquid at room temperature.
Alloys can be liquid if they form a eutectic, meaning that the alloy's melting point is lower than any of the alloy's constituent metals. The standard metal for creating liquid alloys used to be mercury, but gallium-based alloys, which are lower both in their vapor pressure at room temperature and toxicity, are being used as a replacement in various applications.
Thermal and electrical conductivity
Alloy systems that are liquid at room temperature have thermal conductivity far superior to ordinary non-metallic liquids, allowing liquid metal to efficiently transfer energy from the heat source to the liquid. They also have a higher electrical conductivity that allows the liquid to be pumped more efficiently, by electromagnetic pumps. This results in the use of these materials for specific heat conducting and/or dissipation applications.
Another advantage of liquid alloy systems is their inherent high densities.
Viscosity
The viscosity of liquid metals can vary greatly depending on the atomic composition of the liquid, especially in the case of alloys. In particular, the temperature dependence of the viscosity of liquid metals may range from the standard Arrhenius law dependence, to a much steeper (non-Arrhenius) dependence such as that given empirically by the Vogel-Fulcher-Tammann equation.
A physical model for the viscosity of liquid metals, which captures this great variability in terms of the underlying interatomic interactions, was also developed.
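For reference, the two temperature dependences mentioned above are commonly written as follows; these are standard textbook forms, with $\eta_0$, $E_A$, $B$ and $T_0$ as material-dependent parameters rather than values tied to any particular alloy:

```latex
% Arrhenius dependence (activation energy E_A, gas constant R):
\eta(T) = \eta_0 \exp\!\left(\frac{E_A}{R\,T}\right)
% Vogel-Fulcher-Tammann dependence (apparent divergence at T_0 < T):
\eta(T) = \eta_0 \exp\!\left(\frac{B}{T - T_0}\right)
```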
The electrical resistance of a liquid metal can be estimated by means of the Ziman formula, which gives the resistance in terms of the static structure factor of the liquid as can be determined by neutron or X-ray scattering measurements.
Wetting to metallic and non-metallic surfaces
Once oxides have been removed from the substrate surface, most liquid metals will wet most metallic surfaces. At room temperature, liquid metals are often reactive with and soluble in metallic surfaces, though some solid metals resist attack by the common liquid metals. For example, gallium is corrosive to all metals except tungsten and tantalum, which have a high resistance to corrosion, more so than niobium, titanium and molybdenum.
Similar to indium, gallium and gallium-containing alloys have the ability to wet many non-metallic surfaces such as glass and quartz. Gently rubbing the alloy into the surface may help induce wetting. However, this observation of "wetting by rubbing into a glass surface" has created a widespread misconception that gallium-based liquid metals wet glass surfaces, as if the liquid breaks free of its oxide skin and wets the surface. The reality is the opposite: the oxide makes the liquid wet the glass. In more detail, as the liquid is rubbed into and spread onto the glass surface, the liquid oxidizes and coats the glass with a thin layer of solid oxide residues, on which the liquid metal wets. In other words, what is seen is a gallium-based liquid metal wetting its own solid oxide, not glass. This misconception was encouraged by the very fast oxidation of liquid gallium in even trace amounts of oxygen; nobody observed the true behavior of liquid gallium on glass until research at UCLA tested Galinstan, a gallium-based alloy that is liquid at room temperature, in an oxygen-free environment. (These alloys form a thin, dull-looking oxide skin that is easily dispersed with mild agitation; oxide-free surfaces are bright and lustrous.)
Applications
Applications of liquid metals include thermostats, switches, barometers, heat transfer systems, and thermal cooling and heating designs. They can also be used to conduct heat and electricity between non-metallic and metallic surfaces. Due to their free-flowing nature, another potential application is wearable and medical devices, where material deformability is important.
Liquid metal is sometimes used as a thermal interface material between coolers and processors because of its high thermal conductivity. The PlayStation 5 video game console uses liquid metal to cool components inside the console. Liquid metals are also used as coolants in liquid-metal-cooled nuclear reactors.
Liquid metal can sometimes be used in biological applications, e.g., making interconnects that flex without fatigue. As Galinstan is not particularly toxic, wires made from silicone with a core of liquid metal would be ideal for intracardiac pacemakers and neural implants, where delicate brain tissue cannot tolerate a conventional solid implant. A wire constructed of this material can be stretched to 3 or even 5 times its length and still conduct electricity, returning to its original size and shape with no loss.
Due to their unique combination of high surface tension and fluidic deformability, liquid metals are useful for creating soft actuators. The force-generating mechanisms in liquid metal actuators are typically achieved by modulation of their surface tension. For instance, a liquid metal droplet can be designed to bridge two moving parts (e.g., in robotic systems) in such a way as to generate contraction when the surface tension increases. The principles of muscle-like contraction in liquid metal actuators have been studied for their potential as a next-generation artificial muscle that offers several liquid-specific advantages over other solid materials.
Liquid-mirror telescopes can use liquid metals formed into a parabola through a spinning tank to serve as the primary mirror of a reflecting telescope.
The Spallation Neutron Source employs liquid metals as targets for generating pulsed neutron beams.
See also
Electromagnetic pump
NaK
Fusible alloy
Galinstan
References
Fusible alloys
Brazing and soldering
Alloys
Amorphous metals | Liquid metal | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,318 | [
"Metallurgy",
"Unsolved problems in physics",
"Fusible alloys",
"Amorphous metals",
"Chemical mixtures",
"Alloys",
"Amorphous solids"
] |
29,564,581 | https://en.wikipedia.org/wiki/Aircraft%20design%20process | The aircraft design process is a loosely defined method used to balance many competing and demanding requirements to produce an aircraft that is strong, lightweight, economical and can carry an adequate payload while being sufficiently reliable to safely fly for the design life of the aircraft. Similar to, but more exacting than, the usual engineering design process, the technique is highly iterative, involving high-level configuration tradeoffs, a mixture of analysis and testing and the detailed examination of the adequacy of every part of the structure. For some types of aircraft, the design process is regulated by civil airworthiness authorities.
This article deals with powered aircraft, such as airplane and helicopter designs.
Design constraints
Purpose
The design process starts with the aircraft's intended purpose. Commercial airliners are designed for carrying a passenger or cargo payload, long range and greater fuel efficiency, whereas fighter jets are designed to perform high-speed maneuvers and provide close support to ground troops. Some aircraft have specific missions; for instance, amphibious airplanes have a unique design that allows them to operate from both land and water, some fighters, like the Harrier jump jet, have VTOL (vertical take-off and landing) ability, and helicopters can hover over an area for a period of time.
The purpose may be to fit a specific requirement, e.g. as in the historical case of a British Air Ministry specification, or fill a perceived "gap in the market"; that is, a class or design of aircraft which does not yet exist, but for which there would be significant demand.
Aircraft regulations
Another important factor that influences the design are the requirements for obtaining a type certificate for a new design of aircraft. These requirements are published by major national airworthiness authorities including the US Federal Aviation Administration and the European Aviation Safety Agency.
Airports may also impose limits on aircraft; for instance, a maximum allowed wingspan for conventional aircraft helps prevent collisions between aircraft while taxiing.
Financial factors and market
Budget limitations, market requirements and competition set constraints on the design process and comprise the non-technical influences on aircraft design along with environmental factors. Competition leads to companies striving for better efficiency in the design without compromising performance and incorporating new techniques and technology.
In the 1950s and '60s, unattainable project goals were regularly set, but then abandoned, whereas today troubled programs like the Boeing 787 and the Lockheed Martin F-35 have proven far more costly and complex to develop than expected.
More advanced and integrated design tools have been developed. Model-based systems engineering predicts potentially problematic interactions, while computational analysis and optimization allows designers to explore more options early in the process. Increasing automation in engineering and manufacturing allows faster and cheaper development.
Technology advances from materials to manufacturing enable more complex design variations like multifunction parts. Once impossible to design or construct, these can now be 3D printed, but they have yet to prove their utility in applications like the Northrop Grumman B-21 or the re-engined A320neo and 737 MAX. Airbus and Boeing also recognize the economic limits: the next airliner generation cannot cost more than the previous ones did.
Environmental factors
An increase in the number of aircraft also means greater carbon emissions. Environmental scientists have voiced concern over the main kinds of pollution associated with aircraft, mainly noise and emissions. Aircraft engines have been historically notorious for creating noise pollution, and the expansion of airways over already congested and polluted cities has drawn heavy criticism, making it necessary to have environmental policies for aircraft noise. Noise also arises from the airframe, where the airflow directions are changed. Improved noise regulations have forced designers to create quieter engines and airframes. Emissions from aircraft include particulates, carbon dioxide (CO2), sulfur dioxide (SO2), carbon monoxide (CO), various oxides of nitrogen and unburnt hydrocarbons. To combat the pollution, ICAO set recommendations in 1981 to control aircraft emissions. Newer, environmentally friendly fuels have been developed and the use of recyclable materials in manufacturing has helped reduce the ecological impact of aircraft. Environmental limitations also affect airfield compatibility. Airports around the world have been built to suit the topography of the particular region. Space limitations, pavement design, runway end safety areas and the unique location of each airport are some of the airport factors that influence aircraft design. However, changes in aircraft design also influence airfield design; for instance, the recent introduction of new large aircraft (NLAs) such as the superjumbo Airbus A380 has led to airports worldwide redesigning their facilities to accommodate their large size and service requirements.
Safety
The high speeds, fuel tanks, atmospheric conditions at cruise altitudes, natural hazards (thunderstorms, hail and bird strikes) and human error are some of the many hazards that pose a threat to air travel.
Airworthiness is the standard by which aircraft are determined fit to fly. The responsibility for airworthiness lies with the national civil aviation regulatory bodies, manufacturers, as well as owners and operators.
The International Civil Aviation Organization sets international standards and recommended practices on which national authorities should base their regulations. The national regulatory authorities set standards for airworthiness, issue certificates to manufacturers and operators, and set the standards of personnel training. Every country has its own regulatory body, such as the Federal Aviation Administration in the USA, the DGCA (Directorate General of Civil Aviation) in India, etc.
The aircraft manufacturer makes sure that the aircraft meets existing design standards, defines the operating limitations and maintenance schedules and provides support and maintenance throughout the operational life of the aircraft. The aviation operators include the passenger and cargo airliners, air forces and owners of private aircraft. They agree to comply with the regulations set by the regulatory bodies, understand the limitations of the aircraft as specified by the manufacturer, report defects and assist the manufacturers in keeping up the airworthiness standards.
Much of present-day design criticism is built on crashworthiness. Even with the greatest attention to airworthiness, accidents still occur. Crashworthiness is the qualitative evaluation of how aircraft survive an accident. The main objective is to protect the passengers or valuable cargo from the damage caused by an accident. In the case of airliners, the stressed skin of the pressurized fuselage provides this feature, but in the event of a nose or tail impact, large bending moments build all the way through the fuselage, causing fractures in the shell and breaking the fuselage up into smaller sections. Passenger aircraft are therefore designed so that seating is kept away from areas likely to be intruded upon in an accident, such as near a propeller, engine nacelle or undercarriage. The interior of the cabin is also fitted with safety features such as oxygen masks that drop down in the event of loss of cabin pressure, lockable luggage compartments, safety belts, lifejackets, emergency doors and luminous floor strips. Aircraft are sometimes designed with emergency water landing in mind; for instance, the Airbus A330 has a 'ditching' switch that closes valves and openings beneath the aircraft, slowing the ingress of water.
Design optimization
Aircraft designers normally rough-out the initial design with consideration of all the constraints on their design. Historically, design teams were small, usually headed by a chief designer who knew all the design requirements and objectives and coordinated the team accordingly. As time progressed, the complexity of military and airline aircraft also grew. Modern military and airline design projects are of such a large scale that every design aspect is tackled by different teams and then brought together. In general aviation a large number of light aircraft are designed and built by amateur hobbyists and enthusiasts.
Computer-aided design of aircraft
In the early years of aircraft design, designers generally used analytical theory to do the various engineering calculations that go into the design process along with a lot of experimentation. These calculations were labour-intensive and time-consuming. In the 1940s, several engineers started looking for ways to automate and simplify the calculation process and many relations and semi-empirical formulas were developed. Even after simplification, the calculations continued to be extensive. With the invention of the computer, engineers realized that a majority of the calculations could be automated, but the lack of design visualization and the huge amount of experimentation involved kept the field of aircraft design stagnant. With the rise of programming languages, engineers could now write programs that were tailored to design an aircraft. Originally this was done with mainframe computers and used low-level programming languages that required the user to be fluent in the language and know the architecture of the computer. With the introduction of personal computers, design programs began employing a more user-friendly approach.
Design aspects
The main aspects of aircraft design are:
Aerodynamics
Propulsion
Controls
Mass
Structure
All aircraft designs involve compromises of these factors to achieve the design mission.
Wing design
The wing of a fixed-wing aircraft provides the lift necessary for flight. Wing geometry affects every aspect of an aircraft's flight. The wing area will usually be dictated by the desired stalling speed but the overall shape of the planform and other detail aspects may be influenced by wing layout factors. The wing can be mounted to the fuselage in high, low and middle positions. The wing design depends on many parameters such as selection of aspect ratio, taper ratio, sweepback angle, thickness ratio, section profile, washout and dihedral. The cross-sectional shape of the wing is its airfoil. The construction of the wing starts with the rib which defines the airfoil shape. Ribs can be made of wood, metal, plastic or even composites.
The wing must be designed and tested to ensure it can withstand the maximum loads imposed by maneuvering, and by atmospheric gusts.
Fuselage
The fuselage is the part of the aircraft that contains the cockpit, passenger cabin or cargo hold.
Empennage
Propulsion
Aircraft propulsion may be achieved by specially designed aircraft engines, adapted auto, motorcycle or snowmobile engines, electric engines or even human muscle power. The main parameters of engine design are:
Maximum engine thrust available
Fuel consumption
Engine mass
Engine geometry
The thrust provided by the engine must balance the drag at cruise speed and be greater than the drag to allow acceleration. The engine requirement varies with the type of aircraft. For instance, commercial airliners spend most of their time at cruise speed and therefore need high engine efficiency. High-performance fighter jets need very high acceleration and therefore have very high thrust requirements.
Landing gear
Weight
The weight of the aircraft is the common factor that links all aspects of aircraft design, such as aerodynamics, structure, and propulsion. An aircraft's weight is derived from various factors such as empty weight, payload, useful load, etc. The various weights are then used to calculate the center of mass of the entire aircraft, which must fall within the limits established by the manufacturer.
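As a minimal sketch of this bookkeeping, the following computes a total weight and center of mass from component weights and longitudinal positions; all component names, weights, arms, and CG limits are hypothetical values for illustration only.

```python
# Minimal sketch: total weight and center of mass (CG) from component
# weights and longitudinal positions ("arms"). All numbers are hypothetical.

components = {
    # name: (weight_kg, arm_m measured aft of the nose datum)
    "empty_airframe": (1200.0, 2.4),
    "fuel":           (300.0, 2.1),
    "payload":        (400.0, 3.0),
}

total_weight = sum(w for w, _ in components.values())
# The CG is the weight-weighted average of the component arms.
cg = sum(w * arm for w, arm in components.values()) / total_weight

print(f"total weight = {total_weight:.0f} kg, CG = {cg:.2f} m aft of datum")

# The computed CG must then be checked against the manufacturer's
# forward and aft limits (hypothetical values here, in metres):
forward_limit, aft_limit = 2.0, 2.8
assert forward_limit <= cg <= aft_limit, "CG outside established limits"
```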
Structure
The aircraft structure focuses not only on strength, aeroelasticity, durability, damage tolerance, stability, but also on fail-safety, corrosion resistance, maintainability and ease of manufacturing. The structure must be able to withstand the stresses caused by cabin pressurization, if fitted, turbulence and engine or rotor vibrations.
Design process and simulation
The design of any aircraft starts out in three phases:
Conceptual design
Aircraft conceptual design involves sketching a variety of possible configurations that meet the required design specifications. By drawing a set of configurations, designers seek to reach the design configuration that satisfactorily meets all requirements while working hand in hand with factors such as aerodynamics, propulsion, flight performance, and structural and control systems. This is called design optimization. Fundamental aspects such as fuselage shape, wing configuration and location, and engine size and type are all determined at this stage. Constraints to design, like those mentioned above, are all taken into account at this stage as well. The final product is a conceptual layout of the aircraft configuration on paper or computer screen, to be reviewed by engineers and other designers.
Preliminary design phase
The design configuration arrived at in the conceptual design phase is then tweaked and remodeled to fit the design parameters. In this phase, wind tunnel testing and computational fluid dynamics calculations of the flow field around the aircraft are done. Major structural and control analysis is also carried out in this phase. Aerodynamic flaws and structural instabilities, if any, are corrected and the final design is drawn and finalized. After the design is finalized, the key decision of whether to actually go ahead with production lies with the manufacturer or individual designing it. At this point several designs, though perfectly capable of flight and performance, may have been opted out of production because they are economically nonviable.
Detail design phase
This phase simply deals with the fabrication aspect of the aircraft to be manufactured. It determines the number, design and location of ribs, spars, sections and other structural elements. All aerodynamic, structural, propulsion, control and performance aspects have already been covered in the preliminary design phase and only the manufacturing remains. Flight simulators for aircraft are also developed at this stage.
Delays
Some commercial aircraft have experienced significant schedule delays and cost overruns in the development phase. Examples of this include the Boeing 787 Dreamliner with a delay of 4 years with massive cost overruns, the Boeing 747-8 with a two-year delay, the Airbus A380 with a two-year delay and US$6.1 billion in cost overruns, the Airbus A350 with delays and cost overruns, the Bombardier C Series, Global 7000 and 8000, the Comac C919 with a four-year delay and the Mitsubishi Regional Jet, which was delayed by four years and ended up with empty weight issues.
Program development
An existing aircraft program can be developed for performance and economy gains by stretching the fuselage, increasing the MTOW, enhancing the aerodynamics, installing new engines, new wings or new avionics.
For a 9,100 nmi long range at Mach 0.8/FL360, a 10% lower TSFC saves 13% of fuel, a 10% L/D increase saves 12%, a 10% lower OEW saves 6% and all combined saves 28%.
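As a rough illustration of where such sensitivity figures come from, the following is a minimal first-order sketch based on the Breguet range equation. The cruise speed, L/D, and TSFC values are assumed for illustration (only the 9,100 nmi range is taken from the text), and the simple model ignores the "snowball" effect of a lighter fuel load reducing aircraft weight, which is why it yields somewhat smaller savings than the quoted 13% and 12%.

```python
import math

# First-order fuel-burn sensitivity from the Breguet range equation,
#   R = (V / TSFC) * (L/D) * ln(W_start / W_end),
# rearranged to give fuel burned as a fraction of take-off weight.

R = 9100.0 * 1.852   # range in km (9,100 nmi, from the text)
V = 470.0 * 1.852    # cruise speed in km/h (~Mach 0.8 at FL360, assumed)
LD = 19.0            # lift-to-drag ratio (assumed)
TSFC = 0.52          # thrust-specific fuel consumption in 1/h (assumed)

def fuel_fraction(tsfc, ld):
    """Fuel burned as a fraction of take-off weight over range R."""
    return 1.0 - math.exp(-R * tsfc / (V * ld))

base = fuel_fraction(TSFC, LD)
print(f"10% lower TSFC saves {1 - fuel_fraction(0.9 * TSFC, LD) / base:.1%} fuel")
print(f"10% higher L/D saves {1 - fuel_fraction(TSFC, 1.1 * LD) / base:.1%} fuel")
```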
Re-engine
Fuselage stretch
See also
Index of aviation articles
Aerospace engineering
Aircraft manufacturer
Iron bird (aviation)
References
External links
Re-engine
Aerospace engineering
Aerodynamics
Design | Aircraft design process | [
"Chemistry",
"Engineering"
] | 2,877 | [
"Aerospace engineering",
"Aerodynamics",
"Design",
"Fluid dynamics"
] |
28,184,481 | https://en.wikipedia.org/wiki/Silicon%20Photonics%20Link | Silicon Photonics Link is a silicon-based optical data connection developed by Intel Corporation which uses silicon photonics and a hybrid silicon laser to provide 50 Gbit/s of bandwidth. Intel expected the technology to be in products by 2015.
The technology builds on academic and industrial research at Intel Labs, as well as work on a 50 Gbit/s multi-color transmission line at Cornell University and Columbia University.
See also
List of device bandwidths
Thunderbolt (Light Peak)
Universal Serial Bus (USB)
References
External links
The 50G Silicon Photonics Link
http://www.intel.com/content/www/us/en/research/intel-labs-silicon-photonics-research.html
http://www.intel.com/content/www/us/en/data-center/silicon-photonics-research.html
Computer buses
Intel products
Silicon photonics | Silicon Photonics Link | [
"Materials_science"
] | 180 | [
"Nanotechnology",
"Silicon photonics"
] |
28,189,638 | https://en.wikipedia.org/wiki/Affilin | Affilins are artificial proteins designed to selectively bind antigens. Affilin proteins are structurally derived from human ubiquitin (historically also from gamma-B crystallin). Affilin proteins are constructed by modification of surface-exposed amino acids of these proteins and isolated by display techniques such as phage display and screening. They resemble antibodies in their affinity and specificity to antigens but not in structure, which makes them a type of antibody mimetic. Affilin proteins were developed by Scil Proteins GmbH as potential new biopharmaceutical drugs, diagnostics and affinity ligands.
Structure
Two proteins, gamma-B crystallin and ubiquitin, have been described as scaffolds for Affilin proteins. Certain amino acids in these proteins can be substituted by others without losing structural integrity, a process creating regions capable of binding different antigens, depending on which amino acids are exchanged. In both types, the binding region is typically located in a beta sheet structure, whereas the binding regions of antibodies, called complementarity-determining regions, are flexible loops.
Based on gamma crystallin
Historically, Affilin molecules were based on gamma crystallin, a family of proteins found in the eye lens of vertebrates, including humans. It consists of two identical domains with mainly beta sheet structure and a total molecular mass of about 20 kDa. The eight surface-exposed amino acids 2, 4, 6, 15, 17, 19, 36, and 38 are suitable for modification.
Based on ubiquitin
Ubiquitin, as the name suggests, is a highly conserved protein occurring ubiquitously in eukaryotes. It consists of 76 amino acids in three and a half alpha helix windings and five strands constituting a beta sheet. For example, the eight surface-exposed exchangeable amino acids 2, 4, 6, 62, 63, 64, 65, and 66 are located at the beginning of the first N-terminal beta strand (2, 4, 6), at the nearby beginning of the C-terminal strand and the loop leading up to it (63–66). The resulting Affilin proteins are about 10 kDa in mass.
Properties
The molecular mass of crystallin and ubiquitin based Affilin proteins is only one eighth or one sixteenth of that of an IgG antibody, respectively. This leads to improved tissue permeability, heat stability up to 90 °C (194 °F), and stability towards proteases as well as acids and bases. The latter enables Affilin proteins to pass through the intestine, but like most proteins they are not absorbed into the bloodstream. Renal clearance, another consequence of their small size, is the reason for their short plasma half-life, generally a disadvantage for potential drugs.
Production
Molecular libraries of Affilin proteins are generated by randomizing sets of amino acids by mutagenesis methods. Substituting selected amino acids at the potential binding site gives 19^8 ≈ 17,000,000,000 possible combinations (e.g., eight positions each substituted by any of 19 amino acids). Cysteine is excluded because of its liability to form disulfide bonds. In an Affilin protein comprising two modified ubiquitin molecules, for example, up to 14 amino acids are exchanged, resulting in 8 × 10^17 combinations, but not all of these are realized in a given library.
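The library-size arithmetic above can be checked with a short sketch; the figure 19^8 follows from eight randomized positions and the 19 non-cysteine amino acids:

```python
# Quick check of the library-size arithmetic: eight surface positions,
# each substitutable by any of the 19 non-cysteine amino acids.
n_positions = 8
n_residues = 19          # 20 canonical amino acids minus cysteine

library_size = n_residues ** n_positions
print(library_size)      # 16983563041, i.e. about 1.7e10 ("19^8")
```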
The next step is the selection of Affilin proteins that bind the desired target protein. To this end display techniques such as phage display or ribosome display are used. The fitting species are isolated and characterized physically, chemically and pharmacologically. Subsequent dimerisation or multimerisation can increase plasma half-life and, due to avidity, affinity to the target protein. Alternatively, multispecific Affilin molecules can be generated, binding different targets simultaneously. Radionuclides or cytotoxins can be conjugated to Affilin proteins, making them potential tumour therapeutics and diagnostics. Conjugation of cytokines has also been tested in vitro.
Large-scale production of Affilin proteins is facilitated by E. coli and other organisms commonly used in biotechnology.
References
External links
Scil Proteins, the developer
Antibody mimetics | Affilin | [
"Chemistry"
] | 878 | [
"Antibody mimetics",
"Molecular biology"
] |
31,976,290 | https://en.wikipedia.org/wiki/Smoothed%20finite%20element%20method | Smoothed finite element methods (S-FEM) are a particular class of numerical algorithms for the simulation of physical phenomena. They were developed by combining meshfree methods with the finite element method. S-FEM are applicable to solid mechanics as well as fluid dynamics problems, although so far they have mainly been applied to the former.
Description
The essential idea in the S-FEM is to use a finite element mesh (in particular a triangular mesh) to construct numerical models of good performance. This is achieved by modifying the compatible strain field, or by constructing a strain field using only the displacements, in the hope that a Galerkin model using the modified/constructed strain field can deliver some good properties. Such a modification/construction can be performed within elements but more often beyond the elements (a meshfree concept): bringing in information from the neighboring elements. Naturally, the strain field has to satisfy certain conditions, and the standard Galerkin weak form needs to be modified accordingly to ensure stability and convergence. A comprehensive review of S-FEM covering both methodology and applications can be found in ("Smoothed Finite Element Methods (S-FEM): An Overview and Recent Developments").
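The smoothing operation at the heart of these methods is commonly written in the following cell-wise form; this is a sketch of the standard expression from the S-FEM literature, not a formula reproduced from this article:

```latex
% Smoothed strain over a smoothing domain \Omega_k of area A_k:
% the divergence theorem converts the domain average of the compatible
% strain into a boundary integral of the displacement field u_h.
\tilde{\boldsymbol{\varepsilon}}_k
  = \frac{1}{A_k} \int_{\Omega_k} \boldsymbol{\varepsilon}(\mathbf{u}_h)\, d\Omega
  = \frac{1}{A_k} \oint_{\Gamma_k} \mathbf{L}_n(\mathbf{x})\, \mathbf{u}_h(\mathbf{x})\, d\Gamma
```

Here Ω_k is a smoothing domain (node-, edge-, face- or cell-based) with area A_k and boundary Γ_k, and L_n contains the components of the outward unit normal; the boundary form means only shape-function values, not their derivatives, are needed on Γ_k.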
History
The development of S-FEM started from works on meshfree methods, where the so-called weakened weak (W2) formulation based on the G space theory was developed. The W2 formulation offers possibilities to formulate various (uniformly) "soft" models that work well with triangular meshes. Because triangular meshes can be generated automatically, re-meshing becomes much easier, and hence so does automation in modeling and simulation. In addition, W2 models can be made soft enough (in a uniform fashion) to produce upper-bound solutions (for force-driven problems). Together with stiff models (such as the fully compatible FEM models), one can conveniently bound the solution from both sides. This allows easy error estimation for generally complicated problems, as long as a triangular mesh can be generated. Typical W2 models are the Smoothed Point Interpolation Methods (or S-PIM). The S-PIM can be node-based (known as NS-PIM or LC-PIM), edge-based (ES-PIM), and cell-based (CS-PIM). The NS-PIM was developed using the so-called SCNI technique. It was then discovered that NS-PIM is capable of producing upper-bound solutions and is free of volumetric locking. The ES-PIM is found superior in accuracy, and the CS-PIM behaves in between the NS-PIM and the ES-PIM. Moreover, W2 formulations allow the use of polynomial and radial basis functions in the creation of shape functions (accommodating discontinuous displacement functions, as long as they are in a G1 space), which opens further room for future developments.
The S-FEM is largely the linear version of the S-PIM, but with most of the properties of the S-PIM and much simpler. It also has the variations NS-FEM, ES-FEM and CS-FEM. The major properties of the S-PIM can also be found in the S-FEM.
List of S-FEM models
Node-based Smoothed FEM (NS-FEM)
Edge-based Smoothed FEM (ES-FEM)
Face-based Smoothed FEM (FS-FEM)
Cell-based Smoothed FEM (CS-FEM)
Node/Edge-based Smoothed FEM (NS/ES-FEM)
Alpha FEM method (Alpha FEM)
Beta FEM method (Beta FEM)
Applications
S-FEM has been applied to solve the following physical problems:
Mechanics for solid structures and piezoelectrics;
Fracture mechanics and crack propagation;
Nonlinear and contact problems;
Stochastic analysis;
Heat transfer;
Structural acoustics;
Adaptive analysis;
Limit analysis;
Crystal plasticity modeling.
Basic Formulation of S-FEM
The fundamental problem addressed by S-FEM is typically the solution of Poisson's equation with Dirichlet boundary conditions, given as follows:
Δu + f = 0 in Ω,  u = g on Γ_D
where Ω is the domain and Γ is its boundary, with Γ_D = Γ. Here, u: Ω → R is the trial solution, f: Ω → R is a given function, and g represents the Dirichlet boundary data.
S-FEM involves discretizing the domain Ω using finite element meshes, which can be global or local. The global mesh represents the entire domain, while the local mesh is used to discretize regions requiring high resolution within the global domain. The local domain is assumed to be included in the global domain (Ω_L ⊆ Ω_G).
Weak Formulation
The weak form of the problem is derived by multiplying the equation by suitable test functions and integrating over the domain. In S-FEM, the weak form is expressed as follows: given f and g, find u ∈ U such that for all w ∈ V,
a_Ω(w, u) = L_Ω(w)
where a_Ω is a bilinear form and L_Ω is a linear functional.
S-FEM Formulation
In S-FEM, the trial solution u and test functions w are defined separately for the global (Ω_G) and local (Ω_L) domains. The trial solution spaces U_G, U_L and test function spaces V_G, V_L are defined accordingly. The weak form in the S-FEM formulation becomes:
a_Ω′(w, u) = L_Ω′(w)
where a_Ω′(⋅, ⋅) and L_Ω′(⋅) are the modified bilinear form and linear functional, respectively, adapted to the S-FEM approach.
Challenges
One of the primary challenges of S-FEM is the difficulty of exactly integrating the submatrices representing the relationship between the global and local meshes (K_GL and K_LG). Additionally, the matrix K can become singular, posing numerical challenges in solving the resulting linear algebraic equations.
These challenges and potential solutions are discussed in detail in the literature, aiming to improve the efficiency and accuracy of S-FEM for various applications.
B-spline S-FEM (BFSEM)
S-FEM can reasonably model an analytical domain by superimposing meshes with different spatial resolutions, and it has the intrinsic advantages of local high accuracy, low computation time, and a simple meshing procedure. However, it has disadvantages such as limited accuracy of numerical integration and possible matrix singularity. Although several additional techniques have been proposed to mitigate these limitations, they are computationally expensive or ad hoc, and detract from the method's strengths. These issues can be addressed by incorporating cubic B-spline functions with C2 continuity across element boundaries as the global basis functions. Matrix singularity is avoided by applying different basis functions to different meshes. In a recent study, Lagrange basis functions were used as the local basis functions. With this method the numerical integration can be calculated with sufficient accuracy without any of the additional techniques used in conventional S-FEM. Furthermore, the proposed method avoids matrix singularity and is superior to conventional methods in terms of convergence for solving linear equations. Therefore, the proposed method has the potential to reduce computation time while maintaining an accuracy comparable to conventional S-FEM.
See also
Finite element method
Meshfree methods
Weakened weak form
Loubignac iteration
References
External links
Continuum mechanics
Finite element method
Numerical differential equations
Partial differential equations
Structural analysis | Smoothed finite element method | [
"Physics",
"Engineering"
] | 1,542 | [
"Structural engineering",
"Continuum mechanics",
"Structural analysis",
"Classical mechanics",
"Mechanical engineering",
"Aerospace engineering"
] |
31,979,549 | https://en.wikipedia.org/wiki/Veblen%E2%80%93Young%20theorem | In mathematics, the Veblen–Young theorem, proved by Oswald Veblen and John Wesley Young, states that a projective space of dimension at least 3 can be constructed as the projective space associated to a vector space over a division ring.
Non-Desarguesian planes give examples of 2-dimensional projective spaces that do not arise from vector spaces over division rings, showing that the restriction to dimension at least 3 is necessary.
Jacques Tits generalized the Veblen–Young theorem to Tits buildings, showing that those of rank at least 3 arise from algebraic groups.
John von Neumann generalized the Veblen–Young theorem to continuous geometry, showing that a complemented modular lattice of order at least 4 is isomorphic to the lattice of principal right ideals of a von Neumann regular ring.
Statement
A projective space S can be defined abstractly as a set P (the set of points), together with a set L of subsets of P (the set of lines), satisfying these axioms:
Each two distinct points p and q are in exactly one line.
Veblen's axiom: If a, b, c, d are distinct points and the lines through ab and cd meet, then so do the lines through ac and bd.
Any line has at least 3 points on it.
The Veblen–Young theorem states that if the dimension of a projective space is at least 3 (meaning that there are two non-intersecting lines) then the projective space is isomorphic with the projective space of lines in a vector space over some division ring K.
References
Theorems in projective geometry
Theorems in algebraic geometry | Veblen–Young theorem | [
"Mathematics"
] | 317 | [
"Theorems in algebraic geometry",
"Theorems in projective geometry",
"Theorems in geometry"
] |
49,289,621 | https://en.wikipedia.org/wiki/Annealed%20pyrolytic%20graphite | Annealed Pyrolytic Graphite (APG), also known as Thermally Annealed Pyrolytic Graphite (TPG), is a form of synthetic graphite that offers excellent in-plane thermal conductivity. As with pyrolytic carbon or pyrolytic graphite (PG), APG is also low in mass, is electrically conductive, and offers diamagnetic properties that allow it to levitate in magnetic fields.
Physical Properties
APG is an anisotropic material with extremely high in-plane thermal conductivity (1,700 W/m-K at room temperature) and low through-thickness conductivity. Its laminate structure remains stable across a wide temperature range, allowing it to be used in a variety of heat transfer applications. APG's conductivity generally increases as the temperature decreases, peaking at 2,800 W/m-K at approximately 150 K. Unlike pyrolytic graphite, the x-y planar conductivity is consistent across each basal plane; thus the conductivity in the center planes is consistent with that in the outer planes. The in-plane covalently bonded carbon atoms in a hexagonal geometry account for APG's high in-plane thermal conductivity and its high in-plane stiffness. Through its thickness, these hexagonal planes are weakly bonded (van der Waals bonds), resulting in a material with poor through-thickness thermal conductivity, stiffness, and strength.
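As a simple numerical illustration of this anisotropy, the sketch below compares steady-state heat flux in the two directions using Fourier's law; the through-thickness conductivity, temperature difference, and path length are assumed values, with only the 1,700 W/m-K in-plane figure taken from the text.

```python
# Illustrative sketch of APG's thermal anisotropy: steady-state heat flux
# q = k * dT / L (Fourier's law) for the same temperature difference and
# path length in the two directions.

k_in_plane = 1700.0   # W/(m*K), in-plane at room temperature (from the text)
k_through = 10.0      # W/(m*K), through-thickness (assumed order of magnitude)

dT = 10.0             # K, assumed temperature difference
L = 0.01              # m, assumed conduction path length

q_in = k_in_plane * dT / L    # in-plane heat flux, W/m^2
q_thru = k_through * dT / L   # through-thickness heat flux, W/m^2

print(f"in-plane flux: {q_in:.0f} W/m^2, through-thickness: {q_thru:.0f} W/m^2")
print(f"anisotropy ratio: {q_in / q_thru:.0f}x")  # ~170x for these inputs
```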
Synthesis
APG is produced by a process similar to that for Highly Oriented Pyrolytic Graphite (HOPG), in which hydrocarbon gas is heated until it breaks down into carbon. Pyrolytic graphite (PG) is then grown on plates using a chemical vapor deposition (CVD) process. The PG is then annealed at high temperature to form the more planar and more uniform carbon structure of APG, described above. The primary difference between the HOPG and APG synthesis methods is that the APG annealing process does not require the use of induced stresses, resulting in a more affordable and practical bulk material for production use.
Applications
APG is primarily used as a heat spreader for the thermal management of high-end electronics. Due to its poor mechanical properties, APG is typically encapsulated within structural metallic materials. Aluminum is the most commonly chosen encapsulant for its strength, low mass, cost, manufacturability, and thermal conductivity. Since APG's conductivity is much lower through its thickness, thermal vias are sometimes inserted into the assembly to transfer heat into the graphite. These vias are typically composed of aluminum or copper. Thin, flexible sheets of APG can be encapsulated in thin flexible materials, such as polymers, aluminum foil, or copper foil, to create what is known as a thermal strap.
Aerospace: Aluminum-APG plates are most commonly used as heat spreader plates to transfer heat away from high power density electronics in aircraft and spacecraft.
Scientific Cameras: Cu-APG plates are used to cool and isothermalize CCD detectors at cryogenic temperatures.
References
Heat transfer
Allotropes of carbon
Heat conduction
Spacecraft components
Computer hardware cooling | Annealed pyrolytic graphite | [
"Physics",
"Chemistry"
] | 667 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Allotropes of carbon",
"Allotropes",
"Thermodynamics",
"Heat conduction"
] |
49,290,419 | https://en.wikipedia.org/wiki/Metallogels | Metallogels are one-dimensional nanostructured materials that constitute a growing class in the field of supramolecular chemistry. Non-covalent interactions, such as hydrophobic interactions, π-π interactions, and hydrogen bonding, are among the forces responsible for the formation of these gels from small molecules. However, the main driving force for the formation of a metallogel is metal-ligand coordination. Once the structure has been established, it resists gravitational force when inverted.
Synthesis Method
Since the properties of gels depend on the type of non-covalent interactions involved, the metal-ligand interaction provides not only thermodynamic stability but also kinetic lability. The general method for synthesizing gels is to heat the solution, which contains the metal ion being used and investigated, along with the ligand that will form the metallogel around it, as well as any other compounds needed to create the appropriate conditions for the reaction to proceed well, until all added solids (depending on the type of gel prepared) are dissolved in the solvent used, and then to cool it down until the gels are self-assembled and properly formed. However, this simple method has not shown favorable results with the addition of several transition metals, along with lanthanides, to an acetonitrile solution of the ligand. In these studies, the ligand used is a 2,6-bis(1′-alkylbenzimidazolyl)pyridine, owing to its commercial abundance and the wide variety of synthetic pathways that allow functionalization of this ligand, and therefore chemical tuning of the metallogel. Under controlled heating and cooling conditions, however, the addition of a source of transition metals to the solution containing the ligand, along with lanthanide ions, yielded stable gels that passed the inversion test.
Self-assembly occurs under the influence of non-covalent interactions. These linear, self-assembling compounds can continue to self-assemble, forming columnar, helical structures that further aggregate into bundles of fibers.
Another approach to forming gels as functional nanomaterials is the bottom-up method used in subcomponent self-assembly. This method aims to save resources, shorten synthesis time, and offer a wider range of gels through the quick exchange of one of the reaction components.
Examples
Although the synthesis method is generally the same, relying on the self-assembly of small molecules under appropriate conditions, metallogels differ mainly in the metal ion used, which directly influences their functions and their chemical, optical, and electronic properties. Among the numerous metal ions used, gold ions have been investigated for a wide variety of foreseen applications, as discussed in the applications section. They are further divided into two categories, based on the type of solvent used during synthesis. Gold organometallogelators are formed by Au(I) in trinuclear gold(I) pyrazolate complexes with long alkyl chains, which appear as a red-luminescent organogel. Gold hydrometallogelators are made of glutathione and Au(III), which appear as a transparent gel. Silver metal ions also show self-assembly properties, since they have a high affinity for binding nitrogen, which can act as the driving force to form stable supramolecular structures.
However, copper ions have a promiscuous nature that allows them to bind a variety of ligands, readily forming stable metallogels with tunable properties and widening the scope of their applications. Bipyridines are among the most important ligands, since the formation of these metallogels can inform research on the coordination of copper ions to DNA base pairs. Oxalic acid dihydrate is another important ligand that easily forms stable structures when copper salts are added, which can be used as proton conductors. Furthermore, a bile acid-picolinic acid conjugate can form gels in solvents that are 30%-50% organic. The increased water content renders this gel more biocompatible, offering room for further investigation.
Palladium ions have also been used among the transition metals to form catalytic and irreversible metallogels.
Applications
Metallogels exhibit multi-responsive properties to a wide range of environmental stimuli. In particular, metallogels made of transition metals and lanthanoids are thermo-responsive, mechano-responsive, chemo-responsive, and photo-responsive. A Co/La metallogel system shows an inverse gel-sol transition when heated to 100 °C. On heating, the orange color of the solution remains unchanged, suggesting that only the La/ligand part of the system responds to the heat. Such behavior is classified as a thermo-response. Metallogels are also mechano-responsive: a Zn/La system forms a gel-like material upon addition of CH3CN as solvent followed by a gentle shake, but this material turns into a transparent liquid after sitting for 20 seconds. As an example of a chemo-response, adding a small amount of formic acid to Zn/Eu causes the breakdown of the gel-like material and the loss of its mechanical stability and light emission. Different metal/lanthanoid systems show different emission bands in photoluminescence spectra. Co/Eu emits no band in the spectra due to the presence of a low-energy metal in the system. Zn/La shows a metal-bound, ligand-based signal at 397 nm, while Zn/Eu shows lanthanide-metal signals at 581, 594, 616, and 652 nm and a ligand signal at 397 nm, indicating that the ligand is sensitive to metal binding.
In addition to these multi-responsive properties, gold metallogels can prove useful in cosmetics, food processing, and lubrication. Such gels are used in drug delivery to trap active enzymes and bacteria inside them. Furthermore, certain technologies producing valves, clutches, and dampers rely on the multi-responsive nature of metallogels to electric and magnetic stimuli.
A recent study on metal-organic gels involving cadmium and zinc ions shows promising results for absorbing dyes, emulating the ability of natural systems to remove toxic materials that are difficult to decompose.
References
Nanomaterials | Metallogels | [
"Materials_science"
] | 1,338 | [
"Nanotechnology",
"Nanomaterials"
] |
49,291,242 | https://en.wikipedia.org/wiki/Hyder%20flare | A Hyder flare is a slow, large-scale brightening that occurs in the solar chromosphere. It resembles a large but feeble solar flare and is identifiable as the signature of the sudden disappearance of a solar prominence (a "disparition brusque"). These events occur in the quiet Sun, away from active regions or sunspot groups, and typically in the polar crown filament zone near the Sun's poles. Hyder flares have a two-ribbon morphology and can be faintly observed in chromospheric emission lines such as Hα or as enhanced absorption in the He I 1083 nm line.
Hyder flares are caused by the unstable eruption of a magnetic filament channel; the filament rises and may escape from the Sun as a part of a coronal mass ejection, and the visible flare marks the magnetic connectivity of the coronal disturbance.
Unlike active region flares, Hyder flares take much longer to reach peak intensity, as much as 30 to 80 minutes, and can then continue for several hours. They are rather weak and, unlike stronger solar flares, have not caused interference with communications on Earth.
The discovery of Hyder flares is mainly associated with Charles Hyder, who developed the mechanism describing them in 1967. Some researchers disagree with Hyder's findings, and especially with his proposed mechanism explaining what actually produces the flare.
Although such events are rare, a notable occurrence on November 1, 2014, confirmed that they display special characteristics distinguishing them from ordinary solar flares.
Cause
One explanation for these flares comes from Hyder's two-fold observations.
First, such flares tend to have a parallel double-ribbon shape, with one ribbon on either side of the magnetic polarity inversion line under a filament.
Second, these flares tend not to be associated with geomagnetic storms.
Quiescent filaments have been believed to belong to a magnetic trough, which can disappear due to the field's reconfiguration. When this happens, the filamentary material is said to be thrown into the corona, creating a typical solar flare. Hyder explained that the process for Hyder flares differs, in that the filamentary material sometimes instead cascades down the outer sides of the elevated magnetic trough, or ridge, to interact with the lower chromospheric material, producing the flare. If this falling process is not symmetrical on either side, a double parallel ribbon will form, whereas a symmetrical fall will produce only a single parallel ribbon. A sporadic or insufficient fall of filamentary material will produce bright flare knots.
History
Hyder flares were first observed by Max Waldmeier in 1938, who wrote a paper describing the phenomenon of suddenly disappearing filaments (disparition brusque), and mentioned that these can be associated with flare-like brightenings.
Substantial follow-up research was not completed until Charles Hyder published two papers in 1967 in the journal Solar Physics, in which a proposed mechanism underlying Hyder flares was discussed in detail.
The Hyder mechanism immediately came into controversy, most notably from Harold Zirin. Zirin questioned the filament falling down the side of the magnetic ridge, stating that magnetic reconfigurations will always create ejection. Comparisons to Hyder's 1968 publications were discussed in Harold Zirin and D. Russo Lackner's paper "The Solar Flares of August 28 and 30, 1966" (Solar Physics, Volume 6, Issue 1, pages 86–103).
Occurrences
As Hyder flares are notably rare, few occurrences have been recorded since their discovery. The most notable event took place between 04:00 and 06:00 UTC on November 1, 2014, and was classified as a C-class flare. Scientists noted that the eruption caused plasma to be accelerated towards the Sun, which produced several flashes of X-rays upon impact. The remaining plasma was ejected into interplanetary space and formed the large core of a coronal mass ejection.
Hazards
Hyder flares are generally lower in intensity than active region flares, and it is commonly accepted that they pose no immediate threat to Earth. However, these flares can potentially affect space weather, which could disrupt electronics. Because of this, precautions must be taken to prevent damage to airplane navigation and government technologies.
References
Solar phenomena | Hyder flare | [
"Physics"
] | 905 | [
"Physical phenomena",
"Stellar phenomena",
"Solar phenomena"
] |
49,292,146 | https://en.wikipedia.org/wiki/Neuro%20biomechanics | Neuro biomechanics is a field dedicated to the general study of human movement from various basic perspectives: musculo-skeletal functional anatomy, CNS and neuro-muscular physiology, physics, and control theory with cybernetics and computer science. It is based upon research in bioengineering, neurosurgery, orthopedic surgery and biomechanics. Neuro biomechanics is utilized by neurosurgeons, orthopedic surgeons and primarily by integrated physical medicine practitioners. Practitioners are focused on aiding people in the restoration of the biomechanics of the skeletal system in order to measurably improve nervous system function, health and quality of life, and to reduce pain and the progression of degenerative joint and disc disease.
Neuro: of or having to do with the nervous system. Nervous system: An organ system that coordinates the activities of muscles, monitors organs, constructs and processes data received from the senses and initiates actions. The human nervous system coordinates the functions of itself and all organ systems including but not limited to the cardiovascular system, respiratory system, skin, digestive system, immune system, hormonal, metabolic, musculoskeletal, endocrine system, blood and reproductive system. Optimal function of the organism as a whole depends upon the proper function of the nervous system.
Biomechanics: (biology, physics) The branch of biophysics that deals with the mechanics of the human or animal body, especially concerned with muscles and the skeleton. The study of biomechanical influences upon nervous system function and load-bearing joints.
Research:
Research on established ideal mechanical models for the human locomotor system.
Panjabi MM, Journal of Biomechanics, 1974. A note on defining body parts configurations
Gracovetsky S. Spine 1986; The Optimum Spine
Yoganandan, Spine 1996
Harrison DE. Spine 2004: Modeling of the Sagittal Cervical Spine as a Method to Discriminate Hypolordosis: Results of Elliptical and Circular Modeling in 72 Asymptomatic Subjects, 52 Acute Neck Pain Subjects, and 70 Chronic Neck Pain Subjects
Panjabi et al. Spine 1997: Whiplash produces an S-shaped curve...
Harrison DE, JMPT 2003: Increasing the Cervical Lordosis with CBP Seated Combined Extension-Compression and Transverse Load Cervical Traction with Cervical Manipulation: Non-randomized Clinical Control Trial
Harrison. Journal of Spinal Disorders 1998: Elliptical Modeling of the Sagittal Lumbar Lordosis and Segmental Rotation Angles as a Method to Discriminate Between Normal and Low Back Pain Subjects.
Gelb DE. Spine 1995: An Analysis of Sagittal Spinal Alignment in 100 Asymptomatic Middle and Older Aged Volunteers.
Janik TJ, Journal Orthop Res, 1998, Can the sagittal lumbar curvature be closely approximated by an ellipse?
Harrison DE. Spine 2001: Methods for Cervical Mensuration Analyzed
Voutsinas SA, Clinical Orthopedics 1986
Bernhardt M. Spine 1989: Segmental Analysis of the Sagittal Plane Alignment of the Normal Thoracic and Lumbar Spines and Thoracolumbar Junction.
Harrison DE, Archives of Physical Medicine and Rehabilitation 2002: A new 3-point bending traction method for restoring cervical lordosis and cervical manipulation: a nonrandomized clinical controlled trial
Helliwell PS, Journal of Bone and Joint Surgery, 1994 The straight cervical spine: does it indicate muscle spasm?
Banks, R. Journal of Crash Prevention and Injury Control 2000: Alignment of the lumbar vertebrae in a driving posture
Troyanovich SJ, JMPT 1998 Structural rehabilitation of the spine and posture:
Sheng-Yun L. PLoS One 2014: Comparison of Modic Changes in the Lumbar and Cervical Spine, in 3167 Patients with and without Spinal Pain.
Harrison DE. Journal of Spinal Disorders & Techniques: 2002, Can the Thoracic Kyphosis Be Modeled With a Simple Geometric Shape?: The Results of Circular and Elliptical Modeling in 80 Asymptomatic Patients
Research regarding primary non-surgical treatment:
Review of surgical outcomes regarding biomechanics, biomechanical effects on neurologic function.
Treatment:
Non-Surgical
Surgical
References
Biological engineering
Biomechanics | Neuro biomechanics | [
"Physics",
"Engineering",
"Biology"
] | 882 | [
"Biomechanics",
"Biological engineering",
"Mechanics"
] |
49,292,255 | https://en.wikipedia.org/wiki/Edifenphos | Edifenphos (O-ethyl-S,S-diphenyldithiophosphate, EDDP) is a systemic fungicide that inhibits phosphatidylcholine biosynthesis. It was introduced in 1966 by Bayer to combat blast fungus and Pellicularia sasakii in rice cultivation. It was never authorized for use in the EU.
References
Fungicides
Organothiophosphate esters
Ethyl esters | Edifenphos | [
"Chemistry",
"Biology"
] | 98 | [
"Fungicides",
"Organic compounds",
"Biocides",
"Organic compound stubs",
"Organic chemistry stubs"
] |
49,292,744 | https://en.wikipedia.org/wiki/Astronomy%20education | Astronomy education or astronomy education research (AER) refers both to the methods currently used to teach the science of astronomy and to an area of pedagogical research that seeks to improve those methods. Specifically, AER includes systematic techniques honed in science and physics education to understand what and how students learn about astronomy and determine how teachers can create more effective learning environments.
Education is important to astronomy as it impacts both the recruitment of future astronomers and the appreciation of astronomy by citizens and politicians who support astronomical research. Astronomy has been taught throughout much of recorded human history, and has practical application in timekeeping and navigation. Teaching astronomy contributes to an understanding of physics and the origin of the world around us, a shared cultural background, and a sense of wonder and exploration. It includes education of the general public through planetariums, books, and instructive presentations, plus programs and tools for amateur astronomy, and university-level degree programs for professional astronomers. Astronomical organizations and societies in about 100 nation states around the world provide educational functions.
In schools, particularly at the collegiate level, astronomy is aligned with physics and the two are often combined to form a Department of Physics and Astronomy. Some parts of astronomy education overlap with physics education; however, astronomy education has its own arenas, practitioners, journals, and research. This can be seen in the identified 20-year lag between the emergence of AER and that of physics education research. The body of research in this field is available through electronic sources such as the Searchable Annotated Bibliography of Education Research (SABER) and the American Astronomical Society's database of the contents of their journal "Astronomy Education Review" (see link below).
The National Aeronautics and Space Administration (NASA) has also created a Center for Astronomy Education, a program designed to support the professional development of astronomy instructors through the NASA JPL Exoplanet Exploration Public Engagement Program and the Spitzer Education and Outreach Program.
See also
European Association for Astronomy Education
Universe Awareness
Astronomical Society of the Pacific
References
External links
Homepage of IAU Commission 46 on Astronomy Education
Astronomy Education Resources at the Astronomical Society of the Pacific
Resource Guides for Astronomy Education
Subject Index to the journal "Astronomy Education Review"
iSTAR international Studies of Astronomy education Research Database of Articles, Theses, & Dissertations
Physics education
Science education | Astronomy education | [
"Physics",
"Astronomy"
] | 464 | [
"Astronomy education",
"Applied and interdisciplinary physics",
"Physics education"
] |
49,293,529 | https://en.wikipedia.org/wiki/Natural%20resistance-associated%20macrophage%20protein | Natural resistance-associated macrophage proteins (Nramps), also known as metal ion (Mn2+-iron) transporters (TC# 2.A.55), are a family of metal transport proteins found throughout all domains of life. Taking on an eleven-helix LeuT fold, the Nramp family is a member of the large APC Superfamily of secondary carriers. They transport a variety of transition metals such as manganese, iron, and cadmium using an alternating access mechanism characteristic of secondary transporters.
The name "natural resistance-associated" macrophage proteins arises from the role in resistance of intracellular bacterial pathogens played by some animal homologs. Several human pathologies may result from defects in Nramp-dependent Fe2+ or Mn2+ transport, including iron overload, neurodegenerative diseases and innate susceptibility to infectious diseases.
Human homologs
Humans and rodents possess two distinct Nramp proteins. The broad-specificity NRAMP2 (DMT1) transports a range of divalent metal cations. Studies have shown that it transports Fe2+ and H+ with a 1:1 stoichiometry and apparent affinities of 6 μM and about 1 μM, respectively. Variable H+:Fe2+ stoichiometry has also been reported. The order of substrate preference for NRAMP2 is:
Fe2+ > Zn2+ > Mn2+ > Co2+ > Ca2+ > Cu2+ > Ni2+ > Pb2+
Many of these ions can inhibit iron absorption. Mutation of NRAMP2 in rodents leads to defective endosomal iron export within the transferrin cycle, impaired intestinal iron absorption and microcytic anemia. Symptoms of Mn2+ deficiency are also seen. It is found in apical membranes of intestinal epithelial cells but also in late endosomes and lysosomes.
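Apparent affinities such as the 6 μM figure above are K_M values in a saturable, Michaelis-Menten-like description of carrier transport. The following is a minimal sketch of that relation; the maximal rate is normalized to 1 and is not a value from this article.

```python
# Minimal sketch of saturable carrier kinetics, v = Vmax * [S] / (Km + [S]),
# using the apparent Fe2+ affinity quoted above (Km ~ 6 uM). Vmax is an
# arbitrary normalized value for illustration.

def transport_rate(substrate_uM, km_uM=6.0, vmax=1.0):
    """Michaelis-Menten transport rate at a given substrate concentration."""
    return vmax * substrate_uM / (km_uM + substrate_uM)

for conc in (1.0, 6.0, 60.0):
    print(f"[Fe2+] = {conc:5.1f} uM -> v/Vmax = {transport_rate(conc):.2f}")
# At [S] = Km the carrier runs at half its maximal rate (v/Vmax = 0.50).
```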
In contrast to the widely expressed NRAMP2, NRAMP1 is expressed primarily in macrophages and monocytes and appears to have a preference for Mn2+ rather than Fe2+. NRAMP1 (TC# 2.A.55.2.3) has been reported to function by metal:H+ antiport. It is hypothesized that a deficiency of Mn2+ or some other metal prevents the generation of the reactive oxygen and nitrogen compounds that are used by macrophages to combat pathogens. This hypothesis is supported by studies on the bacterial Nramp homologs, which exhibit extremely high selectivity for Mn2+ over Fe2+, Zn2+ and other divalent cations. Regulation of these transporters in bacteria can occur through Fur, OxyR, and most commonly a DtxR homolog, MntR.
Smf and other homologues
The Smf1 protein of Saccharomyces cerevisiae appears to catalyze high-affinity (KM = 0.3 μM) Mn2+ uptake, while the closely related Smf2 protein may catalyze low-affinity (KM = 60 μM) Mn2+ uptake in the same organism. Both proteins also mediate H+-dependent Fe2+ uptake. These proteins are 575 and 549 amino acyl residues in length, respectively, and are predicted to have 8-12 transmembrane α-helical spanners. The E. coli homologue of 412 aas exhibits 11 putative and confirmed TMSs with the N-terminus in and the C-terminus out. The yeast proteins may be localized to the vacuole and/or the plasma membrane of the yeast cell. Indirect and some direct experiments suggest that they may be able to transport several heavy metals including Mn2+, Cu2+, Cd2+ and Co2+. A third yeast protein, Smf3p, appears to be exclusively intracellular, possibly in the Golgi. NRAMP2 (Slc11A2) of Homo sapiens (TC# 2.A.55.2.1) has a 12 TMS topology with intracellular N- and C-termini. Two-fold structural symmetry in the arrangement of membrane helices for TM1-5 and TM6-10 (the conserved Slc11 hydrophobic core) is suggested.
Transport reaction
The generalized transport reaction catalyzed by NRAMP family proteins is:
Me2+ (out) + H+ (out) ⇌ Me2+ (in) + H+ (in).
Structure and Mechanism
All Nramp proteins have eleven to twelve transmembrane helices, the first ten of which form a canonical LeuT fold, common throughout the APC superfamily. Metal uptake in Nramp proteins is typically stimulated by acidic pH and accompanied by proton influx, although many homologs have also shown proton uniport. In the Deinococcus radiodurans homolog this has been explained as the result of spatially segregated metal and proton pathways that rely on a longer-range allosteric connection rather than the direct structural connection seen in canonical symporters. Metal uptake requires a bulk alternating-access conformational change, in which the protein changes from an outward-open state to an inward-open state upon metal binding, while proton uptake can occur through a simpler channel-like mechanism.
See also
Solute carrier family
Ferroportin
References
Protein families
Membrane proteins
Transmembrane proteins
Transmembrane transporters
Transport proteins
Integral membrane proteins | Natural resistance-associated macrophage protein | [
"Biology"
] | 1,130 | [
"Protein families",
"Protein classification",
"Membrane proteins"
] |
49,293,854 | https://en.wikipedia.org/wiki/Axis%20of%20evil%20%28cosmology%29 | The "axis of evil" is a name given to an unsubstantiated correlation between the plane of the Solar System and aspects of the cosmic microwave background (CMB). It gives the plane of the Solar System and hence the location of Earth a greater significance than might be expected by chance – a result which has been claimed to be evidence of a departure from the Copernican principle. A 2016 study compared isotropic and anisotropic cosmological models against WMAP and Planck data and found no evidence for anisotropy.
Overview
The cosmic microwave background (CMB) radiation signature presents a direct large-scale view of the universe that can be used to identify whether our position or movement has any particular significance. There has been much publicity about analysis of results from the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck mission that show both expected and unexpected anisotropies in the CMB. The motion of the solar system and the orientation of the plane of the ecliptic are aligned with features of the microwave sky, which appear to be caused by structure at the edge of the observable universe. Specifically, with respect to the ecliptic plane, the "top half" of the CMB is slightly cooler than the "bottom half"; furthermore, the quadrupole and octupole axes are only a few degrees apart, and these axes are aligned with the top/bottom divide.
Lawrence Krauss is quoted as follows in a 2006 Edge.org article:
The new results are either telling us that all of science is wrong and we're the center of the universe, or maybe the data is simply incorrect, or maybe it's telling us there's something weird about the microwave background results and that maybe, maybe there's something wrong with our theories on the larger scales.
Observations
Some anomalies in the background radiation have been reported which are aligned with the plane of our solar system. These are unexplained by the Copernican principle and suggest that the solar system's alignment is special in relation to the background radiation of the universe. Land and Magueijo in 2005 dubbed this alignment the "axis of evil" owing to the implications for current models of the cosmos, although several later studies have shown systematic errors in the collection of those data and the way they have been processed. Various studies of the CMB anisotropy data either confirm the Copernican principle, model the alignments in a non-homogeneous universe still consistent with the principle, or attempt to explain them as local phenomena. Some of these alternate explanations were discussed by Copi et al., who claimed that data from the Planck satellite could shed significant light on whether the preferred direction and alignments were spurious. Coincidence is a possible explanation. WMAP chief scientist Charles L. Bennett suggested coincidence and human psychology were involved: "I do think there is a bit of a psychological effect; people want to find unusual things."
Data from the Planck telescope published in 2013 have since provided stronger evidence for the anisotropy. "For a long time, part of the community was hoping that this would go away, but it hasn't", says Dominik Schwarz of the University of Bielefeld in Germany.
As of 2015, there was no consensus on the nature of this and other observed anomalies, and their statistical significance remained unclear. For example, a study that includes the Planck mission results shows how masking techniques could introduce errors that, when taken into account, can render several anomalies, including the axis of evil, not statistically significant. A 2016 study compared isotropic and anisotropic cosmological models against WMAP and Planck data and found no evidence for anisotropy.
See also
List of unsolved problems in physics
References
Radio astronomy
Astronomical radio sources
Cosmic background radiation
Astrophysics
Inflation (cosmology)
Observational astronomy
Physical cosmological concepts | Axis of evil (cosmology) | [
"Physics",
"Astronomy"
] | 812 | [
"Physical cosmological concepts",
"Astronomical radio sources",
"Concepts in astrophysics",
"Astronomical events",
"Observational astronomy",
"Astrophysics",
"Radio astronomy",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
49,295,086 | https://en.wikipedia.org/wiki/Mirvetuximab%20soravtansine | Mirvetuximab soravtansine, sold under the brand name Elahere, is a medication used as a treatment for epithelial ovarian cancer, fallopian tube cancer, or primary peritoneal cancer. Mirvetuximab soravtansine is a folate receptor alpha directed antibody and microtubule inhibitor conjugate.
The most common adverse reactions, including laboratory abnormalities, were vision impairment, fatigue, increased aspartate aminotransferase, nausea, increased alanine aminotransferase, keratopathy, abdominal pain, decreased lymphocytes, peripheral neuropathy, diarrhea, decreased albumin, constipation, increased alkaline phosphatase, dry eye, decreased magnesium, decreased leukocytes, decreased neutrophils, and decreased hemoglobin.
Mirvetuximab soravtansine was approved for medical use in the United States in November 2022. The US Food and Drug Administration (FDA) considers it to be a first-in-class medication.
Medical uses
Mirvetuximab soravtansine is indicated for the treatment of adults with folate receptor alpha (FRα) positive, platinum-resistant epithelial ovarian cancer, fallopian tube cancer, or primary peritoneal cancer, who have received one to three prior systemic treatment regimens. Recipients are selected for therapy based on an FDA-approved test.
Adverse effects
The product labeling includes a boxed warning for ocular toxicity.
History
Efficacy was evaluated in Study 0417 (NCT04296890), a single-arm trial of 106 participants with FRα-positive, platinum-resistant epithelial ovarian, fallopian tube, or primary peritoneal cancer. Participants were permitted to receive up to three prior lines of systemic therapy. All participants were required to have received bevacizumab. The trial enrolled participants whose tumors were positive for FRα expression as determined by an FDA-approved assay. Participants were excluded if they had corneal disorders, ocular conditions requiring ongoing treatment, Grade >1 peripheral neuropathy, or noninfectious interstitial lung disease.
Efficacy was evaluated in Study 0416 (MIRASOL, NCT04209855), a multicenter, open-label, active-controlled, randomized, two-arm trial in 453 participants with platinum-resistant epithelial ovarian, fallopian tube, or primary peritoneal cancer. Participants were permitted to receive up to three prior lines of systemic therapy. The trial enrolled participants whose tumors were positive for FRα expression as determined by the VENTANA FOLR1 (FOLR1-2.1) RxDx Assay. Participants were randomized (1:1) to receive mirvetuximab soravtansine-gynx 6 mg/kg (based on adjusted ideal body weight) as an intravenous infusion every 3 weeks or investigator’s choice of chemotherapy (paclitaxel, pegylated liposomal doxorubicin, or topotecan) until disease progression or unacceptable toxicity. The results from this trial satisfy the post-marketing requirement of the previous accelerated approval for mirvetuximab soravtansine-gynx.
Society and culture
Legal status
In September 2024, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Elahere, intended for the treatment of adults with folate receptor-alpha (FRα) positive epithelial ovarian, fallopian tube and primary peritoneal cancer. The applicant for this medicinal product is AbbVie Deutschland GmbH & Co. KG. Mirvetuximab soravtansine was authorized for use in the European Union in November 2024.
Names
Mirvetuximab soravtansine is the international nonproprietary name (INN).
References
Further reading
External links
Monoclonal antibodies for tumors
Antibody-drug conjugates | Mirvetuximab soravtansine | [
"Biology"
] | 871 | [
"Antibody-drug conjugates"
] |
49,296,685 | https://en.wikipedia.org/wiki/Biogenesis%20of%20lysosome-related%20organelles%20complex%203 | BLOC-3 or biogenesis of lysosome-related organelles complex 3 is a ubiquitously expressed multisubunit protein complex.
Interactions
biogenesis of lysosome-related organelles complex 3 has been shown to interact with Rab9A.
Complex Components
The identified protein subunits of BLOC-3 include:
HPS1,
HPS4
References
Cell biology | Biogenesis of lysosome-related organelles complex 3 | [
"Biology"
] | 80 | [
"Cell biology"
] |
49,301,338 | https://en.wikipedia.org/wiki/Rayleigh%20fractionation | Rayleigh fractionation describes the evolution of a system with multiple phases in which one phase is continuously removed from the system through fractional distillation. It is used in particular to describe isotopic enrichment or depletion as material moves between reservoirs in an equilibrium process. Rayleigh fractionation holds particular importance in hydrology and meteorology as a model for the isotopic differentiation of meteoric water due to condensation.
The Rayleigh equation
The original Rayleigh equation was derived by Lord Rayleigh for the case of fractional distillation of mixed liquids.
This is an exponential relation that describes the partitioning of isotopes between two reservoirs as one reservoir decreases in size. The equations can be used to describe an isotope fractionation process if: (1) material is continuously removed from a mixed system containing molecules of two or more isotopic species (e.g., water with 18O and 16O, or sulfate with 34S and 32S), (2) the fractionation accompanying the removal process at any instance is described by the fractionation factor α, and (3) α does not change during the process. Under these conditions, the evolution of the isotopic composition in the residual (reactant) material is described by:
R = R0 (X/X0)^(α−1)
where R = ratio of the isotopes (e.g., 18O/16O) in the reactant, R0 = initial ratio, X = the concentration or amount of the more abundant (lighter) isotope (e.g., 16O), and X0 = initial concentration. Because the concentration X >> Xh (the heavier isotope concentration), X is approximately equal to the amount of original material in the phase. Hence, if f = X/X0 is the fraction of material remaining, then:
R = R0 f^(α−1)
For large changes in concentration, such as occur during, for example, the distillation of heavy water, these formulae need to be integrated over the distillation trajectory. For small changes, such as occur during transport of water vapour through the atmosphere, the differentiated equation will usually be sufficient.
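As a minimal numerical illustration, the following Python sketch evaluates the Rayleigh equation as a vapour reservoir is progressively depleted. The fractionation factor (about 1.0094, approximating 18O/16O liquid-vapour equilibrium near 25 °C) and the VSMOW initial ratio are assumptions chosen for the sketch, not values from the text.

```python
import numpy as np

alpha = 1.0094            # assumed 18O/16O fractionation factor (liquid-vapour)
R0 = 2.0052e-3            # initial 18O/16O ratio (VSMOW reference value)

f = np.linspace(1.0, 0.05, 20)    # fraction of the original reservoir remaining
R = R0 * f ** (alpha - 1.0)       # Rayleigh equation for the residual reservoir

# Depletion in the usual delta notation, in per mil relative to R0.
delta = (R / R0 - 1.0) * 1e3
for fi, di in zip(f[::5], delta[::5]):
    print(f"f = {fi:4.2f}   delta = {di:+7.2f} per mil")
```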
See also
Isotope analysis
References
Chemistry
Hydrology
Atmospheric chemistry | Rayleigh fractionation | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 430 | [
"Hydrology",
"nan",
"Environmental engineering"
] |
49,303,418 | https://en.wikipedia.org/wiki/C15H14N2O2 | The molecular formula C15H14N2O2 may refer to:
Licarbazepine
Nepafenac
Pyrrolidonyl-β-naphthylamide
Molecular formulas | C15H14N2O2 | [
"Physics",
"Chemistry"
] | 57 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
49,305,863 | https://en.wikipedia.org/wiki/Strength%20of%20Materials%20%28journal%29 | Strength of Materials is a bimonthly peer-reviewed scientific journal covering the field of strength of materials and structural elements, and the mechanics of deformable solid bodies. It was established in 1969 and is published by Springer Science+Business Media on behalf of the Pisarenko Institute of Problems of Strength of the National Academy of Sciences of Ukraine. The editor-in-chief is V.V. Kharchenko. According to the Journal Citation Reports, the journal has a 2020 impact factor of 0.620.
References
External links
Springer Science+Business Media academic journals
Academic journals established in 1969
Multilingual journals
Bimonthly journals
Materials science journals
Academic journals published in Ukraine | Strength of Materials (journal) | [
"Materials_science",
"Engineering"
] | 135 | [
"Materials science stubs",
"Materials science journals",
"Materials science journal stubs",
"Materials science"
] |
36,174,770 | https://en.wikipedia.org/wiki/HARPS-N | HARPS-N, the High Accuracy Radial velocity Planet Searcher for the Northern hemisphere, is a high-precision radial-velocity spectrograph installed at the Italian Telescopio Nazionale Galileo, a 3.58-metre telescope located at the Roque de los Muchachos Observatory on the island of La Palma, Canary Islands, Spain.
HARPS-N is the counterpart for the Northern Hemisphere of the similar HARPS instrument installed on the ESO 3.6 m Telescope at La Silla Observatory in Chile. It allows for planetary research in the northern sky, which hosts the Cygnus and Lyra constellations. In particular, it allows for detailed follow-up research on Kepler mission planet candidates, which are located in the Cygnus constellation region.
The instrument's main scientific goals are the discovery and characterization of terrestrial super-Earths by combining measurements from transit photometry and Doppler spectroscopy, which together provide both the size and the mass of an exoplanet. Based on the resulting density, rocky (terrestrial) super-Earths can be distinguished from gaseous exoplanets.
The HARPS-N Project is a collaboration between the Geneva Observatory (lead), the Center for Astrophysics in Cambridge (Massachusetts), the Universities of St. Andrews and Edinburgh, the Queen's University Belfast, the UK Astronomy Technology Centre and the Italian Istituto Nazionale di Astrofisica.
First light on sky
First light on sky was obtained by HARPS-N on March 27, 2012, and official operations started on August 1, 2012.
See also
ESPRESSO
Euler Telescope
Geneva Extrasolar Planet Search
Next-Generation Transit Survey
SuperWASP
References
External links
Official web page of the HARPS-N Project
HARPS-N page in the TNG web site
Astronomical instruments
Telescope instruments
Exoplanet search projects
Spectrographs | HARPS-N | [
"Physics",
"Chemistry",
"Astronomy"
] | 378 | [
"Spectroscopy stubs",
"Exoplanet search projects",
"Spectrum (physical sciences)",
"Telescope instruments",
"Astronomy stubs",
"Spectrographs",
"Astronomical instruments",
"Astronomy projects",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs"
] |
36,177,989 | https://en.wikipedia.org/wiki/Dark%20radiation | Dark radiation (also dark electromagnetism) is a postulated type of radiation that mediates interactions of dark matter.
By analogy to the way photons mediate electromagnetic interactions between particles in the Standard Model (called baryonic matter in cosmology), dark radiation is proposed to mediate interactions between dark matter particles. Similar to dark matter particles, the hypothetical dark radiation does not interact with Standard Model particles.
There has been no notable evidence for the existence of such radiation; baryonic matter contains multiple interacting particle types, but it is not known if dark matter does. Cosmic microwave background data may indicate that the number of effective neutrino degrees of freedom is more than 3.046, which is slightly more than the standard case for 3 types of neutrino. This extra degree of freedom could arise from having a non-trivial amount of dark radiation in the universe. One possible candidate for dark radiation is the sterile neutrino.
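In standard cosmology, each effective neutrino species contributes a fixed fraction, (7/8)(4/11)^(4/3), of the photon energy density after electron-positron annihilation, so an excess ΔNeff maps directly to a dark-radiation energy density. The following Python sketch evaluates this relation; the ΔNeff value used is purely illustrative.

```python
# Per-species energy density relative to photons after e+e- annihilation.
ratio_per_species = (7.0 / 8.0) * (4.0 / 11.0) ** (4.0 / 3.0)   # ~0.2271

def dark_radiation_density_ratio(dN_eff):
    """rho_dark_radiation / rho_photons for a given excess dN_eff."""
    return dN_eff * ratio_per_species

print(f"per-species factor: {ratio_per_species:.4f}")
print(f"dN_eff = 0.5 -> rho_dr/rho_gamma = {dark_radiation_density_ratio(0.5):.4f}")
```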
See also
References
Radiation
Dark matter
Dark concepts in astrophysics | Dark radiation | [
"Physics",
"Chemistry",
"Astronomy"
] | 206 | [
"Transport phenomena",
"Dark matter",
"Physical phenomena",
"Unsolved problems in astronomy",
"Concepts in astronomy",
"Unsolved problems in physics",
"Astrophysics",
"Waves",
"Dark concepts in astrophysics",
"Radiation",
"Particle physics",
"Exotic matter",
"Particle physics stubs",
"Phys... |
36,179,328 | https://en.wikipedia.org/wiki/Transgenerational%20design | Transgenerational design is the practice of making products and environments compatible with those physical and sensory impairments associated with human aging and which limit major activities of daily living. The term transgenerational design was coined in 1986, by Syracuse University industrial design professor James J. Pirkl to describe and identify products and environments that accommodate, and appeal to, the widest spectrum of those who would use them—the young, the old, the able, the disabled—without penalty to any group.
The transgenerational design concept emerged from his federally funded design-for-aging research project, Industrial Design Accommodations: A Transgenerational Perspective. The project's two seminal 1988 publications provided detailed information about the aging process; informed and sensitized industrial design professionals and design students about the realities of human aging; and offered a useful set of guidelines and strategies for designing products that accommodate the changing needs of people of all ages and abilities.
Overview
The transgenerational design concept establishes a common ground for those who are committed to integrating age and ability within the consumer population. Its underlying principle is that people, including those who are aged or impaired, have an equal right to live in a unified society.
Transgenerational design practice recognizes that human aging is a continuous, dynamic process that starts at birth and ends with death, and that throughout the aging process, people normally experience occurrences of illness, accidents and declines in physical and sensory abilities that impair one's independence and lifestyle. But most injuries, impairments and disabilities typically occur more frequently as one grows older and experiences the effects of senescence (biological aging). Four facts clarify the interrelationship of age with physical and sensory vulnerability:
young people become old
young people can become disabled
old people can become disabled
disabled people become old
Within each situation, consumers expect products and services to fulfill and enhance their lifestyle, both physically and symbolically. Transgenerational design focuses on serving their needs through what Cagan and Vogel call "a value oriented product development process". They note that a product is "deemed of value to a customer if it offers a strong effect on lifestyle, enabling features, and meaningful ergonomics" resulting in products that are "useful, usable, and desirable" during both short and long term use by people of all ages and abilities.
Transgenerational design is "framed as a market-aware response to population aging that fulfills the need for products and environments that can be used by both young and old people living and working in the same environment".
Benefits
Transgenerational design benefits all ages and abilities by creating a harmonious bond between products and the people that use them. It satisfies the psychological, physiological, and sociological factors desired—and anticipated—by users of all ages and abilities:
Safety
Comfort
Convenience
Usability
Ergonomics
Accommodation
Transgenerational design addresses each element and accommodates the user—regardless of age or ability—by providing a sympathetic fit and unencumbered ease of use. Such designs provide greater accessibility by offering wider options and more choices, thereby preserving and extending one's independence, and enhancing the quality of life for all ages and abilities—at no group's expense.
Transgenerational designs accommodate rather than discriminate and sympathize rather than stigmatize. They do this by:
bridging the transitions across life's stages
responding to the widest range of individual differences
helping people remain active and independent
adapting to changing sensory and physical needs
maintaining one's dignity and self-respect
enabling one to choose the appropriate means to accomplish activities of daily living
History
Transgenerational design emerged during the mid-1980s coincident with the conception of universal design, an outgrowth of the disability rights movement and earlier barrier-free concepts. In contrast, transgenerational design grew out of the Age Discrimination Act of 1975, which prohibited "discrimination on the basis of age in programs and activities receiving Federal financial assistance", or excluding, denying or providing different or lesser services on the basis of age. The ensuing political interest and debate over the Act's 1978 amendments, which abolished mandatory retirement at age 65, made the issues of aging a major public policy concern by injecting it into the mainstream of societal awareness.
Background
At the start of the 1980s, the oldest members of the population, having matured during the Great Depression, were being replaced by a generation of Baby Boomers, steadily reaching middle age and approaching the threshold of retirement. Their swelling numbers signaled profound demographic changes ahead that would steadily expand the aging population throughout the world.
Advancements in medical research were also changing the image of old age—from a social problem of the sick, poor, and senile, whose solutions depend on public policy—to the emerging reality of an active aging population having vigor, resources, and time to apply both.
Responding to the public's growing awareness, the media, public policy, and some institutions began to recognize the impending implications. Time and Newsweek devoted cover stories to the "Greying of America". Local radio stations began replacing their rock-and-roll formats with music targeted to more mature tastes. The Collegiate Forum (Dow Jones & Co., Inc.) devoted its Fall 1982 issue entirely to articles on the aging work force. A National Research Conference on Technology and Aging, and the Office of Technological Assessment of the House of Representatives, initiated a major examination of the impact of science and technology on older Americans.
In 1985, the National Endowment for the Arts, the Administration on Aging, the Farmer's Home Administration, and the Department of Housing and Urban Development signed an agreement to improve building, landscape, product and graphic design for older Americans, which included new research applications for old age that recognized the potential for making products easier to use by the elderly, and therefore more appealing and profitable.
Development
In 1987, recognizing the implications of population aging, Syracuse University’s Department of Design, All-University Gerontology Center, and Center for Instructional Development initiated and collaborated on an interdisciplinary project, Industrial Design Accommodations: A Transgenerational Perspective. The year-long project, supported by a Federal grant, joined the knowledge base of gerontology with the professional practice of industrial design.
The project defined "the three aspects of aging as physiological, sociological, and psychological; and divided the designer’s responsibility into aesthetic, technological, and humanistic concerns".
The strong interrelationship between the physiological aspects of aging and industrial design's humanistic aspects established the project's instructional focus and categorized the physiological aspects of aging as the sensory and physical factors of vision, hearing, touch, and movement. This interrelationship was translated into a series of reference tables, which related specific physical and sensory factors of aging, and were included in the resulting set of design guidelines to:
sensitize designers and design students to the aging process
provide them with appropriate knowledge about this process
accommodate the changing needs of our transgenerational population
The project produced and published two instructional manuals—one for instructors and one for design professionals—each containing a detailed set of "design guidelines and strategies for designing transgenerational products". Under terms of the grant, instructional manuals were distributed to all academic programs of industrial design recognized by the National Association of Schools of Art and Design (NASAD).
Chronology
1988: The term ‘transgenerational design’ first appears to have been publicly recognized and acknowledged by the Bristol-Myers Company in its annual report, which stated, "The trend towards transgenerational design seems to be catching on in some fields", noting that “transgenerational design has the added advantage of circumventing the stigmatizing label of being ‘old’ ”.
1989: The results of the 1987 Federal grant project were first presented at the national conference, Exploration: Technological Innovations for an Aging Population, supported in part by the American Association of Retired Persons (AARP) and the National Institute on Aging. The proceedings focused “on current efforts to address the impact of technology and an aging population, identification of high impact issues and problems, innovative ideas, and potential solutions”.
Also in 1989 Design News, the Japanese design magazine, introduced “the new concept of transgenerational design (for) coping with the needs of an aging population and its strategy”, stating that “the impact will soon be felt by all global institutions” and “alter the present course of industrial design practice and education”.
1990: The OXO company introduced the first group of 15 Good Grips kitchen tools to the U.S. market. "These ergonomically-designed, transgenerational tools set a new standard for the industry and raised the bar to consumer expectation for comfort and performance". Sam Farber, OXO's founder, stated that "population trends demand transgenerational products, products that will be useful to you throughout the course of your life" because "it extends the life of a product and its materials by anticipating the whole experience of the user".
1991: The Fall issue of the Design Management Journal addressed the issue of “Responsible Design” and introduced the transgenerational design concept in the article, “Transgenerational Design: A Strategy Whose Time Has Arrived”. The article presented a description, the rationale, and examples of early transgenerational products, and offered “insights on the rationale and benefits of such a transgenerational approach”.
1993: The September–October issue of AARP The Magazine exposed the transgenerational design concept to the readers in a featured article, "This Bold House", describing the concept, details, and benefits of a transgenerational house. The article noted that "easy-grip handles, flat thresholds, and adjustable-height vanities are just the beginning in the world's most accessible house," providing families of all ages and abilities with "what they will want and need their whole lives".
In November, the transgenerational design concept was introduced in presentations to the European design community at the international symposiums, “Designing for Our Future Selves”, held at the Royal College of Art in London and the Netherlands Design Institute in Rotterdam.
1994: The book, Transgenerational Design: Products for an Aging Population (Pirkl 1994), may be regarded as the prime mover of the widespread acceptance and practice of the transgenerational design concept. It presented the first specialized content and photographic examples of transgenerational products and environments, offering “practical strategies in response to population aging, along with case study examples based on applying a better understanding of age-related capabilities”. It introduced the transgenerational design concept to the international design and gerontology communities, broadening the conventional idea of “environmental support” to include the product environment, sparking scholarly discussions and comparisons with other emerging concepts: (universal design, design for all, inclusive design, and gerontechnology).
1995: The transgenerational design concept was presented at the first of the International Guest Lecture Series by World Experts, sponsored by the European Design for Aging Network (DAN), held consecutively at five international symposiums, "Designing for Our Future Selves": Royal College of Art, London, November 15; Eindhoven University of Technology, Eindhoven, November 16–19; The Netherlands Design Institute, Amsterdam, November 21; University of Art and Design, Helsinki, November 23–25; and National College of Art and Design, Dublin, November 26–29.
2000: “The Transgenerational House: A Case Study in Accessible Design and Construction” was presented in June at ‘’Designing for the 21st Century: An International Conference on Universal Design’’, held at the Rhode Island College of Art and Design, Providence, RI.
2007: Architectural Graphic Standards, published by the American Institute of Architects and commonly referred to as the “architects bible”, presented a “Transgenerational House” case study in its "Inclusive Design" section. Described as an “intricate exploration in how the execution of detailed thought can create a living environment that serves the young and old alike, across generations”, the study includes plans for the room layout, kitchen, laundry, master bath, adjustable-height vanity, and roll-in shower.
2012: The proliferation of transgenerational design has diminished the tendency to associate age and disability with deficit, decline and incompetence by providing a market-aware response to population aging and the need for living and work environments used by young and old people living and working in the same environment.
Continuing to emerge as a growing strategy for developing products, services and environments that accommodate people of all ages and abilities, "transgenerational design has been adopted by major corporations, like Intel, Microsoft and Kodak” who are “looking at product development the same way as designing products for people with visual, hearing and physical impairments,” so that people of any age can use them.
Discussions between designers and marketers are indicating that successful transgenerational design “requires the right balance of upfront research work, solid human factors analysis, extensive design exploration, testing and a lot of thought to get it right”, and that “transgenerational design is applicable to any consumer products company—from appliance manufacturers to electronics companies, furniture makers, kitchen and bath and mainstream consumer products companies”.
See also
Ageless computing
Curb cut effect
Development plan
Disability rights movement
Inclusion (disability rights)
Inclusive design
Sensory friendly
Urban planning
References
Industrial design
Ageing | Transgenerational design | [
"Engineering"
] | 2,766 | [
"Industrial design",
"Design engineering",
"Design"
] |
36,182,069 | https://en.wikipedia.org/wiki/Marasmius%20bulliardii | Marasmius bulliardii is a species of agaric fungus in the family Marasmiaceae. It was first described scientifically by French mycologist Lucien Quélet in 1878.
See also
List of Marasmius species
References
External links
bulliardii
Fungi described in 1878
Fungi of Europe
Fungus species | Marasmius bulliardii | [
"Biology"
] | 63 | [
"Fungi",
"Fungus species"
] |
36,182,167 | https://en.wikipedia.org/wiki/Gold%28III%29%20iodide | Gold(III) iodide is the chemical compound with the formula AuI3. Although AuI3 is predicted to be stable, gold(III) iodide remains an example of a nonexistent or unstable compound. Attempts to isolate pure samples result in the formation of gold(I) iodide and iodine: AuI3 → AuI + I2.
References
Gold(III) compounds
Iodides
Metal halides
Gold–halogen compounds
Hypothetical chemical compounds | Gold(III) iodide | [
"Chemistry"
] | 82 | [
"Inorganic compounds",
"Theoretical chemistry stubs",
"Hypotheses in chemistry",
"Salts",
"Inorganic compound stubs",
"Theoretical chemistry",
"Metal halides",
"Hypothetical chemical compounds"
] |
39,071,360 | https://en.wikipedia.org/wiki/Moment-area%20theorem | The moment-area theorem is an engineering tool used to derive the slope, rotation and deflection of beams and frames. The theorem was developed by Mohr and later formally stated by Charles Ezra Greene in 1873. The method is advantageous for solving problems involving beams, especially those subjected to a series of concentrated loadings or having segments with different moments of inertia.
Theorem 1
The change in slope between any two points on the elastic curve equals the area of the M/EI (moment) diagram between these two points:
θ_B/A = ∫_A^B (M/EI) dx
where,
M = moment
EI = flexural rigidity
θ_B/A = change in slope between points A and B
A, B = points on the elastic curve
Theorem 2
The vertical deviation of a point A on an elastic curve with respect to the tangent which is extended from another point B equals the moment of the area under the M/EI diagram between those two points (A and B). This moment is computed about point A, where the deviation from B to A is to be determined:
t_A/B = ∫_A^B (M/EI) x̄ dx
where,
M = moment
EI = flexural rigidity
x̄ = horizontal distance from point A to an element of area of the M/EI diagram
t_A/B = deviation of tangent at point A with respect to the tangent at point B
A, B = points on the elastic curve
Rule of sign convention
The deviation at any point on the elastic curve is positive if the point lies above the tangent and negative if the point is below the tangent, measured from the left tangent. If θ is in the counterclockwise direction, the change in slope is positive; it is negative if θ is in the clockwise direction.
Procedure for analysis
The following procedure provides a method that may be used to determine the displacement and slope at a point on the elastic curve of a beam using the moment-area theorem.
Determine the reaction forces of a structure and draw the M/EI diagram of the structure.
If only concentrated loads act on the structure, the M/EI diagram is easy to draw and consists of a series of triangular shapes.
If distributed and concentrated loads are mixed, the M/EI diagram will contain parabolic curves, cubic curves, etc.
Then sketch the assumed deflected shape of the structure by inspecting the M/EI diagram.
Find the rotations, changes of slope and deflections of the structure using geometry, as in the numerical sketch below.
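The following minimal Python sketch (with hypothetical beam properties, not taken from the text) applies both theorems to a cantilever with a tip load. Since the tangent at the fixed support is horizontal, Theorem 1 gives the tip slope directly and Theorem 2, taken about the free end, gives the tip deflection; both are checked against the closed-form results PL^2/(2EI) and PL^3/(3EI).

```python
import numpy as np

# Hypothetical cantilever: length L [m], tip load P [N],
# Young's modulus E [Pa], second moment of area I [m^4].
L, P, E, I = 2.0, 1000.0, 200e9, 8.0e-6

x = np.linspace(0.0, L, 10001)        # measured from the fixed end
M = -P * (L - x)                      # bending moment diagram M(x)

# Theorem 1: tip slope = area under M/EI between support and tip.
theta_tip = np.trapz(M / (E * I), x)

# Theorem 2: tip deflection = moment of the M/EI area about the tip,
# so each element is weighted by its distance (L - x) from the tip.
delta_tip = np.trapz((M / (E * I)) * (L - x), x)

print(f"numerical  : slope = {theta_tip:.6e} rad, deflection = {delta_tip:.6e} m")
print(f"closed form: slope = {-P * L**2 / (2 * E * I):.6e} rad, "
      f"deflection = {-P * L**3 / (3 * E * I):.6e} m")
```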
References
External links
Area-Moment Method. (n.d.)
Moment (physics)
Structural analysis | Moment-area theorem | [
"Physics",
"Mathematics",
"Engineering"
] | 471 | [
"Structural engineering",
"Physical quantities",
"Structural analysis",
"Quantity",
"Mechanical engineering",
"Aerospace engineering",
"Moment (physics)"
] |
39,078,701 | https://en.wikipedia.org/wiki/Crack%20closure | Crack closure is a phenomenon in fatigue loading, where the opposing faces of a crack remain in contact even with an external load acting on the material. As the load is increased, a critical value will be reached at which time the crack becomes open. Crack closure occurs from the presence of material propping open the crack faces and can arise from many sources including plastic deformation or phase transformation during crack propagation, corrosion of crack surfaces, presence of fluids in the crack, or roughness at cracked surfaces.
Description
During cyclic loading, a crack will open and close, causing the crack tip opening displacement (CTOD) to vary cyclically in phase with the applied force. If the loading cycle includes a period of negative force or stress ratio (i.e., R < 0), the CTOD will remain equal to zero as the crack faces are pressed together. However, it was discovered that the CTOD can also be zero at other times, even when the applied force is positive, preventing the stress intensity factor from reaching its minimum. Thus, the amplitude of the stress intensity factor range, also known as the crack tip driving force, is reduced relative to the case in which no closure occurs, thereby reducing the crack growth rate. The closure level increases with stress ratio, and at sufficiently high stress ratios the crack faces do not contact and closure does not typically occur.
The applied load will generate a stress intensity factor at the crack tip, producing a crack tip opening displacement, CTOD. Crack growth is generally a function of the stress intensity factor range for an applied loading cycle,
ΔK = K_max − K_min.
However, crack closure occurs when the fracture surfaces are in contact below the opening-level stress intensity factor K_op even under positive load, allowing us to define an effective stress intensity range as
ΔK_eff = K_max − K_op,
which is less than the nominal applied ΔK.
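As an illustration, the following Python sketch combines the effective range with a Paris-type growth law. Elber's empirical closure relation U = 0.5 + 0.4R (reported for 2024-T3 aluminium alloy, roughly valid for -0.1 < R < 0.7) is used to estimate ΔK_eff; the Paris constants C and m are placeholder values, not material data.

```python
def dK_eff(dK, R):
    """Effective stress intensity range via Elber's relation U = 0.5 + 0.4*R."""
    return (0.5 + 0.4 * R) * dK

def growth_rate(dK, R, C=1e-11, m=3.0):
    """da/dN [m/cycle] from a Paris law driven by the effective range."""
    return C * dK_eff(dK, R) ** m

for R in (0.0, 0.3, 0.6):
    print(f"R = {R:.1f}: dK_eff = {dK_eff(10.0, R):5.2f} MPa*sqrt(m), "
          f"da/dN = {growth_rate(10.0, R):.3e} m/cycle")
```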
History
The phenomenon of crack closure was first discovered by Elber in 1970. He observed that a contact between the fracture surfaces could take place even during cyclic tensile loading. The crack closure effect helps explain a wide range of fatigue data, and is especially important in the understanding of the effect of stress ratio (less closure at higher stress ratio) and short cracks (less closure than long cracks for the same cyclic stress intensity).
Crack closure mechanisms
Plasticity-induced crack closure
The phenomenon of plasticity-induced crack closure is associated with the development of residual plastically deformed material on the flanks of an advancing fatigue crack.
The degree of plasticity at the crack tip is influenced by the level of material constraint. The two extreme cases are:
Under plane stress conditions, the piece of material in the plastic zone is elongated, which is mainly balanced by an out-of-the-plane flow of the material. Hence, the plasticity-induced crack closure under plane stress conditions can be expressed as a consequence of the stretched material behind the crack tip, which can be considered as a wedge that is inserted in the crack and reduces the cyclic plastic deformation at the crack tip and hence the fatigue crack growth rate.
Under plane strain conditions and constant load amplitudes, there is no plastic wedge at large distances behind the crack tip. However, the material in the plastic wake is plastically deformed. It is plastically sheared; this shearing induces a rotation of the original piece of material, and as a consequence, a local wedge is formed in the vicinity of the crack tip.
Phase-transformation-induced crack closure
Deformation-induced martensitic transformation in the stress field of the crack tip is another possible cause of crack closure. It was first studied by Pineau and Pelloux, and by Hornbogen, in metastable austenitic stainless steels. These steels transform from the austenitic to the martensitic lattice structure under sufficiently high deformation, which leads to an increase of the material volume ahead of the crack tip. Therefore, compression stresses are likely to arise as the crack surfaces contact each other. This transformation-induced closure is strongly influenced by the size and geometry of the test specimen and of the fatigue crack.
Oxide-induced crack closure
Oxide-induced closure occurs where rapid corrosion occurs during crack propagation. It is caused when the base material at the fracture surface is exposed to gaseous and aqueous atmospheres and becomes oxidized. Although the oxidized layer is normally very thin, under continuous and repetitive deformation, the contaminated layer and the base material experience repetitive breaking, exposing even more of the base material, and thus produce even more oxides. The oxidized volume grows and is typically larger than the volume of the base material around the crack surfaces. As such, the volume of the oxides can be interpreted as a wedge inserted into the crack, reducing the effect stress intensity range. Experiments have shown that oxide-induced crack closure occurs at both room and elevated temperature, and the oxide build-up is more noticeable at low R-ratios and low (near-threshold) crack growth rates.
Roughness-induced crack closure
Roughness induced closure occurs with Mode II or in-plane shear type of loading, which is due to the misfit of the rough fracture surfaces of the crack’s upper and lower parts. Due to the anisotropy and heterogeneity in the micro structure, out-of-plane deformation occurs locally when Mode II loading is applied, and thus microscopic roughness of fatigue fracture surfaces is present. As a result, these mismatch wedges come into contact during the fatigue loading process, resulting in crack closure. The misfit in the fracture surfaces also takes place in the far field of the crack, which can be explained by the asymmetric displacement and rotation of material.
Roughness-induced crack closure is significant when the roughness of the fracture surface is of the same order as the crack opening displacement. It is influenced by such factors as grain size, loading history, material mechanical properties, load ratio and specimen type.
References
Fracture mechanics | Crack closure | [
"Materials_science",
"Engineering"
] | 1,183 | [
"Structural engineering",
"Materials degradation",
"Materials science",
"Fracture mechanics"
] |
39,080,774 | https://en.wikipedia.org/wiki/Stabilized%20inverse%20Q%20filtering | Stabilized inverse Q filtering is a data processing technology for enhancing the resolution of reflection seismology images in a way that takes the stability of the method into account. Q is the anelastic attenuation factor or the seismic quality factor, a measure of the energy loss as the seismic wave moves. When making computations with a seismic model, we always have to consider the problem of instability and try to obtain a stabilized solution for seismic inverse Q filtering.
Basics
When a wave propagates through subsurface materials, both energy dissipation and velocity dispersion take place. Inverse Q filtering is a method to restore the energy loss due to energy dissipation (amplitude compensation) and to correct the time shift of the data due to velocity dispersion.
Wang has written an excellent book on the subject of inverse Q filtering, Seismic inverse Q filtering (2008), and discuss the subject of stabilizing the method. He writes:
“The phase-only inverse Q filter mentioned above is unconditionally stable. However, if including the accompanying amplitude compensation in the inverse Q filter, stability is a major issue of concern in implementation.”
Hale (1981) found that the inverse Q filter overcompensated the amplitudes for the later events in a seismic trace. Therefore, in order to obtain reasonable amplitudes, the amplitude spectrum of the computed filter has to be clipped at some maximum gain to prevent undue amplification at later times. On the basis of this concept, Wang proposed a stabilized inverse Q filtering approach that was able to compensate simultaneously for both attenuation and dispersion. The unclipped version of Wang's solution is presented in the wikipedia article seismic inverse Q filtering. The solution is based on the theory of wavefield downward continuation. Here I will compute a clipped version by introducing low-pass filtering. Both Hale and Wang introduced low-pass filtering as a method for stabilization.
Calculations
We have the equation for seismic inverse Q filtering from Wang:
Time is denoted τ, frequency is w and i is the imaginary unit. Qr and wr are reference values representing damping and frequency for a certain frequency. To demonstrate stability we can simply bypass using a reference frequency and get a more simple equation:
The sum of these plane waves gives the time-domain seismic signal,
On figure 1 is presented the solution of (2/2.b) for a seismic model for different Q-values, which clearly indicates the numerical instability. The numbers on top of figure 1 correspond to the Q numbers, 1 = Q1, 2 = Q2, etc. The results are close to the results presented in Wang's book (each trace is scaled individually, so artefacts are stronger on trace 5 than on trace 4). However, Wang also considered phase compensation. Computations here are for amplitude-only inversion, since phase compensation is unnecessary for demonstrating instability because it is always stable.
Low-passfiltering and inverse Q-filtering
In practice, the artefacts caused by numerical instability can be suppressed by a low-pass filter. Hale wrote that the unclipped IQF of a seismogram amplified the Nyquist frequency by a factor of 7×10^6 when the ratio t/Q = 10, and concluded that for typical seismograms with lengths longer than 1000 samples and Q values around 100, data is seldom pure enough to warrant the use of unclipped IQF. Wang introduced a cutoff frequency to set up a criterion for the stabilization by a mathematical formula. However, considering Hale's article, it could be sufficient to simply remove the Nyquist frequency, that is, to let a frequency close to the Nyquist frequency be the cutoff frequency. On fig.2 we see a seismic model giving us benchmark data for inverse Q-filtering (red graph). We will see that IQF of this model will amplify the Nyquist frequency by a factor of a little less than 5×10^6.
Figure 3 is the amplitude-only inverse Q filtered trace of figure 2 for Q=50 (trace 4). The result clearly indicates the numerical instability. Artefacts are seen through the whole trace.
We will try to remove the artefacts by applying a low-pass filter to the trace of figure 3. We used MATLAB's signal processing tools and created the low-pass filter (a zero-phase IIR filter) of fig.4, with a cutoff frequency at 120 Hz. The amplitude response of the filter is in blue and the phase in green.
The result of filtering trace on fig.3 with the low-passfilter of fig.4 is shown on fig.5. All artefacts are removed and we are left with the impulse response that can be compared with the original model on fig.2.
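A minimal Python sketch of this stabilization idea is given below; it applies the amplitude-only compensation exp(wΔτ/2Q) of equation (2) in the frequency domain and clips the gain at a maximum value, as suggested by Hale. The trace, sampling interval, Q and maximum gain are illustrative assumptions, not data from the figures.

```python
import numpy as np

def inverse_q_amplitude(trace, dt, Q, tau, max_gain=100.0):
    """Clipped amplitude-only inverse Q filter for one compensation time tau."""
    n = len(trace)
    w = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)   # angular frequencies >= 0
    gain = np.exp(w * tau / (2.0 * Q))           # unclipped gain grows without bound
    gain = np.minimum(gain, max_gain)            # clipping suppresses the instability
    return np.fft.irfft(np.fft.rfft(trace) * gain, n)

# Synthetic attenuated trace standing in for a recorded seismogram.
rng = np.random.default_rng(0)
trace = rng.standard_normal(1000) * np.exp(-0.004 * np.arange(1000))
restored = inverse_q_amplitude(trace, dt=0.002, Q=50.0, tau=1.0)
print(restored[:5])
```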
Frequency-response
A study of the frequency response of the trace of figure 3 (unclipped) and figure 5 (clipped) will give more insight into the filtering process. Figure 6 shows the magnitude of the frequency response as a function of digital frequency before filtering. This representation gives a good picture of what happens around the Nyquist frequency when filtering with the low-pass filter is done. Unstable energy is accumulated close to the Nyquist frequency. After filtering, the unstable energy around the Nyquist frequency is completely removed, and fig.7 gives the frequency response of the impulse response of fig.5.
Notes
References
External links
Stabilized inverse Q filtering by Knut Sørsdal
Seismology measurement
Geophysics | Stabilized inverse Q filtering | [
"Physics"
] | 1,117 | [
"Applied and interdisciplinary physics",
"Geophysics"
] |
21,953,714 | https://en.wikipedia.org/wiki/Selected%20ion%20monitoring | Selected ion monitoring (SIM) is a mass spectrometry scanning mode in which only a limited mass-to-charge ratio range is transmitted/detected by the instrument, as opposed to the full spectrum range. This mode of operation typically results in significantly increased sensitivity. Due to its inherent nature, this technique is most effective—and therefore most common—on quadrupole, Orbitrap, and Fourier transform ion cyclotron resonance mass spectrometers.
See also
Selected reaction monitoring
References
Mass spectrometry | Selected ion monitoring | [
"Physics",
"Chemistry"
] | 113 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
21,953,783 | https://en.wikipedia.org/wiki/Selected%20reaction%20monitoring | Selected reaction monitoring (SRM), also called multiple reaction monitoring (MRM), is a method used in tandem mass spectrometry in which an ion of a particular mass is selected in the first stage of a tandem mass spectrometer and an ion product of a fragmentation reaction of the precursor ions is selected in the second mass spectrometer stage for detection.
Variants
A general case of SRM can be represented by
ABCD+ → AB + CD+
where the precursor ion ABCD+ is selected by the first stage of mass spectrometry (MS1), dissociates into molecule AB and product ion CD+, and the latter is selected by the second stage of mass spectrometry (MS2) and detected. The precursor and product ion pair is called an SRM "transition".
Consecutive reaction monitoring (CRM) is the serial application of three or more stages of mass spectrometry to SRM, represented in a simple case by
ABCD+ → AB + CD+, followed by CD+ → C + D+
where ABCD+ is selected by MS1 and dissociates into molecule AB and ion CD+. The ion is selected in the second mass spectrometry stage MS2, then undergoes further fragmentation to form ion D+, which is selected in the third mass spectrometry stage MS3 and detected.
Multiple reaction monitoring (MRM) is the application of selected reaction monitoring to multiple product ions from one or more precursor ions, for example
ABCD+ → AB+ + CD and ABCD+ → AB + CD+
where ABCD+ is selected by MS1 and dissociates by two pathways, forming either AB+ or CD+. The ions are selected sequentially by MS2 and detected. Parallel reaction monitoring (PRM) is the application of SRM with parallel detection of all transitions in a single analysis using a high resolution mass spectrometer.
Proteomics
SRM can be used for targeted quantitative proteomics by mass spectrometry. Following ionization in, for example, an electrospray source, a peptide precursor is first isolated to obtain a substantial ion population of mostly the intended species. This population is then fragmented to yield product ions whose signal abundances are indicative of the abundance of the peptide in the sample. This experiment can be performed on triple quadrupole mass spectrometers, where mass-resolving Q1 isolates the precursor, q2 acts as a collision cell, and mass-resolving Q3 is cycled through the product ions which are detected upon exiting the last quadrupole by an electron multiplier. A precursor/product pair is often referred to as a transition. Much work goes into ensuring that transitions are selected that have maximum specificity.
By adding heavy-labeled (e.g., D, 13C, or 15N) peptides to a complex matrix as concentration standards, SRM can be used to construct a calibration curve that provides the absolute quantification (i.e., copy number per cell) of the native, light peptide, and by extension, its parent protein.
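As an illustration of this internal-standard calculation, the following minimal Python sketch estimates the endogenous (light) amount from the light/heavy peak-area ratio. The peak areas and the spiked amount are made-up numbers, not data from any study.

```python
def absolute_amount(light_area, heavy_area, heavy_spiked_fmol):
    """Endogenous amount from the light/heavy SRM peak-area ratio."""
    return (light_area / heavy_area) * heavy_spiked_fmol

light, heavy = 4.2e5, 1.4e5   # integrated transition peak areas (illustrative)
spiked = 50.0                 # fmol of heavy-labeled peptide added
print(f"endogenous peptide: {absolute_amount(light, heavy, spiked):.1f} fmol")
```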
SRM has been used to identify the proteins encoded by wild-type and mutant genes (mutant proteins) and to quantify their absolute copy numbers in tumors and biological fluids. This answers basic questions about the absolute copy number of proteins in a single cell, which will be essential in digital modelling of mammalian cells and the human body, and about the relative levels of genetically abnormal proteins in tumors, and is proving useful for diagnostic applications. SRM has also been used as a method of triggering full product ion scans of peptides to either a) confirm the specificity of the SRM transition, or b) detect specific post-translational modifications which are below the limit of detection of standard MS analyses. In 2017, SRM was developed into a highly sensitive and reproducible mass spectrometry-based targeted protein detection platform (entitled "SAFE-SRM"), and it has been demonstrated that this pipeline has major advantages in clinical proteomics applications over traditional SRM pipelines, with a dramatically improved diagnostic performance over antibody-based protein biomarker diagnostic methods such as ELISA.
See also
Protein mass spectrometry
Quantitative proteomics
References
External links
SRMatlas; quantify proteins in complex proteome digests by mass spectrometry
Mass spectrometry
Proteomics | Selected reaction monitoring | [
"Physics",
"Chemistry"
] | 878 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
21,957,017 | https://en.wikipedia.org/wiki/Innovation%20%28signal%20processing%29 | In time series analysis (or forecasting) — as conducted in statistics, signal processing, and many other fields — the innovation is the difference between the observed value of a variable at time t and the optimal forecast of that value based on information available prior to time t. If the forecasting method is working correctly, successive innovations are uncorrelated with each other, i.e., constitute a white noise time series. Thus it can be said that the innovation time series is obtained from the measurement time series by a process of 'whitening', or removing the predictable component. The use of the term innovation in the sense described here is due to Hendrik Bode and Claude Shannon (1950) in their discussion of the Wiener filter problem, although the notion was already implicit in the work of Kolmogorov.
In contrast, the residual is the difference between the observed value of a variable at time t and the optimal updated state of that value based on information available till (including) time t.
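As a minimal illustration (the AR(1) coefficient and noise model below are illustrative assumptions, not taken from the references), the following Python sketch computes the innovation sequence of a first-order autoregressive process and checks that it is approximately white.

```python
import numpy as np

# Simulate an AR(1) process x[t] = phi * x[t-1] + w[t].
rng = np.random.default_rng(1)
phi, n = 0.8, 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# With the optimal one-step forecast x_hat[t] = phi * x[t-1],
# the innovations reduce to the driving white noise.
e = x[1:] - phi * x[:-1]
lag1 = np.corrcoef(e[:-1], e[1:])[0, 1]   # should be close to zero
print(f"lag-1 autocorrelation of innovations: {lag1:+.4f}")
```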
See also
Kalman filter
Filtering problem (stochastic processes)
Errors and residuals in statistics
Innovation butterfly
References
Statistical signal processing | Innovation (signal processing) | [
"Engineering"
] | 230 | [
"Statistical signal processing",
"Engineering statistics"
] |
21,957,823 | https://en.wikipedia.org/wiki/Medical%20%26%20Biological%20Engineering%20%26%20Computing | Medical & Biological Engineering & Computing is a monthly peer-reviewed medical journal and an official publication of the International Federation of Medical and Biological Engineering. It is published by Springer Science+Business Media. It covers research in biomedical engineering and bioengineering. It was established as a bimonthly publication in 1963 under the title Medical Electronics & Biological Engineering. It publishes original research articles, reviews, and technical notes.
References
External links
11th Mediterranean Conference on Medical and Biological Engineering and Computing – the MEDICON 2007 -
The 12th Mediterranean Conference on Medical and Biological Engineering and Computing – MEDICON 2010 -
International Congress of students and young doctors (bio)medicine in Bosnia and Herzegovina "Medicon" -
Systems Medicine for the Delivery of Better Healthcare Services – MEDICON 2016 -
2017 MEDICON conference in Croatia -
English-language journals
Academic journals established in 1963
Springer Science+Business Media academic journals
Biomedical informatics journals
Monthly journals | Medical & Biological Engineering & Computing | [
"Biology"
] | 182 | [
"Bioinformatics",
"Biomedical informatics journals"
] |
21,958,324 | https://en.wikipedia.org/wiki/Hydrogeology%20Journal | Hydrogeology Journal is a peer-reviewed scientific journal published eight times a year by Springer Science+Business Media. It was established in 1992 and is the official journal of the International Association of Hydrogeologists. The journal publishes papers on both theoretical and applied aspects of hydrogeology. Papers focus on integrating subsurface hydrology and geology with other supporting disciplines (such as geochemistry, geophysics, geomorphology, geobiology, surface-water hydrology, tectonics, mathematics, numerical modeling, economics, and sociology) to explain phenomena observed in the field. The journal has a 2013 impact factor of 1.718. The editor-in-chief is Clifford I. Voss (United States Geological Survey).
References
External links
English-language journals
Geology journals
Hydrology journals
Academic journals established in 1992
Springer Science+Business Media academic journals | Hydrogeology Journal | [
"Environmental_science"
] | 176 | [
"Hydrology",
"Hydrology journals"
] |
21,958,369 | https://en.wikipedia.org/wiki/Debus%E2%80%93Radziszewski%20imidazole%20synthesis | The Debus–Radziszewski imidazole synthesis is a multi-component reaction used for the synthesis of imidazoles from a 1,2-dicarbonyl, an aldehyde, and ammonia or a primary amine. The method is used commercially to produce several imidazoles. The process is an example of a multicomponent reaction.
The reaction can be viewed as occurring in two stages. In the first stage, the dicarbonyl and two ammonia molecules condense with the two carbonyl groups to give a diimine:
In the second stage, this diimine condenses with the aldehyde:
However, the actual reaction mechanism is not certain.
This reaction is named after Heinrich Debus and Bronisław Radziszewski.
A modification of this general method, where one equivalent of ammonia is replaced by an amine, affords N-substituted imidazoles in good yields.
This reaction has been applied to the synthesis of a range of 1,3-dialkylimidazolium ionic liquids by using various readily available alkylamines.
References
Nitrogen heterocycle forming reactions
Name reactions
Multicomponent ring-condensations | Debus–Radziszewski imidazole synthesis | [
"Chemistry"
] | 242 | [
"Name reactions",
"Ring forming reactions",
"Organic reactions"
] |
21,960,197 | https://en.wikipedia.org/wiki/24%20Vulpeculae | 24 Vulpeculae is a single, yellow-hued star in the northern constellation of Vulpecula. It is faintly visible to the naked eye with an apparent visual magnitude of 5.30. The distance to this star can be estimated from its annual parallax shift of , which yields a separation of roughly 409 light years. It is moving further away with a heliocentric radial velocity of +15 km/s.
This is an evolved giant star with a stellar classification of G8III, having exhausted the hydrogen at its core and moved off the main sequence. It is a red clump giant, indicating it is presently on the horizontal branch and is generating energy through helium fusion in its core region. The interferometry-measured angular diameter of 24 Vul, combined with its estimated distance, corresponds to a physical radius of about 16 times the radius of the Sun.
24 Vulpeculae is about 251 million years old and is spinning with a projected rotational velocity of 5.02 km/s. It has 3.41 times the mass of the Sun and is radiating 191 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,981 K. This is the probable (99.4% chance) source of X-ray emission coming from these coordinates.
References
External links
G-type giants
Horizontal-branch stars
Vulpecula
Durchmusterung objects
Vulpeculae, 24
192944
099951
7753 | 24 Vulpeculae | [
"Astronomy"
] | 314 | [
"Vulpecula",
"Constellations"
] |
21,963,702 | https://en.wikipedia.org/wiki/Hammersley%E2%80%93Clifford%20theorem | The Hammersley–Clifford theorem is a result in probability theory, mathematical statistics and statistical mechanics that gives necessary and sufficient conditions under which a strictly positive probability distribution (of events in a probability space) can be represented as events generated by a Markov network (also known as a Markov random field). It is the fundamental theorem of random fields. It states that a probability distribution that has a strictly positive mass or density satisfies one of the Markov properties with respect to an undirected graph G if and only if it is a Gibbs random field, that is, its density can be factorized over the cliques (or complete subgraphs) of the graph.
The study of the relationship between Markov and Gibbs random fields was initiated by Roland Dobrushin and Frank Spitzer in the context of statistical mechanics. The theorem is named after John Hammersley and Peter Clifford, who proved the equivalence in an unpublished paper in 1971. Simpler proofs using the inclusion–exclusion principle were given independently by Geoffrey Grimmett, Preston, and Sherman in 1973, with a further proof by Julian Besag in 1974.
Proof outline
It is a trivial matter to show that a Gibbs random field satisfies every Markov property. As an example of this fact, see the following:
In the image to the right, a Gibbs random field over the provided graph has the form . If variables and are fixed, then the global Markov property requires that: (see conditional independence), since forms a barrier between and .
With and constant, where and . This implies that .
To establish that every positive probability distribution that satisfies the local Markov property is also a Gibbs random field, the following lemma, which provides a means for combining different factorizations, needs to be proved:
Lemma 1
Let denote the set of all random variables under consideration, and let and denote arbitrary sets of variables. (Here, given an arbitrary set of variables , will also denote an arbitrary assignment to the variables from .)
If
for functions and , then there exist functions and such that
In other words, provides a template for further factorization of .
In order to use as a template to further factorize , all variables outside of need to be fixed. To this end, let be an arbitrary fixed assignment to the variables from (the variables not in ). For an arbitrary set of variables , let denote the assignment restricted to the variables from (the variables from , excluding the variables from ).
Moreover, to factorize only , the other factors need to be rendered moot for the variables from . To do this, the factorization
will be re-expressed as
For each : is where all variables outside of have been fixed to the values prescribed by .
Let
and
for each so
What is most important is that when the values assigned to do not conflict with the values prescribed by , making "disappear" when all variables not in are fixed to the values from .
Fixing all variables not in to the values from gives
Since ,
Letting
gives:
which finally gives:
Lemma 1 provides a means of combining two different factorizations of . The local Markov property implies that for any random variable , that there exists factors and such that:
where are the neighbors of node . Applying Lemma 1 repeatedly eventually factors into a product of clique potentials (see the image on the right).
End of Proof
See also
Markov random field
Conditional random field
Notes
Further reading
Probability theorems
Theorems in statistics
Markov networks | Hammersley–Clifford theorem | [
"Mathematics"
] | 698 | [
"Mathematical problems",
"Theorems in probability theory",
"Mathematical theorems",
"Theorems in statistics"
] |
21,964,675 | https://en.wikipedia.org/wiki/Para-Chloroamphetamine | para-Chloroamphetamine (PCA), also known as 4-chloroamphetamine (4-CA), is a serotonin–norepinephrine–dopamine releasing agent (SNDRA) and serotonergic neurotoxin of the amphetamine family. It is used in scientific research in the study of the serotonin system, as a serotonin releasing agent (SRA) at lower doses to produce serotonergic effects, and as a serotonergic neurotoxin at higher doses to produce long-lasting depletions of serotonin.
PCA has also been clinically studied as an appetite suppressant and antidepressant, but findings of neurotoxicity in animals discouraged further evaluation. It has also been encountered as a designer drug, although it never achieved popularity, again perhaps due to its neurotoxicity.
Effects
PCA was studied clinically as an appetite suppressant and antidepressant and its effects in these studies were described. It has been said to have only slight stimulant effects and to behave more like an antidepressant than a stimulant. At doses of 80 to 90 mg daily, in 3 doses, it produced no significant acute psychoactive effects and produced few adverse effects. However, sleep disturbances and nausea were mentioned. No hallucinogenic effects have been reported.
The profile of PCA is analogous to that of naphthylaminopropane (NAP; PAL-287), a highly potent and well-balanced SNDRA with only weak stimulant-like effects. It is thought that concomitant robust serotonin release suppresses the stimulating and rewarding effects of dopamine release.
Pharmacology
Monoamine releasing agent
PCA acts as a serotonin, norepinephrine, and dopamine releasing agent (SNDRA). Its half-maximal effective concentration (EC50) values for monoamine release are 28.3 nM for serotonin, 23.5 to 26.2 nM for norepinephrine, and 42.2 to 68.5 nM for dopamine, making it a potent and well-balanced SNDRA.
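For orientation, an EC50 is the concentration producing half of a ligand's maximal effect in the standard Hill concentration–response model (general pharmacology, not a PCA-specific result):

E([D]) = \frac{E_{\max}\,[D]^{n}}{\mathrm{EC}_{50}^{\,n} + [D]^{n}}

where n is the Hill coefficient; lower EC50 values correspond to greater releasing potency.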
Short-term effects
In animals, doses of PCA of 0.5 to 5 mg/kg acutely produce a variety of behavioral and neurochemical effects thought to be due to serotonin release and the consequent enhancement of serotonergic signaling. These include serotonergic effects like myoclonus; the serotonin behavioral syndrome, including tremor, rigidity, Straub tail, hindlimb abduction, lateral head weaving, and reciprocal forepaw treading; inhibition of startle response sensitization; suppression of sexual behavior in females; and the head-twitch response. Non-behavioral or physiological effects include activation of the hypothalamic–pituitary–adrenal axis (HPA axis), increased prolactin secretion, and increased plasma renin activity. PCA and other SRAs like MDMA and α-ethyltryptamine (αET) produce locomotor hyperactivity in animals and this is thought to be serotonin-dependent. It is mimicked by serotonin 5-HT1B receptor activation. However, PCA is also reported to produce amphetamine-like hyperactivity and stereotypy, as well as amphetamine-like enhancement of conditioned avoidance responding that is independent of serotonergic signaling.
PCA does not show effects like those of the selective norepinephrine and dopamine releasing agent (NDRA) amphetamine in animals but instead fully substitutes for other serotonin releasing agents like (+)-MBDB and MMAI in rodent drug discrimination tests. The findings with PCA are in contrast to those with para-fluoroamphetamine (PFA), which acts as a selective NDRA similarly to amphetamine, fully substitutes for amphetamine in animals, and fails to substitute for (+)-MBDB or MMAI. As touched on, PCA can robustly produce the head-twitch response, which is a behavioral proxy of psychedelic-like effects. However, PCA does not seem to produce hallucinogenic effects in humans, and hence its activity in the head-twitch paradigm has been described as a false-positive for psychedelic effects. The head-twitch response with PCA appears to be dependent on induction of serotonin release and not on direct serotonin receptor agonism by PCA, as it is blocked by destruction of presynaptic serotonergic nerve terminals or by serotonin synthesis inhibition. Relatedly, PCA is said not to be a serotonin 5-HT2A receptor agonist (at concentrations up to 10,000 nM). However, PCA might nonetheless act as a direct serotonin 5-HT2 receptor agonist at high doses, as head twitches induced by it are not blocked by serotonin synthesis inhibition at these doses. Although PCA has been reported to produce the head-twitch response, a more modern study reported that it did not do so, at least unless the serotonin transporter (SERT) was artificially expressed in a population of medial prefrontal cortex (mPFC) serotonergic neurons that normally lack the SERT.
While extracellular serotonin levels and serotonergic signaling are acutely increased by PCA, there is a concomitant depletion of serotonin stores. The depletion includes a decrease in total serotonin content, 5-hydroxyindoleacetic acid (5-HIAA) content, and tryptophan hydroxylase activity. The acute depletion of serotonin stores by PCA is likely due to inhibition of tryptophan hydroxylase. How this occurs is unclear, as PCA does not inhibit tryptophan hydroxylase in vitro except at very high concentrations. The initial serotonin depletion by lower doses of PCA is not permanent and can readily reverse after a few hours. As such, low doses of PCA, such as 2 mg/kg, are regarded as non-neurotoxic. The dopaminergic and noradrenergic systems are also substantially impacted by acute PCA. However, dopamine and norepinephrine levels are only slightly changed. In addition, the effects on the dopaminergic and noradrenergic systems are of relatively short duration and return to normal within 24 hours, analogously to the case of the serotonin system. In line with the preceding neurochemical findings, tolerance to various behavioral effects of acute PCA has been found to develop.
Due to its activity as a serotonin releasing agent, PCA is employed in scientific research to acutely enhance and study serotonin signaling.
Long-term serotonergic neurotoxicity
At higher doses (e.g., 10 mg/kg) and with longer exposure, PCA produces extremely long-lasting depletion of serotonin and loss of serotonergic function that is considered to reflect serotonergic neurotoxicity. This includes depletion of serotonin content, 5-HIAA content, serotonin turnover, tryptophan hydroxylase, serotonin reuptake capacity, and serotonin transporters for weeks or months. As an example, brain serotonin continued to be reduced by 41% after 38 days. In addition, many serotonin-containing nerve fibers become undetectable and appear to be lost. There have also been observations of nerve degeneration in the days after PCA administration. Different serotonergic areas and projections are differentially susceptible to the neurotoxicity of PCA, with the dorsal raphe nuclei more susceptible and the median raphe nuclei, raphe obscurus, raphe pallidus, dentate gyrus, hypothalamus, and spinal cord all resistant. PCA is selective for serotonin, without causing depletion of norepinephrine or dopamine.
There are behavioral consequences of the serotonergic neurotoxicity of PCA. Affected animals are still quite normal in overall appearance. However, hypoactivity, increased defecation in the open field test, and failed acquisition of shock avoidance in the Y-maze task are all apparent. In addition, increased locomotion in response to the dopamine agonist apomorphine has been observed, which is consistent with findings that serotonin may inhibit certain aspects of dopamine signaling. Failure of acquisition of a two-way conditioned avoidance response has been observed, and this could be completely prevented with the SRI zimelidine (see more on this below). Various other changes and deficits have been seen as well. The effects of the non-selective serotonin receptor agonist and serotonergic psychedelic 5-MeO-DMT have been found to be greatly potentiated following PCA, which may reflect receptor supersensitivity in an attempt at compensation for serotonin depletion. Conversely, the behavioral and physiological serotonergic effects of acute low-dose PCA challenge are attenuated after high-dose neurotoxic PCA exposure, which may reflect reduced available serotonin stores for release.
Mechanisms of neurotoxicity
Although the ultimate cause is cytotoxicity to serotonergic neurons, the mechanisms leading to the serotonergic neurotoxicity of PCA are unknown. However, uptake of PCA into neurons by the serotonin transporter (SERT) appears to be required. Serotonin reuptake inhibitors (SRIs) like fluoxetine can block both the acute short-term effects and the long-term serotonergic neurotoxicity of PCA. In addition, they can be given 4 hours after PCA administration, when acute serotonin depletion has already occurred, and will still completely protect against the long-term neurotoxicity. However, the SRI must be long-lasting; the short-acting SRI clomipramine, given before PCA, prevented acute serotonin depletion, but PCA outlasted clomipramine in the body, and the same degree of long-term neurotoxicity occurred as if clomipramine had not been administered.
It has been theorized that a toxic metabolite of PCA may be formed and that this metabolite is responsible for its neurotoxicity. However, no compelling evidence in support of this hypothesis has emerged. Severe depletion of serotonin by the combination of para-chlorophenylalanine (PCPA) and reserpine substantially protects against the serotonergic neurotoxicity of PCA. This might be due to serotonin forming neurotoxic metabolites, for instance 5,6-dihydroxytryptamine (5,6-DHT), in the context of PCA's actions. Similarly to prophylactic serotonin depletion, α-methyl-p-tyrosine, which depletes dopamine, protects against the serotonergic neurotoxicity of PCA as well. It thus appears that dopamine is involved in the neurotoxicity of PCA, which is notable as PCA is a potent dopamine releasing agent in addition to inducing the release of serotonin.
It has been reported that direct intracerebroventricular injection of PCA into the brain, in contrast to peripheral administration, failed to produce serotonergic neurotoxicity. This was the case even with continuous infusion for two days. This seems like it may lend credence to the toxic metabolite theory of PCA neurotoxicity, as a peripherally formed metabolite of PCA might be required for neurotoxicity to occur. However, no toxic metabolite has still yet been identified and no other support for the hypothesis has surfaced. Inhibiting the metabolism of PCA does not reduce tryptophan hydroxylase inactivation, suggesting that a metabolite is not responsible for this effect.
There are species differences in the neurotoxicity of PCA between rats and mice, which may help to shed light on the underlying mechanisms.
Structure–activity relationships of neurotoxicity
The drug is the most potent serotonergic neurotoxin of a series of amphetamines. In terms of structure–activity relationships, the α-methyl group appears to be essential for the neurotoxicity, and the α-ethyl analogue is less potent as a neurotoxin. Other side chain homologues with shorter or longer chains were less potent or inactive. Moving the chloro substituent to other positions on the phenyl ring, as in ortho-chloroamphetamine (OCA) and meta-chloroamphetamine (MCA), resulted in no significant serotonergic depletion, in contrast to the marked depletion with PCA. However, this was found to be due to rapid metabolism in the case of MCA, and inhibiting its metabolism resulted in potent neurotoxicity as with PCA. Conversely, OCA still does not produce apparent neurotoxicity.
para-Bromoamphetamine (PBA) and para-bromomethamphetamine (PBMA) show similar serotonergic neurotoxicity to PCA and PCMA. Conversely, para-fluoroamphetamine decreases serotonin levels but its effects appear to be much less persistent than those of PCA. Other 4-substituted amphetamines have reduced neurotoxicity (4-trifluoromethylamphetamine, 4-phenoxyamphetamine) or are inactive (4-methylamphetamine, para-methoxyamphetamine (PMA)) in terms of serotonin depletion. Fenfluramine and norfenfluramine, which are 3-trifluoromethylamphetamines, produce very long-lasting serotonergic neurotoxicity similarly to PCA but are slightly less active.
The closely related N-methylated derivative, para-chloromethamphetamine (PCMA), which is rapidly and extensively metabolized to para-chloroamphetamine in vivo, has neurotoxic properties as well, and is only slightly less potent than PCA in this regard. Other N-alkylated analogues of PCA also metabolize at least in part into PCA and produce serotonergic neurotoxicity. However, they show reduced activity, which may be due to their extent of conversion into PCA being reduced.
In contrast to PCA, the phentermine (i.e., α-methylated) analogue of PCA, chlorphentermine, which acts as a highly selective SRA, does not appear to produce serotonergic neurotoxicity.
Rigid analogues of PCA, like 6-chloro-2-aminotetralin (6-CAT), have also been assessed. 6-CAT depletes serotonin similarly to PCA, but its effects are smaller and shorter-lasting. Another analogue, Org 6582, in which a third ring structure has been added, is a selective serotonin reuptake inhibitor (SSRI) and no longer shows the serotonergic neurotoxicity of PCA and 6-CAT.
Use as a neurotoxin in scientific research
PCA is useful and widely employed as a serotonergic neurotoxin in scientific research. A variety of scientific findings have been made and published through employment of PCA. The drug is advantageous over other serotonergic neurotoxins like 5,6-dihydroxytryptamine (5,6-DHT) and 5,7-dihydroxytryptamine (5,7-DHT) in that it is active by systemic administration. Conversely, 5,6-DHT and 5,7-DHT do not cross the blood–brain barrier and must be administered directly into the brain. PCA also produces a different anatomical pattern of serotonergic neurotoxicity than 5,6-DHT and 5,7-DHT, which can be useful as well if there is a need to study different serotonergic areas or pathways.
Other actions
PCA has been found to act as a relatively potent monoamine oxidase A (MAO-A) inhibitor, with an inhibitory potency of 1,900 to 4,000 nM.
PCA has been reported to act as an agonist of the rat trace amine-associated receptor 1 (TAAR1). Conversely, it is not a significant agonist of the human TAAR1. The drug also appears to be inactive as an agonist of the mouse TAAR1. TAAR1 agonism has been implicated in modulating the effects of monoamine releasing agents (MRAs). In contrast to PCA, the MRA MDMA is a potent agonist of the mouse TAAR1. MDMA-induced in-vivo brain serotonin and dopamine release and hyperlocomotion are augmented in TAAR1 knockout mice relative to normal mice, whereas the in-vivo brain serotonin and dopamine release of PCA are not different between normal mice and TAAR1 knockout mice. In the same study, the TAAR1 agonist o-phenyl-3-iodotyramine (o-PIT) blunted the dopamine and serotonin release of PCA in mouse synaptosomes in vitro, an effect that was absent in synaptosomes from TAAR1 knockout mice. These findings led to conclusions that TAAR1 agonism by MDMA but not PCA auto-inhibits and constrains its own effects in rodents. Unlike in rodents, however, MDMA is not a significant TAAR1 agonist in humans.
Chemistry
PCA, also known as 4-chloroamphetamine, is a phenethylamine and amphetamine derivative.
Analogues of PCA include para-chloromethamphetamine (PCMA/4-CMA), para-bromoamphetamine (PBA/4-BA), para-fluoroamphetamine (PFA/4-FA), para-iodoamphetamine (PIA/4-IA), 4-methylamphetamine (4-MA), meta-chloroamphetamine (MCA/4-CA), ortho-chloroamphetamine (OCA/2-CA), 3,4-dichloroamphetamine (3,4-DCA), 2,4-dichloroamphetamine (2,4-DCA), chlorphentermine, 4-chloromethcathinone (4-CMC; clephedrone), 4-chlorophenylisobutylamine (4-CAB; AEPCA), 6-chloro-2-aminotetralin (6-CAT), 5-iodo-2-aminoindane (5-IAI), and Org 6582, among others.
History
PCA was first synthesized by 1936 and was first developed for potential medical use in the 1960s.
Society and culture
Legal status
China
As of October 2015, 4-CA is a controlled substance in China.
United States
PCA is not a scheduled compound in the United States.
References
4-Chlorophenyl compounds
Abandoned drugs
Designer drugs
Experimental antidepressants
Monoamine oxidase inhibitors
Monoaminergic neurotoxins
Serotonin-norepinephrine-dopamine releasing agents
Substituted amphetamines
TAAR1 agonists | Para-Chloroamphetamine | [
"Chemistry"
] | 4,248 | [
"Drug safety",
"Abandoned drugs"
] |
45,031,114 | https://en.wikipedia.org/wiki/Jiangmen%20Underground%20Neutrino%20Observatory | The Jiangmen Underground Neutrino Observatory (JUNO) is a medium baseline reactor neutrino experiment under construction at Kaiping, Jiangmen in Guangdong province in Southern China. It aims to determine the neutrino mass hierarchy and perform precision measurements of the Pontecorvo–Maki–Nakagawa–Sakata matrix elements. It will build on the mixing parameter results of many previous experiments. The collaboration was formed in July 2014 and construction began January 10, 2015. Funding is provided by a collaboration of international institutions. Originally scheduled to begin taking data in 2023, as of October 2024, the US$376 million JUNO facility is slated to come online in the latter half of 2025.
Planned as a follow-on to the Daya Bay Reactor Neutrino Experiment, it was originally to be sited in the same area, but the construction of a third nuclear reactor (the Lufeng Nuclear Power Plant) in that region would disrupt the experiment, which depends on maintaining a fixed distance to nearby nuclear reactors. Instead it was moved west to a site (Jingji town, Kaiping, Jiangmen) located 53 km from both of the Yangjiang and Taishan nuclear power plants.
Detector
The main detector consists of a large transparent acrylic sphere containing 20,000 tonnes of linear alkylbenzene liquid scintillator, surrounded by a stainless steel truss supporting approximately 43,200 photomultiplier tubes (17,612 large-diameter tubes and 25,600 smaller tubes filling the gaps between them), immersed in a water pool instrumented with 2,400 additional photomultiplier tubes as a muon veto. As of 2022, construction of the detector is well underway. Deployed underground, the detector will observe neutrinos with excellent energy resolution. The overburden includes 270 m of granite mountain, which will reduce the cosmic muon background.
The much larger distance to the reactors (compared to less than 2 km for the Daya Bay far detector) makes the experiment better able to distinguish neutrino oscillations, but requires a much larger, and better-shielded, detector to detect a sufficient number of reactor neutrinos.
Physics
The main approach of the JUNO Detector in measuring neutrino oscillations is the observation of electron antineutrinos () coming from two nuclear power plants at approximately 53 km distance. Since the expected rate of neutrinos reaching the detector is known from processes in the power plants, the absence of a certain neutrino flavor can give an indication of transition processes.
The quantitative part of the experiment requires measuring neutrino flavour oscillations as a function of distance. This seems impossible, as both the reactors and the detector are completely immovable, but the oscillation phase varies with energy. As the reactors emit neutrinos with a range of energies, a range of effective distances can be observed, limited by the accuracy with which each neutrino's energy can be measured.
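To see why energy can substitute for baseline, consider the simplified two-flavor survival probability (a textbook approximation, not the full three-flavor treatment used by JUNO):

P(\bar{\nu}_e \rightarrow \bar{\nu}_e) \approx 1 - \sin^2(2\theta)\,\sin^2\!\left(\frac{1.27\,\Delta m^2[\mathrm{eV}^2]\;L[\mathrm{km}]}{E[\mathrm{GeV}]}\right)

At fixed L, varying E sweeps the oscillation phase, so the measured energy spectrum encodes the same information as a range of distances.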
Although not the primary goal, the detector is sensitive to atmospheric neutrinos, geoneutrinos and neutrinos from supernovae as well.
Expected sensitivity
Daya Bay and RENO measured θ13 and determined it has a large non-zero value. Daya Bay will be able to measure the value to ≈4% precision and RENO ≈7% after several years. JUNO is designed to improve uncertainty in several neutrino parameters to less than 1%.
See also
Daya Bay
RENO
Double Chooz
KamLAND
NOνA
Wang Yifang
References
External links
Jiangmen Underground Neutrino Observatory web site
JUNO at Shanghai Jiao Tong University
JUNO documents at INFN
Physics experiments
Underground laboratories
Neutrino observatories
Science and technology in China | Jiangmen Underground Neutrino Observatory | [
"Physics"
] | 774 | [
"Experimental physics",
"Physics experiments"
] |
45,037,213 | https://en.wikipedia.org/wiki/ClearSign%20Technologies | ClearSign Technologies Corporation (ClearSign) is a United States-based company that develops emission-control technology.
Products
ClearSign develops technology intended to improve the energy efficiency and emissions performance of combustion systems, primarily industrial and commercial boilers and furnaces. Its products include Duplex Burner Architecture, which the company says reduces combustion burner flame length by more than 80 percent, in turn increasing thermal capacity and reducing operating costs. Duplex Burner Architecture won the "New Technology Development of the Year Award" at the 2014 West Coast Oil & Gas Awards. The company's other major product is Electrodynamic Combustion Control, which uses computer-controlled electric fields to control the flame shape in boilers, kilns, and furnaces, preventing pollution from forming.
History
ClearSign was formed in Seattle, Washington in 2008. Its first chief executive officer was Richard Rutkowski, who also became chairman of the board of directors. Rutkowski was a co-founder of projection technology company Microvision and nanotechnology company Lumera.
In 2012 ClearSign held an initial public offering which, according to the company, raised $13.8 million. ClearSign chose to delay adopting accounting standards required of publicly traded companies under the Sarbanes-Oxley Act, taking advantage of exemptions in the JOBS Act designed to make it cheaper for development-stage companies to raise capital. It is believed ClearSign may have been the first company in the U.S. to do so.
In December 2014, Rutkowski resigned as CEO. He was replaced by board member Stephen Pirnat. The following September, the company was named "Technology Company of the Year" by Petroleum Economist.
On November 12, 2019, ClearSign Combustion Corporation (Nasdaq: CLIR) announced that it had changed its name to ClearSign Technologies Corporation; its ticker symbol, CLIR, remained the same.
References
External links
2013 MIT Technology Review article on ClearSign's Electrodynamic Combustion Control
2008 establishments in the United States
2008 in the environment
Companies based in Seattle
Environmental technology
Industrial emissions control
Companies listed on the Nasdaq | ClearSign Technologies | [
"Chemistry",
"Engineering"
] | 446 | [
"Chemical process engineering",
"Industrial emissions control",
"Environmental engineering"
] |
45,037,302 | https://en.wikipedia.org/wiki/Aston%20Medal | The Aston Medal is awarded by the British Mass Spectrometry Society to individuals who have worked in the United Kingdom and have made outstanding contributions to our understanding of the biological, chemical, engineering, mathematical, medical, or physical sciences relating directly to mass spectrometry. The medal is named after one of Britain's founders of mass spectrometry and 1922 Nobel prize winner Francis William Aston.
The award is made sporadically, with no more than one medal being awarded each year. Recipients of this honour receive a gold-plated medal with a portrait of Francis Aston as well as an award certificate.
Recipients
1989 – Allan Maccoll
1990 – John H. Beynon
1996 – Brian Green
1998 – Keith Jennings
1999 – Dai Games
2003 – Colin Pillinger
2005 – Tom Preston
2006 – John Todd
2008 – Robert Bateman
2010 – Richard Evershed
2011 – Carol Robinson
2013 – Tony Stace
2017 – R. Graham Cooks
2023 – Alexander Makarov
See also
List of chemistry awards
References
External links
Landmarks in the last 50 years of British Mass Spectrometry
Academic awards
Mass spectrometry awards
British science and technology awards | Aston Medal | [
"Physics"
] | 230 | [
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometry awards"
] |
45,038,067 | https://en.wikipedia.org/wiki/Balmer%20jump | The Balmer jump, Balmer discontinuity, or Balmer break is the difference of intensity of the stellar continuum spectrum on either side of the limit of the Balmer series of hydrogen, at approximately 364.5 nm. It is caused by electrons being completely ionized directly from the second energy level of a hydrogen atom (bound-free absorption), which creates a continuum absorption at wavelengths shorter than 364.5 nm.
In some cases the Balmer discontinuity can show continuum emission, usually when the Balmer lines themselves are strongly in emission. Other hydrogen spectral series also show bound-free absorption and hence a continuum discontinuity, but the Balmer jump in the near UV has been the most observed.
The strength of the continuum absorption, and hence the size of the Balmer jump, depends on temperature and density in the region responsible for the absorption. At cooler stellar temperatures, the density most strongly affects the strength of the discontinuity and this can be used to classify stars on the basis of their surface gravity and hence luminosity. This effect is strongest in A class stars, but in hotter stars temperature has a much larger effect on the Balmer jump than surface gravity.
See also
Lyman-break galaxy
References
Hydrogen physics
Astronomical spectroscopy | Balmer jump | [
"Physics",
"Chemistry"
] | 260 | [
"Spectrum (physical sciences)",
"Spectroscopy",
"Astronomical spectroscopy",
"Astrophysics"
] |
23,440,465 | https://en.wikipedia.org/wiki/%CE%91-Aminoadipate%20pathway | The α-aminoadipate pathway is a biochemical pathway for the synthesis of the amino acid L-lysine. In the eukaryotes, this pathway is unique to several species of yeast, higher fungi (containing chitin in their cell walls), and the euglenids. It has also been reported from bacteria of the genus Thermus and also in Pyrococcus horikoshii, potentially suggesting a wider distribution than previously thought. This uniqueness of the pathway makes it a potentially interesting target for antimycotics.
Pathway overview
This pathway is a part of the glutamate family of amino acid biosynthetic pathways. The reaction steps in the pathway are similar to those of the citric acid cycle.
The first step in the pathway is condensation of acetyl-CoA with α-ketoglutarate, which gives homocitrate. This reaction is catalyzed by homocitrate synthase. Homocitrate is then converted to homoaconitate by homoaconitase and then to homoisocitrate. This is then decarboxylated by homoisocitrate dehydrogenase, which results in α-ketoadipate. A nitrogen atom is added from glutamate by aminoadipate aminotransferase to form the α-aminoadipate, from which this pathway gets its name. This is then reduced by aminoadipate reductase via an acyl-enzyme intermediate to a semialdehyde. Reaction with glutamate by one class of saccharopine dehydrogenase yields saccharopine which is then cleaved by a second saccharopine dehydrogenase to yield lysine and oxoglutarate.
Conversion of lysine to α-ketoadipate during degradation of lysine proceeds via the same steps, but in reverse.
See also
Adipic acid
References
Metabolism
Biosynthesis
Metabolic pathways | Α-Aminoadipate pathway | [
"Chemistry",
"Biology"
] | 403 | [
"Biochemistry",
"Biosynthesis",
"Cellular processes",
"Chemical synthesis",
"Metabolic pathways",
"Metabolism"
] |
23,441,832 | https://en.wikipedia.org/wiki/TB-21007 | TB-21007 is a nootropic drug which acts as a subtype-selective inverse agonist at the α5 containing GABAA receptors.
See also
GABAA receptor negative allosteric modulator
GABAA receptor § Ligands
References
Primary alcohols
Benzo(c)thiophenes
GABAA receptor negative allosteric modulators
Ketones
Nootropics
2-Thiazolyl compounds
Thioethers | TB-21007 | [
"Chemistry"
] | 96 | [
"Ketones",
"Functional groups"
] |
23,445,475 | https://en.wikipedia.org/wiki/Advanced%20Fuel%20Cycle%20Initiative | The Advanced Fuel Cycle Initiative (AFCI) is an extensive research and development effort of the United States Department of Energy (DOE). The mission and focus of AFCI is to enable the safe, secure, economic and sustainable expansion of nuclear energy by conducting research, development, and demonstration focused on nuclear fuel recycling and waste management to meet U.S. needs.
The program was absorbed into the GNEP project, which was renamed IFNEC.
Focus
Continue critical fuel cycle research, development and demonstration (RD&D) activities
Pursue development of policy and regulatory framework to support fuel cycle closure
Determine and develop RD&D infrastructure needed to mature technologies
Establish advanced modeling and simulation program element
Implement a science-based RD&D program
Campaigns
The AFCI is an extensive RD&D effort to close the fuel cycle. The different areas within the AFCI are separated into campaigns. The RD&D of each campaign is completed by the United States Department of Energy's national laboratories.
Transmutation fuels
Fast reactor development
Separations
Waste forms
Grid Appropriate Reactor Campaign
Safeguards
Systems analysis
Modeling and simulation
Safety and regulatory
Transmutation fuels
The mission of the Transmutation Fuels Campaign is the generation of data, methods and models for fast reactor transmutation fuels and targets qualification by performing RD&D activities on fuel fabrication and performance. The campaign is led by Idaho National Laboratory.
Reactor development
The mission of the Reactor Campaign is to develop advanced recycling reactor technologies required for commercial deployment in a closed nuclear fuel cycle. The Reactor Campaign is led at Argonne National Laboratory.
Separations
The mission of the Separations Campaign is to develop and demonstrate industrially deployable and economically feasible technologies for the recycling of used nuclear fuel to provide improved safety, security and optimized waste management. The campaign is led by Idaho National Laboratory. This entails alternatives to the de facto standard PUREX process, which is used by all countries that engage in large scale civilian nuclear reprocessing, but has been phased out for civilian uses in the US over nuclear proliferation concerns, with the US exerting diplomatic pressure to see it phased out globally.
Waste Forms Campaign
The mission of the Waste Forms Campaign is to develop and demonstrate durable waste forms and processes to enable safe and cost-effective waste management as an integral part of a closed nuclear fuel cycle by establishing a fundamental understanding of behavior through closely coupled theory, experiment and modeling. This campaign is led at Argonne National Laboratory.
Grid Appropriate Reactor Campaign
The mission of the Grid Appropriate Reactor Campaign is to enable U.S. leadership in the global expansion of nuclear energy by conducting research, development, and demonstration of technologies and innovative reactor designs that offer enhanced safety, security, and proliferation resistance and that are appropriately sized for infrastructure-limited countries.
Safeguards
The mission of the Safeguards Campaign is to ensure that domestic fuel cycle facilities fully meet requirements under regulatory frameworks; thereby assuring that nuclear materials have not been diverted or misused. The campaign is led at Sandia National Laboratories.
Systems analysis
The mission of the Systems Analysis Campaign is to conduct systems-wide analyses of nuclear energy development and infrastructure deployment to enable a requirements-driven process for all technical activities, and to inform strategic planning and key program decisions. The campaign is led at Idaho National Laboratory.
Modeling and simulation
The mission of the Modeling and Simulation Campaign is to rapidly create and deploy “science-based”, verified and validated modeling and simulation capabilities essential for the design, implementation, and operation of future nuclear energy systems, with the goal of improving future U.S. energy security. These AFCI activities are led at Argonne National Laboratory.
Safety and regulatory
The mission of the Safety and Regulatory Campaign is to ensure that regulatory and licensing requirements for future facilities and technologies are appropriately considered and incorporated during the course of technology development.
References
"Advanced Fuel Cycle Program," Idaho National Laboratory website
"Review of DOE's Nuclear Energy Research and Development Program" National Academies Press
AFCI Quarterly Report Transmutation Engineering LA-UR-06-3096
An Evaluation of the Proliferation Resistant Characteristics of Light Water Reactor Fuel with the Potential for Recycle in the United States
United States Department of Energy
Nuclear technology | Advanced Fuel Cycle Initiative | [
"Physics"
] | 839 | [
"Nuclear technology",
"Nuclear physics"
] |
33,548,913 | https://en.wikipedia.org/wiki/Dehaene%E2%80%93Changeux%20model | The Dehaene–Changeux model (DCM), also known as the global neuronal workspace, or global cognitive workspace model, is a part of Bernard Baars's global workspace model for consciousness.
It is a computer model of the neural correlates of consciousness programmed as a neural network. It attempts to reproduce the swarm behaviour of the brain's higher cognitive functions such as consciousness, decision-making and the central executive functions. It was developed by cognitive neuroscientists Stanislas Dehaene and Jean-Pierre Changeux beginning in 1986. It has been used to provide a predictive framework to the study of inattentional blindness and the solving of the Tower of London test.
History
The Dehaene–Changeux model was initially established as a spin glass neural network attempting to represent learning and, among other objectives, to provide a stepping stone toward artificial learning. It would later be used to predict observable reaction times within the priming paradigm and in inattentional blindness.
Structure
General structure
The Dehaene–Changeux model is a meta neural network (i.e. a network of neural networks) composed of a very large number of integrate-and-fire neurons programmed in either a stochastic or deterministic way. The neurons are organised in complex thalamo-cortical columns with long-range connections and a critical role played by the interaction between von Economo's areas. Each thalamo-cortical column is composed of pyramidal cells and inhibitory interneurons receiving a long-distance excitatory neuromodulation which could represent noradrenergic input.
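Since the model's basic units are integrate-and-fire neurons, a minimal deterministic leaky integrate-and-fire sketch is given below; the parameter values are illustrative placeholders, not those of the actual DCM:

def lif_spike_times(i_ext, dt=1e-4, t_max=0.5, tau=0.02,
                    v_rest=-0.070, v_thresh=-0.050, v_reset=-0.070, r_m=1e7):
    """Leaky integrate-and-fire neuron driven by a constant current i_ext (amperes)."""
    v = v_rest
    spikes = []
    for step in range(int(t_max / dt)):
        # Membrane equation: tau * dV/dt = -(V - V_rest) + R_m * I
        v += (dt / tau) * (-(v - v_rest) + r_m * i_ext)
        if v >= v_thresh:        # a threshold crossing emits a spike...
            spikes.append(step * dt)
            v = v_reset          # ...and the membrane potential resets
    return spikes

print(len(lif_spike_times(3e-9)))  # number of spikes for a 3 nA drive

A stochastic variant would add a noise term to the membrane update; networks of such units, wired into columns with long-range excitatory connections, form the model's workspace.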
A swarm and a multi-agent system composed of neural networks
Among others, Cohen & Hudson (2002) had already used "meta neural networks as intelligent agents for diagnosis". Similarly to Cohen & Hudson, Dehaene & Changeux have established their model as an interaction of meta-neural networks (thalamocortical columns) themselves programmed in the manner of a "hierarchy of neural networks that together act as an intelligent agent", in order to use them as a system composed of a large number of inter-connected intelligent agents for predicting the self-organized behaviour of the neural correlates of consciousness. Jain et al. (2002) had already clearly identified spiking neurons as intelligent agents, since the lower bound on the computational power of networks of spiking neurons is the capacity to simulate, in real time and for boolean-valued inputs, any Turing machine. Since the DCM is composed of a very large number of interacting sub-networks which are themselves intelligent agents, it is formally a multi-agent system programmed as a swarm of neural networks, and a fortiori of spiking neurons.
Behavior
The DCM exhibits several supercritical emergent behaviors, such as multistability and a Hopf bifurcation between two very different regimes which may represent either sleep or arousal, with various all-or-none behaviors which Dehaene et al. use to determine a testable taxonomy between different states of consciousness.
Scholarly reception
Self-organized criticality
The Dehaene–Changeux model contributed to the study of nonlinearity and self-organized criticality in particular as an explanatory model of the brain's emergent behaviors, including consciousness. Studying the brain's phase-locking and large-scale synchronization, Kitzbichler et al. (2011a) confirmed that criticality is a property of human brain functional network organization at all frequency intervals in the brain's physiological bandwidth.
Furthermore, exploring the neural dynamics of cognitive efforts after, inter alia, the Dehaene–Changeux model, Kitzbichler et al. (2011b) demonstrated how cognitive effort breaks the modularity of mind to make human brain functional networks transiently adopt a more efficient but less economical configuration. Werner (2007a) used the Dehaene–Changeux global neuronal workspace to defend the use of statistical physics approaches for exploring phase transitions, scaling and universality properties of the so-called "Dynamic Core" of the brain, with relevance to the macroscopic electrical activity in EEG and EMG. Furthermore, building from the Dehaene–Changeux model, Werner (2007b) proposed that the application of the twin concepts of scaling and universality of the theory of non-equilibrium phase transitions can serve as an informative approach for elucidating the nature of underlying neural-mechanisms, with emphasis on the dynamics of recursively reentrant activity flow in intracortical and cortico-subcortical neuronal loops. Friston (2000) also claimed that "the nonlinear nature of asynchronous coupling enables the rich, context-sensitive interactions that characterize real brain dynamics, suggesting that it plays a role in functional integration that may be as important as synchronous interactions".
States of consciousness and phenomenology
It contributed to the study of phase transition in the brain under sedation, and notably GABA-ergic sedation such as that induced by propofol (Murphy et al. 2011, Stamatakis et al. 2010). The Dehaene–Changeux model was contrasted and cited in the study of collective consciousness and its pathologies (Wallace et al. 2007). Boly et al. (2007) used the model for a reverse somatotopic study, demonstrating a correlation between baseline brain activity and somatosensory perception in humans. Boly et al. (2008) also used the DCM in a study of the baseline state of consciousness of the human brain's default network.
Adversarial collaboration to test the Dehaene–Changeux model and integrated information theory
In 2019, the Templeton Foundation announced funding in excess of $6,000,000 to test opposing empirical predictions of the Dehaene–Changeux model and a rival theory (integrated information theory, or IIT). The originators of both theories signed off on experimental protocols and data analyses, as well as the exact criteria that would determine whether their championed theory correctly predicted the outcome. Initial results were revealed in June 2023. None of the Dehaene–Changeux model's predictions passed the agreed pre-registered threshold, while two out of three of IIT's predictions did.
Publications
Rialle, V and Stip, E. (May 1994). "Cognitive modeling in psychiatry: from symbolic models to parallel and distributed models". J Psychiatry Neurosci. 19(3): 178–192.
Zigmond, Michael J. (1999). Fundamental neuroscience. Academic Press, p1551.
Dehaene, Stanislas (2001). The cognitive neuroscience of consciousness. MIT Press, p. 13.
Ravi Prakash, Om Prakash, Shashi Prakash, Priyadarshi Abhishek, and Sachin Gandotra (2008). "Global workspace model of consciousness and its electromagnetic correlates". Ann Indian Acad Neurol. Jul–Sep; 11(3): 146–153.
Gazzaniga, Michael S. (2004). The cognitive neurosciences. MIT Press, p. 1146.
Laureys, Steven; et al. (2006). The boundaries of consciousness: neurobiology and neuropathology. Volume 150 of Progress in Brain Research. Elsevier, p. 45.
Naccache, L. (March 2007). "Cognitive aging considered from the point of view of cognitive neurosciences of consciousness". Psychologie & NeuroPsychiatrie du vieillissement. Volume 5, Number 1, 17–21.
Hans Liljenström, Peter Århem (2008). Consciousness transitions: phylogenetic, ontogenetic, and physiological aspects. Elsevier, p. 126.
Tim Bayne, Axel Cleeremans, Patrick Wilken (2009). The Oxford companion to consciousness. Oxford University Press, p. 332.
Bernard J. Baars, Nicole M. Gage (2010). Cognition, brain, and consciousness: introduction to cognitive neuroscience. Academic Press, p. 287.
Carlos Hernández, Ricardo Sanz, Jaime Gómez-Ramirez, Leslie S. Smith, Amir Hussain, Antonio Chella, Igor Aleksander (2011). From Brains to Systems: Brain-Inspired Cognitive Systems. Volume 718 of Advances in Experimental Medicine and Biology Series. Springer, p. 230.
See also
Artificial consciousness
Complex system
Neuroscience
References
External links
"Selected publications of Stanislas Dehaene"
INSERM-CEA Cognitive Neuroimaging Unit.
Consciousness
Cognition
Cognitive architecture
Cognitive modeling
Artificial neural networks
Machine learning algorithms
Computational neuroscience
Cognitive neuroscience | Dehaene–Changeux model | [
"Engineering"
] | 1,790 | [
"Artificial intelligence engineering",
"Cognitive architecture"
] |
41,799,059 | https://en.wikipedia.org/wiki/Harder%E2%80%93Narasimhan%20stratification | In algebraic geometry and complex geometry, the Harder–Narasimhan stratification is any of a stratification of the moduli stack of principal G-bundles by locally closed substacks in terms of "loci of instabilities". In the original form due to Harder and Narasimhan, G was the general linear group; i.e., the moduli stack was the moduli stack of vector bundles, but, today, the term refers to any of generalizations. The scheme-theoretic version is due to Shatz and so the term "Shatz stratification" is also used synonymously. The general case is due to Behrend.
References
Further reading
Nitin Nitsure, Schematic Harder-Narasimhan Stratification
Algebraic geometry
Stratifications | Harder–Narasimhan stratification | [
"Mathematics"
] | 171 | [
"Fields of abstract algebra",
"Topology",
"Algebraic geometry",
"Stratifications"
] |
41,799,896 | https://en.wikipedia.org/wiki/Clumping%20factor | The clumping factor is a measurement of how density varies within a gaseous medium, and is commonly used in astrophysical settings where gas is not distributed uniformly. Gas densities can vary over many orders of magnitude, from the low density plasma in the Intergalactic medium between galaxies, to the neutral and dense molecular regions in the interstellar medium inside of galaxies. Moreover, gas throughout space is turbulent implying it has density structure on all spatial scales.
The amount that gas clumps is important to know in astronomy when trying to infer gas properties from observations. The clumping of gas, and not just the amount of gas present, affects the luminosity of gas as it cools. The clumping factor is a measure of the density variation of a medium. It is defined as:

C \equiv \frac{\langle \rho^2 \rangle}{\langle \rho \rangle^2}

where the averaging is spatial. It is related to the variance \sigma_\rho^2 of the density field by the square of the average density:

C = 1 + \frac{\sigma_\rho^2}{\langle \rho \rangle^2}

Cooling rates and emission scale as the particle number density squared (collision rates have this scaling). Therefore, the clumping factor can be used to convert the density inferred from emission observations under an assumption of uniform density into the true average gas density:

\langle \rho \rangle = \frac{\rho_{\mathrm{inferred}}}{\sqrt{C}}
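A minimal sketch (assuming NumPy; the lognormal field is an illustrative stand-in for turbulence-driven density structure) of estimating the clumping factor from a sampled density field:

import numpy as np

rng = np.random.default_rng(0)
# Toy lognormal density field, a common model for turbulent media.
rho = rng.lognormal(mean=0.0, sigma=1.0, size=(64, 64, 64))

C = np.mean(rho**2) / np.mean(rho)**2             # clumping factor, <rho^2>/<rho>^2
C_from_var = 1.0 + np.var(rho) / np.mean(rho)**2  # equivalent form via the variance

# Emission scales as density squared, so a uniform-density fit returns
# sqrt(<rho^2>); dividing by sqrt(C) recovers the true mean density.
rho_inferred = np.sqrt(np.mean(rho**2))
print(C, C_from_var, rho_inferred / np.sqrt(C), np.mean(rho))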
References
Space plasmas
Equations of astronomy | Clumping factor | [
"Physics",
"Astronomy"
] | 244 | [
"Space plasmas",
"Plasma physics",
"Concepts in astronomy",
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Plasma physics stubs",
"Equations of astronomy"
] |
41,801,168 | https://en.wikipedia.org/wiki/Chemical%20Engineering%20Science | Chemical Engineering Science is a peer-reviewed scientific journal covering all aspects of chemical engineering. It is published by Elsevier and was established in 1951. The editor-in-chief is A.P.J. Middelberg (University of Queensland).
Abstracting and indexing
The journal is abstracted and indexed in multiple bibliographic databases. According to the Journal Citation Reports, the journal has a 2019 impact factor of 3.871.
References
External links
Chemical engineering journals
Elsevier academic journals
Academic journals established in 1951
Semi-monthly journals
English-language journals | Chemical Engineering Science | [
"Chemistry",
"Engineering"
] | 110 | [
"Chemical engineering",
"Chemical engineering journals"
] |
41,803,032 | https://en.wikipedia.org/wiki/Isotropic%20solid | In condensed matter physics and continuum mechanics, an isotropic solid refers to a solid material for which physical properties are independent of the orientation of the system. While the finite sizes of atoms and bonding considerations ensure that true isotropy of atomic position will not exist in the solid state, it is possible for measurements of a given property to yield isotropic results, either due to the symmetries present within a crystal system, or due to the effects of orientational averaging over a sample (e.g. in an amorphous solid or a polycrystalline metal). Isotropic solids tend to be of interest when developing models for physical behavior of materials, as they tend to allow for dramatic simplifications of theory; for example, conductivity in metals of the cubic crystal system can be described with single scalar value, rather than a tensor. Additionally, cubic crystals are isotropic with respect to thermal expansion and will expand equally in all directions when heated.
Isotropy should not be confused with homogeneity, which characterizes a system’s properties as being independent of position, rather than orientation. Additionally, all crystal structures, including the cubic crystal system, are anisotropic with respect to certain properties, and isotropic to others (such as density).
The anisotropy of a crystal’s properties depends on the rank of the tensor used to describe the property, as well as the symmetries present within the crystal. The rotational symmetries within cubic crystals, for example, ensure that the dielectric constant (a 2nd rank tensor property) will be equal in all directions, whereas the symmetries in hexagonal systems dictate that the measurement will vary depending on whether the measurement is made within the basal plane. Due to the relationship between the dielectric constant and the optical index of refraction, it would be expected for cubic crystals to be optically isotropic, and hexagonal crystals to be optically anisotropic; Measurements of the optical properties of cubic and hexagonal CdSe confirm this understanding.
Nearly all single crystal systems are anisotropic with respect to mechanical properties, with tungsten being a very notable exception, as it is a cubic metal with stiffness tensor coefficients that exist in the proper ratio to allow for mechanical isotropy. In general, however, cubic crystals are not mechanically isotropic. However, many materials, such as structural steel, tend to be encountered and utilized in a polycrystalline state. Due to random orientation of the grains within the material, measured mechanical properties tend to be averages of the values associated with different crystallographic directions, with the net effect of apparent isotropy. As a result, it is typical for parameters such as the Young's modulus to be reported independent of crystallographic direction. Treating solids as mechanically isotropic greatly simplifies analysis of deformation and fracture (as well as of the elastic fields produced by dislocations). However, preferential orientation of grains (called texture) can occur as a result of certain types of deformation and recrystallization processes, which will create anisotropy in mechanical properties of the solid.
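The tungsten remark can be made quantitative via the Zener anisotropy ratio for cubic crystals (standard elasticity theory, included here for context):

A = \frac{2C_{44}}{C_{11} - C_{12}}

A cubic crystal is elastically isotropic exactly when A = 1; tungsten's stiffness constants happen to satisfy this almost exactly, whereas most cubic metals deviate substantially from A = 1.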
References
External links
Orientation (geometry)
Condensed matter physics | Isotropic solid | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 670 | [
"Materials science stubs",
"Condensed matter stubs",
"Continuum mechanics",
"Phases of matter",
"Classical mechanics",
"Materials science",
"Materials",
"Topology",
"Space",
"Condensed matter physics",
"Geometry",
"Spacetime",
"Orientation (geometry)",
"Matter"
] |
41,807,186 | https://en.wikipedia.org/wiki/Perchloratoborate | Perchloratoborate is an anion of the form [B(ClO4)4]−. It can form partly stable solid salts with heavy alkali metals. They are more stable than nitratoborate salts. K[B(ClO4)4] decomposes at 35 °C, Rb[B(ClO4)4] is stable to 50 °C, and Cs[B(ClO4)4] can exist up to 80 °C.
Perchloratoborates are analogous to perchloratoaluminates ([Al(ClO4)4]−).
Another related anion is the chloroperchloratoborate, Cl3B(ClO4).
Boron perchlorate itself is unstable above −5 °C.
Decomposition
On thermal decomposition, the alkali perchloratoborate salts form an alkali perchlorate and boron trioxide as the solid residue, together with a gas containing dichlorine heptoxide, chlorine dioxide, chlorine, and oxygen.
2 M[B(ClO4)4] → 2 MClO4 + B2O3 + (3 Cl2O7, or 6 ClO2 + 4.5 O2, or 3 Cl2 + 10.5 O2)
When the alkali perchloratoborates first start to decompose at the lower temperatures, the reaction is endothermic, and dichlorine heptoxide is formed. However, if caesium perchloratoborate is heated the decomposition becomes exothermic above 90 °C, and at 100 °C it explodes exothermically forming chlorine and oxygen.
Reactions
When rubidium perchloratoborate is reacted with extra perchloric acid, it forms RbBO(ClO4)2.
In water, alkali perchloratoborates decompose exothermically to form boric acid, perchloric acid, and the perchlorate.
Formation
Nitronium perchloratoborate (NO2[B(ClO4)4]) can be formed by reacting nitronium perchlorate with boron trichloride in solution. Similarly, ammonium perchlorate reacts with BCl3, forming ammonium perchloratoborate.
The metal perchloratoborates can also be formed from the metal perchlorate dissolved in anhydrous perchloric acid reacting with boron trichloride. Another way is to react a metal chloridoborate (MBCl4) with perchloric acid. Chloridoborates can be made from the metal chloride and boron trichloride dissolved in nitrosyl chloride.
Extra Cl2O7 drives the reaction forward.
BCl3 + 3 HClO4 → B(ClO4)3 + 3 HCl
Also formed are BCl2(ClO4) and BCl(ClO4)2; the latter disproportionates above −78 °C to boron perchlorate and dichloroboron perchlorate:
2 BCl(ClO4)2 → BCl2(ClO4) + B(ClO4)3
Properties
Caesium perchloratoborate is hygroscopic. It has a density of 2.5 g/cm3. It has no colour.
Infrared absorption bands are observed in caesium perchloratoborate at 640 and 1,087 cm−1.
Potassium perchloratoborate has density 2.18 g/cm3, and rubidium perchloratoborate has density 2.32 g/cm3.
The three alkali perchloratoborates fume in moist air; all are crystalline and colourless.
References
Perchlorates
Borates
Anions | Perchloratoborate | [
"Physics",
"Chemistry"
] | 761 | [
"Matter",
"Anions",
"Perchlorates",
"Salts",
"Ions"
] |
30,984,911 | https://en.wikipedia.org/wiki/Pressure%20jump | Pressure jump is a technique used in the study of chemical kinetics. It involves making rapid changes to the pressure of an experimental system and observing the return to equilibrium or steady state. This allows the study of the shift in equilibrium of reactions that equilibrate over periods from milliseconds to hours (or longer); the changes are most often observed using absorption spectroscopy or fluorescence spectroscopy, though other spectroscopic techniques such as CD, FTIR or NMR can also be used.
Historically, pressure jumps were limited to one direction. Most commonly, fast drops in pressure were achieved by using a quick-release valve or a fast burst membrane. Modern equipment can achieve pressure changes in both directions using either double-reservoir arrangements (good for large changes in pressure) or pistons operated by piezoelectric actuators (often faster than valve-based approaches). Ultrafast pressure drops can be achieved using electrically disintegrated burst membranes. The ability to automatically repeat measurements and average the results is useful, since the reaction amplitudes are often small.
The fractional extent of the reaction (i.e. the percentage change in concentration of a measurable species) depends on the molar volume change (ΔV°) between the reactants and products and the equilibrium position. If K is the equilibrium constant and P is the pressure, then the volume change is given by:

ΔV° = −RT (∂ ln K / ∂P)T

where R is the universal gas constant and T is the absolute temperature. The volume change can thus be understood to be the pressure dependency of the change in Gibbs free energy associated with the reaction.
When a single step in a reaction is perturbed in a pressure jump experiment, the reaction follows a single exponential decay function with the reciprocal time constant (1/τ) equal to the sum of the forward and reverse intrinsic rate constants. In more complex reaction networks, when multiple reaction steps are perturbed, then the reciprocal time constants are given by the eigenvalues of the characteristic rate equations. The ability to observe intermediate steps in a reaction pathway is one of the attractive features of this technology.
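The single-exponential relaxation described above is easy to visualize numerically. The sketch below is illustrative only: the rate constants kf and kr are assumed values, not data from any pressure-jump experiment.

```python
import numpy as np

# Relaxation of a perturbed A <-> B equilibrium after a pressure jump.
# For a single perturbed step, the deviation from the new equilibrium
# decays as exp(-t/tau) with 1/tau = kf + kr.

kf, kr = 40.0, 10.0          # assumed rate constants, s^-1
tau = 1.0 / (kf + kr)        # relaxation time constant

t = np.linspace(0.0, 5 * tau, 6)
delta0 = 1.0                 # initial displacement from equilibrium (arbitrary units)
delta = delta0 * np.exp(-t / tau)

for ti, di in zip(t, delta):
    print(f"t = {ti*1e3:6.2f} ms   deviation = {di:.4f}")
```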
References
Chemical kinetics | Pressure jump | [
"Chemistry"
] | 422 | [
"Chemical kinetics",
"Chemical reaction engineering"
] |
30,990,996 | https://en.wikipedia.org/wiki/Suspension%20array%20technology | Suspension array technology (or SAT) is a high-throughput, large-scale, multiplexed screening platform used in molecular biology. SAT has been widely applied to genomic and proteomic research, including single nucleotide polymorphism (SNP) genotyping, genetic disease screening, gene expression profiling, drug discovery screening, and clinical diagnosis. SAT uses microsphere beads (5.6 μm in diameter) to prepare arrays. SAT allows for the simultaneous testing of multiple gene variants because each type of microsphere bead has a unique identity based on variations in optical properties, most commonly fluorescent colour. As each colour and intensity of colour has a unique wavelength signature, beads can easily be differentiated based on their wavelength and intensity. Microspheres are readily suspendable in solution and exhibit favorable kinetics during an assay. As in flat microarrays (e.g. the DNA microarray), appropriate receptor molecules, such as DNA oligonucleotide probes, antibodies, or other proteins, are attached to the differently labeled microspheres, producing thousands of microsphere array elements. Probe-target hybridization is usually detected by optically labeled targets, which determines the relative abundance of each target in the sample.
Overview of SAT using DNA hybridization
DNA is extracted from cells used to create test fragments. These test fragments are added to a solution containing a variety of microsphere beads. Each type of microsphere bead contains a known DNA probe with a unique fluorescent identity. Test fragments and probes on the microsphere beads are allowed to hybridize to each other. Once hybridized, the microsphere beads are sorted, usually using flow cytometry. This allows for the detection of each of the gene variants from the original sample. The resulting data collected will indicate the relative abundance of each hybridized sample to the microsphere.
Multiplexing
Since microsphere beads are easily suspended in solution and each microsphere retains its identity when hybridized to the test sample, a typical suspension array experiment can analyze a wide range of biological analysis in a single reaction, called "multiplexing". In general, each type of microsphere used in an array is individually prepared in bulk. For example, the commercially available microsphere arrays from Luminex xMAP technology use a 10×10 element array. This array involves beads with red and infrared dyes, each with ten different intensities, to give a 100-element array. Thus, the array size would increase exponentially if multiple dyes are used. For example, five different dyes with 10 different intensities per dye will give rise to 100,000 different array elements.
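The combinatorics quoted above follow from a simple power law: with d dyes and n distinguishable intensity levels per dye, n^d spectrally distinct bead types can be encoded. A minimal sketch:

```python
# Number of spectrally distinct bead types with n intensity levels per dye
# and d dyes: n**d. Reproduces the figures quoted in the text above.

def array_elements(n_intensities: int, n_dyes: int) -> int:
    return n_intensities ** n_dyes

print(array_elements(10, 2))  # 10x10 design -> 100 elements (e.g. Luminex xMAP)
print(array_elements(10, 5))  # five dyes -> 100,000 elements
```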
Procedure
Sample targeting
When using different types of microspheres, SAT is capable of simultaneously testing multiple variables, such as DNA and proteins, in a given sample. This allows SAT to analyze a variety of molecular targets during a single reaction. Common nucleic acid detection methods include direct DNA hybridization. The direct DNA hybridization approach is the simplest suspension array assay, whereby 15 to 20 bp DNA oligonucleotides attached to microspheres are amplified using PCR. This is the optimized probe length, as it minimizes the melting temperature variation among different probes during probe-target hybridization. After amplifying one DNA oligoprobe of interest, it can be used to create 100 different probes on 100 different sets of microspheres, each with the capability of capturing 100 potential targets (if using a 100-plex array). Similarly, target DNA samples are usually PCR amplified and labeled. Hybridization between the capture probe and the target DNA is achieved by melting and annealing complementary target DNA sequences to their capture probes located on the microspheres. After washing to remove non-specific binding between sequences, only strongly paired probe-targets will remain hybridized.
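Melting-temperature matching for short probes of this kind is often estimated with simple rules of thumb. The sketch below uses the Wallace rule, Tm ≈ 2(A+T) + 4(G+C) °C, which is our illustration (the article does not cite it) and is only appropriate for oligonucleotides of roughly the 14-20 base length discussed above; the probe sequences are hypothetical.

```python
# Rough melting-temperature estimate for short capture probes using the
# Wallace rule, Tm ~ 2*(A+T) + 4*(G+C) degrees C. Valid only for short
# oligonucleotides (~14-20 bases); sequences below are hypothetical.

def wallace_tm(seq: str) -> int:
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

probes = ["ATGCATGCATGCATGC", "GGGCCCGGGCCCGGGC"]  # hypothetical 16-mers
for p in probes:
    print(p, wallace_tm(p), "degC")  # GC-rich probe melts markedly higher
```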
Sorting and detection with flow cytometry
For more details on this topic, see flow cytometry
Since the optical identity of each microsphere is known, the quantification of target samples hybridized to the microspheres can be achieved by comparing the relative intensity of target markers in one set of microspheres to target markers in another set of microspheres using flow cytometry. Microspheres can be sorted using both their unique optical properties and their level of hybridization to the target sequence.
Strengths
Rapid/high throughput: In multiplex analysis, a 100-plex assay can be analyzed every 30 seconds. Recently reported high-throughput flow cytometry can sample a 96-well plate in 1 minute; theoretically, a 100-plex assay with such a system can be analyzed in less than 1 second, potentially delivering 12 million samples per day.
High array density/multiplex: Compared to flat microarrays, SAT allows one to perform parallel measurements. A few microliters of microspheres could contain thousands of array elements and each array element is represented by hundreds of individual microspheres. Thus, the measurement by flow cytometry represents a replicate analysis of each array element.
Effective gathering of information: One of the benefits of using SAT is that it allows one to take a single sample from a patient or research organism and simultaneously test for multiple gene variants. Thus, from a single sample one can determine which virus from a series of viruses a patient has, or which base-pair mutation is present in an organism with a unique phenotype.
Cost-effective: Currently, commercially available suspension array kits cost $0.10-$0.25 per sequence tested.
Weaknesses
Relatively low array size: Although it has the potential to use an increased number of dyes to generate millions of different array elements, the current generation of commercially available microsphere arrays (from Luminex xMAP technology) uses only two sets of dyes and therefore can detect only ~100 targets per experiment.
Hybridization between different sets of probes and target sequences requires a specific annealing temperature, which is affected by the length and sequence of the oligonucleotide probe. However, only one annealing temperature can be used per experiment, so all probes in a given experiment must be designed to hybridize to their targets at the same temperature. Although introducing base-pair mismatches in some sets of probes can minimize annealing-temperature differences between probe sets, the hybridization problem remains significant if more than 10-20 targets are tested in one reaction.
References
External links
Luminex products
Gene expression
Bioinformatics
DNA
Microtechnology
Molecular biology techniques
Microarrays | Suspension array technology | [
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 1,354 | [
"Biochemistry methods",
"Genetics techniques",
"Biological engineering",
"Microtechnology",
"Microarrays",
"Gene expression",
"Materials science",
"Bioinformatics",
"Molecular biology techniques",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
30,992,771 | https://en.wikipedia.org/wiki/Alpha%20strike%20%28engineering%29 | Alpha strike is a term referring to the event in which an alpha particle, a composite charged particle composed of two protons and two neutrons, enters a computer and modifies the data or the operation of one of its components.
Alpha strikes can disturb the silicon substrate of the transistors in a computer through their electronic stopping power, causing the transistor to flip states if the charge imparted by the strike crosses a critical threshold (QCrit). This, in turn, can corrupt the information stored by that transistor and create a cascading effect on the operation of the component that encases it.
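To see why the critical-charge threshold matters, a back-of-the-envelope estimate helps. The sketch below is our illustration, not a figure from the article: it assumes silicon yields roughly one electron-hole pair per 3.6 eV deposited, a typical alpha-decay energy of about 5 MeV, and a representative Qcrit of about 1 fC for a deeply scaled node; in practice only a fraction of the generated charge is collected at any single node.

```python
# Back-of-the-envelope estimate of alpha-strike charge versus Qcrit.
# Silicon produces roughly one electron-hole pair per 3.6 eV deposited.

E_PAIR_EV = 3.6            # energy per electron-hole pair in silicon, eV
Q_E = 1.602e-19            # elementary charge, C

def deposited_charge_fc(energy_mev: float) -> float:
    """Charge (fC) generated if the full particle energy is deposited."""
    pairs = energy_mev * 1e6 / E_PAIR_EV
    return pairs * Q_E * 1e15

alpha_fc = deposited_charge_fc(5.0)      # typical alpha-decay energy ~5 MeV
qcrit_fc = 1.0                           # assumed Qcrit for a scaled node, fC
print(f"alpha deposit ~ {alpha_fc:.0f} fC vs assumed Qcrit ~ {qcrit_fc} fC")
```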
History
The first widely recognized radiation-generated error in a computer was the appearance of random errors in the Intel 4k 2107 DRAM in the late 1970s. The problem was investigated by Timothy C. May and Murray H. Woods, who reported in a seminal 1979 paper that the errors were caused by alpha decay from trace amounts of uranium and thorium in the packaging materials surrounding the chip.
Since then, there have been multiple incidents of computer errors due to radiation, including error reports from computers onboard spacecraft, corrupted data from voting machines, and crashes on computers onboard aircraft.
According to a study from Hughes Aircraft Company, anomalies in satellite communication attributed to galactic cosmic radiation occur at a rate on the order of 3.1×10−3 transistors per year. This rate is an estimate of the number of noticeable cascading errors in communication between satellites, per satellite.
Modern impact
Alpha strikes limit the computing capabilities of computers onboard high-altitude vehicles, because the energy an alpha particle imparts to a transistor is far more consequential for smaller transistors. As a result, computers with smaller transistors and higher computing capability are more prone to errors and crashes than computers with larger transistors.
One potential solution for optimizing the performance of computers onboard spacecraft while limiting the number of errors in the computer is the use of radiation protection. There are numerous materials under consideration as radiation shields, each with its own tradeoffs between cost, weight, thermal diffusivity, and signal permittivity. One candidate being explored by scientists and engineers is hydrogenated carbon nanofiber, a material that is light and can absorb alpha strikes through its internal structure.
See also
Alpha decay
Cosmic ray
Satellite
Radiation protection
References
Radiation effects
Computer engineering | Alpha strike (engineering) | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 486 | [
"Physical phenomena",
"Computer engineering",
"Radiation effects",
"Computer engineering stubs",
"Materials science",
"Radiation",
"Condensed matter physics",
"Computing stubs",
"Electrical engineering"
] |
30,993,329 | https://en.wikipedia.org/wiki/Soai%20reaction | In organic chemistry, the Soai reaction is the alkylation of pyrimidine-5-carbaldehyde with diisopropylzinc. The reaction is autocatalytic and leads to rapidly increasing amounts of the same enantiomer of the product. The product pyrimidyl alcohol is chiral and induces that same chirality in further catalytic cycles. Starting with a low enantiomeric excess ("ee") produces a product with very high enantiomeric excess. The reaction has been studied for clues about the origin of homochirality among certain classes of biomolecules.
The Japanese chemist Kensō Soai (1950–) discovered the reaction in 1995. For his work in "elucidating the origins of chirality and homochirality", Soai received the Chemical Society of Japan award in 2010.
Other chiral additives can be used as the initial source of asymmetric induction, with the major product of that first reaction being rapidly amplified. For example, Soai's group has demonstrated that even chiral quaternary hydrocarbons, which have no clear Lewis basic site for binding the nucleophile, are nonetheless capable of inducing asymmetric catalysis in the reaction.
The chiral induction is believed to occur as a result of interactions between the C–H bonds of the alkane and the pi electrons of the aldehyde.
In another example, Soai and coworkers showed that even [15N](2R,3S)-bis(dimethylamino)butane, whose chirality arises solely from the difference between 14N and 15N (a 7% isotopic mass difference), gave 45% ee when used as a stoichiometric ligand.
References
Further reading
Stereochemistry
Catalysis
Organic reactions
Name reactions | Soai reaction | [
"Physics",
"Chemistry"
] | 390 | [
"Catalysis",
"Stereochemistry",
"Organic reactions",
"Name reactions",
"Space",
"nan",
"Spacetime",
"Chemical kinetics"
] |
43,302,850 | https://en.wikipedia.org/wiki/ABJM%20superconformal%20field%20theory | In theoretical physics, ABJM theory is a quantum field theory studied by Ofer Aharony, Oren Bergman, Daniel Jafferis, and Juan Maldacena. It provides a holographic dual to M-theory on AdS4 × S7. The ABJM theory is also closely related to Chern–Simons theory, and it serves as a useful toy model for solving problems that arise in condensed matter physics. It is a theory defined on superspace.
See also
6D (2,0) superconformal field theory
Notes
References
Conformal field theory
Supersymmetric quantum field theory
String theory | ABJM superconformal field theory | [
"Physics",
"Astronomy"
] | 124 | [
"Astronomical hypotheses",
"Supersymmetric quantum field theory",
"Quantum physics stubs",
"Quantum mechanics",
"String theory",
"Supersymmetry",
"Symmetry"
] |
24,918,060 | https://en.wikipedia.org/wiki/Semilinear%20map | In linear algebra, particularly projective geometry, a semilinear map between vector spaces V and W over a field K is a function that is a linear map "up to a twist", hence semi-linear, where "twist" means "field automorphism of K". Explicitly, it is a function that is:
additive with respect to vector addition: T(v + v′) = T(v) + T(v′) for all v, v′ in V;
there exists a field automorphism θ of K such that T(λv) = θ(λ) T(v) for all λ in K and v in V. If such an automorphism exists and T is nonzero, it is unique, and T is called θ-semilinear.
Where the domain and codomain are the same space (i.e. V = W), it may be termed a semilinear transformation. The invertible semilinear transforms of a given vector space V (for all choices of field automorphism) form a group, called the general semilinear group and denoted ΓL(V), by analogy with and extending the general linear group GL(V). In the special case where the field is the complex numbers and the automorphism is complex conjugation, a semilinear map is called an antilinear map.
Similar notation (replacing Latin characters with Greek ones) is used for semilinear analogs of more restricted linear transformations; formally, the semidirect product of a linear group with the Galois group of field automorphisms. For example, PΣU is used for the semilinear analogs of the projective special unitary group PSU. Note, however, that it was only recently noticed that these generalized semilinear groups are not well-defined, as has been pointed out: isomorphic classical groups G and H (subgroups of SL) may have non-isomorphic semilinear extensions. At the level of semidirect products, this corresponds to different actions of the Galois group on a given abstract group, a semidirect product depending on two groups and an action. If the extension is non-unique, there are exactly two semilinear extensions; for example, symplectic groups have a unique semilinear extension, while SU(n, q) has two extensions if n is even and q is odd, and likewise for PSU.
Definition
A map f : V → W for vector spaces V and W over fields K and L respectively is σ-semilinear, or simply semilinear, if there exists a field homomorphism σ : K → L such that for all x, y in V and λ in K it holds that
f(x + y) = f(x) + f(y),
f(λx) = σ(λ) f(x).
A given embedding σ of a field K in L allows us to identify K with a subfield of L, making a σ-semilinear map a K-linear map under this identification. However, a map that is τ-semilinear for a distinct embedding τ ≠ σ will not be K-linear with respect to the original identification σ, unless f is identically zero.
More generally, a map f : M → N between a right R-module M and a left S-module N is σ-semilinear if there exists a ring antihomomorphism σ : R → S such that for all x, y in M and λ in R it holds that
f(x + y) = f(x) + f(y),
f(xλ) = σ(λ) f(x).
The term semilinear applies for any combination of left and right modules with suitable adjustment of the above expressions, with σ being a homomorphism as needed.
The pair (f, σ) is referred to as a dimorphism.
Related
Transpose
Let σ : R → S be a ring isomorphism, M a right R-module and N a right S-module, and f : M → N a σ-semilinear map. Define the transpose of f as the mapping tf : N∗ → M∗ that satisfies
⟨x, tf(y∗)⟩ = σ−1(⟨f(x), y∗⟩) for all x in M and y∗ in N∗.
This is a σ−1-semilinear map.
Properties
Let σ : R → S be a ring isomorphism, M a right R-module and N a right S-module, and f : M → N a σ-semilinear map. For any fixed y∗ in N∗, the mapping
x ↦ σ−1(⟨f(x), y∗⟩)
defines an R-linear form on M.
Examples
Let V = C2 with standard basis e1 = (1, 0), e2 = (0, 1). Define the map f : C2 → C2 by
f(z1 e1 + z2 e2) = z̄1 e1 + z̄2 e2,
where the bar denotes complex conjugation.
f is semilinear (with respect to the complex conjugation field automorphism) but not linear.
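This example can be checked numerically. The sketch below (our illustration) verifies additivity and conjugate-semilinearity for the conjugation map on C2, and that the map fails ordinary C-linearity:

```python
import numpy as np

# Check that f(z1, z2) = (conj(z1), conj(z2)) is additive and semilinear
# with respect to complex conjugation, but not C-linear.

rng = np.random.default_rng(0)

def f(v: np.ndarray) -> np.ndarray:
    return np.conj(v)

v = rng.normal(size=2) + 1j * rng.normal(size=2)
w = rng.normal(size=2) + 1j * rng.normal(size=2)
lam = 2.0 + 3.0j

assert np.allclose(f(v + w), f(v) + f(w))                 # additivity
assert np.allclose(f(lam * v), np.conj(lam) * f(v))       # theta-semilinearity
assert not np.allclose(f(lam * v), lam * f(v))            # hence not linear
print("f is conjugate-semilinear but not C-linear")
```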
Let K = GF(q), the Galois field of order q = p^i, where p is the characteristic. Let θ : λ ↦ λ^p. By the Freshman's dream it is known that this is a field automorphism. To every linear map between vector spaces V and W over K we can associate a θ-semilinear map by applying θ to each coordinate of the image with respect to a fixed basis of W.
Indeed every linear map can be converted into a semilinear map in such a way. This is part of a general observation collected into the following result.
Let R be a noncommutative ring, M a left R-module, and α an invertible element of R. Define the map φ(x) = αx, so φ(λx) = αλx = (αλα^−1)αx = σ(λ)φ(x), where σ(λ) = αλα^−1 is an inner automorphism of R. Thus, the homothety φ need not be a linear map, but is σ-semilinear.
General semilinear group
Given a vector space V, the set of all invertible semilinear transformations (over all field automorphisms) is the group ΓL(V).
Given a vector space V over K, ΓL(V) decomposes as the semidirect product
ΓL(V) = GL(V) ⋊ Aut(K),
where Aut(K) is the group of automorphisms of K. Similarly, semilinear transforms of other linear groups can be defined as the semidirect product with the automorphism group, or more intrinsically as the group of semilinear maps of a vector space preserving some properties.
We identify Aut(K) with a subgroup of ΓL(V) by fixing a basis B for V and defining the semilinear maps:
fθ(Σb∈B λb b) = Σb∈B θ(λb) b
for any θ in Aut(K). We shall denote this subgroup by Aut(K)B. We also see that these complements to GL(V) in ΓL(V) are acted on regularly by GL(V), as they correspond to a change of basis.
Proof
Every linear map is semilinear, thus GL(V) ≤ ΓL(V). Fix a basis B of V. Now given any semilinear map f with respect to a field automorphism θ ∈ Aut(K), define g : V → V by
g(Σb∈B λb b) = Σb∈B λb f(b).
As f(B) is also a basis of V, it follows that g is simply a basis exchange of V and so linear and invertible: g ∈ GL(V).
Set h := g−1 ∘ f. For every v = Σb∈B λb b in V,
h(v) = Σb∈B θ(λb) b,
thus h is in the Aut(K) subgroup relative to the fixed basis B. This factorization is unique to the fixed basis B. Furthermore, GL(V) is normalized by the action of Aut(K)B, so ΓL(V) = GL(V) ⋊ Aut(K)B.
Applications
Projective geometry
The groups ΓL(V) extend the typical classical groups in GL(V). The importance of considering such maps follows from the consideration of projective geometry. The induced action of ΓL(V) on the associated projective space P(V) yields the projective semilinear group, denoted PΓL(V), extending the projective linear group, PGL(V).
The projective geometry of a vector space V, denoted PG(V), is the lattice of all subspaces of V. Although the typical semilinear map is not a linear map, it does follow that every semilinear map f : V → W induces an order-preserving map PG(V) → PG(W). That is, every semilinear map induces a projectivity. The converse of this observation (except for the projective line) is the fundamental theorem of projective geometry. Thus semilinear maps are useful because they define the automorphism group of the projective geometry of a vector space.
Mathieu group
The group PΓL(3,4) can be used to construct the Mathieu group M24, which is one of the sporadic simple groups; PΓL(3,4) is a maximal subgroup of M24, and there are many ways to extend it to the full Mathieu group.
See also
Antilinear map
Complex conjugate vector space
References
Functions and mappings
Linear algebra
Linear operators
Projective geometry | Semilinear map | [
"Mathematics"
] | 1,479 | [
"Functions and mappings",
"Mathematical analysis",
"Mathematical objects",
"Linear operators",
"Mathematical relations",
"Linear algebra",
"Algebra"
] |