Dataset schema (field: type, observed range):
id: int64 (39 to 79M)
url: string (length 32 to 168)
text: string (length 7 to 145k)
source: string (length 2 to 105)
categories: list (length 1 to 6)
token_count: int64 (3 to 32.2k)
subcategories: list (length 0 to 27)
955,164
https://en.wikipedia.org/wiki/Geometric%20group%20theory
Geometric group theory is an area in mathematics devoted to the study of finitely generated groups via exploring the connections between algebraic properties of such groups and topological and geometric properties of spaces on which these groups can act non-trivially (that is, when the groups in question are realized as geometric symmetries or continuous transformations of some spaces). Another important idea in geometric group theory is to consider finitely generated groups themselves as geometric objects. This is usually done by studying the Cayley graphs of groups, which, in addition to the graph structure, are endowed with the structure of a metric space, given by the so-called word metric. Geometric group theory, as a distinct area, is relatively new, and became a clearly identifiable branch of mathematics in the late 1980s and early 1990s. Geometric group theory closely interacts with low-dimensional topology, hyperbolic geometry, algebraic topology, computational group theory and differential geometry. There are also substantial connections with complexity theory, mathematical logic, the study of Lie groups and their discrete subgroups, dynamical systems, probability theory, K-theory, and other areas of mathematics. In the introduction to his book Topics in Geometric Group Theory, Pierre de la Harpe wrote: "One of my personal beliefs is that fascination with symmetries and groups is one way of coping with frustrations of life's limitations: we like to recognize symmetries which allow us to recognize more than what we can see. In this sense the study of geometric group theory is a part of culture, and reminds me of several things that Georges de Rham practiced on many occasions, such as teaching mathematics, reciting Mallarmé, or greeting a friend". History Geometric group theory grew out of combinatorial group theory that largely studied properties of discrete groups via analyzing group presentations, which describe groups as quotients of free groups; this field was first systematically studied by Walther von Dyck, student of Felix Klein, in the early 1880s, while an early form is found in the 1856 icosian calculus of William Rowan Hamilton, where he studied the icosahedral symmetry group via the edge graph of the dodecahedron. Currently combinatorial group theory as an area is largely subsumed by geometric group theory. Moreover, the term "geometric group theory" came to often include studying discrete groups using probabilistic, measure-theoretic, arithmetic, analytic and other approaches that lie outside of the traditional combinatorial group theory arsenal. In the first half of the 20th century, pioneering work of Max Dehn, Jakob Nielsen, Kurt Reidemeister and Otto Schreier, J. H. C. Whitehead, Egbert van Kampen, amongst others, introduced some topological and geometric ideas into the study of discrete groups. Other precursors of geometric group theory include small cancellation theory and Bass–Serre theory. Small cancellation theory was introduced by Martin Grindlinger in the 1960s and further developed by Roger Lyndon and Paul Schupp. It studies van Kampen diagrams, corresponding to finite group presentations, via combinatorial curvature conditions and derives algebraic and algorithmic properties of groups from such analysis. Bass–Serre theory, introduced in the 1977 book of Serre, derives structural algebraic information about groups by studying group actions on simplicial trees. 
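In a standard formulation (added here for illustration of the word metric and quasi-isometry mentioned above): for a group G with finite generating set S, the distance between g, h ∈ G is

$$ d_S(g,h) \;=\; \min\{\, n \ge 0 \;:\; g^{-1}h = s_1 s_2 \cdots s_n,\ s_i \in S \cup S^{-1} \,\}, $$

and a map f : (X, d_X) → (Y, d_Y) between metric spaces is a quasi-isometry if there are constants λ ≥ 1 and C ≥ 0 with

$$ \tfrac{1}{\lambda}\, d_X(x,x') - C \;\le\; d_Y\bigl(f(x), f(x')\bigr) \;\le\; \lambda\, d_X(x,x') + C \quad\text{for all } x, x' \in X, $$

and every point of Y within distance C of the image f(X). Changing the finite generating set S changes d_S only up to quasi-isometry, which is why quasi-isometry invariants of the Cayley graph are genuine invariants of the group.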
External precursors of geometric group theory include the study of lattices in Lie groups, especially Mostow's rigidity theorem, the study of Kleinian groups, and the progress achieved in low-dimensional topology and hyperbolic geometry in the 1970s and early 1980s, spurred, in particular, by William Thurston's Geometrization program. The emergence of geometric group theory as a distinct area of mathematics is usually traced to the late 1980s and early 1990s. It was spurred by Mikhail Gromov's 1987 monograph "Hyperbolic groups", which introduced the notion of a hyperbolic group (also known as a word-hyperbolic, Gromov-hyperbolic or negatively curved group), capturing the idea of a finitely generated group having large-scale negative curvature, and by his subsequent monograph Asymptotic Invariants of Infinite Groups, which outlined Gromov's program of understanding discrete groups up to quasi-isometry. The work of Gromov had a transformative effect on the study of discrete groups, and the phrase "geometric group theory" started appearing soon afterwards. Modern themes and developments Notable themes and developments in geometric group theory in the 1990s and 2000s include: Gromov's program to study quasi-isometric properties of groups. A particularly influential broad theme in the area is Gromov's program of classifying finitely generated groups according to their large-scale geometry. Formally, this means classifying finitely generated groups with their word metric up to quasi-isometry. This program involves: The study of properties that are invariant under quasi-isometry. Examples of such properties of finitely generated groups include: the growth rate of a finitely generated group; the isoperimetric function or Dehn function of a finitely presented group; the number of ends of a group; hyperbolicity of a group; the homeomorphism type of the Gromov boundary of a hyperbolic group; asymptotic cones of finitely generated groups; amenability of a finitely generated group; being virtually abelian (that is, having an abelian subgroup of finite index); being virtually nilpotent; being virtually free; being finitely presentable; being a finitely presentable group with solvable word problem; and others. Theorems which use quasi-isometry invariants to prove algebraic results about groups, for example: Gromov's polynomial growth theorem; Stallings' ends theorem; the Mostow rigidity theorem. Quasi-isometric rigidity theorems, in which one classifies algebraically all groups that are quasi-isometric to some given group or metric space. This direction was initiated by the work of Schwartz on quasi-isometric rigidity of rank-one lattices and the work of Benson Farb and Lee Mosher on quasi-isometric rigidity of Baumslag–Solitar groups. The theory of word-hyperbolic and relatively hyperbolic groups. A particularly important development here is the work of Zlil Sela in the 1990s resulting in the solution of the isomorphism problem for word-hyperbolic groups. The notion of a relatively hyperbolic group was originally introduced by Gromov in 1987 and refined by Farb and Brian Bowditch in the 1990s. The study of relatively hyperbolic groups gained prominence in the 2000s. Interactions with mathematical logic and the study of the first-order theory of free groups. Particularly important progress occurred on the famous Tarski conjectures, due to the work of Sela as well as of Olga Kharlampovich and Alexei Myasnikov. 
The study of limit groups and introduction of the language and machinery of non-commutative algebraic geometry gained prominence. Interactions with computer science, complexity theory and the theory of formal languages. This theme is exemplified by the development of the theory of automatic groups, a notion that imposes certain geometric and language-theoretic conditions on the multiplication operation in a finitely generated group. The study of isoperimetric inequalities, Dehn functions and their generalizations for finitely presented groups. This includes, in particular, the work of Jean-Camille Birget, Aleksandr Olʹshanskiĭ, Eliyahu Rips and Mark Sapir essentially characterizing the possible Dehn functions of finitely presented groups, as well as results providing explicit constructions of groups with fractional Dehn functions. The theory of toral or JSJ-decompositions for 3-manifolds was originally brought into a group-theoretic setting by Peter Kropholler. This notion has been developed by many authors for both finitely presented and finitely generated groups. Connections with geometric analysis, the study of C*-algebras associated with discrete groups and of the theory of free probability. This theme is represented, in particular, by considerable progress on the Novikov conjecture and the Baum–Connes conjecture and the development and study of related group-theoretic notions such as topological amenability, asymptotic dimension, uniform embeddability into Hilbert spaces, rapid decay property, and so on. Interactions with the theory of quasiconformal analysis on metric spaces, particularly in relation to Cannon's conjecture about the characterization of hyperbolic groups with Gromov boundary homeomorphic to the 2-sphere. Finite subdivision rules, also in relation to Cannon's conjecture. Interactions with topological dynamics in the contexts of studying actions of discrete groups on various compact spaces and group compactifications, particularly convergence group methods. Development of the theory of group actions on ℝ-trees (particularly the Rips machine), and its applications. The study of group actions on CAT(0) spaces and CAT(0) cubical complexes, motivated by ideas from Alexandrov geometry. Interactions with low-dimensional topology and hyperbolic geometry, particularly the study of 3-manifold groups, mapping class groups of surfaces, braid groups and Kleinian groups. Introduction of probabilistic methods to study algebraic properties of "random" group-theoretic objects (groups, group elements, subgroups, etc.). A particularly important development here is the work of Gromov who used probabilistic methods to prove the existence of a finitely generated group that is not uniformly embeddable into a Hilbert space. Other notable developments include introduction and study of the notion of generic-case complexity for group-theoretic and other mathematical algorithms and algebraic rigidity results for generic groups. The study of automata groups and iterated monodromy groups as groups of automorphisms of infinite rooted trees. In particular, Grigorchuk's groups of intermediate growth, and their generalizations, appear in this context. The study of measure-theoretic properties of group actions on measure spaces, particularly introduction and development of the notions of measure equivalence and orbit equivalence, as well as measure-theoretic generalizations of Mostow rigidity. 
The study of unitary representations of discrete groups and Kazhdan's property (T). The study of Out(Fn) (the outer automorphism group of a free group of rank n) and of individual automorphisms of free groups. Introduction and the study of Culler–Vogtmann's outer space and of the theory of train tracks for free group automorphisms played a particularly prominent role here. Development of Bass–Serre theory, particularly various accessibility results and the theory of tree lattices. Generalizations of Bass–Serre theory such as the theory of complexes of groups. The study of random walks on groups and related boundary theory, particularly the notion of the Poisson boundary. The study of amenability and of groups whose amenability status is still unknown. Interactions with finite group theory, particularly progress in the study of subgroup growth. Studying subgroups and lattices in linear groups and other Lie groups, via geometric methods (e.g. buildings), algebro-geometric tools (e.g. algebraic groups and representation varieties), analytic methods (e.g. unitary representations on Hilbert spaces) and arithmetic methods. Group cohomology, using algebraic and topological methods, particularly involving interaction with algebraic topology and the use of Morse-theoretic ideas in the combinatorial context; large-scale, or coarse, homological and cohomological methods. Progress on traditional combinatorial group theory topics, such as the Burnside problem, the study of Coxeter groups and Artin groups, and so on (the methods used to study these questions currently are often geometric and topological). Examples The following examples are often studied in geometric group theory: Amenable groups Free Burnside groups The infinite cyclic group Z Free groups Free products Outer automorphism groups Out(Fn) (via outer space) Hyperbolic groups Mapping class groups (automorphisms of surfaces) Symmetric groups Braid groups Coxeter groups General Artin groups Thompson's group F CAT(0) groups Arithmetic groups Automatic groups Fuchsian groups, Kleinian groups, and other groups acting properly discontinuously on symmetric spaces, in particular lattices in semisimple Lie groups. Wallpaper groups Baumslag–Solitar groups Fundamental groups of graphs of groups Grigorchuk group See also The ping-pong lemma, a useful way to exhibit a group as a free product Amenable group Nielsen transformation Tietze transformation References Books and monographs These texts cover geometric group theory and related topics. External links Jon McCammond's Geometric Group Theory Page What is Geometric Group Theory? By Daniel Wise Open Problems in combinatorial and geometric group theory Geometric group theory Theme on arxiv.org Group theory
Geometric group theory
[ "Physics", "Mathematics" ]
2,672
[ "Geometric group theory", "Group actions", "Group theory", "Fields of abstract algebra", "Symmetry" ]
955,298
https://en.wikipedia.org/wiki/Biopunk
Biopunk (a portmanteau of "biotechnology" or "biology" and "punk") is a subgenre of science fiction that focuses on biotechnology. It is derived from cyberpunk, but focuses on the implications of biotechnology rather than mechanical cyberware and information technology. Biopunk is concerned with synthetic biology and typically involves bio-hackers, biotech megacorporations, and oppressive government agencies that manipulate human DNA. In keeping with the dark atmosphere of cyberpunk, biopunk generally examines the dark side of genetic engineering and depicts the potential perils of biotechnology. Description Biopunk is a subgenre of science fiction closely related to cyberpunk that focuses on the near-future (most often unintended) consequences of the biotechnology revolution following the invention of recombinant DNA. Biopunk stories explore the struggles of individuals or groups, often the product of human experimentation, against a typically dystopian backdrop of totalitarian governments and megacorporations which misuse biotechnologies as means of social control and profiteering. Unlike cyberpunk, it builds not on information technology, but on synthetic biology. As in postcyberpunk fiction, individuals are usually modified and enhanced not with cyberware, but by genetic manipulation. A common feature of biopunk fiction is the "black clinic", a laboratory, clinic, or hospital that performs illegal, unregulated, or ethically dubious biological modification and genetic engineering procedures. Many features of biopunk fiction have their roots in William Gibson's Neuromancer, one of the first cyberpunk novels. One of the prominent writers in this field is Paul Di Filippo, though he called his collection of such stories ribofunk, a blend of "ribosome" and "funk". Di Filippo suggests that precursors of biopunk fiction include H. G. Wells' The Island of Doctor Moreau; Julian Huxley's The Tissue-Culture King; some of David H. Keller's stories; Damon Knight's Natural State and Other Stories; Frederik Pohl and Cyril M. Kornbluth's Gravy Planet; the novels of T. J. Bass and John Varley; Greg Bear's Blood Music; and Bruce Sterling's Schismatrix. The stories of Cordwainer Smith, including his first and most famous, Scanners Live in Vain, also foreshadow biopunk themes. Another example is the New Jedi Order series published from 1999 to 2003, which prominently features the Yuuzhan Vong, who exclusively use biotechnology. See also List of biopunk works Cyberpunk Cyberpunk derivatives Nanopunk Dieselpunk Steampunk Solarpunk Seapunk Genetic engineering in fiction Grinder (biohacking) Human enhancement Transhumanism References External links Hackteria.org, a community for bio-artists Biology and culture Biocybernetics Bioinformatics Molecular genetics Postmodernism Science fiction genres Synthetic biology Systems biology Subcultures Transhumanism 1990s neologisms
Biopunk
[ "Chemistry", "Technology", "Engineering", "Biology" ]
643
[ "Synthetic biology", "Biological engineering", "Systems biology", "Genetic engineering", "Transhumanism", "Bioinformatics", "Molecular genetics", "Ethics of science and technology", "Molecular biology" ]
956,499
https://en.wikipedia.org/wiki/Application%20firewall
An application firewall is a form of firewall that controls input/output or system calls of an application or service. It operates by monitoring and blocking communications based on a configured policy, generally with predefined rule sets to choose from. The two primary categories of application firewalls are network-based and host-based. History Gene Spafford of Purdue University, Bill Cheswick at AT&T Laboratories, and Marcus Ranum described a third-generation firewall known as an application layer firewall. Marcus Ranum's work, based on the firewall created by Paul Vixie, Brian Reid, and Jeff Mogul, spearheaded the creation of the first commercial product. The product was released by DEC, named the DEC SEAL (Secure External Access Link) by Geoff Mulligan. DEC's first major sale was on June 13, 1991, to DuPont. Under a broader DARPA contract at TIS, Marcus Ranum, Wei Xu, and Peter Churchyard developed the Firewall Toolkit (FWTK) and made it freely available under license in October 1993. The purposes for releasing the freely available, not-for-commercial-use FWTK were: to demonstrate, via the software, documentation, and methods used, how a company with (at the time) 11 years' experience in formal security methods, and individuals with firewall experience, developed firewall software; to create a common base of very good firewall software for others to build on (so people did not have to continue to "roll their own" from scratch); and to "raise the bar" of firewall software being used. However, FWTK was a basic application proxy requiring user interaction. In 1994, Wei Xu extended the FWTK with kernel enhancements for IP stateful filtering and socket transparency. This was the first transparent firewall, regarded as the inception of the third-generation firewall, beyond the traditional application proxy (the second-generation firewall), and it was released as the commercial product known as the Gauntlet firewall. Gauntlet firewall was rated one of the top application firewalls from 1995 until 1998, the year it was acquired by Network Associates Inc. (NAI). Network Associates continued to claim that Gauntlet was the "world's most secure firewall", but in May 2000, security researcher Jim Stickley discovered a large vulnerability in the firewall, allowing remote access to the operating system and bypassing the security controls. Stickley discovered a second vulnerability a year later, effectively ending the Gauntlet firewall's security dominance. Description Application layer filtering operates at a higher level than traditional security appliances. This allows packet decisions to be made based on more than just source/destination IP address or port, and can also use information spanning multiple connections for any given host. Network-based application firewalls Network-based application firewalls operate at the application layer of a TCP/IP stack and can understand certain applications and protocols such as File Transfer Protocol (FTP), Domain Name System (DNS), or Hypertext Transfer Protocol (HTTP). This allows them to identify unwanted applications or services using a non-standard port or to detect if an allowed protocol is being abused. Modern versions of network-based application firewalls can include the following technologies: Encryption offloading Intrusion prevention system Data loss prevention Web application firewalls (WAF) are a specialized version of a network-based appliance that acts as a reverse proxy, inspecting traffic before it is forwarded to an associated server. 
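As an illustration of the application-layer decisions described above, the following minimal Python sketch (hypothetical; the rule set and request fields are invented for the example and are not drawn from any particular product) accepts or rejects an HTTP request based on its method, host, and path rather than on addresses and ports alone:

```python
from dataclasses import dataclass

@dataclass
class HttpRequest:
    method: str
    host: str
    path: str

# Hypothetical deny rules: each one inspects application-layer fields of the
# request, not just source/destination addresses or ports.
DENY_RULES = [
    lambda r: r.method not in {"GET", "POST", "HEAD"},  # unexpected HTTP verbs
    lambda r: ".." in r.path,                           # crude path-traversal check
    lambda r: r.host.endswith(".internal"),             # internal hosts reached via the proxy
]

def allow(request: HttpRequest) -> bool:
    """Return True only if no deny rule matches the request."""
    return not any(rule(request) for rule in DENY_RULES)

if __name__ == "__main__":
    print(allow(HttpRequest("GET", "example.com", "/index.html")))     # True
    print(allow(HttpRequest("GET", "example.com", "/../etc/passwd")))  # False
```

A production network-based firewall or WAF applies much richer, stateful rule sets to live traffic, but the shape of the allow/deny decision is the same.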
Host-based application firewalls A host-based application firewall monitors application system calls or other general system communication. This gives more granularity and control, but is limited to only protecting the host it is running on. Control is applied by filtering on a per process basis. Generally, prompts are used to define rules for processes that have not yet received a connection. Further filtering can be done by examining the process ID of the owner of the data packets. Many host-based application firewalls are combined or used in conjunction with a packet filter. Due to technological limitations, modern solutions such as sandboxing are being used as a replacement of host-based application firewalls to protect system processes. Implementations There are various application firewalls available, including both free and open source software and commercial products. Mac OS X Starting with Mac OS X Leopard, an implementation of the TrustedBSD MAC framework (taken from FreeBSD), was included. The TrustedBSD MAC framework is used to sandbox services and provides a firewall layer, given the configuration of the sharing services in Mac OS X Leopard and Snow Leopard. Third-party applications can provide extended functionality, including filtering out outgoing connections by app. Linux This is a list of security software packages for Linux, allowing filtering of application to OS communication, possibly on a by-user basis: AppArmor Kerio Control — a commercial product from Kerio Technologies ModSecurity — also works under Windows, Mac OS X, Oracle Solaris and other versions of Unix. ModSecurity is designed to work with the Web servers IIS, Apache2 and NGINX. Portmaster — an activity monitoring application by Safing. It is also available for Microsoft Windows. Systrace Zorp firewall Windows Portmaster Microsoft Defender Firewall WinGate Network appliances These devices may be sold as hardware, software, or virtualized network appliances. Next-Generation Firewalls: Cisco Firepower Threat Defense Check Point Fortinet FortiGate Series Juniper Networks SRX Series Palo Alto Networks SonicWALL TZ/NSA/SuperMassive Series Web Application Firewalls/LoadBalancers: A10 Networks Web Application Firewall Barracuda Networks Web Application Firewall/Load Balancer ADC Citrix NetScaler F5 Networks BIG-IP Application Security Manager Fortinet FortiWeb Series KEMP Technologies Imperva Others: CloudFlare Meraki Smoothwall Snapt Inc See also ModSecurity Computer security Content-control software Proxy server Information security Application security Network security References External links Web Application Firewall, Open Web Application Security Project Web Application Firewall Evaluation Criteria, from the Web Application Security Consortium Safety in the cloud(s): 'Vaporizing' the Web application firewall to secure cloud computing Firewall software Packets (information technology) Data security Cyberwarfare
Application firewall
[ "Engineering" ]
1,287
[ "Cybersecurity engineering", "Data security" ]
956,525
https://en.wikipedia.org/wiki/Knox%20Mine%20disaster
The Knox Mine disaster was a mining accident on January 22, 1959, at the River Slope Mine, an anthracite coal mine, in Jenkins Township, Pennsylvania. The Susquehanna River broke through the ceiling and flooded the mine. Twelve miners were killed. The accident marked nearly the end of deep mining in the northern anthracite field of Pennsylvania. Accident The River Slope mine was leased by the Knox Coal Company from the Pennsylvania Coal Company. In late 1956, with the first lease approaching exhaustion, Knox extended the mine into a new area, much of which was under the Susquehanna. It was legal to mine under the river, but various required precautions were neglected. The thickness of the roof (rock under which drilling was done) was largely unknown. It was supposed to be established by drilling boreholes down from the riverbed. A thickness of at least was considered normal. But the mine was extended into areas where no boreholes had been drilled, and Knox did not drill any new ones. Knox dug chambers beyond what had been requested in the original proposal, without updating mine maps, and chambers climbed toward the surface, to follow the coal seam. Ultimately it was found that the location of the cave-in had a roof cover of only . On the day of the cave-in, the river was dangerously high and icy due to a thaw and heavy rain. The hole in the riverbed caused the river to flood into many interconnected mine galleries in the Wyoming Valley between the right-bank (western shore) town of Exeter, Pennsylvania, and the left-bank (eastern shore) town of Port Griffith in Jenkins Township, near Pittston. It took three days to plug the hole, which was done by dumping large railroad cars, smaller mine cars, culm, and other debris into the whirlpool formed by the water draining into the mine. Eventually, an estimated of water filled the mines. Twelve mineworkers died, out of 81 who had reported to work. Amedeo Pancotti was awarded the Carnegie Medal for climbing up the abandoned Eagle Air Shaft and alerting rescuers, which resulted in the safe recovery of 33 men including Pancotti himself. The bodies of the twelve who died were never recovered, despite efforts to pump the water out of the mine. The victims were Samuel Altieri, John Baloga, Benjamin Boyar, Francis Burns, Charles Featherman, Joseph Gizenski, Dominick Kaveliski, Frank Orlowski, Eugene Ostrowski, William Sinclair, Daniel Stefanides, and Herman Zelonis. Aftermath and legacy In the months after the hole in the riverbed was plugged, the mine was made safe for entry by sealing the breach. First a cofferdam was built around it in the Susquehanna; then water was pumped out from inside the cofferdam to expose the riverbed; and loam and clay were dumped over the breach. When it was safe to enter the mine, a workforce reinforced the spot with iron bars, built wooden bulkheads, and poured concrete into the prepared area through boreholes that had been drilled in the riverbed. Finally the cofferdam was removed, allowing the Susquehanna to take its course. Seven people were indicted on charges of involuntary manslaughter as a result of the disaster, including Robert Dougherty and Louis Fabrizio, owners of the Knox Coal Company; August J. Lippi, president of District 1 of the United Mine Workers; the superintendent and an assistant foreman; and two engineers from the Pennsylvania Coal Company. Although some were convicted, the convictions were reversed on appeal. 
Twelve persons and three companies were convicted of giving or accepting bribes, violating the Taft–Hartley labor law, or tax evasion. These included Dougherty, Fabrizio, Lippi, and two union officials, who served jail time. During the course of his trials, Lippi was found to be secretly a co-owner of the Knox Coal Company, in violation of the Taft–Hartley labor law. After the disaster, the widows of the twelve victims did not receive death benefit payments from the Anthracite Health and Welfare Fund for more than four years. Within months of the Knox mine disaster, large companies including the Pennsylvania Coal Company, from which the River Slope Mine had been leased, started withdrawing from the anthracite business. By the 1970s, no underground mines were extracting anthracite from the northern field. Anthracite production nationally had been in decline since 1917, with only a small rebound during World War II. See also Avondale Mine disaster References Bibliography External links Underground Miners archive on Knox Mine Disaster Pittston holds remembrance of Knox Mine Disaster U.S. Mine Safety Report, archived Jan. 14, 2015 from the original "Knox Mine Disaster" Pennsylvania Historical Marker Anthracite Coal Region of Pennsylvania Coal mining disasters in Pennsylvania Disasters in Pennsylvania Engineering failures History of Luzerne County, Pennsylvania January 1959 events in the United States 1959 disasters in the United States 1959 in Pennsylvania 1959 mining disasters
Knox Mine disaster
[ "Technology", "Engineering" ]
1,011
[ "Systems engineering", "Reliability engineering", "Technological failures", "Engineering failures", "Civil engineering" ]
956,709
https://en.wikipedia.org/wiki/Time%20clock
A time clock, sometimes known as a clock card machine, punch clock, or time recorder, is a device that records start and end times for hourly employees (or those on flexi-time) at a place of business. In mechanical time clocks, this was accomplished by inserting a heavy paper card, called a time card, into a slot on the time clock. When the time card hit a contact at the rear of the slot, the machine would print day and time information (a timestamp) on the card. One or more time cards could serve as a timesheet or provide the data to fill one. This allowed a timekeeper to have an official record of the hours an employee worked to calculate the pay owed an employee. The terms bundy clock, or just bundy have been used in Australian English for time clocks. The term comes from brothers Willard and Harlow Bundy. History Origins An early type of time clock, the autograph recorder, was patented in Chicago USA by Charles E, Van Voorhis in 1888, and manufactured by the Chicago Time Register Company. In this design, the employee signs his name on roll and operates a lever to obtain an automatic time stamp from an associated clock mechanism.. An improved version of the system was developed by Dr Alexander Shiels of "Kosmoid" associations (patent issued 1904) and later re-manufactured, again under the "Kosmoid" brand, by the Rusmoid Company of London, in around 1920. An early and influential time clock, sometimes described as the first, was invented on November 20, 1888, by Willard Le Grand Bundy, a jeweler in Auburn, New York. His patent of 1890 speaks of mechanical time recorders for workers in terms that suggest that earlier recorders already existed, but Bundy's had various improvements; for example, each worker had his own key. A year later his brother, Harlow Bundy, organized the Bundy Manufacturing Company, and began mass-producing time clocks. In 1900, the time recording business of Bundy Manufacturing, along with two other time equipment businesses, was consolidated into the International Time Recording Company (ITR). In 1911, ITR, Bundy Mfg., and two other companies were amalgamated (via stock acquisition), forming a fifth company, Computing-Tabulating-Recording Company (CTR), which would later change its name to IBM. The Bundy clock (see image) was used by Birmingham City Transport to ensure that bus drivers did not depart from outlying termini before the due time; now preserved at Walsall Arboretum. In 1909, Halbert P. Gillette explained about the state of the art around time clocks in those days: An example of this other form of time clock, made by IBM, is pictured. The face shows employee numbers which would be dialed up by employees entering and leaving the factory. The day and time of entry and exit was punched onto cards inside the box. Mid 20th century In 1958, IBM's Time Equipment Division was sold to the Simplex Time Recorder Company. However, in the United Kingdom ITR (a subsidiary of IBM United Kingdom Ltd.) was the subject of a management buy-out in 1963 and reverted to International Time Recorders. In 1982, International Time Recorders was acquired by Blick Industries of Swindon, England, who were themselves later absorbed by Stanley Security Systems. The first punched-card system to be linked to a Z80 microprocessor was developed by Kronos Incorporated in the late 1970s and introduced as a product in 1979. Late 20th century In the late 20th century, time clocks started to move away from the mechanical machines to computer-based, electronic time and attendance systems. 
The employee registers with the system by swiping a magnetic stripe card, scanning a barcode, bringing an RFID (radio-frequency identification) tag close to a reader, entering a number or using a biometric reader. These systems are much more advanced than the mechanical time clock: various reports can be generated, including on compliance with the European Working Time Directive, and a Bradford factor report. Employees can also use the system to request holidays, enter absence requests and view their worked hours. User interfaces can be personalized and offer robust self-service capabilities. Electronic time clock machines are manufactured in many designs by companies in China and sold under various brand names in places around the world, with accompanying software to extract the data from a single time clock machine, or several machines, and process the data into reports. In most cases local suppliers offer technical support and in some cases installation services. More recently, time clocks have started to adopt technology commonly seen in phones and tablets – called 'Smartclocks'. The "state of the art" smartclocks come with multi-touch screens, full color displays, real time monitoring for problems, wireless networking and over the air updates. Some of the smartclocks use front-facing cameras to capture employee clock-ins to deter "buddy clocking" or "buddy punching", whereby one employee fraudulently records the time of another. This problem usually requires expensive biometric devices. With the increasing popularity of cloud-based software, some of the newer time clocks are built to work seamlessly with the cloud. Types Basic time clock A basic time clock will just stamp the date and time on a time card, similar to a parking validation machine. These will usually be activated by a button that a worker must press to stamp their card, or stamp upon full insertion. Some machines use punch hole cards instead of stamping, which can facilitate automated processing on machinery not capable of optical character recognition. There are also variations based on manufacture and machine used, and whether the user wants to record weekly or monthly recordings. The time cards usually have the workdays, "time in", and "time out" areas marked on them so that employees can "punch in" or "punch out" in the correct place. The employee may be responsible for lining up the correct area of the card to be punched or stamped. Some time clocks feature a bell or signal relay to alert employees as to a certain time or break. Fraudulent operation of time clocks can include overstamping, where one time is stamped over another, and buddy-punching, where one employee records time for another. In extreme cases, employees can use buddy-punching to skip entire days of work or accumulate additional overtime. Self-calculating machines Self-calculating machines are similar to basic time clocks. Nevertheless, at the end of each period, the total time recorded is added up, allowing for quicker processing by human resources or payroll. These machines sometimes have other functions such as automatic stamping, dual-colour printing, and automated column shift. Software-based Time and Attendance Systems Software-based time and attendance systems operate similarly to traditional paper-based methods, but they leverage computers and electronic check-in terminals for data input. These systems often integrate with human resources and payroll software, streamlining processes and reducing administrative overhead. 
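To make the "self-calculating" step concrete, the following short Python sketch (the punch data and format are invented for this example) totals worked hours from clock-in/clock-out pairs, which is the basic computation a software-based time and attendance system performs before passing results to payroll:

```python
from datetime import datetime

# Hypothetical punch records: (clock-in, clock-out) pairs for one pay period.
punches = [
    ("2024-03-04 08:58", "2024-03-04 17:02"),
    ("2024-03-05 09:01", "2024-03-05 17:00"),
    ("2024-03-06 08:55", "2024-03-06 12:30"),
]

def total_hours(pairs):
    """Sum the elapsed time of each in/out pair and return decimal hours."""
    fmt = "%Y-%m-%d %H:%M"
    seconds = sum(
        (datetime.strptime(out, fmt) - datetime.strptime(clock_in, fmt)).total_seconds()
        for clock_in, out in pairs
    )
    return seconds / 3600.0

print(f"Total worked: {total_hours(punches):.2f} hours")  # prints 19.63 for the data above
```

Real systems add rounding rules, break deductions, and overtime thresholds on top of this basic sum.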
The upfront investment in such software can be considerable, which makes these systems more common in larger organizations with over 30 employees, but the efficiency gains and reduction in manual errors typically result in significant long-term cost savings. Mobile time tracking With the mass-market proliferation of mobile devices (smart phones, handheld devices), new types of self-calculating time tracking systems have been invented which allow a mobile workforce, such as painting companies or construction companies, to track employees' 'on' and 'off' hours. This is generally accomplished through either a mobile application or an IVR-based phone call-in system. Using a mobile device allows enterprises to better validate that their employees or suppliers are physically 'clocking in' at a specific location, using the GPS functionality of a mobile phone for extra validation. Biometrics Biometric time clocks are a feature of more advanced time and attendance systems. Rather than using a key, code or chip to identify the user, they rely on a unique attribute of the user, such as a hand print, finger print, finger vein, palm vein, facial recognition, iris or retina. The user will have their attribute scanned into the system. Biometric readers are often used in conjunction with an access control system, granting the user access to a building and at the same time clocking them in, recording the time and date. These systems also attempt to cut down on fraud such as "buddy clocking." When combined with an access control system they can help prevent other types of fraud such as 'ghost employees', where additional identities are added to payroll but do not exist. See also Time and attendance Timekeeper Timesheet Game clock Stopwatch Workforce management References External links IBM Time Clocks (PDF files) National Museum of American History: International Dial Time Recorder Clock www.timerecorder.de/ (mostly in German, but partly translated into English) is one of the most comprehensive online documentations of the history of time recorders and time clocks Harlow Bundy's home in Binghamton, which was restored and now houses a museum dedicated to the Bundys. Washington Becomes Third State to Enact Biometric Privacy Law Texas Business and Commerce Code - BUS & COM § 503.001 | FindLaw Products introduced in 1888 Clocks Working time
Time clock
[ "Physics", "Technology", "Engineering" ]
2,068
[ "Physical systems", "Machines", "Clocks", "Measuring instruments" ]
30,700,069
https://en.wikipedia.org/wiki/Otto%20Redlich
Otto Redlich (November 4, 1896 – August 14, 1978) was an Austrian physical chemist who is best known for his development of equations of state such as the Redlich–Kwong equation. Redlich also made numerous other contributions to science. He won the Haitinger Prize of the Austrian Academy of Sciences in 1932. Biography Redlich was born in 1896 in Vienna, Austria. He went to school in the Döbling district of Vienna. After finishing school in 1915 he joined the Austro-Hungarian Army and served as an artillery officer, mainly at the Italian front, in World War I. He was wounded and became a prisoner of war in August 1918. He returned to Vienna after the war in 1919. He studied chemistry and received his doctorate in 1922 for work on the equilibrium of nitric acid, nitrous and nitric oxide. Redlich worked for one year in industry before joining Emil Abel at the University of Vienna. He became a lecturer in 1929 and a professor in 1937. During this time he developed the Teller–Redlich isotopic product rule. After the Anschluss in March 1938, Austria became a part of Nazi Germany, and with the implementation of the Nuremberg Laws all government-employed Jews lost their jobs, including academics. Like many other scientists, Redlich tried to leave Nazi-governed Austria. With the help of the Emergency Committee in Aid of Displaced Foreign Scholars, Redlich was able to emigrate to the United States in December 1938. He gave lectures at several universities and met Gilbert N. Lewis and Linus Pauling. Harold Urey helped him to obtain a position at Washington State College. In 1945 he left the college to work in industry, at Shell Development Co. in Emeryville, California. He published his paper on the improvement of the ideal gas equation in 1949, today known as the Redlich–Kwong equation of state. In 1962 Redlich retired from Shell and received a position at the University of California, Berkeley. He died in California in 1978. Bibliography References Jewish emigrants from Austria after the Anschluss to the United States Washington State University faculty Academic staff of the University of Vienna University of California, Berkeley faculty Jewish American scientists Scientists from Vienna Thermodynamicists Austrian physical chemists American physical chemists 1978 deaths 1896 births Austrian chemical engineers American chemical engineers 20th-century American engineers 20th-century American Jews 20th-century American chemists
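For reference, the Redlich–Kwong equation of state mentioned above is commonly written (standard textbook form) as

$$ p \;=\; \frac{RT}{V_m - b} \;-\; \frac{a}{\sqrt{T}\,V_m\,(V_m + b)}, \qquad a = 0.42748\,\frac{R^2 T_c^{5/2}}{p_c}, \qquad b = 0.08664\,\frac{R\,T_c}{p_c}, $$

where p is the pressure, T the temperature, V_m the molar volume, R the gas constant, and T_c and p_c the critical temperature and pressure of the substance.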
Otto Redlich
[ "Physics", "Chemistry" ]
482
[ "Thermodynamics", "Thermodynamicists" ]
40,038,058
https://en.wikipedia.org/wiki/International%20Society%20for%20Soil%20Mechanics%20and%20Geotechnical%20Engineering
The International Society for Soil Mechanics and Geotechnical Engineering (ISSMGE) is an international professional association, presently based in London, representing engineers, academics and contractors involved in geotechnical engineering. It is a federation of 90 member societies representing 91 countries around the world, which together give it a total of some 21,000 individual members. There are also 43 corporate associates from industry. The current ISSMGE President is Dr Marc Ballouz. History The ISSMGE originated in the International Conference on Soil Mechanics and Foundation Engineering, held in June 1936 at Harvard University as one of many events held to mark the university's 300th anniversary. Arthur Casagrande of the Harvard faculty gained university support for an international conference on soil mechanics and successfully persuaded Karl Terzaghi, who was then working in Vienna, to preside. The conference attracted 206 delegates from 20 countries. The success of the five-day conference led participants to decide to establish a quadrennial International Conference on Soil Mechanics and Foundation Engineering (ICSMFE) as a permanent institution. An executive committee was set up, with Terzaghi as the first president, Casagrande as its first secretary, and Philip Rutledge, also of Harvard, as treasurer. Due to World War II, the second ICSMFE was not held until June 1948. The 1948 conference, held in Rotterdam, had 596 delegates. After the third ICSMFE was held five years later in Zurich in 1953, the organization settled into a pattern of meeting every four years. In the meantime, a first regional conference was held in Australasia in 1952. Terzaghi was first president of the ICSMFE, serving until 1957 when he was succeeded by Alec Skempton. Casagrande succeeded Skempton, holding the presidency from 1961 to 1965. A steering committee was established in 1981, and it became the Board in 1985. This meets annually; the Council meets every two years. The current name, the International Society for Soil Mechanics and Geotechnical Engineering, was adopted in 1997. ISSMGE membership has increased through its history. There were 32 member societies and 2500 individual members by 1957, 50 societies and 11,500 individuals in 1977, and 71 societies and 16,500 individuals in 1998. As of 2012, ISSMGE's reported membership totaled 19,000 individuals in 90 countries. ISSMGE is a member of the Federation of International Geo-Engineering Societies (FedIGS). ISSMGE Presidents and Secretaries-General The ISSMGE Presidents over its history are as follows: The secretaries general have been as follows: Activities The ISSMGE organises conferences on subjects including deep foundations, earthquake engineering and underground construction. Its main events continue to be the quadrennial International Conference on Soil Mechanics and Geotechnical Engineering (ICSMGE), plus five quadrennial regional conferences, Young Geotechnical Engineers' Conferences, and specialist international conferences, symposia and workshops. In addition to a bimonthly bulletin and various technical committee reports, the society publishes an official scientific journal in collaboration with Geoengineer.org, the International Journal of Geoengineering Case Histories. This is a peer-reviewed online journal that presents reports of observations and data collected in the practice of geotechnical engineering, earthquake engineering, environmental geotechnics, and engineering geology. 
References External links ISSMGE website Federation of International Geo-Engineering Societies City, University of London Geotechnical organizations International organisations based in London International professional associations Organisations based in the London Borough of Islington Soil Mechanics Scientific organizations established in 1936 Soil mechanics
International Society for Soil Mechanics and Geotechnical Engineering
[ "Physics", "Engineering" ]
735
[ "Soil mechanics", "Applied and interdisciplinary physics", "Geotechnical organizations", "Civil engineering organizations" ]
40,039,971
https://en.wikipedia.org/wiki/Parbuckle%20salvage
Parbuckle salvage, or parbuckling, is the righting of a sunken vessel using rotational leverage. A common operation with smaller watercraft, parbuckling is also employed to right large vessels. In 1943, the was rotated nearly 180 degrees to upright after being sunk in the attack on Pearl Harbor, and the Italian cruise ship Costa Concordia was successfully parbuckled off the west coast of Italy in September 2013, the largest salvage operation of that kind to date. Mechanical advantage and difficulties While the mechanical advantage used by a laborer to parbuckle a cask up an incline is 2:1, parbuckling salvage is not so limited. Each of the 21 winches used to roll the Oklahoma used cables that passed through two 17-part tackle assemblies (17:1 advantage). Eight sheaves, eight sheaves, and one sheave comprised just half the mechanical effort. A major concern during salvage is preventing rotational torque from becoming a transverse force moving the ship sideways. , lost like the Oklahoma in the Pearl Harbor attack, was meant to be recovered by a similar rotation after the Oklahoma. As the Utah was rotated, however, its hull did not catch on the harbor bottom, and the vessel slid toward Ford Island. The Utah recovery effort was abandoned. Righting of Oklahoma Oklahoma weighed about . Twenty-one electric winches were installed on Ford Island, anchored in concrete foundations. They operated in unison. Each winch pulled about by a wire operated through a block system which gave an advantage of seventeen, for a total pull of 21×20×17, or . In order to increase the leverage, the wire passed over a wooden strut arrangement (a bent) which stood on the bottom of the ship about high. Oil had been removed from the ship through the bottom. The ship was lightened by air inside the hull. There was a large amount of weight in the ship which may have been removed prior to righting, but not all could be accessed. About one-third of the ammunition was taken off together with some of the machinery. The blades of the two propellers were also taken off, but more to avoid damage to them than to reduce weight. Tests were made to check whether restraining forces should be used to prevent sliding toward Ford Island. It was indicated that the soil under the aft part of the ship prevented sliding, whereas the bow section rested in soupy mud which permitted it. To prevent sliding, about 2200 tons of coral soil were deposited near the bow section. During righting, excess soil under the starboard side was washed away by high-pressure jets operated by divers. The ship rolled as it should have and was right-side up by 16 June 1943, the work having started 8 March 1943. The mean draft of the ship after righting was c. . Righting of Costa Concordia Following its capsizing and sinking in January 2012, the hull of Costa Concordia lay starboard side to the seaward face of a small outcropping very near the mouth of the harbor of Giglio, Italy, resting precariously on the incline to deeper water. To right the vessel, four key pieces of apparatus were required: a "holdback" system of chains attached to the island on one end and the hull on the other to ensure Costa Concordia rolled in place; a man-made ledge inserted into the island face to provide a landing surface for the vessel; a series of sponsons attached to the hull's port side so as, when flooded, to increase the torque on the hull and to unburden the strand jacks; an arrangement of cables rising from the edge of the ledge over the sponsons on the port side of the hull. 
Tensioning the cables started the roll of the ship. At about the halfway-to-vertical position the sponsons were filled with seawater, and Costa Concordia completed its roll to upright upon the ledge. The hull was rotated 65 degrees to become vertical. Parbuckling was accomplished in three phases: Freeing the hull Phase of rotation using cables Rotation by ballasting with sponsons At the completion of parbuckling, Costa Concordia rested on the ledge at a depth of . Holdback system The holdback system consisted of 56 chains in total, of which 22 chains were attached to the port side to go under the hull to the island. Each chain was long and weighed about . Each link weighed . Ledge The ledge was part steel and part grout. There were six steel platforms. The three larger platforms measured each; the three smaller platforms measured each. The 6 platforms were supported by 21 pillars of diameter each and plunged for an average of in the granite sea face of Giglio. The grout filled the space between the land side of the platforms and the sea bed. It totaled 1,180 individual bags with a volume of over and over in weight. The grout bags contained an "ecofriendly cement," and were built with eyelets to aid post-recovery cleanup. Sponsons Eleven steel sponsons were installed on the port side of the hull: two long horizontal sponsons; two long vertical sponsons and seven short vertical sponsons. Each long horizontal sponson measured , weighed about , provided of buoyancy. Each long vertical sponson measured , weighed of about , provided about of buoyancy. Each short vertical sponson measured , weighed about , provided about of buoyancy. Two steel "blister" tanks were connected together at the hull's bow. They measured in length, in height each, and had a total breadth of about . The whole blister structure (the two blister tanks, the tubular frame and the three anchor pipes) weighed about . They provided a net buoyancy of to the bow section. Cables The cable system provided a force of about to start the Costa Concordia's rotation. Phase 1 – freeing the hull The hull of Costa Concordia rested on two spurs of rock, and was severely deformed from the weight of the ship pressing down on the spurs. This phase began when the strand jacks exerted force and the ship started to return to an upright position. This was "without doubt one of the most delicate phases of the entire recovery plan." Phase 2 – rotation using cables This phase began when the hull lifted from the seabed. Rotation continued by tensioning the cables operated by the strand jacks, and continued until the sponson water intakes reached sea level. Phase 3 – rotation by ballasting with sponsons The hull continued to rotate, pulled down by the weight of seawater added to the sponsons. The strand jacks and cables went slack. Redundant systems were designed as a guard against failure. For example, two seawater inlet valves were provided to each sponson. List of parbuckle-salvaged vessels MS Herald of Free Enterprise MV Janra MV Repubblica di Genova MSC Napoli's separated stern section Barge Larvik Rock Fishing trawler Nieuwpoort 28 Fishing vessel Sandy Point MS Costa Concordia Jackup work barge Sep Orion See also References External links Pearl Harbor Raid, 7 December 1941 Salvage of USS Oklahoma, 1942–44 Salvage of the battleship USS Oklahoma following the attack on Pearl Harbor 1942–46 The Parbuckling Project: Concordia wreck removal project informative website German website with a time-lapse video of the parbuckling of the Costa Concordia. 
Marine engineering Marine salvage Rotation
Parbuckle salvage
[ "Physics", "Engineering" ]
1,513
[ "Physical phenomena", "Classical mechanics", "Rotation", "Motion (physics)", "Marine engineering" ]
40,044,236
https://en.wikipedia.org/wiki/Segs4Vets
Segs4Vets, a continuing program which began in 2005, is a grass-roots effort sustained and administered by volunteers in the United States that provides Segway PT vehicles to disabled United States military personnel. The program, which made its first presentation in September 2005 to three recipients who had sustained injuries in Operation Iraqi Freedom (OIF), was conceived and implemented with the assistance of Gen. Ralph "Ed" Eberhart, USAF (Ret), President of the Armed Forces Benefits Association. Following its first presentation, the Segs4Vets program began the process of seeking a waiver which would allow the presentation of Segways to active-duty military personnel who had been severely injured and permanently disabled while serving in support of Operation Iraqi Freedom (OIF) and Operation Enduring Freedom (OEF). In August 2006, the Segs4Vets program became the only recipient of a blanket waiver from the United States military allowing a donation in excess of $1000 to active-duty military personnel. The Segs4Vets program provides successful candidates with a universally designed mobility device which aims to draw attention away from their disability. The Segway is a tool that aims to address many of the mobility issues facing disabled veterans, in a manner that is psychologically uplifting and physically beneficial. Since 2005, the organization has provided over 1200 Segways to wounded veterans. Severely Injured Marines and Sailors Initiative (SIMS) In May 2006, Colonel William J. O'Brien, USMC (Ret), the Director of the Department of the Navy's Severely Injured Marines and Sailors Initiative (SIMS), a pilot program enacted under Deputy Assistant Secretary of the Navy H.C. Barney Barnum to facilitate the full integration of injured service members into the Marine Corps and Navy, or to assist in their transition into the private sector, became aware of the Segs4Vets program. In July 2006, the small staff of the SIMS program, Colonel O'Brien, HMC Christine Jensen, USN, and Joseph Wade, began collaborating with the Segs4Vets program to encourage it to serve more severely injured OEF and OIF service members. This collaboration resulted in the first major Segs4Vets presentation ceremony on December 7, 2006, during a SIMS luncheon at the Army Navy Country Club in Arlington, Virginia. In May 2007, the SIMS program concluded its work, having identified gaps in coverage for the severely injured and recommended solutions for those deficiencies. However, that first Segs4Vets presentation ceremony set a standard for future Segs4Vets ceremonies held twice annually in Washington DC, San Antonio, TX and San Diego, CA. Colonel O'Brien, HMC Jensen, Joe Wade, and now retired Secretary Barnum continue to play an active role in the Segs4Vets program. Training Assessment Programs In 2006, the program began setting up training and assessment programs at military medical centers which provided rehabilitative care for severely injured OEF and OIF personnel. These centers include Walter Reed Army Medical Center in February 2006, National Naval Medical Center in May 2006, Brooke Army Medical Center in November 2006 and the Naval Medical Center San Diego in November 2008. 
Notable recipients Sergeant Kortney Clemons, USA, (Ret) Awards and accolades 2008 Secretary of the Army Public Service Award for distinguished public service in providing outstanding support to our Nation's Veterans 2010 Spirit of Hope Award presented by the Office of the Secretary of Defense 2016 Congressional Medal of Honor Society's Distinguished Citizen Award presented to Jerry Kerr for embodying the characteristics of the Medal of Honor Society Independent Charities of America Seal of Excellence Member of the Military Family and Veterans Service Organization of America (M.F.V.S.O.A) References External links Official website of Segs4Vets Wounded Vietnamese-American Soldier Receives Segway Wounded Vets Increase Mobility with Segways Monster Cable and Segs4Vets Segs4Vets program brings mobility, healing Segs4Vets Program Honors Wounded Warriors Segs4Vets offers Segways to disabled veterans Segs4Vets Program Honors Wounded Warriors Soldier's journey from Iraq to Scott AFB shows heroism, determination Wounded servicemembers get new Segways Fifty-one more wounded warriors are given Segways Owner of Segway Donates 1,000 Segways for Wounded Warriors Before His Death American veterans' organizations United States military support organizations Accessibility Mobility devices Assistive technology Medical equipment Non-profit organizations based in St. Louis 501(c)(3) organizations Charities based in Missouri Organizations established in 2005 2005 establishments in Missouri
Segs4Vets
[ "Engineering", "Biology" ]
941
[ "Accessibility", "Design", "Medical equipment", "Medical technology" ]
34,726,547
https://en.wikipedia.org/wiki/Denjoy%E2%80%93Koksma%20inequality
In mathematics, the Denjoy–Koksma inequality, introduced by as a combination of work of Arnaud Denjoy and the Koksma–Hlawka inequality of Jurjen Ferdinand Koksma, is a bound for Weyl sums of functions f of bounded variation. Statement Suppose that a map f from the circle T to itself has irrational rotation number α, and p/q is a rational approximation to α with p and q coprime, |α – p/q| < 1/q2. Suppose that φ is a function of bounded variation, and μ a probability measure on the circle invariant under f. Then References Theorems in analysis Inequalities
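A commonly quoted form of the inequality, under the hypotheses just stated and with Var(φ) denoting the total variation of φ, is

\[
\left|\sum_{k=0}^{q-1}\varphi\big(f^{k}(x)\big)\;-\;q\int_{\mathbb{T}}\varphi\,d\mu\right|\;\le\;\operatorname{Var}(\varphi)
\qquad\text{for every }x\in\mathbb{T}.
\]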
Denjoy–Koksma inequality
[ "Mathematics" ]
141
[ "Theorems in mathematical analysis", "Mathematical analysis", "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
34,727,315
https://en.wikipedia.org/wiki/Toroidal%20moment
In electromagnetism, a toroidal moment is an independent term in the multipole expansion of electromagnetic fields besides magnetic and electric multipoles. In the electrostatic multipole expansion, all charge and current distributions can be expanded into a complete set of electric and magnetic multipole coefficients. However, additional terms arise in an electrodynamic multipole expansion. The coefficients of these terms are given by the toroidal multipole moments as well as time derivatives of the electric and magnetic multipole moments. While electric dipoles can be understood as separated charges and magnetic dipoles as circular currents, axial (or electric) toroidal dipoles describes toroidal (donut-shaped) charge arrangements whereas polar (or magnetic) toroidal dipole (also called anapole) correspond to the field of a solenoid bent into a torus. Classical toroidal dipole moment A complex expression allows the current density J to be written as a sum of electric, magnetic, and toroidal moments using Cartesian or spherical differential operators. The lowest order toroidal term is the toroidal dipole. Its magnitude along direction i is given by Since this term arises only in an expansion of the current density to second order, it generally vanishes in a long-wavelength approximation. However, a recent study comes to the result that the toroidal multipole moments are not a separate multipole family, but rather higher order terms of the electric multipole moments. Quantum toroidal dipole moment In 1957, Yakov Zel'dovich found that because the weak interaction violates parity symmetry, a spin- Dirac particle must have a toroidal dipole moment, also known as an anapole moment, in addition to the usual electric and magnetic dipoles. The interaction of this term is most easily understood in the non-relativistic limit, where the Hamiltonian is where , , and are the electric, magnetic, and anapole moments, respectively, and is the vector of Pauli matrices. The nuclear toroidal moment of cesium was measured in 1997 by Wood et al.. Symmetry properties of dipole moments All dipole moments are vectors which can be distinguished by their differing symmetries under spatial inversion () and time reversal (). Either the dipole moment stays invariant under the symmetry transformation ("+1") or it changes its direction ("−1"): Magnetic toroidal moments in condensed matter physics In condensed matter magnetic toroidal order can be induced by different mechanisms: Order of localized spins breaking spatial inversion and time reversal. The resulting toroidal moment is described by a sum of cross products of the spins Si of the magnetic ions and their positions ri within the magnetic unit cell: T = Σi ri × Si Formation of vortices by delocalized magnetic moments. On-site orbital currents (as found in multiferroic CuO). Orbital loop currents have been proposed in copper oxides superconductors that might be important to understand high-temperature superconductivity. Experimental verification of symmetry-breaking by such orbital currents has been claimed in cuprates through polarized neutron-scattering. Magnetic toroidal moment and its relation to the magnetoelectric effect The presence of a magnetic toroidic dipole moment T in condensed matter is due to the presence of a magnetoelectric effect: Application of a magnetic field H in the plane of a toroidal solenoid leads via the Lorentz force to an accumulation of current loops and thus to an electric polarization perpendicular to both T and H. 
The resulting polarization has the form (with ε being the Levi-Civita symbol). The resulting magnetoelectric tensor describing the cross-correlated response is thus antisymmetric. Ferrotoroidicity in condensed matter physics A phase transition to spontaneous long-range order of microscopic magnetic toroidal moments has been termed ferrotoroidicity. It is expected to fill the symmetry schemes of primary ferroics (phase transitions with spontaneous point symmetry breaking) with a space-odd, time-odd macroscopic order parameter. A ferrotoroidic material would exhibit domains which could be switched by an appropriate field, e.g. a magnetic field curl. Both of these hallmark properties of a ferroic state have been demonstrated in an artificial ferrotoroidic model system based on a nanomagnetic array The existence of ferrotoroidicity is still under debate and clear-cut evidence has not been presented yet—mostly due to the difficulty to distinguish ferrotoroidicity from antiferromagnetic order, as both have no net magnetization and the order parameter symmetry is the same. Anapole dark matter All CPT self-conjugate particles, in particular the Majorana fermion, are forbidden from having any multipole moments other than toroidal moments. At tree level (i.e. without allowing loops in Feynman diagrams) an anapole-only particle interacts only with external currents, not with free-space electromagnetic fields, and the interaction cross-section diminishes as the particle velocity slows. For this reason, heavy Majorana fermions have been suggested as plausible candidates for cold dark matter. See also Spheromak Dynamic toroidal dipole Anapole References Literature Stefan Nanz: Toroidal Multipole Moments in Classical Electrodynamics. Springer 2016. Electromagnetism Moment (physics)
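For reference, the expressions left implicit in the text above are commonly written as follows; prefactors, signs and factors of c vary between references and unit systems, so these should be read as the usual textbook forms rather than a unique convention. The lowest-order (polar, or magnetic) toroidal dipole of a current density J is

\[
t_i \;=\; \frac{1}{10}\int\Big[\,(\mathbf{r}\cdot\mathbf{J})\,r_i \;-\; 2r^{2}J_i\,\Big]\,d^{3}r ,
\]

the non-relativistic interaction Hamiltonian for a spin-1/2 particle with electric dipole, magnetic dipole and anapole moments d, μ and a (each proportional to the Pauli vector σ) is often quoted as

\[
H \;=\; -\,\mathbf{d}\cdot\mathbf{E}\;-\;\boldsymbol{\mu}\cdot\mathbf{B}\;-\;\mathbf{a}\cdot\left(\boldsymbol{\nabla}\times\mathbf{B}\right),
\]

and the magnetoelectric polarization perpendicular to both T and H can be written, up to a material-dependent constant, as \(P_i \propto \varepsilon_{ijk}\,T_j H_k\).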
Toroidal moment
[ "Physics", "Mathematics" ]
1,108
[ "Electromagnetism", "Physical phenomena", "Physical quantities", "Quantity", "Fundamental interactions", "Moment (physics)" ]
34,732,769
https://en.wikipedia.org/wiki/Internal%20oxidation
Internal oxidation, in corrosion of metals, is the process of formation of corrosion products (e.g. a metal oxide) within the metal bulk. In other words, the corrosion products are created away from the metal surface, and they are isolated from the surface. Internal oxidation occurs when some components of the alloy are oxidized in preference to the balance of the bulk. The oxidizer is often oxygen diffusing through the metal bulk from the interface, but it can be also another element (for example sulfur or nitrogen). Internal oxidation is a well-known corrosion mechanism of nickel-based alloys in the temperature range of 500 to 1200 °C. Internal oxidation is distinct from selective leaching. References Corrosion
Internal oxidation
[ "Chemistry", "Materials_science" ]
145
[ "Metallurgy", "Corrosion", "Electrochemistry", "Electrochemistry stubs", "Materials degradation", "Physical chemistry stubs", "Chemical process stubs" ]
34,732,976
https://en.wikipedia.org/wiki/Metal-phosphine%20complex
A metal-phosphine complex is a coordination complex containing one or more phosphine ligands. Almost always, the phosphine is an organophosphine of the type R3P (R = alkyl, aryl). Metal phosphine complexes are useful in homogeneous catalysis. Prominent examples of metal phosphine complexes include Wilkinson's catalyst (Rh(PPh3)3Cl), Grubbs' catalyst, and tetrakis(triphenylphosphine)palladium(0). Preparation Many metal phosphine complexes are prepared by reactions of metal halides with preformed phosphines. For example, treatment of a suspension of palladium chloride in ethanol with triphenylphosphine yields monomeric bis(triphenylphosphine)palladium(II) chloride units. [PdCl2]n + 2PPh3 → PdCl2(PPh3)2 The first reported phosphine complexes were cis- and trans-PtCl2(PEt3)2 reported by Cahours and Gal in 1870. Often the phosphine serves both as a ligand and as a reductant. This property is illustrated by the synthesis of many platinum-metal complexes of triphenylphosphine: RhCl3(H2O)3 + 4PPh3 → RhCl(PPh3)3 + OPPh3 + 2HCl + 2H2O M-PR3 bonding Phosphines are L-type ligands. Unlike most metal ammine complexes, metal phosphine complexes tend to be lipophilic, displaying good solubility in organic solvents. Phosphine ligands are also π-acceptors. Their π-acidity arises from overlap of P-C σ* anti-bonding orbitals with filled metal orbitals. Aryl- and fluorophosphines are stronger π-acceptors than alkylphosphines. Trifluorophosphine (PF3) is a strong π-acid with bonding properties akin to those of the carbonyl ligand. In early work, phosphine ligands were thought to utilize 3d orbitals to form M-P pi-bonding, but it is now accepted that d-orbitals on phosphorus are not involved in bonding. The energy of the σ* orbitals is lower for phosphines with electronegative substituents, and for this reason phosphorus trifluoride is a particularly good π-acceptor. Steric properties In contrast to tertiary phosphines, tertiary amines, especially arylamine derivatives, are reluctant to bind to metals. The difference between the coordinating power of PR3 and NR3 reflects the greater steric crowding around the nitrogen atom, which is smaller. By changes in one or more of the three organic substituents, the steric and electronic properties of phosphine ligands can be manipulated. The steric properties of phosphine ligands can be ranked by their Tolman cone angle or percent buried volume. Spectroscopy An important technique for the characterization of metal-PR3 complexes is 31P NMR spectroscopy. Substantial shifts occur upon complexation. 31P-31P spin-spin coupling can provide insight into the structure of complexes containing multiple phosphine ligands. Reactivity Phosphine ligands are usually "spectator" rather than "actor" ligands. They generally do not participate in reactions, except to dissociate from the metal center. In certain high temperature hydroformylation reactions, the scission of P-C bonds is observed however. The thermal stability of phosphines ligands is enhanced when they are incorporated into pincer complexes. Applications to homogeneous catalysis One of the first applications of phosphine ligands in catalysis was the use of triphenylphosphine in "Reppe" chemistry (1948), which included reactions of alkynes, carbon monoxide, and alcohols. In his studies, Reppe discovered that this reaction more efficiently produced acrylic esters using NiBr2(PPh3)2 as a catalyst instead of NiBr2. 
Shell developed cobalt-based catalysts modified with trialkylphosphine ligands for hydroformylation (now a rhodium catalyst is more commonly used for this process). The success achieved by Reppe and his contemporaries led to many industrial applications. Illustrative PPh3 complexes Tetrakis(triphenylphosphine)palladium(0) is widely used to catalyse C-C coupling reactions in organic synthesis, see Heck reaction. Wilkinson's catalyst, RhCl(PPh3)3 is a square planar Rh(I) complex of historical significance used to catalyze the hydrogenation of alkenes. Vaska's complex, trans-IrCl(CO)(PPh3)2, is also historically significant; it was used to establish the scope of oxidative addition reactions. This early work provided the insights that led to the flowering of the area of homogeneous catalysis. NiCl2(PPh3)2 is a tetrahedral (spin triplet) complex of Ni(II). In contrast PdCl2(PPh3)2 is square planar. Stryker's reagent, [(PPh3)CuH]6, PPh3-stabilized transition metal hydride cluster that used as a reagent for "conjugate reductions". (Triphenylphosphine)iron tetracarbonyl (Fe(CO)4(PPh3)) and bis(triphenylphosphine)iron tricarbonyl (Fe(CO)3(PPh3)2). Complexes of other organophosphorus ligands The popularity and usefulness of phosphine complexes has led to the popularization of complexes of many related organophosphorus ligands. Complexes of arsines have also been widely investigated, but are avoided in practical applications because of concerns about toxicity. Complexes of primary and secondary phosphines Most work focuses on complexes of triorganophosphines, but primary and secondary phosphines, respectively RPH2 and R2PH, also function as ligands. Such ligands are less basic and have small cone angles. These complexes are susceptible to deprotonation leading to phosphido-bridged dimers and oligomers: 2 LnM(PR2H)Cl → [LnM(μ-PR2)]2 + 2 HCl Complexes of PRx(OR')3−x Nickel(0) complexes of phosphites, e.g., Ni[P(OEt)3]4 are useful catalysts for hydrocyanation of alkenes. Related complexes are known for phosphinites (R2P(OR')) and phosphonites (RP(OR')2). Diphosphine complexes Due to the chelate effect, ligands with two phosphine groups bind more tightly to metal centers than do two monodentate phosphines. The conformational properties of diphosphines makes them especially useful in asymmetric catalysis, e.g. Noyori asymmetric hydrogenation. Several diphosphines have been developed, prominent examples include 1,2-bis(diphenylphosphino)ethane (dppe) and 1,1'-Bis(diphenylphosphino)ferrocene, the trans spanning xantphos and spanphos. The complex dichloro(1,3-bis(diphenylphosphino)propane)nickel is useful in Kumada coupling. References Coordination complexes Catalysis
Metal-phosphine complex
[ "Chemistry" ]
1,635
[ "Catalysis", "Chemical kinetics", "Coordination chemistry", "Coordination complexes" ]
34,733,019
https://en.wikipedia.org/wiki/Jacobson%E2%80%93Morozov%20theorem
In mathematics, the Jacobson–Morozov theorem is the assertion that nilpotent elements in a semi-simple Lie algebra can be extended to sl2-triples. The theorem is named after Nathan Jacobson and V. V. Morozov. Statement The statement of Jacobson–Morozov relies on the following preliminary notions: an sl2-triple in a semi-simple Lie algebra (throughout in this article, over a field of characteristic zero) is a homomorphism of Lie algebras from sl2 to the given algebra. Equivalently, it is a triple (e, h, f) of elements of the algebra satisfying the relations recalled below. An element is called nilpotent if its adjoint action (known as the adjoint representation) is a nilpotent endomorphism. It is an elementary fact that for any sl2-triple (e, h, f), the element e must be nilpotent. The Jacobson–Morozov theorem states that, conversely, any nilpotent non-zero element can be extended to an sl2-triple. For particular Lie algebras, the sl2-triples obtained in this way have been made explicit in the literature. The theorem can also be stated for linear algebraic groups (again over a field k of characteristic zero): any morphism (of algebraic groups) from the additive group to a reductive group H factors through the embedding of the additive group into SL2 as a unipotent subgroup. Furthermore, any two such factorizations are conjugate by a k-point of H. Generalization A far-reaching generalization of the theorem as formulated above can be stated as follows: the inclusion of pro-reductive groups into all linear algebraic groups, where morphisms in both categories are taken up to conjugation, admits a left adjoint, the so-called pro-reductive envelope. This left adjoint sends the additive group to SL2 (which happens to be semi-simple, as opposed to merely pro-reductive), thereby recovering the above form of Jacobson–Morozov. This generalized Jacobson–Morozov theorem was proven by appealing to methods related to Tannakian categories, and also by more geometric methods. References Lie algebras Algebraic groups
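The relations referred to in the statement above are the standard sl2 commutation relations: a triple \((e, h, f)\) is an sl2-triple exactly when

\[
[h, e] = 2e, \qquad [h, f] = -2f, \qquad [e, f] = h .
\]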
Jacobson–Morozov theorem
[ "Mathematics" ]
429
[ "Theorems in algebra", "Mathematical theorems", "Mathematical problems", "Algebra" ]
34,733,083
https://en.wikipedia.org/wiki/Jessen%E2%80%93Wintner%20theorem
In mathematics, the Jessen–Wintner theorem, introduced by Børge Jessen and Aurel Wintner, asserts that a random variable of Jessen–Wintner type, meaning the sum of an almost surely convergent series of independent discrete random variables, is of pure type: its distribution is either purely discrete, purely absolutely continuous, or purely singular continuous. References Probability theorems
Jessen–Wintner theorem
[ "Mathematics" ]
54
[ "Theorems in probability theory", "Mathematical theorems", "Mathematical problems" ]
34,738,409
https://en.wikipedia.org/wiki/Carleson%E2%80%93Jacobs%20theorem
In mathematics, the Carleson–Jacobs theorem, introduced by Lennart Carleson and S. Jacobs, describes the best approximation to a continuous function on the unit circle by a function in a Hardy space. Notes References Theorems in complex analysis Hardy spaces
Carleson–Jacobs theorem
[ "Mathematics" ]
44
[ "Theorems in mathematical analysis", "Mathematical analysis", "Theorems in complex analysis", "Mathematical analysis stubs" ]
34,738,529
https://en.wikipedia.org/wiki/Lange%27s%20conjecture
In algebraic geometry, Lange's conjecture is a theorem about stability of vector bundles over curves, introduced by and proved by Montserrat Teixidor i Bigas and Barbara Russo in 1999. Statement Let C be a smooth projective curve of genus greater or equal to 2. For generic vector bundles and on C of ranks and degrees and , respectively, a generic extension has E stable provided that , where is the slope of the respective bundle. The notion of a generic vector bundle here is a generic point in the moduli space of semistable vector bundles on C, and a generic extension is one that corresponds to a generic point in the vector space . An original formulation by Lange is that for a pair of integers and such that , there exists a short exact sequence as above with E stable. This formulation is equivalent because the existence of a short exact sequence like that is an open condition on E in the moduli space of semistable vector bundles on C. References Notes Vector bundles Algebraic curves Theorems in algebraic geometry Conjectures that have been proved
Lange's conjecture
[ "Mathematics" ]
213
[ "Theorems in algebraic geometry", "Conjectures that have been proved", "Theorems in geometry", "Mathematical problems", "Mathematical theorems" ]
34,738,966
https://en.wikipedia.org/wiki/Optical%20radiation
Optical radiation is the part of the electromagnetic spectrum with wavelengths between 100 nm and 1 mm. This range includes visible light, infrared light, and part of the ultraviolet spectrum. Optical radiation is non-ionizing, and can be focused with lenses and manipulated by other optical elements. Optics is the study of how to manipulate optical radiation. Sources Optical radiation may be divided into two types: Artificial optical radiation Artificial optical radiation is produced by artificial sources, including coherent sources such as lasers and incoherent sources such as UV lights, common light bulbs, radiant heaters, welding equipment, etc. Natural optical radiation Natural optical radiation is primarily produced by the Sun. Effects Exposure to optical radiation can result in negative health effects. All wavelengths across this range of the spectrum, from UV to IR, can produce thermal injury to the surface layers of the skin, including the eye. When it comes from natural sources, this sort of thermal injury might be called a sunburn. However, thermal injury from infrared radiation could also occur in a workplace, such as a foundry, where such radiation is generated by industrial processes. At the other end of this range, UV light has enough photon energy that it can cause direct effects to protein structure in tissues, and is well established as carcinogenic in humans. Occupational exposures to UV light occur in welding and brazing operations, for example. Excessive exposure to natural or artificial UV-radiation means immediate (acute) and long-term (chronic) damage to the eye and skin. Occupational exposure limits may be one of two types: rate limited or dose limited. Rate limits characterize the exposure based on effective energy (radiance or irradiance, depending on the type of radiation and the health effect of concern) per area per time, and dose limits characterize the exposure as a total acceptable dose. The latter is applied when the intensity of the radiation is great enough to produce a thermal injury. Specifications The European Union (EU) has laid down minimum harmonized requirements for the protection of workers against the risks arising from exposure to Artificial Optical Radiation (e.g. UVA, laser, etc.) in the Directive 2006/25/EC. A Non-binding guide to good practice for implementing Directive 2006/25/EC "Artificial Optical Radiation" is available on this page. References Electromagnetic spectrum Electromagnetic radiation Occupational hazards
Optical radiation
[ "Physics" ]
474
[ "Physical phenomena", "Spectrum (physical sciences)", "Electromagnetic radiation", "Electromagnetic spectrum", "Radiation" ]
34,739,005
https://en.wikipedia.org/wiki/Materials%20genome
Materials genome is an analogy to genomes in biology, but in a conceptual sense: the many important phases, defects, and processes that make up engineered materials are the "genome" of materials science. The Materials Genome Initiative (MGI) is a federal, multi-agency effort to design, manufacture, and deploy materials and materials-based technologies significantly faster and cheaper than ever before. The MGI partially references the Human Genome Project, but only conceptually, and is a broader and less targeted effort. History The name "materials genome" was coined in December 2002 by Dr. Zi-Kui Liu, who incorporated the company MaterialsGenome, Inc. in Pennsylvania, USA, and filed for trademark protection on March 5, 2004 (78512752). The Certificate of Registration (Reg. No. 4,224,035) was issued by the Patent and Trademark Office on October 16, 2012. In 2005, Zi-Kui Liu and Pierre Villars jointly crafted a proposal for a "Materials Genome Foundation" when they met in Switzerland. In January 2006, they presented the proposal to a panel organized by ASM International. In 2008, the United States Automotive Materials Partnership (USAMP), supported by the United States Department of Energy, funded a smaller version of the "Materials Genome Foundation" concept, which resulted in the development of the software package ESPEI. In June 2011, the name "materials genome" was used in the United States National Science and Technology Council "Materials Genome Initiative" with the consent of MaterialsGenome, Inc. In 2014, Liu published an article, "Perspective on Materials Genome", in English and in Chinese. In early 2015, Chenxi Qian, Todd Siler and Geoffrey Ozin jointly published a paper in Small discussing the possibilities and limitations of a nanomaterials genome, expanding the concept of the materials genome by incorporating important parameters of nanomaterials, such as size and shape information, into the roadmap. In May 2016, a group led by Sorelle A. Friedler, Joshua Schrier and Alexander J. Norquist claimed the first machine-learning-assisted materials discovery, as an example of humanity's early attempts to design and develop materials via the data-driven approaches proposed by the Materials Genome Initiative. In May 2017, progress on the Materials Genome Initiative was reviewed at a workshop sponsored by NSF, and future directions were identified. References External links https://www.nist.gov/mgi https://www.matse.psu.edu/directory/zi-kui-liu https://www.fordham.edu/info/28651/joshua_schrier Materials
Materials genome
[ "Physics" ]
551
[ "Materials", "Matter" ]
34,739,129
https://en.wikipedia.org/wiki/Anelastic%20attenuation%20factor
In reflection seismology, the anelastic attenuation factor, often expressed as seismic quality factor or Q (which is inversely proportional to attenuation factor), quantifies the effects of anelastic attenuation on the seismic wavelet caused by fluid movement and grain boundary friction. As a seismic wave propagates through a medium, the elastic energy associated with the wave is gradually absorbed by the medium, eventually ending up as heat energy. This is known as absorption (or anelastic attenuation) and will eventually cause the total disappearance of the seismic wave. Quality factor, Q Q is defined as where is the fraction of energy lost per cycle. The earth preferentially attenuates higher frequencies, resulting in the loss of signal resolution as the seismic wave propagates. Quantitative seismic attribute analysis of amplitude versus offset effects is complicated by anelastic attenuation because it is superimposed upon the AVO effects. The rate of anelastic attenuation itself also contains additional information about the lithology and reservoir conditions such as porosity, saturation and pore pressure so it can be used as a useful reservoir characterization tool. Therefore, if Q can be accurately measured then it can be used for both compensation for the loss of information in the data and for seismic attribute analysis. Measurement of Q Spectral ratio method The geometry of a zero-offset vertical seismic profile (VSP) makes it an ideal survey to use for the calculation of Q using the spectral ratio method. This is because of the coincident raypaths that traverse a given rock layer, ensuring that the only path difference between two reflected waves (one from the top of the interval and one from the bottom) is the interval of interest. Stacked surface seismic reflection traces would offer similar signal-to-noise ratio over a much larger area but cannot be used with this method because every sample represents a different raypath and therefore will have experienced different attenuation effects. Seismic wavelets captured before and after traversing a medium with seismic quality factor, Q, on coincident raypaths will have amplitudes that are related as follows: ; where and are the amplitudes at frequency after and before traversing the medium; is the reflection coefficient; is the geometrical spreading factor and is the time taken to traverse the medium. Taking logarithms of both sides and rearranging: This equation shows that if the logarithm of the spectral ratio of the amplitudes before and after traversing the medium is plotted as a function of frequency, it should yield a linear relationship with an intercept measuring the elastic losses (R and G) and the gradient measuring the inelastic losses, which can be used to find Q. The above formulation implies that Q is independent of frequency. If Q is frequency-dependent, the spectral ratio method can produce systematic bias in Q estimates In practice prominent phases seen on seismograms are used for estimating the Q. Lg is often the strongest phase on the seismogram at regional distances from 2° to 25°, because of its small-energy leakage into the mantle and used frequently for estimation of crustal Q. However, attenuation of this phase has different characteristics at oceanic crust. Lg may be suddenly disappeared along a particular propagation path which is commonly seen at continental-oceanic transition zones. This phenomenon refers as "Lg-Blockage" and its exact mechanism is still a puzzle. 
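As a rough illustration of the spectral ratio method described above, the sketch below fits the logarithmic spectral ratio against frequency and recovers Q from the slope, which equals −πt/Q. The synthetic spectra, the travel time and all variable names are assumptions chosen only for the example, not values from the original text.

```python
import numpy as np

# Spectral ratio method: ln(A1(f)/A0(f)) = ln(R*G) - pi*f*t/Q,
# so a straight-line fit of the log spectral ratio vs. frequency
# gives Q from the slope (slope = -pi*t/Q) and the elastic losses
# R*G from the intercept.

t = 0.5          # travel time through the interval (s), assumed
Q_true = 80.0    # quality factor used to generate the synthetic data
RG = 0.4         # combined reflection / geometrical-spreading factor, assumed

f = np.linspace(5.0, 60.0, 40)                  # frequency band (Hz)
A0 = np.exp(-((f - 30.0) ** 2) / 400.0)         # spectrum before the interval (arbitrary shape)
A1 = RG * A0 * np.exp(-np.pi * f * t / Q_true)  # spectrum after anelastic attenuation

log_ratio = np.log(A1 / A0)
slope, intercept = np.polyfit(f, log_ratio, 1)  # slope = -pi*t/Q, intercept = ln(R*G)

Q_est = -np.pi * t / slope
print(f"estimated Q = {Q_est:.1f}, elastic factor R*G = {np.exp(intercept):.2f}")
```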
See also Acoustic attenuation Attenuation References Seismology measurement Geophysics
Anelastic attenuation factor
[ "Physics" ]
721
[ "Applied and interdisciplinary physics", "Geophysics" ]
35,883,894
https://en.wikipedia.org/wiki/Cimemoxin
Cimemoxin (INN), or cyclohexylmethylhydrazine, is a hydrazine monoamine oxidase inhibitor (MAOI) antidepressant which was never marketed. Synthesis It possesses 50 times the relative activity of iproniazid and 25 times that of nialamide (see patent). 3-Cyclohexene-1-carbaldehyde [100-50-5] (also known as 1,2,3,6-tetrahydrobenzaldehyde) is reacted with N-acetylhydrazine to give the hydrazone, which is reduced by catalytic hydrogenation. The acetyl group is removed by acid hydrolysis. See also Monoamine oxidase inhibitor Hydrazine (antidepressant) References Antidepressants Hydrazines Monoamine oxidase inhibitors
Cimemoxin
[ "Chemistry" ]
176
[ "Functional groups", "Hydrazines" ]
35,883,924
https://en.wikipedia.org/wiki/Domoxin
Domoxin (INN) is a hydrazine derivative monoamine oxidase inhibitor (MAOI) antidepressant which was never marketed. See also Monoamine oxidase inhibitor Hydrazine (antidepressant) References Antidepressants Benzodioxans Hydrazines Monoamine oxidase inhibitors
Domoxin
[ "Chemistry" ]
67
[ "Functional groups", "Hydrazines" ]
35,884,241
https://en.wikipedia.org/wiki/Gabriel%E2%80%93Popescu%20theorem
In mathematics, the Gabriel–Popescu theorem is an embedding theorem for certain abelian categories, introduced by Pierre Gabriel and Nicolae Popescu in 1964. It characterizes certain abelian categories (the Grothendieck categories) as quotients of module categories. There are several generalizations and variations of the Gabriel–Popescu theorem, including versions for AB5 categories with a set of generators and for triangulated categories. Theorem Let A be a Grothendieck category (an AB5 category with a generator), G a generator of A, and R the ring of endomorphisms of G; also, let S be the functor from A to Mod-R (the category of right R-modules) defined by S(X) = Hom(G,X). Then the Gabriel–Popescu theorem states that S is full and faithful and has an exact left adjoint. This implies that A is equivalent to the Serre quotient category of Mod-R by a certain localizing subcategory C. (A localizing subcategory of Mod-R is a full subcategory C of Mod-R, closed under arbitrary direct sums, such that for any short exact sequence of modules 0 → M1 → M2 → M3 → 0, we have M2 in C if and only if M1 and M3 are in C. The Serre quotient of Mod-R by any localizing subcategory is a Grothendieck category.) We may take C to be the kernel of the left adjoint of the functor S. References [Remark: "Popescu" is spelled "Popesco" in French.] External links Category theory Functors Theorems in abstract algebra
Gabriel–Popescu theorem
[ "Mathematics" ]
407
[ "Functions and mappings", "Mathematical structures", "Theorems in algebra", "Mathematical objects", "Fields of abstract algebra", "Category theory", "Functors", "Mathematical relations", "Theorems in abstract algebra" ]
35,885,689
https://en.wikipedia.org/wiki/Alternant%20hydrocarbon
An alternant hydrocarbon is any conjugated hydrocarbon system which does not possess an odd-membered ring. For such systems it is possible to undertake a starring process, in which the carbon atoms are divided into two sets: all the carbons in one set are marked with a star such that no two starred or unstarred atoms are bonded to each other. By convention, the starred set is taken to be the one containing the larger number of atoms. When this condition is met, the secular determinant in the Hückel approximation has a simpler form, since cross-diagonal elements between atoms in the same set are necessarily 0. Alternant hydrocarbons display three very interesting properties: The molecular orbital energies for the π system are paired, that is, for an orbital of energy α + xβ there is a partner orbital of energy α − xβ. The coefficients of two paired molecular orbitals are the same at the same site, except for a sign change in the unstarred set. The population or electron density at all sites is equal to unity in the ground state, so the distribution of π electrons is uniform across the whole molecule. Moreover, if the alternant hydrocarbon contains an odd number of atoms then there must be an unpaired orbital with zero bonding energy (a non-bonding orbital). For this orbital, the coefficients on the atomic sites can be written down without calculation: the coefficients on all the atoms belonging to the smaller (unstarred) set are 0, and around each unstarred atom the sum of the coefficients on the neighbouring (starred) atoms must also be 0. Simple algebra then allows all the coefficients to be assigned and normalized. This procedure permits the prediction of reactivity patterns and can be exploited to calculate Dewar's reactivity numbers for all sites. References Theoretical chemistry Molecular physics
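The pairing of π orbital energies described above can be checked directly, since in the Hückel approximation the orbital energies are α + xβ with x the eigenvalues of the topology (adjacency) matrix. The short sketch below does this for butadiene, a simple alternant hydrocarbon; the matrix and labels are assumptions made only for the example.

```python
import numpy as np

# Hückel topology (adjacency) matrix for butadiene: C1-C2-C3-C4.
# Orbital energies are E_k = alpha + x_k * beta, with x_k the eigenvalues below.
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

x = np.sort(np.linalg.eigvalsh(A))
print("eigenvalues:", np.round(x, 3))
# For an alternant hydrocarbon the eigenvalues come in +/- pairs,
# so the sorted spectrum is antisymmetric about zero:
print("paired:", np.allclose(x, -x[::-1]))
```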
Alternant hydrocarbon
[ "Physics", "Chemistry" ]
355
[ "Molecular physics", "Theoretical chemistry stubs", "Theoretical chemistry", " molecular", "nan", "Atomic", "Molecular physics stubs", " and optical physics" ]
35,886,071
https://en.wikipedia.org/wiki/Isolation%20condenser
In a reactor core isolation cooling system ("RCIC"), an isolation condenser (IC or iso. condenser; also isolation condenser system) is one of the emergency reactor safety systems in some nuclear plants (boiling water reactor safety systems). Emergency passive system It is a passive system for cooling of some reactors (BWR/2, BWR/3 ..., and the (E)SBWR series) in nuclear production, located above containment in a pool of water open to atmosphere. In operation, decay heat boils steam, which is drawn into the heat exchanger and condensed; then it falls by weight of gravity back into the reactor. This process keeps the cooling water in the reactor, making it unnecessary to use powered feedwater pumps. The water in the open pool slowly boils off, venting clean steam to the atmosphere. This makes it unnecessary to run mechanical systems to remove heat. Periodically, the pool must be refilled, a simple task for a fire truck. The (E)SBWR reactors provide three days' supply of water in the pool. Some older reactors also have IC systems, including Fukushima Dai-ichi reactor 1, however their water pools may not be as large. Under normal conditions, the IC system is not activated, but the top of the IC condenser is connected to the reactor's steam lines through an open valve. Steam enters the IC condenser and condenses until it is filled with water. When the IC system is activated, a valve at the bottom of the IC condenser is opened which connects to a lower area on the reactor. The water falls to the reactor via gravity, allowing the condenser to fill with steam, which then condenses. This cycle runs continuously until the bottom valve is closed. Problems In case of electricity failure, the valves close automatically, and operators have to open them manually, which can be difficult in case an accident has already released radioactive steam inside the building. During the accident at the Fukushima nuclear plant in 2011, the operators did not open the valve manually, and emergency system had been activated too late and could not work for long. Operators did not know if they should have left the valves open or not when the tanks of two condensers were emptied of their water cooling. References Bibliography AEC, Severe Accident Analyses of Fukushima-Daiichi Units 1 External links Patent for isolation condenser Light water reactors Energy conversion Nuclear power Nuclear technology Nuclear power stations Power station technology Nuclear safety and security Nuclear power plant components
Isolation condenser
[ "Physics" ]
523
[ "Physical quantities", "Nuclear power", "Nuclear technology", "Power (physics)", "Nuclear physics" ]
27,874,410
https://en.wikipedia.org/wiki/Eastern%20Analytical%20Symposium
The Eastern Analytical Symposium (EAS) and Exposition is an American organization that sponsors a Symposium and Exposition generally held in Princeton, New Jersey, every November. The Symposium is attended by over 2000 scientists and typically contains several hundred papers by the world's leading authorities on analytical chemistry. The associated exposition contains information on technology and information from companies that provide instrumentation and services for the community of analytical scientists. In addition, the EAS provides an ongoing education program that includes technical short courses and professional development workshops for laboratory scientists, as well as general-interest sessions directed at the public, especially students and their chemistry teachers. Sponsors The Eastern Analytical Symposium and Exposition is sponsored by the following organizations: the Analytical Division of the American Chemical Society, the American Chemical Society New York and New Jersey Sections, the American Microchemical Society, the Chromatography Forum of Delaware Valley, the Coblentz Society, the New York Microscopical Society, the Society for Applied Spectroscopy's Delaware Valley, New York, and New England Sections, the Association of Laboratory Managers (ALMA), and the New Jersey Association of Forensic Scientists. Awards The Governing Board of the Eastern Analytical Symposium presents awards each year for outstanding contributions and achievements in general analytical chemistry and in five specific areas of analysis. The award inscriptions read, "In Recognition of Outstanding Achievements in the Field of -----". Analytical Chemistry Magnetic Resonance Vibrational spectroscopy Chemometrics Mass Spectrometry Separation Science or Chromatography The Governing Board each year also honors a Young Investigator who is making an impact on the field of analytical chemistry. The 2017 Awardee will be Prof. Dwight R. Stoll. In addition to the EAS Awards, awards presented by sponsoring organizations at the Symposium include: The Benedetti-Pichler Award of the American Microchemical Society The Ernst Abbe Award of the New York Microscopical Society The Gold Medal of the New York Society for Applied Spectroscopy From 2012 to 2014, an award was also presented to a New Faculty active in NMR. Past award recipients can be found here. History Since its founding in 1959, EAS has become a premier venue for analysts to learn about new technologies, new applications for older technologies, and developments in such diverse fields as bioanalysis, pharmaceutical analysis, forensic science, laboratory management, and environmental analysis. Throughout the years, the EAS has been the place where innovations in analytical science have been introduced to the community of analytical scientists. The first EAS was held in 1959 at the Hotel New Yorker in New York City, with 1200 attendees at 12 technical sessions. The exposition had 38 exhibitors who displayed the latest in analytical supplies and instrumentation. During the early years in New York, the EAS was held at various hotels in the city, as the attendance grew. At the 10th EAS, a workshop on electrochemical techniques was the origin of the exhibitor workshops, which later would become a standard feature of the EAS program. By the 15th symposium, major awards were given out as part of the program of the Symposium, including the Meggers Memorial Award, the Hassler Award in Applied Spectroscopy, and the Anachem Award. 
In 1973, the EAS was suspended to support the emerging FACSS (Federation of Analytical Chemistry and Spectroscopy Societies), whose meetings were held at a similar time of the year. After two years, an EAS mini-symposium was held, and in 1977, the EAS returned to its original format in New York City. During the late 1970s and the 1980s, the EAS was moved from location to location New York City, as attendance continued to expand. During this period, the Governing Board of the Eastern Analytical Symposium began to present its own awards for excellence in analysis. In 1986, the first EAS Award for Outstanding Contributions in the Fields of Analytical Chemistry was presented to Professor George Morrison, who - aside from his many scientific contributions - had been instrumental in the early development of EAS. At the same meeting, the EAS Award for Outstanding Contributions to Separations Science was presented to Professor Csaba Horvath. In subsequent years, EAS awards of contributions to other areas would be added to recognize contributions to various areas of analysis. Currently, the EAS presents the six major awards listed above to distinguished scientists from around the world at the annual Symposium. As the 1990s dawned, it became necessary for the Eastern Analytical Symposium to find a venue for the meeting to meet the needs of a growing meeting. In 1990, the EAS moved to the then-new Garden State Convention and Exhibit Center in Somerset, New Jersey. As the symposium continued to grow, even the GSCEC seemed limited. In 2000, the EAS was moved to the Atlantic City Convention Center, where it remained for two years. However, beginning in the early 2000s, it was decided to move the EAS back to the Garden State Convention and Exhibit Center. Although the Eastern Analytical Symposium started as a regional meeting, where persons interested in practical analytical chemistry from laboratories in the Northeast could meet to discuss problems of common interest, it has grown to international stature, with attendance from analysts from laboratories in companies and universities across the world. The Symposium has further grown to emphasize a wide variety of technologies and areas of application that could only be dreamed of in 1959. Applications to traditional areas of analysis are still represented among the talks, but unique to the Eastern Analytical Symposium are areas such as cultural heritage and forensic science. As the 21st century has dawned, EAS continues to provide an inclusive home for practical analytical studies, to educate about the latest technologies, and to inform its audience about the current state of analysis. After several decades of holding the meeting in Somerset, NJ, the annual symposium was moved to nearby Princeton, NJ in 2017. References External links Eastern Analytical Symposium Scientific societies based in the United States Spectroscopy Chromatography Analytical chemistry
Eastern Analytical Symposium
[ "Physics", "Chemistry" ]
1,179
[ "Chromatography", "Molecular physics", "Spectrum (physical sciences)", "Separation processes", "Instrumental analysis", "nan", "Spectroscopy" ]
29,168,254
https://en.wikipedia.org/wiki/Partial%20group%20algebra
In mathematics, a partial group algebra is an associative algebra related to the partial representations of a group. Examples The partial group algebra is isomorphic to the direct sum: See also Group ring Group representation Notes References Algebras Representation theory of groups
Partial group algebra
[ "Mathematics" ]
52
[ "Algebras", "Mathematical structures", "Algebraic structures" ]
29,168,663
https://en.wikipedia.org/wiki/Macromonomer
In polymer chemistry, a macromonomer (or macromer) is a macromolecule with one end-group that enables it to act as a reactive monomer and undergo further polymerization. Macromonomers will contribute a single repeat unit to a chain of the completed macromolecule. Several macromonomers have been successfully synthesized utilizing various methods such as controlled radical polymerization (CRP) and copper-catalyzed "click" coupling. Due to the larger size of macromonomers (as opposed to the size of regular monomers), synthetic challenges are brought about, giving reason for the analysis of polymerization mechanisms. Recent studies have shown that macromonomer polymerization kinetics and mechanisms can be significantly affected by the topological effect. Macromonomers are also used in controlled graft copolymerization. References Polymer chemistry
Macromonomer
[ "Chemistry", "Materials_science", "Engineering" ]
178
[ "Materials science", "Polymer chemistry" ]
29,172,575
https://en.wikipedia.org/wiki/LNAPL%20transmissivity
LNAPL transmissivity is the discharge of light non-aqueous phase liquid (LNAPL) through a unit width of aquifer for a unit gradient. Scholars Alex Mayer and S. Majid Hassanizadeh define LNAPL transmissivity as the "product of the porous medium permeability and the LNAPL relative permeability, which in turn is a function of saturation, and the thickness of the LNAPL". They wrote that as LNAPL is removed, the recovery rate declines because the "saturation and thickness of the mobile LNAPL fraction decreases". LNAPL transmissivity is a summary parameter that takes into account soil type and physical properties (e.g., porosity and permeability), LNAPL physical fluid properties (e.g., density and viscosity) and LNAPL saturation (i.e., the amount of LNAPL present within the pore network). Consequently, LNAPL transmissivity is comparable across soil types, LNAPL types and recoverable LNAPL volumes. More importantly, for LNAPL recovery from a given well, the soil and LNAPL physical properties do not change significantly through time. What changes is the LNAPL saturation (the amount of LNAPL present). As a result, LNAPL transmissivity decreases in direct proportion to the decrease in LNAPL saturation achievable through liquid recovery technology. LNAPL transmissivity is not the only piece of data required when evaluating a site, and it requires a good LNAPL conceptual model to calculate. However, it is a summary metric superior to gauged LNAPL thickness for representing LNAPL recoverability and migration risk (e.g., on site maps) and for directing remediation efforts. References Chemical properties Hydrology
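Following the verbal definition quoted above (permeability times LNAPL relative permeability, converted to an LNAPL conductivity and multiplied by the LNAPL thickness), a back-of-the-envelope estimate might look like the sketch below. Treating the LNAPL interval as a single representative layer, and every parameter value shown, are assumptions made only to illustrate the arithmetic; this is not a field procedure.

```python
# Rough, illustrative estimate of LNAPL transmissivity:
# Tn ~ k * kr_n * (rho_n * g / mu_n) * b_n  for a single representative layer,
# i.e. intrinsic permeability -> LNAPL conductivity -> times LNAPL thickness.
# All values below are assumed for the example.

k = 1e-11          # intrinsic permeability of the soil (m^2)
kr_n = 0.05        # LNAPL relative permeability at the prevailing saturation (-)
rho_n = 750.0      # LNAPL density (kg/m^3)
mu_n = 2e-3        # LNAPL dynamic viscosity (Pa*s)
g = 9.81           # gravitational acceleration (m/s^2)
b_n = 0.3          # mobile LNAPL thickness (m)

K_n = k * kr_n * rho_n * g / mu_n      # LNAPL conductivity (m/s)
T_n = K_n * b_n                        # LNAPL transmissivity (m^2/s)
print(f"Tn = {T_n:.2e} m^2/s = {T_n * 86400:.3f} m^2/day")
```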
LNAPL transmissivity
[ "Chemistry", "Engineering", "Environmental_science" ]
400
[ "Hydrology", "nan", "Environmental engineering" ]
29,172,732
https://en.wikipedia.org/wiki/Mirror%20nuclei
In physics, mirror nuclei are a pair of isobars of two different elements in which the number of protons of isobar one (Z1) equals the number of neutrons of isobar two (N2) and the number of protons of isobar two (Z2) equals the number of neutrons of isobar one (N1); in short: Z1 = N2 and Z2 = N1. This implies that the mass numbers of the two nuclei are the same: N1 + Z1 = N2 + Z2. Examples of mirror nuclei include 3H and 3He, 11B and 11C, and 15N and 15O. Pairs of mirror nuclei have the same spin and parity. If we restrict attention to nuclei with an odd number of nucleons (A = Z + N), then mirror nuclei differ from one another by the exchange of a proton for a neutron. Their binding energies are interesting to compare: binding is mainly due to the strong interaction, with an additional contribution from the Coulomb interaction. Since the strong interaction is, to a good approximation, invariant under the exchange of protons and neutrons, one can expect mirror nuclei to have very similar binding energies. In 2020, strontium-73 and bromine-73 were found not to behave as expected: the ground state of one member of the pair has spin and parity 1/2−, whereas the ground state of the other was inferred to have spin and parity 5/2−, matching a low-lying 27 keV excited state of its partner. References Nuclear physics
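A quick way to see the defining condition in practice is to check it for a few well-known pairs. The small script below is purely illustrative; the listed pairs are standard textbook examples.

```python
# Check the mirror-nucleus condition Z1 == N2 and Z2 == N1 for a few pairs.
# Each nucleus is given as (name, Z, N).
pairs = [
    (("3H", 1, 2), ("3He", 2, 1)),
    (("11B", 5, 6), ("11C", 6, 5)),
    (("15N", 7, 8), ("15O", 8, 7)),
]

for (name1, z1, n1), (name2, z2, n2) in pairs:
    is_mirror = (z1 == n2) and (z2 == n1)
    same_mass = (z1 + n1) == (z2 + n2)
    print(f"{name1} / {name2}: mirror={is_mirror}, same mass number={same_mass}")
```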
Mirror nuclei
[ "Physics" ]
287
[ "Nuclear physics" ]
29,177,295
https://en.wikipedia.org/wiki/Theory%20of%20solar%20cells
The theory of solar cells explains the process by which light energy in photons is converted into electric current when the photons strike a suitable semiconductor device. The theoretical studies are of practical use because they predict the fundamental limits of a solar cell, and give guidance on the phenomena that contribute to losses and solar cell efficiency. Working explanation Photons in sunlight hit the solar panel and are absorbed by semi-conducting materials. Electrons (negatively charged) are knocked loose from their atoms as they are excited. Due to their special structure and the materials in solar cells, the electrons are only allowed to move in a single direction. The electronic structure of the materials is very important for the process to work, and often silicon incorporating small amounts of boron or phosphorus is used in different layers. An array of solar cells converts solar energy into a usable amount of direct current (DC) electricity. Photogeneration of charge carriers When a photon hits a piece of semiconductor, one of three things can happen: The photon can pass straight through the semiconductor — this (generally) happens for lower energy photons. The photon can reflect off the surface. The photon can be absorbed by the semiconductor if the photon energy is higher than the band gap value. This generates an electron-hole pair and sometimes heat depending on the band structure. When a photon is absorbed, its energy is given to an electron in the crystal lattice. Usually this electron is in the valence band. The energy given to the electron by the photon "excites" it into the conduction band where it is free to move around within the semiconductor. The network of covalent bonds that the electron was previously a part of now has one fewer electron. This is known as a hole, and it has positive charge. The presence of a missing covalent bond allows the bonded electrons of neighboring atoms to move into the "hole", leaving another hole behind, thus propagating holes throughout the lattice in the opposite direction to the movement of the negatively electrons. It can be said that photons absorbed in the semiconductor create electron-hole pairs. A photon only needs to have energy greater than that of the band gap in order to excite an electron from the valence band into the conduction band. However, the solar frequency spectrum approximates a black body spectrum at about 5,800 K, and as such, much of the solar radiation reaching the Earth is composed of photons with energies greater than the band gap of silicon (1.12eV), which is near to the ideal value for a terrestrial solar cell (1.4eV). These higher energy photons will be absorbed by a silicon solar cell, but the difference in energy between these photons and the silicon band gap is converted into heat (via lattice vibrations — called phonons) rather than into usable electrical energy. The p–n junction The most commonly known solar cell is configured as a large-area p–n junction made from silicon. As a simplification, one can imagine bringing a layer of n-type silicon into direct contact with a layer of p-type silicon. n-type doping produces mobile electrons (leaving behind positively charged donors) while p-type doping produces mobile holes (and negatively charged acceptors). In practice, p–n junctions of silicon solar cells are not made in this way, but rather by diffusing an n-type dopant into one side of a p-type wafer (or vice versa). 
If a piece of p-type silicon is placed in close contact with a piece of n-type silicon, then a diffusion of electrons occurs from the region of high electron concentration (the n-type side of the junction) into the region of low electron concentration (p-type side of the junction). When the electrons diffuse into the p-type side, each one annihilates a hole, making that side net negatively charged (because now the number of mobile positive holes is now less than the number of negative acceptors). Similarly, holes diffusing to the n-type side make it more positively charged. However (in the absence of an external circuit) this diffusion current of carriers does not go on indefinitely because the charge build up on either side of the junction produces an electric field that opposes further diffusion of more charges. Eventually, an equilibrium is reached where the net current is zero, leaving a region either side of the junction where electrons and holes have diffused across the junction and annihilated each other called the depletion region because it contains practically no mobile charge carriers. It is also known as the space charge region, although space charge extends a bit further in both directions than the depletion region. Once equilibrium is established, electron-hole pairs generated in the depletion region are separated by the electric field, with the electron attracted to the positive n-type side and holes to the negative p-type side, reducing the charge (and the electric field) built up by the diffusion just described. If the device is unconnected (or the external load is very high) then diffusion current would eventually restore the equilibrium charge by bringing the electron and hole back across the junction, but if the load connected is small enough, the electrons prefer to go around the external circuit in their attempt to restore equilibrium, doing useful work on the way. Charge carrier separation There are two causes of charge carrier motion and separation in a solar cell: drift of carriers, driven by the electric field, with electrons being pushed one way and holes the other way diffusion of carriers from zones of higher carrier concentration to zones of lower carrier concentration (following a gradient of chemical potential). These two "forces" may work one against the other at any given point in the cell. For instance, an electron moving through the junction from the p region to the n region (as in the diagram at the beginning of this article) is being pushed by the electric field against the concentration gradient. The same goes for a hole moving in the opposite direction. It is easiest to understand how a current is generated when considering electron-hole pairs that are created in the depletion zone, which is where there is a strong electric field. The electron is pushed by this field toward the n side and the hole toward the p side. (This is opposite to the direction of current in a forward-biased diode, such as a light-emitting diode in operation.) When the pair is created outside the space charge zone, where the electric field is smaller, diffusion also acts to move the carriers, but the junction still plays a role by sweeping any electrons that reach it from the p side to the n side, and by sweeping any holes that reach it from the n side to the p side, thereby creating a concentration gradient outside the space charge zone. 
In thick solar cells there is very little electric field in the active region outside the space charge zone, so the dominant mode of charge carrier separation is diffusion. In these cells the diffusion length of minority carriers (the length that photo-generated carriers can travel before they recombine) must be large compared to the cell thickness. In thin film cells (such as amorphous silicon), the diffusion length of minority carriers is usually very short due to the existence of defects, and the dominant charge separation is therefore drift, driven by the electrostatic field of the junction, which extends to the whole thickness of the cell. Once the minority carrier enters the drift region, it is 'swept' across the junction and, at the other side of the junction, becomes a majority carrier. This reverse current is a generation current, fed both thermally and (if present) by the absorption of light. On the other hand, majority carriers are driven into the drift region by diffusion (resulting from the concentration gradient), which leads to the forward current; only the majority carriers with the highest energies (in the so-called Boltzmann tail; cf. Maxwell–Boltzmann statistics) can fully cross the drift region. Therefore, the carrier distribution in the whole device is governed by a dynamic equilibrium between reverse current and forward current. Connection to an external load Ohmic metal-semiconductor contacts are made to both the n-type and p-type sides of the solar cell, and the electrodes connected to an external load. Electrons that are created on the n-type side, or created on the p-type side, "collected" by the junction and swept onto the n-type side, may travel through the wire, power the load, and continue through the wire until they reach the p-type semiconductor-metal contact. Here, they recombine with a hole that was either created as an electron-hole pair on the p-type side of the solar cell, or a hole that was swept across the junction from the n-type side after being created there. The voltage measured is equal to the difference in the quasi Fermi levels of the majority carriers (electrons in the n-type portion and holes in the p-type portion) at the two terminals. Equivalent circuit of a solar cell An equivalent circuit model of an ideal solar cell's p–n junction uses an ideal current source (whose photogenerated current increases with light intensity) in parallel with a diode (whose current represents recombination losses). To account for resistive losses, a shunt resistance and a series resistance are added as lumped elements. The resulting output current equals the photogenerated current minus the currents through the diode and shunt resistor: The junction voltage (across both the diode and shunt resistance) is: where is the voltage across the output terminals. The leakage current through the shunt resistor is proportional to the junction's voltage , according to Ohm's law: By the Shockley diode equation, the current diverted through the diode is: where I0, reverse saturation current n, diode ideality factor (1 for an ideal diode) q, elementary charge k, Boltzmann constant T, absolute temperature the thermal voltage. At 25 °C, volt. Substituting these into the first equation produces the characteristic equation of a solar cell, which relates solar cell parameters to the output current and voltage: An alternative derivation produces an equation similar in appearance, but with on the left-hand side. 
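The characteristic equation referred to above is, written out in the usual single-diode notation,

\[
I \;=\; I_L \;-\; I_0\!\left[\exp\!\left(\frac{V + I R_S}{n V_T}\right)-1\right] \;-\; \frac{V + I R_S}{R_{SH}} .
\]

Because I appears on both sides, the equation is implicit in the current. The short Python sketch below solves it numerically with a simple root finder rather than the Lambert W form; all parameter values and names are assumptions chosen only to illustrate the calculation.

```python
import numpy as np
from scipy.optimize import brentq

# Single-diode model: I = IL - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh.
# Solved numerically for I at each terminal voltage V (illustrative parameters).
IL, I0 = 5.0, 1e-9        # photogenerated current (A), reverse saturation current (A)
n, Vt = 1.3, 0.02585      # ideality factor, thermal voltage at ~25 C (V)
Rs, Rsh = 0.05, 200.0     # series and shunt resistance (ohm)

def current(V):
    f = lambda I: IL - I0 * np.expm1((V + I * Rs) / (n * Vt)) - (V + I * Rs) / Rsh - I
    return brentq(f, -2 * IL, 2 * IL)   # f decreases with I, so there is one root in the bracket

V = np.linspace(0.0, 0.75, 200)
I = np.array([current(v) for v in V])
P = V * I
k = np.argmax(P)
print(f"Isc ~ {current(0.0):.3f} A, max power ~ {P[k]:.3f} W at V = {V[k]:.3f} V")
```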
The two alternatives are identities; that is, they yield precisely the same results. Since the parameters I0, n, RS, and RSH cannot be measured directly, the most common application of the characteristic equation is nonlinear regression to extract the values of these parameters on the basis of their combined effect on solar cell behavior. When RS is not zero, the above equation does not give directly, but it can then be solved using the Lambert W function: When an external load is used with the cell, its resistance can simply be added to RS and set to zero in order to find the current. When is small, we can use the approximation as to produce something much easier to work with Several further simplifications are now possible, such as when which leads to When the current generated by the PV is large compared with the current in the shunt, i.e. (because the shunt resistance is large) there is an analytical solution for for any less than : Otherwise one can solve for using the Lambert W function: However, when RSH is large it's better to solve the original equation numerically. The general form of the solution is a curve with decreasing as increases (see graphs lower down). The slope at small or negative (where the W function is near zero) approaches , whereas the slope at high approaches . Therefore for high optimum output power , it is desirable to have large and should be small. Open-circuit voltage and short-circuit current When the cell is operated at open circuit, = 0 and the voltage across the output terminals is defined as the open-circuit voltage. Assuming the shunt resistance is high enough to neglect the final term of the characteristic equation, the open-circuit voltage VOC is: Similarly, when the cell is operated at short circuit, = 0 and the current through the terminals is defined as the short-circuit current. It can be shown that for a high-quality solar cell (low RS and I0, and high RSH) the short-circuit current is: It is not possible to extract any power from the device when operating at either open circuit or short circuit conditions. Effect of physical size The values of IL, I0, RS, and RSH are dependent upon the physical size of the solar cell. In comparing otherwise identical cells, a cell with twice the junction area of another will, in principle, have double the IL and I0 because it has twice the area where photocurrent is generated and across which diode current can flow. By the same argument, it will also have half the RS of the series resistance related to vertical current flow; however, for large-area silicon solar cells, the scaling of the series resistance encountered by lateral current flow is not easily predictable since it will depend crucially on the grid design (it is not clear what "otherwise identical" means in this respect). Depending on the shunt type, the larger cell may also have half the RSH because it has twice the area where shunts may occur; on the other hand, if shunts occur mainly at the perimeter, then RSH will decrease according to the change in circumference, not area. Since the changes in the currents are the dominating ones and are balancing each other, the open-circuit voltage is practically the same; VOC starts to depend on the cell size only if RSH becomes too low. 
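The open-circuit voltage and short-circuit current formulas referred to in the passage above are, under the stated approximations and in the same notation,

\[
V_{OC} \;\approx\; n V_T \,\ln\!\left(\frac{I_L}{I_0} + 1\right),
\qquad
I_{SC} \;\approx\; I_L .
\]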
To account for the dominance of the currents, the characteristic equation is frequently written in terms of current density, or current produced per unit cell area: where J, current density (ampere/cm2) JL, photogenerated current density (ampere/cm2) J0, reverse saturation current density (ampere/cm2) rS, specific series resistance (Ω·cm2) rSH, specific shunt resistance (Ω·cm2). This formulation has several advantages. One is that since cell characteristics are referenced to a common cross-sectional area they may be compared for cells of different physical dimensions. While this is of limited benefit in a manufacturing setting, where all cells tend to be the same size, it is useful in research and in comparing cells between manufacturers. Another advantage is that the density equation naturally scales the parameter values to similar orders of magnitude, which can make numerical extraction of them simpler and more accurate even with naive solution methods. There are practical limitations of this formulation. For instance, certain parasitic effects grow in importance as cell sizes shrink and can affect the extracted parameter values. Recombination and contamination of the junction tend to be greatest at the perimeter of the cell, so very small cells may exhibit higher values of J0 or lower values of RSH than larger cells that are otherwise identical. In such cases, comparisons between cells must be made cautiously and with these effects in mind. This approach should only be used for comparing solar cells with comparable layout. For instance, a comparison between primarily quadratical solar cells like typical crystalline silicon solar cells and narrow but long solar cells like typical thin film solar cells can lead to wrong assumptions caused by the different kinds of current paths and therefore the influence of, for instance, a distributed series resistance contribution to rS. Macro-architecture of the solar cells could result in different surface areas being placed in any fixed volume - particularly for thin film solar cells and flexible solar cells which may allow for highly convoluted folded structures. If volume is the binding constraint, then efficiency density based on surface area may be of less relevance. Transparent conducting electrodes Transparent conducting electrodes are essential components of solar cells. It is either a continuous film of indium tin oxide or a conducting wire network, in which wires are charge collectors while voids between wires are transparent for light. An optimum density of wire network is essential for the maximum solar cell performance as higher wire density blocks the light transmittance while lower wire density leads to high recombination losses due to more distance traveled by the charge carriers. Cell temperature Temperature affects the characteristic equation in two ways: directly, via T in the exponential term, and indirectly via its effect on I0 (strictly speaking, temperature affects all of the terms, but these two far more significantly than the others). While increasing T reduces the magnitude of the exponent in the characteristic equation, the value of I0 increases exponentially with T. The net effect is to reduce VOC (the open-circuit voltage) linearly with increasing temperature. The magnitude of this reduction is inversely proportional to VOC; that is, cells with higher values of VOC suffer smaller reductions in voltage with increasing temperature. 
For most crystalline silicon solar cells the change in VOC with temperature is about −0.50%/°C, though the rate for the highest-efficiency crystalline silicon cells is around −0.35%/°C. By way of comparison, the rate for amorphous silicon solar cells is −0.20 to −0.30%/°C, depending on how the cell is made. The amount of photogenerated current IL increases slightly with increasing temperature because of an increase in the number of thermally generated carriers in the cell. This effect is slight, however: about 0.065%/°C for crystalline silicon cells and 0.09% for amorphous silicon cells. The overall effect of temperature on cell efficiency can be computed using these factors in combination with the characteristic equation. However, since the change in voltage is much stronger than the change in current, the overall effect on efficiency tends to be similar to that on voltage. Most crystalline silicon solar cells decline in efficiency by 0.50%/°C and most amorphous cells decline by 0.15−0.25%/°C. The figure above shows I-V curves that might typically be seen for a crystalline silicon solar cell at various temperatures. Series resistance As series resistance increases, the voltage drop between the junction voltage and the terminal voltage becomes greater for the same current. The result is that the current-controlled portion of the I-V curve begins to sag toward the origin, producing a significant decrease in and a slight reduction in ISC, the short-circuit current. Very high values of RS will also produce a significant reduction in ISC; in these regimes, series resistance dominates and the behavior of the solar cell resembles that of a resistor. These effects are shown for crystalline silicon solar cells in the I-V curves displayed in the figure to the right. Power lost through the series resistance is . During illumination when and are small relative to photocurrent , power loss also increases quadratically with . Series resistance losses are therefore most important at high illumination intensities. Shunt resistance As shunt resistance decreases, the current diverted through the shunt resistor increases for a given level of junction voltage. The result is that the voltage-controlled portion of the I-V curve begins to sag far from the origin, producing a significant decrease in and a slight reduction in VOC. Very low values of RSH will produce a significant reduction in VOC. Much as in the case of a high series resistance, a badly shunted solar cell will take on operating characteristics similar to those of a resistor. These effects are shown for crystalline silicon solar cells in the I-V curves displayed in the figure to the right. Reverse saturation current If one assumes infinite shunt resistance, the characteristic equation can be solved for VOC: Thus, an increase in I0 produces a reduction in VOC proportional to the inverse of the logarithm of the increase. This explains mathematically the reason for the reduction in VOC that accompanies increases in temperature described above. The effect of reverse saturation current on the I-V curve of a crystalline silicon solar cell are shown in the figure to the right. Physically, reverse saturation current is a measure of the "leakage" of carriers across the p–n junction in reverse bias. This leakage is a result of carrier recombination in the neutral regions on either side of the junction. 
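As a quick numerical illustration of the points above, the sketch below (with assumed, illustrative parameter values) shows the logarithmic sensitivity of VOC to the reverse saturation current and converts the typical temperature coefficient quoted earlier in this section into an absolute voltage change.

```python
# V_OC sensitivity: logarithmic in I_0, roughly linear in temperature.
import numpy as np

n, V_T = 1.0, 0.02569          # ideality factor, thermal voltage near 25 °C
I_L, I_0 = 5.0, 1e-9           # assumed photocurrent and saturation current

V_OC = n * V_T * np.log(I_L / I_0 + 1)
V_OC_10x = n * V_T * np.log(I_L / (10 * I_0) + 1)   # I_0 increased ten-fold
print(f"V_OC = {V_OC*1000:.0f} mV; after 10x increase in I_0: {V_OC_10x*1000:.0f} mV "
      f"(drop of about {1000*n*V_T*np.log(10):.0f} mV)")

# Typical crystalline-silicon temperature coefficient quoted in the text:
coeff = -0.50e-2               # relative change of V_OC per °C
dT = 25.0                      # operating 25 °C above the reference temperature
print(f"V_OC change over {dT:.0f} °C is roughly {V_OC*coeff*dT*1000:.0f} mV")
```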
Ideality factor The ideality factor (also called the emissivity factor) is a fitting parameter that describes how closely the diode's behavior matches that predicted by theory, which assumes the p–n junction of the diode is an infinite plane and no recombination occurs within the space-charge region. A perfect match to theory is indicated when n = 1. When recombination in the space-charge region dominates other recombination, however, n = 2. The effect of changing ideality factor independently of all other parameters is shown for a crystalline silicon solar cell in the I-V curves displayed in the figure to the right. Most solar cells, which are quite large compared to conventional diodes, well approximate an infinite plane and will usually exhibit near-ideal behavior under standard test conditions (n ≈ 1). Under certain operating conditions, however, device operation may be dominated by recombination in the space-charge region. This is characterized by a significant increase in I0 as well as an increase in ideality factor to n ≈ 2. The latter tends to increase solar cell output voltage while the former acts to erode it. The net effect, therefore, is a combination of the increase in voltage shown for increasing n in the figure to the right and the decrease in voltage shown for increasing I0 in the figure above. Typically, I0 is the more significant factor and the result is a reduction in voltage. Sometimes, the ideality factor is observed to be greater than 2, which is generally attributed to the presence of a Schottky diode or a heterojunction in the solar cell. The presence of a heterojunction offset reduces the collection efficiency of the solar cell and may contribute to a low fill factor. Other models While the above model is most common, other models have been proposed, like the d1MxP discrete model. See also References External links PV Lighthouse Equivalent Circuit Calculator Chemistry Explained — Solar Cells from chemistryexplained.com Theories Solar cells Physical chemistry
Theory of solar cells
[ "Physics", "Chemistry" ]
4,544
[ "Physical chemistry", "Applied and interdisciplinary physics", "nan" ]
29,179,018
https://en.wikipedia.org/wiki/Converted-wave%20analysis
During seismic exploration, P-waves (also known as primary or compressive waves) penetrate down into the earth. Due to mode conversion, a P-wave can reflect upwards as an S-wave (also known as a secondary, shear or transverse wave) when it hits an interface (e.g., solid-liquid). Other P-wave to S-wave (P-S) conversions can occur, but the down-up conversion is the primary focus. Unlike P-waves, converted shear waves are largely unaffected by fluids. By analyzing the original and converted waves, seismologists obtain additional subsurface information, especially due to (1) differential velocity (VP/VS), (2) asymmetry in the waves' angles of incidence and reflection and (3) amplitude variations. As opposed to analysis of P-wave to P-wave (P-P) reflection, c-wave (P-S) analysis is more complex. C-wave analysis requires at least three times as many measurement channels per station. Variations in reflection depths can cause significant analytic problems. Gathering, mapping, and binning c-wave data is also more difficult than P-P data. However, c-wave analysis can provide additional information needed to create a three-dimensional depth image of rock type, structure, and saturant. For example, changes in VS with respect to VP suggest changing lithology and pore geometry. References Geophysics
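One practical consequence of the velocity asymmetry is that the P-S conversion point is not midway between source and receiver, which is part of why gathering and binning c-wave data is harder than for P-P data. A common first-order estimate is the asymptotic conversion point; the short sketch below illustrates it, with the offset and VP/VS values chosen only as examples.

```python
# Asymptotic conversion point (ACP) for a P-S converted wave:
# the conversion point lies closer to the receiver because the up-going
# S leg is slower than the down-going P leg.
def asymptotic_conversion_point(offset_m: float, vp_vs: float) -> float:
    """Distance (m) from the source to the ACP, given the source-receiver
    offset and the VP/VS ratio. First-order (large-depth) approximation."""
    return offset_m * vp_vs / (1.0 + vp_vs)

offset = 2000.0                 # source-receiver offset in metres (example value)
for vp_vs in (1.5, 2.0, 3.0):   # assumed VP/VS ratios
    x_c = asymptotic_conversion_point(offset, vp_vs)
    print(f"VP/VS = {vp_vs:.1f}: conversion point about {x_c:.0f} m from the source")
```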
Converted-wave analysis
[ "Physics" ]
298
[ "Applied and interdisciplinary physics", "Geophysics" ]
2,107,349
https://en.wikipedia.org/wiki/Heterothermy
Heterothermy or heterothermia (from Greek ἕτερος heteros "other" and θέρμη thermē "heat") is a physiological term for animals that vary between self-regulating their body temperature, and allowing the surrounding environment to affect it. In other words, they exhibit characteristics of both poikilothermy and homeothermy. Definition Heterothermic animals are those that can switch between poikilothermic and homeothermic strategies. These changes in strategies typically occur on a daily basis or on an annual basis. More often than not, it is used as a way to dissociate the fluctuating metabolic rates seen in some small mammals and birds (e.g. bats and hummingbirds), from those of traditional cold blooded animals. In many bat species, body temperature and metabolic rate are elevated only during activity. When at rest, these animals reduce their metabolisms drastically, which results in their body temperature dropping to that of the surrounding environment. This makes them homeothermic when active, and poikilothermic when at rest. This phenomenon has been termed 'daily torpor' and was intensively studied in the Djungarian hamster. During the hibernation season, this animal shows strongly reduced metabolism each day during the rest phase while it reverts to endothermic metabolism during its active phase, leading to normal euthermic body temperatures (around 38 °C). Larger mammals (e.g. ground squirrels) and bats show multi-day torpor bouts during hibernation (up to several weeks) in winter. During these multi-day torpor bouts, body temperature drops to ~1 °C above ambient temperature and metabolism may drop to about 1% of the normal endothermic metabolic rate. Even in these deep hibernators, the long periods of torpor is interrupted by bouts of endothermic metabolism, called arousals (typically lasting between 4–20 hours). These metabolic arousals cause body temperature to return to euthermic levels 35-37 °C. Most of the energy spent during hibernation is spent in arousals (70-80%), but their function remains unresolved. Shallow hibernation patterns without arousals have been described in large mammals (like the black bear,) or under special environmental circumstances. Regional heterothermy Regional heterothermy describes organisms that are able to maintain different temperature "zones" in different regions of the body. This usually occurs in the limbs, and is made possible through the use of counter-current heat exchangers, such as the rete mirabile found in tuna and certain birds. These exchangers equalize the temperature between hot arterial blood going out to the extremities and cold venous blood coming back, thus reducing heat loss. Penguins and many arctic birds use these exchangers to keep their feet at roughly the same temperature as the surrounding ice. This keeps the birds from getting stuck on an ice sheet. Other animals, like the leatherback sea turtle, use the heat exchangers to gather, and retain heat generated by their muscular flippers. There are even some insects which possess this mechanism (see insect thermoregulation), the best-known example being bumblebees, which exhibit counter-current heat exchange at the point of constriction between the mesosoma ("thorax") and metasoma ("abdomen"); heat is retained in the thorax and lost from the abdomen. Using a very similar mechanism, the internal temperature of a honeybee's thorax can exceed 45 °C while in flight. 
See also Mesotherm Poikilotherm References External links Thermobiology of bats Animal physiology Thermoregulation sv:Heteroterm
Heterothermy
[ "Biology" ]
785
[ "Thermoregulation", "Animals", "Animal physiology", "Homeostasis" ]
2,108,110
https://en.wikipedia.org/wiki/List%20of%20welding%20processes
This is a list of welding processes, separated into their respective categories. The associated N reference numbers (second column) are specified in ISO 4063 (in the European Union published as EN ISO 4063). Numbers in parentheses are obsolete and were removed from the current (1998) version of ISO 4063. The AWS reference codes of the American Welding Society are commonly used in North America. Arc welding Overview article: arc welding Oxyfuel gas welding Overview article: Oxy-fuel welding and cutting Resistance welding Overview article: electric resistance welding Solid-state welding Other types of welding Notes and references Cary, Howard B. and Scott C. Helzer (2005). Modern Welding Technology. Upper Saddle River, New Jersey: Pearson Education. . Lincoln Electric (1994). The Procedure Handbook of Arc Welding. Cleveland: Lincoln Electric. . See also Welding List of welding codes Symbols and conventions used in welding documentation Laser cladding External links Welding process information Resistance welding process information Technology-related lists
List of welding processes
[ "Engineering" ]
203
[ "Welding", "Mechanical engineering" ]
2,108,774
https://en.wikipedia.org/wiki/Subcloning
In molecular biology, subcloning is a technique used to move a particular DNA sequence from a parent vector to a destination vector. Subcloning is not to be confused with molecular cloning, a related technique. Procedure Restriction enzymes are used to excise the gene of interest (the insert) from the parent. The insert is purified in order to isolate it from other DNA molecules. A common purification method is gel isolation. The number of copies of the gene is then amplified using polymerase chain reaction (PCR). Simultaneously, the same restriction enzymes are used to digest (cut) the destination. The idea behind using the same restriction enzymes is to create complementary sticky ends, which will facilitate ligation later on. A phosphatase, commonly calf-intestinal alkaline phosphatase (CIAP), is also added to prevent self-ligation of the destination vector. The digested destination vector is isolated/purified. The insert and the destination vector are then mixed together with DNA ligase. A typical molar ratio of insert genes to destination vectors is 3:1; by increasing the insert concentration, self-ligation is further decreased. After letting the reaction mixture sit for a set amount of time at a specific temperature (dependent upon the size of the strands being ligated; for more information see DNA ligase), the insert should become successfully incorporated into the destination plasmid. Amplification of product plasmid The plasmid is often transformed into a bacterium like E. coli. Ideally when the bacterium divides the plasmid should also be replicated. In the best case scenario, each bacterial cell should have several copies of the plasmid. After a good number of bacterial colonies have grown, they can be miniprepped to harvest the plasmid DNA. Selection In order to ensure growth of only transformed bacteria (which carry the desired plasmids to be harvested), a marker gene is used in the destination vector for selection. Typical marker genes are for antibiotic resistance or nutrient biosynthesis. So, for example, the "marker gene" could be for resistance to the antibiotic ampicillin. If the bacteria that were supposed to pick up the desired plasmid had picked up the desired gene then they would also contain the "marker gene". Now the bacteria that picked up the plasmid would be able to grow in ampicillin whereas the bacteria that did not pick up the desired plasmid would still be vulnerable to destruction by the ampicillin. Therefore, successfully transformed bacteria would be "selected." Example case: bacterial plasmid subcloning In this example, a gene from mammalian gene library will be subcloned into a bacterial plasmid (destination platform). The bacterial plasmid is a piece of circular DNA which contains regulatory elements allowing for the bacteria to produce a gene product (gene expression) if it is placed in the correct place in the plasmid. The production site is flanked by two restriction enzyme cutting sites "A" and "B" with incompatible sticky ends. The mammalian DNA does not come with these restriction sites, so they are built in by overlap extension PCR. The primers are designed to put the restriction sites carefully, so that the coding of the protein is in-frame, and a minimum of extra amino acids is implanted on either side of the protein. Both the PCR product containing the mammalian gene with the new restriction sites and the destination plasmid are subjected to restriction digestion, and the digest products are purified by gel electrophoresis. 
The digest products, now containing compatible sticky ends with each other (but incompatible sticky ends with themselves) are subjected to ligation, creating a new plasmid which contains the background elements of the original plasmid with a different insert. The plasmid is transformed into bacteria and the identity of the insert is confirmed by DNA sequencing. See also Cloning Molecular cloning Polymerase chain reaction TA cloning References Cloning Molecular biology Biotechnology Molecular biology techniques
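The 3:1 insert-to-vector molar ratio mentioned in the procedure is normally set up by mass, since DNA is quantified in nanograms, and converting a molar ratio to masses only requires the fragment lengths. A minimal sketch of that routine calculation follows; the fragment sizes and vector mass are hypothetical examples.

```python
# Mass of insert needed for a ligation at a given insert:vector molar ratio.
# Because molar amount scales inversely with fragment length, the usual formula is
#   insert_ng = vector_ng * (insert_kb / vector_kb) * molar_ratio
def insert_mass_ng(vector_ng: float, vector_kb: float,
                   insert_kb: float, molar_ratio: float = 3.0) -> float:
    return vector_ng * (insert_kb / vector_kb) * molar_ratio

# Hypothetical example: 50 ng of a 3.0 kb destination vector, 1.0 kb insert,
# and the 3:1 molar ratio suggested in the text.
print(f"{insert_mass_ng(50, 3.0, 1.0, 3.0):.1f} ng of insert")  # -> 50.0 ng
```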
Subcloning
[ "Chemistry", "Engineering", "Biology" ]
843
[ "Cloning", "Genetic engineering", "Biotechnology", "Molecular biology techniques", "nan", "Molecular biology", "Biochemistry" ]
2,109,352
https://en.wikipedia.org/wiki/Quinonoid%20zwitterion
A quinonoid zwitterion is a mesoionic zwitterion based on quinone-related chemical compounds. The benzene derivate 1,3-dihydroxy-4,6-diaminobenzene is easily oxidized by air in water or methanol to the quinonoid. This compound was first prepared in 1883 and the quinonoid structure first proposed in 1956. In 2002 the compound was found to be more stable and to exist as the zwitterion after a proton transfer. Evidence for this structure is based on nuclear magnetic resonance spectroscopy and x-ray crystallography. The positive charge is delocalized between the amino groups over four bonds involving six pi electrons. The negative charge is spread likewise between the oxygen atoms. See also Questiomycin A - an antibiotic compound containing the quinonoid group References External links Seminar abstract Zwitterions
Quinonoid zwitterion
[ "Physics", "Chemistry" ]
190
[ "Ions", "Zwitterions", "Matter" ]
2,110,057
https://en.wikipedia.org/wiki/Hypsochromic%20shift
In spectroscopy, hypsochromic shift (from Greek hypsos, "height") is a change of spectral band position in the absorption, reflectance, transmittance, or emission spectrum of a molecule to a shorter wavelength (higher frequency). Because the blue color in the visible spectrum has a shorter wavelength than most other colors, this effect is also commonly called a blue shift. It should not be confused with a bathochromic shift, which is the opposite process – the molecule's spectra are changed to a longer wavelength (lower frequency). Hypsochromic shifts can occur because of a change in environmental conditions. For example, a change in solvent polarity will result in solvatochromism. A series of structurally related molecules in a substitution series can also show a hypsochromic shift. Hypsochromic shift is a phenomenon seen in molecular spectra, not atomic spectra; it is thus more common to speak of the movement of the peaks in the spectrum rather than lines. The shift can be expressed as Δλ = λafter − λbefore < 0, where λ is the wavelength of the spectral peak of interest and Δλ is the change in its position. For example, β-acylpyrrole will show a hypsochromic shift of 30-40 nm in comparison with α-acylpyrroles. See also Bathochromic shift, a change in band position to a longer wavelength (lower frequency). References Spectroscopy Chromism
Hypsochromic shift
[ "Physics", "Chemistry", "Materials_science", "Astronomy", "Engineering" ]
278
[ "Spectroscopy stubs", "Materials science stubs", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Chromism", "Astronomy stubs", "Materials science", "Smart materials", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
2,110,062
https://en.wikipedia.org/wiki/Bathochromic%20shift
In spectroscopy, bathochromic shift (from Greek bathys, "deep"; hence the less common alternate spelling "bathychromic") is a change of spectral band position in the absorption, reflectance, transmittance, or emission spectrum of a molecule to a longer wavelength (lower frequency). Because the red color in the visible spectrum has a longer wavelength than most other colors, the effect is also commonly called a red shift. Hypsochromic shift is a change to shorter wavelength (higher frequency). Conditions It can occur because of a change in environmental conditions: for example, a change in solvent polarity will result in solvatochromism. A series of structurally-related molecules in a substitution series can also show a bathochromic shift. Bathochromic shift is a phenomenon seen in molecular spectra, not atomic spectra; it is thus more common to speak of the movement of the peaks in the spectrum rather than lines. The shift can be expressed as Δλ = λafter − λbefore > 0, where λ is the wavelength of the spectral peak of interest and Δλ is the change in its position. Detection Bathochromic shift is typically demonstrated using a spectrophotometer, colorimeter, or spectroradiometer. See also Chromism Solvatochromism Spectroscopy References Chromism Spectroscopy
Bathochromic shift
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
243
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Chromism", "Materials science", "Smart materials", "Spectroscopy" ]
23,102,251
https://en.wikipedia.org/wiki/StarTram
StarTram is a proposed space launch system propelled by maglev technology. The initial Generation 1 facility is proposed to launch cargo only from a mountain peak at an altitude of using an evacuated tube remaining at local surface level. Annual orbital lift was estimated at approximately 150,000 tons. More advanced technology is required for a Generation 2 system for passengers, with a longer track instead gradually curving up at its end to the thinner air at altitude, supported by magnetic levitation, reducing g-forces when each capsule transitions from the vacuum tube to the atmosphere. A SPESIF 2010 presentation stated that Generation 1 could be completed by the year 2020 or later if funding began in 2010, and Generation 2 by 2030 or later. History James R. Powell invented the superconducting maglev concept in the 1960s with a colleague, Gordon Danby, also at Brookhaven National Laboratory, which was subsequently developed into modern maglev trains. Later, Powell co-founded StarTram, Inc. with Dr. George Maise, an aerospace engineer who previously was at Brookhaven National Laboratory from 1974 to 1997 with particular expertise including reentry heating and hypersonic vehicle design. A StarTram design was first published in a 2001 paper and patent, making reference to a 1994 paper on MagLifter. Developed by John C. Mankins, who was manager of Advanced Concept Studies at NASA, the MagLifter concept involved maglev launch assist for a few hundred m/s with a short track, 90% projected efficiency. Noting StarTram is essentially MagLifter taken to a much greater extreme, both MagLifter and StarTram were discussed the following year in a concept study performed by ZHA for NASA's Kennedy Space Center, also considered together by Maglev 2000 with Powell and Danby. Subsequent design modifies StarTram into a generation 1 version, a generation 2 version, and an alternative generation 1.5 variant. John Rather, who served as assistant director for Space Technology (Program Development) at NASA, said: Description Generation 1 System The Gen-1 system proposes to accelerate uncrewed craft at 30 g through a long tunnel, with a plasma window preventing vacuum loss when the exit's mechanical shutter is briefly open, evacuated of air with an MHD pump. (The plasma window is larger than prior constructions, 2.5 MW estimated power consumption itself for diameter). In the reference design, the exit is on the surface of a mountain peak of altitude, where launch velocity at a 10-degree angle takes cargo capsules to low Earth orbit when combined with a small rocket burn providing for orbit circularization. With a bonus from Earth's rotation if firing east, the extra speed, well beyond nominal orbital velocity, compensates for losses during ascent including from atmospheric drag. A 40-ton cargo craft, diameter and length, would experience briefly the effects of atmospheric passage. With an effective drag coefficient of 0.09, peak deceleration for the mountain-launched elongated projectile is momentarily 20 g but halves within the first 4 seconds and continues to decrease as it quickly passes above the bulk of the remaining atmosphere. In the first moments after exiting the launch tube, the heating rate with an optimal nose shape is around 30 kW/cm2 at the stagnation point, though much less over most of the nose, but drops below 10 kW/cm2 within a few seconds. Transpiration water cooling is planned, briefly consuming up to ≈ 100 liters/m2 of water per second. 
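The Gen-1 figures above follow from constant-acceleration kinematics. As a rough check, the sketch below combines the 30 g acceleration quoted here with the 130-km acceleration tube length cited later in the article (pairing the two numbers is my assumption) to recover the exit speed, the time spent in the tube, and the kinetic energy per kilogram.

```python
# Constant-acceleration launch-tube kinematics: v^2 = 2*a*L, t = v/a.
import math

g = 9.81               # m/s^2
a = 30 * g             # Gen-1 acceleration (30 g, from the text)
L = 130e3              # acceleration tube length in metres (130 km, from the text)

v_exit = math.sqrt(2 * a * L)          # exit speed
t = v_exit / a                         # time under acceleration
ke_per_kg = 0.5 * v_exit**2 / 1e6      # kinetic energy, MJ per kg

print(f"exit speed is about {v_exit/1000:.2f} km/s")     # ~8.7 km/s
print(f"time in tube is about {t:.0f} s")                # ~30 s
print(f"kinetic energy is about {ke_per_kg:.1f} MJ/kg")  # ~38 MJ/kg
```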
Several percent of the projectile's mass in water is calculated to suffice. The tunnel tube itself for Gen-1 has no superconductors, no cryogenic cooling requirements, and none of it is at higher elevation than the local ground surface. Except for probable usage of SMES as the electrical power storage method, superconducting magnets are only on the moving spacecraft, inducing current into relatively inexpensive aluminum loops on the acceleration tunnel walls, levitating the craft with 10 centimeters clearance, while meanwhile a second set of aluminum loops on the walls carries an AC current accelerating the craft: a linear synchronous motor. Powell predicts a total expense, primarily hardware costs, of $43 per kilogram of payload with 35-ton payloads being launched 10+ times a day, as opposed to rocket launch prices of $10,000 to $25,000 per kilogram to low Earth orbit at the time. The estimated cost of electrical energy to reach the velocity of low Earth orbit is under $1 per kilogram of payload: 6 cents per kilowatt-hour contemporary industrial electricity cost, launch kinetic energy of 38.5 MJ per kilogram, and 87.5% of mass payload, accelerated at high efficiency by this linear electric motor. Generation 2 System The Gen-2 variant of the StarTram is supposed to be for reusable crewed capsules, intended to be low g-force, 2 to 3 g acceleration in the launch tube and an elevated exit at such high altitude () that peak aerodynamic deceleration becomes ≈ 1g. Though NASA test pilots have handled multiple times those g-forces, the low acceleration is intended to allow eligibility to the broadest spectrum of the general public. With such relatively slow acceleration, the Gen-2 system requires length. The cost for the non-elevated majority of the tube's length is estimated to be several tens of millions of dollars per kilometer, proportionately a semi-similar expense per unit length to the tunneling portion of the former Superconducting Super Collider project (originally planned to have of diameter vacuum tunnel excavated for $2 billion) or to some existing maglev train lines where Powell's Maglev 2000 system is claiming major cost-reducing further innovations. An area of Antarctica above sea level is one siting option, especially as the ice sheet is viewed as relatively easy to tunnel through. For the elevated end portion, the design considers magnetic levitation to be relatively less expensive than alternatives for elevating a launch tube of a mass driver (tethered balloons, compressive or inflated aerospace-material megastructures). A 280-megaamp current in ground cables creates a magnetic field of 30 Gauss strength at above sea level (somewhat less above local terrain depending on site choice), while cables on the elevated final portion of the tube carry 14 megaamps in the opposite direction, generating a repulsive force of 4 tons per meter; it is claimed that this would keep the 2-ton/meter structure strongly pressing up on its angled tethers, a tensile structure on grand scale. In the example of niobium-titanium superconductor carrying 2 × 105 amps per cm2, the levitated platform would have 7 cables, each of conductor cross-section when including copper stabilizer. Generation 1.5 System (lower-velocity option) An alternative, Gen-1.5, would launch passenger spacecraft at from a mountaintop at around 6000 meters above sea level from a ≈ tunnel accelerating at ≈ 3 g. 
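The "under $1 per kilogram" electricity estimate above can be reproduced directly from the quantities quoted in the same sentence; the sketch below is only that arithmetic, with no assumptions beyond ignoring motor losses (the text describes the acceleration as highly efficient).

```python
# Electrical energy cost per kilogram of payload for Gen-1 (figures from the text).
ke_mj_per_kg = 38.5          # launch kinetic energy, MJ per kg of launched mass
price_per_kwh = 0.06         # contemporary industrial electricity price, $/kWh
payload_fraction = 0.875     # 87.5% of launched mass is payload

kwh_per_kg = ke_mj_per_kg / 3.6                  # 1 kWh = 3.6 MJ
cost_per_kg_payload = kwh_per_kg * price_per_kwh / payload_fraction
print(f"about ${cost_per_kg_payload:.2f} per kg of payload")   # ~$0.73/kg
```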
Though construction costs would be lower than the Gen-2 version, Gen-1.5 would differ from other StarTram variants by requiring 4+ km/s to be provided by other means, like rocket propulsion. However, the non-linear nature of the rocket equation still makes the payload fraction for such a vehicle significantly greater than that of a conventional rocket unassisted by electromagnetic launch, and a vehicle with high available weight margins and safety factors should be far easier to mass-produce cheaply or make reusable with rapid turnaround than current rockets. Dr. Powell remarks that present launch vehicles "have many complex systems that operate near their failure point, with very limited redundancy," with extreme hardware performance relative to weight being a top driver of expense. (Fuel itself is on the order of 1% of the current costs to orbit). Alternatively, Gen-1.5 could be combined with another non-rocket spacelaunch system, like a Momentum Exchange Tether similar to the HASTOL concept which was intended to take a vehicle to orbit. Because tethers are subject to highly exponential scaling, such a tether would be much easier to build using current technologies than one providing full orbital velocity by itself. The launch tunnel length in this proposal could be reduced by accepting correspondingly larger forces on the passengers. A ≈ tunnel would generate forces of ≈ 10-15 g, which physically fit test pilots have endured successfully in centrifuge tests, but a slower acceleration with a longer tunnel would ease passenger requirements and reduce peak power draw, which in turn would decrease power conditioning expenses. Economics and potential The StarTram ground facility concept is claimed to be reusable after each launch without extensive maintenance, as it would essentially be a large linear synchronous electric motor. This would shift most of the "requirement for achieving orbit to a robust ground infrastructure," intended to have neither high performance relative to weight requirements nor such as the $25,000 per kilogram of flyable dry weight costs of the Space Shuttle. The designers estimate a construction cost for Generation 1 of $19 billion, becoming $67 billion for passenger-capable Generation 2. The alternative Generation 1.5 design, such as launch velocity, would be intermediate in velocity terms between Gen-1's and the Maglifter design (which had $0.2 billion estimated cost for launch assist in the case of a 50-ton vehicle). The Generation 2 goal is $13,000 per person. Up to 4 million people could be sent to orbit per decade per Gen-2 facility if as estimated. Challenges Gen-1 The largest challenge for Gen-1 is considered by the researchers to be sufficiently affordable storage, rapid delivery, and handling of the power requirements. For needed electrical energy storage (discharged over 30 seconds with about 50 gigawatt average and about 100 gigawatts peak), SMES cost performance on such unusual scale is anticipated of around a dollar per kilojoule and $20 per kW-peak. Such would be novel in scale but not greatly different planned cost performance than obtained in other smaller pulsed power energy storage systems (such as quick-discharge modern supercapacitors dropping from $151/kJ to $2.85/kJ cost between 1998 and 2006 while being predicted to later reach a dollar per kJ, lead acid batteries which can be $10 per kW-peak for a few seconds, or experimental railgun compulsator power supplies). The study notes pulsed MHD generators may be an alternative. 
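The storage requirement described above translates into rough numbers as follows; the sketch multiplies the stated average power by the discharge time to get stored energy and then applies the anticipated $1/kJ and $20/kW-peak figures. It is an order-of-magnitude check of the text, not an engineering estimate.

```python
# Rough sizing of the Gen-1 pulsed-power store from the figures in the text.
avg_power_w = 50e9        # ~50 GW average over the launch
peak_power_w = 100e9      # ~100 GW peak
discharge_s = 30          # ~30 second discharge

energy_j = avg_power_w * discharge_s
print(f"stored energy is about {energy_j/1e12:.1f} TJ")          # ~1.5 TJ

cost_energy = (energy_j / 1e3) * 1.0       # ~$1 per kJ
cost_peak = (peak_power_w / 1e3) * 20.0    # ~$20 per kW-peak
print(f"storage cost roughly ${cost_energy/1e9:.1f}B (energy basis), "
      f"${cost_peak/1e9:.1f}B (peak-power basis)")
```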
For MagLifter, General Electric estimated in 1997-2000 that a set of hydroelectric flywheel pulse power generators could be manufactured for a cost equating to $5.40 per kJ and $27 per kW-peak. For StarTram, the SMES design choice is a better (less expensive) approach than pulse generators according to Powell. The single largest predicted capital cost for Gen-1 is the power conditioning, from an initially DC discharge to the AC current wave, dealing for a few seconds with very high power, up to 100 gigawatts, at a cost estimated to be $100 per kW-peak. Yet, compared to some other potential implementations of a coilgun launcher with relatively higher requirements for pulse power switching devices (an example being a much shorter escape-velocity design studied after a 1977 NASA Ames study determined how to survive atmospheric passage from ground launch), which are not always semiconductor-based, the 130-km acceleration tube length of Gen-1 spreads out energy input requirements over a longer acceleration duration. This makes peak input power handling requirements no more than about 2 GW per ton of the vehicle. The tradeoff of greater expense for the tunnel itself is incurred, but the tunnel is estimated to be about $4.4 billion including $1500 per cubic meter excavation, a minority of total system cost. Gen-1.5 The current land speed record of 2.9 km/s was obtained by a sled on 5 kilometers of rail track mostly in a helium-filled tunnel, in a $20 million project. The Gen-1.5 version of the StarTram, launching passenger RLVs at 4 km/s from the surface of a mountain, would be significantly faster with a far more massive vehicle. However, it would accelerate in a lengthy vacuum tunnel without air or gas drag, with levitation preventing hypervelocity physical rail contact, and with 3 orders of magnitude higher anticipated funding. Many challenges, including high initial capital cost, would overlap with Gen-1, though without the levitated launch tube of Gen-2. Gen-2 Gen-2 introduces a particular extra challenge with its elevated launch tube, levitating both the vehicle and part of the tube (unlike Gen-1 and Gen-1.5, which only levitate the vehicle). As of 2010, operating maglev systems levitate the train by only a small gap above the track; for the Gen-2 version of the StarTram, it is necessary to levitate the track itself up to its high-altitude exit, a distance greater by a factor of 1.5 million. The force per unit length between two long parallel conducting lines is given by Ampère's force law, F/l = μ0·I1·I2/(2π·d). Here F is the force, μ0 the permeability of free space, I1 and I2 the electric currents, l the length of the lines and d their distance. To exert the required force at that separation in air (relative permeability ≈ 1), a ground-cable current of roughly 280 x 10^6 A is needed against a levitated-cable current of roughly 14 x 10^6 A. For comparison, in lightning the maximal current is about 10^5 A, c.f. properties of lightning, though the resistive power dissipated by a current flowing through a conductor is proportional to the voltage drop: high for a lightning discharge of millions of volts in air, but ideally zero for a zero-resistance superconductor.
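The levitation force follows directly from the force law just quoted. The sketch below evaluates it for the 280 MA ground and 14 MA tube currents; the cable separation is an assumed, illustrative value, since the article's own altitude figure did not survive extraction. With a separation of a couple of tens of kilometres the result comes out near the 4 tons per metre quoted earlier in the article.

```python
# Force per unit length between two long parallel currents (Ampère's force law):
#   F/l = mu0 * I1 * I2 / (2 * pi * d)
import math

mu0 = 4e-7 * math.pi      # permeability of free space, T*m/A
I_ground = 280e6          # ground-cable current (from the text), A
I_tube = 14e6             # levitated-cable current (from the text), A
d = 20e3                  # cable separation in metres; ASSUMED illustrative value

force_per_m = mu0 * I_ground * I_tube / (2 * math.pi * d)
print(f"repulsive force is about {force_per_m/9.81/1000:.1f} tonne-force per metre")
```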
While the performance of niobium-titanium superconductor is technically sufficient (a critical current density of 5 x 105 A/cm2 under the relevant magnetic field conditions for the levitated platform, 40% of that in practice after a safety factor), uncertainties on economics include a far more optimistic assumption for Gen-2 of $0.2 per kA-meter of superconductor compared to the $2 per kA-meter assumed for Gen-1 (where Gen-1 doesn't have any of its launch tube levitated but uses superconducting cable for a large SMES and within the maglev craft launched). NbTi was the design choice under the available economies of scale for cooling, since it presently costs $1 per kA-meter, while high temperature superconductors so far still cost much more for the conductor itself per kA-meter. If considering a design with an acceleration up to 10 g (which is higher than the re-entry acceleration of Apollo 16) then the whole track must be at least long for a passenger version of the Gen-2 system. Such length allows use of the approximation for an infinite line to calculate the force. The preceding neglects how only the final portion of the track is levitated, but a more complex calculation only changes the result for force per unit length of it by 10-20% (fgl = 0.8 to 0.9 instead of 1). The researchers themselves do not consider there to be any doubt whether the levitation would work in terms of force exerted (a consequence of Ampère's force law) but see the primary challenge as the practical engineering complexities of erection of the tube, while a substantial portion of engineering analysis focused on handling bending caused by wind. The active structure is calculated to bend by a fraction of a meter per kilometer under wind in the very thin air at its high altitude, a slight curvature theoretically handled by guidance loops, with net levitation force beyond structure weight exceeding wind force by a factor of 200+ to keep tethers taut, and with the help of computer-controlled control tethers. See also Non-rocket spacelaunch Rocket sled launch Vactrain High altitude platform station as a space port ThothX Tower References External links Startram Homepage Exploratory engineering Hypothetical technology Maglev Megastructures Single-stage-to-orbit Space colonization Space launch vehicles of the United States Space technology Vertical transport devices Non-rocket spacelaunch
StarTram
[ "Astronomy", "Technology" ]
3,309
[ "Exploratory engineering", "Transport systems", "Outer space", "Space technology", "Vertical transport devices", "Megastructures" ]
23,105,042
https://en.wikipedia.org/wiki/Primordial%20nuclide
In geochemistry, geophysics and nuclear physics, primordial nuclides, also known as primordial isotopes, are nuclides found on Earth that have existed in their current form since before Earth was formed. Primordial nuclides were present in the interstellar medium from which the solar system was formed, and were formed in, or after, the Big Bang, by nucleosynthesis in stars and supernovae followed by mass ejection, by cosmic ray spallation, and potentially from other processes. They are the stable nuclides plus the long-lived fraction of radionuclides surviving in the primordial solar nebula through planet accretion until the present; 286 such nuclides are known. Stability All of the known 251 stable nuclides, plus another 35 nuclides that have half-lives long enough to have survived from the formation of the Earth, occur as primordial nuclides. These 35 primordial radionuclides represent isotopes of 28 separate elements. Cadmium, tellurium, xenon, neodymium, samarium, osmium, and uranium each have two primordial radioisotopes (, ; , ; , ; , ; , ; , ; and , ). Because the age of the Earth is (4.6 billion years), the half-life of the given nuclides must be greater than about (100 million years) for practical considerations. For example, for a nuclide with half-life (60 million years), this means 77 half-lives have elapsed, meaning that for each mole () of that nuclide being present at the formation of Earth, only 4 atoms remain today. The seven shortest-lived primordial nuclides (i.e., the nuclides with the shortest half-lives) to have been experimentally verified are (), (), (), (), (), (), and (). These are the seven nuclides with half-lives comparable to, or somewhat less than, the estimated age of the universe. (87Rb, 187Re, 176Lu, and 232Th have half-lives somewhat longer than the age of the universe.) For a complete list of the 35 known primordial radionuclides, including the next 28 with half-lives much longer than the age of the universe, see the complete list below. For practical purposes, nuclides with half-lives much longer than the age of the universe may be treated as if they were stable. 87Rb, 187Re, 176Lu, 232Th, and 238U have half-lives long enough that their decay is limited over geological time scales; 40K and 235U have shorter half-lives and are hence severely depleted, but are still long-lived enough to persist significantly in nature. The longest-lived isotope not proven to be primordial is , which has a half-life of , followed by () and (). 244Pu was reported to exist in nature as a primordial nuclide in 1971, but this detection could not be confirmed by further studies in 2012 and 2022. Taking into account that all these nuclides must exist for at least , 146Sm must survive 45 half-lives (and hence be reduced by 245 ≈ ), 244Pu must survive 57 (and be reduced by a factor of 257 ≈ ), and 92Nb must survive 130 (and be reduced by 2130 ≈ ). Mathematically, considering the likely initial abundances of these nuclides, primordial 146Sm and 244Pu should persist somewhere within the Earth today, even if they are not identifiable in the relatively minor portion of the Earth's crust available to human assays, while 92Nb and all shorter-lived nuclides should not. Nuclides such as 92Nb that were present in the primordial solar nebula but have long since decayed away completely are termed extinct radionuclides if they have no other means of being regenerated. 
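The survival arguments above are powers-of-two arithmetic: divide the age of the Solar System by the half-life to get the number of elapsed half-lives, then attenuate by two raised to that power. The sketch below reproduces, to rounding, the figures quoted in this section; the half-lives used for 146Sm, 244Pu and 92Nb are standard approximate values supplied here as assumptions, since the article's own numbers did not survive extraction.

```python
# Fraction of a primordial radionuclide surviving over the age of the Solar System.
from math import log10

AVOGADRO = 6.022e23
AGE_MYR = 4600.0          # age of the Earth/Solar System, million years

def survival(half_life_myr):
    n_half_lives = AGE_MYR / half_life_myr
    fraction = 2.0 ** (-n_half_lives)
    return n_half_lives, fraction

# 60-Myr half-life example from the text: ~77 half-lives, a handful of atoms per mole.
n, f = survival(60.0)
print(f"60 Myr: {n:.0f} half-lives, about {AVOGADRO * f:.0f} atoms left per mole")

# Attenuation factors for the shortest-lived candidates discussed in the text
# (assumed approximate half-lives: 146Sm ~103 Myr, 244Pu ~80 Myr, 92Nb ~35 Myr).
for name, t_half in (("146Sm", 103.0), ("244Pu", 80.1), ("92Nb", 34.7)):
    n, f = survival(t_half)
    print(f"{name}: {n:.0f} half-lives, reduced by a factor of ~10^{-log10(f):.0f}")
```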
As for 244Pu, calculations suggest that as of 2022, sensitivity limits were about one order of magnitude away from detecting it as a primordial nuclide. Because primordial chemical elements often consist of more than one primordial isotope, there are only 83 distinct primordial chemical elements. Of these, 80 have at least one observationally stable isotope and three additional primordial elements have only radioactive isotopes (bismuth, thorium, and uranium). Naturally occurring nuclides that are not primordial Some unstable isotopes which occur naturally (such as , , and ) are not primordial, as they must be constantly regenerated. This occurs by cosmic radiation (in the case of cosmogenic nuclides such as and ), or (rarely) by such processes as geonuclear transmutation (neutron capture of uranium in the case of and ). Other examples of common naturally occurring but non-primordial nuclides are isotopes of radon, polonium, and radium, which are all radiogenic nuclide daughters of uranium decay and are found in uranium ores. The stable argon isotope 40Ar is actually more common as a radiogenic nuclide than as a primordial nuclide, forming almost 1% of the Earth's atmosphere, which is regenerated by the beta decay of the extremely long-lived radioactive primordial isotope 40K, whose half-life is on the order of a billion years and thus has been generating argon since early in the Earth's existence. (Primordial argon was dominated by the alpha process nuclide 36Ar, which is significantly rarer than 40Ar on Earth.) A similar radiogenic series is derived from the long-lived radioactive primordial nuclide 232Th. These nuclides are described as geogenic, meaning that they are decay or fission products of uranium or other actinides in subsurface rocks. All such nuclides have shorter half-lives than their parent radioactive primordial nuclides. Some other geogenic nuclides do not occur in the decay chains of 232Th, 235U, or 238U but can still fleetingly occur naturally as products of the spontaneous fission of one of these three long-lived nuclides, such as 126Sn, which makes up about 10−14 of all natural tin. Another, 99Tc, has also been detected. There are five other long-lived fission products known. Primordial elements A primordial element is a chemical element with at least one primordial nuclide. There are 251 stable primordial nuclides and 35 radioactive primordial nuclides, but only 80 primordial stable elements—hydrogen through lead, atomic numbers 1 to 82, except for technetium (43) and promethium (61)—and three radioactive primordial elements—bismuth (83), thorium (90), and uranium (92). If plutonium (94) turns out to be primordial (specifically, the long-lived isotope Pu), then it would be a fourth radioactive primordial, though practically speaking it would still be more convenient to produce synthetically. Bismuth's half-life is so long that it is often classed with the 80 stable elements instead, since its radioactivity is not a cause for concern. The number of elements is smaller than the number of nuclides, because many of the primordial elements are represented by multiple isotopes. See chemical element for more information. Naturally occurring stable nuclides As noted, these number about 251. For a list, see the article list of elements by stability of isotopes. For a complete list noting which of the "stable" 251 nuclides may be in some respect unstable, see list of nuclides and stable nuclide. 
These questions do not impact the question of whether a nuclide is primordial, since all "nearly stable" nuclides, with half-lives longer than the age of the universe, are also primordial. Radioactive primordial nuclides Though it is estimated that about 35 primordial nuclides are radioactive (list below), it becomes very hard to determine the exact total number of radioactive primordials, because the total number of stable nuclides is uncertain. There are many extremely long-lived nuclides whose half-lives are still unknown; in fact, all nuclides heavier than dysprosium-164 are theoretically radioactive. For example, it is predicted theoretically that all isotopes of tungsten, including those indicated by even the most modern empirical methods to be stable, must be radioactive and can alpha decay, but this could only be measured experimentally for W. Likewise, all four primordial isotopes of lead are expected to decay to mercury, but the predicted half-lives are so long (some exceeding 10 years) that such decays could hardly be observed in the near future. Nevertheless, the number of nuclides with half-lives so long that they cannot be measured with present instruments—and are considered from this viewpoint to be stable nuclides—is limited. Even when a "stable" nuclide is found to be radioactive, it merely moves from the stable to the unstable list of primordials, and the total number of primordial nuclides remains unchanged. For practical purposes, these nuclides may be considered stable for all purposes outside specialized research. List of 35 radioactive primordial nuclides and measured half-lives These 35 primordial radionuclides are isotopes of 28 elements (cadmium, neodymium, osmium, samarium, tellurium, uranium, and xenon each have two primordial radioisotopes). These nuclides are listed in order of decreasing stability. Many of them are so nearly stable that they compete for abundance with stable isotopes of their respective elements. For three elements (indium, tellurium, and rhenium) a very long-lived radioactive primordial nuclide is more abundant than a stable nuclide. The longest-lived radionuclide known, Te, has a half-life of : 1.6 × 10 times the age of the Universe. Only four of these 35 nuclides have half-lives shorter than, or equal to, the age of the universe. Most of the other 30 have half-lives much longer. The shortest-lived primordial, U, has a half-life of 703.8 million years, about 1/6 the age of the Earth and Solar System. Many of these nuclides decay by double beta decay, though some like Bi decay by other means such as alpha decay. At the end of the list, are two more nuclides: Sm and Pu. They have not been confirmed as primordial, but their half-lives are long enough that minute quantities should persist today. List legends See also Alpha nuclide Table of nuclides sorted by half-life Table of nuclides Isotope geochemistry Radionuclide Mononuclidic element Monoisotopic element Stable isotope List of nuclides List of elements by stability of isotopes Big Bang nucleosynthesis References Geochemistry Radiometric dating Isotopes Metrology
Primordial nuclide
[ "Physics", "Chemistry" ]
2,368
[ "Isotopes", "Radiometric dating", "nan", "Nuclear physics", "Radioactivity" ]
5,291,187
https://en.wikipedia.org/wiki/Ramberg%E2%80%93Osgood%20relationship
The Ramberg–Osgood equation was created to describe the nonlinear relationship between stress and strain—that is, the stress–strain curve—in materials near their yield points. It is especially applicable to metals that harden with plastic deformation (see work hardening), showing a smooth elastic-plastic transition. As it is a phenomenological model, checking the fit of the model with actual experimental data for the particular material of interest is essential. In its original form, the equation for strain (deformation) is ε = σ/E + K·(σ/E)^n, where ε is strain, σ is stress, E is Young's modulus, and K and n are constants that depend on the material being considered. In this form, K and n are not the same as the constants commonly seen in the Hollomon equation. The equation is essentially assuming the elastic strain portion of the stress-strain curve, σ/E, can be modeled with a line, while the plastic portion, K·(σ/E)^n, can be modeled with a power law. The elastic and plastic components are summed to find the total strain. The first term on the right side, σ/E, is equal to the elastic part of the strain, while the second term, K·(σ/E)^n, accounts for the plastic part, the parameters K and n describing the hardening behavior of the material. Introducing the yield strength of the material, σ0, and defining a new parameter, α, related to K as α = K·(σ0/E)^(n−1), it is convenient to rewrite the term on the extreme right side as follows: K·(σ/E)^n = α·(σ/E)·(σ/σ0)^(n−1). Replacing in the first expression, the Ramberg–Osgood equation can be written as ε = σ/E + α·(σ/E)·(σ/σ0)^(n−1). Hardening behavior and yield offset In the last form of the Ramberg–Osgood model, the hardening behavior of the material depends on the material constants α and n. Due to the power-law relationship between stress and plastic strain, the Ramberg–Osgood model implies that plastic strain is present even for very low levels of stress. Nevertheless, for low applied stresses and for the commonly used values of the material constants α and n, the plastic strain remains negligible compared to the elastic strain. On the other hand, for stress levels higher than σ0, plastic strain becomes progressively larger than elastic strain. The value α·σ0/E can be seen as a yield offset, as shown in figure 1. This comes from the fact that ε = (1 + α)·σ0/E when σ = σ0. Accordingly (see Figure 1): elastic strain at yield = σ0/E; plastic strain at yield = α·σ0/E = yield offset. Commonly used values for n are ~5 or greater, although more precise values are usually obtained by fitting of tensile (or compressive) experimental data. Values for α can also be found by means of fitting to experimental data, although for some materials, it can be fixed in order to have the yield offset equal to the accepted value of strain of 0.2%, which means: α·σ0/E = 0.002. Alternative Formulations Several slightly different alternative formulations of the Ramberg-Osgood equation can be found. As the models are purely empirical, it is often useful to try different models and check which has the best fit with the chosen material. The Ramberg-Osgood equation can also be expressed using the Hollomon parameters as ε = σ/E + (σ/KH)^(1/nH), where KH is the strength coefficient (Pa) and nH is the strain hardening coefficient (no units). Alternatively, if the yield stress, σy, is assumed to be at the 0.2% offset strain, the following relationship can be derived: ε = σ/E + 0.002·(σ/σy)^n. Note that n is again as defined in the original Ramberg-Osgood equation and is the inverse of the Hollomon's strain hardening coefficient. See also Viscoplasticity#Johnson–Cook flow stress model References Mechanics Materials science
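Because the relation gives strain explicitly as a function of stress, it is trivial to evaluate in that direction, while going from strain to stress requires a numerical inversion. The sketch below implements the α–n form derived above; the material constants are illustrative values loosely typical of a structural steel, not fitted data.

```python
# Ramberg-Osgood strain as a function of stress:
#   eps = sigma/E + alpha * (sigma/E) * (sigma/sigma_0)**(n - 1)
from scipy.optimize import brentq

def ro_strain(sigma, E, sigma_0, alpha, n):
    return sigma / E + alpha * (sigma / E) * (sigma / sigma_0) ** (n - 1)

def ro_stress(eps, E, sigma_0, alpha, n):
    """Invert the relation numerically (strain -> stress)."""
    return brentq(lambda s: ro_strain(s, E, sigma_0, alpha, n) - eps, 0.0, 10 * sigma_0)

# Illustrative (assumed) parameters: E = 200 GPa, sigma_0 = 350 MPa, n = 10,
# and alpha chosen for a 0.2% yield offset (alpha = 0.002 * E / sigma_0).
E, sigma_0, n = 200e3, 350.0, 10          # stresses in MPa
alpha = 0.002 * E / sigma_0

print(f"strain at sigma_0: {ro_strain(sigma_0, E, sigma_0, alpha, n):.4%}")  # elastic + 0.2%
print(f"stress at 1% strain: {ro_stress(0.01, E, sigma_0, alpha, n):.0f} MPa")
```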
Ramberg–Osgood relationship
[ "Physics", "Materials_science", "Engineering" ]
706
[ "Applied and interdisciplinary physics", "Materials science", "Mechanics", "Mechanical engineering", "nan" ]
5,291,341
https://en.wikipedia.org/wiki/Variable%20cycle%20engine
A variable cycle engine (VCE), also referred to as adaptive cycle engine (ACE), is an aircraft jet engine that is designed to operate efficiently under mixed flight conditions, such as subsonic, transonic and supersonic. An advanced technology engine is a turbine engine that allows different turbines to spin at different, individually optimum speeds, instead of at one speed for all. It emerged on larger airplanes, before finding other applications. The next generation of supersonic transport (SST) may require some form of VCE. To reduce aircraft drag at supercruise, SST engines require a high specific thrust (net thrust/airflow) to minimize the powerplant's cross-sectional area. This implies a high jet velocity at supersonic cruise and at take-off, which makes the aircraft noisy. Specific thrust A high specific thrust engine has a high jet velocity by definition, as implied by the approximate equation for net thrust: FN = ṁ·(Vjfe − Va), where ṁ is the intake mass flow rate, Vjfe the fully expanded jet velocity (in the exhaust plume) and Va the aircraft flight velocity. Rearranging the equation, specific thrust is given by: FN/ṁ = Vjfe − Va. So for zero flight velocity, specific thrust is directly proportional to jet velocity. The Rolls-Royce/Snecma Olympus 593 in Concorde had a high specific thrust in supersonic cruise and at dry take-off. This made the engines noisy. The problem was compounded by the need for a modest amount of afterburning (reheat) at take-off (and transonic acceleration). Concepts Tandem fan One SST VCE concept is the tandem fan engine. The engine has two fans, both mounted on the low-pressure shaft, separated by a significant axial gap. The engine operates in series mode while cruising and in parallel mode during take-off, climb-out, approach, and final descent. In series mode, air enters in the front of the engine. After passing through the front fan, the air passes directly into the second fan, so that the engine behaves much like a turbofan. In parallel mode, air leaving the front fan exits the engine through an auxiliary nozzle on the underside of the nacelle, skipping the rear fan. Intakes on each side of the engine open to capture air and send it directly to the rear fan and the rest of the engine. Parallel mode substantially increases the total air accelerated by the engine, lowering the velocity of the air and accompanying noise. In the 1970s, Boeing modified a Pratt & Whitney JT8D to use a tandem fan configuration and successfully demonstrated the switch from series to parallel operation (and vice versa) with the engine running, albeit at partial power. Mid-tandem fan In the mid-tandem fan concept, a high specific flow single stage fan is located between the high pressure (HP) and low pressure (LP) compressors of a turbojet core. Only bypass air passes through the fan. The LP compressor exit flow passes through passages within the fan disc, directly underneath the fan blades. Some bypass air enters the engine via an auxiliary intake. During take-off and approach the engine behaves much like a conventional turbofan, with an acceptable jet noise level (i.e., low specific thrust). However, for supersonic cruise, the fan variable inlet guide vanes and auxiliary intake close to minimize bypass flow and increase specific thrust. In this mode the engine acts more like a 'leaky' turbojet (e.g. the F404). Mixed-flow turbofan ejector In the mixed-flow turbofan with ejector concept, a low bypass ratio engine is mounted in front of a long tube, called an ejector. The ejector reduces noise. It is deployed during take-off and approach.
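The trade behind the ejector, and behind variable cycle engines generally, can be made concrete with the specific-thrust relation from earlier in the article. In the sketch below, the thrust level and the two specific-thrust values are assumed example numbers; the point is only that, for the same net thrust, halving specific thrust halves the jet velocity at brake release but doubles the airflow the engine must move.

```python
# Specific thrust of a jet engine (approximate): F_N / mdot = V_jet - V_flight.
# Lower specific thrust at a given net thrust means more air moved more slowly,
# hence a quieter exhaust.
def jet_velocity(specific_thrust_ms, flight_velocity_ms):
    """Fully expanded jet velocity (m/s) implied by a given specific thrust."""
    return specific_thrust_ms + flight_velocity_ms

def mass_flow(net_thrust_n, specific_thrust_ms):
    """Intake mass flow (kg/s) needed for a given net thrust."""
    return net_thrust_n / specific_thrust_ms

thrust = 150e3                 # N, example take-off thrust (assumed value)
for st in (500.0, 1000.0):     # specific thrust in m/s (N per kg/s), assumed examples
    print(f"specific thrust {st:.0f} m/s: "
          f"jet velocity {jet_velocity(st, 0.0):.0f} m/s at brake release, "
          f"airflow {mass_flow(thrust, st):.0f} kg/s")
```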
The turbofan's exhaust gases entrain additional air into the ejector via an auxiliary air intake, thereby reducing the specific thrust/mean jet velocity of the final exhaust. The mixed-flow design is not particularly efficient at low speed, but is considerably simpler. Three stream The three-stream architecture adds a third, directable air stream. This stream is routed around the core when fuel efficiency is required, or through the core for greater power. Under the Versatile Affordable Advanced Turbine Engines (VAATE) program, the U.S. Air Force and industry partners developed this concept under the Adaptive Versatile Engine Technology (ADVENT) and the follow-on Adaptive Engine Technology Demonstrator (AETD) and Adaptive Engine Transition Program (AETP) programs. Examples include the General Electric XA100 and the Pratt & Whitney XA101, as well as the propulsion system for the Next Generation Air Dominance (NGAD) fighter. Double bypass General Electric developed a variable cycle engine, known as the GE37 or General Electric YF120, for the YF-22/YF-23 fighter aircraft competition in the late 1980s. GE used a double bypass/hybrid fan arrangement, but never disclosed how they exploited the concept. The Air Force instead selected the conventional Pratt & Whitney F119 for what became the Lockheed Martin F-22 Raptor. Other geared turbofans Geared turbofans are also used in the following engines, some still in development: Garrett TFE731, Lycoming ALF 502/LF 507, Pratt & Whitney PW1000G, Turbomeca Astafan, Turbomeca Aspin, and Aviadvigatel PD-18R. Rolls Royce Ultrafan The Rolls Royce Ultrafan is the largest and most efficient engine to allow multiple turbine speeds. The turbines behind the main fan are small and allow more air to pass straight through, while a planetary gearbox "allows the main fan to spin slower and the compressors to spin faster, putting each in their optimal zones." Turboelectric Startup Astro Mechanica is developing what it calls a turboelectric-adaptive jet engine that shifts from turbofan to turbojet to ramjet mode as it accelerates from a standing start to a projected Mach 6. This is achieved by using a dual turbine approach. One turbine acts as a turbogenerator. The second turbine acts as the propulsion unit. The turbogenerator powers an electric motor that controls the compressor of the second turbine. The motor can change speeds to keep the fan turning at the ideal RPM for a specific flight mode. In turbojet and ramjet modes, the inlet is narrowed to compress the air and eliminate bypass. The turbogenerator is commercially available, while the propulsion unit is built by the company. A key innovation is that electric motors have dramatically increased their power density, so that the weight of the motor is no longer prohibitive. Instead of a fixed gearbox, the engine uses an electric motor to turn the turbine(s) behind the fan at an ideal speed for each phase of flight. The company claimed it would support efficient take-off, subsonic, supersonic, and hypersonic speeds. The electric motor is powered by a generator, in turn powered by a turbine. The approach relies on the improved power density of novel electric motors, such as yokeless dual-rotor axial flux motors, that offer far more kW/kg than the conventional designs that were too heavy for such an application. Air flows in through the turbogenerator, producing electric power for the electric motor. The electric motor adaptively controls the propulsion unit, allowing it to behave like a turbofan, turbojet, or ramjet depending on airspeed. 
In effect the engine can operate at any point along the specific impulse (Isp) curve - high Isp at low speed or low Isp at high speed. It is in some respects similar to turbo-electric marine engines that allow propellers to turn at a different speed than the steam turbines that power them. See also Index of aviation articles References Mechanics Jet engines
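As a rough illustration of the net-thrust and specific-thrust relation described in the Specific thrust section above, here is a minimal sketch; it is not from the article, and the mass-flow and velocity numbers are made-up placeholders rather than data for any real engine.

```python
def net_thrust(m_dot, v_jet, v_flight):
    """Approximate net thrust: F_n = m_dot * (V_jfe - V_a)."""
    return m_dot * (v_jet - v_flight)

def specific_thrust(v_jet, v_flight):
    """Specific thrust F_n / m_dot = V_jfe - V_a (units of velocity, m/s)."""
    return v_jet - v_flight

# Illustrative placeholder values only (not real engine data):
m_dot = 200.0     # intake mass flow rate, kg/s
v_jet = 900.0     # fully expanded jet velocity, m/s
v_flight = 0.0    # static (take-off) condition, m/s

print(net_thrust(m_dot, v_jet, v_flight))   # 180000.0 N, i.e. ~180 kN
print(specific_thrust(v_jet, v_flight))     # 900.0 m/s: equals the jet velocity when V_a = 0
```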
Variable cycle engine
[ "Physics", "Technology", "Engineering" ]
1,577
[ "Jet engines", "Mechanics", "Mechanical engineering", "Engines" ]
5,291,424
https://en.wikipedia.org/wiki/Characteristic%20state%20function
The characteristic state function or Massieu's potential in statistical mechanics refers to a particular relationship between the partition function of an ensemble and the thermodynamic potential associated with it. In particular, if the partition function P satisfies P = exp(−βQ) or P = exp(βQ), in which Q is a thermodynamic quantity, then Q is known as the "characteristic state function" of the ensemble corresponding to P. Beta refers to the thermodynamic beta, β = 1/(k_B T). Examples The microcanonical ensemble satisfies Ω = exp(βTS); hence, its characteristic state function is TS. The canonical ensemble satisfies Z = exp(−βA); hence, its characteristic state function is the Helmholtz free energy A. The grand canonical ensemble satisfies Ξ = exp(−βΦ), so its characteristic state function is the grand potential Φ. The isothermal-isobaric ensemble satisfies Δ = exp(−βG), so its characteristic function is the Gibbs free energy G. State functions are those which describe the equilibrium state of a system. References Statistical mechanics
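As an illustration of the canonical-ensemble case, the following minimal sketch (not from the article; the two energy levels are invented) computes a partition function for a hypothetical two-level system and recovers the Helmholtz free energy from A = −k_B T ln Z.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def helmholtz_free_energy(energies, T):
    """A = -k_B * T * ln(Z) for discrete energy levels in the canonical ensemble."""
    beta = 1.0 / (k_B * T)
    Z = sum(math.exp(-beta * E) for E in energies)  # canonical partition function
    return -k_B * T * math.log(Z)

# Hypothetical two-level system: ground state at 0 J, excited state at 1e-21 J.
levels = [0.0, 1.0e-21]
print(helmholtz_free_energy(levels, T=300.0))  # about -2.4e-21 J
```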
Characteristic state function
[ "Physics", "Chemistry" ]
175
[ "Thermodynamics stubs", "Statistical mechanics stubs", "Thermodynamics", "Statistical mechanics", "Physical chemistry stubs" ]
5,291,862
https://en.wikipedia.org/wiki/Hafnium%20carbide
Hafnium carbide (HfC) is a chemical compound of hafnium and carbon. Previously the material was estimated to have a melting point of about 3,900 °C. More recent tests have been able to conclusively prove that the substance has an even higher melting point of 3,958 °C, exceeding those of tantalum carbide and tantalum hafnium carbide, which were both previously estimated to be higher. However, it has a low oxidation resistance, with the oxidation starting at temperatures as low as 430 °C. Experimental testing in 2018 confirmed the higher melting point, yielding a result of 3,982 ± 30 °C, with a small possibility that the melting point may even exceed 4,000 °C. Atomistic simulations conducted in 2015 predicted that a similar compound, hafnium carbonitride (HfCN), could have a melting point exceeding even that of hafnium carbide. Experimental evidence gathered in 2020 confirmed that it did indeed have a higher melting point exceeding 4,000 °C, with more recent ab initio molecular dynamics calculations predicting the HfC0.75N0.22 phase to have a melting point as high as 4,110 ± 62 °C, the highest known for any material. Hafnium carbide is usually carbon deficient and therefore its composition is often expressed as HfCx (x = 0.5 to 1.0). It has a cubic (rock-salt) crystal structure at any value of x. Hafnium carbide powder is obtained by the reduction of hafnium(IV) oxide with carbon at 1,800 to 2,000 °C. A long processing time is required to remove all oxygen. Alternatively, high-purity HfC coatings can be obtained by chemical vapor deposition from a gas mixture of methane, hydrogen, and vaporized hafnium(IV) chloride. Because of the technical complexity and high cost of the synthesis, HfC has very limited use, despite its favorable properties such as high hardness (greater than 9 Mohs) and melting point. The magnetic properties of HfCx change from paramagnetic for x ≤ 0.8 to diamagnetic at larger x. An inverse behavior (dia-paramagnetic transition with increasing x) is observed for TaCx, despite its having the same crystal structure as HfCx. See also Tantalum carbide Tantalum hafnium carbide Hafnium carbonitride References Carbides Hafnium compounds Rock salt crystal structure Refractory materials
Hafnium carbide
[ "Physics" ]
529
[ "Refractory materials", "Materials", "Matter" ]
5,291,923
https://en.wikipedia.org/wiki/Hafnium%28IV%29%20oxide
Hafnium(IV) oxide is the inorganic compound with the formula HfO2. Also known as hafnium dioxide or hafnia, this colourless solid is one of the most common and stable compounds of hafnium. It is an electrical insulator with a band gap of 5.3–5.7 eV. Hafnium dioxide is an intermediate in some processes that give hafnium metal. Hafnium(IV) oxide is quite inert. It reacts with strong acids such as concentrated sulfuric acid and with strong bases. It dissolves slowly in hydrofluoric acid to give fluorohafnate anions. At elevated temperatures, it reacts with chlorine in the presence of graphite or carbon tetrachloride to give hafnium tetrachloride. Structure Hafnia typically adopts the same structure as zirconia (ZrO2). Unlike TiO2, which features six-coordinate Ti in all phases, zirconia and hafnia consist of seven-coordinate metal centres. A variety of other crystalline phases have been experimentally observed, including cubic fluorite (Fm3̄m), tetragonal (P42/nmc), monoclinic (P21/c) and orthorhombic (Pbca and Pnma). It is also known that hafnia may adopt two other orthorhombic metastable phases (space group Pca21 and Pmn21) over a wide range of pressures and temperatures, presumably being the sources of the ferroelectricity observed in thin films of hafnia. Thin films of hafnium oxides deposited by atomic layer deposition are usually crystalline. Because semiconductor devices benefit from having amorphous films present, researchers have alloyed hafnium oxide with aluminum or silicon (forming hafnium silicates), which have a higher crystallization temperature than hafnium oxide. Applications Hafnia is used in optical coatings, and as a high-κ dielectric in DRAM capacitors and in advanced metal–oxide–semiconductor devices. Hafnium-based oxides were introduced by Intel in 2007 as a replacement for silicon oxide as a gate insulator in field-effect transistors. The advantage for transistors is its high dielectric constant: the dielectric constant of HfO2 is 4–6 times higher than that of SiO2. The dielectric constant and other properties depend on the deposition method, composition and microstructure of the material. Hafnium oxide (as well as doped and oxygen-deficient hafnium oxide) attracts additional interest as a possible candidate for resistive-switching memories and CMOS-compatible ferroelectric field effect transistors (FeFET memory) and memory chips. Because of its very high melting point, hafnia is also used as a refractory material in the insulation of such devices as thermocouples, where it can operate at temperatures up to 2500 °C. Multilayered films of hafnium dioxide, silica, and other materials have been developed for use in passive cooling of buildings. The films reflect sunlight and radiate heat at wavelengths that pass through Earth's atmosphere, and can have temperatures several degrees cooler than surrounding materials under the same conditions. References Hafnium compounds High-κ dielectrics Transition metal oxides Ferroelectric materials
Hafnium(IV) oxide
[ "Physics", "Materials_science" ]
711
[ "Physical phenomena", "Ferroelectric materials", "Materials", "Electrical phenomena", "Hysteresis", "Matter" ]
5,292,003
https://en.wikipedia.org/wiki/Indium%28III%29%20oxide
Indium(III) oxide (In2O3) is a chemical compound, an amphoteric oxide of indium. Physical properties Crystal structure Amorphous indium oxide is insoluble in water but soluble in acids, whereas crystalline indium oxide is insoluble in both water and acids. The crystalline form exists in two phases, the cubic (bixbyite type) and rhombohedral (corundum type). Both phases have a band gap of about 3 eV. The parameters of the cubic phase are listed in the infobox. The rhombohedral phase is produced at high temperatures and pressures or when using non-equilibrium growth methods. It has space group R3̄c, No. 167, Pearson symbol hR30, a = 0.5487 nm, b = 0.5487 nm, c = 1.4510 nm, Z = 6 and a calculated density of 7.31 g/cm3. Conductivity and magnetism Thin films of chromium-doped indium oxide (In2−xCrxO3) are a magnetic semiconductor displaying high-temperature ferromagnetism, single-phase crystal structure, and semiconductor behavior with a high concentration of charge carriers. It has possible applications in spintronics as a material for spin injectors. Thin polycrystalline films of indium oxide doped with Zn2+ are highly conductive (conductivity ~10^5 S/m) and even superconductive at liquid helium temperatures. The superconducting transition temperature Tc depends on the doping and film structure and is below 3.3 K. Synthesis Bulk samples can be prepared by heating indium(III) hydroxide or the nitrate, carbonate or sulfate. Thin films of indium oxide can be prepared by sputtering of indium targets in an argon/oxygen atmosphere. They can be used as diffusion barriers ("barrier metals") in semiconductors, e.g. to inhibit diffusion between aluminium and silicon. Monocrystalline nanowires can be synthesized from indium oxide by laser ablation, allowing precise diameter control down to 10 nm. Field-effect transistors have been fabricated from such nanowires. Indium oxide nanowires can serve as sensitive and specific redox protein sensors. The sol–gel method is another way to prepare nanowires. Indium oxide can serve as a semiconductor material, forming heterojunctions with p-InP, n-GaAs, n-Si, and other materials. A layer of indium oxide on a silicon substrate can be deposited from an indium trichloride solution, a method useful for the manufacture of solar cells. Reactions When heated to 700 °C, indium(III) oxide forms In2O (indium(I) oxide, also called indium suboxide); at 2000 °C it decomposes. It is soluble in acids but not in alkali. With ammonia at high temperature indium nitride is formed: In2O3 + 2 NH3 → 2 InN + 3 H2O With K2O and indium metal the compound K5InO4, containing tetrahedral InO4^5− ions, was prepared. Reacting with a range of metal trioxides produces perovskites, for example: In2O3 + Cr2O3 → 2 InCrO3 Applications Indium oxide is used in some types of batteries, thin film infrared reflectors transparent for visible light (hot mirrors), some optical coatings, and some antistatic coatings. In combination with tin dioxide, indium oxide forms indium tin oxide (also called tin-doped indium oxide or ITO), a material used for transparent conductive coatings. In semiconductors, indium oxide can be used as an n-type semiconductor used as a resistive element in integrated circuits. In histology, indium oxide is used as a part of some stain formulations. See also Indium Indium tin oxide Magnetic semiconductor References Indium compounds Semiconductor materials Sesquioxides
Indium(III) oxide
[ "Chemistry" ]
827
[ "Semiconductor materials" ]
5,292,136
https://en.wikipedia.org/wiki/Indium%28III%29%20telluride
Indium(III) telluride (In2Te3) is an inorganic compound. A black solid, it is sometimes described as an intermetallic compound, because it has properties that are metal-like and salt-like. It is a semiconductor that has attracted occasional interest for its thermoelectric and photovoltaic applications. No applications have been implemented commercially, however. Preparation and reactions A conventional route entails heating the elements in a sealed tube: 2 In + 3 Te → In2Te3 Indium(III) telluride reacts with strong acids to produce hydrogen telluride. Further reading References Tellurides Indium compounds Semiconductor materials
Indium(III) telluride
[ "Chemistry" ]
128
[ "Semiconductor materials", "Inorganic compounds", "Inorganic compound stubs" ]
5,292,305
https://en.wikipedia.org/wiki/Lanthanum%20carbide
Lanthanum carbide (LaC2) is a chemical compound. It is being studied in relation to the manufacture of certain types of superconductors and nanotubes. Preparation LaC2 can be prepared by reacting lanthanum oxide, La2O3, with carbon in an electric furnace, or by melting pellets of the elements in an arc furnace. Properties LaC2 reacts with water to form acetylene (C2H2) and a mixture of complex hydrocarbons. LaC2 is a metallic conductor, in contrast to CaC2, which is an insulator. The crystal structure of LaC2 shows that it contains C2 units with a C-C bond length of 130.3 pm, which is longer than the C-C bond length in calcium carbide, 119.2 pm, which is close to that of ethyne. The structure of LaC2 can be described as La3+, C2^2−, (e−), where the extra electron enters the conduction band and antibonding orbitals on the C2^2− anion, increasing the bond length. This is analogous to the bonding present in the nitridoborate CaNiBN. Lanthanum carbide in carbon nanostructures A method for making macroscopic quantities of C60 and the confirmation of the hollow, cage-like structures was published in 1990 by Kratschmer and co-workers. This was followed by the publication of methods for higher fullerenes (C70 and higher). In 1993, scientists discovered how to make a compound which is not as susceptible to moisture and air. They made containers to hold buckminsterfullerenes, or buckyballs; therefore they nicknamed the containers ‘buckyjars’. A few US patents were issued to universities in the mid-1990s; experiments with manufacturing techniques have continued at universities around the globe, including in India, Japan, and Sweden. Lanthanum atoms caged in fullerenes In La@C72, the lanthanum appears to stabilize the C72 carbon cage. A 1998 study by Stevenson et al. verified the presence of La@C72 as well as La2@C72, but empty-cage C72 was absent, based on laser desorption mass spectrometry and UV−vis spectroscopy. A 2008 study by Lu et al. showed that La2@C72 does not adhere to the isolated pentagon rule (IPR), but has two pairs of fused pentagons at each pole of the cage, and that the two La atoms reside close to the two fused-pentagon pairs. This result lends additional support to the idea that the carbon cage is stabilized by the La atoms. In addition to its other reported properties, the magnetic properties of bulk amounts of La@C82 (isolated from various hollow fullerenes) have been tested. Magnetization data for an isolated La@C82 isomer were obtained using a SQUID magnetometer at temperatures ranging from 3 to 300 K. For La@C82 the inverse susceptibility as a function of temperature was observed to follow a Curie-Weiss law. The effective magnetic moment per La@C82 was found to be 0.38 μB. Lanthanum carbide has also shown superconductive properties when converted into a layered lanthanum carbide halide La2C2X2 (X = Br, I). Investigations using high-resolution neutron powder diffraction measurements from room temperature down to 1.5 K showed that it has superconductive properties at about 7.03 K for X = Br and at about 1.7 K for X = I, respectively. References External links MIT Open Courseware 3.091 – Introduction to Solid State Chemistry 2001 US Patent – Carbide nanomaterials. 1997 US Patent – Storage of hydrogen in layered nanostructures. 1996 US Patent – Metal, alloy, or metal carbide nanoparticles and a process for forming same. 1995 US Patent – Magnetic metal or metal carbide nanoparticles and a process for forming same. Carbides Lanthanum compounds Acetylides Electrides
Lanthanum carbide
[ "Chemistry" ]
855
[ "Electron", "Electrides", "Salts" ]
5,292,415
https://en.wikipedia.org/wiki/Lead%28II%29%20fluoride
Lead(II) fluoride is the inorganic compound with the formula PbF2. It is a white solid. The compound is polymorphic: at ambient temperatures it exists in the orthorhombic (PbCl2-type) form, while at high temperatures it is cubic (fluorite-type). Preparation Lead(II) fluoride can be prepared by treating lead(II) hydroxide or lead(II) carbonate with hydrofluoric acid: Pb(OH)2 + 2 HF → PbF2 + 2 H2O Alternatively, it is precipitated by adding hydrofluoric acid to a lead(II) salt solution, or by adding a fluoride salt to a lead salt, such as potassium fluoride to a lead(II) nitrate solution, 2 KF + Pb(NO3)2 → PbF2 + 2 KNO3 or sodium fluoride to a lead(II) acetate solution. 2 NaF + Pb(CH3COO)2 → PbF2 + 2 NaCH3COO It appears as the very rare mineral fluorocronite. Uses Lead(II) fluoride is used in low melting glasses, in glass coatings to reflect infrared rays, in phosphors for television-tube screens, and as a catalyst for the manufacture of picoline. The Muon g−2 experiment uses lead(II) fluoride crystals in conjunction with silicon photomultipliers. It also serves as an oxygen scavenger in high-temperature fluorine chemistry, as plumbous oxide is relatively volatile. References Fluorides Lead(II) compounds Metal halides Phosphors and scintillators Reagents for organic chemistry Glass compositions Fluorite crystal structure
Lead(II) fluoride
[ "Chemistry" ]
371
[ "Luminescence", "Glass chemistry", "Inorganic compounds", "Glass compositions", "Salts", "Phosphors and scintillators", "Metal halides", "Reagents for organic chemistry", "Fluorides" ]
5,292,533
https://en.wikipedia.org/wiki/Diffuser%20%28optics%29
In optics, a diffuser (also called a light diffuser or optical diffuser) is any material that diffuses or scatters light in some manner to transmit soft light. Diffused light can be easily obtained by reflecting light from a white surface, while more compact diffusers may use translucent material, including ground glass, teflon, opal glass, and greyed glass. Types Perfect reflecting diffuser A perfect (reflecting) diffuser (PRD) is a theoretical perfectly white surface with Lambertian reflectance (its brightness appears the same from any angle of view). It does not absorb light, giving back 100% of the light it receives. Reflective diffusers can be easily characterised by scatterometers. Diffractive diffuser/homogenizer A diffractive diffuser is a kind of diffractive optical element (DOE) that exploits the principles of diffraction and refraction. It uses diffraction to manipulate monochromatic light, giving it a specific spatial configuration and intensity profile. Diffractive diffusers are commonly used in commercially available LED illumination systems. Usually, the diffuser material is GaN or fused silica with processed rough surfaces. LED diffusers can be characterized online using scatterometry-based metrology. Applications Diffusion filters may be used to diffuse the light falling on the subject, or placed between the camera and the subject for a hazy effect. Lighting diffusers, transmitting and reflecting A flash diffuser (also called a speedlight diffuser, or shoot-through diffuser) spreads the light from the flash of a camera. A diffusion filter of this type may also be used in front of a non-flash studio light to soften the light on the scene being shot; such filters are used in still photography, in film lighting, and in stage lighting. In the film and stage industry, a diffusion filter may also be called diffusion gel, or just diffusion. This is by analogy to a color gel, which is another type of lighting gel. Shōji are diffusing window/doors. Reflecting diffusers for photography are generally called "reflectors". In effect, the light will not come from one concentrated source (like a spotlight), but rather will spread out, bounce from reflective ceilings and walls, thus getting rid of harsh light, and hard shadows. This is particularly useful for portrait photographers, since harsh light and hard shadows are usually not considered flattering in a portrait. Objective-lens filters "Diffusion filter" may also refer to a translucent photographic filter used for a special effect. When used in front of the camera lens, a diffusion filter softens subjects and generates a dreamy haze. This effect can also be improvised by smearing petroleum jelly on a UV filter or shooting through a nylon stocking. Diffusion filters may be uniform or may have a clear center area to create a vignette of diffused area around the clear center subject. Diffuser materials Silk sheets can also be used as diffusers, and in fact were until the invention of translucent plastics. "Opal" is a common translucent or opalescent diffusion. Recently, photopolymers have been used for making holographic diffusers. Photopolymers offer better performance than other materials and have a large viewing angle. Also, the process of synthesizing photopolymers is much simpler. See also Beam homogenizer Diffuse reflection Integrating sphere Photon diffusion Beauty dish Reflector (photography) Softbox Soft focus References Optical components Photography equipment Optical filters Photographic lighting Stage lighting
Diffuser (optics)
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
716
[ "Glass engineering and science", "Optical components", "Optical filters", "Filters", "Components" ]
5,293,306
https://en.wikipedia.org/wiki/SNP%20array
In molecular biology, SNP array is a type of DNA microarray which is used to detect polymorphisms within a population. A single nucleotide polymorphism (SNP), a variation at a single site in DNA, is the most frequent type of variation in the genome. Around 335 million SNPs have been identified in the human genome, 15 million of which are present at frequencies of 1% or higher across different populations worldwide. Principles The basic principles of SNP array are the same as the DNA microarray. These are the convergence of DNA hybridization, fluorescence microscopy, and solid surface DNA capture. The three mandatory components of the SNP arrays are: An array containing immobilized allele-specific oligonucleotide (ASO) probes. Fragmented nucleic acid sequences of target, labelled with fluorescent dyes. A detection system that records and interprets the hybridization signal. The ASO probes are often chosen based on sequencing of a representative panel of individuals: positions found to vary in the panel at a specified frequency are used as the basis for probes. SNP chips are generally described by the number of SNP positions they assay. Two probes must be used for each SNP position to detect both alleles; if only one probe were used, experimental failure would be indistinguishable from homozygosity of the non-probed allele. Applications An SNP array is a useful tool for studying slight variations between whole genomes. The most important clinical applications of SNP arrays are for determining disease susceptibility and for measuring the efficacy of drug therapies designed specifically for individuals. In research, SNP arrays are most frequently used for genome-wide association studies. Each individual has many SNPs. SNP-based genetic linkage analysis can be used to map disease loci, and determine disease susceptibility genes in individuals. The combination of SNP maps and high density SNP arrays allows SNPs to be used as markers for genetic diseases that have complex traits. For example, genome-wide association studies have identified SNPs associated with diseases such as rheumatoid arthritis and prostate cancer. A SNP array can also be used to generate a virtual karyotype using software to determine the copy number of each SNP on the array and then align the SNPs in chromosomal order. SNPs can also be used to study genetic abnormalities in cancer. For example, SNP arrays can be used to study loss of heterozygosity (LOH). LOH occurs when one allele of a gene is mutated in a deleterious way and the normally-functioning allele is lost. LOH occurs commonly in oncogenesis. For example, tumor suppressor genes help keep cancer from developing. If a person has one mutated and dysfunctional copy of a tumor suppressor gene and his second, functional copy of the gene gets damaged, they may become more likely to develop cancer. Other chip-based methods such as comparative genomic hybridization can detect genomic gains or deletions leading to LOH. SNP arrays, however, have an additional advantage of being able to detect copy-neutral LOH (also called uniparental disomy or gene conversion). Copy-neutral LOH is a form of allelic imbalance. In copy-neutral LOH, one allele or whole chromosome from a parent is missing. This problem leads to duplication of the other parental allele. Copy-neutral LOH may be pathological. For example, say that the mother's allele is wild-type and fully functional, and the father's allele is mutated. 
If the mother's allele is missing and the child has two copies of the father's mutant allele, disease can occur. High density SNP arrays help scientists identify patterns of allelic imbalance. These studies have potential prognostic and diagnostic uses. Because LOH is so common in many human cancers, SNP arrays have great potential in cancer diagnostics. For example, recent SNP array studies have shown that solid tumors such as gastric cancer and liver cancer show LOH, as do non-solid malignancies such as hematologic malignancies, ALL, MDS, CML and others. These studies may provide insights into how these diseases develop, as well as information about how to create therapies for them. Breeding in a number of animal and plant species has been revolutionized by the emergence of SNP arrays. The method is based on the prediction of genetic merit by incorporating relationships among individuals based on SNP array data. This process is known as genomic selection. Crop-specific arrays find use in agriculture. References Further reading Molecular biology Gene expression Bioinformatics Microarrays
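As a toy illustration of why two allele-specific probes are used per SNP position, the following sketch calls a genotype from the two probe intensities; the thresholds, signal scale, and values are invented for illustration and do not describe any real platform.

```python
def call_genotype(signal_a, signal_b, min_signal=0.2):
    """Naive genotype call from allele-A and allele-B probe intensities (0..1 scale)."""
    if signal_a < min_signal and signal_b < min_signal:
        return "no call"   # both probes dark: experimental failure, not homozygosity
    if signal_b < min_signal:
        return "AA"        # only the allele-A probe hybridized
    if signal_a < min_signal:
        return "BB"        # only the allele-B probe hybridized
    return "AB"            # both probes hybridized: heterozygous

print(call_genotype(0.9, 0.05))   # 'AA'
print(call_genotype(0.5, 0.45))   # 'AB'
```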
SNP array
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
1,000
[ "Biochemistry methods", "Genetics techniques", "Biological engineering", "Microtechnology", "Microarrays", "Gene expression", "Bioinformatics", "Molecular biology techniques", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
5,296,675
https://en.wikipedia.org/wiki/Rhodium%28III%29%20oxide
Rhodium(III) oxide (or rhodium sesquioxide) is the inorganic compound with the formula Rh2O3. It is a gray solid that is insoluble in ordinary solvents. Structure Rh2O3 has been found in two major forms. The hexagonal form adopts the corundum structure. It transforms into an orthorhombic structure when heated above 750 °C. Production Rhodium oxide can be produced via several routes: Treating RhCl3 with oxygen at high temperatures. Fusing Rh metal powder with potassium hydrogen sulfate; adding sodium hydroxide then gives hydrated rhodium oxide, which upon heating converts to Rh2O3. Rhodium oxide thin films can be produced by exposing a Rh layer to oxygen plasma. Nanoparticles can be produced by hydrothermal synthesis. Physical properties Rhodium oxide films behave as a fast two-color electrochromic system: reversible yellow ↔ dark green or yellow ↔ brown-purple color changes are obtained in KOH solutions by applying a voltage of about 1 V. Rhodium oxide films are transparent and conductive, like indium tin oxide (ITO), the common transparent electrode, but Rh2O3 has a 0.2 eV lower work function than ITO. Consequently, deposition of rhodium oxide on ITO improves the carrier injection from ITO, thereby improving the electrical properties of organic light-emitting diodes. Catalytic properties Rhodium oxides are catalysts for the hydroformylation of alkenes, N2O production from NO, and the hydrogenation of CO. See also Rhodium Rhodium(IV) oxide Rhodium-platinum oxide References Transition metal oxides Rhodium(III) compounds Sesquioxides Chromism
Rhodium(III) oxide
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
372
[ "Spectrum (physical sciences)", "Chromism", "Materials science", "Smart materials", "Spectroscopy" ]
5,297,589
https://en.wikipedia.org/wiki/Halpin%E2%80%93Tsai%20model
The Halpin–Tsai model is a mathematical model for predicting the elasticity of a composite material based on the geometry and orientation of the filler and the elastic properties of the filler and matrix. The model is based on the self-consistent field method, although it is often considered to be empirical. See also Cadec-online.com implements the Halpin–Tsai model among others. References J. C. Halpin, Effect of Environmental Factors on Composite Materials, US Air Force Material Laboratory, Technical Report AFML-TR-67-423, June 1969 J. C. Halpin and J. L. Kardos, Halpin-Tsai equations: A review, Polymer Engineering and Science, 1976, v16, N5, pp 344–352 Halpin-Tsai model on about.com Composite materials Continuum mechanics Materials science
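The Halpin–Tsai equations themselves are not quoted in the article; for illustration, the sketch below uses the commonly cited form P/Pm = (1 + ξηVf)/(1 − ηVf) with η = (Pf/Pm − 1)/(Pf/Pm + ξ), where Pf and Pm are the filler and matrix properties, Vf is the filler volume fraction, and ξ is a geometry-dependent shape factor; the material values are placeholders.

```python
def halpin_tsai(P_f, P_m, V_f, xi):
    """Effective composite property from the commonly cited Halpin-Tsai form.

    P_f, P_m : filler and matrix property (e.g. Young's modulus, GPa)
    V_f      : filler volume fraction, between 0 and 1
    xi       : empirical shape/geometry factor
    """
    eta = (P_f / P_m - 1.0) / (P_f / P_m + xi)
    return P_m * (1.0 + xi * eta * V_f) / (1.0 - eta * V_f)

# Placeholder values: stiff glass-like fibres (72 GPa) in an epoxy-like matrix (3 GPa).
print(halpin_tsai(P_f=72.0, P_m=3.0, V_f=0.3, xi=2.0))  # about 6.25 GPa
```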
Halpin–Tsai model
[ "Physics", "Materials_science", "Engineering" ]
171
[ "Materials science stubs", "Applied and interdisciplinary physics", "Continuum mechanics", "Composite materials", "Materials science", "Classical mechanics", "Materials", "nan", "Matter" ]
5,297,698
https://en.wikipedia.org/wiki/Meltwater%20pulse%201A
Meltwater pulse 1A (MWP1a) is the name used by Quaternary geologists, paleoclimatologists, and oceanographers for a period of rapid post-glacial sea level rise, between 13,500 and 14,700 years ago, during which the global sea level rose sharply in about 400–500 years. Meltwater pulse 1A is also known as catastrophic rise event 1 (CRE1) in the Caribbean Sea. The rates of sea level rise associated with meltwater pulse 1A are the highest known rates of post-glacial, eustatic sea level rise. Meltwater pulse 1A is also the most widely recognized and least disputed of the named, postglacial meltwater pulses. Other named, postglacial meltwater pulses are known most commonly as meltwater pulse 1A0 (meltwater pulse 19ka), meltwater pulse 1B, meltwater pulse 1C, meltwater pulse 1D, and meltwater pulse 2. It and these other periods of rapid sea level rise are known as meltwater pulses because the inferred cause of them was the rapid release of meltwater into the oceans from the collapse of continental ice sheets. Sea level and timing Meltwater pulse 1A occurred in a period of rising sea level and rapid climate change at the end of the last ice age, known as Termination I. Several researchers have narrowed the period of the pulse to between 13,500 and 14,700 years ago with its peak at about 13,800 years ago. The start of this meltwater event coincides with or closely follows the abrupt onset of the Bølling-Allerød (B-A) interstadial and warming in the NorthGRIP ice core in Greenland at 14,600 years ago. During meltwater pulse 1A, sea level is estimated to have risen at a rate much greater than the current rate of sea level rise. Source(s) of meltwater pulse 1A The source of meltwaters for meltwater pulse 1A and the path they took remains a matter of continuing controversy. The debate is centered around whether the predominant additions to the sea level rise came from the Antarctic Ice Sheet, the Laurentide Ice Sheet, or the Fennoscandian and Barents Sea Ice Sheets. Antarctic Ice Sheet The technique of sea-level fingerprinting has been used to argue that a major contribution to this meltwater pulse came from Antarctica. The magnitude of eustatic sea level rise during meltwater pulse 1A is a significant indicator of its sources. If the eustatic sea level rise was large and closer to the upper estimates than the lower ones, a significant fraction of the meltwater that caused it likely came from the Antarctic Ice Sheet. A substantial contribution to meltwater pulse 1A from the Antarctic Ice Sheet over roughly 350 years could have been caused by Southern Ocean warming. With respect to the Antarctic Ice Sheet, research by Weber and others constructed a well-dated, high-resolution record of the discharge of icebergs from various parts of the Antarctic Ice Sheet for the past 20,000 years. They constructed this record from variations in the amount of iceberg-rafted debris versus time and other environmental proxies in two cores taken from the ocean bottom within Iceberg Alley of the Weddell Sea. The sediments within Iceberg Alley provide a spatially integrated signal of the variability of the discharge of icebergs into the marine waters by the Antarctic Ice Sheet because it is a confluence zone in which icebergs calved from the entire Antarctic Ice Sheet drift along currents, converge, and exit the Weddell Sea to the north into the Scotia Sea. 
Between 20,000 and 9,000 years ago, this study documented eight well-defined periods of increased iceberg calving and discharge from various parts of the Antarctic Ice Sheet. The highest period of discharge of icebergs recorded in both cores is known as AID6 (Antarctic Iceberg Discharge event 6). AID6 has a relatively abrupt onset at about 15,000 years ago. The peak interval of greatest iceberg discharge and flux from the Antarctic Ice Sheet for AID6 is between about 14,800 and 14,400 years ago. The peak discharge is followed by a gradual decline in flux until 13,900 years ago, when it abruptly ends. The peak period of iceberg discharge for AID6 is synchronous with the onset of the Bølling interstadial in the Northern Hemisphere and with meltwater pulse 1A. Weber and others estimated that the flux of icebergs from Antarctica during AID6 contributed substantially (at least 50%) to the global mean sea-level rise that occurred during meltwater pulse 1A. These icebergs came from the widespread retreat of the Antarctic Ice Sheet at this time, including from the Mac Robertson Land region of the East Antarctic Ice Sheet; the Ross Sea sector of the West Antarctic Ice Sheet; and the Antarctic Peninsula Ice Sheet. Laurentide Ice Sheet On the other hand, other studies have argued for the Laurentide Ice Sheet in North America being the dominant source of this meltwater pulse. As mentioned previously, the source of the contribution to the meltwater pulse can be deduced from the magnitude of sea level rise; a eustatic sea level rise near the lower end of the estimates could plausibly be explained solely by a North American source. Ice sheet modelling work suggests that the abrupt onset of the Bølling-Allerød (B-A) may have triggered the separation of the Cordilleran ice sheet and Laurentide Ice Sheet (and the opening of the ice-free corridor), producing a major contribution to meltwater pulse 1A from the North American ice sheet. Mississippi River meltwater flood events In the case of the Mississippi River, the sediments of the Louisiana continental shelf and slope, including the Orca Basin, within the Gulf of Mexico preserve a variety of paleoclimate and paleohydrologic proxies. They have been used to reconstruct both the duration and discharge of Mississippi River meltwater events and superfloods for the Late glacial and postglacial periods, including the time of meltwater pulse 1A. The chronology of flooding events found by the study of numerous cores on the Louisiana continental shelf and slope is in agreement with the timing of the meltwater pulses. For example, meltwater pulse 1A in the Barbados coral record matches quite well with a group of two Mississippi River meltwater flood events, MWF-3 (about 12,600 radiocarbon years BP) and MWF-4 (about 11,900 radiocarbon years BP). In addition, meltwater pulse 1B in the Barbados coral record matches a cluster of four Mississippi River superflood events, MWF-5, that occurred between 9,900 and 9,100 radiocarbon years BP. The discharge of water coursing down the Mississippi River during meltwater flood MWF-4 is estimated to have been 0.15 sverdrups (million cubic meters per second). This discharge is roughly equivalent to 50% of the global discharge during meltwater pulse 1A. This research also shows that the Mississippi meltwater flood MWF-4 occurred during the Allerød oscillation and had largely stopped before the beginning of the Younger Dryas stadial. 
The same research found an absence of meltwater floods discharging into the Gulf of Mexico from the Mississippi River for a period of time following meltwater flood MWF-4, known as the cessation event, that corresponds with the Younger Dryas stadial. Prior to Mississippi River meltwater flood MWF-3, two other Mississippi River meltwater floods, MWF-2 and MWF-1, have been recognized. The first of these, MWF-1, consists of three separate, but closely spaced events that occurred between 16,000 and 15,450 (MWF-1a), 15,000 and 14,700 (MWF-1b), and 14,460 and 14,000 (MWF-1c) radiocarbon years BP. Each of these flood events had a discharge of about 0.08 to 0.09 sverdrups (million cubic meters per second). Collectively, they appear to be associated with meltwater pulse 1A0. Later, one of the largest of the Mississippi River meltwater floods, MWF-2, occurred between 13,600 and 13,200 radiocarbon years BP. During its 400 radiocarbon year duration, the maximum discharge of Mississippi River meltwater flood MWF-2 is estimated to have been between 0.15 and 0.19 sverdrups. Despite the large size of Mississippi River meltwater flood MWF-2, it is not known to be associated with an identifiable meltwater pulse in any sea level record. Eurasian Ice Sheet Although the Eurasian Ice Sheet has previously been considered a negligible contributor to meltwater pulse 1A, some research suggests it may have contributed around half of the sea level rise. An ice volume of 4.5–7.9 metres of sea level equivalent was lost over half a millennium during the transition into the Bølling interstadial, with around 3.3–6.7 metres being lost from the ice sheet during the peak warming. Another study estimated that 4.6 metres of sea level rise came from the melting of the Fennoscandian Ice Sheet. See also Deglaciation References External links Gornitz, V. (2007) Sea Level Rise, After the Ice Melted and Today. Science Briefs, NASA's Goddard Space Flight Center. (January 2007) Gornitz, V. (2012) The Great Ice Meltdown and Rising Seas: Lessons for Tomorrow. Science Briefs, NASA's Goddard Space Flight Center. (June 2012) Liu, J.P. (2004) Western Pacific Postglacial Sea-level History. River, Delta, Sea Level Change, and Ocean Margin Research Center, Marine, Earth and Atmospheric Sciences, North Carolina State University, Raleigh, NC. Glaciology Oceanography Paleoclimatology Sea level
Meltwater pulse 1A
[ "Physics", "Environmental_science" ]
2,042
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
3,931,817
https://en.wikipedia.org/wiki/Strain%20hardening%20exponent
The strain hardening exponent (also called the strain hardening index), usually denoted n, is a measured parameter that quantifies the ability of a material to become stronger due to strain hardening. Strain hardening (work hardening) is the process by which a material's load-bearing capacity increases during plastic (permanent) strain, or deformation. This characteristic is what sets ductile materials apart from brittle materials. The uniaxial tension test is the primary experimental method used to directly measure a material's stress–strain behavior, providing valuable insights into its strain-hardening behavior. The strain hardening exponent is sometimes regarded as a constant and occurs in forging and forming calculations as well as in the formula known as Hollomon's equation (after John Herbert Hollomon Jr., who originally posited it as): σ = K ε^n, where σ represents the applied true stress on the material, ε is the true strain, and K is the strength coefficient. The value of the strain hardening exponent lies between 0 and 1, with a value of 0 implying a perfectly plastic solid and a value of 1 representing a perfectly elastic solid. Most metals have an n-value between 0.10 and 0.50. In one study, strain hardening exponent values extracted from tensile data from 58 steel pipes from natural gas pipelines were found to range from 0.08 to 0.25, with the lower end of the range dominated by high-strength low alloy steels and the upper end of the range mostly normalized steels. Tabulation References External links More complete picture about the strain hardening exponent in the stress–strain curve on www.key-to-steel.com Mechanical engineering Solid mechanics
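As a small illustration of Hollomon's equation, this sketch (placeholder material constants, not taken from the article or the cited study) evaluates the flow stress and then recovers n as the slope of log σ versus log ε:

```python
import math

def hollomon_stress(strain, K, n):
    """Hollomon's equation: true stress = K * (true strain)**n."""
    return K * strain ** n

def estimate_n(strains, stresses):
    """Estimate the strain hardening exponent as the slope of log(stress) vs log(strain)."""
    xs = [math.log(e) for e in strains]
    ys = [math.log(s) for s in stresses]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Placeholder constants loosely typical of a low-carbon steel: K = 530 MPa, n = 0.26.
strains = [0.02, 0.05, 0.10, 0.15, 0.20]
stresses = [hollomon_stress(e, K=530.0, n=0.26) for e in strains]
print(estimate_n(strains, stresses))  # recovers ~0.26
```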
Strain hardening exponent
[ "Physics", "Engineering" ]
348
[ "Applied and interdisciplinary physics", "Solid mechanics", "Mechanics", "Mechanical engineering" ]
3,931,971
https://en.wikipedia.org/wiki/Gr%C3%BCneisen%20parameter
In condensed matter physics, the Grüneisen parameter γ is a dimensionless thermodynamic parameter named after German physicist Eduard Grüneisen, whose original definition was formulated in terms of the phonon nonlinearities. Because of the equivalences of many properties and derivatives within thermodynamics (e.g. see Maxwell relations), there are many formulations of the Grüneisen parameter which are equally valid, leading to numerous interpretations of its meaning. Some formulations for the Grüneisen parameter include: γ = V (∂P/∂E)_V = −(∂ ln T/∂ ln V)_S = α K_S/(ρ c_P) = α K_T/(ρ c_V) = α v_s²/c_P, where V is volume, c_P and c_V are the principal (i.e. per-mass) heat capacities at constant pressure and volume, E is energy, S is entropy, α is the volume thermal expansion coefficient, K_S and K_T are the adiabatic and isothermal bulk moduli, v_s is the speed of sound in the medium, and ρ is density. The Grüneisen parameter is dimensionless. Grüneisen constant for perfect crystals with pair interactions The Grüneisen constant of a perfect crystal with pair interactions in d-dimensional space can be written in closed form in terms of the interatomic potential, the equilibrium interatomic distance, and the space dimensionality d. Relations between the Grüneisen constant and the parameters of the Lennard-Jones, Morse, and Mie potentials have been tabulated. The expression for the Grüneisen constant of a 1D chain with the Mie potential exactly coincides with the results of MacDonald and Roy. Using the relation between the Grüneisen parameter and the interatomic potential one can derive a simple necessary and sufficient condition for negative thermal expansion in perfect crystals with pair interactions. A proper description of the Grüneisen parameter represents a stringent test for any type of interatomic potential. Microscopic definition via the phonon frequencies The physical meaning of the parameter can also be extended by combining thermodynamics with a reasonable microphysics model for the vibrating atoms within a crystal. When the restoring force acting on an atom displaced from its equilibrium position is linear in the atom's displacement, the frequencies ωi of individual phonons do not depend on the volume of the crystal or on the presence of other phonons, and the thermal expansion (and thus γ) is zero. When the restoring force is non-linear in the displacement, the phonon frequencies ωi change with the volume V. The Grüneisen parameter of an individual vibrational mode i can then be defined as (the negative of) the logarithmic derivative of the corresponding frequency ωi: γ_i = −(∂ ln ωi)/(∂ ln V) = −(V/ωi)(∂ωi/∂V). Relationship between microscopic and thermodynamic models Using the quasi-harmonic approximation for atomic vibrations, the macroscopic Grüneisen parameter (γ) can be related to the description of how the vibrational frequencies (phonons) within a crystal are altered with changing volume (i.e. the γ_i's). For example, one can show that γ = α K_T V/C_V if one defines γ as the weighted average γ = Σ_i γ_i c_{V,i} / Σ_i c_{V,i}, where the c_{V,i}'s are the partial vibrational mode contributions to the heat capacity, such that C_V = Σ_i c_{V,i}. Proof To prove this relation, note that the claim is equivalent to Σ_i γ_i c_{V,i} = α K_T V. Left-hand side (by definition): Σ_i γ_i c_{V,i} = −Σ_i (V/ωi)(∂ωi/∂V) c_{V,i}. Right-hand side (by definition, using the cyclic relation between the derivatives of P, V and T): α K_T V = −V (∂V/∂T)_P (∂P/∂V)_T = V (∂P/∂T)_V. Furthermore, since P = −(∂F/∂V)_T, one has V (∂P/∂T)_V = −V ∂²F/∂T∂V. This derivative is straightforward to determine in the quasi-harmonic approximation, as only the ωi are V-dependent: with the quasi-harmonic free energy F = U_0(V) + Σ_i [ħωi/2 + k_B T ln(1 − e^{−ħωi/(k_B T)})], differentiation yields −V ∂²F/∂T∂V = −Σ_i (V/ωi)(∂ωi/∂V) c_{V,i}, which equals the left-hand side and completes the proof. See also Debye model Negative thermal expansion Mie–Grüneisen equation of state External links Definition from Eric Weisstein's World of Physics References Condensed matter physics Dimensionless numbers of thermodynamics
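To illustrate the weighted-average relation between the mode Grüneisen parameters γ_i and the macroscopic value, here is a minimal sketch that weights each assumed γ_i by its quasi-harmonic (Einstein-like) heat-capacity contribution; the mode frequencies and γ_i values are invented for illustration.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J*s

def mode_heat_capacity(omega, T):
    """Quasi-harmonic heat-capacity contribution c_V,i of a single mode of frequency omega."""
    x = hbar * omega / (k_B * T)
    return k_B * x**2 * math.exp(x) / (math.exp(x) - 1.0) ** 2

def macroscopic_gruneisen(omegas, gammas, T):
    """Weighted average: gamma = sum(gamma_i * c_V,i) / sum(c_V,i)."""
    weights = [mode_heat_capacity(w, T) for w in omegas]
    return sum(g * c for g, c in zip(gammas, weights)) / sum(weights)

# Three hypothetical phonon modes (angular frequencies, rad/s) with assumed gamma_i values.
omegas = [2.0e13, 4.0e13, 8.0e13]
gammas = [1.2, 1.6, 2.1]
print(macroscopic_gruneisen(omegas, gammas, T=300.0))
```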
Grüneisen parameter
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
713
[ "Thermodynamic properties", "Physical quantities", "Dimensionless numbers of thermodynamics", "Phases of matter", "Materials science", "Condensed matter physics", "Matter" ]
3,932,275
https://en.wikipedia.org/wiki/List%20of%20physical%20constants
The constants listed here are known values of physical constants expressed in SI units; that is, physical quantities that are generally believed to be universal in nature and thus are independent of the unit system in which they are measured. Many of these are redundant, in the sense that they obey a known relationship with other physical constants and can be determined from them. Table of physical constants Uncertainties While the values of the physical constants are independent of the system of units in use, each uncertainty as stated reflects our lack of knowledge of the corresponding value as expressed in SI units, and is strongly dependent on how those units are defined. For example, the atomic mass constant is exactly known when expressed using the dalton (its value is exactly 1 Da), but the kilogram is not exactly known when using these units, the opposite of when expressing the same quantities using the kilogram. Technical constants Some of these constants are of a technical nature and do not give any true physical property, but they are included for convenience. Such a constant gives the correspondence ratio of a technical dimension with its corresponding underlying physical dimension. These include the Boltzmann constant , which gives the correspondence of the dimension temperature to the dimension of energy per degree of freedom, and the Avogadro constant , which gives the correspondence of the dimension of amount of substance with the dimension of count of entities (the latter formally regarded in the SI as being dimensionless). By implication, any product of powers of such constants is also such a constant, such as the molar gas constant . See also List of mathematical constants Mathematical constant Physical constant List of particles Notes References Constants
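As a small example of the point about products of such constants, the molar gas constant follows directly from the exactly defined values of the Avogadro and Boltzmann constants (the values below are the exact SI defining values):

```python
k_B = 1.380649e-23    # Boltzmann constant, J/K (exact)
N_A = 6.02214076e23   # Avogadro constant, 1/mol (exact)

R = N_A * k_B         # molar gas constant, J/(mol*K)
print(R)              # 8.31446261815324
```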
List of physical constants
[ "Physics", "Mathematics" ]
334
[ "Physical constants", "Quantity", "Physical quantities" ]
33,222,076
https://en.wikipedia.org/wiki/Hidden%20algebra
Hidden algebra provides a formal semantics for use in the field of software engineering, especially for concurrent distributed object systems. It supports correctness proofs. Hidden algebra was studied by Joseph Goguen. It handles features of large software-based systems, including concurrency, distribution, nondeterminism, and local states. It also handles object-oriented features like classes, subclasses (inheritance), attributes, and methods. Hidden algebra generalizes process algebra and transition system approaches. References External links Hidden Algebra Tutorial Abstract algebra Universal algebra Logical calculi Concurrent computing Distributed computing
Hidden algebra
[ "Mathematics", "Technology", "Engineering" ]
118
[ "Computing platforms", "IT infrastructure", "Algebra", "Mathematical logic", "Logical calculi", "Universal algebra", "Concurrent computing", "Software engineering stubs", "Software engineering", "Fields of abstract algebra", "Abstract algebra" ]
24,582,087
https://en.wikipedia.org/wiki/LAR-160
The LAR-160 is a light artillery rocket (hence LAR) with a 160 mm calibre, a minimum range of 12 km and a maximum range of 45 km, fired from a multiple rocket launcher. Each standard launcher holds two 13-rocket Launch Pod Containers (LPCs) for truck or trailer mounting, 18-rocket LPCs for medium armored vehicles (AMX-13, TAM) and 26-rocket LPCs for mounting on an MBT chassis. A light version is also manufactured which can be carried by helicopters and towed behind vehicles such as a HMMWV. Development The LAR-160 was designed in the late 1970s by Israel Military Industries; it was adopted by the Israeli Defense Forces in 1983. Armament The LAR-160's rocket has undergone continuous development, resulting in the Mk. I, Mk. II and Mk. IV rockets. Mk. I Rocket The Mk. I rocket is 3.4 m long and has a diameter of 160 mm, fueled by solid propellant. The Mk. I weighs 100 kg and has a 40 kg HE-COFRAM (High Explosive-Controlled Fragmentation) warhead which is activated by an impact fuze or proximity fuze. The Mk. I was first used by the Venezuelan Army on an AMX-13 hull. Mk. II Rocket The Mk. II rocket weighs 110 kg and has a 46 kg warhead which is either HE-COFRAM or a cluster warhead containing 104 CL-3022-S4 AP/AM submunitions. A remotely set electronic fuze opens the canister at the appropriate height to give area coverage of about 31,400 m2. All 26 rockets can be fired in under 60 seconds and re-loaded in under five minutes from a conventional truck with a 15 t/m crane. Launcher The LAR-160 incorporates a modern command, control, communications and intelligence system called ACCS, which has a total interface capability to all common artillery elements including a meteorological unit and forward observers, as well as mapping, GPS and other items. Elevation and traverse of the launchers are performed by an electrohydraulic system, which is backed up by a manual system. When the system is fitted on a wheeled chassis, two hydraulically operated stabilisers are lowered to the ground to provide a more stable firing platform. Service history The system was used extensively by the Georgian Army against Russian and South Ossetian forces in the 2008 Russo-Georgian War; the systems proved to be extremely effective against Ossetian static targets and large Russian convoys, and LAR systems were credited with destroying many Russian trucks and disabling armored vehicles. Romania uses a domestic version of the LAR-160, called LAROM, and Argentina uses a domestic version as well, called TAM VCLC. The HALO Trust reported that Azerbaijan employed the LAR-160 to drop M095 rocket-dispensed cluster bombs around Armenian-populated settlements in the NKR during the 2016 Armenian–Azerbaijani clashes. Human Rights Watch verified Azerbaijani multiple usage of the LAR-160 to drop M095 rocket-dispensed cluster bombs against populated settlements in Nagorno-Karabakh in October 2020. Operators – 1 (TAM VCLC version) – 30 – 10 – 12 Delivered from Israel in 2007. – (ACCULAR-122 version on MLRS hull) – (LAROM Version) – 20 (mounted on AMX-13 hull) See also References External links LAR-160 on IMI website Lynx launcher vehicle on IMI website Rocket artillery Multiple rocket launchers of Israel Modular rocket launchers Cluster munitions Military equipment introduced in the 1980s
LAR-160
[ "Engineering" ]
743
[ "Modular design", "Modular rocket launchers" ]
24,583,515
https://en.wikipedia.org/wiki/Automatic%20route%20selection
Automatic route selection is a private branch exchange (PBX) feature that allows a system to route a telephone call over the most appropriate carrier and service offering based on factors such as the type of call (i.e., local, local long distance, etc.), the user's class of service (CoS), the time of day, and the day of the week (e.g., workday, weekend, or holiday). ARS can be used to route the landline leg of a call through a cellular network, if it offers lower rates. ARS is of greatest value in a liberalized or deregulated telecom environment where there are multiple competing carriers and rate plans available. ARS generally uses a lookup table rather than parsing a hierarchy of dialed telephone numbers and calculating a least cost route. ARS is also known as Least-cost routing (LCR). References Your Dictionary Telecommunications engineering
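The article notes that ARS typically works from a lookup table keyed on factors such as the call type, class of service, and time of day. A minimal sketch of that idea follows; the table entries and carrier names are invented for illustration and are not taken from any real PBX.

```python
# Hypothetical routing table: (call_type, class_of_service, day_type) -> ordered carrier list.
ROUTES = {
    ("local",         "standard", "workday"): ["carrier_a"],
    ("long_distance", "standard", "workday"): ["carrier_b", "carrier_a"],
    ("long_distance", "standard", "weekend"): ["carrier_c", "carrier_b"],
}

def select_route(call_type, cos, day_type):
    """Return the preferred carriers for a call, falling back to a default list."""
    return ROUTES.get((call_type, cos, day_type), ["default_carrier"])

print(select_route("long_distance", "standard", "weekend"))  # ['carrier_c', 'carrier_b']
```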
Automatic route selection
[ "Engineering" ]
189
[ "Electrical engineering", "Telecommunications engineering" ]
24,585,634
https://en.wikipedia.org/wiki/Lagrangian%20system
In mathematics, a Lagrangian system is a pair (Y, L), consisting of a smooth fiber bundle Y → X and a Lagrangian density L, which yields the Euler–Lagrange differential operator acting on sections of Y → X. In classical mechanics, many dynamical systems are Lagrangian systems. The configuration space of such a Lagrangian system is a fiber bundle Q → ℝ over the time axis ℝ; in particular, Q = ℝ × M if a reference frame is fixed. In classical field theory, all field systems are the Lagrangian ones. Lagrangians and Euler–Lagrange operators A Lagrangian density L (or, simply, a Lagrangian) of order r is defined as an n-form, n = dim X, on the r-order jet manifold J^rY of Y. A Lagrangian L can be introduced as an element of the variational bicomplex of the differential graded algebra of exterior forms on jet manifolds of Y → X. The coboundary operator of this bicomplex contains the variational operator δ which, acting on L, defines the associated Euler–Lagrange operator δL. In coordinates Given bundle coordinates (x^λ, y^i) on a fiber bundle Y and the adapted coordinates (x^λ, y^i, y^i_Λ), with multi-indices Λ = (λ_1, ..., λ_k), |Λ| = k ≤ r, on jet manifolds J^rY, a Lagrangian L and its Euler–Lagrange operator read L = ℒ(x^λ, y^i, y^i_Λ) d^n x, δL = δ_iℒ dy^i ∧ d^n x, δ_iℒ = ∂_iℒ + Σ_{|Λ|≥1} (−1)^{|Λ|} d_Λ ∂^Λ_i ℒ, where d_Λ = d_{λ_1} ··· d_{λ_k} denote the total derivatives d_λ = ∂_λ + y^i_λ ∂_i + y^i_{λμ} ∂^μ_i + ···. For instance, a first-order Lagrangian and its second-order Euler–Lagrange operator take the form L = ℒ(x^λ, y^i, y^i_λ) d^n x, δ_iℒ = ∂_iℒ − d_λ ∂^λ_i ℒ. Euler–Lagrange equations The kernel of an Euler–Lagrange operator provides the Euler–Lagrange equations δ_iℒ = 0. Cohomology and Noether's theorems Cohomology of the variational bicomplex leads to the so-called variational formula, which decomposes dL into the Euler–Lagrange term δL and the total differential d_H Θ_L of a Lepage equivalent Θ_L of L. Noether's first theorem and Noether's second theorem are corollaries of this variational formula. Graded manifolds Extended to graded manifolds, the variational bicomplex provides a description of graded Lagrangian systems of even and odd variables. Alternative formulations In a different way, Lagrangians, Euler–Lagrange operators and Euler–Lagrange equations are introduced in the framework of the calculus of variations. Classical mechanics In classical mechanics equations of motion are first and second order differential equations on a smooth manifold or various fiber bundles over it. A solution of the equations of motion is called a motion. See also Lagrangian mechanics Calculus of variations Noether's theorem Noether identities Jet bundle Jet (mathematics) Variational bicomplex References External links Differential operators Calculus of variations Dynamical systems Lagrangian mechanics
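As a concrete illustration of a first-order Lagrangian and its second-order Euler–Lagrange operator in the notation above, consider the standard example of a free real scalar field on Minkowski space (this example is not taken from the article itself):

```latex
% First-order Lagrangian density for a free real scalar field \phi,
% with \phi_\lambda = \partial_\lambda\phi and Minkowski metric \eta^{\lambda\mu}:
L = \Big( \tfrac{1}{2}\,\eta^{\lambda\mu}\phi_\lambda\phi_\mu - \tfrac{1}{2}\,m^2\phi^2 \Big)\, d^4x

% Applying the second-order Euler--Lagrange operator
% \delta_\phi\mathcal{L} = \partial_\phi\mathcal{L} - d_\lambda\,\partial^\lambda_\phi\mathcal{L}:
\delta_\phi\mathcal{L} = -m^2\phi - \eta^{\lambda\mu} d_\lambda\phi_\mu = 0
\quad\Longleftrightarrow\quad
\eta^{\lambda\mu}\partial_\lambda\partial_\mu\phi + m^2\phi = 0 ,
% i.e. the Klein--Gordon equation.
```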
Lagrangian system
[ "Physics", "Mathematics" ]
532
[ "Mathematical analysis", "Lagrangian mechanics", "Classical mechanics", "Mechanics", "Differential operators", "Dynamical systems" ]
24,586,129
https://en.wikipedia.org/wiki/Gauge%20symmetry%20%28mathematics%29
In mathematics, any Lagrangian system generally admits gauge symmetries, though it may happen that they are trivial. In theoretical physics, the notion of gauge symmetries depending on parameter functions is a cornerstone of contemporary field theory. A gauge symmetry of a Lagrangian is defined as a differential operator on some vector bundle taking its values in the linear space of (variational or exact) symmetries of . Therefore, a gauge symmetry of depends on sections of and their partial derivatives. For instance, this is the case of gauge symmetries in classical field theory. Yang–Mills gauge theory and gauge gravitation theory exemplify classical field theories with gauge symmetries. Gauge symmetries possess the following two peculiarities. Being Lagrangian symmetries, gauge symmetries of a Lagrangian satisfy Noether's first theorem, but the corresponding conserved current takes a particular superpotential form, where the first term vanishes on solutions of the Euler–Lagrange equations and the second one is a boundary term, where is called a superpotential. In accordance with Noether's second theorem, there is a one-to-one correspondence between the gauge symmetries of a Lagrangian and the Noether identities which the Euler–Lagrange operator satisfies. Consequently, gauge symmetries characterize the degeneracy of a Lagrangian system. Note that, in quantum field theory, a generating functional may fail to be invariant under gauge transformations, and gauge symmetries are then replaced with BRST symmetries, depending on ghosts and acting both on fields and ghosts. See also Gauge theory (mathematics) Lagrangian system Noether identities Gauge theory Gauge symmetry Yang–Mills theory Gauge group (mathematics) Gauge gravitation theory Notes References Daniel, M., Viallet, C., The geometric setting of gauge symmetries of the Yang–Mills type, Rev. Mod. Phys. 52 (1980) 175. Eguchi, T., Gilkey, P., Hanson, A., Gravitation, gauge theories and differential geometry, Phys. Rep. 66 (1980) 213. Gotay, M., Marsden, J., Stress-energy-momentum tensors and the Belinfante–Rosenfeld formula, Contemp. Math. 132 (1992) 367. Marathe, K., Martucci, G., The Mathematical Foundation of Gauge Theories (North Holland, 1992). Fatibene, L., Ferraris, M., Francaviglia, M., Noether formalism for conserved quantities in classical gauge field theories, J. Math. Phys. 35 (1994) 1644. Gomis, J., Paris, J., Samuel, S., Antibracket, antifields and gauge theory quantization, Phys. Rep. 295 (1995) 1; arXiv: hep-th/9412228. Giachetta, G., Mangiarotti, L., Sardanashvily, G., On the notion of gauge symmetries of generic Lagrangian field theory, J. Math. Phys. 50 (2009) 012903; arXiv: 0807.3003. Giachetta, G., Mangiarotti, L., Sardanashvily, G., Advanced Classical Field Theory (World Scientific, 2009). Symmetry Gauge theories
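In the notation common in the literature on Lagrangian field theory (and not fixed by this article), the superpotential form of the conserved current mentioned above is usually written
\[
J^\mu = W^\mu + d_\nu U^{\nu\mu}, \qquad W^\mu \approx 0 \ \text{on solutions of the Euler–Lagrange equations}, \qquad U^{\nu\mu} = -U^{\mu\nu},
\]
where \(d_\nu\) is the total derivative and \(U^{\nu\mu}\) is the superpotential; integrating \(J^0\) over a spatial volume then reduces the conserved charge, on shell, to a surface integral of the superpotential.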
Gauge symmetry (mathematics)
[ "Physics", "Mathematics" ]
740
[ "Geometry", "Symmetry" ]
24,587,308
https://en.wikipedia.org/wiki/Wearable%20cardioverter%20defibrillator
A wearable cardioverter defibrillator (WCD) is a non-invasive, external device for patients at risk of sudden cardiac arrest (SCA). It allows physicians time to assess their patient's arrhythmic risk and see if their ejection fraction improves before determining the next steps in patient care. It is a leased device. A summary of the device, its technology and indications was published in 2017 and reviewed by the EHRA Scientific Documents Committee. Description A wearable cardioverter defibrillator (WCD) is an external device with a built-in defibrillator. The WCD is worn directly on the body by patients who are at transient risk for sudden cardiac death (SCD) for short-term risk mitigation, and it does not require surgery for use. A WCD is also a temporary therapeutic option in case an implantable cardioverter defibrillator (ICD) cannot be implanted immediately. The WCD enables patients to continue their normal life without constantly worrying about their risk for SCD. The WCD is a non-invasive medical device. It consists of a fabric garment, electrodes located in the fabric garment for sensing and delivering an electrical shock, and a battery-powered device that monitors the patient and connects to the electrodes and defibrillation pads. The WCD is worn under the clothing during the entire day. The WCD should only be removed when taking a shower or bath. The electrodes lie directly on the skin. The monitoring device constantly records heart rate and rhythm. If life-threatening cardiac arrhythmias, such as ventricular tachycardia (VT) or ventricular fibrillation (VF), are detected, the defibrillator delivers one or more treatment shock(s) in order to restore a normal heart rhythm. Since the time between a cardiac arrest and defibrillation is directly linked to survival, a treatment shock must be delivered within a few minutes after an event to be effective. With every passing minute without treatment, the chances of patient survival are reduced by 7-10%. From detecting a life-threatening cardiac arrhythmia to automatically delivering a treatment shock, the WCD usually needs less than a minute. The first treatment shock success rate for resuscitating patients from SCD is 98%. Intervention from bystanders or emergency personnel is not required for the WCD to work. The use of the WCD is recommended for the prevention of SCD in the 2006 international joint guidelines from the American College of Cardiology, American Heart Association, and European Society of Cardiology (ACC/AHA/ESC), European Society of Cardiology (ESC) guidelines from 2015 and 2021, and American Heart Association, American College of Cardiology, Heart Rhythm Society (AHA/ACC/HRS) guidelines from 2017. The International Society for Heart and Lung Transplantation (ISHLT) recommends wearable external defibrillators as a bridge therapy for patients waiting for a heart transplant in their Guidelines for the Care of Cardiac Transplant Candidates. In the United Kingdom (UK) the WCD LifeVest® from ZOLL has been available for temporary use on a monthly rental basis since 2017. The WCD is a temporary therapeutic option for patients waiting for an ICD, patients with an ICD that had to be removed (e.g., due to infection), or patients who cannot have an ICD but are at transient risk for SCD. 
The WCD allows physicians time to assess their patient's cardiac arrhythmic risk, make appropriate treatment plans and monitor cardiac output before or after an invasive cardiac procedure (such as bypass surgery, stent placement or heart transplant) or in patients at high risk for SCD after myocardial infarction (MI). Usual wearing time of a WCD is about 3 months but depends on the patient's needs and the prescription of the treating physician. History The use of cardiac defibrillation started in 1947 - first in an open chest and ten years later through a closed chest with high energy levels. In 1972, cardiac defibrillation with intracardiac electrodes delivering much less energy, as low as 30 joules, was established, following the development of portable units delivering high energy levels of up to 1,000 volts. At Johns Hopkins University, doctors Mirowski, Mower and colleagues started developing implantable cardioverter defibrillators (ICD), and were able to implant an ICD in the first human by 1980. Over the years, the ICD was further improved and is now a standard outpatient procedure. There are limiting factors for direct prophylactic implantation of an ICD. For example, a diagnosed high risk for SCD may be temporary, which would argue against an implantation intended for lifetime use. Per current guidelines (e.g., the ESC guidelines from 2015 and 2021) a patient has to wait at least 40 to 90 days after the cardiac event (e.g., myocardial infarction or newly diagnosed heart failure with reduced left ventricular function) before the decision to implant an ICD should be made. An external, wearable cardioverter-defibrillator with defibrillation features similar to an ICD could be a solution to be used as a “bridge” to protect these patients from SCD. In 1986, M. Stephen Heilman and Larry Bowling founded Lifecor and, together with a team of former Intec employees who had developed the first implantable cardioverter defibrillator, began development of the first wearable cardioverter defibrillator, named LifeVest®. The WCD was extensively tested for three years in multi-centre, multinational clinical trials (WEARIT and BIROAD) at 17 major medical centers across the United States and Europe. The results were used to improve the WCD and formed the basis for FDA approval in 2001 of the LifeVest WCD (model 2000) for adult patients who are at high risk for SCD and who are not suitable candidates for an ICD or who refuse to have one. 14 years later (2015), FDA approval was received for the use of the WCD in children who are at high risk for SCD and are not candidates for an ICD or do not receive one due to lack of parental consent. In 2000, prior to the FDA approvals, the WCD had already received European CE-certification. The Lifecor business was acquired by ZOLL Medical Corporation in 2006 and Asahi Kasei in 2012. As of 2015, the LifeVest was available in the United States, Europe, Japan, Australia, Israel and Singapore; it is now marketed in the United Kingdom, United States, Europe, Japan, and several other countries worldwide. 
According to ZOLL, the LifeVest has been prescribed to more than 200,000 patients worldwide. In July 2021, the FDA approved a second WCD product for the market, developed by Kestra Medical Technologies, Inc. This new device has an alternative fabric and garment style specifically for women. Insurance coverage in the United States The WCD is covered by most health plans in the United States, including commercial, state, and federal plans as Durable Medical Equipment (DME) for those patients at high risk of cardiac arrest, including: Primary prevention [Ejection fraction (EF) ≤35% and Myocardial Infarction (MI), Non Ischemic Cardiomyopathy (NICM), or other Dilated Cardiomyopathy (DCM)] including: After recent MI (Coverage during the 40-day ICD waiting period) Before and immediately after CABG or PTCA (Coverage during the 90-day ICD waiting period) Listed for cardiac transplant Recently diagnosed NICM (Coverage during the three-to-nine month ICD waiting period) New York Heart Association (NYHA) Class IV heart failure Terminal disease with life expectancy of less than one year ICD indications when patient condition delays or prohibits ICD implantation ICD explantation Working mechanism of the WCD In the electrode belt, four dry, non-adhesive ECG electrodes continuously monitor the patient's heart rhythm. Three defibrillation electrodes are placed in the fabric garment, one on the chest (approximately at the level of the apex of the heart) and two on the back (between the shoulder blades). The ECG electrodes are placed inside the fabric garment on the chest, providing two independent ECG leads. Prior to delivering a therapeutic shock, the dry defibrillator electrodes automatically deploy conductive gel to protect the skin from possible injury from the treatment. The WCD can deliver up to five consecutive shocks per sequence. Life-saving therapy typically occurs within a minute of the onset of an arrhythmia. The patient is warned when a treatment sequence has been started, e.g. by siren alarms and spoken information. By pressing two response buttons on the monitor simultaneously, an unjustified shock delivery can be prevented by the patient as long as she/he is conscious. 
If the patient fails to respond, e.g., because the patient has lost consciousness due to an arrhythmia, gel is automatically ejected from under the defibrillation electrodes. If the arrhythmia resolves on its own, no treatment shock is delivered. Action from bystanders is not required, but they are warned by voice information not to touch the patient during defibrillation and to call emergency services. The patient receives two rechargeable batteries for the WCD. One is used to operate the monitoring device while the other is charged for daily replacement. Conspicuous ECG sequences or treatments are automatically transmitted to a secure server. The treating physician can view and analyze them via password-protected access. Before a WCD is handed to a patient, the WCD is fitted to the patient for accurate ECG signal detection and the patient receives detailed training to ensure correct handling of the WCD. The efficacy and effectiveness of the WCD have been tested in clinical trials and several international post-marketing studies. If the WCD is worn correctly and ECG signal detection is optimal, the success rate of the first shock is approximately 98%. Hence, the WCD is as effective as an ICD in treating VT and VF. Long-term follow-up studies showed that approximately 90% of all patients treated with the WCD are still alive one year after the heart failure incident. Since the WCD is a non-invasive garment, no injuries or scars remain after use and shock delivery. For effective protection, the WCD should be worn 24 hours a day and should only be removed for personal hygiene. Comparison of the WCD to Automated External Defibrillator (AED) and Implantable Cardioverter Defibrillator (ICD) Automated external defibrillators (AED) are portable electronic devices designed to analyze the heart rhythm and inform the operator whether defibrillation is required. They are intended for people of the general population with an unknown risk for heart failure and are usually available in public places and first responder ambulances. AEDs are designed for use by laypersons and provide simple audio and visual instructions for the operator to follow. Electrode pads, placed by an operator on the chest of the patient, are for monitoring and defibrillation. In contrast to the ICD and WCD, an AED needs the immediate activity of a bystander in order to prevent SCD. WCDs are intended for patients with a known transient risk for SCD and are meant for temporary use as described above. Implantable cardioverter-defibrillators (ICD) are electronic devices implanted in the chest with a lead to the right ventricle of the heart. They are intended for patients with a permanent risk for SCD. An ICD is, like a WCD, designed to detect and terminate cardiac arrhythmias by emergency defibrillation. An invasive surgical procedure is necessary for implantation of the ICD, which is associated with a number of risks and morbidity. Therefore, the decision for an ICD should be taken carefully. The WCD is the ideal therapeutic option to prevent SCD in patients until it is clear that a patient's heart issues are indeed permanent and long-term protection with an ICD must be applied. 
Living with the WCD The WCD allows patients at high risk for SCD who are discharged from the hospital to return to most normal daily activities without constantly worrying about their heart issues and possible fatal outcomes. A retrospective study investigating quality of life in patients who had been fitted with a WCD found that the majority did not feel any impairment in terms of mobility (68%), self-care (83%), daily routine (75%), pain (64%) and mental health (57%). Another prospective study evaluating depression and anxiety in patients eligible for WCD found a trend for better improvement of depression scores in patients who actually received the WCD. A study on the use of the WCD has recently been started in the UK. In case of questions concerning which activities are possible, the manufacturer recommends consulting with the treating physician. The manufacturer also advises avoiding activities in loud and/or high-vibration environments due to the possibility of missing an alert from the WCD. Indications for receiving a WCD The WCD is generally recommended as temporary therapy for all patients who are at risk of SCD and can be prescribed in the UK as a monthly rental device. According to the international guidelines of ACC/AHA/ESC in 2006, the ESC in 2015 and 2021 as well as AHA/ACC/HRS in 2017, patients that may benefit from a WCD include: Patients with reduced left ventricular systolic function (LVEF) of ≤ 35% In the first 40 days after a myocardial infarction (MI) without re-vascularization In the first 90 days after coronary re-vascularization with coronary artery bypass graft (CABG) In the first 90 days after percutaneous coronary intervention (PCI) Newly diagnosed ischemic heart failure patients with reduced ejection fraction (HFrEF) For at least 90 days of optimal medical therapy Patients with ventricular fibrillation (VF) or sustained ventricular tachycardia (VT) Spontaneous or inducible Occurring later than 48 hours after MI Patients on the waiting list for a heart transplantation Bridging the waiting time for patients With indicated or interrupted ICD therapy (e.g., ICD explanted due to infection or intolerance and pending potential re-implantation, delayed implantation for medical reasons including infection, recovery from surgery) With ongoing heart failure medication that needs adjustment With inflammation of the myocardium/myocarditis and waiting for resolution With familial or genetic risk for SCD if diagnostics have not yet been completed and/or an ICD has been ruled out Newly diagnosed non-ischemic heart failure patients with reduced ejection fraction, including dilated cardiomyopathies (DCM) and New York Heart Association (NYHA) stage II-III heart failure patients Patients in a risk phase of pregnancy cardiomyopathy (peripartum CM/PPCM) The ISHLT has listed the WCD as a class I recommendation in its Guidelines for the Care of Cardiac Transplant Candidates since 2006. This means that patients waiting for a heart transplantation who are discharged from hospital should receive a wearable defibrillator to bridge the waiting time until receiving the transplant. The WCD is one of the procedures or treatments for which there is evidence and general agreement that it is beneficial, useful and effective in the given condition. The WCD has also been used in the specific circumstance where patients have an ICD but require temporary explantation for radiotherapy in the location of the ICD generator. 
Clinical trials on the efficacy of the WCD After European CE-certification and FDA approval of the WCD (LifeVest), a number of retrospective and prospective registries verified the efficacy and safety of the WCD. Data from more than 30,000 patients who have used the WCD have been published for a wide variety of indications; only an excerpt is presented here. Meta-analyses A meta-analysis of 11 comparable studies with approximately 20,000 non-overlapping patients in different indications was published by Nguyen et al. They found an overall mortality rate of 1.4%, a VT/VF rate of 2.6% and a VT/VF-related mortality rate of only 0.2% across all patients. 1.7% of the patients (9.1 patients/100 patient years) had received an appropriate treatment, which was successful in 96% of cases. The inappropriate shock rate was <1.0%. A systematic cross-indication review and meta-analysis of studies reporting treatment rates of the WCD was conducted by Masri et al. in 2019. They analysed 28 studies and over 30,000 patients. Over a period of 3 months, 5 per 100 patients received appropriate WCD treatment shocks, and only 2 per 100 patients received inappropriate treatment shocks. Analyses of selection or publication bias (e.g., the Egger test) revealed that there were no differences between independent and manufacturer-sponsored studies, and no differences between prospective and retrospective studies. According to the authors, the rate of patients who were appropriately treated with the WCD over 3 months of follow-up was substantial and much higher in observational studies compared to the RCT included in the analysis. The mortality rate was very low at 0.7 per 100 patients over 3 months. Randomized controlled trial data The first and, to date, only randomized controlled trial (RCT) on WCD use with post-MI patients is the VEST Trial, which was first published by Olgin et al. in 2018. In total 2,302 patients were included in the intention-to-treat (ITT) analysis. The primary outcome of the VEST study, arrhythmic mortality, was 1.6% in the WCD group vs. 2.4% in the control group. The difference was not significant despite a 33% relative risk reduction (RRR). The secondary outcome of the VEST study, all-cause mortality, was 3.1% in the WCD group and 4.9% in the control group. The difference was significant with a 36% reduction in mortality (RRR). Notably, in this study the average daily wearing time was only 14 to 18 hours/day, considerably lower than reported in observational studies. An additional as-treated analysis (ATT), provided as a supplementary appendix to the original publication, revealed statistical significance in all mortality endpoints, i.e. positive results for the use of the WCD. In a per protocol analysis (PPA) published in 2020, the reduction in arrhythmic mortality was 62% and in all-cause mortality 75%, both significant results, comparable to the results of the ATT analysis. These results indicated that the WCD is highly effective in reducing mortality rates in patients with a high risk for SCD. Essential factors in successful WCD therapy in everyday clinical practice are high wearing compliance and the use of the monitoring system provided by the manufacturer (ZPM Network). Health technology assessments (HTA) Aidelsburger and colleagues published the results of an HTA in 2020. 
The authors analysed data from 49 studies and concluded that the WCD is a safe and effective intervention against sudden cardiac arrest during the time needed to determine a long-term risk management strategy, that it is reliable in detecting VT/VF events, and that it shows a high rate of appropriate shocks leading to a high rate of successful VT/VF terminations. Cortesi and colleagues published the results from another HTA in 2021. They focused on cost-efficacy, comparing the WCD to “standard of care” in patients at risk for SCD after MI or ICD explantation. The authors found that the WCD is a cost-effective treatment option in patients after MI, using the data from the VEST study. In patients after ICD explantation the WCD even provided a cost saving of €1,782 compared to three weeks of hospitalization in a low-intensity hospital (standard of care), using data from the Italian NHS. The authors concluded that for the Italian NHS the use of the WCD contributes to a more effective utilization of resources and to the improvement of patient care in clinical practice. An HTA is currently planned in the UK. Notes References Reek et al., “Clinical Efficacy of the Wearable Defibrillator in Acutely Terminating Episodes of Ventricular Fibrillation Using Biphasic Shocks,” PACE, 2002, 25(4, part II):577. Wase, “Wearable Defibrillators: A New Tool in the Management of Ventricular Tachycardia/Ventricular Fibrillation,” EP Lab Digest, 2005; 12:22–24. Feldman et al., “Use of a Wearable Defibrillator in Terminating Tachyarrhythmias in Patients at High Risk for Sudden Death: Results of WEARIT/BIROAD,” PACE, 2004, 27:4–9. Keller et al., “Using the LifeVest as a Bridge to ICD Implantation: One Urban Community Hospital’s Experience,” EP Lab Digest, 2008; Vol. 8, Issue 8. Elrod, “Measuring the Effectiveness of Wearable Defibrillators and Implantable Devices: EP Lab Digest Speaks with Jeffrey Olgin, MD about the VEST/PREDICTS study,” EP Lab Digest, 2008; Vol. 8, Issue 7. Medical equipment Cardiology
Wearable cardioverter defibrillator
[ "Biology" ]
5,016
[ "Medical equipment", "Medical technology" ]
24,590,863
https://en.wikipedia.org/wiki/Extended%20aeration
Extended aeration is a method of sewage treatment using modified activated sludge procedures. It is preferred for relatively small waste loads, where lower operating efficiency is offset by mechanical simplicity. Conventional sewage treatment Mechanized sewage treatment typically includes settling in a primary clarifier, followed by biological treatment and a secondary clarifier. Both clarifiers produce waste sludge requiring sewage sludge treatment and disposal. Activated sludge agitates a portion of the secondary clarifier sludge in the primary clarifier effluent. Remaining secondary sludge and all primary sludge typically require digestion prior to disposal. Process modification Extended aeration agitates all incoming waste in the sludge from a single clarifier. The combined sludge starts with a higher concentration of inert solids than typical secondary sludge and the longer mixing time required for digestion of primary solids in addition to dissolved organics produces aged sludge requiring greater mixing energy input per unit of waste oxidized. Applications Extended aeration is typically used in prefabricated "package plants" intended to minimize design costs for waste disposal from small communities, tourist facilities, or schools. In comparison to traditional activated sludge, longer mixing time with aged sludge offers a stable biological ecosystem better adapted for effectively treating waste load fluctuations from variable occupancy situations. Supplemental feeding with something like sugar is sometimes used to sustain sludge microbial populations during periods of low occupancy; but population response to variable food characteristics is unpredictable, and supplemental feeding increases waste sludge volumes. Sludge may be periodically removed by septic tank pumping trucks as sludge volume approaches storage capacity. See also List of waste-water treatment technologies Notes References Environmental engineering Pollution control technologies Sanitation Sewerage Sewerage infrastructure Waste treatment technology Water treatment
Extended aeration
[ "Chemistry", "Engineering", "Environmental_science" ]
366
[ "Water treatment", "Chemical engineering", "Sewerage infrastructure", "Pollution control technologies", "Water pollution", "Sewerage", "Civil engineering", "Environmental engineering", "Water technology", "Waste treatment technology" ]
24,592,849
https://en.wikipedia.org/wiki/C4H9Br
The molecular formula C4H9Br (molar mass: 137.02 g/mol, exact mass: 135.9888 u) may refer to: 1-Bromobutane 2-Bromobutane tert-Butyl bromide 1-Bromo-2-methylpropane Molecular formulas
C4H9Br
[ "Physics", "Chemistry" ]
72
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,593,105
https://en.wikipedia.org/wiki/Quantum%20spacetime
In mathematical physics, the concept of quantum spacetime is a generalization of the usual concept of spacetime in which some variables that ordinarily commute are assumed not to commute and form a different Lie algebra. The choice of that algebra varies from one theory to another. As a result of this change, some variables that are usually continuous may become discrete. Often only such discrete variables are called "quantized"; usage varies. The idea of quantum spacetime was proposed in the early days of quantum theory by Heisenberg and Ivanenko as a way to eliminate infinities from quantum field theory. The germ of the idea passed from Heisenberg to Rudolf Peierls, who noted that electrons in a magnetic field can be regarded as moving in a quantum spacetime, and to Robert Oppenheimer, who carried it to Hartland Snyder, who published the first concrete example. Snyder's Lie algebra was made simple by C. N. Yang in the same year. Overview Physical spacetime is a quantum spacetime when in quantum mechanics position and momentum variables are already noncommutative, obey the Heisenberg uncertainty principle, and are continuous. Because of the Heisenberg uncertainty relations, greater energy is needed to probe smaller distances. Ultimately, according to gravity theory, the probing particles form black holes that destroy what was to be measured. The process cannot be repeated, so it cannot be considered to be a measurement. This limited measurability led many to expect that the usual picture of continuous commutative spacetime breaks down at Planck scale distances, if not sooner. Physical spacetime is expected to be quantum because physical coordinates are slightly noncommutative. The astronomical coordinates of a star are modified by gravitational fields between the observer and the star, as in the deflection of light by the sun, one of the classic tests of general relativity. Therefore, the coordinates actually depend on gravitational field variables. According to quantum theories of gravity, these field variables do not commute; therefore coordinates that depend on them likely do not commute. Both arguments are based on pure gravity and quantum theory, and they limit the measurement of time by the only time constant in pure quantum gravity, the Planck time. Instruments, however, are not purely gravitational but are made of particles. They may set a more severe, larger, limit than the Planck time. Criteria Quantum spacetimes are often described mathematically using the noncommutative geometry of Connes, quantum geometry, or quantum groups. Any noncommutative algebra with at least four generators could be interpreted as a quantum spacetime, but the following desiderata have been suggested: Local Lorentz group and Poincaré group symmetries should be retained, possibly in a generalised form. Their generalisation often takes the form of a quantum group acting on the quantum spacetime algebra. The algebra might plausibly arise in an effective description of quantum gravity effects in some regime of that theory. For example, a physical parameter , perhaps the Planck length, might control the deviation from commutative classical spacetime, so that ordinary Lorentzian spacetime arises as . There might be a notion of quantum differential calculus on the quantum spacetime algebra, compatible with the (quantum) symmetry and preferably reducing to the usual differential calculus as . 
This would permit wave equations for particles and fields and facilitate predictions for experimental deviations from classical spacetime physics that can then be tested experimentally. The Lie algebra should be semisimple. This makes it easier to formulate a finite theory. Models Several models were found in the 1990s more or less meeting most of the above criteria. Bicrossproduct model spacetime The bicrossproduct model spacetime was introduced by Shahn Majid and Henri Ruegg and has Lie algebra relations for the spatial variables and the time variable . Here has dimensions of time and is therefore expected to be something like the Planck time. The Poincaré group here is correspondingly deformed, now to a certain bicrossproduct quantum group with the following characteristic features. The momentum generators commute among themselves but addition of momenta, reflected in the quantum group structure, is deformed (momentum space becomes a non-abelian group). Meanwhile, the Lorentz group generators enjoy their usual relations among themselves but act non-linearly on the momentum space. The orbits for this action are depicted in the figure as a cross-section of against one of the . The on-shell region describing particles in the upper center of the image would normally be hyperboloids but these are now 'squashed' into the cylinder in simplified units. The upshot is that Lorentz-boosting a momentum will never increase it above the Planck momentum. The existence of a highest momentum scale or lowest distance scale fits the physical picture. This squashing comes from the non-linearity of the Lorentz boost and is an endemic feature of bicrossproduct quantum groups known since their introduction in 1988. Some physicists dub the bicrossproduct model doubly special relativity, since it sets an upper limit to both speed and momentum. Another consequence of the squashing is that the propagation of particles is deformed, even of light, leading to a variable speed of light. This prediction requires the particular to be the physical energy and spatial momentum (as opposed to some other function of them). Arguments for this identification were provided in 1999 by Giovanni Amelino-Camelia and Majid through a study of plane waves for a quantum differential calculus in the model. They take the form In other words, a form which is sufficiently close to classical that one might plausibly believe the interpretation. At the moment, such wave analysis represents the best hope to obtain physically testable predictions from the model. Prior to this work there were a number of unsupported claims to make predictions from the model based solely on the form of the Poincaré quantum group. There were also claims based on an earlier -Poincaré quantum group introduced by Jurek Lukierski and co-workers which were important precursors to the bicrossproduct, albeit without the actual quantum spacetime and with different proposed generators for which the above picture does not apply. The bicrossproduct model spacetime has also been called -deformed spacetime with . q-Deformed spacetime This model was introduced independently by a team working under Julius Wess in 1990 and by Shahn Majid and coworkers in a series of papers on braided matrices starting a year later. The point of view in the second approach is that usual Minkowski spacetime has a description via Pauli matrices as the space of 2 x 2 hermitian matrices. 
In quantum group theory and using braided monoidal category methods, a natural q-version of this is defined here for real values of as a 'braided hermitian matrix' of generators and relations These relations say that the generators commute as thereby recovering usual Minkowski space. Working with more familiar variables as linear combinations of these, in particular, time is given by a natural braided trace of the matrix and commutes with the other generators (so this model is different from the bicrossproduct one). The braided-matrix picture also leads naturally to a quantity which as returns the usual Minkowski distance (this translates to a metric in the quantum differential geometry). The parameter or is dimensionless and is thought to be a ratio of the Planck scale and the cosmological length. That is, there are indications that this model relates to quantum gravity with a non-zero cosmological constant, the choice of depending on whether this is positive or negative. This describes the mathematically better understood but perhaps less physically justified positive case. A full understanding of this model requires (and was concurrent with the development of) a full theory of 'braided linear algebra' for such spaces. The momentum space for the theory is another copy of the same algebra and there is a certain 'braided addition' of momentum on it expressed as the structure of a braided Hopf algebra or quantum group in a certain braided monoidal category). This theory, by 1993, had provided the corresponding -deformed Poincaré group as generated by such translations and -Lorentz transformations, completing the interpretation as a quantum spacetime. In the process it was discovered that the Poincaré group not only had to be deformed but had to be extended to include dilations of the quantum spacetime. For such a theory to be exact, all particles in the theory need to be massless, which is consistent with experiment, as masses of elementary particles are vanishingly small compared to the Planck mass. If current thinking in cosmology is correct, then this model is more appropriate, but it is significantly more complicated and for this reason its physical predictions have yet to be worked out. Fuzzy or spin model spacetime This refers in modern usage to the angular momentum algebra familiar from quantum mechanics but interpreted in this context as coordinates of a quantum space or spacetime. These relations were proposed by Roger Penrose in his earliest spin network theory of space. It is a toy model of quantum gravity in 3 spacetime dimensions (not the physical 4) with a Euclidean (not the physical Minkowskian) signature. It was again proposed in this context by Gerardus 't Hooft. A further development including a quantum differential calculus and an action of a certain 'quantum double' quantum group as deformed Euclidean group of motions was given by Majid and E. Batista. A striking feature of the noncommutative geometry, is that the smallest covariant quantum differential calculus has one dimension higher than expected, namely 4, suggesting that the above can also be viewed as the spatial part of a 4-dimensional quantum spacetime. The model should not be confused with fuzzy spheres which are finite-dimensional matrix algebras which can be thought of as spheres in the spin model spacetime of fixed radius. Heisenberg model spacetimes The quantum spacetime of Hartland Snyder proposes that where the generate the Lorentz group. This quantum spacetime and that of C. N. 
Yang entail a radical unification of spacetime, energy-momentum, and angular momentum. The idea was revived in a modern context by Sergio Doplicher, Klaus Fredenhagen and John Roberts in 1995, by letting simply be viewed as some function of as defined by the above relation, and any relations involving it viewed as higher order relations among the . The Lorentz symmetry is arranged so as to transform the indices as usual and without being deformed. An even simpler variant of this model is to let be a numerical antisymmetric tensor, in which context it is usually denoted , so the relations are . In even dimensions , any nondegenerate such theta can be transformed to a normal form in which this really is just the Heisenberg algebra, but with the difference that the variables are being proposed as those of spacetime. This proposal was once popular because of its familiar form of relations and because it has been argued that it emerges from the theory of open strings landing on D-branes, see noncommutative quantum field theory and Moyal plane. However, this D-brane lives in some of the higher spacetime dimensions in the theory and hence it is not physical spacetime that string theory suggests to be effectively quantum in this way. It also requires subscribing to D-branes as an approach to quantum gravity in the first place. When posited as quantum spacetime, it is hard to obtain physical predictions, and one reason for this is that if is a tensor, then by dimensional analysis, it should have dimensions of length, and if this length is speculated to be the Planck length, then the effects would be even harder to detect than for other models. Noncommutative extensions to spacetime Although not quantum spacetime in the sense above, another use of noncommutative geometry is to tack on 'noncommutative extra dimensions' at each point of ordinary spacetime. Instead of invisible curled up extra dimensions as in string theory, Alain Connes and coworkers have argued that the coordinate algebra of this extra part should be replaced by a finite-dimensional noncommutative algebra. For a certain reasonable choice of this algebra, its representation and extended Dirac operator, the Standard Model of elementary particles can be recovered. In this point of view, the different kinds of matter particles are manifestations of geometry in these extra noncommutative directions. Connes's first works here date from 1989, and the approach has been developed considerably since then. Such an approach can theoretically be combined with quantum spacetime as above. See also Quantum group Quantum geometry Noncommutative geometry Quantum gravity Anabelian topology Quantum reference frame References Further reading R. P. Grimaldi, Discrete and Combinatorial Mathematics: An Applied Introduction, 4th Ed. Addison-Wesley 1999. J. Matousek, J. Nesetril, Invitation to Discrete Mathematics. Oxford University Press 1998. Taylor E. F., John A. Wheeler, Spacetime Physics, publisher W. H. Freeman, 1963. External links Plus Magazine article on quantum geometry by Marianne Freiberger Mathematical physics
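For concreteness, the defining commutation relations of several of the models discussed above are commonly written as follows; conventions, signs and factors involving the deformation parameters vary between authors, so these should be read as representative forms from the literature rather than the exact notation originally used in this article:
\[
\text{bicrossproduct (κ-Minkowski):}\quad [t, x_i] = \tfrac{i}{\kappa}\, x_i, \qquad [x_i, x_j] = 0;
\]
\[
\text{Snyder:}\quad [x_\mu, x_\nu] = i\,\ell^2 M_{\mu\nu}, \quad M_{\mu\nu}\ \text{the Lorentz generators, } \ell \text{ a length scale};
\]
\[
\text{spin (fuzzy) model:}\quad [x_i, x_j] = i\lambda\, \epsilon_{ijk}\, x_k;
\]
\[
\text{Moyal / θ-spacetime:}\quad [x^\mu, x^\nu] = i\,\theta^{\mu\nu}, \quad \theta^{\mu\nu}\ \text{a constant antisymmetric matrix}.
\]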
Quantum spacetime
[ "Physics", "Mathematics" ]
2,710
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
24,593,246
https://en.wikipedia.org/wiki/ATM%20Adaptation%20Layer%202
ATM Adaptation Layer 2 (AAL2) is an Asynchronous Transfer Mode (ATM) adaptation layer, used primarily in telecommunications; for example, it is used for the Iu interfaces in the Universal Mobile Telecommunications System, and is also used for transporting digital voice. The standard specifications related to AAL2 are ITU standards I.363.2 and I.366.1. What is AAL2? AAL2 is a variable-bitrate connection-oriented low-latency service originally intended to adapt voice for transmission over ATM. Like other ATM adaptation layers, AAL2 defines segmentation and reassembly of higher-layer packets into ATM cells, in this case packets of data containing voice and control information. AAL2 is further separated into two sub-layers that help with the mapping from upper-layer services to ATM cells. These are named Service Specific Convergence Sub-layer (SSCS) and Common Part Sub-layer (CPS). The AAL2 protocol improves on other ATM adaptation layers by packing many small packets efficiently into one standard-sized ATM cell of 53 bytes. A one-byte packet thus no longer has an overhead ratio of 52 unused bytes out of 53 (i.e. 98%). Potentially, a total of 11 one-byte CPS packets (plus 3/4 of a 12th CPS packet) could squeeze into a single cell. Of course, CPS packets can come in other sizes with other CIDs, too. When the transmission is ready, the CPS packets are all multiplexed together into a single cell and transported over standard ATM network infrastructure. The transport networks for ATM are well-standardized synchronous networks based on fiber optics (SDH/Sonet, i.e. STM-1/OC-3 or higher) or copper cable (PDH, i.e. E1/T1/JT1 or higher-bandwidth fixed lines), with built-in redundancy and OAM-related network features. Ethernet networks never had such features originally (in order to keep things simple), and they are sorely missed in metro Ethernet standard networks; efforts to improve Ethernet networks are in a sense trying to reinvent the wheel à la ATM. AAL2 is one example of a useful benefit of ATM as a general standard for Layer 2 protocols. ATM/AAL2's efficient handling of small packets contrasts with Ethernet's minimum payload of 46 bytes; an AAL2 CPS packet has a minimum size of 1 byte. AAL2 is the standard layer 2 protocol used in all Iu interfaces, i.e. the interfaces between UMTS base stations and UMTS Radio Network Controllers (RNCs) (Iu-B), inter-RNCs (Iu-R), UMTS RNCs and UMTS Serving GPRS Support Nodes (SGSNs) (Iu-PS), and UMTS RNCs and media gateways (MGWs) (Iu-CS). AAL2 and the ATM Cell The basic component of AAL2 is the CPS packet. A CPS packet is an unanchored unit of data that can cross ATM cells and can start from anywhere in the payload of the ATM cell, other than the start field (STF). The STF is the first byte of the 48-byte ATM payload. The STF gives the byte index into the ATM cell where the first CPS packet in this cell begins. Byte 0 is the STF. The data from byte 1 ... (STF+1) would be the straddled remainder of the previous ATM cell's final CPS packet. If there is no remainder from the previous cell, the STF is 0, and the first byte of the cell after the STF is also the location of the start of the first CPS packet. The format for the 1-byte STF at the beginning of the ATM cell is: 6 bits - offset field (OSF) 1 bit - sequence number (SN) 1 bit - parity (P) OSF The Offset Field carries the binary value of the offset, in octets, between the end of the P bit and the start of the CPCS-PDU payload. Values greater than 47 are not allowed. 
SN The Sequence Number numbers the stream of CPCS-PDUs. P The Parity bit is used to detect errors in the OSF and SN fields. If the CPS packets do not fill the remaining 47 bytes of the cell payload, the rest is filled with padding. AAL2u One common adaptation of AAL2, AAL2u, does not use the STF field at all. In this case, one single CPS packet is aligned to the beginning of the cell. AAL2u is not used in standardized interfaces, but rather in proprietary equipment implementations where the multiplexing/demultiplexing, etc. that needs to be done for standard AAL2 either is too strenuous, is unsupported, or requires too much overhead (i.e. the 1 byte of STF) from the internal system's point of view. Most computer chips do not support AAL2, so stripping this layer away makes it easier to interwork between the ATM interface and the rest of the network. ATM AAL2 Cell Diagram The following is a diagram of the AAL2 ATM cell: AAL2 and the CPS Packet A CPS packet has a 3-byte header and a payload of between one and 45 octets. The standard also defines a 64-octet mode, but this is not commonly used in real 3G networks. The 3-byte CPS header has the following fields: 8 bits - channel identifier (CID) 6 bits - length indicator (LI) 5 bits - user to user indication (UUI) 5 bits - header error control (HEC) CID The Channel Identifier identifies the user of the channel. The AAL2 channel is a bi-directional channel and the same channel identification value is used for both directions. The maximum number of multiplexed user channels is 248, as some channel identifier values are reserved for other uses, such as peer-to-peer layer management. LI The Length Indicator indicates the length (in number of octets) of the CPS information field, and can have a value between 1 and 45 (default) or sometimes between 1 and 64. For a given CID all channels must be of the same maximum length (either 45 or 64 octets). NB: the LI is one less than the actual length of the payload, so 0 corresponds to the minimum length of 1 octet, and 0x3f to 64 octets. UUI User to User Indication conveys specific information transparently between the users. For example, in SSSAR, UUI is used to indicate that this is the final CPS packet for the SSSAR PDU. HEC This is Header Error Control and checks for errors in the CID, LI and UUI fields. The generator polynomial for the CPS HEC is: ATM AAL2 CPS Packet Diagram The following is a diagram of the CPS packet: References External links Broadband Forum - ATM Forum Technical Specifications AAL2 ITU Standard Network protocols ITU-T recommendations Asynchronous Transfer Mode
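As a rough illustration of the field layouts described above, the sketch below packs an STF octet and parses a CPS packet header. It assumes the fields are packed most-significant-bit first in the order listed (OSF, SN, P for the STF; CID, LI, UUI, HEC for the CPS header), treats the parity bit as even parity, and does not compute the 5-bit HEC, since the generator polynomial is not reproduced here; it is a sketch under those assumptions, not a reference implementation of ITU-T I.363.2.

```python
# Sketch of AAL2 STF and CPS packet header packing/parsing (bit order assumed MSB-first).

def pack_stf(osf: int, sn: int) -> int:
    """Build the start field octet: 6-bit offset, 1-bit sequence number, 1-bit parity."""
    if not 0 <= osf <= 47:
        raise ValueError("offset field must be 0..47")
    partial = (osf << 2) | ((sn & 1) << 1)
    parity = bin(partial).count("1") & 1      # even parity over OSF and SN bits (assumption)
    return partial | parity

def parse_cps_header(header: bytes) -> dict:
    """Split a 3-byte CPS packet header into CID (8), LI (6), UUI (5) and HEC (5) fields."""
    if len(header) != 3:
        raise ValueError("CPS packet header is 3 octets")
    value = int.from_bytes(header, "big")
    return {
        "cid": (value >> 16) & 0xFF,          # channel identifier
        "li": (value >> 10) & 0x3F,           # length indicator = payload length - 1
        "uui": (value >> 5) & 0x1F,           # user-to-user indication
        "hec": value & 0x1F,                  # header error control (not verified here)
    }

# Example: CID 8, a 12-octet payload (LI = 11), UUI 0, HEC left as 0.
print(hex(pack_stf(osf=0, sn=1)))
print(parse_cps_header(bytes([8, (11 << 2) | 0, 0])))
```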
ATM Adaptation Layer 2
[ "Engineering" ]
1,517
[ "Asynchronous Transfer Mode", "Computer networks engineering" ]
24,593,416
https://en.wikipedia.org/wiki/Broadfield%20House%20Glass%20Museum
Broadfield House, a Grade II listed building, was home to a glass museum and hot glass studio, owned and operated by Dudley Council museum service, and was located in Kingswinford, West Midlands, England. The museum closed on 30 September 2015. It displayed a huge variety of glass objects, dating from the 17th century to the present day, across nine galleries. In addition to the glass displays were several paintings that depict glass making and the local landscape. The museum also displayed glass making tools and ephemera produced by the glass industry. Various events and temporary exhibitions were held throughout its history. To complete the visitor experience, it had a shop that sold various souvenirs, books, vintage glassware and products from contemporary glassmakers. The collection is now displayed at Stourbridge Glass Museum. History of the building The building has hosted many contrasting occupants. The original structure was a modest two-storey farmhouse built in the mid or late 18th century and faced Barnett Lane. The threshing barn (now the Hot Glass Studio) dates from the same period and serves as a reminder that two hundred years ago this area was open country and farmland. In the early 1800s the house was transformed into a much grander residence when a fine three-storey Regency block with sash windows and portico was built onto the back of the original building. This then formed the main entrance, reversing the orientation of the original house. In 1943 the house, along with 16 acres, was purchased by Kenneth George MacMaster, an engineering contractor and property developer. The following year MacMaster sold the house to Dennis Smith from Tividale. Smith was the last private owner of Broadfield House and lived there with his family until 1949. In 1949 the house was acquired by Staffordshire County Council for use as a Mothercraft Hostel. Following local government reorganisation in 1966 ownership was transferred to the enlarged County Borough of Dudley. In 1969 Broadfield House became an Old People’s Home. It was not a suitable site, as illustrated by its 44 stairs and lack of a lift. Following the creation of Dudley MBC in 1974, the Council closed the home and began looking at alternative uses for the building. Inevitably the Mothercraft Hostel and Old People's Home left their mark on the building, and features remain that are suggestive of an institutional use. In 1976 the idea emerged of using the building as a new home for the Council’s Brierley Hill and Stourbridge Glass Collections. This met with considerable opposition, as the people of Brierley Hill and Stourbridge were very protective of their collections and did not want them moved from their respective towns. At the final Council meeting, the decision to go ahead won by only one vote. Conversion work began in 1979 and Broadfield House Glass Museum was officially opened by Princess Michael of Kent on 2 April 1980. The museum closed on 30 September 2015. The collection is now displayed at Stourbridge Glass Museum, which opened on 9 April 2022. Exhibitions The museum held various temporary exhibitions, with local, national and international artists represented, featuring historical and contemporary glassworks. The Studio The Hot Glass Studio is sponsored by The Hulbert Group of Dudley and has been made available for use by graduates and established glass-blowers. Archives and Library The museum housed archives from various sources, containing such items as pattern books, catalogues, description books and invoices. 
In addition, the museum held a large collection of images and recorded material providing insight into the people and the manufacturing process. It also housed an extensive reference library of books and information on glassworking, including the entire library of Robert Charleston, former head of glass and ceramics at the Victoria & Albert Museum. The Charleston library includes approximately 700 books as well as his own collection of papers, articles and archival material. Both the library and archive will continue to be in the care of DMBC Museum Service. References External links The Friends of Broadfield House Glass Museum @Glass_museum on Twitter BHGM Flickr album Grade II listed buildings in the West Midlands (county) Glass museums and galleries Buildings and structures in the Metropolitan Borough of Dudley Art museums and galleries in the West Midlands (county) Decorative arts museums in England Museums established in 1980
Broadfield House Glass Museum
[ "Materials_science", "Engineering" ]
845
[ "Glass engineering and science", "Glass museums and galleries" ]
24,593,664
https://en.wikipedia.org/wiki/Quantum%20differential%20calculus
In quantum geometry or noncommutative geometry a quantum differential calculus or noncommutative differential structure on an algebra over a field means the specification of a space of differential forms over the algebra. The algebra here is regarded as a coordinate ring but it is important that it may be noncommutative and hence not an actual algebra of coordinate functions on any actual space, so this represents a point of view replacing the specification of a differentiable structure for an actual space. In ordinary differential geometry one can multiply differential 1-forms by functions from the left and from the right, and there exists an exterior derivative. Correspondingly, a first order quantum differential calculus means at least the following: An --bimodule over , i.e. one can multiply elements of by elements of in an associative way: A linear map obeying the Leibniz rule (optional connectedness condition) The last condition is not always imposed but holds in ordinary geometry when the manifold is connected. It says that the only functions killed by are constant functions. An exterior algebra or differential graded algebra structure over means a compatible extension of to include analogues of higher order differential forms obeying a graded-Leibniz rule with respect to an associative product on and obeying . Here and it is usually required that is generated by . The product of differential forms is called the exterior or wedge product and often denoted . The noncommutative or quantum de Rham cohomology is defined as the cohomology of this complex. A higher order differential calculus can mean an exterior algebra, or it can mean the partial specification of one, up to some highest degree, and with products that would result in a degree beyond the highest being unspecified. The above definition lies at the crossroads of two approaches to noncommutative geometry. In the Connes approach a more fundamental object is a replacement for the Dirac operator in the form of a spectral triple, and an exterior algebra can be constructed from this data. In the quantum groups approach to noncommutative geometry one starts with the algebra and a choice of first order calculus but constrained by covariance under a quantum group symmetry. Note The above definition is minimal and gives something more general than classical differential calculus even when the algebra is commutative or functions on an actual space. This is because we do not demand that since this would imply that , which would violate axiom 4 when the algebra was noncommutative. As a byproduct, this enlarged definition includes finite difference calculi and quantum differential calculi on finite sets and finite groups (finite group Lie algebra theory). Examples For the algebra of polynomials in one variable the translation-covariant quantum differential calculi are parametrized by and take the form This shows how finite differences arise naturally in quantum geometry. Only the limit has functions commuting with 1-forms, which is the special case of high school differential calculus. For the algebra of functions on an algebraic circle, the translation (i.e. circle-rotation)-covariant differential calculi are parametrized by and take the form This shows how -differentials arise naturally in quantum geometry. For any algebra one has a universal differential calculus defined by where is the algebra product. By axiom 3., any first order calculus is a quotient of this. 
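Written out, the Leibniz rule of axiom 2 and the one-variable polynomial example take the following form in one common presentation from the quantum-groups literature (the symbol λ denotes the deformation parameter, and the sign convention is one of several in use):
\[
\mathrm{d}(ab) = (\mathrm{d}a)\, b + a\, \mathrm{d}b, \qquad a, b \in A;
\]
\[
A = k[x]: \qquad \mathrm{d}x \; f(x) = f(x+\lambda)\, \mathrm{d}x, \qquad \mathrm{d}f = \frac{f(x+\lambda) - f(x)}{\lambda}\, \mathrm{d}x,
\]
so that functions commute with 1-forms, and the finite difference becomes the ordinary derivative, only in the limit λ → 0.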
See also Quantum geometry Noncommutative geometry Quantum calculus Quantum group Quantum spacetime Further reading Noncommutative geometry Algebraic structures
Quantum differential calculus
[ "Mathematics" ]
714
[ "Mathematical structures", "Mathematical objects", "Algebraic structures" ]
31,677,000
https://en.wikipedia.org/wiki/California%20Green%20Chemistry%20Initiative
The California Green Chemistry Initiative (CGCI) is a six-part initiative to reduce public and environmental exposure to toxins through improved knowledge and regulation of chemicals; two parts became statute in 2008. The other four parts were not passed, but are still on the agenda of the California Department of Toxic Substances Control green ribbon science panel discussions. The two parts of the California Green Chemistry Initiative that were passed are known as AB 1879 (Chapter 559, Statutes of 2008): Hazardous Materials and Toxic Substances Evaluation and Regulation and SB 509 (Chapter 560, Statutes of 2008): Toxic Information Clearinghouse. Implementation of CGCI has been delayed indefinitely beyond the January 1, 2011 deadline. Purpose Green chemistry is the design of chemical products and processes that reduce or eliminate the use and generation of hazardous substances. Green chemistry is based upon twelve principles, identified in “Green Chemistry: Theory and Practice” and adopted by the US Environmental Protection Agency (EPA). It is an innovative technology which encourages the design of safer chemicals and products and minimizes the impact of wastes through increased energy efficiency, the design of chemical products that degrade after use and the use of renewable resources (instead of non-renewable fossil fuel such as petroleum, gas and coal). The Office of Pollution Prevention and Toxics (OPPT), created under the United States Pollution Prevention Act of 1990, promotes the use of chemistry for pollution prevention through voluntary, non-regulatory partnerships with academia, industry, other government agencies, and non-governmental organizations. The United States Environmental Protection Agency (EPA) promotes green chemistry as overseen by the OPPT. The California Green Chemistry Initiative moves beyond voluntary partnerships and voluntary information disclosure to require industry reporting and public disclosure. Overview The United States Environmental Protection Agency's most important law to regulate the production, use and disposal of chemicals is the Toxic Substances Control Act of 1976 (TSCA). Over the years, TSCA has fallen behind the industry it is supposed to regulate and is an inadequate tool for providing protection against today's chemical risks. Green chemistry represents a major paradigm shift in industrial manufacturing as it is a proactive “cradle-to-cradle” approach that focuses environmental protection at the design stage of production processes. In 2008, California governor Arnold Schwarzenegger signed two joint bills, AB 1879 and SB 509, which created California's Green Chemistry Initiative (CGCI). AB 1879 increases regulatory authority over chemicals in consumer products. The law established an advisory panel of scientists, known as the green ribbon science panel, to guide research in chemical policy, create regulations for assessing alternatives, and set up an internet database of research on toxins. SB 509 was designed to ensure that information regarding the hazard traits, toxicological and environmental endpoints, and other vital data is available to the public, to businesses, and to regulators in a Toxics Information Clearinghouse. This legislation marks the biggest leap forward in California chemicals policy in nearly two decades and is intended to improve the health and safety of all Californians by providing the Department of Toxic Substances Control (DTSC) with the authority to control toxic substances in consumer products. 
The bills were scheduled to go into regulatory effect on January 1, 2011 with the adoption of the Green Chemistry Initiative. California has postponed the initiative indefinitely, due to concerns raised by stakeholders and, more specifically, controversial last minute changes in the final draft. The final or third draft contains substantial revisions, including scaled back manufacturer and retailer compliance requirements that were not well received by the environmental community. Assemblyman Mike Feuer and several authors of AB 1879 assert that last minute changes by the California DTSC have drastically weakened the Green Chemistry Initiative and limited its scope. They are most concerned with the change requiring the state to prove that a chemical is harmful before it can be regulated, mirroring what is currently required at the Federal level by TSCA. The original draft advocated a precautionary principle, or “cradle-to-cradle” approach. Environmentalists fear that CGCI will not remove chemicals from the shelves, but instead will create “paralysis by analysis” as companies litigate against the DTSC over unfavorable decisions. Physical and social causes Traditional methods of dealing with wastes Society historically managed its industrial and municipal wastes by disposal or incineration. Chemical regulation occurs only after a product is identified as hazardous. This problem-specific approach has led to the release of thousands of potentially harmful chemicals in our environment. Chemical regulation is a continuous game of catch up, in which banned chemicals are replaced with new chemicals that may be just as or more toxic. Many environmental laws are still based on the industrial production model of cradle-to-grave. The term “cradle-to-grave” is used to describe and assess the life-cycle of products, from raw material extraction through materials processing, manufacture, distribution, use and disposal. This traditional approach to chemicals management has serious environmental drawbacks because it does not consider what happens to a product after it is disposed of. The Resource Conservation and Recovery Act (RCRA) of 1976 exemplifies a cradle-to-grave management approach to hazardous waste. RCRA has been largely ineffective because its emphasis is on dealing with waste after it has been created; meanwhile emphasis on waste reduction is minimal. Waste does not disappear; it is simply transported elsewhere. Costly and burdensome hazardous waste disposal in the US has encouraged the exportation of hazardous waste to poor countries and developing nations willing to accept the waste for a fee. The Green Chemistry initiative instead employs a cradle-to-cradle approach, which represents a major paradigm shift in environmental policy and provides a proactive solution to toxic waste. The Earth's capacity to accept toxic waste is practically nonexistent. The disposal of hazardous wastes is not the root problem but rather the root symptom. The critical issue is the creation of toxic wastes. Requiring manufacturers to consider chemical exposure during manufacturing, throughout product use and after disposal, encourages the production of safer products. Consumption and wastes By the time we find a product on a market shelf, 90% of the resources used to create that product were regarded as waste. This accounts for about 136 pounds of resources a week consumed by the average American and 2,000 pounds of waste to support that consumption. 
As the population grows and the economy expands, more and more products will be created, consumed, and disposed of. Many negative externalities are related to the environmental consequences of production and use, including air pollution, anthropogenic climate change and water pollution. Under the current cycle of production, toxic chemical byproducts will continue to be produced and unleashed on our environment. It is important to carefully consider how toxic wastes are created in order to avoid the possibility of a world that is unsuitable for human life. Transparency issues One of the biggest failures in market transactions is the imbalance of information that is provided to the consumer by the producer. “Information asymmetry” is an economic concept that is used to explain this failure: it deals with the study of decisions in transactions where one party has more or better information than the other. Due to a lack of information transparency, the public may lack vital information about the health and safety of products found on supermarket shelves. Had that information been available, some purchasing decisions might have been reversed. Without informative labeling, consumers must make assumptions based on things like price or expertise. For example, one apple juice brand may be assumed healthier because it costs more and because the brand is advertised as “healthy” and “recommended by mothers”. Further, it may be assumed that the product is safe for consumption if it is sitting on a grocery store shelf and probably would not be approved by the government if it contained harmful chemicals. Assumptions such as these could inform a typical purchasing decision, despite their inaccuracy. Given more information, the same brand of apple juice might be less desirable if information on unhealthy preservatives, additives or pesticide residues were easily obtained. To make market transactions more efficient, the government could force more accurate labeling of products, laws could require companies to be more transparent, and the government could require that advertising be less persuasive and more informative. The Green Chemistry Initiative of California would address transparency issues by creating a public chemical inventory and requiring more stringent regulation of chemicals that may be toxic. The CGCI Draft Report suggests a green labeling system to identify consumer products with ingredients harmful to human health and the environment. Stakeholder involvement The United States is the world leader in chemicals manufacturing. As a multibillion-dollar industry, the chemical industry has a leading role in the US economy and because of this, a high level of influence in federal decision-making. Central to the modern world economy, it converts raw materials (oil, natural gas, air, water, metals, and minerals) into more than 70,000 different products. The chemical industry—producers of chemicals, household cleansers, plastics, rubber, paints and explosives—keeps a watchful eye on issues including environmental and health policy, taxes and trade. The industry is often the target of environmental groups, which charge that chemicals and chemical waste are polluting the air and water supply. And like most industries with pollution problems, chemical manufacturers oppose meddlesome government regulations that make it more difficult and expensive for them to do business. So do most Republicans, which is why this industry gives nearly three-fourths of its campaign contributions to the GOP. 
In addition to campaign contributions to elected officials and candidates, companies, labor unions, and other organizations spend billions of dollars each year to lobby Congress and federal agencies. Some special interests retain lobbying firms, many of them located along Washington's legendary K Street; others have lobbyists working in-house. According to OpenSecrets, the total number of clients lobbying for the chemical industry in 2010 was 143, which is the highest number in history. The first group on this list, the American Chemistry Council, spent $8,130,000 on lobbying that year; CropLife America, which came second, spent $2,291,859; FMC Corporation spent $1,230,000; and Koch Industries spent $8,070,000. The chemical industry wants limited testing of chemicals, more lengthy and costly studies of chemicals already proven to be dangerous, and an assumption that we are only exposed to one chemical at a time, and from one source at a time. According to Safer Chemicals, Healthy Families, a broad coalition of groups, including major environmental organizations like the Natural Resources Defense Council and the Environmental Defense Fund, health organizations like the Learning Disabilities Association, Breast Cancer Fund, and the Autism Society of America, health professionals and providers like the American Nurses Association, Planned Parenthood Federation of America, and the Mt. Sinai Children's Environmental Health Center, and concerned parents groups like MomsRising: there is growing national momentum and pressure to change the Toxic Substances Control Act (TSCA), our federal system for overseeing chemical safety, which has not been updated in thirty-five years. Polling data indicates overwhelming support for chemical regulation nationwide. According to polling data conducted by the Mellman Group, 84% say that "tightening controls" on chemical regulation is important, with 50% of those calling it "very important." Public health advocates want public disclosure of safety information for all chemicals in use, prompt action to phase out or reduce the most dangerous chemicals, and safety decisions based on real-world exposure to all sources of toxic chemicals. History In 2008, California Governor Arnold Schwarzenegger signed two state bills authorizing the state to identify toxic chemicals in industry and consumer products and analyze alternatives. AB 1879, written by Assemblyman Mike Feuer, a Los Angeles Democrat, requires the state Department of Toxic Substances Control to assess chemicals and prioritize the most toxic for possible restrictions or bans. The environmental policy council, made up of heads of all state environmental protection agency boards and departments, will oversee the program. SB 509, by Senator Joe Simitian, a Palo Alto Democrat, creates an online toxics information clearinghouse with information about the hazards of thousands of chemicals used in California. These bills are intended to put an end to chemical-by-chemical bans and remove harmful products at the design stage. The regulations are expected to motivate manufacturers of consumer products containing chemicals of concern to seek safer alternatives. 
Supporters of the bill include the California Association of Professional Scientists, the Chemical Industry Council of California, DuPont, BIOCOM, Grocery Manufacturers Association, the Breast Cancer Fund, Catholic Healthcare West, and a broad array of environmental groups such as the Coalition for Clean Air, the Environmental Defense Fund, and the Natural Resources Defense Council. The American Electronics Association (AEA) and Ford spoke in opposition to the bill, each requesting an exemption from its provisions. Also opposing were environmental justice advocates who indicated the bill did not go far enough. Meanwhile, large trade associations such as Consumer Specialty Products Association, Western States Petroleum Association, American Chemistry Council, CA Manufacturers and Technology Association, and CA Chamber of Commerce officially withdrew opposition to the measures. Due to outdated and inefficient or otherwise voluntary chemical regulation at the Federal level, the State of California has decided to take regulation into its own hands and develop stricter, environmentally-informed methodologies for dealing with the production of toxic wastes. California's economy is the largest of any state in the US, and is the eighth largest economy in the world. This position gives California an advantage when it comes to environmental standards: chemical regulation statewide can have a broader impact nationwide if manufacturers desire to stay competitive in California's market. The Green Chemistry Initiative forces statewide industries to comply with greener standards of production, which may spark innovation on a wider basis. The Green Chemistry initiative aims to regulate the creation and use of materials hazardous to human health and the environment by encouraging innovative design and manufacturing, and ultimately safer consumer product alternatives. To develop the regulatory framework, DTSC held a number of stakeholder and public workshops and invited direct public participation in the drafting of regulations on a wiki website. DTSC reportedly received over 57,000 comments and over 800 regulatory suggestions. Regulatory suggestions included industry assessments of risk and safety, alternative chemicals and life-cycle assessments and mandatory industry reporting, full public disclosure of substances contained in products, a green labelling program that would inform consumers of the potential health and environmental impacts of the chemicals contained in products, and a mandated surcharge on chemicals and products to support a fund to address environmental problems. In December 2008, DTSC announced six policy recommendations for the Green Chemistry Initiative. In brief, those recommendations are: (1) expand pollution prevention; (2) develop green chemistry workforce education and training, research and development, and technology transfer; (3) create an online product ingredient network; (4) create an online toxics clearinghouse; (5) accelerate the quest for safer products; and (6) move toward a cradle-to-cradle economy. Two of the six recommendations from this report were adopted: AB 1879 requires the DTSC to implement regulations to identify and prioritize chemicals of concern, evaluate alternatives, and specify regulatory responses where chemicals are found in products. SB 509 requires an online, public toxics information clearinghouse that includes science-based information on the toxicity and hazard traits of chemicals used in daily life. 
Essentially the recommended policy methods include authority tools that would regulate the approval of new chemicals in a more cautious manner as well as mandate the dissemination of information provided by manufacturers to the public; innovation would be encouraged under this paradigm to replace harmful chemicals with greener alternatives, and the California government would fund programs to help industries produce greener chemicals. Secondly, capacity or learning tools would be provided to the public in the form of the online database, giving people the tools to make market decisions that better reflect their interests. Criticism Environmentalists say the amended regulations won't remove toxic products from the shelves and will create "paralysis by analysis," as industries can litigate against DTSC over unfavorable department decisions. Activists say California was poised to lead the way on toxics regulation but now is faced with potentially one of the weakest chemical-regulatory mechanisms in the nation. According to CHANGE (Californians for a Healthy & Green Economy), the revised regulation is a betrayal of the Green Chemistry promise and ignores two years of public input, while caving to backroom industry lobbying. Furthermore, it is a betrayal of public interest groups, businesses, and residents of California and legislators who supported the intent of this bill, to protect Californians and spur a healthy, innovative green economy. Environmentalists say the toxics department gutted the initiative at the behest of the chemical industry, and then put out the changes for public comment during a 15-day period just before Thanksgiving. This was a violation of the law requiring a 45-day public comment period when a substantial reworking of state regulations is proposed. The new Director of California's Department of Toxic Substances Control, Debbie Raphael, announced that mid-October 2011 is the new target date for new draft regulations to implement California's Green Chemistry Law, and new draft guidelines were issued October 31, 2011. The public comment period for the latest version of the draft regulations ends December 30, 2011. Implementation of CGCI has been delayed indefinitely beyond the January 1, 2011 deadline due to issues that arose after public review of the third draft. The third draft, which was made public December 2010, contains substantial revisions, including scaled back manufacturer and retailer compliance requirements that were not well received by the environmental community. DTSC's newest draft has made the following changes: All references to nanotechnology are excluded (nano referring to materials with dimensions of 1,000 nanometers or smaller); this change is notable because the original language would have been considered the most significant attempt to regulate nanomaterials based on environmental or health impacts. The new draft redefines “responsible entities,” which originally referred to the entire business chain of consumer products distribution, including manufacturers, brand name owners, importers, distributors, and retailers; “responsible entities” is now limited to manufacturers and retailers. DTSC prioritizes children's products, personal care products and household products until 2016, and after that point all consumer products. The new proposed regulations also eliminate the requirement that the DTSC develop a list of chemicals under consideration and products under consideration. 
New timeline for implementation of regulations References Green chemistry Pollution in the United States
California Green Chemistry Initiative
[ "Chemistry", "Engineering", "Environmental_science" ]
3,716
[ "Green chemistry", "Chemical engineering", "Environmental chemistry", "nan" ]
31,677,158
https://en.wikipedia.org/wiki/Belt%20dryer
A belt dryer (belt drier), a kind of industrial dryer, is an apparatus which is used for continuous drying and cooling of woodchip, pellets, pastes, moulded compounds and panels using air, inert gas, or flue gas. Working principle A belt dryer / belt cooler is a device designed for the particularly gentle thermal treatment of products. The wet product is continuously and evenly applied through an infeed chamber onto a perforated belt. The belt, predominantly in a horizontal position, carries the product through the drying area, which is divided into several sections. In these cells drying gas flows through or over the wet product and dries it. Each cell can be equipped with a ventilating fan and a heat exchanger. This modular design allows the drying and cooling temperatures to be controlled separately in the different sections. Thus, each dryer cell can be individually controlled and the drying / cooling air flow can be varied in each cell. In addition, the speed of the conveyor belt can be varied, which gives an additional parameter for setting the drying time (see the sizing sketch below). The cells can be heated or cooled directly or indirectly, and all heating media, such as oil, steam, hot water or hot gas, can be used. Belt dryers are ideally suited to drying almost any non-flowing product and more granular products that require a lower throughput capacity. Design features Belt dryers / belt coolers are designed as a modular system. Each belt dryer consists of an infeed hopper, conveyor belt and discharge end. Different kinds of dryers can be constructed, e.g. Single-belt dryer Multi-stage dryer Multi-level dryer Multi-belt dryer Ventilation options In general there are two gas flow patterns. The drying air can flow, according to the treatment process, either through or over the product. Heat Sources HEAT EXCHANGERS: These are commonly used for applications where a biomass heat source is available, such as woodchip boilers producing hot water, or if there is a steam heat source available. OIL OR GAS FIRED BURNERS: If a separate heat source is required, a direct-fired furnace with a diesel, kerosene, LPG or natural gas burner can be used. Alternatively a heat exchanger with the same burner can be used for indirect heating if required. Exemplary conveyor options Chain-guided wire mesh conveyor Chain-guided hinge slat conveyor Chain-guided steel plate conveyor Chainless wire mesh conveyor Feeding variations Granulating mill – filter cake or amorphous and paste-like products respectively Slewing belt conveyor – sensitive and free flowing products Distribution spiral Rotatable arm feeding device – stable products Plates feeding Typical applications Belt dryers are predominantly used in the following industries: Biomass Pelleting Wood industry Chemical industry Anaerobic digestate Pharmaceutical industry Food and feeding-stuff industry Non-metallic minerals industry Plastics industry Ceramics industry Sources and further reading Sattler, Klaus: Thermische Trennverfahren. 3. Aufl. Weinheim: Wiley-VCH, 2001 Draxler, J.: Skriptum zur Vorlesung Thermische Verfahrenstechnik. Leoben: Montanuniversität, 2002 Krischer, O.; Kast, W.: Trocknungstechnik – Die wissenschaftlichen Grundlagen der Trocknungstechnik. 3. Band. 3. Aufl. Berlin: Springer, 1992 References External links Picture of belt dryer for sawdust Picture of belt dryer for grass, forage maize and various biomass products Double Sided Timing Belt Sludge dryer Dryers Belt drives Industrial machinery
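The following rough sizing sketch illustrates the point made under Working principle that the belt speed sets the residence (drying) time. The belt dimensions, feed rate, moisture contents and the simple wet-basis moisture balance are all assumptions made for the example, not manufacturer data.

```python
# Hypothetical single-belt dryer sizing sketch (illustrative numbers only).
belt_length_m = 12.0          # length of the heated drying section
belt_speed_m_per_min = 0.4    # adjustable conveyor drive speed
residence_time_min = belt_length_m / belt_speed_m_per_min

feed_rate_kg_h = 1500.0       # wet product fed through the infeed chamber
moisture_in = 0.55            # wet-basis moisture fraction at the infeed
moisture_out = 0.10           # wet-basis moisture fraction at the discharge end
dry_solids_kg_h = feed_rate_kg_h * (1.0 - moisture_in)
product_kg_h = dry_solids_kg_h / (1.0 - moisture_out)
water_evaporated_kg_h = feed_rate_kg_h - product_kg_h

print(f"residence time: {residence_time_min:.0f} min")
print(f"water to evaporate: {water_evaporated_kg_h:.0f} kg/h")
```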
Belt dryer
[ "Chemistry", "Engineering" ]
757
[ "Dryers", "Chemical equipment", "Industrial machinery" ]
31,680,083
https://en.wikipedia.org/wiki/Map%20symbol
A map symbol or cartographic symbol is a graphical device used to visually represent a real-world feature on a map, working in the same fashion as other forms of symbols. Map symbols may include point markers, lines, regions, continuous fields, or text; these can be designed visually in their shape, size, color, pattern, and other graphic variables to represent a variety of information about each phenomenon being represented. Map symbols simultaneously serve several purposes: Declare the existence of geographic phenomena Show location and extent Visualize attribute information Add to (or detract from) the aesthetic appeal of the map, and/or evoke a particular aesthetic reaction (a "look and feel") Establish an overall gestalt order to make the map more or less useful, including visual hierarchy Representing spatial phenomena Symbols are used to represent geographic phenomena, which exist in, and are represented by, a variety of spatial forms. Different kinds of symbols are used to portray different spatial forms. Phenomena can be categorized a number of ways, but two are most relevant to symbology: ontological form and dimensionality. When a symbol is representing a property of the phenomenon as well as its location, the choice of symbol also depends on the nature of that property, usually classified as a Level of measurement. Ontological form Geographic phenomena can be categorized into objects, which are recognizable as a unified whole with a relevant boundary and shape; and masses, in which the notion of boundary and wholeness are not relevant to their identity. Features such as buildings, cities, roads, lakes, and countries are geographic objects that are often portrayed on maps using symbols. Mass phenomena include air, water, vegetation, and rock. These are rarely represented directly on maps; instead, map symbols portray their properties, which usually take the form of geographic fields, such as temperature, moisture content, density, and composition. Dimensionality The number of spatial dimensions needed to represent a phenomenon determine a choice of Geometric primitive; each type of geometric primitive is drawn with a different type of visual symbol. The dimensionality of a map symbol representing a feature may or may not be the same as the dimensionality of the feature in the real world; discrepancies are the result of cartographic generalization to simplify features based on purpose and scale. For example, a three-dimensional road is often represented as a one-dimensional line symbol, while two-dimensional cities are frequently represented by zero-dimensional points. Level of Measurement of Property Many map symbols visualize not just the location and shape of a geographic phenomenon, but also one or more of its properties or attributes. Geographers and cartographers usually categorize properties according to the classification system of Stanley Smith Stevens, or some revision thereof, such as that of Chrisman. Different kinds of symbols and visual variables are better at intuitively representing some levels than others, especially when the visual variable portrays the same kind of differences as the represented attribute. Cognition and semiotics In cartography, the principles of cognition are important since they explain why certain map symbols work. In the past, mapmakers did not care why the symbols worked. This behaviorist view treats the human brain like a black box. Modern cartographers are curious why certain symbols are the most effective. 
This should help develop a theoretical basis for how brains recognize symbols and, in turn, provide a platform for creating new symbols. According to semiotics, specifically the Semiotic theory of Charles Sanders Peirce, map symbols are "read" by map users when they make a connection between the graphic mark on the map (the sign), a general or specific concept (the interpretant), and a particular feature of the real world (the object or referent). Map symbols can thus be categorized by how they suggest this connection: Iconic symbols (also "image", "pictorial", or "replicative") have a similar appearance to the real-world feature, although it is often in a generalized manner; e.g. a tree icon to represent a forest, brown denoting desert, or green denoting vegetation. Functional symbols (also "representational") directly represent the activity that takes place at the represented feature; e.g. a picture of a skier to represent a ski resort or a tent to represent a campground. Conceptual symbols directly represent a concept related to the represented feature; e.g. a dollar sign to represent an ATM, or a Star of David to represent a Jewish synagogue. Conventional symbols (also "associative") do not have any intuitive relationship but are so commonly used that map readers eventually learn to recognize them; e.g. a red line to represent a highway or a Swiss cross to represent a hospital. Ad hoc symbols (also "abstract") are arbitrary symbols chosen by the cartographer to represent a feature, with no intuitive connection to the interpretant or referent. These can only be interpreted with a legend. An example would be using various colors to represent geologic layers. Visual variables A map symbol is created by altering the visual appearance of a feature, whether a point, line, or region; this appearance can be controlled using one or more visual variables. Jacques Bertin, a French cartographer, developed the concept of visual variables in his 1967 book, "Sémiologie Graphique." Bertin identified seven main categories of visual variables: position, size, shape, value, color, orientation, and texture/grain. Since then, cartographers have modified and expanded this set. Each of these variables may be employed to convey information, to provide contrast between different features and layers, to establish figure-ground contrast and a clear visual hierarchy, or add to the aesthetic appeal of the map. The most common set of visual variables, as canonized in cartography textbooks and the Geographic Information Science and Technology Body of Knowledge, includes the following: Size, how much space a symbol occupies on a map, most commonly refers to the area of point symbols, and the thickness of line symbols, although the cartogram controls the size of area features proportional to a given variable. Size has been shown to be very effective at conveying quantitative data, and in the visual hierarchy. Shape is most commonly discussed in the context of point symbols (as the shapes of lines and areas are typically fixed by geographic reality), and is generally only used to differentiate nominal categories. That said, some maps purposefully manipulate the shape of lines and areas, often for purposes of Cartographic generalization, such as in schematic transit maps, although this distortion is rarely used to convey information, only to reduce emphasis on shape and location. 
Color Hue is the visual property caused by the blending of various wavelengths of light, which we commonly refer to by color names like "red," "green," or "blue." Maps often use hue to differentiate categories of nominal variables, such as land cover types or geologic layers, or for its psychological connotations, such as red implying heat or danger and blue implying cold or water. Color value or lightness, how light or dark an object appears. Value effectively connotes "more" and "less," an ordinal measure; this makes it a very useful form of symbology in thematic maps, especially choropleth maps. Value also contributes strongly to Visual hierarchy; elements that contrast most with the value of the background tend to stand out most (e.g., black on a white sheet of paper, white on a black computer screen). Color saturation/intensity is the purity or intensity of a color, created by the degree of variety of light composing it; a single wavelength of light is of the highest saturation, while white, black, or gray has no saturation (being an even mixture of all visible wavelengths). Saturation has been found to be of marginal value in representing property information, but is very effective at establishing figure-ground and visual hierarchy, with bright colors generally standing out more than muted tones or shades of gray. Orientation, the direction labels and symbols are facing on a map. Although it is not used as often as many of the other visual variables, it can be useful for communicating information about the real-world orientation of features, such as wind direction and the direction in which a spring flows. Pattern or Texture is the aggregation of large numbers of similar symbols into a composite symbol, such as a forest represented by a random scattering of tree icons. In addition to the visual variables that make up the sub-symbols, there are variables for controlling the pattern as a whole: Grain or Spacing, the distance between the individual symbols. Typically seen as similar to value, only with a weaker effect. Arrangement, the pattern of distribution of the sub-symbols, often either random or as a regular grid. Transparency or Opacity, the mathematical blending of symbols of overlapping features, giving the illusion of underlying symbols being partially visible through overlying symbols. This is a recent addition due to software advancements, and is rarely used to convey specific information, but is used increasingly commonly for aiding the visual hierarchy and increasing aesthetic quality. Cartographers have also proposed analogous sets of controllable variables for animated maps, haptic (touch) maps, and even the use of sound in digital maps. Visual hierarchy An important factor in map symbols is the order in which they are ranked according to their relative importance. This is known as intellectual hierarchy. The most important hierarchy is the thematic symbols and type labels that are directly related to the theme. Next comes the title, subtitle, and legend. The map must also contain base information, such as boundaries, roads, and place names. Data source and notes should be on all maps. Lastly, the scale, neat lines, and north arrow are the least important of the hierarchy of the map. From this we see that the symbols are the single most important thing to build a good visual hierarchy that shows proper graphical representation. When producing a map with good visual hierarchy, thematic symbols should be graphically emphasized. 
A map with an effective visual hierarchy attracts the map user's eyes first to the symbols representing the most important aspects of the map, and only later to the symbols of lesser importance. Map legend The legend of the map also contains important information and all of the thematic symbols of the map. Symbols that need no explanation, or do not coincide with the theme of the map, are normally omitted from the map legend. Thematic symbols directly represent the map's theme and should stand out. See also NATO Joint Military Symbology Map coloring References External links Symbolization & the Visual Variables, Topic CV-08 in the 2017 Geographic Information Science and Technology Body of Knowledge Cartography Symbols
Map symbol
[ "Mathematics" ]
2,165
[ "Symbols" ]
31,681,255
https://en.wikipedia.org/wiki/Honda%20pumps
Honda pumps are portable pumps which are manufactured in Japan, India, China and the United States. Pump types All Honda Power Equipment petrol-powered pumps utilize a Honda 4-stroke engine, while the submersible pumps are electrically powered. Volume Volume (or transfer) pumps are designed to move large volumes of clean water economically. References Pumps Honda
Honda pumps
[ "Physics", "Chemistry", "Engineering" ]
77
[ "Pumps", "Turbomachinery", "Physical systems", "Hydraulics", "Mechanical engineering", "Mechanical engineering stubs" ]
1,456,984
https://en.wikipedia.org/wiki/Organic%20synthesis
Organic synthesis is a branch of chemical synthesis concerned with the construction of organic compounds. Organic compounds are molecules consisting of combinations of covalently-linked hydrogen, carbon, oxygen, and nitrogen atoms. Within the general subject of organic synthesis, there are many different types of synthetic routes that can be completed including total synthesis, stereoselective synthesis, automated synthesis, and many more. Additionally, in understanding organic synthesis it is necessary to be familiar with the methodology, techniques, and applications of the subject. Total synthesis A total synthesis refers to the complete chemical synthesis of molecules from simple, natural precursors. Total synthesis is accomplished either via a linear or convergent approach. In a linear synthesis—often adequate for simple structures—several steps are performed sequentially until the molecule is complete; the chemical compounds made in each step are called synthetic intermediates. Most often, each step in a synthesis is a separate reaction taking place to modify the starting materials. For more complex molecules, a convergent synthetic approach may be better suited. This type of reaction scheme involves the individual preparations of several key intermediates, which are then combined to form the desired product. Robert Burns Woodward, who received the 1965 Nobel Prize for Chemistry for several total syntheses including his synthesis of strychnine, is regarded as the grandfather of modern organic synthesis. Some latter-day examples of syntheses include Wender's, Holton's, Nicolaou's, and Danishefsky's total syntheses of the anti-cancer drug paclitaxel (trade name Taxol). Methodology and applications Before beginning any organic synthesis, it is important to understand the chemical reactions, reagents, and conditions required in each step to guarantee successful product formation. When determining optimal reaction conditions for a given synthesis, the goal is to produce an adequate yield of pure product with as few steps as possible. When deciding conditions for a reaction, the literature can offer examples of previous reaction conditions that can be repeated, or a new synthetic route can be developed and tested. For practical, industrial applications additional reaction conditions must be considered to include the safety of both the researchers and the environment, as well as product purity. Synthetic techniques Organic Synthesis requires many steps to separate and purify products. Depending on the chemical state of the product to be isolated, different techniques are required. For liquid products, a very common separation technique is liquid–liquid extraction and for solid products, filtration (gravity or vacuum) can be used. Liquid–liquid extraction Liquid–liquid extraction uses the density and polarity of the product and solvents to perform a separation. Based on the concept of "like-dissolves-like", non-polar compounds are more soluble in non-polar solvents, and polar compounds are more soluble in polar solvents. By using this concept, the relative solubility of compounds can be exploited by adding immiscible solvents into the same flask and separating the product into the solvent with the most similar polarity. Solvent miscibility is of major importance as it allows for the formation of two layers in the flask, one layer containing the side reaction material and one containing the product. 
As a result of the differing densities of the layers, the product-containing layer can be isolated and the other layer can be removed. Heated reactions and reflux condensers Many reactions require heat to increase reaction speed. However, in many situations increased heat can cause the solvent to boil uncontrollably, which negatively affects the reaction, and can potentially reduce product yield. To address this issue, reflux condensers can be fitted to reaction glassware. Reflux condensers are specially calibrated pieces of glassware that possess two inlets for water to run in and out through the glass against gravity. This flow of water cools any escaping vapor and condenses it back into the reaction flask to continue reacting and ensure that all product is contained. The use of reflux condensers is an important technique within organic syntheses and is utilized in reflux steps, as well as recrystallization steps. When being used for refluxing a solution, reflux condensers are fitted and closely observed. Reflux occurs when condensation can be seen dripping back into the reaction flask from the reflux condenser; 1 drop every second or few seconds. For recrystallization, the product-containing solution is equipped with a condenser and brought to reflux again. Reflux is complete when the product-containing solution is clear. Once clear, the reaction is taken off heat and allowed to cool, which will cause the product to re-precipitate, yielding a purer product. Gravity and vacuum filtration Solid products can be separated from a reaction mixture using filtration techniques. To obtain solid products a vacuum filtration apparatus can be used. Vacuum filtration uses suction to pull liquid through a Büchner funnel equipped with filter paper, which catches the desired solid product. This process removes any unwanted solution in the reaction mixture by pulling it into the filtration flask and leaving the desired product to collect on the filter paper. Liquid products can also be separated from solids by using gravity filtration. In this separatory method, filter paper is folded into a funnel and placed on top of a reaction flask. The reaction mixture is then poured through the filter paper, at a rate such that the total volume of liquid in the funnel does not exceed the volume of the funnel. This method allows for the product to be separated from other reaction components by the force of gravity, instead of a vacuum. Stereoselective synthesis Most complex natural products are chiral, and the bioactivity of chiral molecules varies with the enantiomer. Some total syntheses target racemic mixtures, which are mixtures of both possible enantiomers. A single enantiomer can then be selected via enantiomeric resolution. As chemistry has developed, methods of stereoselective catalysis and kinetic resolution have been introduced whereby reactions can be directed to produce only one enantiomer rather than a racemic mixture. Early examples include stereoselective hydrogenations (e.g., as reported by William Knowles and Ryōji Noyori) and functional group modifications such as the asymmetric epoxidation by Barry Sharpless; for these advancements in stereochemical preference, these chemists were awarded the Nobel Prize in Chemistry in 2001. Such preferential stereochemical reactions give chemists a much more diverse choice of enantiomerically pure materials. Using techniques developed by Robert B. 
Woodward, paired with advancements in synthetic methodology, chemists have been able to selectively synthesize stereochemically complex molecules without racemization. Stereocontrol allows the target molecules to be synthesized as pure enantiomers (i.e., without need for resolution). Such techniques are referred to as stereoselective synthesis. Synthesis design Many synthetic procedures are developed from a retrosynthetic framework, a type of synthetic design developed by Elias James Corey, for which he won the Nobel Prize in Chemistry in 1990. In this approach, the synthesis is planned backwards from the product, in accordance with standard chemical rules. Each step breaks down the parent structure into achievable components, which are shown via the use of graphical schemes with retrosynthetic arrows (drawn as ⇒, which in effect means "is made from"). Retrosynthesis allows for the visualization of desired synthetic designs. Automated organic synthesis A recent development within organic synthesis is automated synthesis. To conduct organic synthesis without human involvement, researchers are adapting existing synthetic methods and techniques to create entirely automated synthetic processes using organic synthesis software. This type of synthesis is advantageous as synthetic automation can increase yield with continual "flowing" reactions. In flow chemistry, substrates are continually fed into the reaction to produce a higher yield. Previously, this type of reaction was reserved for large-scale industrial chemistry but has recently transitioned to bench-scale chemistry to improve the efficiency of reactions on a smaller scale. Currently integrating automated synthesis into their work is SRI International, a nonprofit research institute. Recently, SRI International developed AutoSyn, an automated multi-step chemical synthesizer that can synthesize many FDA-approved small molecule drugs. This synthesizer demonstrates the versatility of substrates and the capacity to potentially expand the type of research conducted on novel drug molecules without human intervention. Automated chemistry and the automated synthesizers used demonstrate a potential direction for synthetic chemistry in the future. Characterization Necessary to organic synthesis is characterization. Characterization refers to the measurement of chemical and physical properties of a given compound, and comes in many forms. Examples of common characterization methods include: nuclear magnetic resonance (NMR), mass spectrometry, Fourier-transform infrared spectroscopy (FTIR), and melting point analysis. Each of these techniques allows a chemist to obtain structural information about a newly synthesized organic compound. Depending on the nature of the product, the characterization method used can vary. Relevance Organic synthesis is an important chemical process that is integral to many scientific fields. Examples of fields beyond chemistry that require organic synthesis include the medical industry, pharmaceutical industry, and many more. Organic processes allow for the industrial-scale creation of pharmaceutical products. An example of such a synthesis is ibuprofen. Ibuprofen can be synthesized from a series of reactions including: reduction, acidification, formation of a Grignard reagent, and carboxylation. In the synthesis of ibuprofen proposed by Kjonaas et al., p-isobutylacetophenone, the starting material, is reduced with sodium borohydride (NaBH4) to form an alcohol functional group. 
The resulting intermediate is treated with HCl, converting the alcohol to a chloride group. The chloride group is then reacted with magnesium turnings to form a Grignard reagent. This Grignard reagent is carboxylated, and the resulting product is worked up to give ibuprofen. This synthetic route is just one of many medically and industrially relevant reactions that have been developed and continue to be used. See also Automated synthesis Electrosynthesis Methods in Organic Synthesis (journal) Organic Syntheses (journal) References Further reading External links The Organic Synthesis Archive Chemical synthesis database https://web.archive.org/web/20070927231356/http://www.webreactions.net/search.html https://www.organic-chemistry.org/synthesis/ Prof. Hans Reich's collection of natural product syntheses Chemical synthesis semantic wiki
Organic synthesis
[ "Chemistry" ]
2,181
[ "Organic synthesis", "Chemical synthesis" ]
1,458,192
https://en.wikipedia.org/wiki/Green%E2%80%93Kubo%20relations
The Green–Kubo relations (Melville S. Green 1954, Ryogo Kubo 1957) give the exact mathematical expression for a transport coefficient $\gamma$ in terms of the integral of the equilibrium time correlation function of the time derivative of a corresponding microscopic variable $A$ (sometimes termed a "gross variable"): $\gamma = \int_0^\infty \langle \dot{A}(t)\,\dot{A}(0)\rangle\, dt.$ One intuitive way to understand this relation is that relaxations resulting from random fluctuations in equilibrium are indistinguishable from those due to an external perturbation in linear response. Green-Kubo relations are important because they relate a macroscopic transport coefficient to the correlation function of a microscopic variable. In addition, they allow one to measure the transport coefficient without perturbing the system out of equilibrium, which has found much use in molecular dynamics simulations. Thermal and mechanical transport processes Thermodynamic systems may be prevented from relaxing to equilibrium because of the application of a field (e.g. electric or magnetic field), or because the boundaries of the system are in relative motion (shear) or maintained at different temperatures, etc. This generates two classes of nonequilibrium system: mechanical nonequilibrium systems and thermal nonequilibrium systems. The standard example of an electrical transport process is Ohm's law, which states that, at least for sufficiently small applied voltages, the current I is linearly proportional to the applied voltage V, $I = GV.$ As the applied voltage increases one expects to see deviations from linear behavior. The coefficient of proportionality $G$ is the electrical conductance, which is the reciprocal of the electrical resistance. The standard example of a mechanical transport process is Newton's law of viscosity, which states that the shear stress $S_{xy}$ is linearly proportional to the strain rate. The strain rate $\dot\gamma$ is the rate of change of the streaming velocity in the x-direction with respect to the y-coordinate, $\dot\gamma = \partial u_x/\partial y$. Newton's law of viscosity states $S_{xy} = \eta\,\dot\gamma,$ where $\eta$ is the shear viscosity. As the strain rate increases we expect to see deviations from linear behavior. Another well known thermal transport process is Fourier's law of heat conduction, stating that the heat flux between two bodies maintained at different temperatures is proportional to the temperature gradient (the temperature difference divided by the spatial separation). Linear constitutive relation Regardless of whether transport processes are stimulated thermally or mechanically, in the small field limit it is expected that a flux will be linearly proportional to an applied field. In the linear case the flux and the force are said to be conjugate to each other. The relation between a thermodynamic force F and its conjugate thermodynamic flux J is called a linear constitutive relation, $J = L(0)\,F;$ $L(0)$ is called a linear transport coefficient. In the case of multiple forces and fluxes acting simultaneously, the fluxes and forces will be related by a linear transport coefficient matrix. Except in special cases, this matrix is symmetric as expressed in the Onsager reciprocal relations. In the 1950s Green and Kubo proved an exact expression for linear transport coefficients which is valid for systems of arbitrary temperature T, and density. They proved that linear transport coefficients are exactly related to the time dependence of equilibrium fluctuations in the conjugate flux, $L(0) = \beta V \int_0^\infty \langle J(0)\,J(t)\rangle_{\mathrm{eq}}\, dt,$ where $\beta = 1/(kT)$ (with k the Boltzmann constant), and V is the system volume. The integral is over the equilibrium flux autocovariance function. 
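As a minimal illustration of how such a relation is evaluated in practice, the sketch below estimates a transport coefficient by integrating an equilibrium autocorrelation function computed from a time series. The specific case shown, the self-diffusion constant obtained from the velocity autocorrelation function of an Ornstein-Uhlenbeck velocity, and all parameter values are assumptions chosen so that the exact answer is known; it is not taken from the text above.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, kT_over_m, dt, nsteps = 2.0, 1.0, 1.0e-3, 500_000

# Equilibrium "trajectory": Ornstein-Uhlenbeck velocity with relaxation rate
# gamma, integrated by Euler-Maruyama: dv = -gamma*v*dt + sqrt(2*gamma*kT/m)*dW.
v = np.empty(nsteps)
v[0] = 0.0
kicks = rng.normal(scale=np.sqrt(2.0 * gamma * kT_over_m * dt), size=nsteps)
for i in range(1, nsteps):
    v[i] = v[i - 1] * (1.0 - gamma * dt) + kicks[i]

# Velocity autocorrelation function <v(0) v(t)> out to roughly 5 relaxation times.
max_lag = int(5.0 / (gamma * dt))
vacf = np.array([np.mean(v[:nsteps - k] * v[k:]) for k in range(max_lag)])

# Green-Kubo integral: D = integral of <v(0) v(t)> dt, exactly kT/(m*gamma) here.
D_est = float(np.sum(vacf) * dt)
print(f"Green-Kubo estimate D = {D_est:.3f}, exact D = {kT_over_m / gamma:.3f}")
```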
At zero time the autocovariance is positive since it is the mean square value of the flux at equilibrium. Note that at equilibrium the mean value of the flux is zero by definition. At long times the flux at time t, J(t), is uncorrelated with its value a long time earlier J(0) and the autocorrelation function decays to zero. This remarkable relation is frequently used in molecular dynamics computer simulation to compute linear transport coefficients; see Evans and Morriss, "Statistical Mechanics of Nonequilibrium Liquids", Academic Press 1990. Nonlinear response and transient time correlation functions In 1985 Denis Evans and Morriss derived two exact fluctuation expressions for nonlinear transport coefficients—see Evans and Morriss in Mol. Phys, 54, 629(1985). Evans later argued that these are consequences of the extremization of free energy in Response theory as a free energy minimum. Evans and Morriss proved that in a thermostatted system that is at equilibrium at t = 0, the nonlinear transport coefficient can be calculated from the so-called transient time correlation function expression: $L(F_e) = \beta V \int_0^\infty \langle J(t)\,J(0)\rangle_{F_e}\, dt,$ where the equilibrium ($F_e = 0$) flux autocorrelation function is replaced by a thermostatted field-dependent transient autocorrelation function. At time zero $\langle J(0)\rangle = 0$, but at later times, since the field is applied, $\langle J(t)\rangle \ne 0$. Another exact fluctuation expression derived by Evans and Morriss is the so-called Kawasaki expression for the nonlinear response. The ensemble average of the right hand side of the Kawasaki expression is to be evaluated under the application of both the thermostat and the external field. At first sight the transient time correlation function (TTCF) and Kawasaki expression might appear to be of limited use—because of their innate complexity. However, the TTCF is quite useful in computer simulations for calculating transport coefficients. Both expressions can be used to derive new and useful fluctuation expressions for quantities like specific heats in nonequilibrium steady states. Thus they can be used as a kind of partition function for nonequilibrium steady states. Derivation from the fluctuation theorem and the central limit theorem For a thermostatted steady state, time integrals of the dissipation function are related to the dissipative flux, J, by the equation $\bar{\Omega}_t = -\beta V F_e \bar{J}_t.$ We note in passing that the long time average of the dissipation function is a product of the thermodynamic force and the average conjugate thermodynamic flux. It is therefore equal to the spontaneous entropy production in the system. The spontaneous entropy production plays a key role in linear irreversible thermodynamics – see de Groot and Mazur "Non-equilibrium thermodynamics" Dover. The fluctuation theorem (FT) is valid for arbitrary averaging times, t. Let's apply the FT in the long time limit while simultaneously reducing the field so that the product $F_e^2\, t$ is held constant. Because of the particular way we take the double limit, the negative of the mean value of the flux remains a fixed number of standard deviations away from the mean as the averaging time increases (narrowing the distribution) and the field decreases. This means that as the averaging time gets longer the distribution near the mean flux and its negative is accurately described by the central limit theorem. This means that the distribution is Gaussian near the mean and its negative, so that the fluctuation theorem can be combined with the Gaussian form of the distribution. Combining these two relations yields (after some tedious algebra!) 
the exact Green–Kubo relation for the linear zero field transport coefficient, namely, $L(0) = \beta V \int_0^\infty \langle J(0)\,J(t)\rangle_{F_e = 0}\, dt.$ A proof using only elementary quantum mechanics was given by Robert Zwanzig. Summary This shows the fundamental importance of the fluctuation theorem (FT) in nonequilibrium statistical mechanics. The FT gives a generalisation of the second law of thermodynamics. It is then easy to prove the second law inequality and the Kawasaki identity. When combined with the central limit theorem, the FT also implies the Green–Kubo relations for linear transport coefficients close to equilibrium. The FT is, however, more general than the Green–Kubo relations because, unlike them, the FT applies to fluctuations far from equilibrium. In spite of this fact, no one has yet been able to derive the equations for nonlinear response theory from the FT. The FT does not imply or require that the distribution of time-averaged dissipation is Gaussian. There are many examples known when the distribution is non-Gaussian and yet the FT still correctly describes the probability ratios. See also Density matrix Fluctuation theorem Fluctuation–dissipation theorem Green's function (many-body theory) Lindblad equation Linear response function References Theoretical physics Thermodynamic equations Statistical mechanics Non-equilibrium thermodynamics
Green–Kubo relations
[ "Physics", "Chemistry", "Mathematics" ]
1,658
[ "Thermodynamic equations", "Equations of physics", "Non-equilibrium thermodynamics", "Theoretical physics", "Thermodynamics", "Statistical mechanics", "Dynamical systems" ]
1,459,010
https://en.wikipedia.org/wiki/Stationary%20phase%20approximation
In mathematics, the stationary phase approximation is a basic principle of asymptotic analysis, applying to functions given by integration against a rapidly-varying complex exponential. This method originates from the 19th century, and is due to George Gabriel Stokes and Lord Kelvin. It is closely related to Laplace's method and the method of steepest descent, but Laplace's contribution precedes the others. Basics The main idea of stationary phase methods relies on the cancellation of sinusoids with rapidly varying phase. If many sinusoids have the same phase and they are added together, they will add constructively. If, however, these same sinusoids have phases which change rapidly as the frequency changes, they will add incoherently, varying between constructive and destructive addition at different times. Formula Letting denote the set of critical points of the function (i.e. points where ), under the assumption that is either compactly supported or has exponential decay, and that all critical points are nondegenerate (i.e. for ) we have the following asymptotic formula, as : Here denotes the Hessian of , and denotes the signature of the Hessian, i.e. the number of positive eigenvalues minus the number of negative eigenvalues. For , this reduces to: In this case the assumptions on reduce to all the critical points being non-degenerate. This is just the Wick-rotated version of the formula for the method of steepest descent. An example Consider a function . The phase term in this function, , is stationary when or equivalently, . Solutions to this equation yield dominant frequencies for some and . If we expand as a Taylor series about and neglect terms of order higher than , we have where denotes the second derivative of . When is relatively large, even a small difference will generate rapid oscillations within the integral, leading to cancellation. Therefore we can extend the limits of integration beyond the limit for a Taylor expansion. If we use the formula, . . This integrates to . Reduction steps The first major general statement of the principle involved is that the asymptotic behaviour of I(k) depends only on the critical points of f. If by choice of g the integral is localised to a region of space where f has no critical point, the resulting integral tends to 0 as the frequency of oscillations is taken to infinity. See for example Riemann–Lebesgue lemma. The second statement is that when f is a Morse function, so that the singular points of f are non-degenerate and isolated, then the question can be reduced to the case n = 1. In fact, then, a choice of g can be made to split the integral into cases with just one critical point P in each. At that point, because the Hessian determinant at P is by assumption not 0, the Morse lemma applies. By a change of co-ordinates f may be replaced by . The value of j is given by the signature of the Hessian matrix of f at P. As for g, the essential case is that g is a product of bump functions of xi. Assuming now without loss of generality that P is the origin, take a smooth bump function h with value 1 on the interval and quickly tending to 0 outside it. Take , then Fubini's theorem reduces I(k) to a product of integrals over the real line like with f(x) = ±x2. The case with the minus sign is the complex conjugate of the case with the plus sign, so there is essentially one required asymptotic estimate. In this way asymptotics can be found for oscillatory integrals for Morse functions. 
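The one-dimensional version of the formula is easy to check numerically. The sketch below is purely illustrative (the choices g(x) = exp(−x²), f(x) = x² and all function names are assumptions, not from the article): it compares direct quadrature of the oscillatory integral with the leading-order stationary-phase estimate, and the discrepancy shrinks roughly like 1/k.

```python
import numpy as np

# I(k) = integral of g(x) * exp(i k f(x)) dx with g(x) = exp(-x**2) (rapid decay)
# and f(x) = x**2: single nondegenerate stationary point x0 = 0, f''(x0) = 2, sigma = +1.

def quadrature(k, x=np.linspace(-8, 8, 1_600_001)):
    g = np.exp(-x**2)
    return np.trapz(g * np.exp(1j * k * x**2), x)

def leading_order(k, x0=0.0, fpp=2.0, sigma=+1):
    g0, f0 = 1.0, 0.0            # g(x0) and f(x0) for this particular example
    return (g0 * np.exp(1j * k * f0)
            * np.sqrt(2 * np.pi / (k * abs(fpp)))
            * np.exp(1j * sigma * np.pi / 4))

for k in (10, 100, 1000):
    I, A = quadrature(k), leading_order(k)
    print(f"k={k:5d}  |I|={abs(I):.5f}  |approx|={abs(A):.5f}  "
          f"rel. error={abs(I - A) / abs(I):.2%}")
```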
The degenerate case requires further techniques (see for example Airy function). One-dimensional case The essential statement is this one: . In fact by contour integration it can be shown that the main term on the right hand side of the equation is the value of the integral on the left hand side, extended over the range (for a proof see Fresnel integral). Therefore it is the question of estimating away the integral over, say, . This is the model for all one-dimensional integrals with having a single non-degenerate critical point at which has second derivative . In fact the model case has second derivative 2 at 0. In order to scale using , observe that replacing by where is constant is the same as scaling by . It follows that for general values of , the factor becomes . For one uses the complex conjugate formula, as mentioned before. Lower-order terms As can be seen from the formula, the stationary phase approximation is a first-order approximation of the asymptotic behavior of the integral. The lower-order terms can be understood as a sum of over Feynman diagrams with various weighting factors, for well behaved . See also Common integrals in quantum field theory Laplace's method Method of steepest descent Notes References Bleistein, N. and Handelsman, R. (1975), Asymptotic Expansions of Integrals, Dover, New York. Victor Guillemin and Shlomo Sternberg (1990), Geometric Asymptotics, (see Chapter 1). . Aki, Keiiti; & Richards, Paul G. (2002), Quantitative Seismology (2nd ed.), pp 255–256. University Science Books, Wong, R. (2001), Asymptotic Approximations of Integrals, Classics in Applied Mathematics, Vol. 34. Corrected reprint of the 1989 original. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA. xviii+543 pages, . Dieudonné, J. (1980), Calcul Infinitésimal, Hermann, Paris Paris, Richard Bruce (2011), Hadamard Expansions and Hyperasymptotic Evaluation: An Extension of the Method of Steepest Descents, Cambridge University Press, ISBN 978-1-107-00258-6 External links Mathematical analysis Perturbation theory
Stationary phase approximation
[ "Physics", "Mathematics" ]
1,260
[ "Mathematical analysis", "Quantum mechanics", "Perturbation theory" ]
1,460,126
https://en.wikipedia.org/wiki/Chromatic%20polynomial
The chromatic polynomial is a graph polynomial studied in algebraic graph theory, a branch of mathematics. It counts the number of graph colorings as a function of the number of colors and was originally defined by George David Birkhoff to study the four color problem. It was generalised to the Tutte polynomial by Hassler Whitney and W. T. Tutte, linking it to the Potts model of statistical physics. History George David Birkhoff introduced the chromatic polynomial in 1912, defining it only for planar graphs, in an attempt to prove the four color theorem. If denotes the number of proper colorings of G with k colors then one could establish the four color theorem by showing for all planar graphs G. In this way he hoped to apply the powerful tools of analysis and algebra for studying the roots of polynomials to the combinatorial coloring problem. Hassler Whitney generalised Birkhoff’s polynomial from the planar case to general graphs in 1932. In 1968, Ronald C. Read asked which polynomials are the chromatic polynomials of some graph, a question that remains open, and introduced the concept of chromatically equivalent graphs. Today, chromatic polynomials are one of the central objects of algebraic graph theory. Definition For a graph G, counts the number of its (proper) vertex k-colorings. Other commonly used notations include , , or . There is a unique polynomial which evaluated at any integer k ≥ 0 coincides with ; it is called the chromatic polynomial of G. For example, to color the path graph on 3 vertices with k colors, one may choose any of the k colors for the first vertex, any of the remaining colors for the second vertex, and lastly for the third vertex, any of the colors that are different from the second vertex's choice. Therefore, is the number of k-colorings of . For a variable x (not necessarily integer), we thus have . (Colorings which differ only by permuting colors or by automorphisms of G are still counted as different.) Deletion–contraction The fact that the number of k-colorings is a polynomial in k follows from a recurrence relation called the deletion–contraction recurrence or Fundamental Reduction Theorem. It is based on edge contraction: for a pair of vertices and the graph is obtained by merging the two vertices and removing any edges between them. If and are adjacent in G, let denote the graph obtained by removing the edge . Then the numbers of k-colorings of these graphs satisfy: Equivalently, if and are not adjacent in G and is the graph with the edge added, then This follows from the observation that every k-coloring of G either gives different colors to and , or the same colors. In the first case this gives a (proper) k-coloring of , while in the second case it gives a coloring of . Conversely, every k-coloring of G can be uniquely obtained from a k-coloring of or (if and are not adjacent in G). The chromatic polynomial can hence be recursively defined as for the edgeless graph on n vertices, and for a graph G with an edge (arbitrarily chosen). Since the number of k-colorings of the edgeless graph is indeed , it follows by induction on the number of edges that for all G, the polynomial coincides with the number of k-colorings at every integer point x = k. In particular, the chromatic polynomial is the unique interpolating polynomial of degree at most n through the points Tutte’s curiosity about which other graph invariants satisfied such recurrences led him to discover a bivariate generalization of the chromatic polynomial, the Tutte polynomial . 
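The deletion–contraction recurrence translates directly into a short (exponential-time) recursive procedure. The sketch below is a minimal illustration rather than a reference implementation — the graph encoding and all names are assumptions — and it reproduces the path-graph example from the text.

```python
from sympy import symbols, expand, factor

k = symbols("k")

def chromatic_polynomial(vertices, edges):
    """Deletion-contraction: P(G, k) = P(G - e, k) - P(G / e, k).
    `vertices` is a set of labels, `edges` a set of frozenset({u, v}) pairs."""
    if not edges:
        return k ** len(vertices)            # edgeless graph on n vertices: k**n
    e = next(iter(edges))
    u, v = tuple(e)
    deletion = chromatic_polynomial(vertices, edges - {e})
    # Contract e: merge v into u, drop the contracted edge, merge parallel copies
    contracted = {frozenset(u if w == v else w for w in f) for f in edges - {e}}
    contracted = {f for f in contracted if len(f) == 2}
    contraction = chromatic_polynomial(vertices - {v}, contracted)
    return expand(deletion - contraction)

P3 = chromatic_polynomial({1, 2, 3}, {frozenset({1, 2}), frozenset({2, 3})})
K3 = chromatic_polynomial({1, 2, 3},
                          {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})})
print(factor(P3))   # k*(k - 1)**2      -- the path on 3 vertices, as in the text
print(factor(K3))   # k*(k - 1)*(k - 2) -- the triangle
```

Contracting an edge can create parallel edges; storing edges as a set of two-element frozensets silently merges them, which is harmless here because the chromatic polynomial depends only on the underlying simple graph.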
Examples Properties For fixed G on n vertices, the chromatic polynomial is a monic polynomial of degree exactly n, with integer coefficients. The chromatic polynomial includes at least as much information about the colorability of G as does the chromatic number. Indeed, the chromatic number is the smallest positive integer that is not a zero of the chromatic polynomial, The polynomial evaluated at , that is , yields times the number of acyclic orientations of G. The derivative evaluated at 1, equals the chromatic invariant up to sign. If G has n vertices and c components , then The coefficients of are zeros. The coefficients of are all non-zero and alternate in signs. The coefficient of is 1 (the polynomial is monic). The coefficient of is We prove this via induction on the number of edges on a simple graph G with vertices and edges. When , G is an empty graph. Hence per definition . So the coefficient of is , which implies the statement is true for an empty graph. When , as in G has just a single edge, . Thus coefficient of is . So the statement holds for k = 1. Using strong induction assume the statement is true for . Let G have edges. By the contraction-deletion principle, Let and Hence .Since is obtained from G by removal of just one edge e, , so and thus the statement is true for k. The coefficient of is times the number of acyclic orientations that have a unique sink, at a specified, arbitrarily chosen vertex. The absolute values of coefficients of every chromatic polynomial form a log-concave sequence. The last property is generalized by the fact that if G is a k-clique-sum of and (i.e., a graph obtained by gluing the two at a clique on k vertices), then A graph G with n vertices is a tree if and only if Chromatic equivalence Two graphs are said to be chromatically equivalent if they have the same chromatic polynomial. Isomorphic graphs have the same chromatic polynomial, but non-isomorphic graphs can be chromatically equivalent. For example, all trees on n vertices have the same chromatic polynomial. In particular, is the chromatic polynomial of both the claw graph and the path graph on 4 vertices. A graph is chromatically unique if it is determined by its chromatic polynomial, up to isomorphism. In other words, G is chromatically unique, then would imply that G and H are isomorphic. All cycle graphs are chromatically unique. Chromatic roots A root (or zero) of a chromatic polynomial, called a “chromatic root”, is a value x where . Chromatic roots have been very well studied, in fact, Birkhoff’s original motivation for defining the chromatic polynomial was to show that for planar graphs, for x ≥ 4. This would have established the four color theorem. No graph can be 0-colored, so 0 is always a chromatic root. Only edgeless graphs can be 1-colored, so 1 is a chromatic root of every graph with at least one edge. On the other hand, except for these two points, no graph can have a chromatic root at a real number smaller than or equal to 32/27. A result of Tutte connects the golden ratio with the study of chromatic roots, showing that chromatic roots exist very close to : If is a planar triangulation of a sphere then While the real line thus has large parts that contain no chromatic roots for any graph, every point in the complex plane is arbitrarily close to a chromatic root in the sense that there exists an infinite family of graphs whose chromatic roots are dense in the complex plane. 
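Chromatic equivalence is easy to confirm by brute force for small graphs. The sketch below (graph encodings and names are illustrative assumptions) counts proper k-colorings of the claw and of the path on 4 vertices directly and checks both against k(k − 1)³, the common chromatic polynomial of all trees on 4 vertices mentioned above.

```python
from itertools import product

def count_colorings(n, edges, k):
    """Brute-force count of proper k-colorings of a graph on vertices 0..n-1."""
    return sum(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n))

claw = [(0, 1), (0, 2), (0, 3)]          # K_{1,3}: centre 0 joined to 1, 2, 3
path = [(0, 1), (1, 2), (2, 3)]          # P_4

for k in range(1, 7):
    a, b = count_colorings(4, claw, k), count_colorings(4, path, k)
    print(k, a, b, k * (k - 1) ** 3)     # the last three columns agree for every k
```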
Colorings using all colors For a graph G on n vertices, let denote the number of colorings using exactly k colors up to renaming colors (so colorings that can be obtained from one another by permuting colors are counted as one; colorings obtained by automorphisms of G are still counted separately). In other words, counts the number of partitions of the vertex set into k (non-empty) independent sets. Then counts the number of colorings using exactly k colors (with distinguishable colors). For an integer x, all x-colorings of G can be uniquely obtained by choosing an integer k ≤ x, choosing k colors to be used out of x available, and a coloring using exactly those k (distinguishable) colors. Therefore: where denotes the falling factorial. Thus the numbers are the coefficients of the polynomial in the basis of falling factorials. Let be the k-th coefficient of in the standard basis , that is: Stirling numbers give a change of basis between the standard basis and the basis of falling factorials. This implies:   and Categorification The chromatic polynomial is categorified by a homology theory closely related to Khovanov homology. Algorithms Computational problems associated with the chromatic polynomial include finding the chromatic polynomial of a given graph G; evaluating at a fixed x for given G. The first problem is more general because if we knew the coefficients of we could evaluate it at any point in polynomial time because the degree is n. The difficulty of the second type of problem depends strongly on the value of x and has been intensively studied in computational complexity. When x is a natural number, this problem is normally viewed as computing the number of x-colorings of a given graph. For example, this includes the problem #3-coloring of counting the number of 3-colorings, a canonical problem in the study of complexity of counting, complete for the counting class #P. Efficient algorithms For some basic graph classes, closed formulas for the chromatic polynomial are known. For instance this is true for trees and cliques, as listed in the table above. Polynomial time algorithms are known for computing the chromatic polynomial for wider classes of graphs, including chordal graphs and graphs of bounded clique-width. The latter class includes cographs and graphs of bounded tree-width, such as outerplanar graphs. Deletion–contraction The deletion-contraction recurrence gives a way of computing the chromatic polynomial, called the deletion–contraction algorithm. In the first form (with a minus), the recurrence terminates in a collection of empty graphs. In the second form (with a plus), it terminates in a collection of complete graphs. This forms the basis of many algorithms for graph coloring. The ChromaticPolynomial function in the Combinatorica package of the computer algebra system Mathematica uses the second recurrence if the graph is dense, and the first recurrence if the graph is sparse. The worst case running time of either formula satisfies the same recurrence relation as the Fibonacci numbers, so in the worst case, the algorithm runs in time within a polynomial factor of on a graph with n vertices and m edges. The analysis can be improved to within a polynomial factor of the number of spanning trees of the input graph. In practice, branch and bound strategies and graph isomorphism rejection are employed to avoid some recursive calls, the running time depends on the heuristic used to pick the vertex pair. 
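The expansion of the chromatic polynomial in falling factorials can likewise be verified on a small example. The sketch below (the choice of the path on 3 vertices and all names are assumptions) counts colorings that use exactly k distinguishable colors by inclusion–exclusion, divides by k! to obtain the number of partitions into k independent sets, and recombines them with falling factorials to recover the ordinary count of x-colorings.

```python
from itertools import product
from math import comb, factorial

def proper_colorings(n, edges, x):
    """Number of proper colorings from a palette of x distinguishable colors."""
    return sum(all(c[u] != c[v] for u, v in edges)
               for c in product(range(x), repeat=n))

def colorings_with_exactly(n, edges, k):
    """Proper colorings that use every one of k colors (inclusion-exclusion)."""
    return sum((-1) ** j * comb(k, j) * proper_colorings(n, edges, k - j)
               for j in range(k + 1))

edges, n = [(0, 1), (1, 2)], 3            # the path on 3 vertices

for x in range(7):
    via_partitions = sum(
        (colorings_with_exactly(n, edges, k) // factorial(k))    # partitions into k classes
        * (factorial(x) // factorial(x - k) if x >= k else 0)    # falling factorial x^(k)
        for k in range(n + 1))
    print(x, proper_colorings(n, edges, x), via_partitions)      # the two counts agree
```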
Cube method There is a natural geometric perspective on graph colorings by observing that, as an assignment of natural numbers to each vertex, a graph coloring is a vector in the integer lattice. Since two vertices and being given the same color is equivalent to the ’th and ’th coordinate in the coloring vector being equal, each edge can be associated with a hyperplane of the form . The collection of such hyperplanes for a given graph is called its graphic arrangement. The proper colorings of a graph are those lattice points which avoid forbidden hyperplanes. Restricting to a set of colors, the lattice points are contained in the cube . In this context the chromatic polynomial counts the number of lattice points in the -cube that avoid the graphic arrangement. Computational complexity The problem of computing the number of 3-colorings of a given graph is a canonical example of a #P-complete problem, so the problem of computing the coefficients of the chromatic polynomial is #P-hard. Similarly, evaluating for given G is #P-complete. On the other hand, for it is easy to compute , so the corresponding problems are polynomial-time computable. For integers the problem is #P-hard, which is established similar to the case . In fact, it is known that is #P-hard for all x (including negative integers and even all complex numbers) except for the three “easy points”. Thus, from the perspective of #P-hardness, the complexity of computing the chromatic polynomial is completely understood. In the expansion the coefficient is always equal to 1, and several other properties of the coefficients are known. This raises the question if some of the coefficients are easy to compute. However the computational problem of computing ar for a fixed r ≥ 1 and a given graph G is #P-hard, even for bipartite planar graphs. No approximation algorithms for computing are known for any x except for the three easy points. At the integer points , the corresponding decision problem of deciding if a given graph can be k-colored is NP-hard. Such problems cannot be approximated to any multiplicative factor by a bounded-error probabilistic algorithm unless NP = RP, because any multiplicative approximation would distinguish the values 0 and 1, effectively solving the decision version in bounded-error probabilistic polynomial time. In particular, under the same assumption, this rules out the possibility of a fully polynomial time randomised approximation scheme (FPRAS). There is no FPRAS for computing for any x > 2, unless NP = RP holds. See also Chromatic symmetric function Notes References . External links PlanetMath Chromatic polynomial Code for computing Tutte, Chromatic and Flow Polynomials by Gary Haggard, David J. Pearce and Gordon Royle: Graph invariants Graph coloring
Chromatic polynomial
[ "Mathematics" ]
2,832
[ "Graph invariants", "Graph coloring", "Mathematical relations", "Graph theory" ]
1,460,172
https://en.wikipedia.org/wiki/Cyclic%20homology
In noncommutative geometry and related branches of mathematics, cyclic homology and cyclic cohomology are certain (co)homology theories for associative algebras which generalize the de Rham (co)homology of manifolds. These notions were independently introduced by Boris Tsygan (homology) and Alain Connes (cohomology) in the 1980s. These invariants have many interesting relationships with several older branches of mathematics, including de Rham theory, Hochschild (co)homology, group cohomology, and the K-theory. Contributors to the development of the theory include Max Karoubi, Yuri L. Daletskii, Boris Feigin, Jean-Luc Brylinski, Mariusz Wodzicki, Jean-Louis Loday, Victor Nistor, Daniel Quillen, Joachim Cuntz, Ryszard Nest, Ralf Meyer, and Michael Puschnigg. Hints about definition The first definition of the cyclic homology of a ring A over a field of characteristic zero, denoted HCn(A) or Hnλ(A), proceeded by the means of the following explicit chain complex related to the Hochschild homology complex of A, called the Connes complex: For any natural number n ≥ 0, define the operator which generates the natural cyclic action of on the n-th tensor product of A: Recall that the Hochschild complex groups of A with coefficients in A itself are given by setting for all n ≥ 0. Then the components of the Connes complex are defined as , and the differential is the restriction of the Hochschild differential to this quotient. One can check that the Hochschild differential does indeed factor through to this space of coinvariants. Connes later found a more categorical approach to cyclic homology using a notion of cyclic object in an abelian category, which is analogous to the notion of simplicial object. In this way, cyclic homology (and cohomology) may be interpreted as a derived functor, which can be explicitly computed by the means of the (b, B)-bicomplex. If the field k contains the rational numbers, the definition in terms of the Connes complex calculates the same homology. One of the striking features of cyclic homology is the existence of a long exact sequence connecting Hochschild and cyclic homology. This long exact sequence is referred to as the periodicity sequence. Case of commutative rings Cyclic cohomology of the commutative algebra A of regular functions on an affine algebraic variety over a field k of characteristic zero can be computed in terms of Grothendieck's algebraic de Rham complex. In particular, if the variety V=Spec A is smooth, cyclic cohomology of A are expressed in terms of the de Rham cohomology of V as follows: This formula suggests a way to define de Rham cohomology for a 'noncommutative spectrum' of a noncommutative algebra A, which was extensively developed by Connes. Variants of cyclic homology One motivation of cyclic homology was the need for an approximation of K-theory that is defined, unlike K-theory, as the homology of a chain complex. Cyclic cohomology is in fact endowed with a pairing with K-theory, and one hopes this pairing to be non-degenerate. There has been defined a number of variants whose purpose is to fit better with algebras with topology, such as Fréchet algebras, -algebras, etc. The reason is that K-theory behaves much better on topological algebras such as Banach algebras or C*-algebras than on algebras without additional structure. Since, on the other hand, cyclic homology degenerates on C*-algebras, there came up the need to define modified theories. 
Among them are entire cyclic homology due to Alain Connes, analytic cyclic homology due to Ralf Meyer or asymptotic and local cyclic homology due to Michael Puschnigg. The last one is very close to K-theory as it is endowed with a bivariant Chern character from KK-theory. Applications One of the applications of cyclic homology is to find new proofs and generalizations of the Atiyah-Singer index theorem. Among these generalizations are index theorems based on spectral triples and deformation quantization of Poisson structures. An elliptic operator D on a compact smooth manifold defines a class in K homology. One invariant of this class is the analytic index of the operator. This is seen as the pairing of the class [D], with the element 1 in HC(C(M)). Cyclic cohomology can be seen as a way to get higher invariants of elliptic differential operators not only for smooth manifolds, but also for foliations, orbifolds, and singular spaces that appear in noncommutative geometry. Computations of algebraic K-theory The cyclotomic trace map is a map from algebraic K-theory (of a ring A, say), to cyclic homology: In some situations, this map can be used to compute K-theory by means of this map. A pioneering result in this direction is a theorem of : it asserts that the map between the relative K-theory of A with respect to a nilpotent two-sided ideal I to the relative cyclic homology (measuring the difference between K-theory or cyclic homology of A and of A/I) is an isomorphism for n≥1. While Goodwillie's result holds for arbitrary rings, a quick reduction shows that it is in essence only a statement about . For rings not containing Q, cyclic homology must be replaced by topological cyclic homology in order to keep a close connection to K-theory. (If Q is contained in A, then cyclic homology and topological cyclic homology of A agree.) This is in line with the fact that (classical) Hochschild homology is less well-behaved than topological Hochschild homology for rings not containing Q. proved a far-reaching generalization of Goodwillie's result, stating that for a commutative ring A so that the Henselian lemma holds with respect to the ideal I, the relative K-theory is isomorphic to relative topological cyclic homology (without tensoring both with Q). Their result also encompasses a theorem of , asserting that in this situation the relative K-theory spectrum modulo an integer n which is invertible in A vanishes. used Gabber's result and Suslin rigidity to reprove Quillen's computation of the K-theory of finite fields. See also Noncommutative geometry Notes References . Errata External links A personal note on Hochschild and Cyclic homology Homological algebra
Cyclic homology
[ "Mathematics" ]
1,417
[ "Fields of abstract algebra", "Mathematical structures", "Category theory", "Homological algebra" ]
1,460,235
https://en.wikipedia.org/wiki/Indeterminate%20%28variable%29
In mathematics, an indeterminate or formal variable is a variable (a symbol, usually a letter) that is used purely formally in a mathematical expression, but does not stand for any value. In analysis, a mathematical expression such as is usually taken to represent a quantity whose value is a function of its variable , and the variable itself is taken to represent an unknown or changing quantity. Two such functional expressions are considered equal whenever their value is equal for every possible value of within the domain of the functions. In algebra, however, expressions of this kind are typically taken to represent objects in themselves, elements of some algebraic structure – here a polynomial, element of a polynomial ring. A polynomial can be formally defined as the sequence of its coefficients, in this case , and the expression or more explicitly is just a convenient alternative notation, with powers of the indeterminate used to indicate the order of the coefficients. Two such formal polynomials are considered equal whenever their coefficients are the same. Sometimes these two concepts of equality disagree. Some authors reserve the word variable to mean an unknown or changing quantity, and strictly distinguish the concepts of variable and indeterminate. Other authors indiscriminately use the name variable for both. Indeterminates occur in polynomials, rational fractions (ratios of polynomials), formal power series, and, more generally, in expressions that are viewed as independent objects. A fundamental property of an indeterminate is that it can be substituted with any mathematical expressions to which the same operations apply as the operations applied to the indeterminate. Some authors of abstract algebra textbooks define an indeterminate over a ring as an element of a larger ring that is transcendental over . This uncommon definition implies that every transcendental number and every nonconstant polynomial must be considered as indeterminates. Polynomials A polynomial in an indeterminate is an expression of the form , where the are called the coefficients of the polynomial. Two such polynomials are equal only if the corresponding coefficients are equal. In contrast, two polynomial functions in a variable may be equal or not at a particular value of . For example, the functions are equal when and not equal otherwise. But the two polynomials are unequal, since 2 does not equal 5, and 3 does not equal 2. In fact, does not hold unless and . This is because is not, and does not designate, a number. The distinction is subtle, since a polynomial in can be changed to a function in by substitution. But the distinction is important because information may be lost when this substitution is made. For example, when working in modulo 2, we have that: so the polynomial function is identically equal to 0 for having any value in the modulo-2 system. However, the polynomial is not the zero polynomial, since the coefficients, 0, 1 and −1, respectively, are not all zero. Formal power series A formal power series in an indeterminate is an expression of the form , where no value is assigned to the symbol . This is similar to the definition of a polynomial, except that an infinite number of the coefficients may be nonzero. Unlike the power series encountered in calculus, questions of convergence are irrelevant (since there is no function at play). So power series that would diverge for values of , such as , are allowed. 
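The modulo-2 example above becomes very concrete if a polynomial is represented purely by its coefficient sequence, with evaluation kept as a separate step. The short sketch below (the tuple encoding and the helper name are illustrative assumptions) shows that the coefficients (0, 1, −1) give a nonzero formal polynomial whose associated function vanishes on every element of the field with two elements.

```python
# A formal polynomial is just its coefficient sequence; evaluation is a separate step.
# The tuple (a0, a1, a2, ...) represents a0 + a1*X + a2*X**2 + ...

def evaluate(coeffs, x, modulus=None):
    value = sum(a * x**i for i, a in enumerate(coeffs))
    return value % modulus if modulus is not None else value

p = (0, 1, -1)          # the polynomial from the text, with coefficients 0, 1, -1
zero = (0, 0, 0)

# As formal polynomials they differ: the coefficient sequences are not equal.
print(p == zero)                                         # False

# As functions modulo 2 they agree: p vanishes at both elements of the field.
print([evaluate(p, x, modulus=2) for x in (0, 1)])       # [0, 0]
print([evaluate(zero, x, modulus=2) for x in (0, 1)])    # [0, 0]
```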
As generators Indeterminates are useful in abstract algebra for generating mathematical structures. For example, given a field K, the set of polynomials with coefficients in K is the polynomial ring K[X] with polynomial addition and multiplication as operations. In particular, if two indeterminates X and Y are used, then the polynomial ring K[X, Y] also uses these operations, and convention holds that XY = YX. Indeterminates may also be used to generate a free algebra over a commutative ring A. For instance, with two indeterminates X and Y, the free algebra A⟨X, Y⟩ includes sums of strings in X and Y, with coefficients in A, and with the understanding that XY and YX are not necessarily identical (since a free algebra is by definition non-commutative). See also Indeterminate equation Indeterminate form Indeterminate system Notes References Abstract algebra Polynomials Mathematical series
Indeterminate (variable)
[ "Mathematics" ]
835
[ "Sequences and series", "Mathematical structures", "Series (mathematics)", "Calculus", "Polynomials", "Abstract algebra", "Algebra" ]
1,460,525
https://en.wikipedia.org/wiki/Oncovirus
An oncovirus or oncogenic virus is a virus that can cause cancer. This term originated from studies of acutely transforming retroviruses in the 1950–60s, when the term oncornaviruses was used to denote their RNA virus origin. With the letters RNA removed, it now refers to any virus with a DNA or RNA genome causing cancer and is synonymous with tumor virus or cancer virus. The vast majority of human and animal viruses do not cause cancer, probably because of longstanding co-evolution between the virus and its host. Oncoviruses have been important not only in epidemiology, but also in investigations of cell cycle control mechanisms such as the retinoblastoma protein. The World Health Organization's International Agency for Research on Cancer estimated that in 2002, infection caused 17.8% of human cancers, with 11.9% caused by one of seven viruses. A 2020 study of 2,658 samples from 38 different types of cancer found that 16% were associated with a virus. These cancers might be easily prevented through vaccination (e.g., papillomavirus vaccines), diagnosed with simple blood tests, and treated with less-toxic antiviral compounds. Causality Generally, tumor viruses cause little or no disease after infection in their hosts, or cause non-neoplastic diseases such as acute hepatitis for hepatitis B virus or mononucleosis for Epstein–Barr virus. A minority of persons (or animals) will go on to develop cancers after infection. This has complicated efforts to determine whether or not a given virus causes cancer. The well-known Koch's postulates, 19th-century constructs developed by Robert Koch to establish the likelihood that Bacillus anthracis will cause anthrax disease, are not applicable to viral diseases. Firstly, this is because viruses cannot truly be isolated in pure culture—even stringent isolation techniques cannot exclude undetected contaminating viruses with similar density characteristics, and viruses must be grown on cells. Secondly, asymptomatic virus infection and carriage is the norm for most tumor viruses, which violates Koch's third principle. Relman and Fredericks have described the difficulties in applying Koch's postulates to virus-induced cancers. Finally, the host restriction for human viruses makes it unethical to experimentally transmit a suspected cancer virus. Other measures, such as A. B. Hill's criteria, are more relevant to cancer virology but also have some limitations in determining causality. Tumor viruses come in a variety of forms: Viruses with a DNA genome, such as adenovirus, and viruses with an RNA genome, like the hepatitis C virus (HCV), can cause cancers, as can retroviruses having both DNA and RNA genomes (Human T-lymphotropic virus and hepatitis B virus, which normally replicates as a mixed double and single-stranded DNA virus but also has a retroviral replication component). In many cases, tumor viruses do not cause cancer in their native hosts but only in dead-end species. For example, adenoviruses do not cause cancer in humans but are instead responsible for colds, conjunctivitis and other acute illnesses. They only become tumorigenic when infected into certain rodent species, such as Syrian hamsters. Some viruses are tumorigenic when they infect a cell and persist as circular episomes or plasmids, replicating separately from host cell DNA (Epstein–Barr virus and Kaposi's sarcoma-associated herpesvirus). 
Other viruses are only carcinogenic when they integrate into the host cell genome as part of a biological accident, such as polyomaviruses and papillomaviruses. Oncogenic viral mechanism A direct oncogenic viral mechanism involves either insertion of additional viral oncogenic genes into the host cell or to enhance already existing oncogenic genes (proto-oncogenes) in the genome. For example, it has been shown that vFLIP and vCyclin interfere with the TGF-β signaling pathway indirectly by inducing oncogenic host mir17-92 cluster. Indirect viral oncogenicity involves chronic nonspecific inflammation occurring over decades of infection, as is the case for HCV-induced liver cancer. These two mechanisms differ in their biology and epidemiology: direct tumor viruses must have at least one virus copy in every tumor cell expressing at least one protein or RNA that is causing the cell to become cancerous. Because foreign virus antigens are expressed in these tumors, persons who are immunosuppressed such as AIDS or transplant patients are at higher risk for these types of cancers. Chronic indirect tumor viruses, on the other hand, can be lost (at least theoretically) from a mature tumor that has accumulated sufficient mutations and growth conditions (hyperplasia) from the chronic inflammation of viral infection. In this latter case, it is controversial but at least theoretically possible that an indirect tumor virus could undergo "hit-and-run" and so the virus would be lost from the clinically diagnosed tumor. In practical terms, this is an uncommon occurrence if it does occur. DNA oncoviruses DNA oncoviruses typically impair two families of tumor suppressor proteins: tumor proteins p53 and the retinoblastoma proteins (Rb). It is evolutionarily advantageous for viruses to inactivate p53 because p53 can trigger cell cycle arrest or apoptosis in infected cells when the virus attempts to replicate its DNA. Similarly, Rb proteins regulate many essential cell functions, including but not limited to a crucial cell cycle checkpoint, making them a target for viruses attempting to interrupt regular cell function. While several DNA oncoviruses have been discovered, three have been studied extensively. Adenoviruses can lead to tumors in rodent models but do not cause cancer in humans; however, they have been exploited as delivery vehicles in gene therapy for diseases such as cystic fibrosis and cancer. Simian virus 40 (SV40), a polyomavirus, can cause tumors in rodent models but is not oncogenic in humans. This phenomenon has been one of the major controversies of oncogenesis in the 20th century because an estimated 100 million people were inadvertently exposed to SV40 through polio vaccines. The human papillomavirus-16 (HPV-16) has been shown to lead to cervical cancer and other cancers, including head and neck cancer. These three viruses have parallel mechanisms of action, forming an archetype for DNA oncoviruses. All three of these DNA oncoviruses are able to integrate their DNA into the host cell, and use this to transcribe it and transform cells by bypassing the G1/S checkpoint of the cell cycle. Integration of viral DNA DNA oncoviruses transform infected cells by integrating their DNA into the host cell's genome. The DNA is believed to be inserted during transcription or replication, when the two annealed strands are separated. This event is relatively rare and generally unpredictable; there seems to be no deterministic predictor of the site of integration. 
After integration, the host's cell cycle loses regulation from Rb and p53, and the cell begins cloning to form a tumor. G1/S Checkpoint Rb and p53 regulate the transition between G1 and S phase, arresting the cell cycle before DNA replication until the appropriate checkpoint inputs, such as DNA damage repair, are completed. p53 regulates the p21 gene, which produces a protein which binds to the Cyclin D-Cdk4/6 complex. This prevents Rb phosphorylation and prevents the cell from entering S phase. In mammals, when Rb is active (unphosphorylated), it inhibits the E2F family of transcription factors, which regulate the Cyclin E-Cdk2 complex, which inhibits Rb, forming a positive feedback loop, keeping the cell in G1 until the input crosses a threshold. To drive the cell into S phase prematurely, the viruses must inactivate p53, which plays a central role in the G1/S checkpoint, as well as Rb, which, though downstream of it, is typically kept active by a positive feedback loop. Inactivation of p53 Viruses employ various methods of inactivating p53. The adenovirus E1B protein (55K) prevents p53 from regulating genes by binding to the site on p53 which binds to the genome. In SV40, the large T antigen (LT) is an analogue; LT also binds to several other cellular proteins, such as p107 and p130, on the same residues. LT binds to p53's binding domain on the DNA (rather than on the protein), again preventing p53 from appropriately regulating genes. HPV instead degrades p53: the HPV protein E6 binds to a cellular protein called the E6-associated protein (E6-AP, also known as UBE3A), forming a complex which causes the rapid and specific ubiquitination of p53. Inactivation of Rb Rb is inactivated (thereby allowing the G1/S transition to progress unimpeded) by different but analogous viral oncoproteins. The adenovirus early region 1A (E1A) is an oncoprotein which binds to Rb and can stimulate transcription and transform cells. SV40 uses the same protein for inactivating Rb, LT, to inactivate p53. HPV contains a protein, E7, which can bind to Rb in much the same way. Rb can be inactivated by phosphorylation, or by being bound to a viral oncoprotein, or by mutations—mutations which prevent oncoprotein binding are also associated with cancer. Variations DNA oncoviruses typically cause cancer by inactivating p53 and Rb, thereby allowing unregulated cell division and creating tumors. There may be many different mechanisms which have evolved separately; in addition to those described above, for example, the Human Papillomavirus inactivates p53 by sequestering it in the cytoplasm. SV40 has been well studied and does not cause cancer in humans, but a recently discovered analogue called Merkel cell polyomavirus has been associated with Merkel cell carcinoma, a form of skin cancer. The Rb binding feature is believed to be the same between the two viruses. RNA oncoviruses In the 1960s, the replication process of RNA virus was believed to be similar to other single-stranded RNA. Single-stranded RNA replication involves RNA-dependent RNA synthesis which meant that virus-coding enzymes would make partial double-stranded RNA. This belief was shown to be incorrect because there were no double-stranded RNA found in the retrovirus cell. In 1964, Howard Temin proposed a provirus hypothesis, but shortly after reverse transcription in the retrovirus genome was discovered. Description of virus All retroviruses have three major coding domains; gag, pol and env. 
In the gag region of the virus, the synthesis of the internal virion proteins are maintained which make up the matrix, capsid and nucleocapsid proteins. In pol, the information for the reverse transcription and integration enzymes are stored. In env, it is derived from the surface and transmembrane for the viral envelope protein. There is a fourth coding domain which is smaller, but exists in all retroviruses. Pol is the domain that encodes the virion protease. Retrovirus enters host cell The retrovirus begins the journey into a host cell by attaching a surface glycoprotein to the cell's plasma membrane receptor. Once inside the cell, the retrovirus goes through reverse transcription in the cytoplasm and generates a double-stranded DNA copy of the RNA genome. Reverse transcription also produces identical structures known as long terminal repeats (LTRs). Long terminal repeats are at the ends of the DNA strands and regulates viral gene expression. The viral DNA is then translocated into the nucleus where one strand of the retroviral genome is put into the chromosomal DNA by the help of the virion integrase. At this point the retrovirus is referred to as provirus. Once in the chromosomal DNA, the provirus is transcribed by the cellular RNA polymerase II. The transcription leads to the splicing and full-length mRNAs and full-length progeny virion RNA. The virion protein and progeny RNA assemble in the cytoplasm and leave the cell, whereas the other copies send translated viral messages in the cytoplasm. Classification DNA viruses Human papillomavirus (HPV), a DNA virus, causes transformation in cells through interfering with tumor suppressor proteins such as p53. Interfering with the action of p53 allows a cell infected with the virus to move into a different stage of the cell cycle, enabling the virus genome to be replicated. Forcing the cell into the S phase of the cell cycle could cause the cell to become transformed. Human papillomavirus infection is a major cause of cervical cancer, vulvar cancer, vaginal cancer, penis cancer, anal cancer, and HPV-positive oropharyngeal cancers. There are nearly 200 distinct human papillomaviruses (HPVs), and many HPV types are carcinogenic. Hepatitis B virus (HBV) is associated with Hepatocarcinoma Epstein–Barr virus (EBV or HHV-4) is associated with four types of cancers Human cytomegalovirus (CMV or HHV-5) is associated with mucoepidermoid carcinoma and possibly other malignancies. Kaposi's sarcoma-associated herpesvirus (KSHV or HHV-8) is associated with Kaposi's sarcoma, a type of skin cancer. Merkel cell polyomavirusa polyoma virusis associated with the development of Merkel cell carcinoma RNA viruses Not all oncoviruses are DNA viruses. Some RNA viruses have also been associated such as the hepatitis C virus as well as certain retroviruses, e.g., human T-lymphotropic virus (HTLV-1) and Rous sarcoma virus (RSV). Overview table Estimated percent of new cancers attributable to the virus worldwide in 2002. NA indicates not available. The association of other viruses with human cancer is continually under research. Main viruses associated with human cancer The main viruses associated with human cancers are the human papillomavirus, the hepatitis B and hepatitis C viruses, the Epstein–Barr virus, the human T-lymphotropic virus, the Kaposi's sarcoma-associated herpesvirus (KSHV) and the Merkel cell polyomavirus. 
Experimental and epidemiological data imply a causative role for viruses and they appear to be the second most important risk factor for cancer development in humans, exceeded only by tobacco usage. The mode of virally induced tumors can be divided into two, acutely transforming or slowly transforming. In acutely transforming viruses, the viral particles carry a gene that encodes for an overactive oncogene called viral-oncogene (v-onc), and the infected cell is transformed as soon as v-onc is expressed. In contrast, in slowly transforming viruses, the virus genome is inserted, especially as viral genome insertion is an obligatory part of retroviruses, near a proto-oncogene in the host genome. The viral promoter or other transcription regulation elements in turn cause overexpression of that proto-oncogene, which in turn induces uncontrolled cellular proliferation. Because viral genome insertion is not specific to proto-oncogenes and the chance of insertion near that proto-oncogene is low, slowly transforming viruses have very long tumor latency compared to acutely transforming viruses, which already carry the viral oncogene. Hepatitis viruses, including hepatitis B and hepatitis C, can induce a chronic viral infection that leads to liver cancer in 0.47% of hepatitis B patients per year (especially in Asia, less so in North America), and in 1.4% of hepatitis C carriers per year. Liver cirrhosis, whether from chronic viral hepatitis infection or alcoholism, is associated with the development of liver cancer, and the combination of cirrhosis and viral hepatitis presents the highest risk of liver cancer development. Worldwide, liver cancer is one of the most common, and most deadly, cancers due to a huge burden of viral hepatitis transmission and disease. Through advances in cancer research, vaccines designed to prevent cancer have been created. The hepatitis B vaccine is the first vaccine that has been established to prevent cancer (hepatocellular carcinoma) by preventing infection with the causative virus. In 2006, the U.S. Food and Drug Administration approved a human papilloma virus vaccine, called Gardasil. The vaccine protects against four HPV types, which together cause 70% of cervical cancers and 90% of genital warts. In March 2007, the US Centers for Disease Control and Prevention (CDC) Advisory Committee on Immunization Practices (ACIP) officially recommended that females aged 11–12 receive the vaccine, and indicated that females as young as age 9 and as old as age 26 are also candidates for immunization. History The history of cancer virus discovery is intertwined with the history of cancer research and the history of virology. The oldest surviving record of a human cancer is the Babylonian Code of Hammurabi (dated ca. 1754 BC) but scientific oncology could only emerge in the 19th century, when tumors were studied at microscopic level with the help of the compound microscope and achromatic lenses. 19th century microbiology accumulated evidence that implicated bacteria, yeasts, fungi, and protozoa in the development of cancer. In 1926 the Nobel Prize was awarded for documenting that a nematode worm could provoke stomach cancer in rats. But it was not recognized that cancer could have infectious origins until much later as virus had first been discovered by Dmitri Ivanovsky and Martinus Beijerinck at the close of the 19th century. 
History of non-human oncoviruses The theory that cancer could be caused by a virus began with the experiments of Oluf Bang and Vilhelm Ellerman in 1908 at the University of Copenhagen. Bang and Ellerman demonstrated that avian sarcoma leukosis virus could be transmitted between chickens after cell-free filtration and subsequently cause leukemia. This was subsequently confirmed for solid tumors in chickens in 1910–1911 by Peyton Rous. Rous at the Rockefeller University extended Bang and Ellerman's experiments to show cell-free transmission of a solid tumor sarcoma to chickens (now known as Rous sarcoma). The reasons why chickens are so receptive to such transmission may involve unusual characteristics of stability or instability as they relate to endogenous retroviruses. Charlotte Friend confirmed Bang and Ellerman findings for liquid tumor in mice by . In 1933 Richard Shope and Edward Weston Hurst showed that warts from wild cottontail rabbits contained the Shope papilloma virus. In 1936 John Joseph Bittner identified the mouse mammary tumor virus, an "extrachromosomal factor" (i.e. virus) that could be transmitted between laboratory strains of mice by breast feeding. By the early 1950s, it was known that viruses could remove and incorporate genes and genetic material in cells. It was suggested that such types of viruses could cause cancer by introducing new genes into the genome. Genetic analysis of mice infected with Friend virus confirmed that retroviral integration could disrupt tumor suppressor genes, causing cancer. Viral oncogenes were subsequently discovered and identified to cause cancer. Ludwik Gross identified the first mouse leukemia virus (murine leukemia virus) in 1951 and in 1953 reported on a component of mouse leukemia extract capable of causing solid tumors in mice. This compound was subsequently identified as a virus by Sarah Stewart and Bernice Eddy at the National Cancer Institute, after whom it was once called "SE polyoma". In 1957 Charlotte Friend discovered the Friend virus, a strain of murine leukemia virus capable of causing cancers in immunocompetent mice. Though her findings received significant backlash, they were eventually accepted by the field and cemented the validity of viral oncogenesis. In 1961 Eddy discovered the simian vacuolating virus 40 (SV40). Merck Laboratory also confirmed the existence of a rhesus macaque virus contaminating cells used to make Salk and Sabin polio vaccines. Several years later, it was shown to cause cancer in Syrian hamsters, raising concern about possible human health implications. Scientific consensus now strongly agrees that this is not likely to cause human cancer. History of human oncoviruses In 1964 Anthony Epstein, Bert Achong and Yvonne Barr identified the first human oncovirus from Burkitt's lymphoma cells. A herpesvirus, this virus is formally known as human herpesvirus 4 but more commonly called Epstein–Barr virus or EBV. In the mid-1960s Baruch Blumberg first physically isolated and characterized Hepatitis B while working at the National Institute of Health (NIH) and later the Fox Chase Cancer Center. Although this agent was the clear cause of hepatitis and might contribute to liver cancer hepatocellular carcinoma, this link was not firmly established until epidemiologic studies were performed in the 1980s by R. Palmer Beasley and others. 
In 1980 the first human retrovirus, Human T-lymphotropic virus 1 (HTLV-I), was discovered by Bernard Poiesz and Robert Gallo at NIH, and independently by Mitsuaki Yoshida and coworkers in Japan. But it was not certain whether HTLV-I promoted leukemia. In 1981 Yorio Hinuma and his colleagues at Kyoto University reported visualization of retroviral particles produced by a leukemia cell line derived from patients with Adult T-cell leukemia/lymphoma. This virus turned out to be HTLV-1 and the research established the causal role of the HTLV-1 virus to ATL. Between 1984 and 1986 Harald zur Hausen and Lutz Gissmann discovered HPV16 and HPV18, together these Papillomaviridae viruses (HPV) are responsible for approximately 70% of human papillomavirus infections that cause cervical cancers. For the discovery that HPV cause human cancer the 2008 Nobel Prize was awarded. In 1987 the Hepatitis C virus (HCV) was discovered by panning a cDNA library made from diseased tissues for foreign antigens recognized by patient sera. This work was performed by Michael Houghton at Chiron, a biotechnology company, and Daniel W. Bradley at the Centers for Disease Control and Prevention (CDC). HCV was subsequently shown to be a major contributor to Hepatocellular carcinoma (liver cancer) worldwide. In 1994 Patrick S. Moore and Yuan Chang at Columbia University), working together with Ethel Cesarman, isolated Kaposi's sarcoma-associated herpesvirus (KSHV or HHV8) using representational difference analysis. This search was prompted by work from Valerie Beral and colleagues who inferred from the epidemic of Kaposi's sarcoma among patients with AIDS that this cancer must be caused by another infectious agent besides HIV, and that this was likely to be a second virus. Subsequent studies revealed that KSHV is the "KS agent" and is responsible for the epidemiologic patterns of KS and related cancers. In 2008 Yuan Chang and Patrick S. Moore developed a new method to identify cancer viruses based on computer subtraction of human sequences from a tumor transcriptome, called digital transcriptome subtraction (DTS). DTS was used to isolate DNA fragments of Merkel cell polyomavirus from a Merkel cell carcinoma and it is now believed that this virus causes 70–80% of these cancers. See also Infectious causes of cancer Carcinogen Oncogenic Oncogene Adult T-cell leukemia/lymphoma Cancer bacteria Oncolytic virus, a virus that infects and kills cancer cells Gag-onc fusion protein List of infectious diseases References External links Carcinogenesis Virology Viruses Infectious causes of cancer
Oncovirus
[ "Biology" ]
5,066
[ "Viruses", "Tree of life (biology)", "Microorganisms" ]
1,460,629
https://en.wikipedia.org/wiki/Effective%20temperature
The effective temperature of a body such as a star or planet is the temperature of a black body that would emit the same total amount of electromagnetic radiation. Effective temperature is often used as an estimate of a body's surface temperature when the body's emissivity curve (as a function of wavelength) is not known. When the star's or planet's net emissivity in the relevant wavelength band is less than unity (less than that of a black body), the actual temperature of the body will be higher than the effective temperature. The net emissivity may be low due to surface or atmospheric properties, such as the greenhouse effect. Star The effective temperature of a star is the temperature of a black body with the same luminosity per surface area () as the star and is defined according to the Stefan–Boltzmann law . Notice that the total (bolometric) luminosity of a star is then , where is the stellar radius. The definition of the stellar radius is obviously not straightforward. More rigorously the effective temperature corresponds to the temperature at the radius that is defined by a certain value of the Rosseland optical depth (usually 1) within the stellar atmosphere. The effective temperature and the bolometric luminosity are the two fundamental physical parameters needed to place a star on the Hertzsprung–Russell diagram. Both effective temperature and bolometric luminosity depend on the chemical composition of a star. The effective temperature of the Sun is around . The nominal value defined by the International Astronomical Union for use as a unit of measure of temperature is . Stars have a decreasing temperature gradient, going from their central core up to the atmosphere. The "core temperature" of the Sun—the temperature at the centre of the Sun where nuclear reactions take place—is estimated to be 15,000,000 K. The color index of a star indicates its temperature from the very cool—by stellar standards—red M stars that radiate heavily in the infrared to the very hot blue O stars that radiate largely in the ultraviolet. Various colour-effective temperature relations exist in the literature. Their relations also have smaller dependencies on other stellar parameters, such as the stellar metallicity and surface gravity. The effective temperature of a star indicates the amount of heat that the star radiates per unit of surface area. From the hottest surfaces to the coolest is the sequence of stellar classifications known as O, B, A, F, G, K, M. A red star could be a tiny red dwarf, a star of feeble energy production and a small surface or a bloated giant or even supergiant star such as Antares or Betelgeuse, either of which generates far greater energy but passes it through a surface so large that the star radiates little per unit of surface area. A star near the middle of the spectrum, such as the modest Sun or the giant Capella radiates more energy per unit of surface area than the feeble red dwarf stars or the bloated supergiants, but much less than such a white or blue star as Vega or Rigel. Planet Blackbody temperature To find the effective (blackbody) temperature of a planet, it can be calculated by equating the power received by the planet to the known power emitted by a blackbody of temperature . Take the case of a planet at a distance from the star, of luminosity . 
Assuming the star radiates isotropically and that the planet is a long way from the star, the power absorbed by the planet is given by treating the planet as a disc of radius , which intercepts some of the power which is spread over the surface of a sphere of radius (the distance of the planet from the star). The calculation assumes the planet reflects some of the incoming radiation by incorporating a parameter called the albedo (a). An albedo of 1 means that all the radiation is reflected, an albedo of 0 means all of it is absorbed. The expression for absorbed power is then: The next assumption we can make is that the entire planet is at the same temperature , and that the planet radiates as a blackbody. The Stefan–Boltzmann law gives an expression for the power radiated by the planet: Equating these two expressions and rearranging gives an expression for the effective temperature: Where is the Stefan–Boltzmann constant. Note that the planet's radius has cancelled out of the final expression. The effective temperature for Jupiter from this calculation is 88 K and 51 Pegasi b (Bellerophon) is 1,258 K. A better estimate of effective temperature for some planets, such as Jupiter, would need to include the internal heating as a power input. The actual temperature depends on albedo and atmosphere effects. The actual temperature from spectroscopic analysis for HD 209458 b (Osiris) is 1,130 K, but the effective temperature is 1,359 K. The internal heating within Jupiter raises the effective temperature to about 152 K. Surface temperature of a planet The surface temperature of a planet can be estimated by modifying the effective-temperature calculation to account for emissivity and temperature variation. The area of the planet that absorbs the power from the star is which is some fraction of the total surface area , where is the radius of the planet. This area intercepts some of the power which is spread over the surface of a sphere of radius . We also allow the planet to reflect some of the incoming radiation by incorporating a parameter called the albedo. An albedo of 1 means that all the radiation is reflected, an albedo of 0 means all of it is absorbed. The expression for absorbed power is then: The next assumption we can make is that although the entire planet is not at the same temperature, it will radiate as if it had a temperature over an area which is again some fraction of the total area of the planet. There is also a factor , which is the emissivity and represents atmospheric effects. ranges from 1 to 0 with 1 meaning the planet is a perfect blackbody and emits all the incident power. The Stefan–Boltzmann law gives an expression for the power radiated by the planet: Equating these two expressions and rearranging gives an expression for the surface temperature: Note the ratio of the two areas. Common assumptions for this ratio are for a rapidly rotating body and for a slowly rotating body, or a tidally locked body on the sunlit side. This ratio would be 1 for the subsolar point, the point on the planet directly below the sun and gives the maximum temperature of the planet — a factor of (1.414) greater than the effective temperature of a rapidly rotating planet. Also note here that this equation does not take into account any effects from internal heating of the planet, which can arise directly from sources such as radioactive decay and also be produced from frictions resulting from tidal forces. 
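As a numerical check on the Earth figures quoted in the next section, the sketch below evaluates the effective-temperature formula and the implied effective emissivity of a 288 K surface. The constants are standard published values rather than quantities taken from this article, and the variable names are illustrative.

```python
# Effective (blackbody) temperature of a rapidly rotating planet:
#   T_eff = [ S * (1 - albedo) / (4 * sigma) ] ** 0.25
sigma = 5.670374419e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0                  # solar irradiance at 1 au, W m^-2 (standard value)
albedo = 0.306              # Earth's Bond albedo

T_eff = (S * (1 - albedo) / (4 * sigma)) ** 0.25
print(f"Earth effective temperature: {T_eff:.0f} K")       # about 254 K

# Effective emissivity implied by the observed ~288 K mean surface temperature
T_surface = 288.0
emissivity = (T_eff / T_surface) ** 4
print(f"Effective emissivity: {emissivity:.2f}")           # about 0.61
```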
Earth effective temperature Earth has an albedo of about 0.306 and a solar irradiance (L / 4πD^2) of 1,361 W m−2 at its mean orbital radius of 1.5×10^8 km. The calculation with ε = 1 and the remaining physical constants then gives an Earth effective temperature of about 254 K (−19 °C). The actual temperature of Earth's surface is an average 288 K (15 °C) as of 2020. The difference between the two values is called the greenhouse effect. The greenhouse effect results from materials in the atmosphere (greenhouse gases and clouds) absorbing thermal radiation and reducing emissions to space, i.e., reducing the planet's emissivity of thermal radiation from its surface into space. Substituting the surface temperature into the equation and solving for ε gives an effective emissivity of about 0.61 for a 288 K Earth. Furthermore, these values calculate an outgoing thermal radiation flux of about 238 W m−2 (with ε = 0.61 as viewed from space) versus a surface thermal radiation flux of about 390 W m−2 (with ε ≈ 1 at the surface). Both fluxes are near the confidence ranges reported by the IPCC. See also References External links Effective temperature scale for solar type stars Surface Temperature of Planets Planet temperature calculator Concepts in astrophysics Stellar astronomy Planetary science Thermodynamic properties Electromagnetic radiation Concepts in astronomy
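As a cross-check of the Earth numbers quoted above, the effective emissivity follows directly from the radiation balance. Treat the inputs as the rounded illustrative values from the paragraph, not as authoritative data.

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

albedo = 0.306
irradiance = 1361.0      # W/m^2, solar irradiance at Earth's mean orbital distance
t_surface = 288.0        # K, approximate global mean surface temperature

absorbed = irradiance * (1.0 - albedo) / 4.0      # absorbed flux averaged over the sphere
t_effective = (absorbed / SIGMA) ** 0.25          # ~254 K
emissivity = absorbed / (SIGMA * t_surface ** 4)  # ~0.61

print(f"effective temperature ~ {t_effective:.0f} K, effective emissivity ~ {emissivity:.2f}")
```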
Effective temperature
[ "Physics", "Chemistry", "Astronomy", "Mathematics" ]
1,673
[ "Thermodynamic properties", "Physical phenomena", "Physical quantities", "Concepts in astrophysics", "Concepts in astronomy", "Electromagnetic radiation", "Quantity", "Astrophysics", "Radiation", "Thermodynamics", "Planetary science", "Astronomical sub-disciplines", "Stellar astronomy" ]
1,460,717
https://en.wikipedia.org/wiki/Betweenness%20problem
Betweenness is an algorithmic problem in order theory about ordering a collection of items subject to constraints that some items must be placed between others. It has applications in bioinformatics and was shown to be NP-complete by Opatrný (1979). Problem statement The input to a betweenness problem is a collection of ordered triples of items. The items listed in these triples should be placed into a total order, with the property that for each of the given triples, the middle item in the triple appears in the output somewhere between the other two items. The items of each triple are not required to be consecutive in the output. Examples As an example, the collection of input triples (2,1,3), (3,4,5), (1,4,5), (2,4,1), (5,2,3) is satisfied by the output ordering 3, 1, 4, 2, 5 but not by 3, 1, 2, 4, 5. In the first of these output orderings, for all five of the input triples, the middle item of the triple appears between the other two items. However, for the second output ordering, item 4 is not between items 1 and 2, contradicting the requirement given by the triple (2,4,1). If an input contains two triples like (1,2,3) and (2,3,1) with the same three items but a different choice of the middle item, then there is no valid solution. However, there are more complicated ways of forming a set of triples with no valid solution that do not contain such a pair of contradictory triples. Complexity Opatrný (1979) showed that the decision version of the betweenness problem (in which an algorithm must decide whether or not there exists a valid solution) is NP-complete in two ways, by a reduction from 3-satisfiability and also by a different reduction from hypergraph 2-coloring. However, it can easily be solved when all unordered triples of items are represented by an ordered triple of the input, by choosing one of the two items that are not between any others to be the start of the ordering and then using the triples involving this item to compare the relative positions of each pair of remaining items. The related problem of finding an ordering that maximizes the number of satisfied triples is MAXSNP-hard, implying that it is impossible to achieve an approximation ratio arbitrarily close to 1 in polynomial time unless P = NP. It remains hard to solve or approximate even for dense instances that include an ordered triple for each possible unordered triple of items. The minimum version of the problem restricted to tournaments was proven to have a polynomial-time approximation scheme (PTAS). One can achieve an approximation ratio of 1/3 (in expectation) by ordering the items randomly, and this simple strategy gives the best possible polynomial-time approximation if the unique games conjecture is true. It is also possible to use semidefinite programming or combinatorial methods to find an ordering that satisfies at least half of the triples of any satisfiable instance, in polynomial time. In parameterized complexity, the problem of satisfying as many constraints as possible from a set C of constraints is fixed-parameter tractable when parameterized by the difference q − |C|/3 between the solution quality q found by the parameterized algorithm and the |C|/3 quality guaranteed in expectation by a random ordering. Although not guaranteed to succeed, a greedy heuristic can find solutions to many instances of the betweenness problem arising in practice. Applications One application of betweenness arises in bioinformatics, as part of the process of gene mapping. 
Certain types of genetic experiments can be used to determine the ordering of triples of genetic markers, but do not distinguish a genetic sequence from its reversal, so the information yielded from such an experiment determines only which one out of three markers is the middle one. The betweenness problem is an abstraction of the problem of assembling a collection of markers into a single sequence given experimental data of this type. The betweenness problem has also been used to model theories of probability, causality, and time. References NP-complete problems Order theory
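To make the constraint concrete, the following small Python check (an illustration written for this text, not taken from the cited references) verifies whether a proposed ordering satisfies a set of betweenness triples, using the example instance given above.

```python
def satisfies(ordering, triples):
    """Return True if, for every triple (x, y, z), y lies strictly between x and z in the ordering."""
    pos = {item: i for i, item in enumerate(ordering)}
    return all(min(pos[x], pos[z]) < pos[y] < max(pos[x], pos[z]) for x, y, z in triples)

triples = [(2, 1, 3), (3, 4, 5), (1, 4, 5), (2, 4, 1), (5, 2, 3)]
print(satisfies([3, 1, 4, 2, 5], triples))  # True: the valid ordering from the example
print(satisfies([3, 1, 2, 4, 5], triples))  # False: the triple (2, 4, 1) is violated
```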
Betweenness problem
[ "Mathematics" ]
870
[ "NP-complete problems", "Mathematical problems", "Order theory", "Computational problems" ]
38,695,606
https://en.wikipedia.org/wiki/Kvikk%20case
The Kvikk case is about a variety of birth defects in the children of the men who served on HNoMS Kvikk, a Royal Norwegian Navy fast patrol boat (FPB) of the Snøgg class. An investigation found that the ship's electronic systems were not to blame; no other cause has been established. Suspicion arose when two former officers accidentally met in the orthopedic department at Haukeland University Hospital in Bergen, and it was later revealed that, in all, eleven children had been born with birth defects from 1987 to 1994. In the end, the case counted 17 injured children, and it was also discovered that the phenomenon of birth defects had already started in 1983. Among the claimed birth defects are clubfoot, thumb hypoplasia, hip dysplasia, congenital heart defects, structural brain damage, cataracts, and other defects. Some of the children have also had developmental delays and behavioral problems. Kvikk was the only vessel in the Norwegian navy that was used as an electronic warfare (EW) vessel, and one widely discussed theory was that the powerful electromagnetic radiation from the boat's radio communication masts and radar led to several of those who served aboard the ship having children with clubfoot, and in some cases stillborn children. The idea was that the powerful radiation possibly damaged genetic material in the sperm of the men who worked aboard. A total of 17 out of 85 children of officers who served on Kvikk have been born with birth defects. Among the other theories about the cause of the deformities is that Kvikk was the only vessel used to experiment with different types of camouflage paint. 1987-94: Kvikk used for electronic warfare In 1987 Kvikk was equipped for electronic warfare, partly by being fitted with an extra radar transmitter astern rated at 750 watts, which was then used very actively to create radar jamming during exercises and tests. Kvikk went out of service as an EW vessel in December 1994. Risk for radiation injuries The Norwegian Armed Forces knew before the case came up that very powerful radiation had been measured on Kvikk. Heavy radiation exceeding NATO's limits for radiation hazard had also been measured on other naval vessels, such as HNoMS Narvik and HNoMS Tjeld, as well as at land installations. Kvikk, however, is a much smaller vessel than those, and the distance between the radiation sources and the crew was thus smaller. It was also not uncommon for the crew on Kvikk to stay near the mast during noise transmissions, and it has been speculated that the strong electromagnetic radiation from the mast to which they were thereby directly exposed may have affected their genetic material. As an additional risk factor, the radar on Kvikk had a stabilizer that was meant to keep the radar beam level with the horizon so that it would also work in choppy seas, but this mechanism had a weakness that made the radar tip over many times, sending radiation directly down onto the deck. The crew were therefore in many cases directly exposed to radiation at very close range while on deck. In addition, four fathers who worked as electronics service technicians at the workshop at Haakonsvern had children with chromosome abnormalities. They worked, among other things, on correcting errors and deficiencies in telecommunications equipment on Kvikk and other naval vessels, and were therefore exposed during testing to high levels of electromagnetic radiation, mostly from radars and communications equipment. 
Research from the 1960s and 1970s indicated that non-ionizing radiation in the microwave range can trigger genotoxic mechanisms in the germ cells of animals that are then passed on to the offspring, and practical examples have shown that radiation has led to infertility in humans, but little recent research supports this. The Norwegian Navy, the Norwegian Radiation Protection Authority (NRPA) and a research group at the Norwegian University of Science and Technology have concluded that there is no demonstrable link between the non-ionizing radiation on board and the children being born with birth defects. The parents in the case have stated that they do not trust the research. 1996: The case comes forward The case became known when Haakonsvern Navy Base in 1996 issued a press release stating that an unusually high number of children of employees at the naval base had been injured at birth. Verdens Gang (VG) was the first newspaper to take up the case, and it also found the connection between the children born with birth defects and the fact that their fathers had worked on Kvikk. VG immediately published a headline that read "Crown Prince Haakon Magnus of Norway is one of the many who is currently aboard the MTB vessels and may be exposed to radiation." The navy quickly decided to investigate whether there was a correlation between the electromagnetic radiation aboard Kvikk and the genes of those who had served there, but by then Kvikk had already been broken up - only nine days after the original press release from Haakonsvern. 1998: Initial research by the navy After three officers in February 1996 had notified the navy inspectorate that they had had children with birth defects after serving on Kvikk, the navy the same year issued an internal message that it did not want to hear any more about such abnormalities from the naval staff. The navy began to investigate whether the injuries in this and similar cases could be due to radar and radio radiation. In 1998 it concluded that this was not the case and that there was no relationship between serving on the ship and having children with birth defects, so that there was no basis for liability. In the report, the navy went far in rejecting any possible link between the children's birth defects and the fathers having served on Kvikk, and suggested statistical clustering as an explanation. See also HNoMS Kvikk (P984) Teratology References External links Microwave News, Volume XVIII Number 6 (November and December 1998), page 4 (English) Congenital disorders Electromagnetic radiation Health effects by subject
Kvikk case
[ "Physics" ]
1,247
[ "Electromagnetic radiation", "Physical phenomena", "Radiation" ]
38,705,442
https://en.wikipedia.org/wiki/Kurchatov%20Center%20for%20Synchrotron%20Radiation%20and%20Nanotechnology
The Kurchatov Center for Synchrotron Radiation and Nanotechnology (KCSRN) is a Russian interdisciplinary institute for synchrotron-based research. The source is used for research in fields such as biology, chemistry, physics and palaeontology. As with all synchrotron sources, the Kurchatov source is a user facility. History Construction began in 1986. The intended completion date in 1989 was pushed back due to economic difficulties causing delays. The building was finally completed in December, 1999. Electron accelerator The electron accelerator for the Kurchatov synchrotron was built by Budker Institute of Nuclear Physics, a world leader in accelerator physics. The magnetic structure is very similar to that of the ANKA synchrotron in Karlsruhe. The accelerator includes an injection system, the Sibir-1 booster and the Sibir-2 storage ring. Injection is done at 450 MeV, but an upgrade program was expected to raise the energy level. Radiation is generated by bending magnets at . Critical energy is and superconducting high-field wiggler offers , with 19 poles. References Synchrotron radiation facilities
Kurchatov Center for Synchrotron Radiation and Nanotechnology
[ "Materials_science" ]
238
[ "Materials testing", "Synchrotron radiation facilities" ]
38,707,008
https://en.wikipedia.org/wiki/Order-4%20heptagonal%20tiling
In geometry, the order-4 heptagonal tiling is a regular tiling of the hyperbolic plane. It has Schläfli symbol of {7,4}. Symmetry This tiling represents a hyperbolic kaleidoscope of 7 mirrors meeting as edges of a regular heptagon. This symmetry by orbifold notation is called *2222222 with 7 order-2 mirror intersections. In Coxeter notation it can be represented as [1+,7,1+,4], removing two of three mirrors (passing through the heptagon center) in the [7,4] symmetry. The kaleidoscopic domains can be seen as bicolored heptagons, representing mirror images of the fundamental domain. This coloring represents the uniform tiling t1{7,7} and as a quasiregular tiling is called a heptaheptagonal tiling. Related polyhedra and tiling This tiling is topologically related as a part of a sequence of regular tilings with heptagonal faces, starting with the heptagonal tiling, with Schläfli symbol {7,n}, and Coxeter diagram , progressing to infinity. This tiling is also topologically related as a part of a sequence of regular polyhedra and tilings with four faces per vertex, starting with the octahedron, with Schläfli symbol {n,4}, and Coxeter diagram , with n progressing to infinity. References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) See also Square tiling Tilings of regular polygons List of uniform planar tilings List of regular polytopes External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Heptagonal tilings Hyperbolic tilings Isogonal tilings Isohedral tilings Order-4 tilings Regular tilings
Order-4 heptagonal tiling
[ "Physics" ]
433
[ "Isogonal tilings", "Tessellation", "Hyperbolic tilings", "Isohedral tilings", "Symmetry" ]
38,707,079
https://en.wikipedia.org/wiki/Snub%20tetraheptagonal%20tiling
In geometry, the snub tetraheptagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of sr{7,4}. Images Drawn in chiral pairs, with edges missing between black triangles: Dual tiling The dual is called an order-7-4 floret pentagonal tiling, defined by face configuration V3.3.4.3.7. Related polyhedra and tiling The snub tetraheptagonal tiling is sixth in a series of snub polyhedra and tilings with vertex figure 3.3.4.3.n. References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) See also Square tiling Tilings of regular polygons List of uniform planar tilings List of regular polytopes External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Chiral figures Hyperbolic tilings Isogonal tilings Snub tilings Uniform tilings
Snub tetraheptagonal tiling
[ "Physics", "Chemistry" ]
260
[ "Snub tilings", "Isogonal tilings", "Tessellation", "Chirality", "Hyperbolic tilings", "Uniform tilings", "Chiral figures", "Symmetry" ]
38,707,133
https://en.wikipedia.org/wiki/Order-4%20octagonal%20tiling
In geometry, the order-4 octagonal tiling is a regular tiling of the hyperbolic plane. It has Schläfli symbol of {8,4}. Its checkerboard coloring can be called an octaoctagonal tiling, and Schläfli symbol of r{8,8}. Uniform constructions There are four uniform constructions of this tiling, three of them as constructed by mirror removal from the [8,8] kaleidoscope. Removing the mirror between the order 2 and 4 points, [8,8,1+], gives [(8,8,4)], (*884) symmetry. Removing two mirrors as [8,4*] leaves *4444 symmetry from the remaining mirrors. Symmetry This tiling represents a hyperbolic kaleidoscope of 8 mirrors meeting as edges of a regular octagon. This symmetry by orbifold notation is called (*22222222) or (*28) with 8 order-2 mirror intersections. In Coxeter notation it can be represented as [8*,4], removing two of three mirrors (passing through the octagon center) in the [8,4] symmetry. Adding a bisecting mirror through 2 vertices of an octagonal fundamental domain defines a trapezohedral *4422 symmetry. Adding 4 bisecting mirrors through the vertices defines *444 symmetry. Adding 4 bisecting mirrors through the edges defines *4222 symmetry. Adding all 8 bisectors leads to full *842 symmetry. The kaleidoscopic domains can be seen as a bicolored octagonal tiling, representing mirror images of the fundamental domain. This coloring represents the uniform tiling r{8,8}, a quasiregular tiling, and it can be called an octaoctagonal tiling. Related polyhedra and tiling This tiling is topologically related as a part of a sequence of regular tilings with octagonal faces, starting with the octagonal tiling, with Schläfli symbol {8,n}, and Coxeter diagram , progressing to infinity. This tiling is also topologically related as a part of a sequence of regular polyhedra and tilings with four faces per vertex, starting with the octahedron, with Schläfli symbol {n,4}, and Coxeter diagram , with n progressing to infinity. See also Square tiling Tilings of regular polygons List of uniform planar tilings List of regular polytopes References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Hyperbolic tilings Isogonal tilings Isohedral tilings Order-4 tilings Regular tilings Octagonal tilings
Order-4 octagonal tiling
[ "Physics" ]
616
[ "Isogonal tilings", "Tessellation", "Hyperbolic tilings", "Isohedral tilings", "Symmetry" ]
38,707,221
https://en.wikipedia.org/wiki/Truncated%20order-8%20octagonal%20tiling
In geometry, the truncated order-8 octagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of t0,1{8,8}. Uniform colorings This tiling can also be constructed in *884 symmetry with 3 colors of faces. Related polyhedra and tiling Symmetry The dual of the tiling represents the fundamental domains of (*884) orbifold symmetry. From [(8,8,4)] (*884) symmetry, there are 15 small index subgroups (11 unique) obtained by mirror removal and alternation operators. A mirror can be removed if its branch orders are all even, and removing it cuts the neighboring branch orders in half. Removing two mirrors leaves a half-order gyration point where the removed mirrors met. In these images fundamental domains are alternately colored black and white, and mirrors exist on the boundaries between colors. The symmetry can be doubled to 882 symmetry by adding a bisecting mirror across the fundamental domains. The subgroup index-8 group, [(1+,8,1+,8,1+,4)] (442442) is the commutator subgroup of [(8,8,4)]. References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) See also Square tiling Tilings of regular polygons List of uniform planar tilings List of regular polytopes External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Hyperbolic tilings Isogonal tilings Order-8 tilings Truncated tilings Uniform tilings Octagonal tilings
Truncated order-8 octagonal tiling
[ "Physics" ]
379
[ "Truncated tilings", "Isogonal tilings", "Tessellation", "Hyperbolic tilings", "Uniform tilings", "Symmetry" ]
37,260,503
https://en.wikipedia.org/wiki/Harald%20Hoyer
Harald Hoyer is a computer programmer and photographer, best known for developing the dracut initramfs generator and framework, the udev device manager of Linux, the systemd replacement for the System V init daemon and the Gummiboot EFI boot loader. Harald Hoyer also made various contributions to the Linux kernel, starting in 1997. In 2012, together with Kay Sievers, Hoyer was the main driving force behind merging the , and file system trees into in the Fedora distribution. He is employed by Red Hat, Inc. Harald Hoyer resides in Vaterstetten, Germany. References Free software programmers Living people 1971 births Red Hat employees
Harald Hoyer
[ "Technology" ]
136
[ "Computing stubs", "Computer specialist stubs" ]
37,264,560
https://en.wikipedia.org/wiki/COBie
Construction Operations Building Information Exchange (COBie) is a United States-originated specification relating to managed asset information including space and equipment. It is closely associated with building information modeling (BIM) approaches to design, construction, and management of built assets. Purpose COBie helps organisations to electronically capture and record important project data at the point of origin, including equipment lists, product data sheets, warranties, spare parts lists, and preventive maintenance schedules. This information is essential to support operations, maintenance and asset management once the built asset is in service, replacing reliance on uncoordinated, often paper-based, handover information typically created by people who did not participate in the project and delivered many months after the client has taken occupancy of the building (see figure 1). COBie has been incorporated into software for planning, design, construction, commissioning, operations, maintenance, and asset management. COBie may take several approved formats include spreadsheet, STEP-Part 21 (also called IFC file format), and ifcXML. The current COBie test data of record was developed by an international team of designers and builders in the US and UK. This information is available under Creative Commons Licence. History Initial concept (2006-2007 ) COBie was developed by Bill East, of the US Army Corps of Engineers, while at the Construction Engineering Research Laboratory in 2007. The project was funded with an initial grant from the US National Aeronautics and Space Administration (NASA) and the White House Office of Science and Technology Policy (through National Institute of Standards and Technology). Following this introduction, East has led COBie development through buildingSMART International (BSI; formerly the International Alliance for Interoperability) processes. Concept to adoption (2008-2015) From 2008 to 2015, the Construction Engineering Research Lab conducted a series of public events to demonstrate the ability of commercial software to produce and/or consume COBie data (in associated date-related version). In these events, software companies were often arranged at the front of a large conference room in order: planning, design, construction, maintenance management, and asset management. The COBie data flow (COBie is about building equipment only) was demonstrated. Over 90% of those participating delivered information in the COBie spreadsheet format. The other software exported spreadsheet format data from Coordination MVD STEP files so that others could use this data. In 2009, COBie version 2.26 was published as the buildingSMART International Basic FM Handover Model View Model Definition using the Industry Foundation Class Model 2x3. In December 2011, COBie 2.26 was approved and included by the US Chapter of buildingSMART International as part of its National Building Information Model (NBIMS-US) standard, version 2. Around this same time, the US buildingSMART alliance was de-listed as an authorized chapter of the buildingSMART International. In early 2013, buildingSMART was working on a lightweight XML format for COBie, COBieLite, which became available for review in April 2013. In September 2014, a code of practice regarding COBie was issued as a British Standard: "BS 1192-4:2014 Collaborative production of information Part 4: Fulfilling employer’s information exchange requirements using COBie – Code of practice". 
This requirement is a one line reference to the National Building Information Modeling Standard - United States (NBIMS-US), Chapter 4.2, the document that eventually published COBie version 2.4. In March 2015, the buildingSMART USA published COBie version 2.4 in NBIMS-US, Chapter 4.2. This COBie MVD, produced under contract to the Construction Engineering Research Lab, was created by the buildingSMART international Model View Definition support group, and was based on IFC 4. The main standard contains the project's Information Delivery Manual and Model View Definition as well as business case and implementation resources. Annex A defines the mapping from the EXPRESS-based data model to the COBie spreadsheet format. Annex B defines a National Information Exchange Model (NIEM) based XML schema suitable for use to capture transactional COBie data that does not require a full set of building information exchange. In 2017, the US General Services Administration required COBie as a deliverable in their capital programs in their P-100 document. Certification In 2019, buildingSMART international formed the COBie Certification Subcommittee composed of an international team of COBie experts (from US, UK, Ireland, China and Japan) to offer the COBie Certified Professional(TM) examination. This group published the COBie Educational Curriculum and began offering the COBie Certified Professional exam in 2020. This exam is a two-hour 160 question in-depth exam. To support those interested in sitting for this exam, bSI also introduced a program to evaluate and register educational programs whose courses addressed the content found in the COBie Educational Curriculum. In 2020, buildingSMART international's COBie Certification Subcommittee prepared an introductory "Foundation" level exam. This exam covers the basic facts about the US COBie specification and is available for any bSI Chapter to implement. Unlike the COBie Certified Professional(TM) exam, bSI does require a completion of an authorized training program and the use of a common COBie "book of knowledge". The bSI authorized COBie book of knowledge was published in English in 2021. Translations to German, Portuguese, and Portuguese (Brazilian) are now underway. buildingSMART International's COBie Certification Subcommittee considers this certification activities to be a transitional activity allowing bSI to support an increasingly widespread use of the US-specification while a future more widely acceptable and improved ISO-based replacement is produced. ISO replacement In 2020, buildingSMART international began a project to replace the US-specification with an international standard. The project began by documenting the many lessons learned from the previous 15 years of use of the US specification and updated the original bSI Basic Facility Management Handover Model View Definition. By July 2020, this project had reached the approved activity proposal stage of the bSI standards process. A video about the purpose and content was also published. in 2021, after a delay of over a year due to a dispute regarding the naming rights of the future ISO, it was determined that bSI would no longer include the acronym COBie in its project. With bSI no longer reliant on the previous US-specific name, its project has improved clarity, approach and scope. 
Given wide interest in delivering ISO-standards supporting many types of projects, not just buildings, bSI's strategic approach is to develop a set of ISO 16739 based specifications that will entirely replace the US specification (the project is called Facility Management Handover - Equipment Maintenance). The project goal is to directly support the handover of building equipment maintenance information while addressing references to objects outside of the building domain. For example, a potential future project, FM Handover - Tunnel Maintenance, would simply replace the IFC objects for buildings with those that describe maintainable items in the "Tunnel" domain. This strategy will also provide the ISO core for information uses needed not only for FM Handover but also for FM activities themselves. bSI planned to roll out this strategy in a series of position papers during 2022, and to enroll members in the project. References Data modeling Computer-aided design Construction Building engineering Building information modeling
COBie
[ "Engineering" ]
1,497
[ "Computer-aided design", "Design engineering", "Building engineering", "Data modeling", "Construction", "Data engineering", "Civil engineering", "Building information modeling", "Architecture" ]
37,265,571
https://en.wikipedia.org/wiki/Touschek%20effect
The Touschek effect describes the scattering and loss of charged particles in a storage ring. It was discovered by Bruno Touschek. It is determined by the average of the scattering rate around the ring. In fact, since the momentum acceptance for scattering with energy gain may be different from that for scattering with energy loss, the lifetime must be computed by taking into account both the positive and negative momentum acceptances. A formula for the local scattering rate was given by Bruck; it involves the classical particle radius, the speed of light c, the number of particles N, the relativistic gamma factor, the momentum acceptance, the RMS horizontal, vertical, and bunch sizes, and a function F. A more accurate formula, valid in a wider range of conditions, is derived by Piwinski. Momentum acceptance calculation The standard procedure for computing the momentum acceptance via a tracking code was defined in the paper by Belgroune et al. from the SOLEIL synchrotron. Calculation in beam dynamics codes In order to compute the Touschek lifetime for a real storage ring, one needs a beam dynamics code. The Piwinski formula may be used together with the Elegant code, for example. References Accelerator physics
Touschek effect
[ "Physics" ]
261
[ "Applied and interdisciplinary physics", "Accelerator physics", "Experimental physics" ]
37,270,550
https://en.wikipedia.org/wiki/Dynamic%20aperture%20%28accelerator%20physics%29
The dynamic aperture is the stability region of phase space in a circular accelerator. For hadrons In the case of protons or heavy ion accelerators (or synchrotrons, or storage rings), there is minimal radiation, and hence the dynamics is symplectic. For long term stability, tiny dynamical diffusion (or Arnold diffusion) can lead an initially stable orbit slowly into an unstable region. This makes the dynamic aperture problem particularly challenging. One may be considering stability over billions of turns. A scaling law for dynamic aperture vs. number of turns has been proposed by Giovannozzi. For electrons For the case of electrons, the electrons will radiate, which causes a damping effect. This means that one typically only cares about stability over thousands of turns. Methods to compute or optimize dynamic aperture The basic method for computing dynamic aperture involves the use of a tracking code. A model of the ring is built within the code that includes an integration routine for each magnetic element. The particle is tracked for many turns and stability is determined. In addition, there are other quantities that may be computed to characterize the dynamics and can be related to the dynamic aperture. One example is the tune shift with amplitude. There have also been other proposals for approaches to enlarge the dynamic aperture. References Accelerator physics Dynamical systems
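The tracking approach can be illustrated with a toy model. The sketch below is only an illustration under strong simplifying assumptions: it iterates a Hénon-like quadratic one-turn map rather than a real element-by-element lattice model, and scans launch amplitudes for the largest one that survives a fixed number of turns.

```python
import math

def survives(x0, px0, tune=0.205, turns=1000, limit=1.0):
    """Track one particle through a Henon-like one-turn map; True if it stays bounded."""
    mu = 2.0 * math.pi * tune
    c, s = math.cos(mu), math.sin(mu)
    x, px = x0, px0
    for _ in range(turns):
        kicked = px + x * x                      # sextupole-like nonlinear kick
        x, px = c * x + s * kicked, -s * x + c * kicked
        if abs(x) > limit or abs(px) > limit:    # crude "lost particle" criterion
            return False
    return True

def dynamic_aperture(step=0.005):
    """Largest horizontal launch amplitude (px = 0) that survives the tracking."""
    amplitude = 0.0
    while survives(amplitude + step, 0.0):
        amplitude += step
    return amplitude

print(f"approximate stable amplitude: {dynamic_aperture():.3f}")
```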
Dynamic aperture (accelerator physics)
[ "Physics", "Mathematics" ]
273
[ "Applied and interdisciplinary physics", "Mechanics", "Experimental physics", "Accelerator physics", "Dynamical systems" ]
21,607,958
https://en.wikipedia.org/wiki/Breachway
A breachway is the shore along a channel, or the whole area around the place where a channel meets the ocean. The Rhode Island coastline has many breachways on its map. Today's permanent breachways have rock jetties that line the sides of the channel to protect against erosion or closing of the waterway. The water channels usually lead to salt water ponds. External links Coastal construction Coastal engineering
Breachway
[ "Engineering" ]
81
[ "Construction", "Coastal engineering", "Coastal construction", "Civil engineering" ]
21,609,404
https://en.wikipedia.org/wiki/Tolerance%20analysis
Tolerance analysis is the general term for activities related to the study of accumulated variation in mechanical parts and assemblies. Its methods may be used on other types of systems subject to accumulated variation, such as mechanical and electrical systems. Engineers analyze tolerances for the purpose of evaluating geometric dimensioning and tolerancing (GD&T). Methods include 2D tolerance stacks, 3D Monte Carlo simulations, and datum conversions. Tolerance stackups or tolerance stacks are used to describe the problem-solving process in mechanical engineering of calculating the effects of the accumulated variation that is allowed by specified dimensions and tolerances. Typically these dimensions and tolerances are specified on an engineering drawing. Arithmetic tolerance stackups use the worst-case maximum or minimum values of dimensions and tolerances to calculate the maximum and minimum distance (clearance or interference) between two features or parts. Statistical tolerance stackups evaluate the maximum and minimum values based on the absolute arithmetic calculation combined with some method for establishing likelihood of obtaining the maximum and minimum values, such as Root Sum Square (RSS) or Monte-Carlo methods. Modeling In performing a tolerance analysis, there are two fundamentally different analysis tools for predicting stackup variation: worst-case analysis and statistical analysis. Worst-case Worst-case tolerance analysis is the traditional type of tolerance stackup calculation. The individual variables are placed at their tolerance limits in order to make the measurement as large or as small as possible. The worst-case model does not consider the distribution of the individual variables, but rather that those variables do not exceed their respective specified limits. This model predicts the maximum expected variation of the measurement. Designing to worst-case tolerance requirements guarantees 100 percent of the parts will assemble and function properly, regardless of the actual component variation. The major drawback is that the worst-case model often requires very tight individual component tolerances. The obvious result is expensive manufacturing and inspection processes and/or high scrap rates. Worst-case tolerancing is often required by the customer for critical mechanical interfaces and spare part replacement interfaces. When worst-case tolerancing is not a contract requirement, properly applied statistical tolerancing can ensure acceptable assembly yields with increased component tolerances and lower fabrication costs. Statistical variation The statistical variation analysis model takes advantage of the principles of statistics to relax the component tolerances without sacrificing quality. Each component's variation is modeled as a statistical distribution and these distributions are summed to predict the distribution of the assembly measurement. Thus, statistical variation analysis predicts a distribution that describes the assembly variation, not the extreme values of that variation. This analysis model provides increased design flexibility by allowing the designer to design to any quality level, not just 100 percent. There are two chief methods for performing the statistical analysis. In one, the expected distributions are modified in accordance with the relevant geometric multipliers within tolerance limits and then combined using mathematical operations to provide a composite of the distributions. 
The geometric multipliers are generated by making small deltas to the nominal dimensions. The immediate value to this method is that the output is smooth, but it fails to account for geometric misalignment allowed for by the tolerances; if a size dimension is placed between two parallel surfaces, it is assumed the surfaces will remain parallel, even though the tolerance does not require this. Because the CAD engine performs the variation sensitivity analysis, there is no output available to drive secondary programs such as stress analysis. In the other, the variations are simulated by allowing random changes to geometry, constrained by expected distributions within allowed tolerances with the resulting parts assembled, and then measurements of critical places are recorded as if in an actual manufacturing environment. The collected data is analyzed to find a fit with a known distribution and mean and standard deviations derived from them. The immediate value to this method is that the output represents what is acceptable, even when that is from imperfect geometry and, because it uses recorded data to perform its analysis, it is possible to include actual factory inspection data into the analysis to see the effect of proposed changes on real data. In addition, because the engine for the analysis is performing the variation internally, not based on CAD regeneration, it is possible to link the variation engine output to another program. For example, a rectangular bar may vary in width and thickness; the variation engine could output those numbers to a stress program which passes back peak stress as a result and the dimensional variation be used to determine likely stress variations. The disadvantage is that each run is unique, so there will be variation from analysis to analysis for the output distribution and mean, just like would come from a factory. While no official engineering standard covers the process or format of tolerance analysis and stackups, these are essential components of good product design. Tolerance stackups should be used as part of the mechanical design process, both as a predictive and a problem-solving tool. The methods used to conduct a tolerance stackup depend somewhat upon the engineering dimensioning and tolerancing standards that are referenced in the engineering documentation, such as American Society of Mechanical Engineers (ASME) Y14.5, ASME Y14.41, or the relevant ISO dimensioning and tolerancing standards. Understanding the tolerances, concepts and boundaries created by these standards is vital to performing accurate calculations. Tolerance stackups serve engineers by: helping them study dimensional relationships within an assembly. giving designers a means of calculating part tolerances. helping engineers compare design proposals. helping designers produce complete drawings. Concept of Tolerance vector loop The starting point for the tolerance loop; typically this is one side of an intended gap, after pushing the various parts in the assembly to one side or another of their loose range of motion. Vector loops define the assembly constraints that locate the parts of the assembly relative to each other. The vectors represent the dimensions that contribute to tolerance stackup in the assembly. The vectors are joined tip-to-tail, forming a chain, passing through each part in the assembly in succession. A vector loop must obey certain modeling rules as it passes through a part. 
It must: enter through a joint, follow the datum path to the Datum Reference Frame (DRF), follow a second datum path leading to another joint, and exit to the next adjacent part in the assembly. Additional modeling rules for vector loops include: Loops must pass through every part and every joint in the assembly. A single vector loop may not pass through the same part or the same joint twice, but it may start and end in the same part. If a vector loop includes exactly the same dimension twice, in opposite directions, the dimension is redundant and must be omitted. There must be enough loops to solve for all of the kinematic variables (joint degrees of freedom). You will need one loop for each three variables. The above rules will vary depending on whether 1D, 2D or 3D tolerance stackup method is used. Concerns with tolerance stackups A safety factor is often included in designs because of concerns about: Operational temperature and pressure of the parts or assembly. Wear. Deflection of components after assembly. The possibility or probability that the parts are slightly out of specification (but passed inspection). The sensitivity or importance of the stack (what happens if the design conditions are not met). See also Tolerance coning References ASME publication Y14.41-2003, Digital Product Definition Data Practices Alex Krulikowski (1994), Tolerance Stacks using GD&T, Bryan R. Fischer (2011), Mechanical Tolerance Stackup and Analysis, Jason Tynes (2012), Make It Fit: Introduction to Tolerance Analysis for Mechanical Engineers, Kenneth W. Chase (1999), Tolerance Analysis of 2-D and 3-D Assemblies, Department of Mechanical Engineering Brigham Young University http://www.ttc-cogorno.com/Newsletters/140117ToleranceAnalysis.pdf External links http://www.engineersedge.com/tolerance_chart.htm Geometric Tolerances, Limits Fits Charts, Tolerance Analysis Calculators http://adcats.et.byu.edu/home.php https://tolerancestackup.com/gdt/ https://www.sigmetrix.com/what-is-tolerance-analysis/ Mechanical engineering Statistical process control
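As a concrete illustration of the worst-case, RSS, and Monte Carlo approaches described above, here is a small sketch for a one-dimensional gap stackup. The dimensions and tolerances are made-up values, and the Monte Carlo step assumes independent, centered normal distributions with each tolerance taken as three standard deviations.

```python
import math
import random

# Gap = housing length minus three stacked parts; (nominal, +/- tolerance) in mm, illustrative only.
housing = (50.00, 0.10)
parts = [(12.00, 0.05), (18.00, 0.05), (19.50, 0.08)]

nominal_gap = housing[0] - sum(n for n, _ in parts)

# Worst case: every contributor sits at its tolerance limit.
worst_case = housing[1] + sum(t for _, t in parts)

# Statistical (RSS): root sum square of the individual tolerances.
rss = math.sqrt(housing[1] ** 2 + sum(t ** 2 for _, t in parts))

# Monte Carlo: sample each dimension and look at the spread of the resulting gap.
def sample_gap():
    h = random.gauss(housing[0], housing[1] / 3.0)
    return h - sum(random.gauss(n, t / 3.0) for n, t in parts)

samples = [sample_gap() for _ in range(100_000)]
mean = sum(samples) / len(samples)
sigma = math.sqrt(sum((s - mean) ** 2 for s in samples) / (len(samples) - 1))

print(f"nominal gap {nominal_gap:.3f} mm")
print(f"worst case +/- {worst_case:.3f} mm, RSS +/- {rss:.3f} mm, Monte Carlo 3-sigma +/- {3 * sigma:.3f} mm")
```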
Tolerance analysis
[ "Physics", "Engineering" ]
1,679
[ "Statistical process control", "Applied and interdisciplinary physics", "Engineering statistics", "Mechanical engineering" ]
21,610,688
https://en.wikipedia.org/wiki/Joint%20compatibility%20branch%20and%20bound
Joint compatibility branch and bound (JCBB) is an algorithm in computer vision and robotics commonly used for data association in simultaneous localization and mapping. JCBB measures the joint compatibility of a set of pairings that successfully rejects spurious matchings and is hence known to be robust in complex environments. References Computer vision Robot control
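The following sketch illustrates the idea behind JCBB rather than reproducing the published algorithm: pairings of observations to map features are extended depth-first, each candidate set is kept only if its stacked innovation passes a joint chi-square gate, and a simple bound prunes branches that cannot beat the best set found so far. The functions returning the joint innovation and its covariance are assumed to be supplied by the surrounding SLAM filter.

```python
import numpy as np
from scipy.stats import chi2

def jointly_compatible(pairings, innovation_fn, covariance_fn, alpha=0.05):
    """Chi-square gate on the joint Mahalanobis distance of a set of (observation, feature) pairings."""
    if not pairings:
        return True
    h = innovation_fn(pairings)                  # stacked innovation vector
    S = covariance_fn(pairings)                  # joint innovation covariance
    d2 = float(h @ np.linalg.solve(S, h))
    return d2 < chi2.ppf(1.0 - alpha, df=len(h))

def jcbb(n_obs, features, innovation_fn, covariance_fn, i=0, pairings=(), best=()):
    """Depth-first branch and bound maximising the number of jointly compatible pairings."""
    if i == n_obs:
        return pairings if len(pairings) > len(best) else best
    if len(pairings) + (n_obs - i) <= len(best):   # bound: cannot beat the incumbent
        return best
    used = {f for _, f in pairings}
    for f in features:
        if f in used:
            continue
        trial = pairings + ((i, f),)
        if jointly_compatible(trial, innovation_fn, covariance_fn):
            best = jcbb(n_obs, features, innovation_fn, covariance_fn, i + 1, trial, best)
    # Also branch on leaving observation i unmatched (e.g. a spurious measurement).
    return jcbb(n_obs, features, innovation_fn, covariance_fn, i + 1, pairings, best)
```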
Joint compatibility branch and bound
[ "Engineering" ]
66
[ "Robotics engineering", "Packaging machinery", "Robot control", "Artificial intelligence engineering", "Computer vision" ]
21,614,103
https://en.wikipedia.org/wiki/Chemistry%20of%20Materials
Chemistry of Materials is a peer-reviewed scientific journal, published since 1989 by the American Chemical Society. It was founded by Leonard V. Interrante, who was the Editor-in-Chief until 2013. Jillian M. Buriak took over as Editor-in-Chief in January 2014. She was followed by Sara E. Skrabalak, who assumed the position of Editor-in-Chief in November 2020. Abstracting, indexing, and impact factor According to the Journal Citation Reports, Chemistry of Materials has a 2022 impact factor of 8.6. It is indexed in the following bibliographic databases: Scopus Web of Science British Library CAS Source Index See also ACS Materials Letters References External links Chemistry journals Materials science journals American Chemical Society academic journals Academic journals established in 1980 English-language journals
Chemistry of Materials
[ "Materials_science", "Engineering" ]
168
[ "Materials science stubs", "Materials science journals", "Materials science journal stubs", "Materials science" ]
21,614,781
https://en.wikipedia.org/wiki/CHELPG
CHELPG (CHarges from ELectrostatic Potentials using a Grid-based method) is an atomic charge calculation scheme developed by Breneman and Wiberg, in which atomic charges are fitted to reproduce the molecular electrostatic potential (MESP) at a number of points around the molecule. The charge calculation methods based on fitting of MESP (including CHELPG) are not well-suitable for the treatment of larger systems, where some of the innermost atoms are located far away from the points at which the MESP is computed. In such a situation, variations of the innermost atomic charges will not lead to significant changes of the MESP outside of the molecule, which means accurate values for the innermost atomic charges are not well-determined by the MESP outside of the molecule. This problem is solved by density derived electrostatic and chemical (DDEC) methods that partition the electron density cloud in order to provide chemically meaningful net atomic charges that approximately reproduce the electrostatic potential surrounding the material. It should be remembered that atomic charges depend on the molecular conformation. The representative atomic charges for flexible molecules hence should be computed as average values over several molecular conformations. A number of alternative MESP charge schemes have been developed, such as those employing Connolly surfaces or geodesic point selection algorithms, in order to improve rotational invariance by increasing the point selection density and reducing anisotropies in the sampled points on the MESP surface. While CHELPG is restricted to non-periodic (e.g., molecular) systems, the DDEC methods can be applied to both non-periodic and periodic materials. CHELPG charges can be computed using the popular ab initio quantum chemical packages such as Gaussian, GAMESS-US and ORCA. References Quantum chemistry Theoretical chemistry Computational chemistry
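At its core, any MESP-fitting scheme of this kind reduces to a constrained least-squares problem: choose atomic point charges that best reproduce the electrostatic potential sampled at grid points, subject to the charges summing to the total molecular charge. The sketch below shows only that fitting step, in atomic units, with the CHELPG-specific grid construction and exclusion radii omitted; all inputs are illustrative. Schemes such as CHELPG, Connolly-surface fitting and geodesic point selection differ mainly in how those grid points are chosen.

```python
import numpy as np

def fit_esp_charges(atom_coords, grid_points, esp_values, total_charge=0.0):
    """Least-squares fit of point charges to a sampled electrostatic potential (atomic units),
    with a Lagrange multiplier enforcing the total-charge constraint."""
    # Potential at grid point k due to a unit charge on atom j is 1 / r_kj.
    r = np.linalg.norm(grid_points[:, None, :] - atom_coords[None, :, :], axis=2)
    A = 1.0 / r
    n = atom_coords.shape[0]
    # Normal equations augmented with the sum-of-charges constraint.
    lhs = np.zeros((n + 1, n + 1))
    lhs[:n, :n] = A.T @ A
    lhs[:n, n] = 1.0
    lhs[n, :n] = 1.0
    rhs = np.concatenate([A.T @ esp_values, [total_charge]])
    return np.linalg.solve(lhs, rhs)[:n]
```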
CHELPG
[ "Physics", "Chemistry" ]
368
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", "Computational chemistry", " molecular", "nan", "Atomic", " and optical physics" ]
2,927,563
https://en.wikipedia.org/wiki/Etch%20pit%20density
The etch pit density (EPD) is a measure of the quality of semiconductor wafers. Etching An etch solution is applied to the surface of the wafer, where the etch rate is increased at dislocations of the crystal, resulting in pits. For GaAs one typically uses molten KOH at 450 degrees Celsius for about 40 minutes in a zirconium crucible. The density of the pits can be determined by optical contrast microscopy. Silicon wafers usually have a very low density of < 100 cm−2, while semi-insulating GaAs wafers have a density on the order of 10^5 cm−2. Germanium detectors High-purity germanium detectors require the Ge crystals to be grown with a controlled range of dislocation density to reduce impurities. The etch pit density requirement is typically within the range 10^3 to 10^4 cm−2. Standards The etch pit density can be determined according to DIN 50454-1 and ASTM F 1404. References Semiconductors
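Turning counted pits into an EPD figure is simple arithmetic; the sketch below uses made-up counts and field-of-view sizes purely as an illustration.

```python
# Pits counted in several microscope fields of view, each 0.5 mm x 0.5 mm (illustrative numbers).
counts = [12, 9, 15, 11]
field_area_cm2 = 0.05 * 0.05                      # 0.5 mm = 0.05 cm
epd = sum(counts) / (len(counts) * field_area_cm2)
print(f"etch pit density ~ {epd:.0f} cm^-2")      # ~4700 cm^-2 for these counts
```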
Etch pit density
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
208
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
2,928,212
https://en.wikipedia.org/wiki/Bauschinger%20effect
The Bauschinger effect refers to a property of materials where the material's stress/strain characteristics change as a result of the microscopic stress distribution of the material. For example, an increase in tensile yield strength occurs at the expense of compressive yield strength. The effect is named after German engineer Johann Bauschinger. While more tensile cold working increases the tensile yield strength, the local initial compressive yield strength after tensile cold working is actually reduced. The greater the tensile cold working, the lower the compressive yield strength. It is a general phenomenon found in most polycrystalline metals. Based on the cold work structure, two types of mechanisms are generally used to explain the Bauschinger effect: Local back stresses may be present in the material, which assist the movement of dislocations in the reverse direction. The pile-up of dislocations at grain boundaries and Orowan loops around strong precipitates are two main sources of these back stresses. When the strain direction is reversed, dislocations of the opposite sign can be produced from the same source that produced the slip-causing dislocations in the initial direction. Dislocations with opposite signs can attract and annihilate each other. Since strain hardening is related to an increased dislocation density, reducing the number of dislocations reduces strength. The net result is that the yield strength for strain in the opposite direction is less than it would be if the strain had continued in the initial direction. Mechanism of action The Bauschinger effect is primarily attributed to the interaction between dislocations and the internal stress fields within the material. Initially, as external stress is applied, dislocations are generated and traverse the crystal lattice, creating internal stress fields. These fields, in turn, interact with the applied stress, leading to a phenomenon known as work hardening or strain hardening. With the accumulation of dislocations, the material's yield strength rises, hindering further plastic deformation. When stresses are applied in the reverse direction, the dislocations are now aided by the back stresses that were previously present at the dislocation barriers, while the back stresses at the dislocation barriers behind them are not likely to be as strong. Hence the dislocations glide easily, resulting in a lower yield stress for plastic deformation in the reversed loading direction. The Bauschinger effect varies in magnitude based on factors like material composition, crystal structure, and prior plastic deformation. Materials with a higher density of dislocations and more internal stress fields tend to exhibit a more pronounced Bauschinger effect. Additionally, the Bauschinger effect often accompanies other phenomena, such as permanent softening and transient effects. There is also a considerable contribution of residual lattice stresses/strains to the Bauschinger effect in materials, associated with anisotropy in deformation. During loading-unloading cycles, dislocations do not return to their original positions after unloading, which leaves residual strains in the lattice. These strains interact with stresses applied in the opposite direction, which affects the material's response to subsequent loading-unloading cycles. The biggest effect observed is plastic yield asymmetry, wherein the material will yield at different values in different loading directions [1]. 
There are three types of residual stresses - type I, type II and type III - that contribute to the Bauschinger effect in polycrystalline materials. Type I residual stresses arise during manufacturing due to thermal gradients and usually self-equilibrate over a length comparable to the macroscopic dimension of the material. So, they do not contribute significantly to the Bauschinger effect [2]. However, type II stresses equilibrate at the grain size scale and thus contribute significantly to the Bauschinger effect. They result from strain incompatibility between neighboring grains due to plastic and elastic anisotropy. Thus, they are responsible for changing the material's yield behavior along different directions by affecting dislocation motion along these differently oriented grains [3]. Type III stresses on the other hand arise due to mismatch between the soft matrix material and hard precipitates or dislocation cell walls (microstructural elements). They act over extremely short distances but significantly affect areas having microstructural heterogeneity. Dislocation pile-ups or stress concentrations at the grain boundaries are examples of this type of residual stress [4], [5]. As a whole, these three types of residual stresses impact properties like strength, flexibility, fatigue and durability. Thus, understanding the mechanism of residual stresses is important to mitigate the influence of the Bauschinger effect. References [1]  A. A. Mamun, R. J. Moat, J. Kelleher, and P. J. Bouchard, “Origin of the Bauschinger effect in a polycrystalline material,” Mater. Sci. Eng. A, vol. 707, pp. 576–584, Nov. 2017, doi: 10.1016/j.msea.2017.09.091. [2]  J. Hu, B. Chen, D. J. Smith, P. E. J. Flewitt, and A. C. F. Cocks, “On the evaluation of the Bauschinger effect in an austenitic stainless steel—The role of multi-scale residual stresses,” Int. J. Plast., vol. 84, pp. 203–223, Sep. 2016, doi: 10.1016/j.ijplas.2016.05.009. [3]  B. Chen et al., “Role of the misfit stress between grains in the Bauschinger effect for a polycrystalline material,” Acta Mater., vol. 85, pp. 229–242, Feb. 2015, doi: 10.1016/j.actamat.2014.11.021. [4]  J. H. Kim, D. Kim, F. Barlat, and M.-G. Lee, “Crystal plasticity approach for predicting the Bauschinger effect in dual-phase steels,” Mater. Sci. Eng. A, vol. 539, pp. 259–270, Mar. 2012, doi: 10.1016/j.msea.2012.01.092. [5]  C.-S. Han, R. H. Wagoner, and F. Barlat, “On precipitate induced hardening in crystal plasticity: theory,” Int. J. Plast., vol. 20, no. 3, pp. 477–494, Mar. 2004, doi: 10.1016/S0749-6419(03)00098-6. Consequence of the Bauschinger effect Metal forming operations result in situations exposing the metal workpiece to stresses of reversed sign. The Bauschinger effect contributes to work softening of the workpiece, for example in straightening of drawn bars or rolled sheets, where rollers subject the workpiece to alternate bending stresses, thereby reducing the yield strength and enabling greater cold drawability of the workpiece. Implications The Bauschinger effect has implications for various fields because of its influence on the mechanical behavior of metallic materials subjected to cyclic loading. It is particularly relevant in applications involving cyclic loading or loading with changes in stress direction, facilitating the design and optimization of engineering structures. Seismic Analysis: Earthquake engineering and seismic design are crucial aspects of geology engineering. 
During earthquakes, structural components endure alternating stress directions, with the Bauschinger effect influencing material response, energy dissipation, and potential damage accumulation. The Giuffré-Menegotto-Pinto model is widely utilized to accurately predict the seismic performance of structures by incorporating the Bauschinger effect. This model introduces a transition curve in the stress-strain relationship to capture both the Bauschinger effect and the pinching behavior observed in reinforced concrete structures under cyclic loading. Fatigue Life Prediction: Researchers have developed methods and models to incorporate the Bauschinger effect into fatigue life prediction techniques, such as the strain-life and energy-based approaches. This plays a pivotal role in predicting and designing the fatigue life of machinery, vehicles, and engineering structures. A clear understanding of the Bauschinger effect ensures accurate predictions, enhancing the reliability and safety of components subjected to cyclic loading conditions. The strain-life approach correlates the plastic strain amplitude with the number of cycles to failure, while the energy-based approach considers plastic strain energy as a driving force for fatigue damage accumulation. These models integrate the Bauschinger effect by adjusting the calculation of plastic strain energy or introducing additional energy terms to address the asymmetry in hysteresis loops caused by the effect. Aerospace and Automotive Engineering: In aerospace engineering, materials undergo repeated loading cycles during flight, leading to fatigue and deformation. Similarly, in the automotive industry, vehicles endure cyclic loading due to road conditions and operations. Understanding the Bauschinger effect is crucial for predicting material behavior under such conditions and designing components with improved fatigue resistance. Research in this domain focuses on characterizing the Bauschinger effect in alloys and developing predictive models to assess fatigue life, ensuring structural integrity and reliability. Metal Forming: The Bauschinger effect significantly influences the material's flow behavior, strain distribution, and required forming loads during these processes. Hence, understanding the Bauschinger effect is significant for optimizing forming processes, predicting material behavior. Mitigation of the Bauschinger effect To mitigate the influence of the Bauschinger effect and enhance the performance of metallic materials, several strategies and techniques have been developed, including heat and surface treatments, the use of composite materials, and composition optimization. Surface Treatment: This method aims to alleviate the Bauschinger effect by changing the surface properties of metallic materials. Common treatments include creating a protective layer or modifying the surface microstructure through processes such as physical vapor deposition (PVD) coatings. This treatment reduces the Bauschinger effect in the near-surface regions. Another effective approach is shot peening, where high-velocity particles impact the material's surface, inducing compressive residual stresses. These stresses counteract the internal tensile stresses associated with the Bauschinger effect to reduce its impact. Heat Treatment: Heat treatment and thermomechanical processing are widely used to mitigate the Bauschinger effect by relieving residual stresses and dislocation structures within the material. 
Stress relief annealing is a common approach, where the material is heated to a specific temperature and held for a certain duration, allowing dislocations to rearrange and internal stresses to dissipate. This process reduces the Bauschinger effect by minimizing internal stress fields and achieving a more uniform distribution of dislocations. Composition Optimization and Composite Materials: Optimizing material composition is another effective approach for mitigation, as certain compositions and microstructures exhibit reduced Bauschinger effect. Materials with high stacking fault energy, such as aluminum alloys and austenitic stainless steels, tend to show less pronounced Bauschinger effect due to their enhanced ability to accommodate dislocations. Additionally, hybrid and composite materials offer mitigation potential. Metal-matrix composites (MMCs), for instance, consist of a metallic matrix reinforced with ceramic particles or fibers, which can reduce the Bauschinger effect by constraining dislocation motion in the matrix. Moreover, laminated or graded composite structures strategically combine different materials to mitigate the Bauschinger effect in critical regions while maintaining desired properties elsewhere. See also Backlash, a similar extrinsic behavior in large-scale mechanical systems. Fatigue References Materials science
Bauschinger effect
[ "Physics", "Materials_science", "Engineering" ]
2,467
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
2,928,452
https://en.wikipedia.org/wiki/Benedict%E2%80%93Webb%E2%80%93Rubin%20equation
The Benedict–Webb–Rubin equation (BWR), named after Manson Benedict, G. B. Webb, and L. C. Rubin, is an equation of state used in fluid dynamics. Working at the research laboratory of the M. W. Kellogg Company, the three researchers rearranged the Beattie–Bridgeman equation of state and increased the number of experimentally determined constants to eight. The original BWR equation is p = ρRT + (B₀RT − A₀ − C₀/T²)ρ² + (bRT − a)ρ³ + aαρ⁶ + (cρ³/T²)(1 + γρ²)exp(−γρ²), where ρ is the molar density. The BWRS equation of state A modification of the Benedict–Webb–Rubin equation of state by Professor Kenneth E. Starling of the University of Oklahoma is p = ρRT + (B₀RT − A₀ − C₀/T² + D₀/T³ − E₀/T⁴)ρ² + (bRT − a − d/T)ρ³ + α(a + d/T)ρ⁶ + (cρ³/T²)(1 + γρ²)exp(−γρ²), where ρ is the molar density. The 11 mixture parameters (B₀, A₀, etc.) are calculated from the corresponding pure-component parameters using mixing rules in which the summations run over all components, each component is weighted by its mole fraction, and a binary interaction parameter enters the cross terms. Values of the various parameters for 15 substances can be found in Starling's Fluid Properties for Light Petroleum Systems. The modified BWR equation (mBWR) A further modification of the Benedict–Webb–Rubin equation of state was made by Jacobsen and Stewart. The mBWR equation subsequently evolved into a 32-term version (Younglove and Ely, 1987) with numerical parameters determined by fitting the equation to empirical data for a reference fluid. Other fluids are then described by using reduced variables for temperature and density. See also Real gas References Further reading Equations of fluid dynamics
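A minimal sketch of evaluating the original eight-constant BWR form for pressure at a given molar density and temperature; the function simply transcribes the equation above, consistent units are assumed, and real use requires fitted constants for the fluid of interest (none are supplied here).

```python
import math

def bwr_pressure(rho, T, R, A0, B0, C0, a, b, c, alpha, gamma):
    """Original Benedict-Webb-Rubin form: pressure as a function of molar density rho and temperature T."""
    return (R * T * rho
            + (B0 * R * T - A0 - C0 / T**2) * rho**2
            + (b * R * T - a) * rho**3
            + a * alpha * rho**6
            + (c * rho**3 / T**2) * (1.0 + gamma * rho**2) * math.exp(-gamma * rho**2))
```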
Benedict–Webb–Rubin equation
[ "Physics", "Chemistry" ]
318
[ "Equations of fluid dynamics", "Equations of physics", "Fluid dynamics" ]