Dataset schema: id (int64, 39 to 79M); url (string, 32 to 168 chars); text (string, 7 to 145k chars); source (string, 2 to 105 chars); categories (list, 1 to 6 items); token_count (int64, 3 to 32.2k); subcategories (list, 0 to 27 items).
24,146,425
https://en.wikipedia.org/wiki/Bound%20water
In hydrology, bound water is an extremely thin layer of water surrounding mineral surfaces. Water molecules have a strong electrical polarity, meaning that one side of the molecule carries a strong positive charge and the other a strong negative charge. This causes water molecules to bond to each other and to other charged surfaces, such as soil minerals. Clay in particular has a high ability to bond with water molecules. The strong attraction between these surfaces causes an extremely thin water film (a few molecules thick) to form on the mineral surface. These water molecules are much less mobile than the rest of the water in the soil and have significant effects on soil dielectric permittivity and freezing–thawing behaviour. In molecular biology and food science, bound water refers to the water in body tissues that is bound to macromolecules or organelles. In food science, this form of water is practically unavailable for microbiological activity, so it does not contribute to quality degradation or pathogen growth. See also Adsorption Capillary action Effective porosity Surface tension References Hydrology Soil mechanics Soil physics Water
Bound water
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
226
[ "Hydrology", "Applied and interdisciplinary physics", "Soil mechanics", "Soil physics", "Environmental engineering", "Water" ]
24,146,591
https://en.wikipedia.org/wiki/Poincar%C3%A9%20series%20%28modular%20form%29
In number theory, a Poincaré series is a mathematical series generalizing the classical theta series that is associated to any discrete group of symmetries of a complex domain, possibly of several complex variables. In particular, they generalize classical Eisenstein series. They are named after Henri Poincaré. If Γ is a finite group acting on a domain D and H(z) is any meromorphic function on D, then one obtains an automorphic function by averaging over Γ: However, if Γ is a discrete group, then additional factors must be introduced in order to assure convergence of such a series. To this end, a Poincaré series is a series of the form where Jγ is the Jacobian determinant of the group element γ, and the asterisk denotes that the summation takes place only over coset representatives yielding distinct terms in the series. The classical Poincaré series of weight 2k of a Fuchsian group Γ is defined by the series the summation extending over congruence classes of fractional linear transformations belonging to Γ. Choosing H to be a character of the cyclic group of order n, one obtains the so-called Poincaré series of order n: The latter Poincaré series converges absolutely and uniformly on compact sets (in the upper half-plane), and is a modular form of weight 2k for Γ. Note that, when Γ is the full modular group and n = 0, one obtains the Eisenstein series of weight 2k. In general, the Poincaré series is, for n ≥ 1, a cusp form. Notes References Automorphic forms Modular forms Mathematical series
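The displayed equations referred to above were lost in extraction. The standard forms from the literature, which the text's definitions match, are as follows (notation follows the surrounding prose: H is the averaged function, Jγ the Jacobian, and the asterisk marks summation over distinct coset representatives):

```latex
% Averaging over a finite group \Gamma acting on D:
f(z) = \sum_{\gamma \in \Gamma} H(\gamma(z))

% General Poincar\'e series for a discrete group, with convergence factors:
\theta(z) = {\sum_{\gamma}}^{*} \bigl(J_\gamma(z)\bigr)^{k}\, H(\gamma(z))

% Classical Poincar\'e series of weight 2k and order n for a Fuchsian group \Gamma,
% \gamma(z) = \frac{az+b}{cz+d}:
P_n(z) = \sum_{\gamma \in \Gamma_\infty \backslash \Gamma} (cz+d)^{-2k}\, e^{2\pi i n\, \gamma(z)}
```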
Poincaré series (modular form)
[ "Mathematics" ]
343
[ "Sequences and series", "Mathematical structures", "Series (mathematics)", "Calculus", "Modular forms", "Number theory" ]
24,147,201
https://en.wikipedia.org/wiki/C21H32N2O
{{DISPLAYTITLE:C21H32N2O}} The molecular formula C21H32N2O (molar mass: 328.49 g/mol) may refer to: 77-LH-28-1 Prodiame, or 17β-((3-aminopropyl)amino)estradiol Stanozolol Molecular formulas
C21H32N2O
[ "Physics", "Chemistry" ]
80
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,150,089
https://en.wikipedia.org/wiki/C20H24O2
{{DISPLAYTITLE:C20H24O2}} The molecular formula C20H24O2 (molar mass: 296.40 g/mol, exact mass: 296.1776 u) may refer to: Dimestrol, or dianisylhexene Ethinylestradiol (EE) Exemestane Molecular formulas
C20H24O2
[ "Physics", "Chemistry" ]
76
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,150,184
https://en.wikipedia.org/wiki/C22H30O
{{DISPLAYTITLE:C22H30O}} The molecular formula C22H30O (molar mass: 310.47 g/mol, exact mass: 310.2297 u) may refer to: Desogestrel ERA-63, or ORG-37663 Molecular formulas
C22H30O
[ "Physics", "Chemistry" ]
64
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,150,233
https://en.wikipedia.org/wiki/C22H28O2
{{DISPLAYTITLE:C22H28O2}} The molecular formula C22H28O2 may refer to: Etonogestrel, a progestin medication used as birth control for women Methenmadinone, a pregnane steroid which was never marketed Molecular formulas
C22H28O2
[ "Physics", "Chemistry" ]
62
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,150,343
https://en.wikipedia.org/wiki/C23H31NO3
{{DISPLAYTITLE:C23H31NO3}} The molecular formula C23H31NO3 (molar mass: 369.50 g/mol, exact mass: 369.2304 u) may refer to: Diprafenone Norgestimate Molecular formulas
C23H31NO3
[ "Physics", "Chemistry" ]
63
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,150,413
https://en.wikipedia.org/wiki/C30H35NO3
{{DISPLAYTITLE:C30H35NO3}} The molecular formula C30H35NO3 (molar mass: 457.604 g/mol, exact mass: 457.2617 u) may refer to: Levormeloxifene Ormeloxifene (or centchroman) Molecular formulas
C30H35NO3
[ "Physics", "Chemistry" ]
74
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,151,045
https://en.wikipedia.org/wiki/C15H25NO2
The molecular formula C15H25NO2 may refer to: Dihydroalprenolol 2,5-Dimethoxy-4-butylamphetamine Nupharamine, an alkaloid found in Nuphar japonica Xibenolol Molecular formulas
C15H25NO2
[ "Physics", "Chemistry" ]
62
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,151,141
https://en.wikipedia.org/wiki/C10H14FN
{{DISPLAYTITLE:C10H14FN}} The molecular formula C10H14FN may refer to: 2-Fluoromethamphetamine (2-FMA) 3-Fluoromethamphetamine (3-FMA) 4-Fluoromethamphetamine (4-FMA) References
C10H14FN
[ "Chemistry" ]
76
[ "Isomerism", "Set index articles on molecular formulas" ]
24,151,517
https://en.wikipedia.org/wiki/C13H21NO3
{{DISPLAYTITLE:C13H21NO3}} The molecular formula C13H21NO3 (molar mass: 239.31 g/mol, exact mass: 239.152144 u) may refer to: α-Ethylmescaline Asymbescaline 2C-O-4 3C-E 2,5-Dimethoxy-4-ethoxyamphetamine EMM (psychedelic) Isoetarine Isoproscaline Levomoprolol Levosalbutamol Metaproscaline MME (psychedelic) Moprolol Proscaline Salbutamol Symbescaline
C13H21NO3
[ "Chemistry" ]
137
[ "Isomerism", "Set index articles on molecular formulas" ]
24,151,824
https://en.wikipedia.org/wiki/C14H23NO2S
{{DISPLAYTITLE:C14H23NO2S}} The molecular formula C14H23NO2S (molar mass: 269.40 g/mol) may refer to: 2C-T-19, or 2,5-dimethoxy-4-butylthiophenethylamine 4C-T-2 Thiobuscaline Thiotrisescaline Molecular formulas
C14H23NO2S
[ "Physics", "Chemistry" ]
89
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,151,914
https://en.wikipedia.org/wiki/C14H23NO3
{{DISPLAYTITLE:C14H23NO3}} The molecular formula C14H23NO3 (molar mass: 253.34 g/mol) may refer to: Arnolol 3C-P Buscaline EEM (psychedelic) EME (psychedelic) MEE (psychedelic) MPM (psychedelic) Trisescaline Molecular formulas
C14H23NO3
[ "Physics", "Chemistry" ]
79
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,152,084
https://en.wikipedia.org/wiki/C11H17BrNO2
{{DISPLAYTITLE:C11H17BrNO2}} The molecular formula C11H17BrNO2 (molar mass: 258.11 g/mol) may refer to: 4-Bromo-3,5-dimethoxyamphetamine 2-Bromo-4,5-methylenedioxyamphetamine Molecular formulas
C11H17BrNO2
[ "Physics", "Chemistry" ]
78
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,152,143
https://en.wikipedia.org/wiki/C14H21NO3
{{DISPLAYTITLE:C14H21NO3}} The molecular formula C14H21NO3 (molar mass: 251.32 g/mol) may refer to: Cyclopropylmescaline Methallylescaline 1-(2-Nitrophenoxy)octane 3C-AL Pivenfrine Molecular formulas
C14H21NO3
[ "Physics", "Chemistry" ]
78
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,152,730
https://en.wikipedia.org/wiki/C12H18BrNO2
The molecular formula C12H18BrNO2 (molar mass: 288.18 g/mol, exact mass: 287.0521 u) may refer to: Methyl-DOB, or 4-bromo-2,5-dimethoxy-N-methylamphetamine N-Ethyl-2C-B Molecular formulas
C12H18BrNO2
[ "Physics", "Chemistry" ]
75
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,153,271
https://en.wikipedia.org/wiki/C19H25NO2
{{DISPLAYTITLE:C19H25NO2}} The molecular formula C19H25NO2 (molar mass: 299.41 g/mol, exact mass: 299.1885 u) may refer to: Buphenine Ethylketazocine (WIN-35,197-2) Proxorphan Molecular formulas
C19H25NO2
[ "Physics", "Chemistry" ]
76
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,153,642
https://en.wikipedia.org/wiki/Jucys%E2%80%93Murphy%20element
In mathematics, the Jucys–Murphy elements in the group algebra of the symmetric group, named after Algimantas Adolfas Jucys and G. E. Murphy, are defined as a sum of transpositions by the formula: They play an important role in the representation theory of the symmetric group. Properties They generate a commutative subalgebra of . Moreover, Xn commutes with all elements of . The vectors constituting the basis of Young's "seminormal representation" are eigenvectors for the action of Xn. For any standard Young tableau U we have: where ck(U) is the content b − a of the cell (a, b) occupied by k in the standard Young tableau U. Theorem (Jucys): The center of the group algebra of the symmetric group is generated by the symmetric polynomials in the elements Xk. Theorem (Jucys): Let t be a formal variable commuting with everything, then the following identity for polynomials in variable t with values in the group algebra holds true: Theorem (Okounkov–Vershik): The subalgebra of generated by the centers is exactly the subalgebra generated by the Jucys–Murphy elements Xk. See also Representation theory of the symmetric group Young symmetrizer References Permutation groups Representation theory Symmetry Representation theory of finite groups Symmetric functions
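The defining formula and the eigenvalue relation were stripped from this page in extraction; the standard statements, consistent with the surrounding text (Xk in the group algebra of Sn, ck(U) the content of the cell containing k), are:

```latex
% Jucys--Murphy elements of \mathbb{C}[S_n] as sums of transpositions:
X_1 = 0, \qquad
X_k = (1\,k) + (2\,k) + \cdots + (k{-}1\;k) = \sum_{i=1}^{k-1} (i\,k),
\quad k = 2, \dots, n

% Action on Young's seminormal basis: for a standard Young tableau U,
% with content c_k(U) = b - a for the cell (a, b) occupied by k:
X_k \, v_U = c_k(U)\, v_U
```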
Jucys–Murphy element
[ "Physics", "Mathematics" ]
294
[ "Algebra", "Fields of abstract algebra", "Symmetric functions", "Geometry", "Representation theory", "Symmetry" ]
24,154,230
https://en.wikipedia.org/wiki/Hazardous%20Substances%20Data%20Bank
The Hazardous Substances Data Bank (HSDB) was a toxicology database on the U.S. National Library of Medicine's (NLM) Toxicology Data Network (TOXNET). It focused on the toxicology of potentially hazardous chemicals, and included information on human exposure, industrial hygiene, emergency handling procedures, environmental fate, regulatory requirements, and related areas. All data were referenced and derived from a core set of books, government documents, technical reports, and selected primary journal literature. Prior to 2020, all entries were peer-reviewed by a Scientific Review Panel (SRP), whose members represented a spectrum of professions and interests. The last chairs of the SRP were Dr. Marcel J. Cassavant, MD (Toxicology Group), and Dr. Roland Everett Langford, PhD (Environmental Fate Group). The SRP was terminated due to budget cuts and realignment of the NLM. The HSDB was organized into individual chemical records, and contained over 5,000 such records. It was accessible free of charge via TOXNET. Users could search by chemical or other name, chemical name fragment, CAS registry number, and/or subject terms. Recent additions included radioactive materials and certain mixtures, such as crude oil and oil dispersants, as well as animal toxins. There were approximately 5,600 chemical-specific HSDB records available. TOXNET databases The Toxicology Data Network (TOXNET) was a group of databases hosted on the National Library of Medicine (NLM) website that covered "chemicals and drugs, diseases and the environment, environmental health, occupational safety and health, poisoning, risk assessment and regulations, and toxicology". TOXNET was managed by the NLM's Toxicology and Environmental Health Information Program (TEHIP) in the Division of Specialized Information Services (SIS).
The TOXNET databases included:
HSDB (Hazardous Substances Data Bank): peer-reviewed toxicology data for over 5,000 hazardous chemicals
TOXLINE: 4 million references to literature on biochemical, pharmacological, physiological, and toxicological effects of drugs and other chemicals
ChemIDplus: a dictionary of over 400,000 chemicals (names, synonyms, and structures)
LactMed (Drugs and Lactation Database): drugs and other chemicals to which breastfeeding mothers may be exposed
DART (Developmental and Reproductive Toxicology Database): references to developmental and reproductive toxicology literature
TOXMAP (Environmental Health Maps): searchable, interactive maps of EPA TRI and Superfund data, plus US Census and NCI health data
TRI (Toxics Release Inventory): annual environmental releases of over 600 toxic chemicals by U.S. facilities
CTD (Comparative Toxicogenomics Database): scientific data describing relationships between chemicals, genes, and human diseases
Household Products Database: potential health effects of chemicals in more than 10,000 common household products
Haz-Map: links jobs and hazardous tasks with occupational diseases and their symptoms
IRIS (Integrated Risk Information System): hazard identification and dose–response assessment for over 500 chemicals
ITER (International Toxicity Estimates for Risk): risk information for over 600 chemicals from authoritative groups worldwide
ALTBIB: resources on alternatives to the use of live vertebrates in biomedical research and testing
References External links TOXNET information Biochemistry databases Chemical safety Toxicology Chemical databases Breastfeeding
Hazardous Substances Data Bank
[ "Chemistry", "Biology", "Environmental_science" ]
659
[ "Chemical accident", "Toxicology", "Biochemistry databases", "Chemical databases", "nan", "Biochemistry", "Chemical safety" ]
24,154,448
https://en.wikipedia.org/wiki/C17H24N2O2
{{DISPLAYTITLE:C17H24N2O2}} The molecular formula C17H24N2O2 (molar mass: 288.39 g/mol) may refer to: 4,5-MDO-DiPT 5,6-MDO-DiPT Phenglutarimide Molecular formulas
C17H24N2O2
[ "Physics", "Chemistry" ]
72
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,154,456
https://en.wikipedia.org/wiki/C16H22N2O2
{{DISPLAYTITLE:C16H22N2O2}} The molecular formula C16H22N2O2 (molar mass: 274.36 g/mol) may refer to: 4-Acetoxy-DET 4-Acetoxy-MiPT Isamoltane (CGP-361A) Molecular formulas
C16H22N2O2
[ "Physics", "Chemistry" ]
75
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,154,624
https://en.wikipedia.org/wiki/C15H20N2O2
The molecular formula C15H20N2O2 may refer to: 4-AcO-MET, or metacetin 5,6-MDO-MiPT Fenspiride Sazetidine A (AMOP-H-OH) Molecular formulas
C15H20N2O2
[ "Physics", "Chemistry" ]
56
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,154,760
https://en.wikipedia.org/wiki/C17H26N2O
{{DISPLAYTITLE:C17H26N2O}} The molecular formula C17H26N2O (molar mass: 274.40 g/mol) may refer to: 5-MeO-DPT, a hallucinogenic drug 5-Methoxy-diisopropyltryptamine Phenampromide Ropivacaine Molecular formulas
C17H26N2O
[ "Physics", "Chemistry" ]
83
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
25,548,455
https://en.wikipedia.org/wiki/Everything%20is%20a%20file
"Everything is a file" is an approach to interface design in Unix derivatives. While this turn of phrase does not as such figure as a Unix design principle or philosophy, it is a common way to analyse designs, and informs the design of new interfaces in a way that prefers, in rough order of import: representing objects as file descriptors in favour of alternatives like abstract handles or names, operating on the objects with standard input/output operations returning byte streams to be interpreted by applications (rather than explicitly structured data), and allowing the usage or creation of objects by opening or creating files in the global filesystem name space. The lines between the common interpretations of "file" and "file descriptor" are often blurred when analysing Unix, and nameability of files is the least important part of this principle; thus, it is sometimes described as "Everything is a file descriptor". This approach is interpreted differently with time, philosophy of each system, and the domain to which it's applied. The rest of this article demonstrates notable examples of some of those interpretations, and their repercussions. Objects as file descriptors Under Unix, a directory can be opened like a regular file, containing fixed-size records of (i-node, filename), but directories cannot be written to directly, and are modified by the kernel as a side-effect of creating and removing files within the directory. Some interfaces only follow a subset of these guidelines, for example pipes do not exist on the filesystem — pipe() creates a pair of unnameable file descriptors. The later invention of named pipes (FIFOs) by POSIX fills this gap. 
This does not mean that the only operations on an object are reading and writing: ioctl() and similar interfaces allow for object-specific operations (like controlling tty characteristics), and directory file descriptors can be used to alter path look-ups (with a growing number of *at() system call variants like openat()) or to change the working directory to the one represented by the file descriptor, in both cases preventing race conditions and being faster than the alternative of looking up the entire path. Socket file descriptors require configuration (setting the remote address and connecting) after creation before being used for I/O. A server socket may not be used for I/O directly at all: in connection-based protocols, bind() assigns a local address to a socket, listen() marks it as accepting connections, and accept() waits until a remote process connects, then returns a new socket file descriptor representing that direct bidirectional connection. This approach allows management of objects used by a program in a standardised manner, just like any other file: after binding to an address, privileges may be dropped; the server socket may be distributed among many processes by fork()ing (and closed in subprocesses that should not have access); or the individual connections' sockets may be given as standard input/output to specialised handlers for those connections, as in the super-server/CGI/inetd paradigms. Many interfaces present in early Unixes that do not use file descriptors became duplicated in later designs: the alarm()/setitimer() system calls schedule the delivery of a signal after the specified time elapses; this timer is inherited by children, and persists after exec(). The POSIX timer_create() API serves a similar function, but destroys the timer in child processes and on exec(); these timers are identified by opaque handles.
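The bind()/listen()/accept() flow described above can be sketched with Python's socket wrappers; a minimal loopback example (port 0 asks the kernel to pick a free port), showing that the per-connection descriptor returned by accept() is distinct from the listening one:

```python
import socket

# The server socket is bound and listens; it is not used for I/O itself.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the kernel choose a free port
server.listen(1)
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)

# accept() returns a NEW file descriptor representing the connection;
# I/O then happens on that descriptor, just like on any other file.
conn, _peer = server.accept()
client.sendall(b"ping")
received = conn.recv(4)

distinct_fds = conn.fileno() != server.fileno()

for s in (conn, client, server):
    s.close()
```

The connection socket could equally be passed to a fork()ed child or dup2()ed onto a handler's standard input, which is exactly the inetd pattern the text describes.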
Both interfaces always deliver their completions asynchronously, and cannot be poll()ed/select()ed, making their integration into a complex event loop more difficult. The timerfd design (originally found in Linux), turns each timer object into a file descriptor, which can be individually observed with poll() &c. and whose inheritance to child processes can be controlled with the standard close()/CLOEXEC/CLOFORK controls. While the POSIX API has timer_getoverrun() that returns how many times the timer elapsed, this is returned as the result of read() from a timerfd. This operation blocks, so waiting until a timerfd elapses is as easy as reading from it. There is no way to atomically do this with classic Unix or POSIX timers. The timer can be inspected non-blockingly by performing a non-blocking read (a standard I/O operation). Objects in the filesystem namespace Special file types Device special files are a defining characteristic of Unix: initially, opening a regular file with i-node number ≤40 (traditionally stored under /dev) instead returned a file descriptor corresponding to a device, and handled by the device driver. The magic i-node number scheme later became codified into files with type S_IFBLK/S_IFCHR. Opening special files is beholden to the same file-system permissions checks as opening regular files, allowing common access control — chown dmr /usr/dmr /dev/rk0; chmod o= /usr/dmr /dev/rk0 changes the ownership and file access mode of both the directory /usr/dmr and device /dev/rk0. For block devices (hard disks and tape drives), due to their size, this meant unique semantics: they were block-addressed (see ), and programs needed to be written specifically to work correctly with them. This is described as "extremely unfortunate", and later interfaces alleviate this. 
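Returning to the timerfd pattern above: timerfd itself is Linux-specific, but its key property, a timer that is just a pollable file descriptor delivering an 8-byte expiration count, can be emulated portably. This sketch (all names illustrative) uses a pipe and a helper thread where a real implementation would call timerfd_create():

```python
import os
import select
import struct
import threading
import time

def make_timer_fd(delay_s):
    """Return a read fd that becomes readable after delay_s seconds,
    yielding an 8-byte expiration count, mimicking read() on a timerfd."""
    r, w = os.pipe()
    def fire():
        time.sleep(delay_s)
        os.write(w, struct.pack("Q", 1))  # one expiration
        os.close(w)
    threading.Thread(target=fire, daemon=True).start()
    return r

tfd = make_timer_fd(0.05)
# The timer now integrates into any event loop like an ordinary fd:
readable, _, _ = select.select([tfd], [], [], 5.0)
fired = tfd in readable
expirations = struct.unpack("Q", os.read(tfd, 8))[0]
os.close(tfd)
```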
In many cases, magnetic tapes continue to have unique semantics: some tapes can be partitioned into "files" and the driver signals an end-of-file condition after the end of a partition is reached, so cp /dev/nrst0 file1; cp /dev/nrst0 file2 will create file1 and file2 consisting of two consecutive partitions of the tape; the driver provides an abstraction layer that presents a tape file descriptor as if it were a regular file to fit into the Everything-is-a-file paradigm. Specialised programs like mt are used to move between partitions on a tape like this. Named pipes (FIFOs) appear as S_IFIFO-type files in the filesystem, can be renamed, and may be opened like regular files. Under Unix derivatives, Unix-domain sockets appear as S_IFSOCK-type files in the filesystem and can be renamed, but cannot be open()ed: one must create the correct type of socket file descriptor and connect() explicitly. Under Plan 9, sockets in the filesystem may be opened like regular files. As a replacement for dedicated system calls Modern systems contain high-performance I/O event notification facilities, such as kqueue (BSD derivatives), epoll (Linux), IOCP (Windows NT, Solaris), and /dev/poll (Solaris); the control object is generally created (kqueue(), epoll_create()) and configured (kevent(), epoll_ctl()) with dedicated system calls. A /dev/poll instance is created by opening the file "/dev/poll" directly, writing the objects to observe, and using ioctl()s for additional configuration. Memory may be allocated by requesting an anonymous memory mapping, one that doesn't correspond to any file. On modern systems this can be done by specifying no file and MAP_ANONYMOUS; in UNIX System V Release 4, this was done by opening /dev/zero and mmap()ping it. API filesystems Operating system APIs can be implemented as regular system calls, or as synthetic file-systems.
In the former case, system state can only be inspected by specially-written programs shipped with the system, and any additional processing desired by the user needs to either filter and parse the output of those programs, execute them to write the desired state, or must be implemented in the native system programming language. In the latter case, system state is presented as-if it were regular files and directories — on systems with a procfs, information about running processes can be obtained by looking at, canonically, /proc, which contains directories named after the PIDs running on the system, containing files like stat (status) with process metadata, cwd, exe, and root — symbolic links to the process' working directory, executable image, and root directory — or directories like fd which contains symbolic links to the files the process has opened, named after the file descriptors. Because these attributes are presented as files and symbolic links, standard utilities work on them, and one can, say, inspect the identity of the process with grep Uid /proc/1392400/status, go to the same directory as a process is in with cd /proc/1392400/cwd, look what files a process has open with ls -l /proc/1392400/fd, then open a file that process has open with less /proc/1392400/fd/8. This improves ergonomics over parsing this data from the output of a utility. Under Linux, symbolic links under procfs are "magic": they can actually behave like cross-filesystem hard links to the files they point to. This behaviour allows recovery of files removed from the filesystem but still open by a process, and permanently persisting files created by O_TMPFILE in the filesystem (which otherwise cannot be named). 4.4BSD-derived sysctls are key/value mappings managed by the sysctl program, which lists all variables with sysctl -a, the value of one variable with sysctl net.inet.ip.forwarding, and sets it with sysctl -w net.inet.ip.forwarding=1. 
Under Linux, the equivalent mechanism is provided by procfs under the /proc/sys tree: the respective operations can be done with find /proc/sys or grep -r ^ /proc/sys, cat /proc/sys/net/ipv4/ip_forward, and echo 1 > /proc/sys/net/ipv4/ip_forward. For convenience or standards conformance, dedicated inspection tools (like ps and sysctl) may still be provided, using these filesystems as data sources/sinks. sysfs and debugfs are similar Linux interfaces for further configuring the kernel: writing mem to /sys/power/state will trigger a suspend-to-RAM procedure, and writing 2 to /sys/module/iwlwifi/parameters/led_mode will start blinking the Wi-Fi LED on activity. These are synthetic file-systems because the contents of each file are not stored anywhere verbatim: when the file is read, the appropriate kernel data structures are serialised into the reading process' input buffer, and when the file is written to, the output buffer is parsed. This means that the file abstraction is broken, since the file metadata isn't valid: depending on the filesystem, each file reports a size of 0 or PAGE_SIZE, even though reading the data will yield a different number of bytes. Notes See also Unix architecture Object-oriented analysis and design References Information theory Unix file system technology
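The procfs inspection pattern described above can be sketched in a few lines. The helper below is hypothetical (not part of any real API): it parses the "Key:\tvalue" line format of /proc/<pid>/status, first against a fabricated sample and then, where a procfs exists, against the live file:

```python
import os

def parse_status(text):
    """Parse the Key:\tvalue lines of a /proc/<pid>/status-style file."""
    fields = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        if key:
            fields[key.strip()] = value.strip()
    return fields

# Deterministic check against a sample of the file format:
sample = "Name:\tinit\nUid:\t0\t0\t0\t0\nPid:\t1\n"
parsed = parse_status(sample)

# On systems with procfs, the same parser works on the live file:
if os.path.exists("/proc/self/status"):
    with open("/proc/self/status") as f:
        live = parse_status(f.read())
    assert live["Pid"] == str(os.getpid())
```

Because the data arrives as ordinary text through ordinary reads, no special system call or privileged tool is required, which is precisely the ergonomic benefit the text claims.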
Everything is a file
[ "Mathematics", "Technology", "Engineering" ]
2,428
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
25,550,398
https://en.wikipedia.org/wiki/Symbolic%20circuit%20analysis
Symbolic circuit analysis is a formal technique of circuit analysis to calculate the behaviour or characteristic of an electric/electronic circuit with the independent variables (time or frequency), the dependent variables (voltages and currents), and (some or all of) the circuit elements represented by symbols. When analysing electric/electronic circuits, we may ask two types of question: what is the value of a certain circuit variable (voltage, current, resistance, gain, etc.), or what is the relationship between some circuit variables, or between a circuit variable and circuit components and frequency (or time)? Such a relationship may take the form of a graph, where numerical values of a circuit variable are plotted versus frequency or component value (the most common example is a plot of the magnitude of a transfer function versus frequency). Symbolic circuit analysis is concerned with obtaining those relationships in symbolic form, i.e., in the form of an analytical expression, where the complex frequency (or time) and some or all of the circuit components are represented by symbols. Frequency domain expressions In the frequency domain, the most common task of symbolic circuit analysis is to obtain the relationship between input and output variables in the form of a rational function in the complex frequency and symbolic variables: The above relationship is often called the network function. For physical systems, the numerator and denominator are polynomials in the complex frequency with real coefficients; their roots are the zeroes and poles of the network function. While there are several methods for generating the coefficients, no technique exists to obtain exact symbolic expressions for poles and zeroes for polynomials of order higher than 5. Types of symbolic network functions Depending on which parameters are kept as symbols, we may have several different types of symbolic network functions. This is best illustrated with an example.
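The displayed formulas were lost in extraction; the standard textbook form of the network function, in notation chosen here for illustration (N, D for numerator and denominator, x1, ..., xm for the symbolic circuit parameters), is:

```latex
% Network function as a rational function of the complex frequency s:
H(s, x_1, \dots, x_m) = \frac{N(s, x_1, \dots, x_m)}{D(s, x_1, \dots, x_m)}

% Factored over its zeroes z_i and poles p_j (real coefficients):
H(s) = K \,\frac{\prod_{i=1}^{m} (s - z_i)}{\prod_{j=1}^{n} (s - p_j)}
```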
Consider, for instance, the biquad filter circuit with ideal op amps, shown below. We want to obtain a formula for its voltage transmittance (also called the voltage gain) in the frequency domain. Network function with s as the only variable If the complex frequency is the only variable, the formula will look like this (for simplicity we use numerical values for the components): Semi-symbolic network function If the complex frequency and some circuit variables are kept as symbols (semi-symbolic analysis), the formula may take a form: Fully symbolic network function If the complex frequency and all circuit variables are symbolic (fully symbolic analysis), the voltage transmittance is given by: All expressions above are extremely useful in obtaining insight into the operation of the circuit and understanding how each component contributes to the overall circuit performance. As the circuit size increases, however, the number of terms in such expressions grows exponentially. So, even for relatively simple circuits, the formulae become too long to be of any practical value. One way to deal with this problem is to omit numerically insignificant terms from the symbolic expression, keeping the inevitable error below a predetermined limit. Sequence of Expressions form Another possibility to shorten the symbolic expression to a manageable length is to represent the network function by a sequence of expressions (SoE). Of course, the interpretability of the formula is lost, but this approach is very useful for repetitive numerical calculations. A software package STAINS (Symbolic Two-port Analysis via Internal Node Suppression) has been developed to generate such sequences. There are several types of SoE that can be obtained from STAINS. For example, the compact SoE for the voltage transmittance of our biquad is:
x1 = G5*G3/G6
x2 = -G1-s*C1-G2*x1/(s*C2)
x3 = -G4*G8/x2
Ts = x3/G11
The above sequence contains fractions.
If this is not desirable (when divisions by zero appear, for instance), we may generate a fractionless SoE:
x1 = -G2*G5
x2 = G6*s*C2
x3 = -G4*x2
x4 = x1*G3-(G1+s*C1)*x2
x5 = x3*G8
x6 = -G11*x4
Ts = -x5/x6
Yet another way to shorten the expression is to factorise the polynomials. For our example this is very simple and leads to:
Num = G4*G6*G8*s*C2
Den = G11*((G1+s*C1)*G6*s*C2+G2*G3*G5)
Ts = Num/Den
For larger circuits, however, factorisation becomes a difficult combinatorial problem and the final result may be impractical for both interpretation and numerical calculations. See also Signal-flow graph Topology (electrical circuits) External links SCAM - MATLAB script for computing symbolic circuit transfer functions. How to use Wolfram System Modeller to do symbolic circuit analysis. References Electronic design
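As a sanity check, the compact, fractionless, and factorised sequences quoted above can be evaluated numerically and compared. The component values and evaluation frequency below are arbitrary illustrations, not taken from the article:

```python
import cmath

# Arbitrary (hypothetical) conductances, capacitances, and frequency:
G1, G2, G3, G4 = 1e-3, 2e-3, 1e-3, 1.5e-3
G5, G6, G8, G11 = 1e-3, 2e-3, 1e-3, 1e-3
C1, C2 = 10e-9, 10e-9
s = 1j * 2 * cmath.pi * 10e3   # evaluate at 10 kHz

# Compact SoE (contains fractions):
x1 = G5 * G3 / G6
x2 = -G1 - s * C1 - G2 * x1 / (s * C2)
Ts_compact = (-G4 * G8 / x2) / G11

# Fractionless SoE:
y1 = -G2 * G5
y2 = G6 * s * C2
y3 = -G4 * y2
y4 = y1 * G3 - (G1 + s * C1) * y2
Ts_fractionless = -(y3 * G8) / (-G11 * y4)

# Factorised form:
Num = G4 * G6 * G8 * s * C2
Den = G11 * ((G1 + s * C1) * G6 * s * C2 + G2 * G3 * G5)
Ts_factored = Num / Den

agree = (abs(Ts_compact - Ts_fractionless) < 1e-9 * abs(Ts_compact)
         and abs(Ts_compact - Ts_factored) < 1e-9 * abs(Ts_compact))
```

All three sequences evaluate to the same complex transmittance, which is the point of the SoE representation: the algebraic form changes, the function does not.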
Symbolic circuit analysis
[ "Engineering" ]
1,003
[ "Electronic design", "Electronic engineering", "Design" ]
25,555,117
https://en.wikipedia.org/wiki/Eight-dimensional%20space
In mathematics, a sequence of n real numbers can be understood as a location in n-dimensional space. When n = 8, the set of all such locations is called 8-dimensional space. Often such spaces are studied as vector spaces, without any notion of distance. Eight-dimensional Euclidean space is eight-dimensional space equipped with the Euclidean metric. More generally, the term may refer to an eight-dimensional vector space over any field, such as an eight-dimensional complex vector space, which has 16 real dimensions. It may also refer to an eight-dimensional manifold such as an 8-sphere, or a variety of other geometric constructions. Geometry 8-polytope A polytope in eight dimensions is called an 8-polytope. The most studied are the regular polytopes, of which there are only three in eight dimensions: the 8-simplex, 8-cube, and 8-orthoplex. A broader family are the uniform 8-polytopes, constructed from fundamental symmetry domains of reflection, each domain defined by a Coxeter group. Each uniform polytope is defined by a ringed Coxeter–Dynkin diagram. The 8-demicube is a unique polytope from the D8 family, and the 421, 241, and 142 polytopes from the E8 family. 7-sphere The 7-sphere or hypersphere in eight dimensions is the seven-dimensional surface equidistant from a point, e.g. the origin. It has symbol S^7; the 7-sphere with radius r is formally defined as the set of points (x1, ..., x8) satisfying x1^2 + ... + x8^2 = r^2. The volume of the space bounded by this 7-sphere is π^4 r^8 / 24, which is 4.05871 × r^8, or 0.01585 of the 8-cube that contains the 7-sphere. Kissing number problem The kissing number problem has been solved in eight dimensions, thanks to the existence of the 421 polytope and its associated lattice. The kissing number in eight dimensions is 240. Octonions The octonions are a normed division algebra over the real numbers, the largest such algebra. Mathematically they can be specified by 8-tuples of real numbers, so they form an 8-dimensional vector space over the reals, with addition of vectors being the addition in the algebra.
A normed algebra is one with a product that satisfies ‖xy‖ = ‖x‖‖y‖ for all x and y in the algebra. A normed division algebra additionally must be finite-dimensional, and have the property that every non-zero vector has a unique multiplicative inverse. Hurwitz's theorem prohibits such a structure from existing in dimensions other than 1, 2, 4, or 8. Biquaternions The complexified quaternions, or "biquaternions," are an eight-dimensional algebra dating to William Rowan Hamilton's work in the 1850s. This algebra is equivalent (that is, isomorphic) to the Clifford algebra and the Pauli algebra. It has also been proposed as a practical or pedagogical tool for doing calculations in special relativity, and in that context goes by the name Algebra of physical space (not to be confused with the Spacetime algebra, which is 16-dimensional.) References H.S.M. Coxeter: H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10] (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] Table of the Highest Kissing Numbers Presently Known maintained by Gabriele Nebe and Neil Sloane (lower bounds) Dimension Multi-dimensional geometry 8 (number) Octonions
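The multiplicative-norm property ‖xy‖ = ‖x‖‖y‖ can be checked numerically for the octonions, built from pairs of quaternions by the standard Cayley–Dickson doubling formula (a, b)(c, d) = (ac − d*b, da + bc*), where * is quaternion conjugation (a sketch; the formulas below are the standard construction, not taken from the article):

```python
import random

# Quaternion product of p = a + bi + cj + dk and q = e + fi + gj + hk.
def q_mul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def q_conj(q):
    return (q[0], -q[1], -q[2], -q[3])

# Octonion product via Cayley-Dickson doubling of the quaternions:
# (a, b)(c, d) = (ac - conj(d) b, d a + b conj(c)).
def o_mul(x, y):
    a, b = x[:4], x[4:]
    c, d = y[:4], y[4:]
    left = tuple(s - t for s, t in zip(q_mul(a, c), q_mul(q_conj(d), b)))
    right = tuple(s + t for s, t in zip(q_mul(d, a), q_mul(b, q_conj(c))))
    return left + right

def norm(x):
    return sum(t * t for t in x) ** 0.5

random.seed(0)
for _ in range(50):
    x = tuple(random.uniform(-1, 1) for _ in range(8))
    y = tuple(random.uniform(-1, 1) for _ in range(8))
    assert abs(norm(o_mul(x, y)) - norm(x) * norm(y)) < 1e-9
```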
Eight-dimensional space
[ "Physics" ]
914
[ "Geometric measurement", "Dimension", "Physical quantities", "Theory of relativity" ]
25,556,965
https://en.wikipedia.org/wiki/Ultraviolet%E2%80%93visible%20spectroscopy%20of%20stereoisomers
Ultraviolet–visible spectroscopy (UV–vis) can distinguish between enantiomers by showing a distinct Cotton effect for each isomer. UV–vis spectroscopy sees only chromophores, so other molecules must be prepared for analysis by chemical addition of a chromophore such as anthracene. Two methods are reported: the octant rule and the exciton chirality method. The octant rule was introduced in 1961 by William Moffitt, R. B. Woodward, A. Moscowitz, William Klyne and Carl Djerassi. This empirical rule allows prediction of the sign of the Cotton effect, and in this way the absolute configuration of an enantiomer, by analysing the relative orientation of substituents in three dimensions. See also NMR spectroscopy of stereoisomers References Spectroscopy
Ultraviolet–visible spectroscopy of stereoisomers
[ "Physics", "Chemistry" ]
170
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
22,658,615
https://en.wikipedia.org/wiki/Perfect%20ring
In the area of abstract algebra known as ring theory, a left perfect ring is a type of ring over which all left modules have projective covers. The right case is defined by analogy, and the condition is not left-right symmetric; that is, there exist rings which are perfect on one side but not the other. Perfect rings were introduced in 1960 by Hyman Bass. A semiperfect ring is a ring over which every finitely generated left module has a projective cover. This property is left-right symmetric. Perfect ring Definitions The following equivalent definitions of a left perfect ring R are found in Anderson and Fuller: Every left R-module has a projective cover. R/J(R) is semisimple and J(R) is left T-nilpotent (that is, for every infinite sequence of elements of J(R) there is an n such that the product of the first n terms is zero), where J(R) is the Jacobson radical of R. (Bass' Theorem P) R satisfies the descending chain condition on principal right ideals. (There is no mistake; this condition on right principal ideals is equivalent to the ring being left perfect.) Every flat left R-module is projective. R/J(R) is semisimple and every non-zero left R-module contains a maximal submodule. R contains no infinite orthogonal set of idempotents, and every non-zero right R-module contains a minimal submodule. Examples Right or left Artinian rings, and semiprimary rings are known to be right-and-left perfect. The following is an example (due to Bass) of a local ring which is right but not left perfect. Let F be a field, and consider a certain ring of infinite matrices over F. Take the set of infinite matrices with entries indexed by ℕ × ℕ, and which have only finitely many nonzero entries, all of them above the diagonal, and denote this set by J. Also take the matrix I with all 1's on the diagonal, and form the set R = F·I + J. It can be shown that R is a ring with identity, whose Jacobson radical is J. Furthermore R/J is a field, so that R is local, and R is right but not left perfect.
Properties For a left perfect ring R: From the equivalences above, every left R-module has a maximal submodule and a projective cover, and the flat left R-modules coincide with the projective left modules. An analogue of Baer's criterion holds for projective modules. Semiperfect ring Definition Let R be a ring. Then R is semiperfect if any of the following equivalent conditions hold: R/J(R) is semisimple and idempotents lift modulo J(R), where J(R) is the Jacobson radical of R. R has a complete orthogonal set e1, ..., en of idempotents with each eiRei a local ring. Every simple left (right) R-module has a projective cover. Every finitely generated left (right) R-module has a projective cover. The category of finitely generated projective R-modules is Krull–Schmidt. Examples Examples of semiperfect rings include: Left (right) perfect rings. Local rings. Left (right) Artinian rings. Finite-dimensional k-algebras. Properties Since a ring R is semiperfect iff every simple left R-module has a projective cover, every ring Morita equivalent to a semiperfect ring is also semiperfect. Citations References Ring theory
Perfect ring
[ "Mathematics" ]
746
[ "Fields of abstract algebra", "Ring theory" ]
22,659,407
https://en.wikipedia.org/wiki/ATLAS%20of%20Finite%20Groups
The ATLAS of Finite Groups, often simply known as the ATLAS, is a group theory book by John Horton Conway, Robert Turner Curtis, Simon Phillips Norton, Richard Alan Parker and Robert Arnott Wilson (with computational assistance from J. G. Thackray), published in December 1985 by Oxford University Press and reprinted with corrections in 2003. The book codified and systematized mathematicians' knowledge about finite groups, including some discoveries that had only been known within Conway's group at Cambridge University. Over the years since its publication, it has proved to be a landmark work of mathematical exposition. It lists basic information about 93 finite simple groups. The classification of finite simple groups indicates that any such group is either a member of an infinite family, such as the cyclic groups of prime order, or one of the 26 sporadic groups. The ATLAS covers all of the sporadic groups and the smaller examples of the infinite families. The authors said that their rule for choosing groups to include was to "think how far the reasonable person would go, and then go a step further." The information provided is generally a group's order, Schur multiplier, outer automorphism group, various constructions (such as presentations), conjugacy classes of maximal subgroups, and, most importantly, character tables (including power maps on the conjugacy classes) of the group itself and bicyclic extensions given by stem extensions and automorphism groups. In certain cases (such as for the Chevalley groups), the character table is not listed and only basic information is given. The ATLAS is a recognizable large format book (sized 420 mm by 300 mm) with a cherry red cardboard cover and spiral binding. (One later author described it as "appropriately oversized". Another noted that his university library shelved it among the oversized geography books.)
The cover lists the authors in alphabetical order by last name (each last name having exactly six letters), which was also the order in which the authors joined the project. The abbreviations by which the authors refer to certain groups, which occasionally differ from those used by some other mathematicians, are known as "ATLAS notation". The book was reappraised in 1995 in the volume The Atlas of Finite Groups: Ten Years on. It was the subject of an American Mathematical Society symposium at Princeton University in 2015, whose proceedings were published as Finite Simple Groups: Thirty Years of the Atlas and Beyond. The ATLAS is being continued in the form of an electronic database, the ATLAS of Finite Group Representations. References Finite groups Mathematics books John Horton Conway 1985 non-fiction books
ATLAS of Finite Groups
[ "Mathematics" ]
532
[ "Mathematical structures", "Algebraic structures", "Finite groups" ]
22,667,049
https://en.wikipedia.org/wiki/Transversality%20theorem
In differential topology, the transversality theorem, also known as the Thom transversality theorem after French mathematician René Thom, is a major result that describes the transverse intersection properties of a smooth family of smooth maps. It says that transversality is a generic property: any smooth map f : X → Y may be deformed by an arbitrarily small amount into a map that is transverse to a given submanifold Z ⊆ Y. Together with the Pontryagin–Thom construction, it is the technical heart of cobordism theory, and the starting point for surgery theory. The finite-dimensional version of the transversality theorem is also a very useful tool for establishing the genericity of a property which is dependent on a finite number of real parameters and which is expressible using a system of nonlinear equations. This can be extended to an infinite-dimensional parametrization using the infinite-dimensional version of the transversality theorem. Finite-dimensional version Previous definitions Let f : X → Y be a smooth map between smooth manifolds, and let Z be a submanifold of Y. We say that f is transverse to Z, denoted f ⋔ Z, if and only if for every x in f⁻¹(Z), the image of the differential df_x and the tangent space of Z at f(x) together span the tangent space of Y at f(x). An important result about transversality states that if a smooth map f is transverse to Z, then f⁻¹(Z) is a regular submanifold of X. If X is a manifold with boundary, then we can define the restriction of the map to the boundary, as ∂f : ∂X → Y. The map ∂f is smooth, and it allows us to state an extension of the previous result: if both f ⋔ Z and ∂f ⋔ Z, then f⁻¹(Z) is a regular submanifold of X with boundary, and ∂(f⁻¹(Z)) = f⁻¹(Z) ∩ ∂X. Parametric transversality theorem Consider the map F : X × S → Y and define f_s(x) = F(x, s). This generates a family of mappings f_s. We require that the family vary smoothly by assuming S to be a (smooth) manifold and F to be smooth. The statement of the parametric transversality theorem is: Suppose that F : X × S → Y is a smooth map of manifolds, where only X has boundary, and let Z be any submanifold of Y without boundary. If both F and ∂F are transverse to Z, then for almost every s in S, both f_s and ∂f_s are transverse to Z.
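A concrete one-dimensional illustration of the parametric statement (a hypothetical toy example, not from the article): take F(x, s) = x² − s with Z = {0} in R. The differential dF = (2x, −1) is surjective everywhere, so F is transverse to {0}, and the theorem then predicts that f_s(x) = x² − s is transverse to {0} for almost every s; the only bad parameter is s = 0, where the root x = 0 has vanishing derivative.

```python
import math
import random

# f_s(x) = x**2 - s is transverse to {0} exactly when every root of
# f_s has nonzero derivative f_s'(x) = 2x.
def is_transverse_to_zero(s):
    if s < 0:
        return True  # no solutions: vacuously transverse
    roots = [math.sqrt(s), -math.sqrt(s)] if s > 0 else [0.0]
    return all(abs(2 * x) > 0 for x in roots)

# Random parameters avoid the measure-zero bad set {s = 0}.
random.seed(0)
samples = [random.uniform(-1, 1) for _ in range(1000)]
assert all(is_transverse_to_zero(s) for s in samples if s != 0)
assert not is_transverse_to_zero(0.0)  # the single non-generic parameter
```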
More general transversality theorems The parametric transversality theorem above is sufficient for many elementary applications (see the book by Guillemin and Pollack). There are more powerful statements (collectively known as transversality theorems) that imply the parametric transversality theorem and are needed for more advanced applications. Informally, the "transversality theorem" states that the set of mappings that are transverse to a given submanifold is a dense open (or, in some cases, only a dense Gδ) subset of the set of mappings. To make such a statement precise, it is necessary to define the space of mappings under consideration and the topology it carries. There are several possibilities; see the book by Hirsch. What is usually understood by Thom's transversality theorem is a more powerful statement about jet transversality. See the books by Hirsch and by Golubitsky and Guillemin. The original reference is Thom, Bol. Soc. Mat. Mexicana (2) 1 (1956), pp. 59–71. John Mather proved in the 1970s an even more general result called the multijet transversality theorem. See the book by Golubitsky and Guillemin. Infinite-dimensional version The infinite-dimensional version of the transversality theorem takes into account that the manifolds may be modeled in Banach spaces. Formal statement Suppose is a map of -Banach manifolds. Assume: (i) and are non-empty, metrizable -Banach manifolds with chart spaces over a field (ii) The -map with has as a regular value. (iii) For each parameter , the map is a Fredholm map, where for every (iv) The convergence on as and for all implies the existence of a convergent subsequence as with If (i)-(iv) hold, then there exists an open, dense subset such that is a regular value of for each parameter Now, fix an element If there exists a number with for all solutions of , then the solution set consists of an -dimensional -Banach manifold or the solution set is empty.
Note that if for all the solutions of then there exists an open dense subset of such that there are at most finitely many solutions for each fixed parameter In addition, all these solutions are regular. References Theorems in differential topology Differential geometry
Transversality theorem
[ "Mathematics" ]
898
[ "Theorems in differential topology", "Theorems in topology" ]
20,083,580
https://en.wikipedia.org/wiki/Triose%20phosphate%20translocator
The triose phosphate translocator is an integral membrane protein found in the inner membrane of chloroplasts. It exports triose phosphate (such as dihydroxyacetone phosphate) in exchange for inorganic phosphate and is therefore classified as an antiporter. The imported phosphate is then used for ATP regeneration via the light-dependent reactions; the ATP may then, for example, be used for further reactions in the Calvin cycle. The translocator protein is responsible for exporting all the carbohydrate produced in photosynthesis by plants, and therefore most of the carbon in the food that one eats has been transported by the triose phosphate translocator. Its three-dimensional structure was reported in 2017, revealing how it recognizes two different substrates to catalyze the strict 1:1 exchange. References Photosynthesis Plant physiology Metabolism Agronomy
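The strict 1:1 exchange can be sketched as a toy bookkeeping model (purely illustrative counts, not measured physiology): every transport event exports one triose phosphate and imports one inorganic phosphate, so the total number of phosphate-bearing molecules in the stroma is invariant.

```python
import random

# Toy model of strict 1:1 antiport across the inner chloroplast
# membrane.  Counts and probabilities are made up for illustration.
def simulate(events, seed=0):
    rng = random.Random(seed)
    stroma = {"triose_P": 500, "Pi": 500}
    cytosol = {"triose_P": 0, "Pi": 10_000}
    for _ in range(events):
        if stroma["triose_P"] > 0 and cytosol["Pi"] > 0 and rng.random() < 0.9:
            stroma["triose_P"] -= 1   # export one triose phosphate...
            cytosol["triose_P"] += 1
            cytosol["Pi"] -= 1        # ...in exchange for one Pi imported
            stroma["Pi"] += 1
    return stroma, cytosol

stroma, _ = simulate(300)
# the stromal phosphate pool is conserved by the 1:1 coupling
assert stroma["triose_P"] + stroma["Pi"] == 1000
```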
Triose phosphate translocator
[ "Chemistry", "Biology" ]
177
[ "Plant physiology", "Plants", "Photosynthesis", "Cellular processes", "Biochemistry", "Metabolism", "Chemical process stubs" ]
20,094,074
https://en.wikipedia.org/wiki/Laser-heated%20pedestal%20growth
Laser-heated pedestal growth (LHPG) or laser floating zone (LFZ) is a crystal growth technique. A narrow region of a crystal is melted with a powerful CO2 or Nd:YAG laser. The laser, and hence the floating zone, is moved along the crystal. The molten region melts impure solid at its forward edge and leaves a wake of purer material solidified behind it. This technique for growing crystals from the melt (liquid/solid phase transition) is used in materials research. Advantages The main advantages of this technique are the high pulling rates (60 times greater than the conventional Czochralski technique) and the possibility of growing materials with very high melting points. In addition, LHPG is a crucible-free technique, which allows single crystals to be grown with high purity and low stress. The geometric shape of the crystals (the technique can produce small diameters) and the low production cost make the single-crystal fibers (SCF) produced by LHPG suitable substitutes for bulk crystals in many devices, especially those that use high-melting-point materials. However, single-crystal fibers must have equal or superior optical and structural qualities compared to bulk crystals to substitute for them in technological devices. This can be achieved by carefully controlling the growth conditions. Optical elements Until 1980, laser-heated crystal growth used only two laser beams focused over the source material. This condition generated a high radial thermal gradient in the molten zone, making the process unstable. Increasing the number of beams to four did not solve the problem, although it improved the growth process. An improvement to the laser-heated crystal growth technique was made by Fejer et al., who incorporated a special optical component known as a reflaxicon, consisting of an inner cone surrounded by a larger coaxial cone section, both with reflecting surfaces.
This optical element converts the cylindrical laser beam into a larger-diameter hollow cylinder surface. This optical component allows radial distribution of the laser energy over the molten zone, reducing radial thermal gradients. The axial temperature gradient in this technique can go as high as 10000 °C/cm, which is very high when compared to traditional crystal growth techniques (10–100 °C/cm). Convection speed A feature of the LHPG technique is its high convection speed in the liquid phase due to Marangoni convection. The molten zone can be seen to spin very fast; even when it appears to be standing still, it is in fact spinning rapidly on its axis. See also Crystal structure Crystallite Crystallization and engineering aspects Fractional crystallization Micro-pulling-down Nucleation Protocrystalline Recrystallization (metallurgy) Seed crystal References Crystals Crystallography Materials science Mineralogy Methods of crystal growth
Laser-heated pedestal growth
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
564
[ "Applied and interdisciplinary physics", "Methods of crystal growth", "Materials science", "Crystallography", "Crystals", "Condensed matter physics", "nan" ]
6,583,188
https://en.wikipedia.org/wiki/Banach%E2%80%93Mazur%20theorem
In functional analysis, a field of mathematics, the Banach–Mazur theorem is a theorem roughly stating that most well-behaved normed spaces are subspaces of the space of continuous paths. It is named after Stefan Banach and Stanisław Mazur. Statement Every real, separable Banach space is isometrically isomorphic to a closed subspace of C([0, 1]), the space of all continuous functions from the unit interval into the real line. Comments On the one hand, the Banach–Mazur theorem seems to tell us that the seemingly vast collection of all separable Banach spaces is not that vast or difficult to work with, since a separable Banach space is "only" a collection of continuous paths. On the other hand, the theorem tells us that C([0, 1]) is a "really big" space, big enough to contain every possible separable Banach space. Non-separable Banach spaces cannot embed isometrically in the separable space C([0, 1]), but for every Banach space X, one can find a compact Hausdorff space K and an isometric linear embedding of X into the space C(K) of scalar continuous functions on K. The simplest choice is to let K be the unit ball of the continuous dual X′, equipped with the w*-topology. This unit ball is then compact by the Banach–Alaoglu theorem. The embedding is introduced by saying that for every x ∈ X, the continuous function f_x on K is defined by f_x(x′) = x′(x). The mapping x ↦ f_x is linear, and it is isometric by the Hahn–Banach theorem. Another generalization was given by Kleiber and Pervin (1969): a metric space of density equal to an infinite cardinal α is isometric to a subspace of C([0, 1]^α), the space of real continuous functions on the product of α copies of the unit interval. Stronger versions of the theorem Let us write C for C([0, 1]). In 1995, Luis Rodríguez-Piazza proved that the isometry can be chosen so that every non-zero function in the image is nowhere differentiable.
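The dual-ball embedding x ↦ (functional ↦ functional(x)) can be illustrated in a toy finite-dimensional case (an assumed example, not from the article: X = R² with the ℓ¹ norm, so the dual unit ball K is the ℓ∞ ball, whose extreme points are (±1, ±1)); the sup of |x′(x)| over K is attained at an extreme point and equals the ℓ¹ norm of x, exhibiting the isometry promised by the Hahn–Banach theorem.

```python
import itertools
import random

# Extreme points of the dual (l-infinity) unit ball of (R^2, l1 norm).
EXTREME_POINTS = list(itertools.product([-1.0, 1.0], repeat=2))

# Sup norm over K of the function f_x : phi -> phi(x); by convexity the
# sup is attained at an extreme point of K.
def embed_sup_norm(x):
    return max(abs(p[0] * x[0] + p[1] * x[1]) for p in EXTREME_POINTS)

random.seed(1)
for _ in range(100):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    # the embedding is an isometry: sup |phi(x)| == |x1| + |x2|
    assert abs(embed_sup_norm(x) - (abs(x[0]) + abs(x[1]))) < 1e-12
```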
Put another way, if D consists of the functions in C that are differentiable at at least one point of [0, 1], then the isometry ι : X → C can be chosen so that ι(X) ∩ D = {0}. This conclusion applies to the space C itself, hence there exists a linear map ι : C → C that is an isometry onto its image, such that the image under ι of C¹([0, 1]) (the subspace consisting of functions that are everywhere differentiable with continuous derivative) intersects D only at 0: thus the space of smooth functions (with respect to the uniform distance) is isometrically isomorphic to a space of nowhere-differentiable functions. Note that the (metrically incomplete) space of smooth functions is dense in C. References Theory of continuous functions Functional analysis Theorems in functional analysis
Banach–Mazur theorem
[ "Mathematics" ]
532
[ "Theorems in mathematical analysis", "Functions and mappings", "Functional analysis", "Theory of continuous functions", "Mathematical objects", "Theorems in functional analysis", "Topology", "Mathematical relations" ]
6,585,596
https://en.wikipedia.org/wiki/Theory%20of%20two-level%20planning
The theory of two-level planning (alternatively, Kornai–Liptak decomposition) is a method that decomposes large problems of linear optimization into sub-problems. This decomposition simplifies the solution of the overall problem. The method also models a way of coordinating economic decisions so that decentralized firms behave so as to produce a global optimum. It was introduced by the Hungarian economist János Kornai and the mathematician Tamás Lipták in 1965. It is an alternative to Dantzig–Wolfe decomposition. Description The LP problem must have a special structure, known as a block angular structure. This is the same structure required for the Dantzig–Wolfe decomposition: There are some constraints on overall resources (D) for which a central planning agency is assumed to be responsible, and n blocks of coefficients (F1 through Fn) that are the concern of individual firms. The central agency starts the process by providing each firm with tentative resource allocations which satisfy the overall constraints D. Each firm optimizes its local decision variables assuming the global resource allocations are as indicated. The solutions of the firm LPs yield Lagrange multipliers (prices) for the global resources, which the firms transmit back to the planning agency. In the next iteration, the central agency uses the information received from the firms to come up with a revised resource allocation; for example, if firm i reports a high shadow price for resource j, the agency will grant more of this resource to this firm and less to other firms. The revised tentative allocations are sent back to the individual firms and the process continues. It has been shown that this process converges (though not necessarily in a finite number of steps) towards the global solution for the overall problem. (In contrast, the Dantzig–Wolfe method converges in a finite number of steps.)
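The quantity-coordination loop described above can be sketched in a toy setting (hypothetical numbers; each firm's "LP" here is max p_i·x_i subject to x_i ≤ allocation, whose shadow price is simply p_i, so this illustrates the iteration rather than a general LP solver):

```python
# Toy Kornai-Liptak iteration: two firms share 10 units of one central
# resource.  Firm i uses its full allocation and reports shadow price p_i.
def firm_lp(p, alloc):
    return alloc, p  # (optimal local plan, shadow price of the resource)

def coordinate(prices, total, steps=200, lr=0.05):
    alloc = [total / len(prices)] * len(prices)
    for _ in range(steps):
        duals = [firm_lp(p, a)[1] for p, a in zip(prices, alloc)]
        avg = sum(duals) / len(duals)
        # shift resource toward firms reporting above-average shadow prices
        alloc = [max(0.0, a + lr * (d - avg)) for a, d in zip(alloc, duals)]
        s = sum(alloc)
        alloc = [a * total / s for a in alloc]  # respect the central constraint D
    return alloc

alloc = coordinate([3.0, 5.0], 10.0)
# the allocation converges to giving the whole resource to the
# higher-valued use, the optimum of: max 3*x1 + 5*x2 s.t. x1 + x2 <= 10
```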
The DW and KL methods are dual: in DW the central market establishes prices (based on firm demands for resources) and sends these to the firms, which then modify the quantities they demand, while in KL the central agency sends out quantity information to firms and receives bids (i.e. firm-specific pricing information) from the firms. See also Dantzig–Wolfe decomposition Benders' decomposition Column generation References J. Kornai, T. Liptak: Two-level Planning, Econometrica, 1965, Vol. 33, pp. 141–169. Linear programming Decomposition methods
Theory of two-level planning
[ "Engineering" ]
497
[ "Decomposition methods", "Industrial engineering" ]
6,586,455
https://en.wikipedia.org/wiki/Lycorine
Lycorine is a toxic crystalline alkaloid found in various Amaryllidaceae species, such as the cultivated bush lily (Clivia miniata), surprise lilies (Lycoris), and daffodils (Narcissus). It may be highly poisonous, or even lethal, when ingested in certain quantities. Regardless, it is sometimes used medicinally, which is one reason some groups harvest the very popular Clivia miniata. Source Lycorine is found in different species of Amaryllidaceae, which include the flowers and bulbs of daffodil, snowdrop (Galanthus) and spider lily (Lycoris). Lycorine is the most frequent alkaloid of the Amaryllidaceae. The earliest diversification of the Amaryllidaceae was most likely in North Africa and the Iberian Peninsula, and lycorine is one of the oldest alkaloids in the Amaryllidaceae alkaloid biosynthetic pathway. Mechanism of action There is currently very little known about the mechanism of action of lycorine, although there have been some tentative hypotheses advanced concerning the metabolism of the alkaloid, based on experiments carried out on beagle dogs. Lycorine inhibits protein synthesis, and may inhibit ascorbic acid biosynthesis, although studies on the latter are controversial and inconclusive. Presently, it serves some interest in the study of certain yeasts, the principal organisms on which lycorine is tested. It is known that lycorine weakly inhibits acetylcholinesterase (AChE) and ascorbic acid biosynthesis. The IC50 of lycorine was found to vary between the different species it can be found in, but a common deduction from the experiments on lycorine was that it had some effect on inhibiting AChE. Lycorine exhibits cytostatic effects by targeting the actin cytoskeleton rather than by inducing apoptosis in cancer cells, though lycorine has been found to induce apoptosis or arrest the cell cycle at different points in various cell lines. Toxicity Poisoning by lycorine most often occurs through the ingestion of daffodil bulbs.
Daffodil bulbs are sometimes confused with onions, leading to accidental poisoning. In a dosage study on beagle dogs, the first sign of nausea was observed at a dose as low as 0.5 mg/kg, occurring within 2.5 hours. The effective dose to induce emesis in the dogs was 2.0 mg/kg, with effects lasting no longer than 2.5 hours after administration. Symptoms Symptoms of lycorine toxicity are nausea, vomiting, diarrhea, and convulsions. Current research Lycorine has been seen to have promising biological and pharmacological activities, such as antibacterial, antiviral, or anti-inflammatory effects, and may have anticancer properties. It has displayed inhibitory properties towards multiple cancer cell lines, including lymphoma, carcinoma, multiple myeloma, melanoma, leukemia, human A549 non-small-cell lung cancer, human OE21 esophageal cancer, and more. Lycorine has many derivatives used in anti-cancer research, such as lycorine hydrochloride (LH), a novel anti-ovarian-cancer agent; data have shown that LH effectively inhibited mitotic proliferation of Hey1B cells with very low toxicity. This drug could be used for effective anti-ovarian-cancer therapy in the future. References External links Isoquinoline alkaloids Quinoline alkaloids Diols Phenanthridines Plant toxins Acetylcholinesterase inhibitors Protein synthesis inhibitors
Lycorine
[ "Chemistry" ]
802
[ "Chemical ecology", "Plant toxins", "Alkaloids by chemical classification", "Tetrahydroisoquinoline alkaloids", "Quinoline alkaloids" ]
6,587,493
https://en.wikipedia.org/wiki/Candareen
A candareen (Singapore English usage: hoon) is a traditional measurement of weight in East Asia. It is equal to 10 cash and is 1/10 of a mace. It is approximately 378 milligrams. A troy candareen is approximately . In Hong Kong, one candareen is 0.3779936375 grams and, in the Weights and Measures Ordinance, it is also defined in ounces avoirdupois. In Singapore, one candareen is 0.377994 grams. The word candareen comes from the Malay kandūri. An earlier English form of the name was condrin. The candareen was also formerly used to describe a unit of currency in imperial China equal to 10 li and 1/10 of a mace. The Mandarin Chinese word fēn is used to denote 1/100 of a Chinese renminbi yuan, but the term candareen for that currency is now obsolete. Postal denomination On 1 May 1878 the Imperial Maritime Customs was opened to the public and China's first postage stamps, the "Large Dragons", were issued to handle payment. The stamps were inscribed "CHINA" in both Latin and Chinese characters, and denominated in candareens. See also Postage stamps and postal history of China#Imperial China Chinese units of measurement Economy of China Economic history of China (Pre-1911) Economic history of China (1912–1949) References Chinese units in Hong Kong Currencies of Asia Currencies of China Modern obsolete currencies Units of mass
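The unit relations stated above (1 candareen = 1/10 mace = 1/100 tael, with the Hong Kong ordinance value of 0.3779936375 g per candareen) can be packaged as a small converter:

```python
# Hong Kong Weights and Measures Ordinance value, grams per candareen.
CANDAREEN_G = 0.3779936375

# 10 candareens = 1 mace; 100 candareens = 10 mace = 1 tael.
def to_grams(candareens):
    return candareens * CANDAREEN_G

mace_g = to_grams(10)    # 3.779936375 g per mace
tael_g = to_grams(100)   # 37.79936375 g per tael
```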
Candareen
[ "Physics", "Mathematics" ]
316
[ "Matter", "Quantity", "Units of mass", "Mass", "Units of measurement" ]
6,587,584
https://en.wikipedia.org/wiki/Mace%20%28unit%29
A mace (Hong Kong English usage: tsin; Southeast Asian English usage: chee) is a traditional Chinese measurement of weight in East Asia that was also used as a currency denomination. It is equal to 10 candareens and is 1/10 of a tael, or approximately 3.78 grams. A troy mace is approximately 3.7429 grams. In Hong Kong, one mace is 3.779936375 grams, and in Ordinance 22 of 1884 it is also defined in ounces avoirdupois. In Singapore, one mace (referred to as chee) is 3.77994 grams. In imperial China, 10 candareens equaled 1 mace, which was 1/10 of a tael and, like the other units, was used in the weight-denominated silver currency system. A common denomination was 7 mace and 2 candareens, equal to one silver Chinese yuan. Name Like other similar measures such as tael and catty, the English word "mace" derives from Malay, in this case through Dutch maes, plural masen, from Malay mas which, in turn, derived from Sanskrit māṣa, a word related to "mash," another name for the urad bean, and masha, a traditional Indian unit of weight equal to 0.97 gram. This word is unrelated to other uses of "mace" in English. The Chinese word for mace is qián, which is also a generic word for "money" in Mandarin Chinese. The same Chinese character (kanji) was used for the Japanese sen, the former unit equal to 1/100 of a Japanese yen, the Korean chŏn (revised: jeon), the former unit equal to 1/100 of a Korean won, and for the Vietnamese tiền, a currency used in late imperial Vietnam, although none of these has ever been known as "mace" in English. See also Chinese units of measurement Economic history of China (Pre-1911) Economic history of China (1912–1949) Economy of China Hong Kong units of measurement Taiwanese units of measurement References Currencies of China Currencies of Asia Modern obsolete currencies Chinese units in Hong Kong Units of mass
Mace (unit)
[ "Physics", "Mathematics" ]
431
[ "Matter", "Quantity", "Units of mass", "Mass", "Units of measurement" ]
6,588,022
https://en.wikipedia.org/wiki/Lithium%20borohydride
Lithium borohydride (LiBH4) is a borohydride and known in organic synthesis as a reducing agent for esters. Although less common than the related sodium borohydride, the lithium salt offers some advantages, being a stronger reducing agent and highly soluble in ethers, whilst remaining safer to handle than lithium aluminium hydride. Preparation Lithium borohydride may be prepared by the metathesis reaction, which occurs upon ball-milling the more commonly available sodium borohydride and lithium bromide: NaBH4 + LiBr → NaBr + LiBH4 Alternatively, it may be synthesized by treating boron trifluoride with lithium hydride in diethyl ether: BF3 + 4 LiH → LiBH4 + 3 LiF Reactions Lithium borohydride is useful as a source of hydride (H–). It can react with a range of carbonyl substrates and other polarized carbon structures to form a hydrogen–carbon bond. It can also react with Brønsted–Lowry-acidic substances (sources of H+) to form hydrogen gas. Reduction reactions As a hydride reducing agent, lithium borohydride is stronger than sodium borohydride but weaker than lithium aluminium hydride. Unlike the sodium analog, it can reduce esters to alcohols, nitriles and primary amides to amines, and can open epoxides. The enhanced reactivity in many of these cases is attributed to the polarization of the carbonyl substrate by complexation to the lithium cation. Unlike the aluminium analog, it does not react with nitro groups, carbamic acids, alkyl halides, or secondary and tertiary amides. Hydrogen generation Lithium borohydride reacts with water to produce hydrogen. This reaction can be used for hydrogen generation. Although this reaction is usually spontaneous and violent, somewhat-stable aqueous solutions of lithium borohydride can be prepared at low temperature if degassed, distilled water is used and exposure to oxygen is carefully avoided. Energy storage Lithium borohydride is renowned as one of the highest-energy-density chemical energy carriers. 
Although presently of no practical importance, the solid liberates 65 MJ/kg heat upon reaction with atmospheric oxygen. Since it has a density of 0.67 g/cm3, oxidation of liquid lithium borohydride gives 43 MJ/L. In comparison, gasoline gives 44 MJ/kg (or 35 MJ/L), while liquid hydrogen gives 120 MJ/kg (or 8.0 MJ/L). The high specific energy density of lithium borohydride has made it an attractive candidate fuel for automobiles and rockets, but despite the research and advocacy, it has not been widely used. As with all chemical-hydride-based energy carriers, lithium borohydride is very complex to recycle (i.e. recharge) and therefore suffers from a low energy conversion efficiency. While batteries such as lithium-ion carry an energy density of up to 0.72 MJ/kg and 2.0 MJ/L, their DC-to-DC conversion efficiency can be as high as 90%. In view of the complexity of recycling mechanisms for metal hydrides, such high energy-conversion efficiencies are not practical with present technology. See also Direct borohydride fuel cell Notes References Borohydrides Lithium salts Reducing agents
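The volumetric figures quoted above follow directly from the gravimetric values and densities. A minimal cross-check in a few lines of arithmetic; the gasoline and liquid-hydrogen densities used here are back-calculated from the article's own MJ/kg and MJ/L figures, not taken from independent data:

```python
# Cross-checking the volumetric energy densities quoted above.
# MJ/L = (MJ/kg) * (kg/L), and 1 g/cm3 = 1 kg/L.

def volumetric_mj_per_l(gravimetric_mj_per_kg: float, density_g_per_cm3: float) -> float:
    """Convert a gravimetric energy density to a volumetric one."""
    return gravimetric_mj_per_kg * density_g_per_cm3

fuels = {
    # name: (MJ/kg, g/cm3)
    "LiBH4":    (65.0, 0.67),
    "gasoline": (44.0, 35.0 / 44.0),   # density implied by 44 MJ/kg and 35 MJ/L
    "LH2":      (120.0, 8.0 / 120.0),  # density implied by 120 MJ/kg and 8 MJ/L
}

for name, (grav, rho) in fuels.items():
    print(f"{name:8s} {volumetric_mj_per_l(grav, rho):6.2f} MJ/L")
```

For lithium borohydride this gives 65 × 0.67 = 43.55 MJ/L, matching the rounded 43 MJ/L in the text.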
Lithium borohydride
[ "Chemistry" ]
749
[ "Reducing agents", "Redox", "Lithium salts", "Salts" ]
6,588,064
https://en.wikipedia.org/wiki/Bridge-Building%20Brotherhood
The Bridge-Building Brotherhood is said to have been a religious association active during the 12th and 13th centuries whose purpose was building bridges. Legend Building bridges greatly helped travelers and in particular pilgrims. It was customary for a bishop to grant indulgences to those who, by money or labor, contributed to the construction of a bridge, even when no brotherhood or religious organization was involved. The register of the Archbishop of York, Walter de Gray, shows examples of indulgences granted in the 13th century for the building of bridges. The brotherhood Fratres Pontifices ("Bridgebuilding Brotherhood" in English), or Frères Pontifes, is said to have been founded in the latter part of the 12th century by St. Bénézet (a Provençal variant of the name Benedict). Bénézet was a youth who, according to legend, was divinely inspired to build the Pont Saint-Bénézet across the Rhône at Avignon. The old bridge at Avignon, some arches of which still remain, dates from the end of the 12th century. To the present day, St. Bénézet has been venerated in Avignon as the builder of the bridge and founder of the Frères Pontifes. The Fratres Pontifices are believed to have been very active, and to have built other bridges at Pont de Bonpas, Lourmarin, Mallemort and Mirabeau. They also are said to have maintained hospices at the chief fords of the principal rivers, besides building bridges and looking after ferries. The bridge over the Rhône at Pont-Saint-Esprit has been attributed to the Frères Pontifes, too. The Brotherhood is supposed to have consisted of three branches: knights, clergy and artisans. The knights usually contributed most of the funds and were sometimes called donati; the clergy were usually monks who represented the church; and the artisans were the workers who actually built the bridges. Sisters are sometimes mentioned as belonging to the same association.
In addition to the construction of bridges, the brotherhood allegedly often attended to the lodging and entertainment of travelers and the collection of alms or quête. There are conflicting reports regarding the recognition of the Fratres Pontifices by Pope Clement III. One source states that the brotherhood was recognized by Clement III in 1189, while other sources report that Clement III addressed a Papal Bull to the Fratres Pontifices in 1191, but the authenticity of that Papal Bull is questioned. History Historical research, however, led to the conclusion that no brotherhood of the kind described by the legend ever existed. There are no historical sources attesting to the existence of any such order, and there is no evidence that the Order built any of the numerous bridges attributed to it. It is inconceivable that a youth accompanied by some followers without any construction experience should have built a 900 m long stone arch bridge in an era when all experience and tradition of building large bridges had been lost and when all skilled trades were strictly controlled by the respective guilds. In that era, when neither banks nor banknotes nor demand deposits existed, the financial means for such a large project could be raised only by collecting coins or, later on, through indulgences. This kind of financing required the sustained initiative of persons interested in the project, typically the heads of the local trading houses, who got together in a confrèrie (corresponding to a present-day syndicate or citizens' initiative) in order to collect the funds over the prolonged period of time required for the execution of the project. Such a confrèrie had nothing to do with a religious order, still less with a monastery, save that often monasteries were asked to audit the use of the funds since they were one of the very few institutions capable of rendering such services. The construction works were executed by professional builders not related to any religious order.
The title "Pontifex Avenione / Pontife d'Avignon" (bridge builder of Avignon) appears not to have been mentioned prior to 1665. The legend was developed into a vivid history by François-René de Chateaubriand (1768–1848) and also by Eugène Viollet-le-Duc (1814–1879). During the Romantic era, other writers credited the brotherhood with building bridges throughout Europe, and even in countries as far away as Britain and Sweden, although there was never any historical report of such extensive activities. The "Frères Pontifes" are a legend without any historical background. The most surprising aspect is their success in making it into the most serious reference works such as the Encyclopædia Britannica or the German Brockhaus Enzyklopädie. References External links Bridges
Bridge-Building Brotherhood
[ "Engineering" ]
964
[ "Structural engineering", "Bridges" ]
6,588,124
https://en.wikipedia.org/wiki/Pompeiu%20problem
In mathematics, the Pompeiu problem is a conjecture in integral geometry, named for Dimitrie Pompeiu, who posed the problem in 1929, as follows. Suppose f is a nonzero continuous function defined on a Euclidean space, and K is a simply connected Lipschitz domain such that the integral of f vanishes over every congruent copy of K. The conjecture asserts that K must then be a ball. A special case is Schiffer's conjecture. References External links Pompeiu problem at Department of Geometry, Bolyai Institute, University of Szeged, Hungary Pompeiu problem at SpringerLink encyclopaedia of mathematics The Pompeiu problem, Schiffer's conjecture, Mathematical analysis Integral geometry Conjectures Unsolved problems in geometry
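In symbols (a standard formulation assumed here, not quoted from the sources above), with E(n) denoting the group of rigid motions of R^n, a bounded domain K is said to have the Pompeiu property if

```latex
\left( \int_{\sigma(K)} f(x)\,dx = 0 \quad \text{for all rigid motions } \sigma \in E(n) \right)
\;\Longrightarrow\; f \equiv 0
\qquad \text{for every continuous } f \colon \mathbb{R}^n \to \mathbb{R}.
```

The conjecture is then that every simply connected Lipschitz domain other than a ball has the Pompeiu property. Balls fail it: for a ball of fixed radius there exist nonzero functions of the form f(x) = sin(c x_1), with c suitably chosen, whose integral vanishes over every ball of that radius.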
Pompeiu problem
[ "Mathematics" ]
165
[ "Geometry problems", "Mathematical analysis", "Unsolved problems in mathematics", "Mathematical analysis stubs", "Unsolved problems in geometry", "Conjectures", "Mathematical problems" ]
6,588,491
https://en.wikipedia.org/wiki/Desulfurization
Desulfurization or desulphurisation is a chemical process for the removal of sulfur from a material. This involves either the removal of sulfur from a molecule (e.g. A=S → A:) or the removal of sulfur compounds from a mixture such as oil refinery streams. These processes are of great industrial and environmental importance as they provide the bulk of sulfur used in industry (Claus process and Contact process), sulfur-free compounds that could otherwise not be used in a great number of catalytic processes, and also reduce the release of harmful sulfur compounds into the environment, particularly sulfur dioxide (SO2) which leads to acid rain. Processes used for desulfurization include hydrodesulfurization, the SNOX process and the wet sulfuric acid process (WSA process). See also Shell–Paques process Flue-gas desulfurization Biodesulfurization References Desulfurization
Desulfurization
[ "Chemistry" ]
190
[ "Desulfurization", "Chemical process stubs", "Separation processes" ]
6,589,624
https://en.wikipedia.org/wiki/Anaerobic%20clarigester
The anaerobic clarigester is a form of anaerobic digester. It is regarded as being the ancestor of the upflow anaerobic sludge blanket digestion (UASB) anaerobic digester. A clarigester treats dilute biodegradable feedstocks and separates out solid and hydraulic (liquid) retention times. A diagram comparing the UASB, anaerobic clarigester and anaerobic contact processes can be found on the FAO website. See also Anaerobic digestion Anaerobic digester types Biogas Expanded granular sludge bed digestion List of wastewater treatment technologies Upflow anaerobic sludge blanket digestion References Anaerobic digester types
Anaerobic clarigester
[ "Engineering" ]
150
[ "Civil engineering", "Civil engineering stubs" ]
27,409,800
https://en.wikipedia.org/wiki/Aluminium%20alloy%20inclusions
An inclusion is a solid particle in liquid aluminium alloy. It is usually non-metallic and its nature varies depending on its source. Problems related to inclusions Inclusions can create problems in the casting when they are large and present in too high a concentration. Here are examples of problems related to inclusions: Pinholes in light gauge foil Flange cracks in beverage containers Surface streaks in bright automotive trim and lithographic material Breakage in wire drawing operation Increased tool wear and tear Increased porosity Loss of pressure tightness of engine blocks Poor machinability Cosmetic defect in apparent surfaces Diminished mechanical properties (e.g. Ultimate Tensile Strength, Yield Strength, Elongation) Inclusion types Oxide films In contact with ambient air, liquid aluminium reacts with the oxygen and forms an oxide film layer (gamma-Al2O3). This layer becomes thicker with time. When molten aluminium is disturbed, this oxide film gets mixed inside the melt. Aluminium carbide In primary aluminium production, aluminium carbides (Al4C3) originate from the reduction of alumina where carbon anodes and cathodes are in contact with the mix. Later in the process, any carbon tools in contact with the liquid aluminium can react and create carbides. Magnesium oxides In aluminium alloys containing magnesium, magnesium oxides (MgO), cuboids (MgAl2O4-cuboid) and metallurgical spinel (MgAl2O4-spinel) can form. They result from the reaction between magnesium and oxygen in the melt. More form with increasing time and temperature. Spinel can be highly detrimental because of its large size and high hardness. Refractory materials Particles of refractory material in contact with aluminium can detach and become inclusions. Graphite inclusions (C), alumina inclusions (alpha-Al2O3), CaO, SiO2, … can be found. After some time, graphite refractory in contact with aluminium will react to create aluminium carbides (harder and more detrimental inclusions).
In aluminium alloys containing magnesium, the magnesium reacts with some refractories to create rather large, hard inclusions similar to spinels. Unreacted refractory particles can originate from the degradation of refractory materials which come in contact with the melt. Chlorides Chloride inclusions (MgCl2, NaCl, CaCl2, …) are a special type of inclusion as they are liquid in liquid metal. When aluminium solidifies, they form spherical voids similar to hydrogen gas porosity, but the void contains a chloride crystal formed as the aluminium cooled. Fluxing salt Fluxing salts, like chlorides, are also liquid inclusions. They come from flux treatments added to the melt for cleaning. Intentionally added inclusions Titanium boride (TiB2) is intentionally added to the melt for grain refinement to improve mechanical properties. Phosphorus is added to the melt of hypereutectic alloys for modification of the silicon phase for better mechanical properties. This creates AlP inclusions. Boron treatment inclusions ((Ti, V)B2) form when boron is added to the melt to increase conductivity by precipitating vanadium and titanium. Less frequently found inclusions The following inclusion types can also be found in aluminium alloys: alumina needles (Al2O3), nitrides (AlN), iron oxides (FeO), manganese oxides (MnO), fluorides (Na3AlF6, NaF, CaF2, …), aluminium borides (AlB2, AlB12), borocarbides (Al4C4B). Bone ash (Ca3(PO4)2), sometimes added to patch cracks in the trough, can be found as inclusions in the melt. Inclusion measurement Several methods exist to measure the inclusion content in liquid aluminium. The most common methods are PoDFA, Prefil, K-Mold and LiMCA. Measuring the inclusions helps in understanding the impact of furnace preparation, alloying practice, feedstock mix, settling time, and similar parameters on melt cleanliness. PoDFA The PoDFA method provides information on the composition and concentration of the inclusions in molten aluminium.
PoDFA is widely used for process characterization and optimization, as well as product improvement. It makes it possible to quickly and accurately assess the effects of various operating practices on metal cleanliness or to identify filtration efficiency. The PoDFA method was developed by Rio Tinto Alcan in the 1970s. The metallographic analysis method has been optimized over the years on a wide variety of alloys. The measurement principle is the following: A predetermined quantity of liquid aluminium is filtered under controlled conditions using a very fine porosity filter. Inclusions in the melt are concentrated at the filter surface by a factor of about 10,000. The filter, along with the residual metal, is then cut, mounted and polished before being analyzed under an optical microscope by a trained PoDFA metallographer. Prefil The Prefil method is similar to PoDFA but, in addition to the metallographic analysis, Prefil also provides immediate feedback on metal cleanliness from the metal flowrate through the filter. Because everything about the filtration is well controlled (pressure, metal temperature, ...), the only parameter affecting the filtration speed is the inclusion content. One can determine the cleanliness level from the filtration curve (weight of metal filtered as a function of time). K-Mold K-Mold is a fracture test method. Liquid metal is cast into a mold containing notches. Once solidified, the resulting bar is bent to expose a fracture surface. The visual observation of inclusions on the fracture is used to determine a K-value for the melt and compared to a preset standard. This method is rather imprecise and therefore only suitable when the metal contains large inclusions and inclusion clusters. LiMCA The LiMCA method measures the total concentration and size distribution of inclusions present in aluminium alloys. Its measuring principle is objective and user-independent.
The LiMCA CM system can characterize the cleanliness of a melt at time intervals on the order of one minute. It can therefore monitor, in real-time, the evolution of cleanliness along a cast as a function of process parameters and melt-handling practices. The heart of the LiMCA measuring system consists of a closed glass tube (electrically insulating material) bearing a small orifice at its bottom. The tube is positioned in liquid metal. By creating a vacuum inside the tube, the metal with the suspended inclusions to be detected is forced through the small orifice. Two electrodes are necessary: one inside the tube and the other outside. Both electrodes are immersed in the liquid metal. A constant electric current is applied between the electrodes. The current flows through the liquid metal through the small orifice in the tube. When an inclusion enters the orifice, it displaces its volume of conducting fluid, temporarily raising the electrical resistance. The increase of resistance generates a voltage pulse. The magnitude of the voltage pulse is a function of the volume of the particle. The duration of the pulse is related to the transit time of the inclusion. The voltage pulses are amplified and their amplitude measured digitally. The size distribution and total concentration are displayed in real-time on a computer screen. Inclusion removal In order to get a good quality product, the inclusions must be removed. Liquid metal filtration through a ceramic medium is an efficient way to clean the metal. Different types of ceramic media are used in-line in foundries, such as ceramic foam filters, porous tube filters, bonded ceramic filters, and deep bed filters. See also Non-metallic inclusions for inclusions in steel Hydrogen gas porosity References Casting (manufacturing) Metallurgy
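The resistive-pulse measurement LiMCA relies on is the electric sensing zone (Coulter) principle. A minimal sketch using the standard first-order formula dR = 4·rho·d^3 / (pi·D^4) for an insulating sphere of diameter d in an orifice of diameter D (valid for d << D); all numeric values below are illustrative assumptions, not LiMCA specifications:

```python
import math

# Resistive-pulse sizing of an inclusion, the principle behind LiMCA.
# Assumed illustrative values, not instrument specifications:
RHO_MELT = 24e-8    # ohm*m, assumed resistivity of molten aluminium
ORIFICE_D = 300e-6  # m, assumed orifice diameter
CURRENT = 60.0      # A, assumed constant sensing current

def pulse_voltage(d: float) -> float:
    """Voltage pulse produced by an inclusion of diameter d (metres)."""
    delta_r = 4.0 * RHO_MELT * d ** 3 / (math.pi * ORIFICE_D ** 4)
    return CURRENT * delta_r

def inclusion_diameter(delta_v: float) -> float:
    """Invert a measured pulse height back to an inclusion diameter (metres)."""
    return (delta_v * math.pi * ORIFICE_D ** 4 / (4.0 * RHO_MELT * CURRENT)) ** (1.0 / 3.0)

v = pulse_voltage(40e-6)  # a 40 um inclusion
print(f"40 um inclusion -> {v * 1e6:.0f} uV pulse")
```

The cubic dependence of pulse height on particle diameter is what lets the instrument build a size distribution from a stream of digitized pulse amplitudes.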
Aluminium alloy inclusions
[ "Chemistry", "Materials_science", "Engineering" ]
1,623
[ "Metallurgy", "Materials science", "nan" ]
27,414,616
https://en.wikipedia.org/wiki/Copper%20zinc%20water%20filtration
Copper zinc water filtration is a high-purity brass water filtration process that relies on the redox potential of dissolved oxygen in water in the presence of a zinc anode and copper cathode. Dissolved impurities in the water act as the substrate and are reduced to more physiologically inert compounds. Due to inherent limitations in bactericidal and antiprotozoal activity and poor filtration of organic chemicals (in particular organophosphate pesticides), copper zinc water filters are not commonly used in the household setting unless combined with carbon based systems. They also have application in the industrial setting to extend the life of carbon based filtration systems for waste water effluent. Chemistry In the filtration process, zinc acts as an anode and copper as a cathode in a galvanic cell. Ionic contaminants are removed by electron exchange (a redox reaction), in which they are converted to a more physiologically inert form. This redox reaction generates an electric potential of about 300 mV, which may be responsible for the partial antimicrobial effect, along with hydroxyl radicals that form during the process. Specifications The process can remove chlorine, hydrogen sulfide, heavy metals, iron, and can reduce certain inorganic contaminants. The filter also inhibits the growth of algae, fungi, and bacteria to an extent. Copper zinc filtration has been used in municipal processing, and the treatment of medical and dental waste water, as well as for industrial effluents. They can be a component of whole home water filtration systems at point of entry or inline with shower heads or sink heads at point of use, as they remove many forms of dissolved chlorine and are effective at higher temperatures. One of the earlier described commercial methods for copper zinc water filtration is via kinetic degradation fluxion media (KDF), a product whose main filtration line consists of brass granules with varying proportions of zinc and copper alloy.
It was developed in 1984 and patented by Don Heskett in 1987. An alternative KDF medium is a matrix of fine metal similar to steel wool. Filtration and usage Amongst copper zinc water filtration methods, KDF is certified to the NSF International Standard 61 for water treatment plant applications, and the 2010 NSF standard for drinking water treatment units. A 2005 report by the US Department of Health and Human Services found that, under normal operating conditions, a treatment of contaminated groundwater in the Cedar Brook area consisting of KDF and activated carbon filtration removed volatile organic compounds and mercury to levels compliant with the state drinking water standards, though they also noted the water used may already need to be "exceptionally clean" prior to filtration. Limitations Copper zinc water filtration does not remove organic chemicals, such as pesticides and disinfection byproducts, nor is it effective against the parasitic cysts of giardia or cryptosporidium. The filters must be periodically backwashed with hot water to clean them. This reduces their efficiency, and the pollutants dislodged by washing can lead to water contamination. The United States Environmental Protection Agency found that copper zinc water filtration can remove mercury from contaminated water, but only at low concentrations, and recommends that for highly contaminated water other processes be used. Due to their bactericidal action, copper zinc water filtration devices are considered by the EPA to be "pesticidal". However, Stanford physician Paul Auerbach recommends against their usage as a sole means of germicidal water treatment, and he does not include them amongst his recommended protozoal disinfection methods at either point of entry or point of use. A 1995 United States Environmental Protection Agency report found that such systems were employed at approximately 20 US-based cooling towers in 1993.
The report documented variable results, with some systems discontinued because they were ineffective at controlling bacterial growth, though in other instances they were preferred because of comparatively safe waste production and simpler maintenance. There is also concern about environmental damage due to the release of zinc in areas with high concentrations of metals or certain pollutants, in particular copper and chlorine. Publications of the American Water Works Association do not recommend the use of copper zinc water filtration systems to treat chlorinated water that outflows to streams. Studies have also shown that regulation standards for the systems can vary widely or be nonexistent depending on the industry and region of their usage. See also References Further reading Materials safety data sheet published by General Electric. Water treatment Copper alloys Zinc alloys Brass Water filters
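For the zinc/copper couple described in the Chemistry section, the ideal galvanic-cell potential follows from textbook standard reduction potentials (25 °C, standard conditions; the half-cell values below are those textbook figures, assumed here for illustration). The ~300 mV quoted above is the much lower potential observed in a brass (Cu–Zn alloy) bed in service, not this ideal cell:

```python
# Ideal galvanic-cell potential for the zinc anode / copper cathode couple.
E_STD = {            # standard reduction potentials, volts (textbook values)
    "Cu2+/Cu": +0.34,
    "Zn2+/Zn": -0.76,
}

def cell_potential(cathode: str, anode: str) -> float:
    """E_cell = E_cathode - E_anode, both taken as reduction potentials."""
    return E_STD[cathode] - E_STD[anode]

print(f"ideal Zn|Cu cell: {cell_potential('Cu2+/Cu', 'Zn2+/Zn'):.2f} V")
```

This gives the familiar 1.10 V Daniell-cell value; alloying and non-standard in-service conditions account for the gap down to the observed ~300 mV.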
Copper zinc water filtration
[ "Chemistry", "Engineering", "Environmental_science" ]
951
[ "Water filters", "Copper alloys", "Water treatment", "Filters", "Water pollution", "Alloys", "Zinc alloys", "Environmental engineering", "Water technology" ]
27,420,170
https://en.wikipedia.org/wiki/Electric%20discharge%20in%20gases
Electric discharge in gases occurs when electric current flows through a gaseous medium due to ionization of the gas. Depending on several factors, the discharge may radiate visible light. The properties of electric discharges in gases are studied in connection with the design of lighting sources and in the design of high voltage electrical equipment. Discharge types In cold cathode tubes, the electric discharge in gas has three regions, with distinct current–voltage characteristics: I: Townsend discharge, below the breakdown voltage. At low voltages, the only current is that due to the generation of charge carriers in the gas by cosmic rays or other sources of ionizing radiation. As the applied voltage is increased, the free electrons carrying the current gain enough energy to cause further ionization, causing an electron avalanche. In this regime, the current increases from femtoamperes to microamperes, i.e. by nine orders of magnitude, for very little further increase in voltage. The voltage–current characteristic begins tapering off near the breakdown voltage and the glow becomes visible. II: glow discharge, which occurs once the breakdown voltage is reached. The voltage across the electrodes suddenly drops and the current increases to the milliampere range. At lower currents, the voltage across the tube is almost current-independent; this is used in glow discharge voltage-regulator tubes. In this normal-glow regime, the area of the electrodes covered by the glow discharge is proportional to the current. At higher currents the normal glow turns into abnormal glow, the voltage across the tube gradually increases, and the glow discharge covers more and more of the surface of the electrodes. Low-power switching (glow-discharge thyratrons), voltage stabilization, and lighting applications (e.g. Nixie tubes, decatrons, neon lamps) operate in this region. III: arc discharge, which occurs in the ampere range of the current; the voltage across the tube drops with increasing current.
High-current switching tubes, e.g. triggered spark gap, ignitron, thyratron and krytron (and its vacuum tube derivative, sprytron, using vacuum arc), high-power mercury-arc valves and high-power light sources, e.g. mercury-vapor lamps and metal halide lamps, operate in this range. Glow discharge is facilitated by electrons striking the gas atoms and ionizing them. For formation of glow discharge, the mean free path of the electrons has to be reasonably long but shorter than the distance between the electrodes; glow discharges therefore do not readily occur at either too low or too high gas pressures. The breakdown voltage for the glow discharge depends nonlinearly on the product of gas pressure and electrode distance according to Paschen's law. For a certain pressure × distance value, there is a lowest breakdown voltage. The increase of strike voltage at shorter electrode distances occurs because the mean free path of the electrons becomes too long in comparison with the electrode distance. A small amount of a radioactive element may be added into the tube, either as a separate piece of material (e.g. nickel-63 in krytrons) or as an addition to the alloy of the electrodes (e.g. thorium), to preionize the gas and increase the reliability of electrical breakdown and glow or arc discharge ignition. A gaseous radioactive isotope, e.g. krypton-85, can also be used. Ignition electrodes and keepalive discharge electrodes can also be employed. The E/N ratio between the electric field E and the concentration of neutral particles N is often used, because the mean energy of electrons (and therefore many other properties of discharge) is a function of E/N. Increasing the electric field E by some factor q has the same consequences as lowering gas density N by factor q. Its SI unit is V·m2, but the Townsend unit (Td) is frequently used.
According to a Nature news article describing the work, researchers at Imperial College London demonstrated how they built a mini-map that gives tourists luminous route indicators. To make the one-inch London chip, the team etched a plan of the city centre on a glass slide. Fitting a flat lid over the top turned the streets into hollow, connected tubes. They filled these with helium gas, and inserted electrodes at key tourist hubs. When a voltage is applied between two points, electricity naturally runs through the streets along the shortest route from A to B – and the gas glows like a tiny strip light. This provides a novel, visible analog computing method for solving a wide class of maze-searching problems, based on the way a glow discharge lights up in a microfluidic chip. References
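Paschen's law, mentioned above, can be sketched numerically. The constants A, B and gamma below are rough textbook values for air and are assumptions for illustration only; real curves depend on the gas, the electrode material and the geometry:

```python
import math

# Paschen's law: breakdown voltage V_B as a function of the product p*d
# of gas pressure and electrode distance.
A = 15.0      # 1/(cm*Torr), saturation ionization constant (assumed)
B = 365.0     # V/(cm*Torr) (assumed)
GAMMA = 0.01  # secondary electron emission coefficient (assumed)

def breakdown_voltage(pd: float) -> float:
    """V_B = B*pd / (ln(A*pd) - ln(ln(1 + 1/gamma))), pd in Torr*cm."""
    denom = math.log(A * pd) - math.log(math.log(1.0 + 1.0 / GAMMA))
    if denom <= 0.0:
        return math.inf  # left of the curve's asymptote: no breakdown
    return B * pd / denom

# The minimum of the curve, from setting dV_B/d(pd) = 0:
pd_min = math.e / A * math.log(1.0 + 1.0 / GAMMA)  # Torr*cm
print(f"pd_min = {pd_min:.2f} Torr*cm, V_min = {breakdown_voltage(pd_min):.0f} V")
```

With these assumed constants the curve bottoms out near 0.8 Torr·cm and roughly 300 V, the same order as the commonly cited experimental minimum for air (about 330 V).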
Electric discharge in gases
[ "Physics", "Chemistry" ]
977
[ "Physical phenomena", "Matter", "Electrical discharge in gases", "Plasma phenomena", "Ions" ]
27,421,560
https://en.wikipedia.org/wiki/Bureau%20of%20Ocean%20Energy%20Management
The Bureau of Ocean Energy Management (BOEM) is an agency within the United States Department of the Interior, established in 2010 by Secretarial Order. On May 19, 2010, Secretary of the Interior Ken Salazar signed a Secretarial Order dividing the Minerals Management Service (MMS) into three independent entities: BOEM, the Bureau of Safety and Environmental Enforcement, and the Office of Natural Resources Revenue. The most important legislation for BOEM is the Outer Continental Shelf (OCS) Lands Act, which facilitates the federal government's leasing of its offshore mineral resources and energy resources. In addition to the OCS Lands Act, the Submerged Lands Act (SLA) of 1953 grants individual states rights to the natural resources of submerged lands from the coastline to no more than 3 nautical miles (5.6 km) into the Atlantic, Pacific, the Arctic Oceans, and the Gulf of Mexico. The only exceptions are Texas and the west coast of Florida, where state jurisdiction extends from the coastline to no more than 3 marine leagues (16.2 km) into the Gulf of Mexico. BOEM's mission BOEM's stated mission is to "manage development of U.S. Outer Continental Shelf (OCS) energy, mineral, and geological resources in an environmentally and economically responsible way." Offshore energy The Outer Continental Shelf (OCS) is a significant source of oil and gas for the nation's energy supply. As of May 1, 2021, BOEM managed about 2,287 active oil and gas leases on approximately 12.1 million OCS acres. In 2009, the Department of the Interior announced the final regulations for the Outer Continental Shelf (OCS) Renewable Energy Program, which was authorized by the Energy Policy Act of 2005 (EPAct). These regulations provide a framework for issuing leases, easements and rights-of-way for OCS activities that support production and transmission of energy from sources other than oil and natural gas.
Marine minerals BOEM is the only federal agency with the authority to lease marine minerals from the OCS, including responding to commercial requests for OCS minerals such as gold, manganese, or other hard minerals. Carbon sequestration Carbon sequestration (CS) refers to a process of storing captured carbon dioxide (CO2) that leads to a reduction of CO2 in the atmosphere. Carbon sequestration activities can take many forms. One form of long-term storage is injection of captured CO2 into suitable underground geologic formations. On November 15, 2021, the Infrastructure Investment and Jobs Act was signed into law and gave the Department of the Interior the authority to grant a lease, easement, or right-of-way on the Outer Continental Shelf (OCS) for long-term sequestration of carbon dioxide that would otherwise go into the atmosphere and contribute to further climate change. BOEM is working with the Bureau of Safety and Environmental Enforcement (BSEE) on a draft rule to implement this authority over the OCS CS projects. Environmental studies BOEM's environmental program ensures that environmental protection is a foremost and indispensable consideration in BOEM's decision-making. BOEM uses science and law to inform its environmental analyses, conduct consultations, and design and conduct research. The environmental program informs three major areas that BOEM regulates on the outer continental shelf: oil and gas, renewable energy, and non-energy minerals such as sand and gravel or hard minerals. Directors The agency's first director, serving from June 2010 to May 2014, was Tommy Beaudreau. The second director was Abigail Ross Hopper, serving from January 2015 to January 2017. From 2017 to 2021, deputy director Walter Cruickshank served as the acting director. From February 2021 to January 2023, the director was Amanda Lefton.
In an announcement with United States Secretary of Energy Jennifer Granholm on April 27, 2022, Lefton said that her agency would focus on efforts to promote offshore wind projects, saying that BOEM would work to "inspire confidence and demonstrate commitment" for lease planning and calling it her "number-one priority," National Fisherman reported. In January 2023, Lefton announced her resignation, effective January 19. The current director is Elizabeth Klein. Shipwrecks BOEM keeps records of shipwrecks, to ensure the Nation's important historical sites are protected when offshore activities take place on the OCS. These shipwrecks, particularly when over fifty years old, may be eligible for listing on the National Register of Historic Places, and any new wells or pipelines have to be studied for their potential effect on archaeological sites on the outer continental shelf. List of shipwrecks The BOEM maintains a list of shipwrecks and the location. Northern Eagle (built 1857), a fishing schooner lost 1908-03-01 Carrie Strong (lost 1916) W.H. Marston (lost 1927) Western Empire was abandoned during a hurricane on September 18, 1875. Further research ruled out the wreck as the Western Empire, and it is now believed to be a naval ship (now referred to as the BOEMRE Vessel ID No. 359) that may have been used as a merchant vessel. Nokomis (lost 1905) World War II shipwrecks There were over 100 attacks on ships in the Gulf of Mexico by German U-boats. Several were listed by the MMS and maintained by the BOEM. SS Gulfoil (built 1912, lost 1942-05-17), sunk by German submarine U-506 SS Gulfpenn (built 1921, lost 1942-05-13), sunk by German submarine U-506 SS Robert E. Lee (built 1924, lost 1942-07-30), sunk by German submarine U-166 SS Alcoa Puritan (built 1941, lost 1942-06-05), sunk by German submarine U-507 SS Carrabulle (built 1920, lost 1942-05-26), sunk by German submarine U-106.
SS Amapala (built 1924, lost 1942-05-16), sunk by German submarine U-507 The only known German U-boat to be sunk in the Gulf is U-166. After U-166 sank the SS Robert E. Lee, the United States Navy patrol craft PC-566 reported hitting and sinking the submarine. This claim was questioned, and the sinking was instead attributed to a United States Coast Guard Grumman G-44 Widgeon that reported an attack, over 100 miles away, on what was thought to be U-166. In 2001 the wreckage of U-166 was identified near the wreckage of the Robert E. Lee, and in 2014 the record was corrected to credit PC-566 with sinking U-166. In 2014 the position was designated a war grave. See also Title 30 of the Code of Federal Regulations Worst Case Discharge Wind power in the United States Second Happy Time References External links Bureau of Ocean Energy Management Official website Bureau of Ocean Energy Management in the Federal Register Government agencies established in 2011 United States Department of the Interior agencies Oil wells Natural resources agencies in the United States Environmental agencies in the United States 2011 establishments in the United States
Bureau of Ocean Energy Management
[ "Chemistry" ]
1,458
[ "Petroleum technology", "Oil wells" ]
18,900,634
https://en.wikipedia.org/wiki/Codd%27s%20theorem
Codd's theorem states that relational algebra and the domain-independent relational calculus queries, two well-known foundational query languages for the relational model, are precisely equivalent in expressive power. That is, a database query can be formulated in one language if and only if it can be expressed in the other. The theorem is named after Edgar F. Codd, the father of the relational model for database management. The domain independent relational calculus queries are precisely those relational calculus queries that are invariant under choosing domains of values beyond those appearing in the database itself. That is, queries that may return different results for different domains are excluded. An example of such a forbidden query is the query "select all tuples other than those occurring in relation R", where R is a relation in the database. Assuming different domains, i.e., sets of atomic data items from which tuples can be constructed, this query returns different results and thus is clearly not domain independent. Codd's Theorem is notable since it establishes the equivalence of two syntactically quite dissimilar languages: relational algebra is a variable-free language, while relational calculus is a logical language with variables and quantification. Relational calculus is essentially equivalent to first-order logic, and indeed, Codd's Theorem had been known to logicians since the late 1940s. Query languages that are equivalent in expressive power to relational algebra were called relationally complete by Codd. By Codd's Theorem, this includes relational calculus. Relational completeness clearly does not imply that any interesting database query can be expressed in relationally complete languages. 
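The domain dependence of the excluded query "all tuples other than those occurring in relation R" can be sketched directly; the following code is purely illustrative and not tied to any particular database system:

```python
# Illustrative sketch: the calculus query "all values not occurring in R"
# is domain-dependent -- enlarging the domain of atomic values changes
# its answer, which is why Codd's theorem excludes such queries.

def complement_of_r(r, domain):
    """Evaluate {x | x in domain and x not in r} over an explicit domain."""
    return {x for x in domain if x not in r}

r = {1, 2}
print(complement_of_r(r, {1, 2, 3}))     # {3}
print(complement_of_r(r, {1, 2, 3, 4}))  # {3, 4} -- a different answer
```

A domain-independent query such as "all tuples in R" would return the same result under both domains, since its answer depends only on the database itself.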
Well-known examples of inexpressible queries include simple aggregations (counting tuples, or summing up values occurring in tuples, which are operations expressible in SQL but not in relational algebra) and computing the transitive closure of a graph given by its binary edge relation (see also expressive power). Codd's theorem also does not consider SQL nulls and the three-valued logic they entail; the logical treatment of nulls remains mired in controversy. Additionally, SQL has multiset semantics, allowing duplicate rows. Nevertheless, relational completeness constitutes an important yardstick by which the expressive power of query languages can be compared. Notes References External links Relational model Theorems in the foundations of mathematics
Codd's theorem
[ "Mathematics" ]
478
[ "Foundations of mathematics", "Mathematical logic", "Mathematical problems", "Mathematical theorems", "Theorems in the foundations of mathematics" ]
18,903,091
https://en.wikipedia.org/wiki/Curtis%E2%80%93Hedlund%E2%80%93Lyndon%20theorem
The Curtis–Hedlund–Lyndon theorem is a mathematical characterization of cellular automata in terms of their symbolic dynamics. It is named after Morton L. Curtis, Gustav A. Hedlund, and Roger Lyndon; in his 1969 paper stating the theorem, Hedlund credited Curtis and Lyndon as co-discoverers. It has been called "one of the fundamental results in symbolic dynamics". The theorem states that a function from a shift space to itself represents the transition function of a one-dimensional cellular automaton if and only if it is continuous (with respect to the Cantor topology) and equivariant (with respect to the shift map). More generally, it asserts that the morphisms between any two shift spaces (that is, continuous mappings that commute with the shift) are exactly those mappings which can be defined uniformly by a local rule. The version of the theorem in Hedlund's paper applied only to one-dimensional finite automata, but a generalization to higher dimensional integer lattices was soon afterwards published by , and it can be even further generalized from lattices to discrete groups. One important consequence of the theorem is that, for reversible cellular automata, the reverse dynamics of the automaton can also be described by a cellular automaton. Definitions An alphabet is any finite set of symbols, which may be thought of as the states of the cells in a cellular automaton. A configuration is a bi-infinite sequence of symbols from the alphabet: x = (..., x_{-1}, x_0, x_1, ...). A position in a configuration is an integer, the index of one of the symbols in the sequence; the positions may be thought of as the cells of a cellular automaton. A pattern is a finite set of positions and an assignment of symbols to each of these positions. The shift space is the set of all possible configurations over a given alphabet.
It may be given the structure of a topological space according to the Cantor topology, in which the fundamental open sets are the sets of configurations that match any single pattern and the open sets are arbitrary unions of fundamental open sets. In this topology, a function f from configurations to configurations is continuous if, for any fixed pattern p defining a fundamental open set X_p, the set f^{-1}(X_p) of configurations mapped by f into X_p can itself be described by a (possibly infinite) set S of patterns, with the property that a configuration belongs to f^{-1}(X_p) if and only if it matches a pattern in S. The shift map s is a particular continuous function on the shift space that transforms a configuration x into a new configuration y in which each symbol is shifted one position over from its previous position: that is, for every integer i, y_i = x_{i-1}. A function f is equivariant under the shift map if the transformation on configurations described by f commutes with the shift map; that is, for every configuration x, it must be the case that f(s(x)) = s(f(x)). Intuitively, this means that every position of the configuration is updated by f using the same rule as every other position. A cellular automaton is defined by a rule for computing the new value of each position in a configuration based only on the values of cells in a prior-fixed finite neighborhood surrounding the position, with all positions of the configuration being updated simultaneously based on the same update rule. That is, the new value of a position is a function only of the values of the cells in its neighborhood rather than depending more generally on an unbounded number of cells of the previous configuration. The function f that uses this rule to map a configuration of the cellular automaton into its successor configuration is necessarily equivariant with respect to the shift map, by the assumption that all positions use the same update rule.
It is also necessarily continuous in the Cantor topology: if p is a fixed pattern, defining a fundamental open set X_p, then f^{-1}(X_p) is defined by a finite set of patterns, the assignments to cells in the neighborhood of p that cause f to produce p. The Curtis–Hedlund–Lyndon theorem states that these two properties are sufficient to define cellular automata: every continuous equivariant function is the update rule of a cellular automaton. Proof Ceccherini-Silberstein and Coornaert provide the following proof of the Curtis–Hedlund–Lyndon theorem. Suppose f is a continuous shift-equivariant function on the shift space. For each configuration x, let p(x) be the pattern consisting of the single symbol that appears at position zero of f(x). By continuity of f, there must exist a finite pattern q in x such that, if the positions outside q are changed arbitrarily but the positions within q are fixed to their values in x, then the result of applying f remains the same at position zero. Equivalently, there must exist a fundamental open set X_q such that x belongs to X_q and such that for every configuration z in X_q, f(x) and f(z) have the same value at position zero. These fundamental open sets X_q (for all possible configurations x) form an open cover of the shift space. However, the shift space is a compact space: it is a product of finite topological spaces with the alphabet as their points, so compactness follows from Tychonoff's theorem. By compactness, every open cover has a finite subcover. The finite set of positions appearing in this finite subcover may be used as the neighborhood of position zero in a description of f as a cellular automaton rule. The same proof applies more generally when the set of integer positions is replaced by any discrete group G, the space of configurations is replaced by the set of functions from G to a finite alphabet, and shift-equivariance is replaced by equivariance under the action of G on itself. In particular, it applies to cellular automata defined on an integer grid of any dimension.
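The two defining properties can be checked concretely for a sample local rule. The following sketch is illustrative only: it uses a finite cyclic configuration rather than the bi-infinite configurations of the theorem, and an arbitrary elementary cellular automaton rule, to verify that a local-rule update commutes with the shift map.

```python
# Sketch on a finite cyclic configuration (illustrative; the theorem is
# stated for bi-infinite configurations): a radius-1 local rule defines
# a map that commutes with the shift, as Curtis-Hedlund-Lyndon requires.

def step(config, rule=110):
    """One update of an elementary cellular automaton, cyclic boundary.

    The neighborhood (left, center, right) is read as a 3-bit index into
    the Wolfram rule number.
    """
    n = len(config)
    return [(rule >> (4 * config[(i - 1) % n]
                      + 2 * config[i]
                      + config[(i + 1) % n])) & 1
            for i in range(n)]

def shift(config):
    """Shift map on a cyclic configuration: move each symbol one place."""
    return config[1:] + config[:1]

x = [0, 1, 1, 0, 1, 0, 0, 1]
print(step(shift(x)) == shift(step(x)))  # True: update and shift commute
```

The check succeeds for any configuration and any rule number, because the update at each position uses only relative offsets, which is exactly the "uniform local rule" side of the theorem.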
Counterexample for infinite alphabets Consider the space of bi-infinite sequences of integers, and define a function f from this space to itself according to the rule that, if y = f(x), then for every position i, y_i = x_{i + x_i}. This rule is the same for each position, so it is shift-equivariant. And it can be shown to be continuous according to the Cantor topology: for each finite pattern p in y, there is a pattern in x with at most twice as many positions that forces f to generate p, consisting of the cells of p together with the cells whose values are copied into p. However, despite being continuous and equivariant, f is not a cellular automaton rule, because the value of any cell can potentially depend on the value of any other cell rather than only depending on the cells in any prior-fixed finite neighborhood. Application to reversible cellular automata A cellular automaton is said to be reversible when every configuration of the automaton has exactly one predecessor. It follows by a compactness argument that the function mapping each configuration to its predecessor is itself continuous in the shift space, and it is clearly also shift-equivariant. Therefore, by the Curtis–Hedlund–Lyndon theorem, the time-reversed dynamics of the cellular automaton may itself be generated using a different cellular automaton rule. However, the neighborhood of a cell in the reverse automaton may be significantly larger than the neighborhood of the same cell in the forward automaton. Generalization One can generalize the definition of cellular automaton to those maps that are defined by rules for computing the new value of each position in a configuration based on the values of cells in a finite but variable neighborhood surrounding the position. In this case, as in the classical definition, the local rule is the same for all cells, but the neighborhood is also a function of the configuration around the position.
The counterexample given above, a continuous and shift-equivariant map that is not a classical cellular automaton, is an example of a generalized cellular automaton. When the alphabet is finite, the definition of generalized cellular automata coincides with the classical definition of cellular automata due to the compactness of the shift space. Generalized cellular automata were proposed by where it was proved they correspond to continuous shift-equivariant maps. See also Surjunctive group References Theorems in discrete mathematics Articles containing proofs Symbolic dynamics Cellular automata
Curtis–Hedlund–Lyndon theorem
[ "Mathematics" ]
1,645
[ "Discrete mathematics", "Symbolic dynamics", "Recreational mathematics", "Theorems in discrete mathematics", "Cellular automata", "Mathematical problems", "Articles containing proofs", "Mathematical theorems", "Dynamical systems" ]
1,342,156
https://en.wikipedia.org/wiki/Artin%E2%80%93Mazur%20zeta%20function
In mathematics, the Artin–Mazur zeta function, named after Michael Artin and Barry Mazur, is a function that is used for studying the iterated functions that occur in dynamical systems and fractals. It is defined from a given function f as the formal power series ζ_f(z) = exp(∑_{n=1}^∞ card(Fix(f^n)) z^n/n), where Fix(f^n) is the set of fixed points of the nth iterate f^n of the function f, and card(Fix(f^n)) is the number of fixed points (i.e. the cardinality of that set). Note that the zeta function is defined only if the set of fixed points is finite for each n. This definition is formal in that the series does not always have a positive radius of convergence. The Artin–Mazur zeta function is invariant under topological conjugation. The Milnor–Thurston theorem states that the Artin–Mazur zeta function of an interval map f is the inverse of the kneading determinant of f. Analogues The Artin–Mazur zeta function is formally similar to the local zeta function, when a diffeomorphism on a compact manifold replaces the Frobenius mapping for an algebraic variety over a finite field. The Ihara zeta function of a graph can be interpreted as an example of the Artin–Mazur zeta function. See also Lefschetz number Lefschetz zeta-function References Zeta and L-functions Dynamical systems Fixed points (mathematics)
Artin–Mazur zeta function
[ "Physics", "Mathematics" ]
286
[ "Mathematical analysis", "Fixed points (mathematics)", "Topology", "Mechanics", "Dynamical systems" ]
1,343,550
https://en.wikipedia.org/wiki/P%20wave
A P wave (primary wave or pressure wave) is one of the two main types of elastic body waves, called seismic waves in seismology. P waves travel faster than other seismic waves and hence are the first signal from an earthquake to arrive at any affected location or at a seismograph. P waves may be transmitted through gases, liquids, or solids. Nomenclature The name P wave can stand for either pressure wave (as it is formed from alternating compressions and rarefactions) or primary wave (as it has high velocity and is therefore the first wave to be recorded by a seismograph). The name S wave represents another seismic wave propagation mode, standing for secondary or shear wave, a usually more destructive wave than the primary wave. Seismic waves in the Earth Primary and secondary waves are body waves that travel within the Earth. The motion and behavior of both P and S waves in the Earth are monitored to probe the interior structure of the Earth. Discontinuities in velocity as a function of depth are indicative of changes in phase or composition. Differences in arrival times of waves originating in a seismic event like an earthquake as a result of waves taking different paths allow mapping of the Earth's inner structure. P wave shadow zone Almost all the information available on the structure of the Earth's deep interior is derived from observations of the travel times, reflections, refractions and phase transitions of seismic body waves, or normal modes. P waves travel through the fluid layers of the Earth's interior, and yet they are refracted slightly when they pass through the transition between the semisolid mantle and the liquid outer core. As a result, there is a P wave "shadow zone" between 103° and 142° from the earthquake's focus, where the initial P waves are not registered on seismometers. In contrast, S waves do not travel through liquids. 
As an earthquake warning Advance earthquake warning is possible by detecting the nondestructive primary waves that travel more quickly through the Earth's crust than do the destructive secondary and Rayleigh waves. The amount of warning depends on the delay between the arrival of the P wave and other destructive waves, generally on the order of seconds up to about 60 to 90 seconds for deep, distant, large quakes such as the 2011 Tohoku earthquake. The effectiveness of a warning depends on accurate detection of the P waves and rejection of ground vibrations caused by local activity (such as trucks or construction). Earthquake early warning systems can be automated to allow for immediate safety actions, such as issuing alerts, stopping elevators at the nearest floors, and switching off utilities. Propagation Velocity In isotropic and homogeneous solids, a P wave travels in a straight line longitudinally; thus, the particles in the solid vibrate along the axis of propagation (the direction of motion) of the wave energy. The velocity of P waves in that kind of medium is given by v_p = √((K + (4/3)μ)/ρ) = √((λ + 2μ)/ρ), where K is the bulk modulus (the modulus of incompressibility), μ is the shear modulus (modulus of rigidity, sometimes denoted as G and also called the second Lamé parameter), ρ is the density of the material through which the wave propagates, and λ is the first Lamé parameter. In typical situations in the interior of the Earth, the density ρ usually varies much less than K or μ, so the velocity is mostly "controlled" by these two parameters. The elastic moduli The P wave modulus, M, is defined so that M = K + (4/3)μ and thereby v_p = √(M/ρ). Typical values for P wave velocity in earthquakes are in the range 5 to 8 km/s. The precise speed varies according to the region of the Earth's interior, from less than 6 km/s in the Earth's crust to 13.5 km/s in the lower mantle, and 11 km/s through the inner core.
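The velocity relation v_p = √((K + 4μ/3)/ρ) can be sketched numerically. The moduli and density below are rough, illustrative crustal values, not measured data:

```python
# Sketch: P-wave speed from bulk modulus K, shear modulus mu, and
# density rho, via v_p = sqrt((K + 4*mu/3) / rho).
# The numbers below are rough, illustrative crustal values.
import math

def p_wave_velocity(k, mu, rho):
    """Return v_p in m/s, given moduli in Pa and density in kg/m^3."""
    return math.sqrt((k + 4 * mu / 3) / rho)

k = 50e9      # bulk modulus, Pa (illustrative)
mu = 30e9     # shear modulus, Pa (illustrative)
rho = 2700.0  # density, kg/m^3 (illustrative)

print(round(p_wave_velocity(k, mu, rho)))  # 5774 m/s, in the crustal range
```

The same value follows from the Lamé-parameter form √((λ + 2μ)/ρ), since λ = K − 2μ/3.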
Geologist Francis Birch discovered a relationship between the velocity of P waves and the density of the material the waves are traveling in: v_p = a(M̄) + b·ρ, which later became known as Birch's law. (The symbol a(M̄) is an empirically tabulated function of the mean atomic mass M̄, and b is a constant.) See also Earthquake warning system Lamb waves Love wave S wave Surface wave References External links Animation of a P wave P-wave velocity calculator Purdue's catalog of animated illustrations of seismic waves Animations illustrating simple wave propagation concepts by Jeffrey S. Barker Bayesian Networks for Earthquake Magnitude Classification in a Early Warning System Waves Fluid dynamics Seismology measurement Seismology
P wave
[ "Physics", "Chemistry", "Engineering" ]
896
[ "Physical phenomena", "Chemical engineering", "Waves", "Motion (physics)", "Piping", "Fluid dynamics" ]
1,343,980
https://en.wikipedia.org/wiki/Deontic%20logic
Deontic logic is the field of philosophical logic that is concerned with obligation, permission, and related concepts. Alternatively, a deontic logic is a formal system that attempts to capture the essential logical features of these concepts. It can be used to formalize imperative logic, or directive modality in natural languages. Typically, a deontic logic uses OA to mean it is obligatory that A (or it ought to be (the case) that A), and PA to mean it is permitted (or permissible) that A, which is defined as PA ≡ ¬O¬A. In natural language, the statement "You may go to the zoo OR the park" should be understood as Pz ∧ Pp (where z is going to the zoo and p going to the park) instead of P(z ∨ p), as both options are permitted by the statement. When there are multiple agents involved in the domain of discourse, the deontic modal operator can be specified to each agent to express their individual obligations and permissions. For example, by using a subscript O_i for agent i, O_i A means that "It is an obligation for agent i (to bring it about/make it happen) that A". Note that A could be stated as an action by another agent; One example is "It is an obligation for Adam that Bob doesn't crash the car", which would be represented as O_{Adam} B, where B="Bob doesn't crash the car". Etymology The term deontic is derived from the Ancient Greek δέον (gen.: δέοντος), meaning "that which is binding or proper." Standard deontic logic In Georg Henrik von Wright's first system, obligatoriness and permissibility were treated as features of acts. Soon after this, it was found that a deontic logic of propositions could be given a simple and elegant Kripke-style semantics, and von Wright himself joined this movement. The deontic logic so specified came to be known as "standard deontic logic," often referred to as SDL, KD, or simply D. It can be axiomatized by adding the following axioms to a standard axiomatization of classical propositional logic: (N) if A is a theorem, then so is OA; (K) O(A → B) → (OA → OB); (D) OA → PA. In English, these axioms say, respectively: If A is a tautology, then it ought to be that A (necessitation rule N). In other words, contradictions are not permitted.
If it ought to be that A implies B, then if it ought to be that A, it ought to be that B (modal axiom K). If it ought to be that A, then it is permitted that A (modal axiom D). In other words, if it's not permitted that A, then it's not obligatory that A. FA, meaning it is forbidden that A, can be defined (equivalently) as O¬A or ¬PA. There are two main extensions of SDL that are usually considered. The first results by adding an alethic modal operator □ in order to express the Kantian claim that "ought implies can": OA → ◊A, where ◊A ≡ ¬□¬A. It is generally assumed that □ is at least a KT operator, but most commonly it is taken to be an S5 operator. In practical situations, obligations are usually assigned in anticipation of future events, in which case alethic possibilities can be hard to judge. Therefore, obligation assignments may be performed under the assumption of different conditions on different branches of timelines in the future, and past obligation assignments may be updated due to unforeseen developments that happened along the timeline. The other main extension results by adding a "conditional obligation" operator O(A/B) read "It is obligatory that A given (or conditional on) B". Motivation for a conditional operator is given by considering the following ("Good Samaritan") case. It seems true that the starving and poor ought to be fed. But that the starving and poor are fed implies that there are starving and poor. By basic principles of SDL we can infer that there ought to be starving and poor! The argument is due to the basic K axiom of SDL together with the following principle, valid in any normal modal logic: if A → B is a theorem, then so is OA → OB. If we introduce an intensional conditional operator, then we can say that the starving ought to be fed only on the condition that there are in fact starving: in symbols O(A/B). But then the following argument fails on the usual (e.g. Lewis 73) semantics for conditionals: from O(A/B) and that A implies B, infer OB.
Indeed, one might define the unary operator O in terms of the binary conditional one O(A/B) as OA ≡ O(A/⊤), where ⊤ stands for an arbitrary tautology of the underlying logic (which, in the case of SDL, is classical). Semantics of standard deontic logic The accessibility relation R between possible worlds is interpreted as an acceptability relation: w₂ is an acceptable world (viz. R(w₁, w₂)) if and only if all the obligations in w₁ are fulfilled in w₂ (viz., whenever OA holds at w₁, A holds at w₂). Anderson's deontic logic Alan R. Anderson (1959) shows how to define O in terms of the alethic operator □ and a deontic constant (i.e. 0-ary modal operator) s standing for some sanction (i.e. bad thing, prohibition, etc.): OA ≡ □(¬A → s). Intuitively, the right side of the biconditional says that A's failing to hold necessarily (or strictly) implies a sanction. In addition to the usual modal axioms (necessitation rule N and distribution axiom K) for the alethic operator □, Anderson's deontic logic only requires one additional axiom for the deontic constant s: ◊¬s, which means that it is alethically possible to fulfill all obligations and avoid the sanction. This version of Anderson's deontic logic is equivalent to SDL. However, when modal axiom T is included for the alethic operator (□A → A), it can be proved in Anderson's deontic logic that O(OA → A), which is not included in SDL. Anderson's deontic logic inevitably couples the deontic operator O with the alethic operator □, which can be problematic in certain cases. Dyadic deontic logic An important problem of deontic logic is that of how to properly represent conditional obligations, e.g. If you smoke (s), then you ought to use an ashtray (a). It is not clear that either of the following representations is adequate: O(s → a) or s → O(a). Under the first representation it is vacuously true that if you commit a forbidden act, then you ought to commit any other act, regardless of whether that second act was obligatory, permitted or forbidden (Von Wright 1956, cited in Aqvist 1994).
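The Kripke-style acceptability semantics for standard deontic logic can be made concrete with a toy model checker. The worlds, relation, and valuation below are illustrative choices, not from any source: OA holds at a world when A holds at every acceptable world, PA when A holds at some acceptable world, and a serial acceptability relation validates axiom D, OA → PA.

```python
# Toy Kripke model for standard deontic logic (illustrative model).
# R is the acceptability relation; it is serial: every world can see
# at least one acceptable world. val maps worlds to the atomic
# propositions true there.
worlds = {"w1", "w2", "w3"}
R = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": {"w2"}}
val = {"w1": set(), "w2": {"a"}, "w3": {"a", "b"}}

def O(prop, w):
    """OA: A holds in every world acceptable from w."""
    return all(prop in val[v] for v in R[w])

def P(prop, w):
    """PA: A holds in some world acceptable from w (PA = not O not-A)."""
    return any(prop in val[v] for v in R[w])

print(O("a", "w1"))  # True: "a" holds in both worlds acceptable from w1
print(O("b", "w1"))  # False: "b" fails at w2
# Seriality of R validates axiom D: OA -> PA at every world.
print(all((not O(p, w)) or P(p, w) for w in worlds for p in ("a", "b")))  # True
```

Dropping seriality (giving some world an empty set of acceptable worlds) would make every OA vacuously true there while every PA is false, which is exactly the situation axiom D rules out.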
Under the second representation, we are vulnerable to the gentle murder paradox, where the plausible statements (1) if you murder, you ought to murder gently, (2) you do commit murder, and (3) to murder gently you must murder imply the less plausible statement: you ought to murder. Others argue that must in the phrase to murder gently you must murder is a mistranslation from the ambiguous English word (meaning either implies or ought). Interpreting must as implies does not allow one to conclude you ought to murder but only a repetition of the given you murder. Misinterpreting must as ought results in a perverse axiom, not a perverse logic. With use of negations one can easily check if the ambiguous word was mistranslated by considering which of the following two English statements is equivalent with the statement to murder gently you must murder: is it equivalent to if you murder gently it is forbidden not to murder or if you murder gently it is impossible not to murder? Some deontic logicians have responded to this problem by developing dyadic deontic logics, which contain binary deontic operators: O(A/B) means it is obligatory that A, given B; P(A/B) means it is permissible that A, given B. (The notation is modeled on that used to represent conditional probability.) Dyadic deontic logic escapes some of the problems of standard (unary) deontic logic, but it is subject to some problems of its own. Other variations Many other varieties of deontic logic have been developed, including non-monotonic deontic logics, paraconsistent deontic logics, dynamic deontic logics, and hyperintensional deontic logics. History Early deontic logic Philosophers from the Indian Mimamsa school to those of Ancient Greece have remarked on the formal logical relations of deontic concepts and philosophers from the late Middle Ages compared deontic concepts with alethic ones.
In his Elementa juris naturalis (written between 1669 and 1671), Gottfried Wilhelm Leibniz notes that the logical relations between the licitum (permitted), the illicitum (prohibited), the debitum (obligatory), and the indifferens (facultative) are equivalent to those between the possibile, the impossibile, the necessarium, and the contingens respectively. Mally's first deontic logic and von Wright's first "plausible" deontic logic Ernst Mally, a pupil of Alexius Meinong, was the first to propose a formal system of deontic logic in his Grundgesetze des Sollens (1926) and he founded it on the syntax of Whitehead's and Russell's propositional calculus. Mally's deontic vocabulary consisted of the logical constants u and ∩, the unary connective !, and the binary connectives f and ∞. Mally read !A as "A ought to be the case". He read A f B as "A requires B". He read A ∞ B as "A and B require each other." He read u as "the unconditionally obligatory". He read ∩ as "the unconditionally forbidden". Mally defined , , and as follows: Def. Def. Def. Mally proposed five informal principles: (i) If A requires B and if B requires C, then A requires C. (ii) If A requires B and if A requires C, then A requires B and C. (iii) A requires B if and only if it is obligatory that if A then B. (iv) The unconditionally obligatory is obligatory. (v) The unconditionally obligatory does not require its own negation. He formalized these principles and took them as his axioms: I. II. III. IV. V. From these axioms Mally deduced 35 theorems, many of which he rightly considered strange. Karl Menger showed that !A ↔ A is a theorem and thus that the introduction of the ! sign is irrelevant and that A ought to be the case if A is the case. After Menger, philosophers no longer considered Mally's system viable. Gert Lokhorst lists Mally's 35 theorems and gives a proof for Menger's theorem at the Stanford Encyclopedia of Philosophy under Mally's Deontic Logic. The first plausible system of deontic logic was proposed by G. H.
von Wright in his paper Deontic Logic in the philosophical journal Mind in 1951. (Von Wright was also the first to use the term "deontic" in English to refer to this kind of logic although Mally published the German paper Deontik in 1926.) Since the publication of von Wright's seminal paper, many philosophers and computer scientists have investigated and developed systems of deontic logic. Nevertheless, to this day deontic logic remains one of the most controversial and least agreed-upon areas of logic. G. H. von Wright did not base his 1951 deontic logic on the syntax of the propositional calculus as Mally had done, but was instead influenced by alethic modal logics, which Mally had not benefited from. In 1964, von Wright published A New System of Deontic Logic, which was a return to the syntax of the propositional calculus and thus a significant return to Mally's system. (For more on von Wright's departure from and return to the syntax of the propositional calculus, see Deontic Logic: A Personal View and A New System of Deontic Logic, both by Georg Henrik von Wright.) G. H. von Wright's adoption of the modal logic of possibility and necessity for the purposes of normative reasoning was a return to Leibniz. Although von Wright's system represented a significant improvement over Mally's, it raised a number of problems of its own. For example, Ross's paradox applies to von Wright's deontic logic, allowing us to infer from "It is obligatory that the letter is mailed" to "It is obligatory that either the letter is mailed or the letter is burned", which seems to imply it is permissible that the letter is burned. The Good Samaritan paradox also applies to his system, allowing us to infer from "It is obligatory to nurse the man who has been robbed" that "It is obligatory that the man has been robbed". Another major source of puzzlement is Chisholm's paradox, named after American philosopher and logician Roderick Chisholm. 
There is no formalisation in von Wright's system of the following claims that allows them to be both jointly satisfiable and logically independent: It ought to be that Jones goes (to the assistance of his neighbors). It ought to be that if Jones goes, then he tells them he is coming. If Jones doesn't go, then he ought not tell them he is coming. Jones doesn't go. Several extensions or revisions of Standard Deontic Logic have been proposed over the years, with a view to solving these and other puzzles and paradoxes (such as the Gentle Murderer and Free choice permission). Jørgensen's dilemma Deontic logic faces Jørgensen's dilemma. This problem is best seen as a trilemma. The following three claims are incompatible: Logical inference requires that the elements (premises and conclusions) have truth-values. Normative statements do not have truth-values. There are logical inferences between normative statements. Responses to this problem involve rejecting one of the three premises. Input/output logics reject the first premise. They provide an inference mechanism on elements without presupposing that these elements have truth-values. Alternatively, one can deny the second premise. One way to do this is to distinguish between the norm itself and a proposition about the norm. According to this response, only the proposition about the norm (as is the case for Standard Deontic Logic) has a truth-value. For example, it may be hard to assign a truth-value to the imperative "Take all the books off the table!", but O("Take all the books off the table"), which means "It is obligatory to take all the books off the table", can be assigned a truth-value, because it is in the indicative mood. Finally, one can deny the third premise. But this is to deny that there is a logic of norms worth investigating. See also Deontological ethics Free choice inference Moral reasoning Norm (philosophy) Notes Bibliography Lennart Åqvist, 1994, "Deontic Logic" in D. Gabbay and F.
Guenthner, ed., Handbook of Philosophical Logic: Volume II Extensions of Classical Logic, Dordrecht: Kluwer. Dov Gabbay, John Horty, Xavier Parent et al. (eds.)2013, Handbook of Deontic Logic and Normative Systems, London: College Publications, 2013. Hilpinen, Risto, 2001, "Deontic Logic," in Goble, Lou, ed., The Blackwell Guide to Philosophical Logic. Oxford: Blackwell. External links Contrary-to-Duty Paradox, Internet Encyclopedia of Philosophy. Modal logic Philosophical logic Deontic logic
Deontic logic
[ "Mathematics" ]
3,277
[ "Mathematical logic", "Modal logic" ]
1,344,164
https://en.wikipedia.org/wiki/Data-flow%20diagram
A data-flow diagram is a way of representing a flow of data through a process or a system (usually an information system). The DFD also provides information about the outputs and inputs of each entity and the process itself. A data-flow diagram has no control flow: there are no decision rules and no loops. Specific operations based on the data can be represented by a flowchart. There are several notations for displaying data-flow diagrams. One common notation was described in 1979 by Tom DeMarco as part of structured analysis. For each data flow, at least one of the endpoints (source and/or destination) must exist in a process. The refined representation of a process can be done in another data-flow diagram, which subdivides this process into sub-processes. The data-flow diagram is a tool that is part of structured analysis and data modeling. When using UML, the activity diagram typically takes over the role of the data-flow diagram. A special form of data-flow plan is a site-oriented data-flow plan. Data-flow diagrams can be regarded as inverted Petri nets, because places in such networks correspond to the semantics of data memories. Analogously, the semantics of transitions from Petri nets and of data flows and functions from data-flow diagrams should be considered equivalent. History The DFD notation draws on graph theory, originally used in operational research to model workflow in organizations, and in computer science to model the flow of inputs and outputs across computations. DFD originated from the structured analysis and design technique methodology in the middle of the 1970s. It was first proposed by Larry Constantine, and popularized by Edward Yourdon, Tom DeMarco, Chris Gane and Trish Sarson, who enriched the diagramming technique with different notations, data dictionary practices and guidance for the hierarchical decomposition of processes.
The primary aim of data-flow diagrams in the context of structured design was to build complex modular systems, rationalizing the interdependencies across different modules. Data-flow diagrams (DFD) quickly became a popular way to visualize the major steps and data involved in software-system processes. DFDs were usually used to show data flow in a computer system, although they could in principle also be applied to business process modeling. DFDs were useful to document the major data flows or to explore a new high-level design in terms of data flow. DFD components DFD consists of processes, flows, warehouses, and terminators. There are several ways to view these DFD components. Process The process (function, transformation) is part of a system that transforms inputs to outputs. The symbol of a process is a circle, an oval, a rectangle or a rectangle with rounded corners (according to the type of notation). The process is named in one word, a short sentence, or a phrase that clearly expresses its essence. Data flow Data flow (flow, dataflow) shows the transfer of information (sometimes also material) from one part of the system to another. The symbol of the flow is the arrow. The flow should have a name that determines what information (or what material) is being moved. Exceptions are flows where it is clear what information is transferred through the entities that are linked to these flows. Material shifts are modeled in systems that are not merely informative. A flow should only transmit one type of information (material). The arrow shows the flow direction (it can also be bi-directional if the information to/from the entity is logically dependent, e.g. a question and its answer). Flows link processes, warehouses and terminators. Warehouse The warehouse (datastore, data store, file, database) is used to store data for later use. The symbol of the store is two horizontal lines; other representations depend on the notation used.
The name of the warehouse is a plural noun (e.g. orders); it derives from the input and output streams of the warehouse. The warehouse does not have to be just a data file but can also be, for example, a folder with documents, a filing cabinet, or a set of optical discs. Therefore, viewing the warehouse in a DFD is independent of implementation. The flow from the warehouse usually represents reading of the data stored in the warehouse, and the flow to the warehouse usually expresses data entry or updating (sometimes also deleting data). The warehouse is represented by two parallel lines between which the memory name is located (it can be modeled as a UML buffer node). Terminator The terminator is an external entity that communicates with the system and stands outside of the system. It can be, for example, various organizations (e.g. a bank), groups of people (e.g. customers), authorities (e.g. a tax office) or a department (e.g. a human-resources department) of the same organization, which does not belong to the model system. The terminator may be another system with which the modeled system communicates. Rules for creating DFD Entity names should be comprehensible without further comments. DFD is a system created by analysts based on interviews with system users. It is intended for system developers on the one hand and for the project contractor on the other, so entity names should be adapted to the model domain and to its audience, whether amateur users or professionals. Entity names should be general (independent of, for example, the specific individuals carrying out the activity), but should clearly specify the entity. Processes should be numbered for easier mapping and referral to specific processes. The numbering is arbitrary; however, it is necessary to maintain consistency across all DFD levels (see DFD Hierarchy). DFD should be clear: the recommended maximum number of processes in one DFD is 6 to 9, and the minimum is 3 processes in one DFD.
The exception is the so-called contextual diagram, in which a single process symbolizes the model system together with all terminators with which the system communicates. DFD consistency DFD must be consistent with other models of the system: the entity relationship diagram, state-transition diagram, data dictionary, and process specification models. Each process must have its name, inputs and outputs. Each flow should have its name (for the exception, see Data flow). Each data store must have input and output flow. Input and output flows do not have to be displayed in one DFD, but they must exist in another DFD describing the same system. An exception is a warehouse standing outside the system (external storage) with which the system communicates. DFD hierarchy To make the DFD more transparent (i.e. not too many processes), multi-level DFDs can be created. DFDs that are at a higher level are less detailed (they aggregate the more detailed DFDs at lower levels). The contextual DFD is the highest in the hierarchy (see DFD Creation Rules). The so-called zero level is followed by DFD 0, starting with process numbering (e.g. process 1, process 2). In the next, so-called first level (DFD 1), the numbering continues. For example, process 1 is divided into three sub-processes, which are numbered 1.1, 1.2, and 1.3. Similarly, processes in the second level (DFD 2) are numbered, for example, 2.1.1, 2.1.2, 2.1.3, and 2.1.4. The number of levels depends on the size of the model system. DFD 0 processes may not have the same number of decomposition levels. DFD 0 contains the most important (aggregated) system functions. The lowest level should include processes that make it possible to create a process specification for roughly one A4 page. If the mini-specification would be longer, it is appropriate to create an additional level for the process, where it will be decomposed into multiple processes. For a clear overview of the entire DFD hierarchy, a vertical (cross-sectional) diagram can be created.
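The consistency rules above, such as every process and every data store needing both an input and an output flow, lend themselves to mechanical checking. A minimal sketch; the data model and function name are our own illustration, not part of any DFD tool:

```python
# Check two DFD consistency rules: each process and each data store
# must have at least one input flow and at least one output flow.
def check_dfd(processes, stores, flows):
    """flows: list of (source, destination) pairs of entity names."""
    errors = []
    for entity in list(processes) + list(stores):
        if not any(dst == entity for _, dst in flows):
            errors.append(f"{entity} has no input flow")
        if not any(src == entity for src, _ in flows):
            errors.append(f"{entity} has no output flow")
    return errors

flows = [
    ("Customer", "1 Take order"),   # terminator -> process
    ("1 Take order", "Orders"),     # process -> data store
    ("Orders", "2 Ship goods"),     # data store -> process
    ("2 Ship goods", "Customer"),   # process -> terminator
]
print(check_dfd(["1 Take order", "2 Ship goods"], ["Orders"], flows))  # → []
```

Dropping the flow from "Orders" to "2 Ship goods" would report both a missing output for the store and a missing input for the process, mirroring the rule that such flows must appear in at least one DFD describing the system.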
The warehouse is displayed at the highest level where it is first used and at every lower level as well. See also Activity diagram Business Process Model and Notation Control-flow diagram Data island Dataflow Data and information visualization Directed acyclic graph Drakon-chart Functional flow block diagram Function model IDEF0 Pipeline Structured analysis and design technique Structure chart System context diagram Value-stream mapping Workflow List of graphical methods References Bibliography Scott W. Ambler. The Object Primer 3rd Edition Agile Model Driven Development with UML 2 Schmidt, G., Methode und Techniken der Organisation. 13. Aufl., Gießen 2003 Stahlknecht, P., Hasenkamp, U.: Einführung in die Wirtschaftsinformatik. 12. Aufl., Berlin 2012 Gane, Chris; Sarson, Trish. Structured Systems Analysis: Tools and Techniques. New York: Improved Systems Technologies, 1977. . P. 373 Demarco, Tom. Structured Analysis and System Specification. New York: Yourdon Press, 1979. . P. 352. Yourdon, Edward. Structured Design: Fundamentals of a Discipline of Computer Program and Systems Design. New York: Yourdon Press, 1979. . P. 473. Page-Jones, Meilir. Practical Guide to Structured Systems Design. New York: Yourdon Press, 1988. . P. 384. Yourdon, Edward. Modern Structured Analysis. New York: Yourdon Press, 1988. . P. 688. External links Information systems Diagrams Graph drawing Systems analysis Modeling languages Data engineering
Data-flow diagram
[ "Technology", "Engineering" ]
2,021
[ "Information systems", "Data engineering", "Information technology", "Software engineering" ]
1,344,213
https://en.wikipedia.org/wiki/Nephoscope
A nephoscope is a 19th-century instrument for measuring the altitude, direction, and velocity of clouds, using transit-time measurement. This is different from a nephometer, which is an instrument used in measuring the amount of cloudiness. Description A nephoscope emits a light ray, which strikes and reflects off the base of a targeted cloud. The distance to the cloud can be estimated using the delay between sending the light ray and receiving it back: distance = (speed of light × round-trip delay) / 2. Mirror nephoscope Developed by Carl Gottfrid Fineman, this instrument consists of a magnetic compass, the case of which is covered with a black mirror, around which is movable a circular metal frame. A little window in this mirror enables the observer to see the tip of the compass needle underneath. On the surface of the mirror are engraved three concentric circles and four diameters; one of the latter passes through the middle of the little window. The mirror constitutes a compass card, its radii corresponding to the cardinal points. On the movable frame surrounding the mirror is fixed a vertical pointer graduated in millimeters, which can be moved up and down by means of a rack and pinion. The whole apparatus is mounted on a tripod stand provided with leveling screws. To make an observation, the mirror is adjusted to the horizontal with the leveling-screws, and is oriented to the meridian by moving the whole apparatus until the compass needle is seen through the window, to lie in the north-south line of the mirror (making, however, allowance for the magnetic declination). The observer stands in such a position as to bring the image of any chosen part of a cloud at the center of the mirror. The vertical pointer is also adjusted by screwing it up or down and by rotating it around the mirror until its tip is reflected in the center of the mirror. As the image of the cloud moves toward the circumference of the mirror, the observer moves his head so as to keep the tip of the pointer and the cloud image in coincidence.
The radius along which the image moves gives the direction of the cloud's movement, and the time required to pass from one circle to the next its relative speed, which may be reduced to certain arbitrary units. This instrument is, however, not very easy to use, and gives only moderately accurate measurements. Comb nephoscope Developed by Louis Besson in 1912, this apparatus consists of a horizontal bar fitted with several equidistant spikes and mounted on the upper end of a vertical pole which can be rotated on its axis. When an observation is to be made, the observer places himself in such a position that the central spike is projected on any chosen part of a cloud. Then, without altering his position, he causes the "comb" to turn by means of two cords in such a manner that the cloud is seen to follow along the line of spikes. A graduated circle, turning with the vertical pole, gives the direction of the cloud's motion. It is read with the aid of a fixed pointer. Moreover, when the apparatus is once oriented, the observer can determine the relative speed of the cloud by noting the time the latter requires to pass from one spike to the next. If the instrument stands on level ground, so that the observer's eye is always at the same height, and if the interval between two successive spikes is equal to one-tenth of their altitude above the eye-level of the observer, one only needs to multiply the time required for the cloud to pass over one interval by 10 to determine the time the cloud travels a horizontal distance equal to its altitude. Besson revived an old method, invented by Bravais for measuring the actual height of clouds. The apparatus in this case consists of a plate of glass having parallel faces, mounted on a graduated vertical circle which indicates its angle of inclination. A sheet of water, situated at a lower level, serves as a mirror to reflect the cloud. 
The water is contained in a reservoir of blackened cement surrounded by shrubbery, and is only a small fraction of an inch in depth, so that the wind may not disturb its level surface. The observer, having mounted the glass plate on the horizontal axis of a theodolite set on a window-sill some 30 or 40 feet above the ground, places his eye close to it and adjusts its inclination so that the images of a cloud reflected in the plate and in the sheet of water coincide. Then from a curve traced once for all on a sheet of plotting paper he reads off the altitude of the cloud corresponding to the observed angle on the glass plate. The curve is plotted from simple trigonometrical calculations. At the Observatory of Montsouris, the degree of cloudiness, i.e., the amount of the whole sky covered with clouds at a given moment, is determined by means of the nephometer, also devised by Besson. This consists of a convex glass mirror, a segment of a sphere, about twelve inches in diameter, in which is seen the reflection of the celestial vault divided into ten sections of equal area by means of lines engraved on the glass. The meteorologist observes through an eyepiece fixed in an invariable position with respect to the mirror, which turns freely on a vertical axis. The observer, whose own image partly obstructs sections 8, 9, and 10, notes the degree of cloudiness in the sections numbered 1 to 7. The cloudiness of each section is estimated on a scale of 0 to 10: zero meaning no clouds and 10 entirely overcast. The observer then rotates the mirror and eyepiece 180 degrees and observes the cloudiness in sections 7, 5, and 2, which represent the regions of the sky that at the first observation corresponded to sections 8, 9, and 10.
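The transit-time estimate from the Description section above reduces to a round-trip light-travel calculation; a minimal sketch (the function name is ours):

```python
# Distance to the cloud base from the round-trip delay of the light ray:
# the ray covers twice the distance at the speed of light.
C = 299_792_458.0  # speed of light in vacuum, m/s

def cloud_distance(delay_s):
    """Estimated distance (m) to the cloud base for a given round-trip delay."""
    return C * delay_s / 2

print(cloud_distance(6.67e-6))  # a ~6.67 microsecond echo puts the cloud near 1000 m
```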
References Meteorological instrumentation and equipment Russian inventions
Nephoscope
[ "Technology", "Engineering" ]
1,250
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
30,123,945
https://en.wikipedia.org/wiki/Stochastic%20volatility%20jump
In mathematical finance, the stochastic volatility jump (SVJ) model is suggested by Bates. This model fits the observed implied volatility surface well. The model is a Heston process for stochastic volatility with an added Merton log-normal jump. It assumes the following correlated processes: dS = μS dt + √ν S dZ1 + (e^(α+δε) − 1) S dq, dν = κ(ν̄ − ν) dt + η √ν dZ2, with corr(dZ1, dZ2) = ρ and prob(dq = 1) = λ dt, where S is the price of the security, μ is the constant drift (i.e. expected return), t represents time, ν is the instantaneous variance, Z1 and Z2 are standard Brownian motions with correlation ρ, q is a Poisson counter with density λ, κ is the rate of mean reversion to the long-run variance ν̄, η is the volatility of the variance, and the log jump size α + δε is normally distributed with mean α and standard deviation δ (ε being a standard normal variable). References Mathematical finance Financial models
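The SVJ dynamics, a Heston variance process plus a Merton log-normal jump, can be simulated with a basic Euler scheme. The sketch below uses our own parameter labels (kappa for the mean-reversion rate, theta for the long-run variance, eta for the volatility of variance, lam for the jump intensity, alpha and delta for the log jump size), not notation from Bates's paper:

```python
import math
import random

def simulate_svj(S0=100.0, v0=0.04, mu=0.05, kappa=2.0, theta=0.04,
                 eta=0.3, rho=-0.7, lam=0.5, alpha=-0.1, delta=0.15,
                 T=1.0, n_steps=252, seed=1):
    """Euler discretization of the SVJ model; returns one terminal price."""
    rng = random.Random(seed)
    dt = T / n_steps
    S, v = S0, v0
    for _ in range(n_steps):
        z1 = rng.gauss(0.0, 1.0)
        # Correlated Brownian increment driving the variance process
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        # Poisson counter: one jump this step with probability lam * dt
        jump = 0.0
        if rng.random() < lam * dt:
            jump = math.exp(alpha + delta * rng.gauss(0.0, 1.0)) - 1.0
        sq_v = math.sqrt(max(v, 0.0))  # truncate to keep the variance usable
        S += mu * S * dt + sq_v * S * math.sqrt(dt) * z1 + jump * S
        v += kappa * (theta - v) * dt + eta * sq_v * math.sqrt(dt) * z2
    return S

print(simulate_svj())  # one simulated terminal price
```

Averaging many such paths (over different seeds) gives Monte Carlo option prices under the model; the naive Euler step here is only a sketch and ignores the variance-discretization refinements used in production pricing.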
Stochastic volatility jump
[ "Mathematics" ]
116
[ "Applied mathematics", "Mathematical finance" ]
30,128,939
https://en.wikipedia.org/wiki/Safety%20factor%20%28plasma%20physics%29
In a toroidal fusion power reactor, the magnetic fields confining the plasma are formed in a helical shape, winding around the interior of the reactor. The safety factor, labeled q or q(r), is the ratio of the times a particular magnetic field line travels around a toroidal confinement area's "long way" (toroidally) to the "short way" (poloidally). The term "safety" refers to the resulting stability of the plasma; plasmas that rotate around the torus poloidally about the same number of times as toroidally are inherently less susceptible to certain instabilities. The term is most commonly used when referring to tokamak devices. Although the same considerations apply in stellarators, by convention the inverse value is used, the rotational transform, or i. The concept was first developed by Martin David Kruskal and Vitaly Shafranov, who noticed that the plasma in pinch effect reactors would be stable if q was larger than 1. Macroscopically, this implies that the wavelength of the potential instability is longer than the reactor. This condition is known as the Kruskal–Shafranov limit. Background The key concept in magnetic confinement fusion is that ions and electrons in a plasma will rotate around magnetic lines of force. A simple way to confine a plasma would be to use a solenoid, a series of circular magnets mounted along a cylinder that generates uniform lines of force running down the long axis of the cylinder. A plasma generated in the center of the cylinder would be confined to run along the lines down the inside of the tube, keeping it away from the walls. However, it would be free to move along the axis and out the ends of the cylinder. One can close the ends by bending the solenoid around into a circle, forming a torus (a ring or donut). In this case, the particles will still be confined to the middle of the cylinder, and even if they move along it they would never exit the ends - they would circle the apparatus endlessly. 
However, Fermi noted a problem with this arrangement: consider a series of circular magnets with the toroidal confinement area threaded through their centers; the magnets will be closer together on the inside of the ring, with a stronger field. Particles in such a system will drift up or down across the torus. The solution to this problem is to add a secondary magnetic field at right angles to the first. The two magnetic fields will mix to produce a new combined field that is helical, like the stripes on a barber pole. A particle orbiting such a field line will find itself near the outside of the confinement area at some times, and near the inside at others. Although a test particle would always be drifting up (or down) compared to the field, since the field is rotating, that drift will, compared to the confinement chamber, be up or down, in or out, depending on its location along the cylinder. The net effect of the drift over a period of several orbits along the long axis of the reactor nearly adds up to zero. Rotational transform The effect of the helical field is to bend the path of a particle so it describes a loop around the cross section of the containment cylinder. At any given point in its orbit around the long axis of the toroid, the particle will be moving at an angle, θ. In the simple case, when the particle has completed one orbit of the reactor's major axis and returned to its original location, the fields will have made it complete one orbit of the minor axis as well. In this case the rotational transform is 1. In the more typical case, the fields do not "line up" this way, and the particle will not return to exactly the same location. In this case the rotational transform is calculated thus: i = (R Bθ) / (r Bφ), where R is the major radius, r the minor radius, Bθ the poloidal field strength, and Bφ the toroidal field. As the fields typically vary with their location within the cylinder, i varies with location on the minor radius, and is expressed i(r).
Safety factor In the case of an axisymmetric system, which was common in earlier fusion devices, it is more common to use the safety factor, which is simply the inverse of the rotational transform: q = 1/i = (r Bφ) / (R Bθ). The safety factor is essentially a measure of the "windiness" of the magnetic fields in a reactor. If the lines are not closed, the safety factor can be expressed as the pitch of the field: q = dφ/dθ, the number of toroidal turns the field line makes per poloidal turn. As the fields vary across the minor axis, q also varies and is often expressed as q(r). On the inside of the cylinder on a typical tokamak it converges on 1, while at the outside it is nearer 6 to 8. Kruskal–Shafranov limit Toroidal arrangements are a major class of magnetic fusion energy reactor designs. These are subject to a number of inherent instabilities that cause the plasma to exit the confinement area and hit the walls of the reactor on the order of milliseconds, far too rapidly to be used for energy generation. Among these is the kink instability, which is caused by small variations in the plasma shape. Areas where the plasma is slightly further from the centerline will experience a force outwards, causing a growing bulge that will eventually reach the reactor wall. These instabilities have a natural pattern based on the rotational transform. This leads to a characteristic wavelength of the kinks, which is based on the ratio of the two magnetic fields that mix to form the twisted field in the plasma. If that wavelength is longer than the long way around the reactor, then they cannot form. That is, if the wavelength measured along the major axis satisfies 2πr Bφ / Bθ > 2πR, then the plasma would be stable to this major class of instabilities. Basic mathematical rearrangement, removing the 2π from both sides and moving the major radius R to the other side of the inequality, produces (r Bφ) / (R Bθ) = q > 1, which produces the simple rule of thumb that as long as the safety factor is greater than one at all points in the plasma, it will be naturally stable to this major class of instabilities.
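The rule of thumb can be evaluated directly from the field geometry. A minimal sketch in the large-aspect-ratio approximation (the function and variable names are ours):

```python
# q = (r * B_toroidal) / (R * B_poloidal); q > 1 everywhere means the
# plasma is stable against the kink modes discussed above.
def safety_factor(r, R, B_tor, B_pol):
    return (r * B_tor) / (R * B_pol)

# Example: minor radius 1 m, major radius 3 m, 5 T toroidal field,
# 1 T poloidal field
q = safety_factor(r=1.0, R=3.0, B_tor=5.0, B_pol=1.0)
print(round(q, 3), q > 1)  # → 1.667 True
```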
This principle led Soviet researchers to run their toroidal pinch machines with reduced current, leading to the stabilization that provided much higher performance in their T-3 machine in the late 1960s. In more modern machines, the plasma is pressed to the outside section of the chamber, producing a cross sectional shape like a D instead of a circle, which reduces the area with lower safety factor and allows higher currents to be driven through the plasma. See also Troyon limit Notes References Jeffrey Freidberg, "Plasma Physics and Fusion Energy", Cambridge University Press, 2007 Fusion power Plasma parameters
Safety factor (plasma physics)
[ "Physics", "Chemistry" ]
1,357
[ "Nuclear fusion", "Fusion power", "Plasma physics" ]
30,129,507
https://en.wikipedia.org/wiki/Electrochemical%20engineering
Electrochemical engineering is the branch of chemical engineering dealing with the technological applications of electrochemical phenomena, such as electrosynthesis of chemicals, electrowinning and refining of metals, flow batteries and fuel cells, surface modification by electrodeposition, electrochemical separations and corrosion. According to the IUPAC, the term electrochemical engineering is reserved for electricity-intensive processes for industrial or energy storage applications and should not be confused with applied electrochemistry, which comprises small batteries, amperometric sensors, microfluidic devices, microelectrodes, solid-state devices, voltammetry at disc electrodes, etc. More than 6% of the electricity is consumed by large-scale electrochemical operations in the US. Scope Electrochemical engineering combines the study of heterogeneous charge transfer at electrode/electrolyte interphases with the development of practical materials and processes. Fundamental considerations include electrode materials and the kinetics of redox species. The development of the technology involves the study of the electrochemical reactors, their potential and current distribution, mass transport conditions, hydrodynamics, geometry and components as well as the quantification of its overall performance in terms of reaction yield, conversion efficiency, and energy efficiency. Industrial developments require further reactor and process design, fabrication methods, testing, and product development. Electrochemical engineering considers current distribution, fluid flow, mass transfer, and the kinetics of the electro reactions to design efficient electrochemical reactors. Most electrochemical operations are performed in filter-press reactors with parallel plate electrodes or, less often, in stirred tanks with rotating cylinder electrodes. Fuel cell and flow battery stacks are types of filter-press reactors. Most of them are continuous operations. 
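Reaction yields in such reactors are bounded by Faraday's laws of electrolysis, which tie the mass converted at an electrode to the charge passed. A minimal sketch (the function name is ours; the constant is the standard Faraday constant):

```python
F = 96485.332  # Faraday constant, C/mol

def electrolyzed_mass(current_A, time_s, molar_mass_g_mol, z):
    """Mass (g) converted at 100% current efficiency: m = I*t*M / (z*F)."""
    return current_A * time_s * molar_mass_g_mol / (z * F)

# Example: copper deposition (M = 63.55 g/mol, z = 2 electrons per ion)
# at 10 A for one hour
m = electrolyzed_mass(10.0, 3600.0, 63.55, 2)
print(round(m, 2))  # → 11.86 g at 100% current efficiency
```

Dividing the mass actually recovered by this theoretical figure gives the current efficiency, one of the performance metrics mentioned above.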
History This branch of engineering emerged gradually from chemical engineering as electrical power sources became available in the mid-19th century. Michael Faraday described his laws of electrolysis in 1833, relating for the first time the amount of electrical charge and converted mass. In 1886 Charles Martin Hall developed a cheap electrochemical process for extracting aluminium from its ore in molten salts, constituting the first true large-scale electrochemical industry. Later, Hamilton Castner improved the process of aluminium manufacturing and devised the electrolysis of brine in large mercury cells for the production of chlorine and caustic soda, effectively founding the chlor-alkali industry with Karl Kellner in 1892. The next year, Paul L. Hulin patented filter-press type electrochemical cells in France. Charles Frederick Burgess developed the electrolytic refining of iron ca. 1904 and later ran a successful battery company. Burgess published one of the first texts on the field in 1920. Industrial electrochemistry followed an empirical approach during the first three decades of the 20th century. After the Second World War, interest focused on the fundamentals of electrochemical reactions. Among other developments, the potentiostat (1937) enabled such studies. A critical advance was provided by the work of Carl Wagner and Veniamin Levich in 1962, who linked the hydrodynamics of an electrolyte flowing towards a rotating disc electrode with the mass transport control of the electrochemical reaction through a rigorous mathematical treatment. The same year, Wagner described "The Scope of Electrochemical Engineering" for the first time as a separate discipline from a physicochemical perspective. During the 60s and 70s Charles W.
Tobias, who is regarded as the "father of electrochemical engineering" by the Electrochemical Society, was concerned with ionic transport by diffusion, migration, and convection, exact solutions of potential and current distribution problems, conductance in heterogeneous media, quantitative description of processes in porous electrodes. Also in the 60s, John Newman pioneered the study of many of the physicochemical laws that govern electrochemical systems, demonstrating how complex electrochemical processes could be analysed mathematically to correctly formulate and solve problems associated with batteries, fuel cells, electrolyzer, and related technologies. In Switzerland, Norbert Ibl contributed to experimental and theoretical studies of mass transfer and potential distribution in electrolyses, especially at porous electrodes. Fumio Hine carried out equivalent developments in Japan. In addition, several individuals, including Kuhn, Kreysa, Rousar, Fleischmann, Alkire, Coeuret, Pletcher, and Walsh established many other training centers and, with their colleagues, developed important experimental and theoretical methods of study. Currently, the main tasks of electrochemical engineering consist of the development of efficient, safe, and sustainable technologies for the production of chemicals, metal recovery, remediation, and decontamination technologies as well as the design of fuel cells, flow batteries, and industrial electrochemical reactors. The history of electrochemical engineering has been summarised by Wendt, Lapicque, and Stankovic. Applications Electrochemical engineering is applied in industrial water electrolysis, electrolysis, electrosynthesis, electroplating, fuel cells, flow batteries, decontamination of industrial effluents, electrorefining, electrowinning, etc. The primary example of an electrolysis-based process is the Chloralkali process for caustic soda and chlorine production. 
Other inorganic chemicals produced by electrolysis include: Ammonium persulfate Chlorine Electrowinning Fluorine Hydrogen peroxide Manganese dioxide Ozone Potassium dichromate Potassium permanganate Sodium chlorate Sodium hypochlorite Sodium persulfate Silver nitrate White lead (Basic lead carbonate) Conventions The established performance criteria, definitions, and nomenclature for electrochemical engineering can be found in Kreysa et al. and an IUPAC report. Awards Castner Medal Carl Wagner Medal Vittorio de Nora Award See also Chloralkali process Electrochemical cell Electrochemical energy conversion Electrodeionization Electrodialysis Electrofiltration Flow battery Fuel cell Galvanic cell Isotope electrochemistry Magnetoelectrochemistry Photoelectrochemistry References Bibliography T.F. Fuller, John N. Harb, Electrochemical Engineering, John Wiley & Sons, 2018. H. Wright (ed.), Electrochemical Engineering: Emerging Technologies and Applications, Willford Press, 2016. D. Stolten, B. Emonts, Fuel Cell Science and Engineering: Materials, Processes, Systems and Technology, John Wiley & Sons, 2012. D.D. Macdonald, P. Schmuki (eds.), Electrochemical Engineering, in Encyclopedia of Electrochemistry, Vol. 5, Wiley-VCH, 2007. J. Newman, K.E. Thomas-Alyea, Electrochemical Systems, 3rd ed., John Wiley & Sons, Hoboken NJ, 2004. (1st ed. 1973). V.M. Schmidt, Elektrochemische Verfahrenstechnik, Wiley-VCH, 2003. H. Pütter, Industrial Electroorganic Chemistry, in Organic Electrochemistry, 4th ed., H. Lund, O. Hammerich (eds.), Marcel Dekker, New York, 2001. F.C. Walsh, Un Primer Curso de Ingeniería Electroquímica, Editorial Club Universitario, Alicante, España, 2000. M.P. Grotheer, Electrochemical Processing, Inorganic, in Kirk-Othmer Encyclopedia of Chemical Technology, 5th ed., Vol. 9, P. 618, John Wiley & Sons, 2000. H. Wendt, G. Kreysa, Electrochemical Engineering: Science and Technology in Chemical and Other Industries, Springer, Berlin 1999. R.F. 
Savinell, Tutorials in Electrochemical Engineering - Mathematical Modeling, Pennington, The Electrochemical Society, 1999. A. Geoffrey, Electrochemical Engineering Principles, Prentice Hall, 1997. F. Goodrige, K. Scott Electrochemical Process Engineering - A Guide to the Design of Electrolytic Plant, Plenum Press, New York & London, 1995. J. Newman, R.E. White (eds.), Proceedings of the Douglas N. Bennon Memorial Symposium. Topics in Electrochemical Engineering, The Electrochemical Society, Proceedings Vol. 94-22, 1994. F. Lapicque, A. Storck, A.A. Wragg, Electrochemical Engineering and Energy, Springer, 1994. F.C. Walsh, A First Course in Electrochemical Engineering, The Electrochemical Consultancy, Romsey UK, 1993. F. Coeuret, A. Storck, Eléments de Génie Électrochimique, 2nd ed., Éditions TEC et DOC / Lavoisier, Paris, 1993. (1st ed. 1984) F. Coeuret, Introducción a la Ingeniería Electroquímica, Editorial Reverté, Barcelona, 1992. K. Scott, Electrochemical Reaction Engineering, Academic Press, London, 1991. G. Prentice, Electrochemical Engineering Principles, Prentice Hall, 1991. D. Pletcher, F.C. Walsh, Industrial Electrochemistry, 2nd ed., Chapman and Hall, London, 1990. J.D. Genders, D. Pletcher, Electrosynthesis - From Laboratory, to Pilot, to Production, The Electrosynthesis Company, New York, 1990. M.I. Ismail, Electrochemical Reactors Their Science and Technology - Part A: Fundamentals, Electrolysers, Batteries, and Fuel Cells, Elsevier, Amsterdam, 1989. T.R. Beck, Industrial Electrochemical Processes, in Techniques of Electrochemistry, E. Yeager, A.J. Salkind (eds.), Wiley, New York, 1987. E. Heitz, G. Kreysa, Principles of Electrochemical Engineering, John Wiley & Sons, 1986. I. Roušar, A. Kimla, K. Micka, Electrochemical Engineering, Elsevier, Amsterdam, 1986. T.Z. Fahidy, Principles of Electrochemical Reactor Analysis, Elsevier, Amsterdam, 1985. F. Hine, Electrode Processes and Electrochemical Engineering, Springer, Boston, 1985. R.E. 
White, (ed.), Electrochemical Cell Design, Springer, 1984. P. Horsman, B.E. Conway, S. Sarangapani (eds.), Comprehensive Treatise of Electrochemistry. Vol. 6 Electrodics: Transport, Plenum Press, New York, 1983. D. Pletcher, Industrial Electrochemistry, 1st ed., Chapman and Hall, London, 1982. J.O’M. Bockris, B.E. Conway, E. Yeager, R.E. White, (eds.) Comprehensive Treatise of Electrochemistry. Vol. 2: Electrochemical Processing, Plenum Press, New York, 1981. D.J. Pickett, Electrochemical Reactor Design, 2nd ed., Elsevier, Amsterdam, 1979. P. Gallone, Trattato di Ingegneria Elettrochimica, Tamburini, Milan, 1973. A. Kuhn, Industrial Electrochemical Processes, Elsevier, Amsterdam, 1971. C.L. Mantell, Electrochemical Engineering, 4th ed., McGraw-Hill, New York, 1960. C.L. Mantell, Industrial Electrochemistry, 2nd ed., McGraw-Hill, New York, 1940. C.F. Burgess, H.B. Pulsifer, B.B. Freud, Applied Electrochemistry and Metallurgy, American Technical Society, Chicago, 1920. A.J. Hale, The Manufacture of Chemicals by Electrolysis, Van Nostrand Co., New York, 1919. External links Working Party on Electrochemical Engineering - WPEE SCI Castner Medal on Industrial Electrochemistry Carl Wagner Medal of Excellence in Electrochemical Engineering ECS Vittorio de Nora Award IEEE H.H. Dow Memorial Student Achievement Award Electrochemistry Chemical engineering Chemical processes Hydrogen production Industrial processes Industrial gases
Electrochemical engineering
[ "Chemistry", "Engineering" ]
2,464
[ "Chemical engineering", "Electrochemical engineering", "Chemical processes", "Electrochemistry", "Industrial gases", "nan", "Electrolysis", "Chemical process engineering", "Electrical engineering" ]
28,688,869
https://en.wikipedia.org/wiki/The%20Grand%20Design%20%28book%29
The Grand Design is a popular-science book written by physicists Stephen Hawking and Leonard Mlodinow and published by Bantam Books in 2010. The book examines the history of scientific knowledge about the universe and explains eleven-dimensional M-theory. The authors of the book point out that a Unified Field Theory (a theory, based on an early model of the universe, proposed by Albert Einstein and other physicists) may not exist. It argues that invoking God is not necessary to explain the origins of the universe, and that the Big Bang is a consequence of the laws of physics alone. In response to criticism, Hawking said: "One can't prove that God doesn't exist, but science makes God unnecessary." When pressed on his own religious views by the 2010 Channel 4 documentary Genius of Britain, he clarified that he did not believe in a personal God. Published in the United States on September 7, 2010, the book became the number one bestseller on Amazon.com just a few days after publication. It was published in the United Kingdom on September 9, 2010, and became the number two bestseller on Amazon.co.uk on the same day. It topped the list of adult non-fiction books of The New York Times Non-fiction Best Seller list in September–October 2010. Synopsis The book examines the history of scientific knowledge about the universe. It starts with the Ionian Greeks, who claimed that nature works by laws, and not by the will of the gods. It later presents the work of Nicolaus Copernicus, who advocated the concept that the Earth is not located in the center of the universe. The book explains these topics in an accessible manner, drawing many examples from daily life, mythology, and history, such as the Viking myth of Sköll and Hati, the film The Matrix, and the Ptolemaic model of the universe. The authors then describe the theory of quantum mechanics using, as an example, the probable movement of an electron around a room. 
The presentation has been described as easy to understand by some reviewers, but also as sometimes "impenetrable" by others. The central claim of the book is that the theory of quantum mechanics and the theory of relativity together help us understand how universes could have formed out of nothing. The authors write: The authors explain, in a manner consistent with M-theory, that as the Earth is only one of several planets in our Solar System, and as our Milky Way galaxy is only one of many galaxies, the same may apply to our universe itself: that is, our universe may be one of a huge number of universes. The book concludes with the statement that only some universes of the multiple universes (or multiverse) support life forms and that we are located in one of those universes. The laws of nature that are required for life forms to exist appear in some universes by pure chance, Hawking and Mlodinow explain (see Anthropic principle). Reactions Positive reactions Evolutionary biologist and advocate for atheism Richard Dawkins welcomed Hawking's position and said that "Darwinism kicked God out of biology but physics remained more uncertain. Hawking is now administering the coup de grace." Theoretical physicist Sean M. Carroll, writing in The Wall Street Journal, described the book as speculative but ambitious: "The important lesson of The Grand Design is not so much the particular theory being advocated but the sense that science may be able to answer the deep 'Why?' questions that are part of fundamental human curiosity." Cosmologist Lawrence Krauss, in his article "Our Spontaneous Universe", wrote that "there are remarkable, testable arguments that provide firmer empirical evidence of the possibility that our universe arose from nothing. ... If our universe arose spontaneously from nothing at all, one might predict that its total energy should be zero. 
And when we measure the total energy of the universe, which could have been anything, the answer turns out to be the only one consistent with this possibility. Coincidence? Maybe. But data like this coming in from our revolutionary new tools promise to turn much of what is now metaphysics into physics. Whether God survives is anyone's guess." James Trefil, a professor of physics at George Mason University, said in his Washington Post review: "I've waited a long time for this book. It gets into the deepest questions of modern cosmology without a single equation. The reader will be able to get through it without bogging down in a lot of technical detail and will, I hope, have his or her appetite whetted for books with a deeper technical content. And who knows? Maybe in the end the whole multiverse idea will actually turn out to be right!" Canada Press journalist Carl Hartman said: "Cosmologists, the people who study the entire cosmos, will want to read British physicist and mathematician Stephen Hawking's new book. The Grand Design may sharpen appetites for answers to questions like 'Why is there something rather than nothing?' and 'Why do we exist?' – questions that have troubled thinking people at least as far back as the ancient Greeks." Writing in the Los Angeles Times, Michael Moorcock praised the authors: "their arguments do indeed bring us closer to seeing our world, universe and multiverse in terms that a previous generation might easily have dismissed as supernatural. This succinct, easily digested book could perhaps do with fewer dry, academic groaners, but Hawking and Mlodinow pack in a wealth of ideas and leave us with a clearer understanding of modern physics in all its invigorating complexity." German daily Süddeutsche Zeitung devoted the whole opening page of its culture section to The Grand Design. 
A CERN physicist and novelist reviews the history of the theory of everything from the 18th century to M-theory, and takes Hawking's conclusion on God's existence as a very good joke which he obviously welcomes very much. Best-selling author Deepak Chopra in an interview with CNN said: "We have to congratulate Leonard and Stephen for finally, finally contributing to the climatic overthrow of the superstition of materialism. Because everything that we call matter comes from this domain which is invisible, which is beyond space and time. All religious experience is based on just three basic fundamental ideas...And nothing in the book invalidates any of these three ideas". Critical reactions John Lennox, Professor of Mathematics at Oxford University, declared "nonsense remains nonsense, even when talked by world-famous scientists." He points to several self-contradictory elements within the central claim of the text, as well as many logical errors made throughout the book which claims "philosophy is dead." Roger Penrose in the FT doubts that adequate understandings can come from this approach, and points out that "unlike quantum mechanics, M-theory enjoys no observational support whatsoever". Joe Silk in Science suggests that "Some humbleness would be welcome here...A century or two hence...I expect that M-theory will seem as naïve to cosmologists of the future as we now find Pythagoras's cosmology of the harmony of the spheres". Gerald Schroeder in "The Big Bang Creation: God or the Laws of Nature" explains that "The Grand Design breaks the news, bitter to some, that … to create a universe from absolute nothing God is not necessary. All that is needed are the laws of nature. … [That is,] there can have been a big bang creation without the help of God, provided the laws of nature pre-date the universe. Our concept of time begins with the creation of the universe. 
Therefore if the laws of nature created the universe, these laws must have existed prior to time; that is the laws of nature would be outside of time. What we have then is totally non-physical laws, outside of time, creating a universe. Now that description might sound somewhat familiar. Very much like the biblical concept of God: not physical, outside of time, able to create a universe." Dwight Garner in The New York Times was critical of the book, saying: "The real news about The Grand Design is how disappointingly tinny and inelegant it is. The spare and earnest voice that Mr. Hawking employed with such appeal in A Brief History of Time has been replaced here by one that is alternately condescending, as if he were Mr. Rogers explaining rain clouds to toddlers, and impenetrable." Craig Callender, in the New Scientist, was not convinced by the theory promoted in the book: "M-theory ... is far from complete. But that doesn't stop the authors from asserting that it explains the mysteries of existence ... In the absence of theory, though, this is nothing more than a hunch doomed – until we start watching universes come into being – to remain untested. The lesson isn't that we face a dilemma between God and the multiverse, but that we shouldn't go off the rails at the first sign of coincidences." Paul Davies in The Guardian wrote: "The multiverse comes with a lot of baggage, such as an overarching space and time to host all those bangs, a universe-generating mechanism to trigger them, physical fields to populate the universes with material stuff, and a selection of forces to make things happen. Cosmologists embrace these features by envisaging sweeping "meta-laws" that pervade the multiverse and spawn specific bylaws on a universe-by-universe basis. The meta-laws themselves remain unexplained – eternal, immutable transcendent entities that just happen to exist and must simply be accepted as given. 
In that respect the meta-laws have a similar status to an unexplained transcendent god." Davies concludes "there is no compelling need for a supernatural being or prime mover to start the universe off. But when it comes to the laws that explain the big bang, we are in murkier waters." Dr. Marcelo Gleiser, in his article "Hawking And God: An Intimate Relationship", stated that "contemplating a final theory is inconsistent with the very essence of physics, an empirical science based on the gradual collection of data. Because we don’t have instruments capable of measuring all of Nature, we cannot ever be certain that we have a final theory. There’ll always be room for surprises, as the history of physics has shown again and again. In fact, I find it quite pretentious to imagine that we humans can achieve such a thing. ... Maybe Hawking should leave God alone." Physicist Peter Woit, of Columbia University, has criticized the book: "One thing that is sure to generate sales for a book of this kind is to somehow drag in religion. The book's rather conventional claim that "God is unnecessary" for explaining physics and early universe cosmology has provided a lot of publicity for the book. I'm in favor of naturalism and leaving God out of physics as much as the next person, but if you're the sort who wants to go to battle in the science/religion wars, why you would choose to take up such a dubious weapon as M-theory mystifies me." In Scientific American, John Horgan is not sympathetic to the book: "M-theory, theorists now realize, comes in an almost infinite number of versions, which "predict" an almost infinite number of possible universes. Critics call this the "Alice's Restaurant problem," a reference to the refrain of the old Arlo Guthrie folk song: "You can get anything you want at Alice's Restaurant." Of course, a theory that predicts everything really doesn't predict anything... 
The anthropic principle has always struck me as so dumb that I can't understand why anyone takes it seriously. It's cosmology's version of creationism. ... The physicist Tony Rothman, with whom I worked at Scientific American in the 1990s, liked to say that the anthropic principle in any form is completely ridiculous and hence should be called CRAP. ... Hawking is telling us that unconfirmable M-theory plus the anthropic tautology represents the end of that quest. If we believe him, the joke's on us." The Economist is also critical of the book: Hawking and Mlodinow "...say that these surprising ideas have passed every experimental test to which they have been put, but that is misleading in a way that is unfortunately typical of the authors. It is the bare bones of quantum mechanics that have proved to be consistent with what is presently known of the subatomic world. The authors' interpretations and extrapolations of it have not been subjected to any decisive tests, and it is not clear that they ever could be. Once upon a time it was the province of philosophy to propose ambitious and outlandish theories in advance of any concrete evidence for them. Perhaps science, as Professor Hawking and Mr Mlodinow practice it in their airier moments, has indeed changed places with philosophy, though probably not quite in the way that they think." The Bishop of Swindon, Dr. Lee Rayfield, said, "Science can never prove the non-existence of God, just as it can never prove the existence of God." Anglican priest, Cambridge theologian and psychologist Rev. Dr. Fraser N. Watts said "a creator God provides a reasonable and credible explanation of why there is a universe, and ... it is somewhat more likely that there is a God than that there is not. That view is not undermined by what Hawking has said." 
British scientist Baroness Greenfield also criticized the book in an interview with BBC Radio: "Of course they can make whatever comments they like, but when they assume, rather in a Taliban-like way, that they have all the answers, then I do feel uncomfortable." She later claimed her Taliban remarks were "not intended to be personal", saying she "admired Stephen Hawking greatly" and "had no wish to compare him in particular to the Taliban". Denis Alexander responded to Stephen Hawking's The Grand Design by stating that "the 'god' that Stephen Hawking is trying to debunk is not the creator God of the Abrahamic faiths who really is the ultimate explanation for why there is something rather than nothing", adding that "Hawking's god is a god-of-the-gaps used to plug present gaps in our scientific knowledge." "Science provides us with a wonderful narrative as to how [existence] may happen, but theology addresses the meaning of the narrative". Mathematician and philosopher of science Wolfgang Smith wrote a chapter-by-chapter summary and critique of the book, first published in Sophia: The Journal of Traditional Studies, and subsequently published as "From Physics to Science Fiction: Response to Stephen Hawking" in the 2012 edition of his collection of essays, Science & Myth. See also A Question and Answer Guide to Astronomy References Astronomy books Books by Stephen Hawking Books critical of religion Cosmology books Popular physics books 2010 non-fiction books Bantam Books books
The Grand Design (book)
[ "Astronomy" ]
3,148
[ "Astronomy books", "Works about astronomy" ]
28,696,519
https://en.wikipedia.org/wiki/Arizona%20Accelerator%20Mass%20Spectrometry%20Laboratory
Arizona Accelerator Mass Spectrometry Laboratory focuses on the study of cosmogenic isotopes, and in particular the study of radiocarbon, or Carbon-14. As a laboratory, part of its aim is to function as a research center, training center, and general community resource. Its stated mission is conducting original research in cosmogenic isotopes. The AMS laboratory was established in 1981 at the University of Arizona. This laboratory is used primarily to provide radiocarbon measurements. Hence, coverage in research areas is multidisciplinary. Objects dated range from those of general interest to those of scientific interest. For example, dating of the Dead Sea Scrolls was accomplished using this method. Tandem accelerators Two tandem accelerators at this facility operate at terminal voltages of up to 3 million volts (3 MV). The function of these accelerators is to measure scarce (cosmogenic) isotopes such as aluminium-26, beryllium-10, iodine-129 and the aforementioned carbon-14. In other words, the accelerators are used for measuring rare isotopes that are produced within earth materials, such as rocks or soil, in Earth's atmosphere, and in extraterrestrial objects such as meteorites. These are cosmogenic isotopes, produced from interaction with cosmic rays. Scope Established in 1981, this facility is a National Science Foundation research facility. It is operated by both the Physics Department and the Geosciences Department of the University of Arizona. It is tasked with both scientific inquiry and education. Topical coverage of investigations includes archaeology, art history, forensic science, radioactive tracer studies, radiometric dating, the carbon cycle, cosmic ray physics, meteorites, geology, paleoclimate, faunal extinctions, hydrologic balance, frequency rate of forest fires, terrestrial magnetic field, solar wind, ocean sciences and instrument development. 
References External links Accelerator Mass Spectrometry Laboratory University of Arizona Accelerator mass spectrometry National Science Foundation Laboratories in the United States Science and technology in Arizona 1981 establishments in Arizona
Arizona Accelerator Mass Spectrometry Laboratory
[ "Physics" ]
413
[ "Accelerator mass spectrometry", "Mass spectrometry", "Spectrum (physical sciences)" ]
28,697,509
https://en.wikipedia.org/wiki/GIM%20mechanism
In particle physics, the Glashow–Iliopoulos–Maiani (GIM) mechanism is the mechanism by which flavour-changing neutral currents (FCNCs) are suppressed in loop diagrams. It also explains why weak interactions that change strangeness by 2 (ΔS = 2 transitions) are suppressed, while those that change strangeness by 1 (ΔS = 1 transitions) are allowed, but only in charged current interactions. It is named after physicists Sheldon Glashow, John Iliopoulos and Luciano Maiani. History The mechanism was put forth by Glashow, Iliopoulos, and Maiani in a famous 1970 paper; at that time, only three quarks (up, down, and strange) were thought to exist. James Bjorken and Glashow had previously predicted a fourth quark, but there was little evidence for its existence. The GIM mechanism, however, required the existence of a fourth quark, and the prediction of the charm quark is usually credited to Glashow, Iliopoulos, and Maiani (initials "G. I. M."). Description The mechanism relies on the unitarity of the charged weak current flavor mixing matrix, which enters in the two vertices of a one-loop box diagram involving W boson exchanges. Even though Z0 boson exchanges are flavor-neutral (i.e. prohibit FCNC), the box diagram induces FCNC, but at a very small level. The smallness is set by the mass-squared difference of the different virtual quarks exchanged in the box diagram, originally the u-c quarks, on the scale of the W mass. The smallness of this quantity accounts for the suppressed induced FCNC, dictating a rare decay, illustrated in the figure. If that mass difference were ignorable, the minus sign between the two interfering box diagrams (itself a consequence of unitarity of the Cabibbo matrix) would lead to a complete cancellation, and thus a null effect. References Further reading Standard Model
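The suppression described here can be made quantitative with a standard textbook order-of-magnitude estimate (not taken from this article; O(1) factors and the loop function are omitted). Summing the up- and charm-quark box diagrams for a ΔS = 2 transition, the Cabibbo factors enter with opposite signs, leaving schematically

```latex
% Schematic GIM-suppressed \Delta S = 2 box amplitude
% (standard textbook estimate; numerical factors of order one omitted)
\mathcal{A}(\Delta S = 2) \;\propto\;
  \frac{G_F^2}{4\pi^2}\,
  \sin^2\theta_C \cos^2\theta_C\,
  \bigl(m_c^2 - m_u^2\bigr)
```

The factor $(m_c^2 - m_u^2)$ makes the cancellation explicit: for degenerate quark masses the two box diagrams cancel exactly, giving the null result described above.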
GIM mechanism
[ "Physics" ]
415
[ "Standard Model", "Particle physics" ]
27,071,123
https://en.wikipedia.org/wiki/Lam%C3%A9%27s%20stress%20ellipsoid
Lamé's stress ellipsoid is an alternative to Mohr's circle for the graphical representation of the stress state at a point. The surface of the ellipsoid represents the locus of the endpoints of all stress vectors acting on all planes passing through a given point in the continuum body. In other words, the endpoints of all stress vectors at a given point in the continuum body lie on the stress ellipsoid surface, i.e., the radius-vector from the center of the ellipsoid, located at the material point in consideration, to a point on the surface of the ellipsoid is equal to the stress vector on some plane passing through the point. In two dimensions, the surface is represented by an ellipse. Once the equation of the ellipsoid is known, the magnitude of the stress vector can then be obtained for any plane passing through that point. To determine the equation of the stress ellipsoid we consider the coordinate axes taken in the directions of the principal axes, i.e., in a principal stress space. Thus, the coordinates of the stress vector on a plane with normal unit vector $\mathbf{n} = (n_1, n_2, n_3)$ passing through a given point are represented by $x_1 = \sigma_1 n_1$, $x_2 = \sigma_2 n_2$, $x_3 = \sigma_3 n_3$. And knowing that $\mathbf{n}$ is a unit vector, $n_1^2 + n_2^2 + n_3^2 = 1$, we have $\left(\tfrac{x_1}{\sigma_1}\right)^2 + \left(\tfrac{x_2}{\sigma_2}\right)^2 + \left(\tfrac{x_3}{\sigma_3}\right)^2 = 1$, which is the equation of an ellipsoid centered at the origin of the coordinate system, with the lengths of the semiaxes of the ellipsoid equal to the magnitudes of the principal stresses, i.e. the intercepts of the ellipsoid with the principal axes are $\pm\sigma_1$, $\pm\sigma_2$ and $\pm\sigma_3$. The first stress invariant is directly proportional to the sum of the principal radii of the ellipsoid. The second stress invariant is directly proportional to the sum of the three principal areas of the ellipsoid. The three principal areas are the ellipses on each principal plane. The third stress invariant is directly proportional to the volume of the ellipsoid. If two of the three principal stresses are numerically equal the stress ellipsoid becomes an ellipsoid of revolution. Thus, two principal areas are ellipses and the third is a circle. 
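This construction is easy to check numerically: for a unit normal $\mathbf{n}$, the traction components along the principal axes are $x_i = \sigma_i n_i$, and the endpoint must satisfy $(x_1/\sigma_1)^2 + (x_2/\sigma_2)^2 + (x_3/\sigma_3)^2 = 1$. A minimal sketch, with made-up principal stresses:

```python
import math
import random

# Hypothetical principal stresses (e.g. in MPa); any non-zero values work.
SIGMA = (120.0, 80.0, 40.0)

def stress_vector(n):
    """Traction components in principal axes: x_i = sigma_i * n_i."""
    return tuple(s * c for s, c in zip(SIGMA, n))

def on_ellipsoid(x, tol=1e-9):
    """True if (x1/s1)^2 + (x2/s2)^2 + (x3/s3)^2 equals 1 within tol."""
    return abs(sum((xi / s) ** 2 for xi, s in zip(x, SIGMA)) - 1.0) < tol

# Any random unit normal should map onto the ellipsoid surface.
v = [random.gauss(0.0, 1.0) for _ in range(3)]
norm = math.sqrt(sum(c * c for c in v))
unit_n = tuple(c / norm for c in v)
assert on_ellipsoid(stress_vector(unit_n))
```

As the article notes, the check confirms only that the endpoint lies on the surface; the ellipsoid alone does not identify the plane on which the traction acts.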
If all of the principal stresses are equal and of the same sign, the stress ellipsoid becomes a sphere and any three perpendicular directions can be taken as principal axes. The stress ellipsoid by itself, however, does not indicate the plane on which the given traction vector acts. Only for the case where the stress vector lies along one of the principal directions is it possible to know the direction of the plane, as the principal stresses act perpendicular to their planes. To find the orientation of any other plane we use the stress-director surface or stress director quadric, represented by the equation $\tfrac{x_1^2}{\sigma_1} + \tfrac{x_2^2}{\sigma_2} + \tfrac{x_3^2}{\sigma_3} = \pm k^2$, where $k$ is a constant. The stress represented by a radius-vector of the stress ellipsoid acts on a plane oriented parallel to the tangent plane to the stress-director surface at the point of its intersection with the radius-vector. References Bibliography Classical mechanics Materials science Elasticity (physics) Solid mechanics Mechanics
Lamé's stress ellipsoid
[ "Physics", "Materials_science", "Engineering" ]
614
[ "Physical phenomena", "Solid mechanics", "Applied and interdisciplinary physics", "Elasticity (physics)", "Deformation (mechanics)", "Classical mechanics", "Materials science", "Mechanics", "nan", "Mechanical engineering", "Physical properties" ]
27,078,863
https://en.wikipedia.org/wiki/Dumas%20method%20of%20molecular%20weight%20determination
The Dumas method of molecular weight determination was historically a procedure used to determine the molecular weight of an unknown volatile substance. The method was designed by the French chemist Jean Baptiste André Dumas, after whom the procedure is now named. Dumas used the method to determine the vapour densities of elements (mercury, phosphorus, sulfur) and inorganic compounds. Today, modern methods such as mass spectrometry and elemental analysis are used to determine the molecular weight of a substance. Determination The procedure entailed placing a small quantity of the unknown substance into a tared vessel of known volume. The vessel is then heated to a known temperature, such as in a boiling water bath, causing the entire sample to vaporize and completely displace the air from the vessel. The vessel is then sealed, such as with a flame to melt the neck of a glass flask, dried, and re-weighed. By subtracting the tare of the vessel, the actual mass of the unknown vapor within the vessel can be calculated. Assuming the unknown compound behaves as an ideal gas, the number of moles of the unknown compound, $n$, can be determined by using the ideal gas law, $n = \frac{PV}{RT}$, where the pressure $P$ is the atmospheric pressure, $V$ is the measured volume of the vessel, $T$ is the absolute temperature of the hot bath, and $R$ is the gas constant. The molecular weight of the chemical is then simply the mass in grams of the vapor within the vessel divided by the calculated number of moles. 
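The arithmetic of the method is simple enough to sketch: from the ideal gas law, $n = PV/RT$, and the molar mass is $M = m/n$. All sample numbers below are hypothetical, chosen only for illustration.

```python
# Sketch of the Dumas calculation (ideal gas law: n = PV/RT, M = m/n).
R = 8.314  # gas constant, J/(mol*K)

def dumas_molar_mass(mass_g, volume_m3, pressure_pa, temp_k):
    """Molar mass (g/mol) of the vapour filling the vessel."""
    n = pressure_pa * volume_m3 / (R * temp_k)  # moles of vapour
    return mass_g / n

# e.g. 0.30 g of vapour in a 127 mL flask at 1 atm and 100 degrees C
molar_mass = dumas_molar_mass(0.30, 127e-6, 101325.0, 373.15)  # ~72 g/mol
```

Note that both assumptions listed below enter here: the ideal gas law supplies $n$, and the vessel volume is taken to be the same at the working temperature as when it was measured.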
Assumptions Two major assumptions are used in this method: The compound vapor behaves as an ideal gas (follows all 5 postulates of the kinetic theory of gases) Either the volume of the vessel does not vary significantly between room temperature and the working temperature, or the volume of the vessel may be accurately determined at the working temperature See also Victor Meyer apparatus Cryoscopy and ebullioscopy, two other methods for the determination of molecular weights References Further reading External links https://web.archive.org/web/20091229043650/http://chemlabs.uoregon.edu/Classes/Exton/CH228/Dumas.pdf https://web.archive.org/web/20100820010803/http://wwwchem.csustan.edu/chem1102/molwt.htm Molecular physics
Dumas method of molecular weight determination
[ "Physics", "Chemistry" ]
495
[ "Molecular physics", "Atomic, molecular, and optical physics", "nan", "Molecular physics stubs" ]
27,078,873
https://en.wikipedia.org/wiki/Dumas%20method
In analytical chemistry, the Dumas method is a method of elemental analysis for the quantitative determination of nitrogen in chemical substances based on a method first described by Jean-Baptiste Dumas in 1826. The Dumas technique has been automated and instrumentalized, so that it is capable of rapidly measuring the crude protein concentration of food samples. This automatic Dumas technique has replaced the Kjeldahl method as the standard method of analysis for nutritional labelling of protein content of foods (except in high fat content foods where the Kjeldahl method is still preferred due to fire risks). Method The method consists of combusting a sample of known mass to a temperature between 800 and 900 °C in the presence of oxygen. This leads to the release of carbon dioxide, water and nitrogen. The gases are then passed over special columns (such as potassium hydroxide aqueous solution) that absorb the carbon dioxide and water. A column containing a thermal conductivity detector at the end is then used to separate the nitrogen from any residual carbon dioxide and water and the remaining nitrogen content is measured. The instrument must first be calibrated by analyzing a material that is pure and has a known nitrogen concentration. The measured signal from the thermal conductivity detector for the unknown sample can then be converted into a nitrogen content. As with the Kjeldahl method, conversion of the concentration of nitrogen in a sample to the crude protein content is performed using conversion factors which depend on the particular amino acid sequence of the measured protein. Advantages and limitations The Dumas method has the advantages of being easy to use and fully automatable. It has been developed into a considerably faster method than the Kjeldahl method, and can take a few minutes per measurement, as compared to the hour or more for Kjeldahl. It also does not make use of toxic chemicals or catalysts. 
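The final conversion step described above (measured nitrogen multiplied by a protein-specific factor) can be sketched in a few lines. The factors below are commonly cited defaults and are assumptions here, not values taken from this article; as noted, the correct factor depends on the protein's amino acid composition.

```python
# Illustrative nitrogen-to-protein conversion (factors are assumed defaults).
CONVERSION_FACTORS = {
    "generic": 6.25,  # assumes protein is about 16% nitrogen by mass
    "wheat": 5.7,
    "milk": 6.38,
}

def crude_protein_percent(nitrogen_percent, food="generic"):
    """Crude protein content implied by a measured nitrogen content."""
    return nitrogen_percent * CONVERSION_FACTORS[food]

# e.g. a sample measuring 2.0% nitrogen -> 12.5% crude protein (generic factor)
protein = crude_protein_percent(2.0)
```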
One major disadvantage is its high initial cost, although new technology developments are reducing this drawback. Also, as with Kjeldahl, it does not give a measure of actual protein, as it registers non-protein nitrogen, and different correction factors are needed for different proteins because they have different amino acid sequences. See also Combustion analysis, a similar approach to Dumas but involving carbon, hydrogen, and nitrogen as well Bicinchoninic acid assay, a colorimetric analysis method for protein-nitrogen References Chemical tests Nitrogen Food analysis
Dumas method
[ "Chemistry" ]
485
[ "Food analysis", "Food chemistry", "Chemical tests" ]
24,155,004
https://en.wikipedia.org/wiki/IEC%2060269
In electrical engineering, IEC 60269 is a set of technical standards for low-voltage power fuses. The standard is in four volumes, which describe general requirements, fuses for industrial and commercial applications, fuses for residential applications, and fuses to protect semiconductor devices. The IEC standard unifies several national standards, thereby improving the interchangeability of fuses in international trade. All fuses of different technologies tested to meet IEC standards will have similar time-current characteristics, which simplifies design and maintenance. IEC 60269-1 – Low-voltage fuses – Part 1: General requirements IEC 60269-2 – Low-voltage fuses – Part 2: Supplementary requirements for fuses for use by authorized persons (fuses mainly for industrial application) – Examples of standardized systems of fuses A to I IEC 60269-3 – Low-voltage fuses – Part 3: Supplementary requirements for fuses for use by unskilled persons (fuses mainly for household and similar applications) – Examples of standardized systems of fuses A to F IEC 60269-4 – Low-voltage fuses – Part 4: Supplementary requirements for fuse-links for the protection of semiconductor devices IEC 60269-5 – Low-voltage fuses – Part 5: Guidance for the application of low-voltage fuses IEC 60269-6 – Low-voltage fuses – Part 6: Supplementary requirements for fuse-links for the protection of solar photovoltaic energy systems IEC 60269-7 – Low-voltage fuses – Part 7: Supplementary requirements for fuse-links for the protection of batteries and battery systems In IEC standards, the replaceable element is called a fuse link and the assembly of fuse link and fuse holder is called a fuse. North American standards call the replaceable element only the fuse. Application categories and time-current characteristics IEC 60269 unifies the electrical characteristics of fuses that are dimensionally interchangeable with fuses built to earlier British, German, French or Italian standards. 
The standard identifies application categories which classify the time-current characteristic of each type of fuse. The application category is a two-digit code. The first letter is a if the fuse is for short-circuit protection only; an associated device must provide overload protection. The first letter is g if the fuse is intended to operate even with currents as low as those that cause it to blow in one hour. These are considered general-purpose fuses for protection of wires. The second letter indicates the type of equipment or system to be protected: Bat – Batteries and battery energy storage systems as per 60269-7 D – North American time-delay fuses for motor circuits, UL 248 fuses G – General purpose protection of wires and cables M – Motors N – Conductors sized to North American practice, UL 248 fuses PV – Solar photovoltaic arrays as per 60269-6 R, S – Rectifiers or semiconductors as per 60269-5 Tr – Transformers Any fuses built to the IEC 60269 standard and carrying the same application category (for example, gG or aM) will have similar electrical characteristics, time-current characteristics, and power dissipation as any other, even if the fuses are made in the packages standardized to the earlier national standards. Fuses of the same application category can be substituted for each other provided the voltage rating of the circuit does not exceed the fuse rating. The tests recommended on Fuses by IEC 60269 are: Temperature rise & power dissipation test Non-fusing & Fusing test Verification of rated current test Overload test Verification of Time Current Characteristics and Gates D type fuses D-type (Diazed, from German "Diametral abgestuftes zweiteiliges Edisongewinde" for "diametrically graded two-part Edison thread") fuse cartridges have a bottle-shaped ceramic body with metal end caps and are used with screw-in fuse holders. 
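The two-part category codes just described can be decoded mechanically. The helper below is a hypothetical illustration of the scheme; the tables are transcribed from the text above, but the function itself is not part of the standard.

```python
# Hypothetical decoder for IEC 60269 application-category codes
# (first letter = breaking range, remainder = protected object).
BREAKING_RANGE = {
    "a": "short-circuit protection only (back-up)",
    "g": "full-range (general purpose)",
}
PROTECTED_OBJECT = {
    "G": "wires and cables",
    "M": "motors",
    "Tr": "transformers",
    "PV": "solar photovoltaic arrays",
    "Bat": "batteries and battery systems",
    "D": "North American time-delay motor circuits",
    "N": "North American conductors",
    "R": "rectifiers or semiconductors",
    "S": "rectifiers or semiconductors",
}

def decode(category):
    """Split e.g. 'gG' into its breaking-range and protected-object parts."""
    return BREAKING_RANGE[category[0]], PROTECTED_OBJECT[category[1:]]
```

For example, `decode("gG")` returns the general-purpose breaking range with "wires and cables" as the protected object, matching the interchangeability rule stated above.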
Introduced in 1909 by Siemens, they are available today in five different body sizes, with ratings from 2 A up to 200 A (see table), though only D II and D III fuses are commonly used. The designation of a size consists of the letter D and a Roman numeral. Higher-voltage types rated up to 750 V have increased clearance distances and are longer than lower-voltage-rated fuses. They are available with interrupting ratings up to 50kA RMS, and are intended for use as incoming main protection from an electrical supply utility. D0-type (Neozed) fuses were introduced in 1967 and use the same concept, but have a smaller, cylindrical body. They are available in three different sizes with ratings from 2 A up to 100 A (see table). Fuse holders may be secured by screws to a panel, attached to bus bars, or mounted on DIN rails. For the Neozed fuses, there are also fuse bases with integrated disconnecting switches. Changing fuses with the circuit off increases the safety of the user. With new versions of these load disconnecting switches, the fuse cartridges are no longer screwed, but are held by spring clips. Traditional Diazed fuse holders are made as a conducting metal envelope covered with non-conducting porcelain cover. Under mechanical stress it is possible for the cover to crack partially or fully, uncovering the conducting element. It may happen if a fuse holder was accidentally dropped or someone was using too much force to screw it in. Uncovered metal envelopes present a serious risk of shock and should be replaced immediately under extreme precautions by trained personnel. The smaller end cap (the "top" of the bottle) has a diameter that varies with the fuse rating: higher ratings have wider end caps. The fixed part of the fuse holder contains a (usually colour-coded) gauge ring, which will accept end caps up to a certain diameter. It is therefore not possible to fit a fuse of a higher rating than allowed for by the gauge ring. 
The size of the gauge ring is determined by the current rating of the circuit to be protected. Gauge rings are intended to be changed only by authorized personnel. The larger end cap (the "bottom" of the bottle) has at its centre a small spring-loaded button retained by a thin wire, which serves as a "fuse blown" indicator. When the fuse blows, the wire breaks and the indicator button is ejected by the spring. A missing or displaced indicator thus pinpoints a blown fuse. The removable part of the fuse holder has a small window to allow inspection of the indicator without removal of the fuse. The indicator button usually has a coloured dot indicating the fuse rating (see table). D- and D0-type fuses are used for protection of circuits up to 500 V AC in residential and commercial installations, and occasionally for the protection of electric motors. The most common operating class is gG (general purpose, formerly gL), but other classes are available. A gG class fuse will typically blow within 2–5 seconds at five times the rated current, and within 0.1–0.2 seconds at ten times the rated current. Gauge rings and fuse indicators are colour coded for the nominal current: D-system (DIAZED) The sizes D IV and D V are rarely used D I and D V are not part of IEC 60269 (meet outdated national standards) D0-System (NEOZED) Fuses of the D0 system (read as D zero) or NEOZED are smaller than the DIAZED fuses. NEOZED fuses are divided into three sizes. The D03 size is used very rarely, because with these high currents NH fuses have proven to be more reliable. In circuits with a very high prospective short-circuit current level (more than 50kA), D-fuses cannot be used and type NH fuses are used instead. D01 is nowadays uncommon because miniature circuit breakers are usually used instead for these currents. NH fuses NH fuses have a square or oblong body and blade-style terminals. These fuses are larger and have higher ratings than the screw type fuses, exceeding 100 kA. 
NH fuses are widespread in industrial plants as well as in public mains electricity applications, e.g., in electrical substations and electrical distribution boards, or in house junction boxes in buildings. NH fuses can be changed with power on the circuit, but this requires special training, special tools, and personal protective equipment. An isolation protection mat and isolating gloves may be necessary. Pulling any fuse cartridge under load can cause an electric arc, which may cause serious or fatal injuries without protective equipment. NH disconnecting switches facilitate the safe replacement of cartridges. NH fuses are manufactured in several current rating ranges. British domestic fuses In British residential installations, cylindrical fuses with a diameter of ¼ inch and a length of 1 inch (Ø 6.3 × 25.4 mm) in compliance with British Standard BS 1362 are found inside a standard UK 13 A plug. The specification calls for sand-filled fuses with a ceramic body and metallic contacts, 5.5 mm long, at the ends. References External links Electrical standards 60269
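The gG time-current behaviour described above (blowing within 2–5 seconds at five times rated current, and within 0.1–0.2 seconds at ten times rated current) can be pictured as a simple "gate" check. The sketch below is illustrative only: the gate values are the rough figures quoted in this article, not the official IEC 60269 gate tables, and the function name is made up for the example.

```python
# Illustrative sketch only: the gate values below are the approximate gG
# figures quoted in this article, not the official IEC 60269 gate tables.
GG_GATES = {
    5.0: (2.0, 5.0),    # at 5x rated current: must blow within 2-5 s
    10.0: (0.1, 0.2),   # at 10x rated current: must blow within 0.1-0.2 s
}

def within_gates(current_multiple, clearing_time_s, gates=GG_GATES):
    """Return True if a measured clearing time falls inside the
    (min, max) window defined for that multiple of rated current."""
    if current_multiple not in gates:
        raise ValueError("no gate defined for this current multiple")
    lo, hi = gates[current_multiple]
    return lo <= clearing_time_s <= hi
```

Because all fuses of a given application category share these characteristics, the same check would apply regardless of which national package style the fuse uses.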
IEC 60269
[ "Physics", "Technology" ]
1,882
[ "Electrical standards", "Electrical systems", "Computer standards", "IEC standards", "Physical systems" ]
24,156,106
https://en.wikipedia.org/wiki/C14H17NO6
{{DISPLAYTITLE:C14H17NO6}} The molecular formula C14H17NO6 (molar mass: 295.29 g/mol) may refer to: Indican Prunasin, a cyanogenic glucoside Sambunigrin, a cyanogenic glucoside Molecular formulas
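The quoted molar mass of 295.29 g/mol can be reproduced from standard atomic weights. The sketch below is illustrative only, using atomic-weight values rounded to three decimals.

```python
# Standard atomic weights in g/mol, rounded; sufficient to reproduce
# the 295.29 g/mol figure quoted above for C14H17NO6.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(composition):
    """composition maps an element symbol to its atom count."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in composition.items())

mass = molar_mass({"C": 14, "H": 17, "N": 1, "O": 6})
print(round(mass, 2))  # 295.29
```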
C14H17NO6
[ "Physics", "Chemistry" ]
72
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,156,234
https://en.wikipedia.org/wiki/C14H16ClN3O
{{DISPLAYTITLE:C14H16ClN3O}} The molecular formula C14H16ClN3O may refer to: ELB-139, an anxiolytic drug with a novel chemical structure, which is used in scientific research JNJ-7777120, a drug being developed by Johnson & Johnson Pharmaceutical Research & Development
C14H16ClN3O
[ "Chemistry" ]
78
[ "Isomerism", "Set index articles on molecular formulas" ]
24,157,019
https://en.wikipedia.org/wiki/Religious%20views%20on%20genetically%20modified%20foods
Religious views on genetically modified foods have been mixed, although as yet, no genetically modified foods ("GM" foods) have been designated as unacceptable by religious authorities. Background and history Genetic engineering is a laboratory process that alters the DNA make-up of an organism. This may include deleting or adding a segment of DNA. The term "genetically modified organism" (GMO) typically refers to food products that have been altered using genetic engineering. This is done by adding DNA to a single cell, which will later be present in the rest of the organism due to cell reproduction. Around 8000 BCE, humans used agricultural techniques such as cross-breeding to breed animals and plants with preferred traits. In 1982, the FDA approved the first genetically modified product, insulin, for public use in the United States. In 1994, a genetically modified tomato was approved for public use by the FDA in the United States. Common genetically modified foods include corn, soybeans, potatoes, and squash. Judaism There is no consensus in the views of Jewish religious leaders, scholars and commentators on whether Jews can eat GM food products or engage in research in the area of GM food technology. One perspective emphasizes that humanity was created in God's image and this means that humanity can "partner with God in the perfection of everything in the world," and therefore Jewish law accepts genetic engineering to save and prolong human life as well as increase the quality or quantity of the world's food supply. Other perspectives hold that GM food technology is a violation of Kil'ayim, the mixed breeding of crops or livestock, and that because God made "distinctions in the natural world", Jews must honor them. Kashrut Kashrut laws state that all plants are considered kosher. Many Rabbinic authorities believe that genetic material separated from the parent organism is "inert," or separate from the parent organism. 
Thus, genetic material that is transferred from a non-kosher species is no longer considered food, as it does not have taste and is considered separate from the non-kosher species. Rabbinic authorities generally assert that genetic material from non-kosher species is not in itself non-kosher and does not render the new organism non-kosher. Some may argue, however, that food made with genes from pigs or other non-kosher animals would likely be non-kosher. Genetic engineering also poses Kashrut concerns regarding the changing of physical characteristics of animals. Kashrut only permits eating animals with split hooves that chew their cud. However, genetic engineering may permit a traditionally non-kosher animal to attain these characteristics. Whether such an animal would then be kosher is a difficult question under Halakha. Islam Islam likewise forbids the eating of pork, and Islamic scholars have raised concern about the theoretical production of foods with genes from pigs; perspectives vary. A seminar of Islamic scholars in Kuwait on genetics and genetic engineering in October 1998 concluded that although there are fears about the possibility of the harmful effects of GM food technology and GM food products on human beings and the environment, there are no laws within Islam which stop the genetic modification of food crops and animals. In 2003, the Indonesian Ulemas Council (MUI) approved the importation and consumption of genetically modified food products by Indonesian Muslims. Others have written that while there are Quranic verses forbidding humanity from defacing God's creation, these "cannot be invoked as a total and radical ban on genetic engineering ... If carried too far, it would conflict with many forms of curative surgery that also entail some change in God's creation". 
Voices in opposition to GMOs argue that there is no need for genetic modification of food crops because God created everything perfectly and man does not have any right to manipulate anything that God has created. Christianity Roman Catholic Church Views of Rome on genetic engineering In 1999, after two years of discussions, the Vatican's Pontifical Academy for Life stated that modifying the genes of plants and animals is theologically acceptable. The Guardian reported that "Bishop Elio Sgreccia, vice-president of the pontifical academy, said: 'We are increasingly encouraged that the advantages of genetic engineering of plants and animals are greater than the risks. The risks should be carefully followed through openness, analysis and controls, but without a sense of alarm.' Referring to genetically modified products such as corn and soya, Sgreccia added: 'We give it a prudent 'yes'. We cannot agree with the position of some groups that say it is against the will of God to meddle with the genetic make-up of plants and animals.'" In 2000 as part of the Great Jubilee Pope John Paul II gave an address concerning agriculture, at which he said: "The famous words of Genesis entrust the earth to man's use, not abuse. They do not make man the absolute arbiter of the earth's governance, but the Creator's "co-worker": a stupendous mission, but one which is also marked by precise boundaries that can never be transgressed with impunity. This is a principle to be remembered in agricultural production itself, whenever there is a question of its advance through the application of biotechnologies, which cannot be evaluated solely on the basis of immediate economic interests. They must be submitted beforehand to rigorous scientific and ethical examination, to prevent them from becoming disastrous for human health and the future of the earth." 
Other studies and statements A 2002 meeting between bishops and scientists in the Philippines concluded that biotechnology could be an important stepping stone in the struggle against hunger and environmental pollution. A 2003 symposium gathered by Cardinal Renato T. Martino examined the use of GMOs in modern agriculture. The symposium's study argued that the future of humanity is at stake and that there is no room for the ideological arguments advanced by environmentalists. Velasio De Paolis, a professor of canon law at the Pontifical Urban University, has said that it was "easy to say no to GM food if your stomach is full". In 2008, Fr. Sean McDonagh, an Irish Columban priest and "well-known commentator on environmental issues", questioned whether hosts from transgenic wheat could ever be approved by the Congregation for the Doctrine of the Faith because of the Church's strict rules regarding sacramental bread. He specifically cited canon 924, which stipulates the bread must be wheaten only, and recently made, so that there is no danger of corruption. A 2009 study on genetically modified organisms sponsored by the Pontifical Academy of Sciences came to a favorable conclusion on GMOs, viewing them as praiseworthy for improving the lives of the poor. Philippines The Philippines is a predominantly Catholic country, and official pronouncements of the Catholic Bishops Conference of the Philippines (CBCP) exert a strong influence in policy making; the CBCP has not supported biotechnology, and probably will not until there is an official endorsement from the Pope. During President Arroyo's visit to Rome on September 27, 2003, she apparently consulted Pope John Paul II about the Church position on biotechnology. On the basis of that meeting, she issued a statement indicating that she felt it was important that opponents of GMOs knew that according to the Vatican, GMOs are not immoral. The CBCP issued a statement in response stating that the Pope had not endorsed GMOs. 
In 2009 Bishop Vicente Navarra of the Diocese of Bacolod in the Philippines issued a pastoral letter calling on the Negros Occidental and Bacolod City governments to continue banning the entry of GMO products. Anabaptist Christianity About 550 Amish farmers in Pennsylvania have adopted nicotine-free tobacco since 2001, because it pays "about $1.50 per pound for the nicotine-free tobacco, nearly double the 80-cent-per-pound rate for traditional tobacco". GMO crops do not conflict with the Amish lifestyle. Anglican Communion In 2004, the Church Environmental Network, representing members of the Anglican church of South Africa, spoke out against the South African government's backing of genetically modified organisms (GMOs). Christian Aid, a British ecumenical group, released a paper in 2000 that expressed sharp concerns about the agricultural biotechnology industry, particularly with regard to its potential effects on impoverished people and economic development in the developing world. Rastafarian While the Rastafari Movement as a whole has no central authority, a Rastafari Code of Conduct was ratified in July–August 2008 at a meeting in Jamaica of the Nyah Binghi Order, one of the three houses of the Rastafari movement; that Code defines GM food as not Ital. References Genetically modified organisms Religion and science
Religious views on genetically modified foods
[ "Engineering", "Biology" ]
1,769
[ "Genetic engineering", "Genetically modified organisms" ]
24,158,072
https://en.wikipedia.org/wiki/Boole%27s%20rule
In mathematics, Boole's rule, named after George Boole, is a method of numerical integration. Formula Simple Boole's Rule It approximates the integral of f(x) over an interval [x1, x5] by using the values of f at five equally spaced points x1, x2, x3, x4, x5 with spacing h = (x5 − x1)/4. It is expressed thus in Abramowitz and Stegun: ∫ from x1 to x5 of f(x) dx = (2h/45)(7f1 + 32f2 + 12f3 + 32f4 + 7f5) + error term, where fi = f(xi) and the error term is −(8/945) h^7 f^(6)(ξ) for some number ξ between x1 and x5. It is often known as Bode's rule, due to a typographical error that propagated from Abramowitz and Stegun. Composite Boole's Rule In cases where the integration is permitted to extend over equidistant sections of the interval [a, b], the composite Boole's rule might be applied. Given N divisions, where N mod 4 = 0, the integrated value amounts to: ∫ from a to b of f(x) dx ≈ (2h/45)[7(f0 + fN) + 32(f1 + f3 + … + fN−1) + 12(f2 + f6 + … + fN−2) + 14(f4 + f8 + … + fN−4)], where h = (b − a)/N and fi = f(a + ih); the error term is similar to the above. See also Newton–Cotes formulas Simpson's rule Romberg's method Notes References Integral calculus Numerical analysis Numerical integration (quadrature) Articles with example Lisp (programming language) code
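The Common Lisp listings this article refers to did not survive extraction; the sketch below is an equivalent implementation in Python, ignoring the error term as the article describes.

```python
def booles_rule(f, a, b):
    """Simple Boole's rule over [a, b] using five equally spaced points.
    The error term, -(8/945) h^7 f^(6)(xi), is ignored, so the result
    is exact for polynomials of degree 5 or less."""
    h = (b - a) / 4.0
    x = [a + i * h for i in range(5)]
    return (2.0 * h / 45.0) * (
        7 * f(x[0]) + 32 * f(x[1]) + 12 * f(x[2]) + 32 * f(x[3]) + 7 * f(x[4])
    )

def composite_booles_rule(f, a, b, n):
    """Composite Boole's rule with n subdivisions, n mod 4 == 0:
    apply the simple rule on each group of four subintervals and sum."""
    if n % 4 != 0:
        raise ValueError("n must be a multiple of 4")
    h = (b - a) / n
    return sum(booles_rule(f, a + k * h, a + (k + 4) * h)
               for k in range(0, n, 4))

print(booles_rule(lambda x: x ** 4, 0.0, 4.0))  # ~204.8 (exact: 4**5 / 5)
```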
Boole's rule
[ "Mathematics" ]
233
[ "Calculus", "Computational mathematics", "Mathematical relations", "Numerical analysis", "Approximations", "Integral calculus" ]
24,158,491
https://en.wikipedia.org/wiki/Goldman%20domain
In mathematics, a Goldman domain or G-domain is an integral domain A whose field of fractions is a finitely generated algebra over A. They are named after Oscar Goldman. An overring (i.e., an intermediate ring lying between the ring and its field of fractions) of a Goldman domain is again a Goldman domain. There exists a Goldman domain where all nonzero prime ideals are maximal although there are infinitely many prime ideals. An ideal I in a commutative ring A is called a Goldman ideal if the quotient A/I is a Goldman domain. A Goldman ideal is thus prime, but not necessarily maximal. In fact, a commutative ring is a Jacobson ring if and only if every Goldman ideal in it is maximal. The notion of a Goldman ideal can be used to give a slightly sharpened characterization of the radical of an ideal: the radical of an ideal I is the intersection of all Goldman ideals containing I. Alternative definition An integral domain D is a G-domain if and only if: Its field of fractions is a simple extension of D The intersection of its nonzero prime ideals (not to be confused with the nilradical) is nonzero There is a nonzero element u such that for any nonzero ideal I, u^n ∈ I for some n. A G-ideal I is defined as an ideal such that the quotient D/I is a G-domain. Since a factor ring is an integral domain if and only if the ring is factored by a prime ideal, every G-ideal is also a prime ideal. G-ideals can be used as a refined collection of prime ideals in the following sense: the radical of an ideal can be characterized as the intersection of all prime ideals containing the ideal, and in fact we still get the radical even if we take the intersection over the G-ideals. Every maximal ideal is a G-ideal, since the quotient by a maximal ideal is a field, and a field is trivially a G-domain. Therefore, maximal ideals are G-ideals, and G-ideals are prime ideals. 
In a Jacobson ring, the G-ideals are precisely the maximal ideals, and in fact this is an equivalent characterization of Jacobson rings: a ring is a Jacobson ring when all G-ideals are maximal ideals. This leads to a simplified proof of the Nullstellensatz. It is known that, given a ring extension T of a G-domain D, T is algebraic over D if and only if every ring between D and T is a G-domain. A Noetherian domain is a G-domain if and only if its Krull dimension is at most one and it has only finitely many maximal ideals (or equivalently, prime ideals). Notes References Ring theory
Goldman domain
[ "Mathematics" ]
553
[ "Fields of abstract algebra", "Ring theory" ]
24,158,853
https://en.wikipedia.org/wiki/Overring
In mathematics, an overring of an integral domain contains the integral domain, and the integral domain's field of fractions contains the overring. Overrings provide an improved understanding of different types of rings and domains. Definition In this article, all rings are commutative rings, and ring and overring share the same identity element. Let K represent the field of fractions of an integral domain R. Ring T is an overring of the integral domain R if R is a subring of T and T is a subring of the field of fractions K; the relationship is R ⊆ T ⊆ K. Properties Ring of fractions The rings R_S are the rings of fractions of the ring R by a multiplicative set S. Assume T is an overring of R and S is a multiplicative set in R. The ring T_S is an overring of R_S. The ring R_S is the total ring of fractions of R if every nonunit element of R is a zero-divisor. Every overring of contained in is a ring , and is an overring of . Ring is integrally closed in if is integrally closed in . Noetherian domain Definitions A Noetherian ring satisfies the 3 equivalent finiteness conditions i) every ascending chain of ideals is finite, ii) every non-empty family of ideals has a maximal element and iii) every ideal has a finite basis. An integral domain is a Dedekind domain if every ideal of the domain is a finite product of prime ideals. A ring's restricted dimension is the maximum rank among the ranks of all prime ideals that contain a regular element. A ring is locally nilpotentfree if every ring with maximal ideal is free of nilpotent elements or a ring with every nonunit a zero divisor. An affine ring is the homomorphic image of a polynomial ring over a field. Properties Every overring of a Dedekind ring is a Dedekind ring. Every overring of a direct sum of rings whose non-unit elements are all zero-divisors is a Noetherian ring. Every overring of a Krull 1-dimensional Noetherian domain is a Noetherian ring. These statements are equivalent for a Noetherian ring R with integral closure R′. Every overring of R is a Noetherian ring. 
For each maximal ideal M of R, every overring of R_M is a Noetherian ring. Ring R is locally nilpotentfree with restricted dimension 1 or less. Ring R′ is Noetherian, and ring R has restricted dimension 1 or less. Every overring of R is integrally closed. These statements are equivalent for an affine ring R with integral closure R′. Ring R is locally nilpotentfree. Ring R′ is a finite module. Ring R′ is Noetherian. An integrally closed local ring is an integral domain or a ring whose non-unit elements are all zero-divisors. A Noetherian integral domain is a Dedekind ring if every overring of the Noetherian ring is integrally closed. Every overring of a Noetherian integral domain is a ring of fractions if the Noetherian integral domain is a Dedekind ring with a torsion class group. Coherent rings Definitions A coherent ring is a commutative ring in which each finitely generated ideal is finitely presented. Noetherian domains and Prüfer domains are coherent. A pair (R, T) indicates an integral domain extension of T over R. Ring S is an intermediate domain for pair (R, T) if R is a subdomain of S and S is a subdomain of T. Properties A Noetherian ring's Krull dimension is 1 or less if every overring is coherent. For an integral domain pair (R, T), T is an overring of R if each intermediate integral domain is integrally closed in T. The integral closure of R is a Prüfer domain if each proper overring of R is coherent. The overrings of Prüfer domains and Krull 1-dimensional Noetherian domains are coherent. Prüfer domains Properties A ring R has the QR property if every overring is a localization with a multiplicative set. The QR domains are Prüfer domains. A Prüfer domain with a torsion Picard group is a QR domain. A Prüfer domain is a QR domain if the radical of every finitely generated ideal equals the radical generated by a principal ideal. The statement "R is a Prüfer domain" is equivalent to: Each overring of R is the intersection of localizations of R, and R is integrally closed. 
Each overring of R is the intersection of rings of fractions of R, and R is integrally closed. Each overring of R has prime ideals that are extensions of the prime ideals of R, and R is integrally closed. Each overring of R has at most 1 prime ideal lying over any prime ideal of R, and R is integrally closed. Each overring of R is integrally closed. Each overring of R is coherent. The statement "R is a Prüfer domain" is equivalent to: Each overring of R is flat as an R-module. Each valuation overring of R is a ring of fractions. Minimal overring Definitions A minimal ring homomorphism is an injective non-surjective homomorphism, and if the homomorphism is a composition of homomorphisms f and g then f or g is an isomorphism. A proper minimal ring extension T of subring R occurs if the ring inclusion of R into T is a minimal ring homomorphism. This implies the ring pair (R, T) has no proper intermediate ring. A minimal overring T of ring R occurs if T contains R as a subring, and the ring pair (R, T) has no proper intermediate ring. The Kaplansky ideal transform (Hayes transform, S-transform) of an ideal I with respect to an integral domain R is a subset of the fraction field K. This subset contains elements x such that for each element y of the ideal I there is a positive integer n with the product x·y^n contained in the integral domain R. Properties Any domain generated from a minimal ring extension of domain R is an overring of R if R is not a field. The field of fractions of R contains a minimal overring of R when R is not a field. Assume an integrally closed integral domain R is not a field. If a minimal overring of the integral domain R exists, this minimal overring occurs as the Kaplansky transform of a maximal ideal of R. Examples The Bézout integral domain is a type of Prüfer domain; the Bézout domain's defining property is that every finitely generated ideal is a principal ideal. The Bézout domain will share all the overring properties of a Prüfer domain. The integer ring is a Prüfer ring, and all overrings are rings of quotients. 
A dyadic rational is a fraction with an integer numerator and a power of 2 as denominator. The dyadic rational ring is the localization of the integers by powers of two and is an overring of the integer ring. See also Glossary of ring theory Localization (commutative algebra) Regular element (in ring theory) Notes References Related categories Ring theory Algebraic structures Commutative algebra
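The dyadic-rational example above is easy to make concrete: Z[1/2] sits between Z and its field of fractions Q, so membership reduces to checking that a reduced denominator is a power of two. A minimal sketch using Python's fractions module (the helper name is made up for the example):

```python
from fractions import Fraction

def is_dyadic(q):
    """True if the rational q lies in Z[1/2], the localization of the
    integers at the powers of two: its reduced denominator must be 2^k."""
    d = Fraction(q).denominator
    while d % 2 == 0:
        d //= 2
    return d == 1

# Z is a subring of Z[1/2], which is a subring of Q (an overring of Z),
# and the overring is closed under addition and multiplication.
assert is_dyadic(Fraction(3))                      # integer
assert is_dyadic(Fraction(5, 8))                   # 5 / 2^3
assert not is_dyadic(Fraction(1, 3))               # denominator 3
assert is_dyadic(Fraction(5, 8) + Fraction(3, 4))  # closure under +
```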
Overring
[ "Mathematics" ]
1,405
[ "Mathematical structures", "Mathematical objects", "Ring theory", "Fields of abstract algebra", "Algebraic structures", "Commutative algebra" ]
24,159,994
https://en.wikipedia.org/wiki/Journal%20of%20Applied%20Biomechanics
The Journal of Applied Biomechanics is a bimonthly peer-reviewed academic journal and an official journal of the International Society of Biomechanics. It covers research on musculoskeletal and neuromuscular biomechanics in human movement, sport, and rehabilitation. Abstracting and indexing The journal is abstracted and indexed in Compendex, CINAHL, Science Citation Index Expanded, Current Contents/Clinical Medicine, Index Medicus/MEDLINE/PubMed, Embase, and Scopus. References External links Biotechnology journals Biomechanics Bimonthly journals English-language journals Academic journals established in 1985 Academic journals published by learned and professional societies Biomedical engineering journals
Journal of Applied Biomechanics
[ "Physics", "Engineering", "Biology" ]
145
[ "Biomechanics", "Biological engineering", "Biotechnology literature", "Bioengineering stubs", "Biotechnology stubs", "Mechanics", "Medical technology stubs", "Biotechnology journals", "Medical technology" ]
24,162,194
https://en.wikipedia.org/wiki/Noncommutative%20measure%20and%20integration
Noncommutative measure and integration refers to the theory of weights, states, and traces on von Neumann algebras (Takesaki 1979 v. 2 p. 141). References I. E. Segal. A noncommutative extension of abstract integration. Ann. of Math. (2), 57:401–457, 1953. MR # 14:991f. Operator algebras Noncommutative geometry
Noncommutative measure and integration
[ "Mathematics" ]
100
[ "Mathematical analysis", "Mathematical analysis stubs" ]
24,162,271
https://en.wikipedia.org/wiki/Tetracyclic
Tetracyclics are cyclic chemical compounds that contain four fused rings of atoms, for example, Tröger's base. Some tricyclic compounds having three fused rings and one tethered ring (connected to the main nucleus by a single bond) can also be classified as tetracyclic, for example, ciclazindol. Tetracyclic compounds have various pharmaceutical uses, such as: tetracycline antibiotics Doxycycline Tigecycline Omadacycline Eravacycline tetracyclic antidepressants Benzoctamine Loxapine Mazindol Mianserin Mirtazapine See also Tricyclic Heterocyclic References
Tetracyclic
[ "Chemistry" ]
152
[ "Organic chemistry stubs" ]
1,972,407
https://en.wikipedia.org/wiki/Neritic%20zone
The neritic zone (or sublittoral zone) is the relatively shallow part of the ocean above the drop-off of the continental shelf, approximately 200 m (660 ft) in depth. From the point of view of marine biology it forms a relatively stable and well-illuminated environment for marine life, from plankton up to large fish and corals, while physical oceanography sees it as where the oceanic system interacts with the coast. Definition (marine biology), context, extra terminology In marine biology, the neritic zone, also called coastal waters, the coastal ocean or the sublittoral zone, refers to that zone of the ocean where sunlight reaches the ocean floor, that is, where the water is never so deep as to take it out of the photic zone. It extends from the low tide mark to the edge of the continental shelf, with a relatively shallow depth extending to about 200 meters (660 feet). Above the neritic zone lie the intertidal (or eulittoral) and supralittoral zones; below it the continental slope begins, descending from the continental shelf to the abyssal plain and the pelagic zone. Within the neritic, marine biologists also identify the following: The infralittoral zone is the algal-dominated zone down to around five metres below the low water mark. The circalittoral zone is the region beyond the infralittoral, which is dominated by sessile animals such as oysters. The subtidal zone is the region of the neritic zone which is below the intertidal zone, therefore never exposed to the atmosphere. Physical characteristics The neritic zone is covered with generally well-oxygenated water, receives plenty of sunlight, has a relatively stable temperature, low water pressure and stable salinity levels, making it highly suitable for photosynthetic life. There are several different areas or zones in the ocean. The area along the bottom of any body of water from the shore to the deepest abyss is called the benthic zone. 
It is where decomposed organic debris (also known as ocean 'snow') has settled to form a sediment layer. All photosynthetic life needs light to grow and how far out into the ocean light can still penetrate through the water column to the floor or benthic zone is what defines the neritic zone. That photic zone, or area where light can penetrate through the water column, is usually above ~100 meters (~328 feet). Some coastal areas have a long area of shallow water that extends far out beyond the landmass into the water and others, for example islands that have formed from ancient volcanos where the 'shelf' or edge of the land mass is very steep, have a very short neritic zone. Life forms The above characteristics make the neritic zone the location of the majority of sea life. The result is high primary production by photosynthetic life such as phytoplankton and floating sargassum; zooplankton, free-floating creatures ranging from microscopic foraminiferans to small fish and shrimp, feed on the phytoplankton (and one another); both trophic levels in turn form the base of the food chain (or, more properly, web) that supports most of the world's great wild fisheries. Corals are also mostly found in the neritic zone, where they are more common than in the intertidal zone as they have less change to deal with. Definition (physical oceanography) In physical oceanography, the sublittoral zone refers to coastal regions with significant tidal flows and energy dissipation, including non-linear flows, internal waves, river outflows and ocean fronts. As in marine biology, this zone typically extends to the edge of the continental shelf. See also Coastal fish References Aquatic ecology Fisheries science Physical oceanography Aquatic biomes Oceanographical terminology
Neritic zone
[ "Physics", "Biology" ]
812
[ "Aquatic ecology", "Ecosystems", "Applied and interdisciplinary physics", "Physical oceanography" ]
1,972,580
https://en.wikipedia.org/wiki/Phototropin
Phototropins are blue light photoreceptor proteins (more specifically, flavoproteins) that mediate phototropism responses across many species of algae, fungi and higher plants. Phototropins can be found throughout the leaves of a plant. Along with cryptochromes and phytochromes they allow plants to respond and alter their growth in response to the light environment. When phototropins are hit with blue light, they induce a signal transduction pathway that alters the plant cells' functions in different ways. Phototropins are part of the phototropic sensory system in plants that causes various environmental responses in plants. Phototropins specifically will cause stems to bend towards light and stomata to open. In addition phototropins mediate the first changes in stem elongation in blue light prior to cryptochrome activation. Phototropins are also required for blue light mediated transcript destabilization of specific mRNAs in the cell. Phototropins also regulate the movement of chloroplasts within the cell, notably chloroplast avoidance. It was thought that this avoidance serves a protective function to avoid damage from intense light, however an alternate study argues that the avoidance response is primarily to increase light penetration into deeper mesophyll layers in high light conditions. Phototropins may also be important for the opening of stomata. Enzyme activity Phototropins have two distinct light, oxygen, or voltage regulated domains (LOV1, LOV2) that each bind flavin mononucleotide (FMN). The FMN is noncovalently bound to a LOV domain in the dark, but becomes covalently linked upon exposure to suitable light. The formation of the bond is reversible once light is no longer present. The forward reaction with light is not dependent on temperature, though low temperatures give increased stability of the covalent linkage, leading to a slower reversal reaction. 
Light excitation will lead to a conformational change within the protein, which allows for kinase activity. There is also evidence to suggest that phototropins undergo autophosphorylation at various sites across the enzyme. Phototropins trigger signaling responses within the cell, but it is unknown which proteins are phosphorylated by phototropins, or exactly how the autophosphorylation events play a role in signaling. Phototropins are typically found on the plasma membrane, but some phototropins have been found in substantial quantities on chloroplast membranes. One study found that phototropins on the plasma membrane play a role in phototropism, leaf flattening, stomatal opening, and chloroplast movements, while phototropins on the chloroplasts only partially affected stomatal opening and chloroplast movement, suggesting that the location of the protein in the cell may also play a role in its signaling function. References Other sources Sensory receptors Signal transduction Biological pigments Integral membrane proteins Molecular biology Plant physiology EC 2.7.11
Phototropin
[ "Chemistry", "Biology" ]
632
[ "Plant physiology", "Plants", "Signal transduction", "Biological pigments", "Molecular biology", "Biochemistry", "Neurochemistry", "Pigmentation" ]
1,972,752
https://en.wikipedia.org/wiki/N%2CN%27-Dicyclohexylcarbodiimide
N,N′-Dicyclohexylcarbodiimide (DCC) is an organic compound with the chemical formula (C6H11N)2C. It is a waxy white solid with a sweet odor. Its primary use is to couple amino acids during artificial peptide synthesis. The low melting point of this material allows it to be melted for easy handling. It is highly soluble in dichloromethane, tetrahydrofuran, acetonitrile and dimethylformamide, but insoluble in water. Structure and spectroscopy The C−N=C=N−C core of carbodiimides (N=C=N) is linear, being related to the structure of allene. The molecule has idealized C2 symmetry. The N=C=N moiety gives a characteristic IR spectroscopic signature at 2117 cm−1. The 15N NMR spectrum shows a characteristic shift of 275 ppm upfield of nitric acid and the 13C NMR spectrum features a peak at about 139 ppm downfield from TMS. Preparation DCC is produced by the decarboxylation of cyclohexylisocyanate using phosphine oxides as a catalyst: 2 C6H11NCO → (C6H11N)2C + CO2 Alternative catalysts for this conversion include the highly nucleophilic OP(MeNCH2CH2)3N. Other methods Of academic interest, palladium acetate, iodine, and oxygen can be used to couple cyclohexyl amine and cyclohexyl isocyanide. Yields of up to 67% have been achieved using this route: C6H11NC + C6H11NH2 + O2 → (C6H11N)2C + H2O DCC has also been prepared from dicyclohexylurea using a phase transfer catalyst. The disubstituted urea, arenesulfonyl chloride, and potassium carbonate react in toluene in the presence of benzyl triethylammonium chloride to give DCC in 50% yield. Reactions Amide, peptide, and ester formation DCC is a dehydrating agent for the preparation of amides, ketones, and nitriles. In these reactions, DCC hydrates to form dicyclohexylurea (DCU), a compound that is nearly insoluble in most organic solvents and insoluble in water. The majority of the DCU is thus readily removed by filtration, although the last traces can be difficult to eliminate from non-polar products. 
DCC can also be used to invert secondary alcohols. In the Steglich esterification, alcohols, including even some tertiary alcohols, can be esterified using a carboxylic acid in the presence of DCC and a catalytic amount of DMAP. In protein synthesis (such as Fmoc solid-phase synthesis), the N-terminus is often used as the attachment site on which the amino acid monomers are added. To enhance the electrophilicity of the carboxylate group, the negatively charged oxygen must first be "activated" into a better leaving group. DCC is used for this purpose. The negatively charged oxygen will act as a nucleophile, attacking the central carbon in DCC. DCC is temporarily attached to the former carboxylate group forming a highly electrophilic intermediate, making nucleophilic attack by the terminal amino group on the growing peptide more efficient. Moffatt oxidation In combination with dimethyl sulfoxide (DMSO), DCC effects the Pfitzner-Moffatt oxidation. This procedure is used for the oxidation of alcohols to aldehydes and ketones. Unlike metal-mediated oxidations, such as the Jones oxidation, the reaction conditions are sufficiently mild to avoid over-oxidation of aldehydes to carboxylic acids. Generally, three equivalents of DCC and 0.5 equivalents of proton source in DMSO are allowed to react overnight at room temperature. The reaction is quenched with acid. Other reactions Reaction of an acid with hydrogen peroxide in the presence of DCC leads to formation of a peroxide linkage. Alcohols can also be dehydrated using DCC. This reaction proceeds by first giving the O-acylurea intermediate which is then hydrogenolyzed to produce the corresponding alkene: RCHOHCH2R′ + (C6H11N)2C → RCH=CHR′ + (C6H11NH)2CO Secondary alcohols can be stereochemically inverted by formation of a formyl ester followed by saponification. The secondary alcohol is mixed directly with DCC, formic acid, and a strong base such as sodium methoxide. 
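The Moffatt-oxidation stoichiometry quoted above (three equivalents of DCC per equivalent of substrate) can be turned into bench quantities with a short back-of-envelope calculation. A minimal Python sketch, using the approximate molar mass of DCC (C13H22N2, about 206.33 g/mol); the helper name and the scale chosen are illustrative, not taken from any published protocol:

```python
# Illustrative equivalents calculation for a Moffatt oxidation setup
# (3 equiv DCC per equiv of alcohol substrate, as described above).
# Molar mass is approximate; this is a back-of-envelope helper,
# not lab software.

DCC_MOLAR_MASS = 206.33  # g/mol for C13H22N2

def dcc_mass_for(substrate_mmol, equivalents=3.0):
    """Return the mass of DCC (in grams) for the given substrate amount."""
    return substrate_mmol * equivalents * DCC_MOLAR_MASS / 1000.0

print(round(dcc_mass_for(1.0), 3))  # ~0.619 g of DCC per 1 mmol of alcohol
```

For 1 mmol of alcohol this gives roughly 0.62 g of DCC, which matches the scale of a typical small-flask oxidation.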
In the presence of DMAP, DCC self-condenses two molecules of phenylacetic acid and its substituted derivatives to produce a bisbenzyl ketone. Biological action DCC is a classical inhibitor of ATP synthase. DCC inhibits ATP synthase by binding to one of the c subunits and causing steric hindrance of the rotation of the FO subunit. Safety DCC often causes rashes. In vivo'' dermal sensitization studies according to OECD 429 confirmed DCC is a strong skin sensitizer, showing a response at 0.03 wt% in the Local Lymph Node Assay (LLNA) placing it in Globally Harmonized System of Classification and Labelling of Chemicals (GHS) Dermal Sensitization Category 1A. Thermal hazard analysis by differential scanning calorimetry (DSC) shows DCC poses minimal explosion risks. See also Carbodiimide References External links An excellent illustration of this mechanism can be found here: . Dehydrating agents Peptide coupling reagents Carbodiimides Biochemistry Biochemistry methods Reagents for biochemistry Allergology Sweet-smelling chemicals Cyclohexyl compounds
N,N'-Dicyclohexylcarbodiimide
[ "Chemistry", "Biology" ]
1,280
[ "Biochemistry methods", "Peptide coupling reagents", "Functional groups", "nan", "Reagents for organic chemistry", "Biochemistry", "Reagents for biochemistry", "Dehydrating agents", "Carbodiimides" ]
1,973,309
https://en.wikipedia.org/wiki/Hydroxyl%20tagging%20velocimetry
Hydroxyl tagging velocimetry (HTV) is a velocimetry method used in humid air flows. The method is often used in high-speed combusting flows because the high velocity and temperature accentuate its advantages over similar methods. HTV uses a laser (often an argon-fluoride excimer laser operating at ~193 nm) to dissociate the water in the flow into H + OH. Before entering the flow, optics are used to create a grid of laser beams. The water in the flow is dissociated only where beams of sufficient energy pass through the flow, thus creating a grid in the flow where the concentrations of hydroxyl (OH) are higher than in the surrounding flow. Another laser beam (at either ~248 nm or ~308 nm) in the form of a sheet is also passed through the flow in the same plane as the grid. This laser beam is tuned to a wavelength that causes the hydroxyl molecules to fluoresce in the UV spectrum. The fluorescence is then captured by a charge-coupled device (CCD) camera. Using electronic timing methods, the picture of the grid can be captured at nearly the same instant that the grid is created. By delaying the pulse of the fluorescence laser and the camera shot, an image of the grid that has now displaced downstream can be captured. Computer programs are then used to compare the two images and determine the displacement of the grid. By dividing the displacement by the known time delay, the two-dimensional velocity field (in the plane of the grid) can be determined. Flow ratios, however, are shown to affect the impingement locations, where increased air flow ratios can reduce the required combustor size by isolating reaction products solely within the secondary cavity. Other molecular tagging velocimetry (MTV) methods have used ozone (O3), excited oxygen and nitric oxide as the tag instead of hydroxyl. In the case of ozone the method is known as ozone tagging velocimetry or OTV. OTV has been developed and tested in many room-temperature air flow applications with very accurate test results. 
OTV consists of an initial "write" step, where a 193-nm pulsed excimer laser creates ozone grid lines via oxygen (O2) UV absorption, and a subsequent "read" step, where a 248-nm excimer laser photodissociates the formed O3 and fluoresces the vibrationally excited O2 product thus revealing the grid lines' displacement. References Measuring instruments Laser applications Measurement Fluid dynamics Transport phenomena
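The displacement-over-delay step described above is the computational core of all these tagging methods. A minimal Python sketch of the per-node velocity calculation; the pixel size, pulse delay, and node coordinates are hypothetical values chosen for illustration:

```python
# Sketch of the core MTV velocity calculation described above: grid-node
# displacements between the "write" and delayed "read" images, divided by
# the known pulse delay, give the in-plane velocity field. Node positions
# here are hypothetical pixel coordinates converted with an assumed
# magnification factor.

PIXEL_SIZE_M = 50e-6   # assumed: 50 micrometres per pixel in the flow plane
DELAY_S = 10e-6        # assumed: 10 microsecond delay between images

def node_velocity(p0, p1, pixel_size=PIXEL_SIZE_M, delay=DELAY_S):
    """Two-component velocity (m/s) of one grid intersection."""
    return tuple((b - a) * pixel_size / delay for a, b in zip(p0, p1))

# A grid node that moves 4 px downstream and 1 px transversely:
vx, vy = node_velocity((120.0, 80.0), (124.0, 81.0))
print(vx, vy)  # ≈ 20 m/s streamwise, ≈ 5 m/s transverse
```

In a real system the grid-intersection coordinates would come from image cross-correlation rather than being entered by hand, but the velocity arithmetic is exactly this.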
Hydroxyl tagging velocimetry
[ "Physics", "Chemistry", "Mathematics", "Technology", "Engineering" ]
529
[ "Transport phenomena", "Physical phenomena", "Physical quantities", "Chemical engineering", "Quantity", "Measurement", "Size", "Measuring instruments", "Piping", "Fluid dynamics" ]
1,973,851
https://en.wikipedia.org/wiki/Multi-stage%20flash%20distillation
Multi-stage flash distillation (MSF) is a water desalination process that distills sea water by flashing a portion of the water into steam in multiple stages of what are essentially countercurrent heat exchangers. Current MSF facilities may have as many as 30 stages. Multi-stage flash distillation plants produce about 26% of all desalinated water in the world, but almost all new desalination plants currently use reverse osmosis due to its much lower energy consumption. Principle The plant has a series of spaces called stages, each containing a heat exchanger and a condensate collector. The sequence has a cold end and a hot end while intermediate stages have intermediate temperatures. The stages have different pressures corresponding to the boiling points of water at the stage temperatures. After the hot end there is a container called the brine heater. The process goes through the following steps: When the plant is operating in steady state, feed water at the cold inlet temperature flows, or is pumped, through the heat exchangers in the stages and warms up. When it reaches the brine heater it already has nearly the maximum temperature. In the heater, an amount of additional heat is added. After the heater, the water flows through valves back into the stages that have ever lower pressure and temperature. As it flows back through the stages the water is now called brine, to distinguish it from the inlet water. In each stage, as the brine enters, its temperature is above the boiling point at the pressure of the stage, and a small fraction of the brine water boils ("flashes") to steam thereby reducing the temperature until an equilibrium is reached. The resulting steam is a little hotter than the feed water in the heat exchanger. The steam cools and condenses against the heat exchanger tubes, thereby heating the feed water as described earlier. 
The total evaporation in all the stages is up to approximately 15% of the water flowing through the system, depending on the range of temperatures used. With increasing temperature there are growing difficulties of scale formation and corrosion. 110–120 °C appears to be a maximum, although scale avoidance may require temperatures below 70 °C. The feed water carries away the latent heat of the condensed steam, maintaining the low temperature of the stage. The pressure in the chamber remains constant as equal amounts of steam are formed when new warm brine enters the stage and removed as the steam condenses on the tubes of the heat exchanger. The equilibrium is stable, because if at some point more vapor forms, the pressure increases and that reduces evaporation and increases condensation. In the final stage, the brine and the condensate have a temperature near the inlet temperature. Then the brine and condensate are pumped out from the low pressure in the stage to the ambient pressure. The brine and condensate still carry a small amount of heat that is lost from the system when they are discharged. The heat that was added in the heater makes up for this loss. The heat added in the brine heater usually comes in the form of hot steam from an industrial process co-located with the desalination plant. The steam is allowed to condense against tubes carrying the brine (similar to the stages). The energy that makes possible the evaporation is all present in the brine as it leaves the heater. The reason for letting the evaporation happen in multiple stages rather than a single stage at the lowest pressure and temperature, is that in a single stage, the feed water would only warm to an intermediate temperature between the inlet temperature and the heater, while much of the steam would not condense and the stage would not maintain the lowest pressure and temperature. Such plants can operate at 23–27 kWh/m3 (appr. 90 MJ/m3) of distilled water. 
Because the colder salt water entering the process counterflows with the saline waste water/distilled water, relatively little heat energy leaves in the outflow—most of the heat is picked up by the colder saline water flowing toward the heater and the energy is recycled. In addition, MSF distillation plants, especially large ones, are often paired with power plants in a cogeneration configuration. Waste heat from the power plant is used to heat the seawater, providing cooling for the power plant at the same time. This reduces the energy needed by half to two-thirds, which drastically alters the economics of the plant, since energy is by far the largest operating cost of MSF plants. Reverse osmosis, MSF distillation's main competitor, requires more pretreatment of the seawater and more maintenance, as well as energy in the form of work (electricity, mechanical power) as opposed to cheaper low-grade waste heat. See also Marine flash distillers Multi-effect distillation Multiple-effect distillation Reverse osmosis Reverse osmosis plant Regenerative heat exchanger References External links International Desalination Association Encyclopedia of Desalination and Water Resources Prospects of improving energy consumption of the multi-stage flash distillation process O. A. Hamed, G. M. Mustafa, K. BaMardouf and H. Al-Washmi. Saline Water Conversion Corporation, Saudi Arabia, 2015. Retrieved 21 May 2016. Evaporators Water treatment Water desalination
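The flashing step described above can be sketched with a rough stage-by-stage energy balance: in each stage the brine's sensible-heat drop supplies the latent heat of the vapour, so the fraction flashed per stage is roughly cp·ΔT/hfg. The property values and operating temperatures below are round-number assumptions, not data for any specific plant:

```python
# Rough energy-balance sketch of the flashing step described above.
# Assumed round-number properties for hot brine:
CP_BRINE = 4.0   # kJ/(kg·K), specific heat (assumed)
H_FG = 2330.0    # kJ/kg, latent heat of vaporization (assumed)

def flashed_fraction(t_top_c, t_bottom_c, stages):
    """Approximate total fraction of the brine flashed to vapour."""
    dt_per_stage = (t_top_c - t_bottom_c) / stages
    remaining = 1.0
    for _ in range(stages):
        # Each stage flashes roughly cp*dT/h_fg of the brine entering it.
        remaining *= 1.0 - CP_BRINE * dt_per_stage / H_FG
    return 1.0 - remaining

# 110 °C top brine temperature, 40 °C last stage, 20 stages:
print(round(flashed_fraction(110.0, 40.0, 20), 3))  # ≈ 0.114, about 11 %
```

The result, on the order of 10–15% of the throughput flashed, is why MSF plants recirculate large brine flows relative to their distillate output.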
Multi-stage flash distillation
[ "Chemistry", "Engineering", "Environmental_science" ]
1,114
[ "Water desalination", "Water treatment", "Chemical equipment", "Water pollution", "Distillation", "Evaporators", "Environmental engineering", "Water technology" ]
1,975,956
https://en.wikipedia.org/wiki/Electric%20potential%20energy
Electric potential energy is a potential energy (measured in joules) that results from conservative Coulomb forces and is associated with the configuration of a particular set of point charges within a defined system. An object may be said to have electric potential energy by virtue of either its own electric charge or its relative position to other electrically charged objects. The term "electric potential energy" is used to describe the potential energy in systems with time-variant electric fields, while the term "electrostatic potential energy" is used to describe the potential energy in systems with time-invariant electric fields. Definition The electric potential energy of a system of point charges is defined as the work required to assemble this system of charges by bringing them close together from an infinite distance. Alternatively, the electric potential energy of any given charge or system of charges is the total work done by an external agent in bringing the charge or the system of charges from infinity to the present configuration without undergoing any acceleration. The electrostatic potential energy can also be defined from the electric potential as UE = qV, the product of the charge and the electric potential at its position. Units The SI unit of electric potential energy is the joule (named after the English physicist James Prescott Joule). In the CGS system the erg is the unit of energy, being equal to 10⁻⁷ joules. Also electronvolts may be used, 1 eV = 1.602×10⁻¹⁹ joules. Electrostatic potential energy of one point charge One point charge q in the presence of another point charge Q The electrostatic potential energy, UE, of one point charge q at position r in the presence of a point charge Q, taking an infinite separation between the charges as the reference position, is: UE = (1/(4πε0)) qQ/r, where r is the distance between the point charges q and Q, and q and Q are the charges (not the absolute values of the charges—i.e., an electron would have a negative value of charge when placed in the formula). 
The following outline of proof states the derivation from the definition of electric potential energy and Coulomb's law to this formula. One point charge q in the presence of n point charges Qi The electrostatic potential energy, UE, of one point charge q in the presence of n point charges Qi, taking an infinite separation between the charges as the reference position, is: UE = (q/(4πε0)) Σi Qi/ri, where ri is the distance between the point charges q and Qi, and q and Qi are the assigned values of the charges. Electrostatic potential energy stored in a system of point charges The electrostatic potential energy UE stored in a system of N charges q1, q2, …, qN at positions r1, r2, …, rN respectively, is: UE = (1/2) Σi qi V(ri), where, for each i value, V(ri) is the electrostatic potential due to all point charges except the one at ri, and is equal to: V(ri) = (1/(4πε0)) Σj≠i qj/rij, where rij is the distance between qi and qj. Energy stored in a system of one point charge The electrostatic potential energy of a system containing only one point charge is zero, as there are no other sources of electrostatic force against which an external agent must do work in moving the point charge from infinity to its final location. A common question arises concerning the interaction of a point charge with its own electrostatic potential. Since this interaction doesn't act to move the point charge itself, it doesn't contribute to the stored energy of the system. Energy stored in a system of two point charges Consider bringing a point charge, q, into its final position near a point charge, Q1. The electric potential V(r) due to Q1 is V(r) = Q1/(4πε0r). Hence we obtain the electrostatic potential energy of q in the potential of Q1 as UE = qQ1/(4πε0r1), where r1 is the separation between the two point charges. 
Energy stored in a system of three point charges The electrostatic potential energy of a system of three charges should not be confused with the electrostatic potential energy of Q1 due to two charges Q2 and Q3, because the latter doesn't include the electrostatic potential energy of the system of the two charges Q2 and Q3. The electrostatic potential energy stored in the system of three charges is: UE = (1/(4πε0)) (Q1Q2/r12 + Q1Q3/r13 + Q2Q3/r23). Energy stored in an electrostatic field distribution in vacuum The energy density, or energy per unit volume, uE, of the electrostatic field of a continuous charge distribution is: uE = ½ε0E². Energy stored in electronic elements Some elements in a circuit can convert energy from one form to another. For example, a resistor converts electrical energy to heat. This is known as the Joule effect. A capacitor stores it in its electric field. The total electrostatic potential energy stored in a capacitor is given by UE = ½CV² = ½QV = Q²/(2C), where C is the capacitance, V is the electric potential difference, and Q the charge stored in the capacitor. The total electrostatic potential energy may also be expressed in terms of the electric field in the form UE = ½∫D·E dV, where D is the electric displacement field within a dielectric material and integration is over the entire volume of the dielectric. The total electrostatic potential energy stored within a charged dielectric may also be expressed in terms of a continuous volume charge density ρ, as UE = ½∫ρV dV, where integration is over the entire volume of the dielectric. These latter two expressions are valid only for cases when the smallest increment of charge tends to zero, such as dielectrics in the presence of metallic electrodes or dielectrics containing many charges. Note that a virtual experiment based on the energy transfer between capacitor plates reveals that an additional term should be taken into account when dealing with semiconductors for instance. 
While this extra energy cancels when dealing with insulators, the derivation predicts that it cannot be ignored as it may exceed the polarization energy. Notes References External links Forms of energy Voltage Electrostatics Electricity Electric power Electromagnetic quantities
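The pairwise sum UE = Σ over pairs of keqiqj/rij described above is straightforward to evaluate numerically. A minimal sketch for point charges in vacuum; positions and charge values in the example are illustrative:

```python
# Minimal sketch of the pairwise electrostatic energy sum described
# above, U = sum over pairs of k*q_i*q_j/r_ij, for point charges in
# vacuum. Positions in metres, charges in coulombs.
from itertools import combinations
from math import dist

K_E = 8.9875e9  # Coulomb constant, N·m²/C²

def electrostatic_energy(charges):
    """charges: list of (q, (x, y, z)) tuples; returns energy in joules."""
    return sum(
        K_E * q1 * q2 / dist(p1, p2)
        for (q1, p1), (q2, p2) in combinations(charges, 2)
    )

# Two 1 µC charges held 1 m apart store about 9 mJ:
pair = [(1e-6, (0.0, 0.0, 0.0)), (1e-6, (1.0, 0.0, 0.0))]
print(electrostatic_energy(pair))  # ≈ 8.99e-3 J
```

Because each pair is counted once by `combinations`, no factor of ½ is needed, matching the "sum over distinct pairs" form of the energy.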
Electric potential energy
[ "Physics", "Mathematics", "Engineering" ]
1,174
[ "Electromagnetic quantities", "Physical quantities", "Electrical systems", "Quantity", "Forms of energy", "Energy (physics)", "Power (physics)", "Physical systems", "Electric power", "Electrical engineering", "Voltage", "Wikipedia categories named after physical quantities" ]
1,976,413
https://en.wikipedia.org/wiki/Divergent%20evolution
Divergent evolution or divergent selection is the accumulation of differences between closely related populations within a species, sometimes leading to speciation. Divergent evolution is typically exhibited when two populations become separated by a geographic barrier (such as in allopatric or peripatric speciation) and experience different selective pressures that cause adaptations. After many generations and continual evolution, the populations become less able to interbreed with one another. The American naturalist J. T. Gulick (1832–1923) was the first to use the term "divergent evolution", with its use becoming widespread in modern evolutionary literature. Examples of divergence in nature are the adaptive radiation of the finches of the Galápagos, changes in mobbing behavior of the kittiwake, and the evolution of the modern-day dog from the wolf. The term can also be applied in molecular evolution, such as to proteins that derive from homologous genes. Both orthologous genes (resulting from a speciation event) and paralogous genes (resulting from gene duplication) can illustrate divergent evolution. Through gene duplication, it is possible for divergent evolution to occur between two genes within a species. Similarities between species that have diverged are due to their common origin, so such similarities are homologies. Causes Animals undergo divergent evolution for a number of reasons linked to changes in environmental or social pressures. This could include changes in the environment, such as access to food and shelter. It could also result from changes in predators, such as new adaptations, an increase or decrease in the number of active predators, or the introduction of new predators. Divergent evolution can also be a result of mating pressures such as increased competition for mates or selective breeding by humans. 
Distinctions Divergent evolution is a type of evolution and is distinct from convergent evolution and parallel evolution, although it does share similarities with the other types of evolution. Divergent versus convergent evolution Convergent evolution is the development of analogous structures that occurs in different species as a result of those two species facing similar environmental pressures and adapting in similar ways. It differs from divergent evolution as the species involved do not descend from a closely related common ancestor and the traits accumulated are similar. An example of convergent evolution is the development of flight in birds, bats, and insects, all of which are not closely related but share analogous structures allowing for flight. Divergent versus parallel evolution Parallel evolution is the development of a similar trait in species descending from a common ancestor. It is comparable to divergent evolution in that the species descend from a common ancestor, but the traits accumulated are similar due to similar environmental pressures, while in divergent evolution the traits accumulated are different. An example of parallel evolution is that certain arboreal frog species, 'flying' frogs, in both Old World families and New World families, have developed the ability of gliding flight. They have "enlarged hands and feet, full webbing between all fingers and toes, lateral skin flaps on the arms and legs, and reduced weight per snout-vent length".
Some finches had short beaks for eating nuts and seeds, other finches had long thin beaks for eating insects, and others had beaks specialized for eating cacti and other plants. He concluded that the finches evolved from a shared common ancestor that lived on the islands, and due to geographic isolation, evolved to fill the particular niche on each of the islands. This is supported by modern-day genomic sequencing. Divergent evolution in dogs Another example of divergent evolution is the origin of the domestic dog and the modern wolf, who both shared a common ancestor. Comparing the anatomy of dogs and wolves supports this claim as they have similar body shape, skull size, and limb formation. This is even more obvious in some breeds of dogs, such as malamutes and huskies, who appear even more physically and behaviorally similar. There is a divergent genomic sequence of the mitochondrial DNA of wolves and dogs dated to over 100,000 years ago, which further supports the theory that dogs and wolves have diverged from shared ancestry. Divergent evolution in kittiwakes Another example of divergent evolution is the behavioral changes in the kittiwake as opposed to other species of gulls. Ancestral and other modern-day species of gulls exhibit a mobbing behavior in order to protect their young due to nesting at ground level, where they are susceptible to predators. As a result of migration and environmental changes, the kittiwake nests solely on cliff faces. As a result, their young are protected from predatory reptiles, mammals, and birds who struggle with the climb and cliff-face weather conditions, and they do not exhibit this mobbing behavior. 
Cacti evolved to have areoles, succulent stems, and some have light leaves, with the ability to store water for up to months. The plants they diverged from either went extinct leaving little in the fossil record or migrated surviving in less arid climates. See also Cladistics Contingency (evolutionary biology) Devolution Chronospecies References Further reading Evolutionary biology Evolution of animals
Divergent evolution
[ "Biology" ]
1,165
[ "Evolutionary biology", "Animals", "Evolution of animals" ]
1,976,775
https://en.wikipedia.org/wiki/Simple%20Network%20Paging%20Protocol
Simple Network Paging Protocol (SNPP) is a protocol that defines a method by which a pager can receive a message over the Internet. It is supported by most major paging providers, and serves as an alternative to the paging modems used by many telecommunications services. The protocol was most recently described in RFC 1861. It is a fairly simple protocol that may run over TCP port 444 and sends out a page using only a handful of well-documented commands. Connecting and using SNPP servers It is relatively easy to connect to an SNPP server, only requiring a telnet client and the address of the SNPP server. The port 444 is standard for SNPP servers, and it is free to use from the sender's point of view. Maximum message length can be carrier-dependent. Once connected, a user can simply enter the commands to send a message to a pager connected to that network. For example, a PAGE command with the number of the device specifies the device to send the message to. The MESS command sets the text of the message to be sent to the text following it. The message is sent out by issuing the SEND command. The session is ended with the QUIT command, or continued with more sets of commands to send another message to a different device. The protocol also allows multiple PAGE commands for one message, stacked one after the other, allowing the same message to be sent to several devices on the network with one MESS and SEND command pair. References External links rfc-editor.org - RFC 1861 Network protocols
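The PAGE/MESS/SEND/QUIT sequence described above, including the stacked-PAGE form, can be sketched as a small command builder. The pager IDs and message in the example are hypothetical, and the sketch only assembles the lines a client would send over TCP port 444; it deliberately does not open a network connection:

```python
# Sketch of the SNPP command sequence described above. Pager IDs are
# hypothetical; this only assembles command lines, it does not connect
# to a server.

def snpp_commands(pager_ids, message):
    """Build the PAGE/MESS/SEND/QUIT sequence for one message."""
    lines = [f"PAGE {pid}" for pid in pager_ids]  # stacked PAGE commands
    lines.append(f"MESS {message}")
    lines += ["SEND", "QUIT"]
    return lines

for line in snpp_commands(["5551234", "5555678"], "Server room over temp"):
    print(line)
```

A real client would send each line followed by CRLF and check the numeric response code the server returns after each command.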
Simple Network Paging Protocol
[ "Technology" ]
320
[ "Computing stubs", "Computer network stubs" ]
875,176
https://en.wikipedia.org/wiki/Colemanite
Colemanite (Ca2B6O11·5H2O) or (CaB3O4(OH)3·H2O) is a borate mineral found in evaporite deposits of alkaline lacustrine environments. Colemanite is a secondary mineral that forms by alteration of borax and ulexite. It was first described in 1884 for an occurrence near Furnace Creek in Death Valley and was named after William Tell Coleman (1824–1893), owner of the mine "Harmony Borax Works" where it was first found. At the time, Coleman had alternatively proposed the name "smithite" instead after his business associate Francis Marion Smith. Uses Colemanite is an important ore of boron, and was the most important boron ore until the discovery of kernite in 1926. It has many industrial uses, like the manufacturing of heat resistant glass. Occurrence About 40% of the world's known colemanite reserves are at the Emet mine in western Turkey. Other important sources in Turkey are found at Bigadiç and Kestelek. See also List of minerals List of minerals named after people References External links Calcium minerals Inoborates Pentahydrate minerals Ferroelectric materials Monoclinic minerals Minerals in space group 14 Luminescent minerals Minerals described in 1884
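Since colemanite's value as an ore comes from its boron content, the formula Ca2B6O11·5H2O lends itself to a quick weight-fraction calculation. A back-of-envelope Python sketch using approximate atomic masses; real assay values vary with ore purity:

```python
# Back-of-envelope boron content of colemanite, Ca2B6O11·5H2O, using
# approximate atomic masses. Illustrative only.

MASS = {"Ca": 40.078, "B": 10.811, "O": 15.999, "H": 1.008}

def formula_mass(counts):
    return sum(MASS[el] * n for el, n in counts.items())

# Ca2B6O11·5H2O collapses to Ca2 B6 O16 H10 overall:
colemanite = {"Ca": 2, "B": 6, "O": 16, "H": 10}
total = formula_mass(colemanite)
boron_fraction = MASS["B"] * 6 / total
print(round(100 * boron_fraction, 1))  # ≈ 15.8 weight-% boron
```

The same bookkeeping expressed as B2O3 equivalent (the convention in the borate industry) gives roughly half the mineral's weight.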
Colemanite
[ "Physics", "Chemistry", "Materials_science" ]
270
[ "Physical phenomena", "Luminescence", "Ferroelectric materials", "Luminescent minerals", "Materials", "Electrical phenomena", "Hysteresis", "Matter" ]
875,513
https://en.wikipedia.org/wiki/Eccentric%20%28mechanism%29
In mechanical engineering, an eccentric is a circular disk (eccentric sheave) solidly fixed to a rotating axle with its centre offset from that of the axle (hence the word "eccentric", out of the center). It is used most often in steam engines, and used to convert rotary motion into linear reciprocating motion to drive a sliding valve or pump ram. To do so, an eccentric usually has a groove at its circumference closely fitted by a circular collar (eccentric strap). An attached eccentric rod is suspended in such a way that its other end can impart the required reciprocating motion. A return crank fulfills the same function except that it can only work at the end of an axle or on the outside of a wheel whereas an eccentric can also be fitted to the body of the axle between the wheels. Unlike a cam, which also converts rotary into linear motion at almost any rate of acceleration and deceleration, an eccentric or return crank can only impart an approximation of simple harmonic motion. On bicycles The term is also used to refer to the device often used on tandem bicycles with timing chains, single-speed bicycles with a rear disc brake or an internal-geared hub, or any bicycle with vertical dropouts and no derailleur, to allow slight repositioning, fore and aft, of a bottom bracket to properly tension the chain. They may be held in place by a built-in wedge, set screws threaded into the bottom bracket shell, or pinch bolts that tighten a split bottom bracket shell. As a standard-sized bottom bracket threads into the eccentric, an oversized bottom bracket shell is required to accommodate the eccentric. Gallery See also References Mechanisms (engineering)
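The "approximation of simple harmonic motion" mentioned above can be made concrete with the standard slider-crank relation: for throw e and rod length L, the rod-end position is x = e·cos(θ) + √(L² − e²·sin²(θ)), which reduces to pure SHM only in the limit L ≫ e. A short Python sketch with illustrative dimensions (not taken from any particular engine):

```python
# Sketch of why an eccentric drive only approximates simple harmonic
# motion. Throw and rod length below are illustrative values.
from math import cos, sin, sqrt, radians

def follower_position(theta_deg, e=0.02, rod=0.30):
    """Axial position (m) of the eccentric-rod end at crank angle theta."""
    t = radians(theta_deg)
    return e * cos(t) + sqrt(rod**2 - (e * sin(t)) ** 2)

# Deviation from the ideal-SHM value (rod treated as infinitely long,
# giving a constant 0.30 m offset) at 90 degrees of rotation:
print(follower_position(90.0) - 0.30)  # small shortfall, ≈ -6.7e-4 m
```

The deviation grows with the ratio e/L, which is why long eccentric rods were preferred where accurate valve timing mattered.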
Eccentric (mechanism)
[ "Engineering" ]
348
[ "Mechanical engineering", "Mechanisms (engineering)" ]
875,676
https://en.wikipedia.org/wiki/Colossal%20magnetoresistance
Colossal magnetoresistance (CMR) is a property of some materials, mostly manganese-based perovskite oxides, that enables them to dramatically change their electrical resistance in the presence of a magnetic field. The magnetoresistance of conventional materials enables changes in resistance of up to 5%, but materials featuring CMR may demonstrate resistance changes by orders of magnitude. This technology may find uses in disk read-and-write heads, allowing for increases in hard disk drive data density. However, so far it has not led to practical applications because it requires low temperatures and bulky equipment. History Initially discovered in mixed-valence perovskite manganites in the 1950s by G. H. Jonker and J. H. van Santen, a first theoretical description in terms of the double-exchange mechanism was given early on. In this model, the spin orientation of adjacent Mn moments is associated with kinetic exchange of eg-electrons. Consequently, alignment of the Mn spins by an external magnetic field causes higher conductivity. Relevant experimental work was done by Volger, Wollan and Koehler, and later on by Jirak et al. and Pollert et al. However, the double exchange model did not adequately explain the high insulating-like resistivity above the transition temperature. In the 1990s, work by R. von Helmolt et al. and Jin et al. initiated a large number of further studies. Although there is still no complete understanding of the phenomenon, there is a variety of theoretical and experimental work providing a deeper understanding of the relevant effects. Theory One prominent model is the so-called half-metallic ferromagnetic model, which is based on spin-polarized (SP) band structure calculations using the local spin-density approximation (LSDA) of the density functional theory (DFT) where separate calculations are carried out for spin-up and spin-down electrons. 
The half-metallic state is concurrent with the existence of a metallic majority spin band and a nonmetallic minority spin band in the ferromagnetic phase. This model is not the same as the Stoner model of itinerant ferromagnetism. In the Stoner model, a high density of states at the Fermi level makes the nonmagnetic state unstable. In SP calculations of covalent ferromagnets using DFT-LSDA functionals, the exchange-correlation integral takes the place of the Stoner parameter. The density of states at the Fermi level does not play a special role. A significant advantage of the half-metallic model is that it does not rely on the presence of mixed valency as does the double exchange mechanism and it can therefore explain the observation of CMR in stoichiometric phases like the pyrochlore Tl2Mn2O7. Microstructural effects in polycrystalline samples have also been investigated and it has been found that the magnetoresistance is often dominated by the tunneling of spin-polarized electrons between grains, resulting in the magnetoresistance having an intrinsic dependence on grain size. A fully quantitative understanding of the CMR effect remains elusive and it is still the subject of much current research. Early promises of the development of new CMR-based technologies have not yet come to fruition. See also Giant magnetoresistance References External links Magnetoresistance Quantum electronics Spintronics
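The comparison between "changes of up to 5%" and "orders of magnitude" is a statement about the magnetoresistance ratio. Conventions vary in the literature; the sketch below uses (R(H) − R(0))/R(0) with made-up resistance values for illustration:

```python
def mr_percent(r_zero_field, r_in_field):
    """Magnetoresistance ratio in percent, using (R(H) - R(0)) / R(0) * 100."""
    return (r_in_field - r_zero_field) / r_zero_field * 100.0

# Conventional material: a change of a few percent.
print(mr_percent(100.0, 95.0))   # -5.0

# CMR manganite near its transition: resistance falls by orders of magnitude,
# so this ratio approaches -100% (CMR papers often quote R(0)/R(H) instead).
print(mr_percent(100.0, 0.1))    # about -99.9
```

Because the ratio saturates at −100%, colossal effects are often reported instead as R(0)/R(H), which can reach factors of a thousand or more.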
Colossal magnetoresistance
[ "Physics", "Chemistry", "Materials_science" ]
695
[ "Magnetoresistance", "Physical quantities", "Quantum electronics", "Spintronics", "Quantum mechanics", "Magnetic ordering", "Condensed matter physics", "Nanotechnology", "Electrical resistance and conductance" ]
876,428
https://en.wikipedia.org/wiki/Divergent%20series
In mathematics, a divergent series is an infinite series that is not convergent, meaning that the infinite sequence of the partial sums of the series does not have a finite limit. If a series converges, the individual terms of the series must approach zero. Thus any series in which the individual terms do not approach zero diverges. However, convergence is a stronger condition: not all series whose terms approach zero converge. A counterexample is the harmonic series 1 + 1/2 + 1/3 + 1/4 + ⋯. The divergence of the harmonic series was proven by the medieval mathematician Nicole Oresme. In specialized mathematical contexts, values can be objectively assigned to certain series whose sequences of partial sums diverge, in order to give meaning to the divergence of the series. A summability method or summation method is a partial function from the set of series to values. For example, Cesàro summation assigns Grandi's divergent series 1 − 1 + 1 − 1 + ⋯ the value 1/2. Cesàro summation is an averaging method, in that it relies on the arithmetic mean of the sequence of partial sums. Other methods involve analytic continuations of related series. In physics, there are a wide variety of summability methods; these are discussed in greater detail in the article on regularization. History Before the 19th century, divergent series were widely used by Leonhard Euler and others, but often led to confusing and contradictory results. A major problem was Euler's idea that any divergent series should have a natural sum, without first defining what is meant by the sum of a divergent series. Augustin-Louis Cauchy eventually gave a rigorous definition of the sum of a (convergent) series, and for some time after this, divergent series were mostly excluded from mathematics. They reappeared in 1886 with Henri Poincaré's work on asymptotic series. In 1890, Ernesto Cesàro realized that one could give a rigorous definition of the sum of some divergent series, and defined Cesàro summation.
(This was not the first use of Cesàro summation, which was used implicitly by Ferdinand Georg Frobenius in 1880; Cesàro's key contribution was not the discovery of this method, but his idea that one should give an explicit definition of the sum of a divergent series.) In the years after Cesàro's paper, several other mathematicians gave other definitions of the sum of a divergent series, although these are not always compatible: different definitions can give different answers for the sum of the same divergent series; so, when talking about the sum of a divergent series, it is necessary to specify which summation method one is using. Examples 1 − 1 + 1 − 1 + ⋯ 1 − 2 + 3 − 4 + ⋯ 1 − 1 + 2 − 6 + 24 − 120 + ⋯ 1 − 2 + 4 − 8 + ⋯ 1 + 2 + 4 + 8 + ⋯ 1 + 1 + 1 + 1 + ⋯ 1 + 2 + 3 + 4 + ⋯ Theorems on methods for summing divergent series A summability method M is regular if it agrees with the actual limit on all convergent series. Such a result is called an Abelian theorem for M, from the prototypical Abel's theorem. More subtle are partial converse results, called Tauberian theorems, from a prototype proved by Alfred Tauber. Here partial converse means that if M sums the series Σ, and some side-condition holds, then Σ was convergent in the first place; without any side-condition such a result would say that M only summed convergent series (making it useless as a summation method for divergent series). The function giving the sum of a convergent series is linear, and it follows from the Hahn–Banach theorem that it may be extended to a summation method summing any series with bounded partial sums. This is called the Banach limit. This fact is not very useful in practice, since there are many such extensions, inconsistent with each other, and also since proving such operators exist requires invoking the axiom of choice or its equivalents, such as Zorn's lemma. They are therefore nonconstructive.
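Cesàro's averaging idea is easy to check numerically for Grandi's series 1 − 1 + 1 − 1 + ⋯ from the examples above: the partial sums oscillate between 1 and 0, but their running averages settle at 1/2. A sketch:

```python
from itertools import accumulate

def cesaro_means(terms):
    """(C,1) means: running averages of the partial sums of `terms`."""
    partial = list(accumulate(terms))
    running = list(accumulate(partial))
    return [s / (n + 1) for n, s in enumerate(running)]

grandi = [(-1) ** n for n in range(1000)]   # terms 1, -1, 1, -1, ...
means = cesaro_means(grandi)
print(means[-1])   # 0.5: the Cesàro sum of Grandi's series
```

The same routine applied to a convergent series reproduces its ordinary sum, which is the regularity property discussed above.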
The subject of divergent series, as a domain of mathematical analysis, is primarily concerned with explicit and natural techniques such as Abel summation, Cesàro summation and Borel summation, and their relationships. The advent of Wiener's tauberian theorem marked an epoch in the subject, introducing unexpected connections to Banach algebra methods in Fourier analysis. Summation of divergent series is also related to extrapolation methods and sequence transformations as numerical techniques. Examples of such techniques are Padé approximants, Levin-type sequence transformations, and order-dependent mappings related to renormalization techniques for large-order perturbation theory in quantum mechanics. Properties of summation methods Summation methods usually concentrate on the sequence of partial sums of the series. While this sequence does not converge, we may often find that when we take an average of larger and larger numbers of initial terms of the sequence, the average converges, and we can use this average instead of a limit to evaluate the sum of the series. A summation method can be seen as a function from a set of sequences of partial sums to values. If A is any summation method assigning values to a set of sequences, we may mechanically translate this to a series-summation method AΣ that assigns the same values to the corresponding series. There are certain properties it is desirable for these methods to possess if they are to arrive at values corresponding to limits and sums, respectively. Regularity. A summation method is regular if, whenever the sequence s converges to x, Equivalently, the corresponding series-summation method evaluates Linearity. A is linear if it is a linear functional on the sequences where it is defined, so that for sequences r, s and a real or complex scalar k. Since the terms of the series a are linear functionals on the sequence s and vice versa, this is equivalent to AΣ being a linear functional on the terms of the series. 
Stability (also called translativity). If s is a sequence starting from s0 and s′ is the sequence obtained by omitting the first value and subtracting it from the rest, so that , then A(s) is defined if and only if A(s′) is defined, and Equivalently, whenever for all n, then Another way of stating this is that the shift rule must be valid for the series that are summable by this method. The third condition is less important, and some significant methods, such as Borel summation, do not possess it. One can also give a weaker alternative to the last condition. Finite re-indexability. If a and a′ are two series such that there exists a bijection such that for all i, and if there exists some such that for all i > N, then (In other words, a′ is the same series as a, with only finitely many terms re-indexed.) This is a weaker condition than stability, because any summation method that exhibits stability also exhibits finite re-indexability, but the converse is not true. A desirable property for two distinct summation methods A and B to share is consistency: A and B are consistent if for every sequence s to which both assign a value, (Using this language, a summation method A is regular iff it is consistent with the standard sum Σ.) If two methods are consistent, and one sums more series than the other, the one summing more series is stronger. There are powerful numerical summation methods that are neither regular nor linear, for instance nonlinear sequence transformations like Levin-type sequence transformations and Padé approximants, as well as the order-dependent mappings of perturbative series based on renormalization techniques. Taking regularity, linearity and stability as axioms, it is possible to sum many divergent series by elementary algebraic manipulations. This partly explains why many different summation methods give the same answer for certain series. For instance, whenever the geometric series can be evaluated regardless of convergence.
More rigorously, any summation method that possesses these properties and which assigns a finite value to the geometric series must assign this value. However, when r is a real number larger than 1, the partial sums increase without bound, and averaging methods assign a limit of infinity. Classical summation methods The two classical summation methods for series, ordinary convergence and absolute convergence, define the sum as a limit of certain partial sums. These are included only for completeness; strictly speaking they are not true summation methods for divergent series since, by definition, a series is divergent only if these methods do not work. Most but not all summation methods for divergent series extend these methods to a larger class of sequences. Absolute convergence Absolute convergence defines the sum of a sequence (or set) of numbers to be the limit of the net of all partial sums , if it exists. It does not depend on the order of the elements of the sequence, and a classical theorem says that a sequence is absolutely convergent if and only if the sequence of absolute values is convergent in the standard sense. Sum of a series Cauchy's classical definition of the sum of a series defines the sum to be the limit of the sequence of partial sums . This is the default definition of convergence of a sequence. Nørlund means Suppose pn is a sequence of positive terms, starting from p0. Suppose also that If now we transform a sequence s by using p to give weighted means, setting then the limit of tn as n goes to infinity is an average called the Nørlund mean Np(s). The Nørlund mean is regular, linear, and stable. Moreover, any two Nørlund means are consistent. Cesàro summation The most significant of the Nørlund means are the Cesàro sums. Here, if we define the sequence pk by then the Cesàro sum Ck is defined by Cesàro sums are Nørlund means if , and hence are regular, linear, stable, and consistent. 
C0 is ordinary summation, and C1 is ordinary Cesàro summation. Cesàro sums have the property that if h > k, then Ch is stronger than Ck. Abelian means Suppose {λn} is a strictly increasing sequence tending towards infinity, and that λ0 ≥ 0. Suppose converges for all real numbers x > 0. Then the Abelian mean Aλ is defined as More generally, if the series for f only converges for large x but can be analytically continued to all positive real x, then one can still define the sum of the divergent series by the limit above. A series of this type is known as a generalized Dirichlet series; in applications to physics, this is known as the method of heat-kernel regularization. Abelian means are regular and linear, but not stable and not always consistent between different choices of λ. However, some special cases are very important summation methods. Abel summation If λn = n, then we obtain the method of Abel summation. Here where z = exp(−x). Then the limit of f(x) as x approaches 0 through positive reals is the limit of the power series for f(z) as z approaches 1 from below through positive reals, and the Abel sum A(s) is defined as Abel summation is interesting in part because it is consistent with but more powerful than Cesàro summation: whenever the latter is defined. The Abel sum is therefore regular, linear, stable, and consistent with Cesàro summation. Lindelöf summation If λn = n log(n), then (indexing from one) we have Then L(s), the Lindelöf sum, is the limit of f(x) as x goes to positive zero. The Lindelöf sum is a powerful method when applied to power series among other applications, summing power series in the Mittag-Leffler star. If g(z) is analytic in a disk around zero, and hence has a Maclaurin series G(z) with a positive radius of convergence, then in the Mittag-Leffler star. Moreover, convergence to g(z) is uniform on compact subsets of the star. Analytic continuation Several summation methods involve taking the value of an analytic continuation of a function.
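Abel summation can also be sketched numerically for 1 − 1 + 1 − 1 + ⋯: the power series Σ(−1)ⁿzⁿ equals 1/(1 + z) for |z| < 1, and its limit as z → 1⁻ is 1/2, agreeing with the Cesàro value. In the sketch below the truncation length and sample points are illustrative:

```python
def abel_value(coeffs, z):
    """Evaluate the power series sum(a_n * z**n) at a point 0 < z < 1
    by direct truncation."""
    return sum(a * z**n for n, a in enumerate(coeffs))

grandi = [(-1) ** n for n in range(10000)]   # coefficients 1, -1, 1, -1, ...
for z in (0.9, 0.99, 0.999):
    print(z, abel_value(grandi, z))          # tends to 1/2 as z -> 1-
```

For slowly decaying coefficients the truncation length must grow much faster than 1/(1 − z), which is why the sample points stop at z = 0.999 here.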
Analytic continuation of power series If Σanxn converges for small complex x and can be analytically continued along some path from x = 0 to the point x = 1, then the sum of the series can be defined to be the value at x = 1. This value may depend on the choice of path. One of the first examples of potentially different sums for a divergent series, using analytic continuation, was given by Callet, who observed that if then Evaluating at , one gets However, the gaps in the series are key. For for example, we actually would get , so different sums correspond to different placements of the 's. Another example of analytic continuation is the divergent alternating series which is a sum over products of -functions and Pochhammer's symbols. Using the duplication formula of the -function, it reduces to a generalized hypergeometric series Euler summation Euler summation is essentially an explicit form of analytic continuation. If a power series converges for small complex z and can be analytically continued to the open disk with diameter from to 1 and is continuous at 1, then its value at q is called the Euler or (E,q) sum of the series Σan. Euler used it before analytic continuation was defined in general, and gave explicit formulas for the power series of the analytic continuation. The operation of Euler summation can be repeated several times, and this is essentially equivalent to taking an analytic continuation of a power series to the point z = 1. Analytic continuation of Dirichlet series This method defines the sum of a series to be the value of the analytic continuation of the Dirichlet series at s = 0, if this exists and is unique. This method is sometimes confused with zeta function regularization. If s = 0 is an isolated singularity, the sum is defined by the constant term of the Laurent series expansion. 
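The zeta-function values used in the next section, such as ζ(−1) = −1/12, can be cross-checked through the Dirichlet eta function: ζ(s) = η(s)/(1 − 2^(1−s)) is a standard identity, and η(−1) = 1 − 2 + 3 − 4 + ⋯ is Abel summable to 1/4 because Σ(−1)ⁿ(n + 1)zⁿ = 1/(1 + z)² → 1/4 as z → 1⁻. A numerical sketch (the evaluation point and truncation are illustrative):

```python
def abel_eta_minus1(z, n_terms=100000):
    """Abel evaluation of eta(-1) = 1 - 2 + 3 - 4 + ... :
    sum of (-1)**n * (n+1) * z**n, which equals 1/(1+z)**2 for |z| < 1."""
    return sum((-1) ** n * (n + 1) * z**n for n in range(n_terms))

eta = abel_eta_minus1(0.999)               # close to 1/4
zeta_minus1 = eta / (1 - 2 ** (1 - (-1)))  # zeta(s) = eta(s)/(1 - 2**(1-s)), s = -1
print(zeta_minus1)                         # close to -1/12 = -0.0833...
```

This gives ζ(−1) = (1/4)/(1 − 4) = −1/12, the value assigned to 1 + 2 + 3 + ⋯ by zeta function regularization.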
Zeta function regularization If the series (for positive values of the an) converges for large real s and can be analytically continued along the real line to s = −1, then its value at s = −1 is called the zeta regularized sum of the series a1 + a2 + ... Zeta function regularization is nonlinear. In applications, the numbers ai are sometimes the eigenvalues of a self-adjoint operator A with compact resolvent, and f(s) is then the trace of A−s. For example, if A has eigenvalues 1, 2, 3, ... then f(s) is the Riemann zeta function, ζ(s), whose value at s = −1 is −, assigning a value to the divergent series . Other values of s can also be used to assign values for the divergent sums , and in general where Bk is a Bernoulli number. Integral function means If J(x) = Σpnxn is an integral function, then the J sum of the series a0 + ... is defined to be if this limit exists. There is a variation of this method where the series for J has a finite radius of convergence r and diverges at x = r. In this case one defines the sum as above, except taking the limit as x tends to r rather than infinity. Borel summation In the special case when J(x) = ex this gives one (weak) form of Borel summation. Valiron's method Valiron's method is a generalization of Borel summation to certain more general integral functions J. Valiron showed that under certain conditions it is equivalent to defining the sum of a series as where H is the second derivative of G and c(n) = e−G(n), and a0 + ... + ah is to be interpreted as 0 when h < 0. Moment methods Suppose that dμ is a measure on the real line such that all the moments are finite. If a0 + a1 + ... is a series such that converges for all x in the support of μ, then the (dμ) sum of the series is defined to be the value of the integral if it is defined. (If the numbers μn increase too rapidly then they do not uniquely determine the measure μ.) 
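The moment-method version of Borel summation can be sketched numerically for Grandi's series: the Borel transform Σ(−1)ⁿtⁿ/n! equals e^(−t), so the Borel sum is ∫₀^∞ e^(−t)·e^(−t) dt = 1/2, matching the Abel and Cesàro values. Below, the transform is summed termwise and the integral is approximated by a trapezoidal rule (the grid parameters are illustrative):

```python
import math

def borel_transform_grandi(t, n_terms=120):
    """Borel transform of Grandi's series: sum of (-1)^n * t^n / n! (= exp(-t))."""
    total, term = 0.0, 1.0
    for n in range(n_terms):
        total += term
        term *= -t / (n + 1)
    return total

def borel_sum_grandi(t_max=15.0, steps=3000):
    """Trapezoidal estimate of the Borel integral of exp(-t) * B(t) over
    [0, t_max]; the tail beyond t_max is negligible for this series."""
    h = t_max / steps
    vals = [math.exp(-i * h) * borel_transform_grandi(i * h)
            for i in range(steps + 1)]
    return h * (vals[0] / 2 + sum(vals[1:-1]) + vals[-1] / 2)

print(borel_sum_grandi())   # close to 0.5
```

The upper limit is kept modest because, in floating point, the alternating transform suffers catastrophic cancellation for large t.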
Borel summation For example, if dμ = e−x dx for positive x and 0 for negative x then μn = n!, and this gives one version of Borel summation, where the value of a sum is given by There is a generalization of this depending on a variable α, called the (B′,α) sum, where the sum of a series a0 + ... is defined to be if this integral exists. A further generalization is to replace the sum under the integral by its analytic continuation from small t. Miscellaneous methods BGN hyperreal summation This summation method works by using an extension to the real numbers known as the hyperreal numbers. Since the hyperreal numbers include distinct infinite values, these numbers can be used to represent the values of divergent series. The key method is to designate a particular infinite value that is being summed, usually , which is used as a unit of infinity. Instead of summing to an arbitrary infinity (as is typically done with ), the BGN method sums to the specific hyperreal infinite value labeled . Therefore, the summations are of the form This allows the usage of standard formulas for finite series such as arithmetic progressions in an infinite context. For instance, using this method, the sum of the progression is , or, using just the most significant infinite hyperreal part, . Hausdorff transformations . Hölder summation Hutton's method In 1812 Hutton introduced a method of summing divergent series by starting with the sequence of partial sums, and repeatedly applying the operation of replacing a sequence s0, s1, ... by the sequence of averages , , ..., and then taking the limit. Ingham summability The series a1 + ... is called Ingham summable to s if Albert Ingham showed that if δ is any positive number then (C,−δ) (Cesàro) summability implies Ingham summability, and Ingham summability implies (C,δ) summability. Lambert summability The series a1 + ... 
is called Lambert summable to s if If a series is (C,k) (Cesàro) summable for any k then it is Lambert summable to the same value, and if a series is Lambert summable then it is Abel summable to the same value. Le Roy summation The series a0 + ... is called Le Roy summable to s if Mittag-Leffler summation The series a0 + ... is called Mittag-Leffler (M) summable to s if Ramanujan summation Ramanujan summation is a method of assigning a value to divergent series used by Ramanujan and based on the Euler–Maclaurin summation formula. The Ramanujan sum of a series f(0) + f(1) + ... depends not only on the values of f at integers, but also on values of the function f at non-integral points, so it is not really a summation method in the sense of this article. Riemann summability The series a1 + ... is called (R,k) (or Riemann) summable to s if The series a1 + ... is called R2 summable to s if Riesz means If λn form an increasing sequence of real numbers and then the Riesz (R,λ,κ) sum of the series a0 + ... is defined to be Vallée-Poussin summability The series a1 + ... is called VP (or Vallée-Poussin) summable to s if where is the gamma function. Zeldovich summability The series is Zeldovich summable if See also Silverman–Toeplitz theorem Notes References . . . . . . . Werner Balser: "From Divergent Power Series to Analytic Functions", Springer-Verlag, LNM 1582, ISBN 0-387-58268-1 (1994). William O. Bray and Časlav V. Stanojević(Eds.): "Analysis of Divergence", Springer, ISBN 978-1-4612-7467-4 (1999). Alexander I. Saichev and Wojbor Woyczynski:"Distributions in the Physical and Engineering Sciences, Volume 1", Chap.8 "Summation of divergent series and integrals", Springer (2018). Mathematical series Summability methods Asymptotic analysis Summability theory
Divergent series
[ "Mathematics" ]
4,363
[ "Sequences and series", "Mathematical analysis", "Mathematical structures", "Series (mathematics)", "Calculus", "Summability methods", "Asymptotic analysis" ]
876,534
https://en.wikipedia.org/wiki/Full%20and%20faithful%20functors
In category theory, a faithful functor is a functor that is injective on hom-sets, and a full functor is surjective on hom-sets. A functor that has both properties is called a fully faithful functor. Formal definitions Explicitly, let C and D be (locally small) categories and let F : C → D be a functor from C to D. The functor F induces a function for every pair of objects X and Y in C. The functor F is said to be faithful if FX,Y is injective full if FX,Y is surjective fully faithful (= full and faithful) if FX,Y is bijective for each X and Y in C. Properties A faithful functor need not be injective on objects or morphisms. That is, two objects X and X′ may map to the same object in D (which is why the range of a full and faithful functor is not necessarily isomorphic to C), and two morphisms f : X → Y and f′ : X′ → Y′ (with different domains/codomains) may map to the same morphism in D. Likewise, a full functor need not be surjective on objects or morphisms. There may be objects in D not of the form FX for some X in C. Morphisms between such objects clearly cannot come from morphisms in C. A full and faithful functor is necessarily injective on objects up to isomorphism. That is, if F : C → D is a full and faithful functor and then . Examples The forgetful functor U : Grp → Set maps groups to their underlying set, "forgetting" the group operation. U is faithful because two group homomorphisms with the same domains and codomains are equal if they are given by the same functions on the underlying sets. This functor is not full as there are functions between the underlying sets of groups that are not group homomorphisms. A category with a faithful functor to Set is (by definition) a concrete category; in general, that forgetful functor is not full. The inclusion functor Ab → Grp is fully faithful, since Ab (the category of abelian groups) is by definition the full subcategory of Grp induced by the abelian groups. 
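The injectivity and surjectivity conditions are concrete enough to check mechanically for small finite categories. In the sketch below, a toy "category" is just a dict mapping object pairs to sets of named arrows, and a functor is an object map plus an arrow map; all the data are invented for illustration, and the composition and identity laws are not verified:

```python
def is_faithful(hom_C, arr_map):
    """F is faithful iff the arrow map is injective on every hom-set of C."""
    return all(len({arr_map[f] for f in fs}) == len(fs)
               for fs in hom_C.values())

def is_full(hom_C, hom_D, obj_map, arr_map):
    """F is full iff every arrow F(X) -> F(Y) in D is the image of some f: X -> Y."""
    return all({arr_map[f] for f in fs} == hom_D[(obj_map[X], obj_map[Y])]
               for (X, Y), fs in hom_C.items())

# Toy data: C has objects a, b; D has a single object *; F collapses everything.
hom_C = {("a", "a"): {"id_a"}, ("b", "b"): {"id_b"}, ("a", "b"): {"f", "g"}}
hom_D = {("*", "*"): {"id_*"}}
obj_map = {"a": "*", "b": "*"}
arr_map = {"id_a": "id_*", "id_b": "id_*", "f": "id_*", "g": "id_*"}

print(is_faithful(hom_C, arr_map))              # False: f and g collapse to id_*
print(is_full(hom_C, hom_D, obj_map, arr_map))  # True: every arrow of D is hit
```

This collapsing functor is full but not faithful, illustrating that the two conditions are independent.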
Generalization to (∞, 1)-categories The notion of a functor being 'full' or 'faithful' does not translate to the notion of a (∞, 1)-category. In an (∞, 1)-category, the maps between any two objects are given by a space only up to homotopy. Since the notion of injection and surjection are not homotopy invariant notions (consider an interval embedding into the real numbers vs. an interval mapping to a point), we do not have the notion of a functor being "full" or "faithful." However, we can define a functor of quasi-categories to be fully faithful if for every X and Y in C, the map is a weak equivalence. See also Full subcategory Equivalence of categories Notes References Functors
Full and faithful functors
[ "Mathematics" ]
662
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Mathematical relations", "Functors", "Category theory" ]
876,732
https://en.wikipedia.org/wiki/Poincar%C3%A9%20map
In mathematics, particularly in dynamical systems, a first recurrence map or Poincaré map, named after Henri Poincaré, is the intersection of a periodic orbit in the state space of a continuous dynamical system with a certain lower-dimensional subspace, called the Poincaré section, transversal to the flow of the system. More precisely, one considers a periodic orbit with initial conditions within a section of the space, which leaves that section afterwards, and observes the point at which this orbit first returns to the section. One then creates a map to send the first point to the second, hence the name first recurrence map. The transversality of the Poincaré section means that periodic orbits starting on the subspace flow through it and not parallel to it. A Poincaré map can be interpreted as a discrete dynamical system with a state space that is one dimension smaller than the original continuous dynamical system. Because it preserves many properties of periodic and quasiperiodic orbits of the original system and has a lower-dimensional state space, it is often used for analyzing the original system in a simpler way. In practice this is not always possible as there is no general method to construct a Poincaré map. A Poincaré map differs from a recurrence plot in that space, not time, determines when to plot a point. For instance, the locus of the Moon when the Earth is at perihelion is a recurrence plot; the locus of the Moon when it passes through the plane perpendicular to the Earth's orbit and passing through the Sun and the Earth at perihelion is a Poincaré map. It was used by Michel Hénon to study the motion of stars in a galaxy, because the path of a star projected onto a plane looks like a tangled mess, while the Poincaré map shows the structure more clearly. Definition Let (R, M, φ) be a global dynamical system, with R the real numbers, M the phase space and φ the evolution function. 
Let γ be a periodic orbit through a point p and S be a local differentiable and transversal section of φ through p, called a Poincaré section through p. Given an open and connected neighborhood of p, a function is called Poincaré map for the orbit γ on the Poincaré section S through the point p if P(p) = p P(U) is a neighborhood of p and P:U → P(U) is a diffeomorphism for every point x in U, the positive semi-orbit of x intersects S for the first time at P(x) Example Consider the following system of differential equations in polar coordinates, : The flow of the system can be obtained by integrating the equation: for the component we simply have while for the component we need to separate the variables and integrate: Inverting last expression gives and since we find The flow of the system is therefore The behaviour of the flow is the following: The angle increases monotonically and at constant rate. The radius tends to the equilibrium for every value. Therefore, the solution with initial data draws a spiral that tends towards the radius 1 circle. We can take as Poincaré section for this flow the positive horizontal axis, namely : obviously we can use as coordinate on the section. Every point in returns to the section after a time (this can be understood by looking at the evolution of the angle): we can take as Poincaré map the restriction of to the section computed at the time , . The Poincaré map is therefore : The behaviour of the orbits of the discrete dynamical system is the following: The point is fixed, so for every . Every other point tends monotonically to the equilibrium, for . Poincaré maps and stability analysis Poincaré maps can be interpreted as a discrete dynamical system. The stability of a periodic orbit of the original system is closely related to the stability of the fixed point of the corresponding Poincaré map. Let (R, M, φ) be a differentiable dynamical system with periodic orbit γ through p. Let be the corresponding Poincaré map through p. 
We define and then (Z, U, P) is a discrete dynamical system with state space U and evolution function By definition this system has a fixed point at p. The periodic orbit γ of the continuous dynamical system is stable if and only if the fixed point p of the discrete dynamical system is stable. The periodic orbit γ of the continuous dynamical system is asymptotically stable if and only if the fixed point p of the discrete dynamical system is asymptotically stable. See also Poincaré recurrence Hénon map Recurrence plot Mironenko reflecting function Invariant measure References External links Shivakumar Jolad, Poincare Map and its application to 'Spinning Magnet' problem, (2005) Dynamical systems Map
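The spiral example above can be reproduced numerically. The article's formulas did not survive extraction here, so as an assumption the sketch uses the standard textbook system ṙ = r(1 − r²), θ̇ = 1, which has the qualitative behaviour described (angle advancing at a constant rate, radius tending to 1). Since θ̇ = 1, the return time to the positive x-axis is exactly 2π, and iterating the flow for that time gives the Poincaré map:

```python
import math

# Assumed system (not from the article): dr/dt = r*(1 - r**2), dtheta/dt = 1.
# Section: the positive x-axis.  Since theta' = 1, the return time is 2*pi.

def poincare_map(r0, dt=1e-3):
    """One return to the section: integrate dr/dt = r(1 - r^2) for time 2*pi
    with forward Euler (accuracy is enough for a sketch)."""
    r = r0
    for _ in range(round(2 * math.pi / dt)):
        r += dt * r * (1 - r * r)
    return r

r = 0.1
for _ in range(5):
    r = poincare_map(r)
print(r)   # iterates of the Poincaré map approach the fixed point r = 1
```

The iterates converge rapidly to the fixed point r = 1, mirroring the asymptotic stability of the radius-1 periodic orbit in the continuous system.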
Poincaré map
[ "Physics", "Mathematics" ]
1,008
[ "Mechanics", "Dynamical systems" ]
876,849
https://en.wikipedia.org/wiki/Rocket%20Festival
The Rocket Festival (, ) is a merit-making ceremony traditionally practiced by ethnic Lao people at the beginning of the wet season in various villages and municipalities in Northeastern Thailand and Laos. The festivities typically include music and dance performances, competitive processions of floats, dancers, and musicians on the second day, and the competitive firing of homemade rockets on the third day. Local participants and sponsors take advantage of the occasion to enhance their social prestige, as is customary at traditional Buddhist folk festivals throughout Southeast Asia. Bun Bang Fai is celebrated in all provinces across Laos, but the most popular one used to be held along the bank of the Mekong river in the capital, Vientiane. However, because of considerable urbanization and safety measures, the festivals are now celebrated in nearby villages, including Naxon, Natham, Thongmang, Ban Kern, and Pakkagnoung. The festival in Thailand also includes special programs and specific local patterns like Bang Fai (parade dance) and a Beautiful Bang Fai float such as Yasothon on the third weekend of May, and continues to Suwannaphum District, Roi Et, on the first weekend of June, and Phanom Phrai District during the full moon of the seventh month in the Lunar year's calendar each year. The Bang Fai festival is not only found in Isan, Northeastern Thailand, North Thailand, and Laos, but also in Amphoe Sukhirin, Narathiwat. History These Buddhist festivals are presumed to have evolved from pre-Buddhist fertility rites held to celebrate and encourage the coming of the rains, from before the 9th-century invention of black powder. This festival displays some earthy elements of Lao folklore. Bun Bang Fai originates from ancient times when ethnic Lao people believed in many gods and is mentioned in tales, such as 'The Tale of Pha Daeng–Nang Ai' () and 'The Tale of Phaya Khankhak' ().
In the literature of Laos, such stories refer to the firing of rockets to the heavens to communicate with the God of Rain and persuade him to send the rains to the earth in a timely fashion for cultivation. Early European explorers who passed through Laos in the 1800s recorded witnessing the rocket festivals in the country. Louis de Carné, in 1866, described a celebration in southern Laos where bamboos loaded with powder went off, producing violent explosions. Furthermore, Etienne Aymonier, visiting Laos in 1883, described Bang Phoai (Bang Fai) as strong tubes of bamboo fretted with cords, or rattans, in which powder was stuffed. The powder was manufactured in the country by mixing ten parts of saltpeter (potassium nitrate) with three parts of wood charcoal and one and a half parts of sulphur. These rockets were then deposited on trestles at the pagoda. The rockets were paraded around the temple before they were launched the next day. The celebration occurred in May or June. Anthropology Professor Charles F. Keyes advises, "In recognition of the deep-seated meaning of certain traditions for the peoples of the societies of mainland Southeast Asia, the rulers of these societies have incorporated some indigenous symbols into the national cultures that they have worked to construct in the postcolonial period". Giving the "Bun Bang Fai or fire rocket festival of Laos" as one example, he adds that it remains "far more elaborate in the villages than in the cities". In Laos Bun Bang Fai is held over the sixth lunar month, usually around May and June, coinciding with planting and the beginning of the rainy season. Several months before the festival, an organizing committee is formed in each future host village to discuss the festival. Weeks before the festival, bamboo rockets are built and decorated by monks and villagers. The festival usually lasts two days and begins early in the morning with the associated religious rituals performed by the monks in the temple. 
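Aymonier's recorded recipe (10 : 3 : 1.5 saltpeter/charcoal/sulphur, by weight) can be restated as mass fractions. A minimal sketch; the part values come from the text above, while the function name is our own:

```python
# Illustrative only: convert Aymonier's 1883 black-powder recipe
# (parts by weight, as reported in the text) into mass fractions.
def mass_fractions(parts):
    total = sum(parts.values())
    return {name: round(100 * p / total, 1) for name, p in parts.items()}

recipe = {"saltpeter": 10.0, "charcoal": 3.0, "sulphur": 1.5}
print(mass_fractions(recipe))
# {'saltpeter': 69.0, 'charcoal': 20.7, 'sulphur': 10.3}
```

The resulting fractions (roughly 69/21/10) are close to, though slightly richer in charcoal than, the standard modern black-powder formulation of 75% saltpeter, 15% charcoal, and 10% sulphur.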
Early in the afternoon, a Buddhist procession starts in which villagers carrying money trees circle the central ordination hall, in which there is a Buddha statue, three times in a clockwise direction to the sound of traditional music. The money trees are then offered to the monks in a Buddhist ritual believed to garner religious merit. Afterwards, rockets from all involved villages are displayed in the court of the temple, followed by a celebration with traditional music and dance that can last until the early morning of the next day. The second day begins with a morning ceremony of food offerings from villagers to the monks in the assembly hall of the temple. The food usually includes sticky rice, cakes, and other sweets that the faithful line up to place in the monks' alms bowls during the sermon. In addition, other food dishes are portioned out in small bowls and offered to the monks on rattan trays. The religious leader of the village ritually presents the food to the monks by reciting the five precepts of Buddhism. The monks, in return, offer the teachings of the Buddha by chanting sutras and sermons. During the sermon, the faithful address prayers to their ancestors and perform the Yaat Nam, which consists of having water blessed by a monk before pouring it, drop by drop, on the earth. After the ceremony, a meal is shared by all participants. The faithful believe these offerings grant a long life to anyone who gives with a serene heart. The religious ceremony is followed by a street parade through the village, with pickup trucks displaying the rockets to the sound of the khene, cymbals, and long drums. Teams of contestants dance and chant traditional folk songs, with the team's leader chanting first and the others repeating. Contestants are divided into groups based on the size of their rocket. The competition begins with the firing of the rockets skyward. For each rocket category, scores are given based on how high and far the rocket flies. 
Builders of failed rockets are thrown in a muddy pond and forced to drink Lao-Lao. In the United States and France Following the end of the Vietnam War in 1975, tens of thousands of Lao people left the country as refugees and resettled in other countries, most of them in the United States and France. The Lao built Lao Buddhist temples to serve as cultural centers. Traditional Lao holidays such as Lao New Year and Bun Bang Fai are celebrated in addition to the official host countries' holidays. In France, home to a sizeable Laotian community, Bun Bang Fai is celebrated in Paris and other cities, and has also been celebrated in Bretignolles. In the United States, there are more than forty Wat Lao. The celebration in both the United States and France lasts two days and proceeds as in Laos, beginning with a religious ceremony followed by a display and parade of rockets in the Wat, with the traditional Lao dance of Soeng Bang Fai performed to Soeng Bang Fai music. Unlike in Laos, however, the procession does not conclude with the firing of the full-sized rockets, as they are not allowed to be launched for safety reasons. Instead, only small, handcrafted rockets are launched. In the National Air and Space Museum Frank H. Winter, curator of the Rocketry Division of the National Air and Space Museum, stated: "Lao Rocket is special and unique that has a thousand years of traditional celebration associated with this great looking rocket. It would be wonderful to have a Lao Rocket on display in the National Air and Space Museum so that the public can learn from it." In 2005, Lao Bang Fai was chosen to be displayed at the National Air and Space Museum in Chantilly, Virginia. The deputy abbot of Wat Lao Buddhavong in Virginia acknowledged that "this event is historic and brings recognition and visibility that all Laotians can be proud of". Bun Bang Fai was launched in 1994 by the Lao community and has been celebrated each year since. 
The religious ceremonies are performed inside the museum on the campus of the University of Washington. In Northeastern Thailand Villages may have floats conveying government messages. They may also include fairs. In recent years, the Tourism Authority of Thailand has promoted the events, particularly in the Thai provinces of Nong Khai and Yasothon. In the past, Bun Bang Fai celebrations were held in Yasothon, Roi Et, Kalasin, Srisaket, Mahasarakham, and Udon Thani. Yasothon's festival Since its separation from Ubon Ratchathani on March 1, 1972, Yasothon has staged its Rocket Festival annually over Friday, Saturday, and Sunday in the middle of May. The principal theme of any Hàe Bang Fai parade is the legend of Phadaeng and Nang Ai. Many floats depict the couple and their retinue. Hàe typically end in a wat, where dancers and accompanying musicians may further compete in traditional folk dance. All groups prominently display the names of their major sponsors. Recalling the fertility-rite origins of the festival, parade ornaments and floats often have phallic imagery. The festivities also include cross-dressing, both cross-sex and cross-generational, and alcohol. Perhaps the most popular beverage is a neutral grain spirit called Sura, more generally known as Lao-Lao (Lao whiskey) in Laos and Lao Khao (white alcohol) in Thailand. Sato may also be on offer. On May 9, 1999, a 120 kg Lan rocket exploded 50 meters above ground, just two seconds after launch, killing four people and wounding 11. Bang Fai (the rockets) Bang Fai skyrockets are black-powder bottle rockets. Tiny bottle rockets are so called because they may be launched from a bottle. In the case of the similar-appearing Bang Fai, also spelled Bong Fai, the 'bottle' is a bong: a section of bamboo culm used as a container (and only colloquially as a pipe for smoking marijuana). Bang Fai are related to the Chinese fire arrow and are made from bamboo bongs. 
Most contemporary ones, however, are enclosed in PVC piping, making them less dangerous by standardizing their sizes and black-powder charges. Baking or boiling a bong kills insect eggs that would otherwise hatch in dead bamboo and eat it. Vines tie long bamboo tails to launching racks. The time it takes for the exhaust to burn through the vines (usually) allows a motor to build up to full thrust; then the tails impart in-flight stability. Ignition comes from a burning fuse or electric match. Bang Fai come in various sizes, competing in several categories. Small ones are called Bang Fai Noi. Larger categories are designated by the counting words for 10,000, 100,000, and 1,000,000: Meun, Saen, and the largest Bang Fai, the Lan. Bang Fai Lan are nine metres long and charged with 120 kg of black powder. These may reach altitudes reckoned in kilometres, and travel dozens of kilometres downrange. Competing rockets are scored for the apparent height, distance, and beauty of the vapour trail. A few include skyrocket pyrotechnics. A few also include parachutes for tail assemblies. Folktales Nang Ai, Phadaeng, and Phangkhi Nang Ai, or in full, Nang Ai Kham, is the queen of pageants and is known as the most beautiful of girls; Phadaeng is her champion. He, an outsider, wants her, but he must win a rocket festival tournament to win her, and he becomes part of a love triangle. Phangkhi and Nang Ai have been fated by their karma to be reborn throughout many past existences as soulmates. Stories about the couple, however, say that in many past existences she has been a dutiful wife and that he only wanted to satisfy himself. She becomes fed up and prays never to be paired with him again. Nang Ai is reborn as the daughter of Phraya Khom (Lord Khmer; but even if her father was a Cambodian overlord, Nang Ai Kham is still the genuine article), while Phangkhi is reborn as the son of Phaya Nak, the Grand Nāga who rules the Deeps. 
Phangkhi is not invited to the tournament, and Phadaeng's rocket fizzles. Nang Ai's uncle is the winner, so her father calls the whole thing off, which is considered to be a very bad omen. Phangkhi shapeshifts into a white squirrel to spy on Nang Ai; she has him killed by a royal hunter. Phangkhi's flesh transforms into meat equal to 8,000 cartloads. Nang Ai and many of her countrymen eat this flesh, and Phaya Nak vows to allow no one who has eaten of the flesh of his son to remain living. Aroused from the Deeps, he and his watery myrmidons rise and turn the land into a vast swamp. Nagas personify waters running both above and below ground, and nagas run amok are rivers in spate; all of Isan is flooded. Phadaeng flees the flood with Nang Ai on his white stallion, Bak Sam, but she is swept off by the Naga's tail, not to be seen again. Bak Sam is seen in parades sporting his stallion's equipage; legend says that he dug a lick called Lam Huay Sam. Phadaeng escapes. His ghost then raises an army of the spirits of the air to wage war on the Nagas. The war continues until both sides are exhausted, and the dispute is submitted to King Wetsawan, king of the North, for arbitration. His decision: the cause of the feud has long since been forgotten, and all disputants must let bygones be bygones. The legend is retold in many regional variations. One 3000-word poem translated into English "is especially well known to the Thai audience, having been designated as secondary school supplementary reading by the Thai Ministry of Education, with publication in 1978. There is even a Thai popular song about the leading characters." The original was written in a Lao-Isan verse form called Khong saan; it has sexual innuendo, puns, and double entendre. Keyes, on page 48, wrote that "Phra Daeng Nang Ai" is a version of the legend of Kaundinya, the legendary founder of Funan, and Soma, the daughter of the king of the Nāga. Keyes also wrote that such legends may prove a valuable source of toponyms. 
Toad King Some say that Bang Fai are launched to bring rain, as in the Tourism Authority of Thailand's account. However, a reading of the underlying myth, as presented in Yasothon and Nong Khai, implies the opposite: the rains bring on the rockets. Their version of the myth: When the Lord Buddha was in his Bodhisatta incarnation as King of the Toads, Phaya Khang Khok, and married to Udon Khuruthawip (Northern Partner-Knowing-Continent), his sermons drew everyone away from Phaya Thaen, King of the Sky. Phaya Thaen then withheld life-giving rains from the earth for seven years, seven months, and seven days. Acting against the advice of the Toad King, Phaya Naga, King of the Nāga (and personification of the Mekong), declared war on Phaya Thaen and lost. Persuaded by Phaya Naga to assume command, King Toad enlisted the aid of termites to build mounds reaching to the heavens, of venomous scorpions and centipedes to attack Phaya Thaen's feet, and of hornets for air support. Previous attempts at aerial warfare against Phaya Thaen in his own element had proved futile; but even the Sky must come down to the ground. On the ground, the war was won, and Phaya Thaen sued for peace. Rockets fired in the air at the end of the dry season are not meant to threaten Phaya Thaen, but to serve as a reminder to him of his treaty obligations made to Lord Bodhisatta Phaya Khang Khok on the ground. Phaya Nak was given the duty of Honor Guard at most Thai and Lao temples. After the harvest of the resulting crops, Wow thanoo, man-sized kites with a strung bow, are staked out in the winter monsoon winds. They are also called Túi-tiù, from the sound of the bowstring singing in the wind all through the night, which signals Phaya Thaen that he has sent enough rain. All participants (including a wow thanoo) were depicted on murals on the front of the former Yasothon Municipal Bang Fai Museum, but were removed when it was remodeled as a learning center. 
An English-language translation of a Thai report on Bang Fai Phaya Nark (Naga fireballs) at Nong Khai gives essentially the same myth (without the hornets and the wow) from Thai folklore. Etymology Bun (merit) is from Pali puñña: merit, meritorious action, virtue; and Sanskrit पुण्य, puṇya: meritorious, good, or virtuous works. Bang (alternative spelling: บ้อง, bong) is a cutting, specifically of bamboo. Fai is fire. Prapheni is from Sanskrit परंपर, parampara, meaning an uninterrupted series, regular series, or succession, 'handed down in regular succession'; compare Pali paraṁparā: series, tradition. In popular culture The 2006 Thai martial arts film Kon Fai Bin depicts the Rocket Festival. Set in 1890s Siam, the movie's hero, Jone Bang Fai ("Fireball Bandit"), is an expert at building the traditional bamboo rockets, which he uses in conjunction with Muay Thai martial arts to defeat his opponents. Thai political protests in April 2010 similarly had Red Shirts firing rockets in downtown Bangkok. Vang Vieng's Bun Bang Fai was featured in the 2013 film The Rocket. In the film, a young boy named Ahlo wanted to enter the rocket-making contest, hoping to win a big cash prize and prove that he was not cursed. See also Chinese Fire Arrow for Flying Firelances, bamboo tubes stuffed with black powder; the tube was ignited and used as a flamethrower. Black powder Gift economy Mysorean rockets – military weapons Phaya Naga Phi Ta Khon ghost festival – includes a rocket festival Phra Lak Phra Lam Skyrocket Thai folklore References and notes Further reading Gray, Paul and Ridout, Lucy. Rough Guide to Thailand. Rough Guides, 2004. 
External links Tourist Authority of Thailand (Archived in 2005) Videos A video and a brief description about the Rocket Festival in Northeast Thailand (Issan, Esarn) can be seen from http://www.spatz-darmstadt.de (section "Asian Cultures" / "Ethnographic Videos" / "Thai,Lao Cultural Festivals" ) Buddhist festivals in Thailand Buddhist holidays Buddhist festivals in Laos Fireworks competitions Isan culture Thai folklore Rocketry Tourist attractions in Yasothon province Fireworks events in Asia Animals in Buddhism
Rocket Festival
[ "Engineering" ]
3,983
[ "Rocketry", "Aerospace engineering" ]
877,127
https://en.wikipedia.org/wiki/Fastener
A fastener (US English) or fastening (UK English) is a hardware device that mechanically joins or affixes two or more objects together. In general, fasteners are used to create non-permanent joints; that is, joints that can be removed or dismantled without damaging the joining components. Steel fasteners are usually made of stainless steel, carbon steel, or alloy steel. Other methods of joining materials, some of which may create permanent joints, include: crimping, welding, soldering, brazing, taping, gluing, cement, or the use of other adhesives. Force may also be used, such as with magnets, vacuum (like suction cups), or even friction (like sticky pads). Some types of woodworking joints make use of separate internal reinforcements, such as dowels or biscuits, which in a sense can be considered fasteners within the scope of the joint system, although on their own they are not general-purpose fasteners. Furniture supplied in flat-pack form often uses cam dowels locked by cam locks, also known as conformat fasteners. Fasteners can also be used to close a container such as a bag, a box, or an envelope; or they may involve keeping together the sides of an opening of flexible material, attaching a lid to a container, etc. There are also special-purpose closing devices, e.g., a bread clip. Items like a rope, string, wire, cable, chain, or plastic wrap may be used to mechanically join objects; however, because they have additional common uses, they are not generally categorized as fasteners. Likewise, hinges and springs may join objects together, but they are ordinarily not considered fasteners because their primary purpose is to allow articulation rather than rigid affixment. Industry In 2005, it was estimated that the United States fastener industry runs 350 manufacturing plants and employs 40,000 workers. The industry is strongly tied to the production of automobiles, aircraft, appliances, agricultural machinery, commercial construction, and infrastructure. 
More than 200 billion fasteners are used per year in the U.S., 26 billion of these by the automotive industry. The largest distributor of fasteners in North America is the Fastenal Company. Materials Three major steels are used for industrial fasteners: stainless steel, carbon steel, and alloy steel. The major grades used in stainless steel fasteners are the 200 series, 300 series, and 400 series. Titanium, aluminium, and various alloys are also common materials of construction for metal fasteners. In many cases, special coatings or plating may be applied to metal fasteners to improve their performance characteristics by, for example, enhancing corrosion resistance. Common coatings/platings include zinc, chrome, and hot-dip galvanizing. Applications When selecting a fastener for industrial applications, it is important to consider a variety of factors. The threading, the applied load on the fastener, the stiffness of the fastener, and the number of fasteners needed should all be taken into account. When choosing a fastener for a given application, it is important to know the specifics of that application to help select the proper material for the intended use. Factors that should be considered include: Accessibility Environment, including temperature, water exposure, and potentially corrosive elements Installation process Materials to be joined Reusability Weight restrictions Types A threaded fastener has internal or external screw threads. The most common types are the screw, nut and bolt, possibly involving washers. Other more specialized types of threaded fasteners include captive threaded fasteners, studs, threaded inserts, and threaded rods. 
Other types of fastener include: anchor bolt; batten; bolt; screw; bolt snap; brass fastener; buckle; button; cable tie; cam; captive fastener; clamp (or cramp), including the hose clamp; clasps and shackles, such as the bolt snap, carabiner, circle cotter, and lobster clasp; cleco; clips, including the binder clip, bulldog clip, crocodile clip, circlip, clothespin, hairpin clip, paper clip, and terry clip; clutch; drawing pin (thumbtack); flange; frog; grommet; hook-and-eye closure; hook and loop fastener (Velcro); latch; nail; rivets, including solid/round head rivets, semi-tubular rivets, and the blind (pop) rivet; pegs, such as the tent peg; PEM nut; pins, including the clevis fastener, cotter, dowel, linchpin, R-clip, safety pin, split pin, spring pin, and tapered pin; retaining rings, such as the circlip and e-ring; rivet-like fasteners, such as the well nut; rock bolt; rubber band (or bands of other materials); screw anchor; snap fastener; snap-fit; staple; stitches; strap; tie; toggle bolt; tolerance rings; treasury tag; twist tie; wedge anchor; and zipper. Common fastener head styles Common head styles include: Flat head fasteners: Ideal for applications where aesthetics are a priority, flat head fasteners sit flush with the surface, offering a clean appearance. Round head fasteners: With a rounded top, round head fasteners provide a larger bearing surface, suitable for sheet metal or thin plastic assemblies. Pan head fasteners: Pan head fasteners combine a slightly flattened top with a larger bearing surface, offering a streamlined appearance for aesthetic applications. Socket head fasteners: Designed for high torque applications, socket head fasteners are driven with a hex key, reducing the risk of cam-out. Hex head fasteners: Known for their high torque capacity, hex head fasteners are easily driven with a spanner or wrench, ideal for heavy-duty applications. Square head fasteners: Offering increased wrenching area and reduced risk of rounding off, square head fasteners are used in high torque applications. 
Flange head fasteners: Integrating a flange for a larger bearing surface, flange head fasteners distribute clamping force without damaging the material. Wing head fasteners: Featuring protruding "wings" for hand tightening, wing head fasteners are suitable for applications requiring frequent adjustments. T-slot fasteners: Designed for T-slotted aluminium extrusions, T-slot fasteners provide a secure and adjustable connection for framing and guarding systems. Standards & traceability There are multiple standards bodies for fasteners, including the US Industrial Fasteners Institute and the European Industrial Fastener Institute. ASME B18 standards on certain fasteners The American Society of Mechanical Engineers (ASME) publishes several standards on fasteners. Some are: B18.3 Socket Cap, Shoulder, Set Screws, and Hex Keys (Inch Series) B18.6.1 Wood Screws (Inch Series) B18.6.2 Slotted Head Cap Screws, Square Head Set Screws, And Slotted Headless Set Screws (Inch Series) B18.6.3 Machine Screws, Tapping Screws, and Metallic Drive Screws (Inch Series) B18.18 Quality Assurance For Fasteners B18.24 Part Identifying Number (PIN) Code System Standard for B18 Fastener Products For military hardware American screws, bolts, and nuts were historically not fully interchangeable with their British counterparts, and therefore would not fit British equipment properly. This, in part, helped lead to the development of numerous United States Military Standards and specifications for the manufacturing of essentially any piece of equipment that is used for military or defense purposes, including fasteners. World War II was a significant factor in this change. A key component of most military standards is traceability. Put simply, hardware manufacturers must be able to trace their materials to their source, and provide traceability for their parts going into the supply chain, usually via bar codes or similar methods. 
This traceability is intended to help ensure that the right parts are used and that quality standards are met in each step of the manufacturing process; additionally, substandard parts can be traced back to their source. History In 1988, the United States House Energy Subcommittee on Oversight and Investigations investigated counterfeit, mismarked, and substandard fasteners and found them in extensive use in critical civilian and military infrastructure. As a result, it proposed the Fastener Quality Assurance Act of 1988 (HR5051), which would require laboratory testing of fasteners in critical-use applications prior to sale. See also Safety wire Taiwan International Fastener Show References Further reading
Fastener
[ "Engineering" ]
1,712
[ "Construction", "Fasteners" ]
877,353
https://en.wikipedia.org/wiki/Oral%20polio%20vaccine%20AIDS%20hypothesis
According to a now-discredited hypothesis, the AIDS pandemic originated from live polio vaccines prepared in chimpanzee tissue cultures, accidentally contaminated with simian immunodeficiency virus and then administered to up to one million Africans between 1957 and 1960 in experimental mass vaccination campaigns. Data analyses in molecular biology and phylogenetic studies contradict the OPV AIDS hypothesis; consequently, scientific consensus regards the hypothesis as disproven. A 2004 Nature article has described the hypothesis as "refuted". Background: polio vaccines Two vaccines are used throughout the world to combat poliomyelitis. The first, a polio vaccine developed by Jonas Salk, is an inactivated poliovirus vaccine (IPV), consisting of a mixture of three wild, virulent strains of poliovirus, grown in a type of monkey kidney tissue culture (Vero cell line), and made noninfectious by formaldehyde treatment. The second vaccine, an oral polio vaccine (OPV), is a live-attenuated vaccine, produced by the passage of the virus through non-human cells at a sub-physiological temperature. The passage of virus produces mutations within the viral genome, and hinders the virus's ability to infect nervous tissue. Both vaccines have been used for decades to induce immunity to polio, and to stop the spread of the infection. However, OPV has several advantages; because the vaccine is introduced in the gastrointestinal tract, the primary site of poliovirus infection and replication, it closely mimics a natural infection. OPV also provides long lasting immunity, and stimulates the production of polio neutralizing antibodies in the pharynx and gut. Hence, OPV not only prevents paralytic poliomyelitis, but also, when given in sufficient doses, can stop a threatening epidemic. Other benefits of OPV include ease of administration, low cost and suitability for mass vaccination campaigns. 
Oral polio vaccine Oral polio vaccines were developed in the late 1950s by several groups, including those led by Albert Sabin, Hilary Koprowski and H. R. Cox. A poliovirus type 1 strain called SM was reported in 1954. A less virulent version of the SM strain was reported by Koprowski in 1957. The name of the vaccine strain was "CHAT" after "Charlton", the name of the child who was the donor of the precursor virus. The Sabin, Koprowski and Cox vaccines were clinically tested in millions of individuals and found to be safe and effective. Because monkey trials found fewer side effects with the Sabin vaccine, in the early 1960s, the Sabin vaccine was licensed in the US and its use supported by the World Health Organization. Between 1957 and 1960, Koprowski's vaccine was administered to roughly one million people in the Belgian territories, now the Democratic Republic of the Congo, Rwanda and Burundi. In 1960, Koprowski wrote in the British Medical Journal, "The Belgian Congo trials have enlarged considerably and ... more vaccination campaigns organized in several provinces of the Belgian Congo are raising the number of vaccinated individuals into the millions."(p. 90) Koprowski and his group also published a series of detailed reports on the vaccination of 76,000 children under the age of five (and European adults) in the area of Leopoldville (now Kinshasa) in Belgian Congo from 1958 to 1960; these reports begin with an overview, next a review of safety and efficacy, then a 21-month follow-up and final report. Vaccine production In the 1950s, before dangers inherent to the process were well controlled, seed stocks of vaccines were occasionally transported to distant regions, then standard tissue culture methods were used to amplify the virus at local production facilities. 
Biologic products, chiefly kidney cells for cultures and blood serum for media, were sometimes harvested from local primates and used in the production process if wild or captive populations of appropriate species were available. In South Africa, African green monkey tissue was used to amplify the Sabin vaccine. In French West Africa and Equatorial Africa, baboons were used to amplify a vaccine from the Pasteur Institute. In Poland, the CHAT vaccine was amplified using Asian macaques. Development of hypothesis In 1987, Blaine Elswood contacted journalist Tom Curtis about a "bombshell story" on OPV and AIDS. Curtis published an article on the OPV AIDS hypothesis in Rolling Stone in 1992. In response, Hilary Koprowski sued Rolling Stone and Tom Curtis for defamation. The magazine published a clarification which praised Koprowski. Rolling Stone was ordered to pay US$1 in damages whilst incurring around US$500,000 in legal fees for its own defense. A few scientists, notably the evolutionary biologist W. D. Hamilton, thought the hypothesis required serious investigation, but they received little support from the scientific community. For example, in 1996, Science refused to publish a letter Hamilton sent to it in which he replied to a 1992 Koprowski letter. Hamilton maintained his position, saying in 1999, "This theory, rather sadly, has gone from strength to strength. It's not proven by any means, but it's looking very strong." Hamilton was also supportive of journalist Edward Hooper, who detailed the hypothesis in his 1999 book, The River. Hamilton wrote the foreword for the book and made two expeditions to the Congo between December 1999 and January 2000 to collect evidence on the OPV hypothesis. None of the over 60 urine and faecal samples collected by Hamilton contained SIV. Still, Hamilton used his prestige within the Royal Society to promote a discussion meeting about the OPV hypothesis. 
In this meeting, held six months after Hamilton's death, in September 2000, Hooper further expanded on his allegations, although these claims were later rebutted by some of the scientists who were present at the meeting. In 2001, Hilary Koprowski responded by making a detailed rebuttal of the points made in the book, also in a talk to the Royal Society. In 2004, The Origins of AIDS, a French TV documentary strongly supportive of the OPV hypothesis, appeared on several television stations around the world. In 2003, Hooper published additional statements that he believed supported his hypothesis in an article in the London Review of Books. These included accounts of an interview with Jacques Kanyama, a virology technician at the lab in Stanleyville (the Laboratoire Médical de Stanleyville (LMS)) responsible for testing the CHAT vaccine and performing the initial set of vaccinations, who was reported to have said that batches of CHAT had been produced on site by Paul Osterrieth. In addition, Philip Elebe, a microbiology technician, was claimed to have said that tissue cultures were being produced from Lindi chimpanzees. Osterrieth has denied these claims, stating that this work would not have been possible in that laboratory. In his book, Hooper also stated that Gaston Ninane was involved in using chimpanzee cells to produce vaccine in Congo. Ninane responded to this allegation by stating that he could "categorically deny" ever having tried to make tissue cultures from chimpanzee cells. The people involved in vaccine production and distribution from America state that no vaccine was prepared locally in Congo and that only the CHAT vaccine from America was used; among them is Barbara Cohen, the technician who was responsible for running the American laboratory that produced this vaccine. Scientific investigation In an August 1992 letter published in Science, Koprowski repudiated the OPV AIDS hypothesis, pointing to multiple errors of fact in its assertions. 
In October 1992, Science ran a story titled "Panel Nixes Congo Vaccine as AIDS source", describing the findings of an independent panel which found each proposed step in the OPV-AIDS hypothesis "problematic". The story concluded:

The oldest confirmed sample of human tissue that shows the presence of HIV-1 is an archival sample of plasma, known as ZR59, collected from an anonymous donor in the city of Leopoldville, Belgian Congo (now Kinshasa, Democratic Republic of the Congo) in 1959; retrospective genetic analysis found it to be most closely related to subtype D strains. In 2008, partial HIV viral sequences were identified from a specimen of lymph node collected from an adult female, also in Kinshasa, in 1960. This specimen, named DRC60, was around 88% similar to ZR59, but was found to be most closely related to subtype A HIV-1 strains. These specimens are significant not only because they are the oldest specimens of the virus known to cause AIDS, but because they show that the virus already had an extensive amount of genetic diversity in 1960.

In 2000, the Royal Society held a meeting to discuss data on the origin of AIDS; the OPV AIDS hypothesis was a central topic of discussion. At this meeting, three independent labs released the results of tests on the remaining stocks of Koprowski's vaccine, which Edward Hooper had demanded in The River. The tests confirmed Koprowski's contention that his vaccine was made from monkey, rather than chimpanzee, kidney, and found no evidence of SIV or HIV contamination. Additional epidemiologic and phylogenetic data presented at the conference undermined other aspects of the OPV AIDS hypothesis. According to a report in Science, Hooper "did not challenge the results; he simply dismissed them." In 2001, three articles published in Nature examined various aspects of the OPV AIDS hypothesis, as did an article published in Science.
In every case, the studies' findings argued strongly against any link between the polio vaccine and AIDS. The evidence cited included multiple independent studies that dated the introduction of HIV-1 to humans as occurring between 1915 and 1941, probably in the 1930s. These results were confirmed by a later study using samples from the 1960s that also found that the epidemic began between 1908 and 1930, and by a study which showed that although recombination amongst viruses makes dating less precise, it does not significantly bias estimates in either direction (it does not introduce a systematic error). The author of one of the studies, evolutionary biologist Edward Holmes of Oxford University, commented in light of the new evidence: "Hooper's evidence was always flimsy, and now it's untenable. It's time to move on." An accompanying editorial in Nature concluded:

The possibility that chimpanzees found near Kisangani in the Democratic Republic of Congo (formerly Stanleyville) were, indirectly, the true source of HIV-1 was directly addressed in a 2004 study published in Nature. Here, the authors found that while SIV was present in chimpanzees in the area, the strain of SIV infecting these chimpanzees was phylogenetically distinct from all strains of HIV, providing direct evidence that these particular chimps were not the source of HIV in humans.

Current oral polio vaccine campaign in Africa

Rumours that polio vaccines are unsafe disrupted the longstanding effort of the WHO and UN to achieve poliomyelitis eradication worldwide through use of the oral polio vaccine of Albert Sabin, which virtually all medical authorities consider safe and effective. If this long-term public-health goal could be achieved, poliomyelitis would follow smallpox as the second eradicated infectious human disease.
The OPV AIDS hypothesis relates only to the historical origin of AIDS, and its proponents have accepted the safety of the modern polio vaccines, but rumors based on a misunderstanding of the hypothesis exist, and those rumors are blamed in part for the recent failure to eliminate polio in Nigeria. By 2003, cases of poliomyelitis had been reduced to just a small number in isolated regions of West Africa, with sporadic cases elsewhere. However, the disease has since resurged in Nigeria and in several other nations of Africa, which epidemiologists trace to refusals by certain local populations to allow their children to be administered the Sabin oral vaccine. The expressed concerns of local populations often relate to fears that the vaccine might induce sterility, and it seems that debate over the OPV AIDS hypothesis has fueled additional fears. Since 2003, these fears have spread among some in the Muslim community, with Datti Ahmed, of the Supreme Council for Sharia in Nigeria, stating that: As evidence of the success of polio eradication efforts, vaccine-derived polioviruses (cVDPVs) nowadays cause more cases of polio paralysis than the wild-type virus itself in many places, such as the Congo. Polio has also resurged in areas of Pakistan, India and Bangladesh.

See also

AIDS origins opposed to scientific consensus
AIDS origin
SV40 – a scientifically accepted case of a monkey virus contaminating polio vaccine (inactivated poliovirus vaccine (IPV) only)
Zoonosis

References

External links

Where did HIV come from? Questions and Answers, from the United States Centers for Disease Control

1950s in Africa
1960 in Africa
AIDS origin hypotheses
Health-related conspiracy theories
Vaccine controversies
Oral polio vaccine AIDS hypothesis
https://en.wikipedia.org/wiki/GE%20HealthCare
GE Healthcare Technologies, Inc., organized in Delaware and headquartered in Chicago, Illinois, focuses on health technology. The company, which stylizes its own name as GE HealthCare, operates four divisions: Medical imaging, which includes molecular imaging, computed tomography, magnetic resonance, women's health screening and X-ray systems; Ultrasound; Patient Care Solutions, which is focused on remote patient monitoring, anesthesia and respiratory care, diagnostic cardiology, and infant care; and Pharmaceutical Diagnostics, which manufactures contrast agents and radiopharmaceuticals. The company's primary customers are hospitals and health networks. In 2023, the company received 42% of its revenue in the United States and 13% of its revenue from China, where the company faces increasing competition. The company operates in more than 100 countries. GE Healthcare has major regional operations in Buc (a suburb of Paris), France; Helsinki, Finland; Kraków, Poland; Budapest, Hungary; Yizhuang (a suburb of Beijing), China; Hino and Tokyo, Japan; and Bangalore, India. Its biggest R&D center is in Bangalore, India, built at a cost of $50 million. In May 2022, General Electric formed the company to own its healthcare division; it completed the corporate spin-off of the company in January 2023.

History

19th century

The company traces its roots to the Victor Electric Company, founded in 1893 in a basement by Charles F. Samms and Julius B. Wantz, previously employees of the assembly lines at the Knapp Electrical Works and Midland Electric Co. and then in their early 20s. They initially focused on supplies for the dental industry. At the time, they were a six-person operation. By 1896, one year after Wilhelm Röntgen's discovery of X-rays, Victor Electric entered the X-ray machine business. The business grew rapidly, and so, in 1896, the company moved into new premises three times the original size. This did not solve the space problems, and the company made three office moves by 1899.
By 1896, the company also made electrostatic generators for exciting X-ray tubes and electrotherapeutic devices.

20th century

By 1903, Victor Electric had outgrown its facilities at 418 Dearborn St. in Chicago and bought two floors of a building at 55 Market Street, Chicago. This was again only a temporary stop; by 1910 it was too small, and the firm moved again in 1911 to a building at the corner of Jackson Blvd. and Damen Avenue. This was the first permanent home of the Victor Electric Co. The company stayed there for 35 years and during this time gradually acquired all the space in the building and several around it. In 1916, the company merged with three companies: Scheidel Western, Snook-Roentgen, and MacAlaster & Wiggin. Victor's two founders had key roles in the new firm; Charles F. Samms was company president and Julius B. Wantz was vice-president of manufacturing and engineering. In 1920, Victor was acquired by General Electric and was renamed the Victor X-Ray Corporation. At that time, it was the largest manufacturer of X-ray tubes. The merger of the Victor subsidiary and General Electric closed on July 28, 1926, and the company became the "General Electric X-Ray Corporation". The merger brought renewed vitality to the organization, and Victor entered the foreign market with equipment sold and serviced in nearly 70 countries. In 1930, the Victor name was phased out from all branding; however, advertisements did mention "formerly Victor X-Ray Corporation".

Use of X-rays in industry for non-destructive testing of war materials increased during World War II. X-rays were broadly used as a medical tool for military services. As the war ended, the GE X-Ray Corporation continued to grow. Greater production capacity and greater expertise were needed in the core business of building X-ray tubes.
Since the tubes were made from hand-blown glass, the decision was made to move the company 90 miles north to Milwaukee, Wisconsin, in order to tap into the enormous amount of glass-blowing talent in Milwaukee's beer-brewing industry. In 1947, the company moved from Jackson Blvd. in Chicago to a site in the city of West Milwaukee, which had been used for building turbochargers during the war. The street was renamed Electric Avenue. In 1951, the corporate structure was dissolved and the name changed to the General Electric X-Ray Department. This new name lasted less than 10 years as the department divested itself of its industrial X-ray business, widened its medical business, and took on the name of the GE Medical Systems Department. One of the reasons for the name Medical Systems was the increase in the electro-medical business, which began in 1961 with the introduction of patient-monitoring equipment. By 1967, modular equipment had been developed which was soon popular in cardiac and intensive care units. Early in 1960, pacemakers were developed in Corporate Research & Development in Schenectady, New York, and in 1969 the Standby Pacemaker was developed. In 1968, the Biomedical Business Section opened its first factory on Edgerton Avenue. Late in 1970, a surgical package was introduced, and in 1971, equipment to monitor blood gases during surgery was introduced. Later in 1971, Biomedical opened a 9,000-square-meter administration and engineering building opposite its factory, and in 1972 the section was renamed the Cardio-Surgical Product Section. With the growth of its medical business, the General Electric Company upgraded the department to the Medical Systems Division in 1971. Also in 1971, a major expansion programme was started and the Waukesha factory was planned. Work started in July 1972 and was completed in 1973. In 1974, work on CT was started, and the first CT machine was installed in 1976. In June 1980, the company acquired the CT scanner business of EMI.
In 1981, GE acquired the Picker Service organization in the U.K. In 1982, the company set up a joint venture with Yokogawa Electric, which changed its name to GE Healthcare Japan Corporation in 2009. In 1983, GE Medical started investing heavily in magnetic resonance imaging (MRI) technology, investing nearly US$1 billion in a new plant in Waukesha. It developed the MR Signa, which became very successful. The MRI magnet plant in Florence, South Carolina, was opened a short time later, giving GE its own magnet production; it underwent a $40 million expansion in 2017. In 1983, the company split its dental and medical lines; Gendex became its dental imaging division. In 1985, GE acquired Technicare from Johnson & Johnson. Originally named Ohio Nuclear (and, after another merger in 1979, Ohio Nuclear Unirad), the name was changed to Technicare in 1982. Technicare (with headquarters in Cleveland, Ohio) had been producing a range of rotate-stationary CTs with an installed base in the thousands, as well as some X-ray diagnostic equipment and a nascent MRI product range. Up to this time, the Medical Systems Division had simply been divided into domestic and international, but in 1987 it reorganized into the three "poles" of America, Europe and Pacific. In 1988, GE Medical Europe merged with CGR, a medical equipment supplier based in France, to form General Electric CGR Medical Systems. The European headquarters were moved from Hammersmith (UK) to Buc, Yvelines, near Paris. GE Healthcare was incorporated in 1994. In 1994, it changed the name in Europe from GE-CGR back to General Electric Medical Systems. In September 1995, the company acquired Resonex, an MRI maker based in Fremont, California. In 1996, Jeff Immelt was named CEO of the company; he became CEO of GE in 2000. In April 1998, the company acquired Diasonics Vingmed from Elbit Medical Imaging of Haifa, Israel, expanding its ultrasound imaging business.
In September 1998, the company acquired Marquette Medical Systems for $808 million. In November 1998, the company acquired the nuclear and MR businesses of Elscint (then a division of Elron, based in Haifa, Israel) for $100 million. In September 2000, the company acquired the remaining 50% of ELGEMS, the joint venture formed with Elscint in 1997.

21st century

In 2001, the company acquired San Francisco, California–based CT maker Imatron for $210 million. Imatron produced an electron beam tomography (EBT) scanner that performs imaging applications used by physicians specializing in cardiology, pulmonology and gastroenterology. The Imatron business was later incorporated into GE Healthcare's Diagnostic Imaging business segment. In March 2002, the company acquired MedicaLogic, creator of the former Logician, an ambulatory electronic medical records system, for approximately $32 million. In April 2002, GE Healthcare acquired Visualization Technology, a manufacturer of intra-operative medical devices and related products for use in minimally invasive image-guided surgery, based in Boston. In January 2003, the company acquired Millbrook Corporation, maker of Millbrook Practice Manager, a billing and scheduling system for doctors' offices. GE Healthcare IT later merged the two products into one, although the stand-alone EMR product is still available and in development. In 2003, GE Healthcare acquired Instrumentarium, including its Datex-Ohmeda division, a producer, manufacturer, and supplier of anesthesia machines and mechanical ventilators. To satisfy regulatory concerns in the United States and in Europe, GE Healthcare was forced to divest Instrumentarium's Ziehm Imaging mobile C-arm business, as well as its Spacelabs patient-monitoring unit. In April 2004, the company acquired Amersham plc.
Also in 2004, GE Healthcare, along with other healthcare companies, built a research reactor for neutron and unit cell research at GE's European Research Center near Garching (outside of Munich), Germany. It is the only such reactor currently in operation. In 2006, Sir William Castell resigned as CEO to become chairman of the Wellcome Trust, a charity that fosters and promotes human and animal research in the United Kingdom. Former GE Medical Systems CEO Joe Hogan then became CEO. In January 2006, the company acquired IDX Systems Corporation for $1.2 billion. IDX was folded into GE Healthcare Integrated IT Solutions, which specializes in clinical information systems and healthcare revenue management. In February 2008, GE Healthcare acquired Whatman plc, a global supplier of filtration products and technologies, for £363 million. In July 2008, Joseph Hogan announced his intent to leave his post as CEO of GE Healthcare to take the role of CEO at ABB. John Dineen, head of GE's Transportation division since 2005, was named CEO. In March 2010, the company acquired MedPlexus. The company then offered its first electronic medical record product on a software-as-a-service platform. In April 2010, the company announced it was investing €3 million in the Technology Research for Independent Living Centre (TRIL), an Irish centre that seeks to enhance independence for elderly people through technological innovation. In July 2015, the company partnered with the 2015 CrossFit Games to provide athletes with mobile imaging equipment. In January 2016, the company announced the move of its global headquarters to Chicago, effective early 2016. In June 2017, Kieran Murphy was named CEO of the company, and former CEO John L. Flannery was named CEO of GE. In April 2018, the company sold several healthcare information technology assets to Veritas Capital for $1.05 billion. In June 2018, GE first announced plans to spin off GE Healthcare.
However, the plan was delayed after GE sold its biopharma business to Danaher Corporation for $21.4 billion. In January 2021, the company acquired Prismatic Sensors AB, focused on deep-silicon detector technology. In May 2021, the company acquired Zionexa, focused on biomarkers for the detection of breast cancer. In July 2021, the company integrated technology from Spectronic Medical to create artificial intelligence-based software. In November 2021, General Electric announced it would split into three publicly traded companies, with GE Healthcare being one of the three. The spin-off of GE Healthcare was completed on 4 January 2023. In December 2021, the company acquired BK Medical from Altaris Capital Partners for $1.45 billion. In February 2023, GE Healthcare acquired Caption Health, an artificial intelligence medical technology manufacturer headquartered in San Mateo, California, for $150 million. In July 2024, the company acquired the clinical artificial intelligence business of Intelligent Ultrasound for $51 million.

Criticism

Gadolinium-based contrast agents

In 1994, GE Healthcare ignored the advice of its safety experts to proactively restrict the use of its MRI contrast agent, Omniscan. It also tried to conceal evidence of its risks by telling its researchers to "burn the data", as revealed during a trial brought by consumers debilitated by its accumulation in multiple organs. In 2009, GE Healthcare sued a radiologist at the University of Copenhagen Hospital for defamation after he linked the use of Omniscan to gadolinium-induced fibrosis, which 20 of his patients (one of whom died) suffered from after its administration. In 2017, GE Healthcare opposed the European Medicines Agency (EMA) suspending the use of Omniscan (along with other linear agents), despite evidence of the high cytotoxicity of gadodiamide and its likelihood to dissociate after deposition.
In a 2020 study, another MRI dye, Clariscan, was retained at higher levels in the cerebrum, cerebellum, kidney and liver of rats than Dotarem, the original drug on which it is based. Although the authors did not provide a possible explanation, differences in the chelation process of the gadolinium ions (Guerbet's process being patented) or in quality assurance could be causes of the increased retention in vivo.

Low taxes paid in the United Kingdom

According to a report in The Independent in January 2016, the company received more money back in tax benefits (£1.6 million) in the UK in the previous 12 years than it paid. Its UK operations are all ultimately owned by a holding company in the Netherlands. Tax paid was £250,000, 1.7% of its £14.3 million profit. The company employs 22,000 people in the UK.

Overbilling the government

In 2011, the company agreed to pay $30 million to settle allegations that a company it acquired in 2004, Amersham Health, violated the False Claims Act of 1863 by knowingly providing false or misleading information to Medicare, causing the government to reimburse Myoview at artificially inflated rates. By maximizing the number of times a vial of the solution was used, health care providers billed Medicare multiple times for the product. A whistleblower received $5.1 million in the settlement.

Reliability of imaging system

The company supplies a cloud-based imaging system to the East Midlands Radiology Consortium, which was described in October 2017 as breaking down, so that medical images had to be sent between hospitals by taxi.

Operations

GE Healthcare has a range of products and services that include medical imaging and information technologies, electronic medical records, medical diagnostics, and patient monitoring systems. GE Healthcare consists of several primary business units:

Detection and Guidance Solutions (DGS), covering devices for X-ray, bone densitometry and digital mammography.

Healthcare Digital, headquartered in Chicago, Illinois, US.
Healthcare Digital provides clinical and financial information technology, such as departmental IT products, RIS/PACS (Radiology Information Systems/Picture Archiving and Communication Systems), CVIS (Cardiovascular Information Systems) and cloud-based products, as well as revenue cycle management and practice applications. The GE Health Cloud is its latest AWS-based cloud offering, with case exchange and multi-disciplinary team (MDT) capabilities. Additional internal co-development partnerships include protocol management and automated protocol selection capabilities. Formerly IDX, GE Healthcare's IT business has its global headquarters in Barrington, Illinois, with major offices in South Burlington, Vermont; Boston; Seattle; and London, along with satellite offices both within and outside the United States.

Patient Care Solutions (PCS), led by Tom Westrick, headquartered in Milwaukee, Wisconsin, US. Provides tools for critical care, ECG, anesthesia delivery, neonatal intensive care, labor and delivery, preoperative and home care.

Magnetic Resonance (MR), led by Jie Xue, headquartered in Waukesha (near Milwaukee), Wisconsin, US. A provider of magnetic resonance (MR) imaging systems.

Molecular Imaging & Computed Tomography (MICT), led by Jean-Luc Procaccini (previously Michael J. Barber), headquartered in Waukesha (near Milwaukee), Wisconsin, US. Provides computed tomography (CT), positron emission tomography (PET) and molecular imaging technologies.

Surgery, headquartered in Salt Lake City, Utah, US. Provides tools and technologies for cardiac, surgical and interventional care, from cardiac catheterization labs, diagnostic monitoring systems and data management systems to mobile fluoroscopic imaging systems, navigation and 3D visualization instrumentation.

Ultrasound (US), led by Roland Rott. Produces ultrasound products for general imaging, cardiology, women's health, point of care and primary care, as well as related IT tools.
Global Services, led by Luiz Verzegnassi, headquartered in the Greater Milwaukee area, Wisconsin, US.

References

External links

1994 establishments in Illinois
Amersham
Companies based in Buckinghamshire
Companies listed on the Nasdaq
Corporate spin-offs
Electronics companies of the United States
Health care companies established in 1994
Magnetic resonance imaging
Manufacturing companies based in Chicago
Medical imaging equipment manufacturers
Medical technology companies of the United States
Pharmaceutical companies of the United States
Radiopharmaceuticals
https://en.wikipedia.org/wiki/Photothermal%20effect
The photothermal effect is a phenomenon associated with electromagnetic radiation: photoexcitation of a material results in the production of thermal energy (heat). It is sometimes used during treatment of blood vessel lesions, laser resurfacing, laser hair removal and laser surgery.

External links

Quantities, Terminology, and Symbols in Photothermal and Related Spectroscopies – research paper from IUPAC
Amazing Nano Materials: Photothermal and Photoacoustic Effects

Photochemistry
https://en.wikipedia.org/wiki/National%20ITS%20Architecture
The National ITS Architecture is a guideline of the United States government for future transportation systems. It was established in 1994 by the United States Department of Transportation and was funded at a cost of $20 million. The main goal was the definition of a standard national interoperable intelligent transportation system (ITS) structure.

External links

National ITS Architecture Home Page

Intelligent transportation systems
https://en.wikipedia.org/wiki/Non-explosive%20demolition%20agents
Non-explosive demolition agents are chemicals that are an alternative to explosives and gas-pressure blasting products in demolition, mining, and quarrying. To use non-explosive demolition agents in demolition or quarrying, holes are drilled in the base rock as they would be for use with conventional explosives. A slurry mixture of the non-explosive demolition agent and water is poured into the drill holes. Over the next few hours, the slurry expands, cracking the rock in a pattern somewhat like the cracking that would occur from conventional explosives.

Non-explosive demolition agents offer several advantages: they are silent and do not produce vibration the way a conventional explosive would. In some applications, conventional explosives are more economical than non-explosive demolition agents. In many countries non-explosive demolition agents are available without restriction, unlike explosives, which are highly regulated. The active ingredient is typically calcium oxide ("burnt lime"), which is typically mixed with Portland cement and modifiers. These agents are much safer than explosives, but they have to be used as directed to avoid steam explosions during the first few hours after being placed. Many patents describe non-explosive demolition agents containing CaO, SiO2 and/or cement.

See also

Plug and feather
Explosive material
Mining

References

Building engineering
Chemical engineering
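The expansive action described above comes from the hydration of the calcium oxide in the slurry. As a rough sketch (the reaction and the approximate handbook densities and enthalpy below are general chemistry, not figures taken from this article):

```latex
\mathrm{CaO(s)} + \mathrm{H_2O(l)} \longrightarrow \mathrm{Ca(OH)_2(s)},
\qquad \Delta H \approx -65\ \mathrm{kJ\,mol^{-1}}

V_m(\mathrm{CaO}) \approx \frac{56\ \mathrm{g\,mol^{-1}}}{3.3\ \mathrm{g\,cm^{-3}}} \approx 17\ \mathrm{cm^3\,mol^{-1}},
\qquad
V_m(\mathrm{Ca(OH)_2}) \approx \frac{74\ \mathrm{g\,mol^{-1}}}{2.2\ \mathrm{g\,cm^{-3}}} \approx 34\ \mathrm{cm^3\,mol^{-1}}
```

The solid phase thus roughly doubles in molar volume on hydration; confined in a drill hole, this slow expansion is what generates the rock-cracking pressure, and the exothermic heat of the same reaction is why misuse can cause the steam blow-outs mentioned above.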
https://en.wikipedia.org/wiki/Orbitrap
In mass spectrometry, the Orbitrap is an ion trap mass analyzer consisting of an outer barrel-like electrode and a coaxial inner spindle-like electrode that traps ions in an orbital motion around the spindle. The image current from the trapped ions is detected and converted to a mass spectrum by taking the Fourier transform of the time-domain signal to obtain the frequencies of the harmonic ion oscillations, which are then converted to masses.

History

The concept of electrostatically trapping ions in an orbit around a central spindle was developed by Kenneth Hay Kingdon in the early 1920s. The Kingdon trap consists of a thin central wire and an outer cylindrical electrode. A static applied voltage results in a radial logarithmic potential between the electrodes. In 1981, Knight introduced a modified outer electrode that included an axial quadrupole term that confines the ions on the trap axis. Neither the Kingdon nor the Knight configurations were reported to produce mass spectra. In 1986, Professor Yuri Konstantinovich Golikov in the USSR developed a theory of ion motion in the quadro-logarithmic potential and filed for patents in the USSR for its use in a time-of-flight analyzer. Golikov, leading a team at the St. Petersburg State Pedagogical University's Radiophysics Faculty, laid the theoretical groundwork for Orbitrap technology as one of the inventors in USSR Inventor's Certificate No. 1247973 in 1986. Golikov later remarked, "Based on my ideas, analytical instruments with record parameters were built, but unfortunately not in Russia, but abroad." Contrary to popular belief, Alexander Makarov is not the original inventor of the quadro-logarithmic potential, which had been known since the 1950s. Reflecting on his early interaction with Golikov, Alexander Makarov recalled, "As a fifth-year student at MIPT, I entered one of the numerous rooms at the Polytechnic Institute, where I was met by Yuri Konstantinovich Golikov.
I was holding excerpts (photocopies were not so accessible then) from the author's certificate USSR № 1247973, to which I have referred in all my works on the Orbitrap™ analyzer since then." Alexander Makarov's effort to commercialize the Orbitrap analyzer at the end of the 1990s required a number of innovations, such as image-current detection and the C-trap for ion injection, as well as other technology improvements, which resulted in the commercial introduction of this analyzer by Thermo Fisher Scientific as part of the hybrid LTQ Orbitrap instrument in 2005.

Principle of operation

Trapping

In the Orbitrap, ions are trapped because their electrostatic attraction to the inner electrode is balanced by their inertia. Thus, ions cycle around the inner electrode on elliptical trajectories. In addition, the ions also move back and forth along the axis of the central electrode, so that their trajectories in space resemble helices. Due to the properties of the quadro-logarithmic potential, their axial motion is harmonic, i.e. it is completely independent not only of the motion around the inner electrode but also of all initial parameters of the ions except their mass-to-charge ratios m/z. Its angular frequency is ω = √(k/(m/z)), where k is the force constant of the potential, similar to the spring constant.

Injection

In order to inject ions from an external ion source, the field between the electrodes is first reduced. As ion packets are injected tangentially into the field, the electric field is increased by ramping the voltage on the inner electrode. Ions get squeezed towards the inner electrode until they reach the desired orbit inside the trap. At that moment, ramping is stopped, the field becomes static, and detection can start. Each packet contains a multitude of ions of different velocities spread over a certain volume. These ions move with different rotational frequencies but with the same axial frequency.
This means that ions of a specific mass-to-charge ratio spread into rings which oscillate along the inner spindle. Proof-of-principle of the technology was carried out using the direct injection of ions from an external laser desorption and ionization ion source. This method of injection works well with pulsed sources such as MALDI but cannot be interfaced to continuous ion sources like electrospray. All commercial Orbitrap mass spectrometers utilize a curved linear trap for ion injection (C-trap). By rapidly ramping down trapping RF voltages and applying DC gradients across the C-trap, ions can be bunched into short packets similar to those from the laser ion source. The C-trap is tightly integrated with the analyzer, injection optics and differential pumping.

Excitation

In principle, coherent axial oscillations of ion rings could be excited by applying RF waveforms to the outer electrode, as has been demonstrated in the literature. However, if ion packets are injected away from the minimum of the axial potential (which corresponds to the thickest part of either electrode), this automatically initiates their axial oscillations, eliminating the need for any additional excitation. Furthermore, the absence of additional excitation allows the detection process to start as soon as the detection electronics recover from the voltage ramp needed for ion injection.

Detection

Axial oscillations of ion rings are detected by the image current they induce on the outer electrode, which is split into two symmetrical pick-up sensors connected to a differential amplifier. By processing data in a manner similar to that used in Fourier-transform ion cyclotron resonance mass spectrometry (FTICR-MS), the trap can be used as a mass analyzer. Like in FTICR-MS, all the ions are detected simultaneously over some given period of time, and resolution can be improved by increasing the strength of the field or by increasing the detection period.
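The detection scheme above can be sketched numerically: simulate the image current as a sum of axial oscillations, Fourier-transform the transient, and convert the peak frequencies back to m/z through the harmonic relation ω = √(k/(m/z)). The force constant k, the sampling rate and the transient length below are arbitrary illustrative values, not real instrument parameters:

```python
import numpy as np

def axial_omega(mz, k):
    # Axial angular frequency of the harmonic motion: omega = sqrt(k / (m/z)).
    return np.sqrt(k / mz)

k = 1.0e12                 # hypothetical force constant (illustrative only)
species = [400.0, 800.0]   # m/z values of two trapped ion populations

fs = 1.0e5                         # sampling rate of the transient, Hz
t = np.arange(0, 0.1, 1.0 / fs)    # 100 ms "image current" transient
signal = sum(np.cos(axial_omega(mz, k) * t) for mz in species)

# Fourier-transform the time-domain signal into a frequency spectrum
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

# Take the two strongest frequency components and invert omega -> m/z
peaks = freqs[np.argsort(spectrum)[-2:]]
recovered = sorted(k / (2 * np.pi * peaks) ** 2)
print(recovered)  # close to [400.0, 800.0]
```

Because the axial frequency depends only on m/z, every ion of a given species contributes to the same spectral peak, which is what makes the image-current transient directly convertible into a mass spectrum.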
The Orbitrap differs from FTICR-MS by the absence of a magnetic field and hence has a significantly slower decrease of resolving power with increasing m/z. Variants Currently the Orbitrap analyzer exists in two variants: a standard trap and a compact high-field trap. In practical traps, the outer electrode is sustained at virtual ground and a voltage of 3.5 or 5 kV is applied to the inner electrode only. As a result, the resolving power at m/z 400 and 768 ms detection time can range from 60,000 for a standard trap at 3.5 kV to 280,000 for a high-field trap at 5 kV and with enhanced FT processing. Like in FTICR-MS the Orbitrap resolving power is proportional to the number of harmonic oscillations of the ions; as a result, the resolving power is inversely proportional to the square root of m/z and proportional to acquisition time. For example, the values above would double for m/z 100 and halve for m/z 1600. For the shortest transient of 96 ms these values would be reduced by 8 times, whereas a resolving power in excess of 1,000,000 has been demonstrated in 3-second transients. The Orbitrap analyzer can be interfaced to a linear ion trap (LTQ Orbitrap family of instruments), quadrupole mass filter (Q Exactive family) or directly to an ion source (Exactive instrument, all marketed by Thermo Fisher Scientific). In addition, a higher-energy collision cell can be appended to the C-trap, with the further addition of electron-transfer dissociation at its back. Most of these instruments have atmospheric pressure ion sources though an intermediate-pressure MALDI source can also be used (MALDI LTQ Orbitrap). All of these instruments provide a high mass accuracy (<2–3 ppm with external calibrant and <1–2 ppm with internal), a high resolving power (up to 240,000 at m/z 400), a high dynamic range and high sensitivity. 
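The quoted scaling rules (resolving power proportional to acquisition time and inversely proportional to √(m/z)) can be checked against the numbers in the text; the reference point below is the 280,000 at m/z 400 with a 768 ms transient figure mentioned above:

```python
import math

def resolving_power(r_ref, mz_ref, mz, t_ref, t):
    """Scale a reference Orbitrap resolving power to a new m/z and
    transient length using R ~ t / sqrt(m/z)."""
    return r_ref * (t / t_ref) * math.sqrt(mz_ref / mz)

# Reference point quoted in the text: 280,000 at m/z 400, 768 ms.
print(resolving_power(280_000, 400, 100, 768, 768))   # -> 560000.0 (doubles)
print(resolving_power(280_000, 400, 1600, 768, 768))  # -> 140000.0 (halves)
print(resolving_power(280_000, 400, 400, 768, 96))    # -> 35000.0 (1/8)
```

These match the statements that the values double at m/z 100, halve at m/z 1600, and drop eightfold for the shortest 96 ms transient.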
Applications Orbitrap-based mass spectrometers are used in proteomics and in other areas of life-science mass spectrometry, such as metabolism and metabolomics studies, as well as in environmental, food and safety analysis. Most of them are interfaced to liquid chromatography separations, though they are also used with gas chromatography, secondary ion and ambient ionization methods. They have also been used to determine molecular structures of isotopically substituted molecular species. See also Fourier-transform ion cyclotron resonance References External links Purdue University Orbitrap Page Mass spectrometry Russian inventions
Orbitrap
[ "Physics", "Chemistry" ]
1,752
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
5,023,057
https://en.wikipedia.org/wiki/Arsenic%20triselenide
Arsenic triselenide is an inorganic chemical compound with the chemical formula As2Se3. Amorphous arsenic triselenide is used as a chalcogenide glass for infrared optics. When purified, it transmits light with wavelengths between ca. 0.7 and 19 μm. In arsenic triselenide, arsenic is covalently bonded to selenium, where arsenic has a formal oxidation state of +3, and selenium −2. Solution processed thin film Thin film selenide glasses have emerged as an important material for integrated photonics due to their high refractive index, mid-IR transparency and high non-linear optical indices. High-quality glass films can be deposited by spin coating from ethylenediamine solutions. References Arsenic(III) compounds Selenides Optical materials Non-oxide glasses
Arsenic triselenide
[ "Physics", "Chemistry" ]
170
[ "Inorganic compounds", "Inorganic compound stubs", "Materials", "Optical materials", "Matter" ]
5,023,296
https://en.wikipedia.org/wiki/Alpha-1%20adrenergic%20receptor
Alpha-1 (α1) adrenergic receptors are G protein-coupled receptors (GPCRs) associated with the Gq heterotrimeric G protein. α1-adrenergic receptors are subdivided into three highly homologous subtypes, i.e., α1A-, α1B-, and α1D-adrenergic receptor subtypes. There is no α1C receptor. At one time, there was a subtype known as α1C, but it was found to be identical to the previously discovered α1A receptor subtype. To avoid confusion, naming was continued with the letter D. Catecholamines like norepinephrine (noradrenaline) and epinephrine (adrenaline) signal through the α1-adrenergic receptors in the central and peripheral nervous systems. The crystal structure of the α1B-adrenergic receptor subtype has been determined in complex with the inverse agonist (+)-cyclazosin. Effects The α1-adrenergic receptor has several general functions in common with the α2-adrenergic receptor, but also has specific effects of its own. α1-receptors primarily mediate smooth muscle contraction, but have important functions elsewhere as well. The neurotransmitter norepinephrine has higher affinity for the α1 receptor than does the hormone adrenaline. Smooth muscle In smooth muscle cells of blood vessels the principal effect of activation of these receptors is vasoconstriction. Blood vessels with α1-adrenergic receptors are present in the skin, the sphincters of the gastrointestinal system, the kidney (renal artery) and the brain. During the fight-or-flight response, vasoconstriction results in decreased blood flow to these organs. This accounts for the pale appearance of the skin of a frightened individual. It also induces contraction of the internal urethral sphincter of the urinary bladder, although this effect is minor compared to the relaxing effect of β2-adrenergic receptors. In other words, the overall effect of sympathetic stimuli on the bladder is relaxation, in order to inhibit micturition in anticipation of a stressful event.
Other effects on smooth muscle are contraction in: Ureter Uterus (when pregnant): this is minor compared to the relaxing effects of the β2 receptor, agonists of which, notably albuterol/salbutamol, were formerly used to inhibit premature labor. Urethral sphincter Bronchioles (although minor compared to the relaxing effect of the β2 receptor on bronchioles) Iris dilator muscle Seminal tract, resulting in ejaculation Neuronal Activation of α1-adrenergic receptors produces anorexia and partially mediates the efficacy of appetite suppressants like phenylpropanolamine and amphetamine in the treatment of obesity. Norepinephrine has been shown to decrease cellular excitability in all layers of the temporal cortex, including the primary auditory cortex. In particular, norepinephrine decreases glutamatergic excitatory postsynaptic potentials by the activation of α1-adrenergic receptors. Norepinephrine also stimulates serotonin release by binding to α1-adrenergic receptors located on serotonergic neurons in the raphe. α1-adrenergic receptor subtypes increase inhibition in the olfactory system, suggesting a synaptic mechanism for noradrenergic modulation of olfactory-driven behaviors. Other Both positive and negative inotropic effects on heart muscle Secretion from salivary glands Increase salivary potassium levels Glycogenolysis and gluconeogenesis in liver Secretion from sweat glands Contraction of the urinary bladder urothelium and lamina propria Na+ reabsorption in the kidney Stimulate proximal tubule NHE3 Stimulate proximal tubule basolateral Na-K ATPase Activate mitogenic responses and regulate growth and proliferation of many cells Involved in the detection of mechanical feedback on the hypoglossal motor neurons, which allows long-term facilitation of respiration in response to repeated apneas. Signaling cascade α1-Adrenergic receptors are members of the G protein-coupled receptor superfamily.
Upon activation, a heterotrimeric G protein, Gq, activates phospholipase C (PLC), which cleaves phosphatidylinositol 4,5-bisphosphate (PIP2) into inositol trisphosphate (IP3) and diacylglycerol (DAG). While DAG stays near the membrane, IP3 diffuses through the cytosol to the IP3 receptor on the endoplasmic reticulum, triggering calcium release from the stores. This triggers further effects, primarily through the activation of the enzyme protein kinase C (PKC). As a kinase, PKC functions by phosphorylating other enzymes, causing their activation, or by phosphorylating certain channels, leading to an increase or decrease of electrolyte transfer into or out of the cell. Activity during exercise During exercise, α1-adrenergic receptors in active muscles are attenuated in an exercise intensity-dependent manner, allowing the β2-adrenergic receptors which mediate vasodilation to dominate. In contrast to α2-adrenergic receptors, α1-adrenergic receptors in the arterial vasculature of skeletal muscle are more resistant to inhibition, and attenuation of α1-adrenergic-receptor-mediated vasoconstriction only occurs during heavy exercise. Note that only the α1-adrenergic receptors of active muscle will be blocked. Resting muscle will not have its α1-adrenergic receptors blocked, and hence the overall effect will be α1-adrenergic-mediated vasoconstriction. Ligands Agonists Cirazoline (vasoconstrictor) Methoxamine (vasoconstrictor) Synephrine (mild vasoconstrictor) Etilefrine (antihypotensive) Metaraminol (antihypotensive) Midodrine (antihypotensive) Naphazoline (decongestant) Norepinephrine (vasoconstrictor) Oxymetazoline (decongestant) Phenylephrine (decongestant) Pseudoephedrine (decongestant) Tetrahydrozoline (decongestant) Xylometazoline (decongestant) SDZ-NVI-085 [104195-17-7].
Antagonists Acepromazine (antipsychotic, secondary mechanism) Alfuzosin (used in benign prostatic hyperplasia) Arotinolol Carvedilol (used in congestive heart failure; it is a non-selective beta blocker) Chlorpromazine (antipsychotic and powerful antihypertensive) Doxazosin (used in hypertension and benign prostatic hyperplasia) Indoramin Labetalol (used in hypertension; it is a mixed alpha/beta adrenergic antagonist) Moxisylyte Phenoxybenzamine Phentolamine (used in hypertensive emergencies; it is a nonselective alpha-antagonist) Prazosin (used in hypertension) Quetiapine Risperidone Silodosin Tamsulosin (used in benign prostatic hyperplasia) Terazosin Tiamenidine Tolazoline Trazodone Trimazosin Urapidil Various heterocyclic antidepressants and antipsychotics are α1-adrenergic receptor antagonists as well. This action is generally undesirable in such agents and mediates side effects like orthostatic hypotension, and headaches due to excessive vasodilation. See also Adrenergic receptor References External links Adrenergic receptors
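The functional difference between the agonists and competitive antagonists listed above can be illustrated with the Gaddum equation for fractional receptor occupancy. The dissociation constants below are purely illustrative placeholders, not measured values for any listed drug:

```python
def occupancy(agonist, K_A, antagonist=0.0, K_B=1.0):
    """Fractional receptor occupancy by an agonist in the presence of a
    competitive antagonist (Gaddum equation).  Concentrations and
    dissociation constants must share units (e.g. nM); all numbers
    here are hypothetical, for illustration only.
    """
    a = agonist / K_A
    b = antagonist / K_B
    return a / (1.0 + a + b)

base = occupancy(10.0, K_A=10.0)                           # agonist alone
shifted = occupancy(10.0, K_A=10.0, antagonist=9.0, K_B=1.0)
print(base)     # -> 0.5  (agonist at its K_A occupies half the receptors)
print(shifted)  # antagonist shifts occupancy down; more agonist is needed
```

This surmountable rightward shift of the agonist curve is the defining behavior of competitive α1 antagonists such as prazosin.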
Alpha-1 adrenergic receptor
[ "Chemistry", "Biology" ]
1,711
[ "Biochemistry", "Exercise biochemistry" ]
5,023,787
https://en.wikipedia.org/wiki/Arsenic%20tribromide
Arsenic tribromide is an inorganic compound with the formula AsBr3; it is a bromide of arsenic. Arsenic is a chemical element that has the symbol As and atomic number 33. This pyramidal molecule is the only known binary arsenic bromide. AsBr3 is noteworthy for its very high refractive index of approximately 2.3. It also has a very high diamagnetic susceptibility. The compound exists as colourless deliquescent crystals that fume in moist air. Preparation Arsenic tribromide can be prepared by the direct bromination of arsenic powder. Alternatively, arsenic(III) oxide can be used as the precursor in the presence of elemental sulfur. Arsenic tribromide is a highly water-soluble crystalline arsenic source for uses compatible with bromides and lower (acidic) pH. It is soluble in hydrocarbons and carbon tetrachloride, and very soluble in ether, benzene, chlorinated hydrocarbons, carbon disulfide, oils, and fats. The pentabromide of arsenic, AsBr5, is not known, although the corresponding phosphorus compound (PBr5) is well characterized. AsBr3 is the parent of a series of hypervalent anionic bromoarsenates. Organoarsenic bromides such as CH3AsBr2 and (CH3)2AsBr are formed efficiently by the copper-catalyzed reaction of methyl bromide with hot arsenic metal. This synthesis is similar to the direct process used for the synthesis of methyl chlorosilanes. Safety Arsenic tribromide is highly toxic. It is a carcinogen and a teratogen. References Arsenic(III) compounds Arsenic halides Bromides Carcinogens Teratogens
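The direct bromination route (2 As + 3 Br2 → 2 AsBr3) lends itself to a quick stoichiometry check. The sketch below uses rounded standard atomic masses:

```python
# Stoichiometry sketch for 2 As + 3 Br2 -> 2 AsBr3.
# Atomic masses (g/mol) are rounded standard values.
M_As, M_Br = 74.92, 79.90

def br2_mass_needed(as_mass_g):
    """Grams of Br2 required to brominate a given mass of arsenic
    powder, using the 2 As : 3 Br2 mole ratio."""
    mol_as = as_mass_g / M_As
    mol_br2 = mol_as * 3 / 2
    return mol_br2 * (2 * M_Br)

print(round(br2_mass_needed(10.0), 2))  # -> 31.99
```

So roughly 32 g of bromine is consumed per 10 g of arsenic, before any excess needed in practice.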
Arsenic tribromide
[ "Chemistry", "Environmental_science" ]
367
[ "Toxicology", "Salts", "Bromides", "Carcinogens", "Teratogens" ]
5,023,862
https://en.wikipedia.org/wiki/Arsenic%20trichloride
Arsenic trichloride is an inorganic compound with the formula AsCl3, also known as arsenous chloride or butter of arsenic. This poisonous oil is colourless, although impure samples may appear yellow. It is an intermediate in the manufacture of organoarsenic compounds. Structure AsCl3 is a pyramidal molecule with C3v symmetry. The As-Cl bond is 2.161 Å and the angle Cl-As-Cl is 98°25′±30′. AsCl3 has four normal modes of vibration: ν1(A1) 416, ν2(A1) 192, ν3(E) 393, and ν4(E) 152 cm−1. Synthesis This colourless liquid is prepared by treatment of arsenic(III) oxide with hydrogen chloride followed by distillation: As2O3 + 6 HCl → 2 AsCl3 + 3 H2O It can also be prepared by chlorination of arsenic at 80–85 °C, but this method requires elemental arsenic. 2 As + 3 Cl2 → 2 AsCl3 Arsenic trichloride can be prepared by the reaction of arsenic oxide and sulfur monochloride. This method requires simple apparatus and proceeds efficiently: 2 As2O3 + 6 S2Cl2 → 4 AsCl3 + 3 SO2 + 9 S A convenient laboratory method is refluxing arsenic(III) oxide with thionyl chloride: 2 As2O3 + 3 SOCl2 → 2 AsCl3 + 3 SO2 Arsenic trichloride can also be prepared by the reaction of hydrochloric acid and arsenic(III) sulfide. As2S3 + 6 HCl → 2 AsCl3 + 3 H2S Reactions Hydrolysis gives arsenous acid and hydrochloric acid: AsCl3 + 3 H2O → As(OH)3 + 3 HCl Although AsCl3 is less moisture sensitive than PCl3, it still fumes in moist air. AsCl3 undergoes redistribution upon treatment with As2O3 to give the inorganic polymer AsOCl. With chloride sources, AsCl3 also forms salts containing the anion [AsCl4]−. Reactions with potassium bromide and potassium iodide give arsenic tribromide and arsenic triiodide, respectively.
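Equations like those above can be verified mechanically by counting atoms on each side. The helper below handles simple formulas without parentheses or repeated element symbols, which covers the reactions in this section:

```python
from collections import Counter
import re

def parse(formula):
    """Element counts in a simple formula such as 'As2O3'.
    No parentheses; element symbols must not repeat within a formula."""
    return Counter({el: int(n or 1)
                    for el, n in re.findall(r'([A-Z][a-z]?)(\d*)', formula)})

def balanced(reactants, products):
    """True if element counts match on both sides.
    Each side is a list of (coefficient, formula) pairs."""
    def total(side):
        counts = Counter()
        for coeff, formula in side:
            for el, n in parse(formula).items():
                counts[el] += coeff * n
        return counts
    return total(reactants) == total(products)

# As2O3 + 6 HCl -> 2 AsCl3 + 3 H2O
print(balanced([(1, 'As2O3'), (6, 'HCl')],
               [(2, 'AsCl3'), (3, 'H2O')]))  # -> True
```

The same check confirms the thionyl chloride route: 2 As2O3 + 3 SOCl2 → 2 AsCl3 + 3 SO2 is not balanced as a naive reading would hope, until one notes the equation given in the text carries the correct coefficients.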
AsCl3 is useful in organoarsenic chemistry; for example, triphenylarsine is derived from AsCl3: AsCl3 + 6 Na + 3 C6H5Cl → As(C6H5)3 + 6 NaCl The chemical weapons called Lewisites are prepared by the addition of arsenic trichloride to acetylene: AsCl3 + C2H2 → ClCH=CHAsCl2 Safety Inorganic arsenic compounds are highly toxic, and AsCl3 especially so because of its volatility and solubility (in water). It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities. References Arsenic(III) compounds Arsenic halides Chlorides
Arsenic trichloride
[ "Chemistry" ]
651
[ "Highly-toxic chemical substances", "Chlorides", "Inorganic compounds", "Harmful chemical substances", "Salts" ]
5,024,020
https://en.wikipedia.org/wiki/Inverse%20Faraday%20effect
The Faraday effect causes the indices of refraction for right and left circular polarization to differ when light is propagating along either the magnetic field or the magnetization. The inverse Faraday effect (IFE) is the effect opposite to the Faraday effect: a static magnetization is induced by circularly polarized light. One reason for the name IFE is that the amplitude of the magnetization is proportional to the same Verdet coefficient that governs the Faraday effect. The induced magnetization of the IFE is proportional to the product of the Verdet coefficient and the vector product of the complex field amplitude E(ω) and its conjugate E∗(ω): M(0) ∝ V [E(ω) × E∗(ω)]. With the proper use of the complex form for the electric fields, this equation shows that circularly polarized light with the frequency ω should induce a static magnetization along the wave vector k. The vector products of left- and right-handed polarization waves should induce magnetizations of opposite signs. The pulsed laser developed by Maiman in 1960 facilitated the entire field of non-linear optics, for which Bloembergen was awarded the Nobel prize in 1981, and which enabled the first experimental confirmation of the inverse Faraday effect by Pershan and students in 1965. References J.P. van der Ziel, P.S. Pershan and L.D. Malmstrom, "Optically-induced magnetization resulting from the inverse Faraday effect", Phys. Rev. Lett. 15, 190 (1965). Rodriguez, V.; Verreault, D.; Adamietz, F.; Kalafatis, A. "All-Optical Measurements of the Verdet Constant in Achiral and Chiral Liquids: Toward All-Optical Magnetic Spectroscopies". ACS Photonics 2022, 9, 7, 2510–2519. https://doi.org/10.1021/acsphotonics.2c00720 Magneto-optic effects
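The sign behavior of E × E∗ can be checked directly. The sketch below evaluates the z-component of Im(E × E∗) for complex Jones-style amplitudes; which handedness carries which sign depends on the chosen convention, but the two circular polarizations are always opposite and a linear polarization gives zero:

```python
# Circularly polarized plane waves propagating along z, written as
# complex amplitude triples (overall scale arbitrary).
E_left   = (1.0,  1.0j, 0.0)   # one circular handedness
E_right  = (1.0, -1.0j, 0.0)   # the opposite handedness
E_linear = (1.0,  0.0j, 0.0)   # linear polarization, for comparison

def magnetization_direction(E):
    """z-component of Im(E x E*), which sets the sign of the induced
    magnetization along the wave vector (up to the Verdet factor)."""
    Ex, Ey, _ = E
    return (Ex * Ey.conjugate() - Ey * Ex.conjugate()).imag

print(magnetization_direction(E_left))    # -> -2.0
print(magnetization_direction(E_right))   # -> 2.0
print(magnetization_direction(E_linear))  # -> 0.0
```

The opposite signs for the two handednesses and the vanishing result for linear light reproduce the qualitative statements above.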
Inverse Faraday effect
[ "Physics", "Chemistry", "Materials_science" ]
392
[ "Optical phenomena", "Physical phenomena", "Electric and magnetic fields in matter", "Magneto-optic effects" ]
5,024,272
https://en.wikipedia.org/wiki/Calcium%20arsenate
Calcium arsenate is the inorganic compound with the formula Ca3(AsO4)2. A colourless salt, it was originally used as a pesticide and as a germicide. It is highly soluble in water, in contrast to lead arsenate, which makes it more toxic. Two minerals are hydrates of calcium arsenate: rauenthalite Ca3(AsO4)2·10H2O and phaunouxite Ca3(AsO4)2·11H2O. A related mineral is ferrarisite. Preparation Calcium arsenate is commonly prepared from disodium hydrogen arsenate and calcium chloride: 2 Na2H[AsO4] + 3 CaCl2 → 4 NaCl + Ca3[AsO4]2 + 2 HCl In the 1920s, it was made in large vats by mixing calcium oxide and arsenic oxide. In the United States, 1360 metric tons were produced in 1919, 4540 in 1920, and 7270 in 1922. The composition of commercially available calcium arsenate varies from manufacturer to manufacturer. A typical composition is 80–85% Ca3(AsO4)2, together with a basic arsenate (probably of composition 4CaO·As2O5), calcium hydroxide and calcium carbonate. Use as a herbicide It was once a common herbicide and insecticide. 38,000,000 kilograms were reported to be produced in 1942 alone, mainly for the protection of cotton crops. Its high toxicity led to the development of DDT. Regulation Calcium arsenate use is now banned in the UK, and its use is strictly regulated in the United States. It is currently the active ingredient in TURF-Cal, manufactured by Mallinckrodt; it is one of the few herbicides, used mainly for the control of Poa annua and crabgrass, that hinder earthworm activity. Its label states that it will "reduce and inhibit earthworm activity and survival" and it is only recommended against serious earthworm infestations in places such as golf course greens. Toxicity and regulation Calcium arsenate is highly toxic, having both carcinogenic and systemic health effects.
The Occupational Safety and Health Administration has set a permissible exposure limit at 0.01 mg/m3 over an eight-hour time-weighted average, while the National Institute for Occupational Safety and Health recommends a limit five times lower (0.002 mg/m3). It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities. Other natural occurrences Weilite is the monohydrogenated counterpart, Ca(HAsO4), while švenekite is the dihydrogenated one, Ca(H2AsO4)2. Hydrated analogues of weilite are haidingerite (monohydrate) and pharmacolite (dihydrate), with the latter name reflecting arsenic-related toxicity. Examples of more complex, hydrated Ca arsenates with some anions hydrogenated are ferrarisite, guérinite, sainfeldite, vladimirite, and jeankempite. References Arsenates Calcium compounds Insecticides Mutagens Carcinogens
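An eight-hour time-weighted average like the OSHA limit quoted above is computed by weighting each measured concentration by its duration. The monitoring data below are hypothetical, chosen only to show the arithmetic:

```python
# Time-weighted average (TWA) exposure over an 8-hour shift, compared
# against the OSHA PEL of 0.01 mg/m3 quoted in the text.
PEL = 0.01  # mg/m3, 8-hour TWA

def twa(samples, shift_hours=8.0):
    """samples: list of (concentration in mg/m3, duration in hours).
    Unsampled time is implicitly counted as zero exposure."""
    return sum(conc * hours for conc, hours in samples) / shift_hours

# Hypothetical monitoring data: 2 h at 0.02 mg/m3, 6 h at 0.004 mg/m3.
exposure = twa([(0.02, 2), (0.004, 6)])
print(round(exposure, 4), exposure <= PEL)  # -> 0.008 True
```

Note that a short excursion above the PEL (here 0.02 mg/m3 for two hours) can still yield a compliant TWA, which is why ceiling limits are specified separately for many substances.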
Calcium arsenate
[ "Chemistry", "Environmental_science" ]
709
[ "Carcinogens", "Toxicology" ]