The Fermi-Dirac distribution is

{\displaystyle f_{\mathrm {FD} }(E)={\frac {1}{\exp \left({\frac {E-\mu }{k_{\mathrm {B} }T}}\right)+1}}.}

Here μ is the chemical potential (also denoted as EF and called the Fermi level when T = 0), kB is the Boltzmann constant, and T is the temperature. Fig. 4 illustrates how the product of the Fermi-Dirac distribution function and the three-dimensional density of states for a semiconductor can give insight into physical properties such as the carrier concentration and energy band gaps.

Bose-Einstein statistics: The Bose-Einstein probability distribution function is used to find the probability that a boson occupies a specific quantum state in a system at thermal equilibrium. Bosons are particles which do not obey the Pauli exclusion principle (e.g. phonons and photons). The distribution function can be written as

{\displaystyle f_{\mathrm {BE} }(E)={\frac {1}{\exp \left({\frac {E-\mu }{k_{\text{B}}T}}\right)-1}}.}

From these two distributions it is possible to calculate properties such as the internal energy per unit volume u, the number of particles N, the specific heat capacity c, and the thermal conductivity k. The relationships between these properties and the product of the density of states and the probability distribution, denoting the density of states by g(E) instead of D(E), are given by
{\displaystyle {\begin{aligned}u&=\int E\,f(E)\,g(E)\,\mathrm {d} E\\[1ex]N&=V\int f(E)\,g(E)\,\mathrm {d} E\\[1ex]c&={\frac {\partial }{\partial T}}\int E\,f(E)\,g(E)\,\mathrm {d} E\\[1ex]k&={\frac {1}{d}}{\frac {\partial }{\partial T}}\int Ef(E)\,g(E)\,\nu (E)\,\Lambda (E)\,\mathrm {d} E\end{aligned}}}

where d is the dimensionality, ν is the sound velocity and Λ is the mean free path.

== Applications ==
The density of states appears in many areas of physics, and helps to explain a number of quantum mechanical phenomena.

=== Quantization ===
Calculating the density of states for small structures shows that the distribution of electrons changes as dimensionality is reduced. For quantum wires, the DOS for certain energies actually becomes higher than the DOS for bulk semiconductors, and for quantum dots the electrons become quantized to certain energies.

=== Photonic crystals ===
The photon density of states can be manipulated by using periodic structures with length scales on the order of the wavelength of light. Some structures can completely inhibit the propagation of light of certain colors (energies), creating a photonic band gap: the DOS is zero for those photon energies. Other structures can inhibit the propagation of light only in certain directions, to create mirrors, waveguides, and cavities. Such periodic structures are known as photonic crystals. In nanostructured media the concept of the local density of states (LDOS) is often more relevant than that of the DOS, as the DOS varies considerably from point to point.

== Computational calculation ==
Interesting systems are in general complex, for instance compounds, biomolecules, polymers, etc. Because of the complexity of these systems, the analytical calculation of the density of states is in most cases impossible. Computer simulations offer a set of algorithms to
evaluate the density of states with high accuracy. One of these algorithms is called the Wang and Landau algorithm. Within the Wang and Landau scheme no previous knowledge of the density of states is required. One proceeds as follows: the cost function (for example the energy) of the system is discretized. Each time the bin i is reached, one updates a histogram for the density of states, g(i), by g(i) → g(i) + f, where f is called the modification factor. As soon as each bin in the histogram is visited a certain number of times (10-15), the modification factor is reduced by some criterion, for instance f_{n+1} → (1/2) f_n, where n denotes the n-th update step. The simulation finishes when the modification factor is less than a certain threshold, for instance f_n < 10^{-8}.

The Wang and Landau algorithm has some advantages over other common algorithms such as multicanonical simulations and parallel tempering. For example, the density of states is obtained as the main product of the simulation. Additionally, Wang and Landau simulations are completely independent of the temperature. This feature allows one to compute the density of states of systems with very rough energy landscapes, such as proteins. Mathematically, the density of states is formulated in terms of a tower of covering maps.

== Local density of states ==
An important feature of the definition of the DOS is that it can be extended to any system. One of its properties is translational invariance, which means that the density of states is homogeneous and the same at each point of the system. But this
is just a particular case; the LDOS gives a wider description, in which the density of states can be heterogeneous through the system.

=== Concept ===
The local density of states (LDOS) describes a space-resolved density of states. In materials science, for example, this term is useful when interpreting data from a scanning tunneling microscope (STM), since this method is capable of imaging electron densities of states with atomic resolution. According to crystal structure, this quantity can be predicted by computational methods, for example with density functional theory.

=== A general definition ===
In a local density of states the contribution of each state is weighted by the density of its wave function at the point. N(E) becomes n(E, x):

{\displaystyle n(E,x)=\sum _{j}|\phi _{j}(x)|^{2}\delta (E-\varepsilon _{j})}

The factor of {\displaystyle |\phi _{j}(x)|^{2}} means that each state contributes more in the regions where the density is high. An average over x of this expression restores the usual formula for the DOS. The LDOS is useful in inhomogeneous systems, where n(E, x) contains more information than n(E) alone. For a one-dimensional system with a wall, the sine waves give

{\displaystyle n_{1D}(E,x)={\frac {2}{\pi \hbar }}{\sqrt {\frac {2m}{E}}}\sin ^{2}{kx}}

where {\textstyle k={\sqrt {2mE}}/\hbar }. In a three-dimensional system with x > 0 the expression is
{\displaystyle n_{3D}(E,x)=\left(1-{\frac {\sin {2kx}}{2kx}}\right)n_{3D}(E)}

In fact, we can generalise the local density of states further to

{\displaystyle n(E,x,x')=\sum _{j}\phi _{j}(x)\phi _{j}^{*}(x')\delta (E-\varepsilon _{j})}

This is called the spectral function, and it is a function with each wave function separately in its own variable. In more advanced theory it is connected with the Green's functions and provides a compact representation of some results such as optical absorption.

=== Solid state devices ===
The LDOS can be used to gain insight into a solid-state device. For example, the figure on the right illustrates the LDOS of a transistor as it turns on and off in a ballistic simulation. The LDOS has a clear boundary in the source and drain, which corresponds to the location of the band edge. In the channel, the DOS increases as the gate voltage increases and the potential barrier goes down.

=== Optics and photonics ===
In optics and photonics, the concept of the local density of states refers to the states that can be occupied by a photon. For light it is usually measured by fluorescence methods, near-field scanning methods or by cathodoluminescence techniques. Different photonic structures have different LDOS behaviors, with different consequences for spontaneous emission. In photonic crystals, near-zero LDOS are expected, inhibiting spontaneous emission; conversely, LDOS enhancement is expected in plasmonic cavities. In disordered photonic nanostructures, however, the LDOS behaves differently: it fluctuates spatially, with statistics proportional to the scattering strength of the structures. In addition, the relationship with the mean free
path of the scattering is not trivial, as the LDOS can still be strongly influenced by the short-range details of strong disorder, in the form of a strong Purcell enhancement of the emission. Finally, for plasmonic disorder, this effect is much stronger for LDOS fluctuations, as it can be observed as a strong near-field localization.

== See also ==

== References ==

== Further reading ==
Chen, Gang. Nanoscale Energy Transport and Conversion. New York: Oxford University Press, 2005.
Streetman, Ben G., and Sanjay Banerjee. Solid State Electronic Devices. Upper Saddle River, NJ: Prentice Hall, 2000.
Muller, Richard S., and Theodore I. Kamins. Device Electronics for Integrated Circuits. New York: John Wiley and Sons, 2003.
Kittel, Charles, and Herbert Kroemer. Thermal Physics. New York: W.H. Freeman and Company, 1980.
Sze, Simon M. Physics of Semiconductor Devices. New York: John Wiley and Sons, 1981.

== External links ==
Online lecture: ECE 606 Lecture 8: Density of States by M. Alam
Scientists shed light on glowing materials
How to measure the Photonic LDOS
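The Wang and Landau scheme described in the computational section above can be sketched in a few lines of Python. This is an illustrative sketch only, under assumptions not taken from the text: the toy system is a set of non-interacting spins whose "energy" is simply the number of up spins (so the exact density of states is the binomial coefficient C(N, E)), and the additive update is applied to ln g, which is how the update g(i) → g(i) + f is commonly realized in practice.

```python
import math
import random

def wang_landau_log_dos(n_spins=8, f_final=1e-4, steps_per_check=2000, seed=1):
    """Minimal Wang-Landau sketch for a toy system of non-interacting
    spins, where the 'energy' E is the number of up spins.
    The exact density of states is g(E) = C(n_spins, E)."""
    rng = random.Random(seed)
    spins = [0] * n_spins
    energy = 0
    ln_g = [0.0] * (n_spins + 1)      # running estimate of ln g(E)
    hist = [0] * (n_spins + 1)        # visit histogram for the flatness check
    f = 1.0                           # modification factor applied to ln g
    while f > f_final:
        for _ in range(steps_per_check * n_spins):
            i = rng.randrange(n_spins)
            e_new = energy + (1 - 2 * spins[i])   # a spin flip changes E by +/- 1
            # Accept with probability min(1, g(E)/g(E_new)), which drives
            # the random walk toward rarely visited energies.
            if math.log(1.0 - rng.random()) < ln_g[energy] - ln_g[e_new]:
                spins[i] ^= 1
                energy = e_new
            ln_g[energy] += f
            hist[energy] += 1
        # Crude flatness criterion: halve f once every bin is well visited.
        if min(hist) > 0.8 * (sum(hist) / len(hist)):
            f *= 0.5
            hist = [0] * (n_spins + 1)
    # Normalize so that ln g(0) = 0; the exact values are then ln C(n, E).
    return [x - ln_g[0] for x in ln_g]
```

With n_spins = 8 the exact values are ln C(8, E), e.g. ln g(4) = ln 70 ≈ 4.25, and the estimate reproduces the characteristic binomial shape. The halving schedule f → f/2 and the stopping threshold are the ones quoted in the text; the toy Hamiltonian and the specific flatness threshold are assumptions made for illustration.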
{
"page_id": 525887,
"source": null,
"title": "Density of states"
}
Florencia Canelli has been, since 2021, the appointed Swiss scientific delegate to the CERN Council, the supreme decision-making authority of the CERN organization. From 2021 to 2024, she was the appointed chair of the IUPAP commission on Particles and Fields (C11). From 2021 to 2023, she was co-coordinator of the physics program of the CMS collaboration, a CERN experiment with over 3000 physicists. In 2010, Canelli was awarded the IUPAP Young Scientist Prize, an international prize awarded to one experimental and one theoretical physicist per year, for "her pioneering contribution to the identification and precision measurements of rare phenomenon through the use of advanced analysis techniques to separate very small signals from large background processes at the Tevatron collider." She has been an author on four multi-purpose collider experiments, namely the CMS and ATLAS experiments at the CERN LHC, and the CDF and D0 experiments at the Fermilab Tevatron. She is currently a full professor at the University of Zurich Physics Institute, specializing in particle physics.

== Early life and education ==
Florencia Canelli was born in Buenos Aires, Argentina, in 1973. At a young age, her family moved to Asunción, Paraguay, where she received much of her education. After receiving the equivalent of a bachelor's degree in physics at the Universidad Nacional de Asunción (Paraguay), she continued her education at the Instituto Balseiro in Bariloche, Argentina, before beginning her PhD studies in particle physics at the University of Rochester in the U.S. under the supervision of Tom Ferbel. Her PhD thesis, studying the properties of the top quark with data from the D0 experiment, introduced the so-called "matrix element" technique to improve measurements from collider detectors, and won several awards, including the Mitsuyoshi Tanaka Dissertation Award from the APS.

== Career ==
After her PhD, she became a postdoc at UCLA on
the CDF experiment at Fermilab from 2003 to 2006. During this time, she led efforts to estimate the energy scale of quarks produced in collisions, significantly contributing to the experiment's results and improving the precision of numerous measurements. In 2006, she accepted a prestigious Wilson Fellowship at Fermilab. During her tenure as convener of the CDF top-quark group, the group made the first observation of the single production of top quarks. In 2008, she became an assistant professor at the University of Chicago, and began her involvement with the ATLAS experiment at CERN, while continuing on the CDF experiment. She was awarded the Sloan Research Fellowship in 2009. In 2009, she was promoted to Scientist 1 at Fermilab, and in 2011, was promoted to associate professor at the University of Chicago. Canelli was part of the team at CDF that found evidence for the Higgs boson in 2012, as well as the ATLAS team that co-discovered the Higgs boson at the LHC on July 4, 2012. In 2012, Canelli moved to Switzerland, becoming associate professor at the University of Zurich, and later joining the CMS experiment. Her group has helped construct the CMS barrel pixel detector, installed in 2017, and studied a wide range of physics processes related to the Higgs boson, the top quark, and beyond the Standard Model. She has co-led the CMS top-quark group, and from 2021 to 2023 was the co-coordinator of the entire CMS physics program. Canelli has been appointed to several high-level positions in international bodies, serving as chair of the Particles and Fields commission (C11) of the International Union of Pure and Applied Physics from 2021 to 2024, and as Swiss scientific delegate to the CERN Council since 2021.

== Published works ==
Canelli has over 1700 publications, mainly on the CMS, ATLAS, CDF and D0 experiments.

== Personal life ==
Canelli is married to experimental particle physicist Prof. Ben Kilminster. They have two children.

== Recognition ==
2010: IUPAP Young Scientist Prize
2009: Alfred P. Sloan Fellowship
2006: Wilson Fellowship
2005: Mitsuyoshi Tanaka Dissertation Award, APS
2004: University Research Association Thesis Award, Fermilab
2004: Frederick Lobkowicz Thesis Prize, University of Rochester

== Leadership in international organizations ==
since 2022: Swiss scientific delegate to CERN Council, Vice-President of the Council as of 2025
2021-2024: Chair of the commission Particles and Fields (C11) of IUPAP
2017-2020: Member of Fermilab Physics Advisory Committee (PAC)
2017-2021: Secretary of the commission Particles and Fields (C11) of IUPAP
2014-2017: Regular member of the commission Particles and Fields (C11) of IUPAP
2013-2021: Swiss representative to the CMS management board
2016-2019: Swiss representative to the Cherenkov Telescope Array Council

== Leadership in experimental collaborations ==
2021-2023: Physics co-coordinator for the CMS experiment
2018-2020: Convener of the CMS top-quark group
2015-2016: Convener of the CMS "Very Heavy Fermions" (VFS) group
2007-2009: Convener of the CDF top-quark group
2005-2007: Convener of the CDF top-quark mass group
2003-2004: Convener of the CDF jet-energy and resolution group

== References/Notes and references ==
{
"page_id": 75236932,
"source": null,
"title": "Florencia Canelli"
}
In systems theory, a linear system is a mathematical model of a system based on the use of a linear operator. Linear systems typically exhibit features and properties that are much simpler than in the nonlinear case. As a mathematical abstraction or idealization, linear systems find important applications in automatic control theory, signal processing, and telecommunications. For example, the propagation medium for wireless communication systems can often be modeled by linear systems.

== Definition ==
A general deterministic system can be described by an operator, H, that maps an input, x(t), as a function of t to an output, y(t), a type of black box description. A system is linear if and only if it satisfies the superposition principle, or equivalently both the additivity and homogeneity properties, without restrictions (that is, for all inputs, all scaling constants and all time).

The superposition principle means that a linear combination of inputs to the system produces a linear combination of the individual zero-state outputs (that is, outputs setting the initial conditions to zero) corresponding to the individual inputs. In a system that satisfies the homogeneity property, scaling the input always results in scaling the zero-state response by the same factor. In a system that satisfies the additivity property, adding two inputs always results in adding the corresponding two zero-state responses due to the individual inputs.

Mathematically, for a continuous-time system, given two arbitrary inputs x1(t) and x2(t), as well as their respective zero-state outputs

{\displaystyle {\begin{aligned}y_{1}(t)&=H\left\{x_{1}(t)\right\}\\y_{2}(t)&=H\left\{x_{2}(t)\right\}\end{aligned}}}

then a linear system must satisfy
{\displaystyle \alpha y_{1}(t)+\beta y_{2}(t)=H\left\{\alpha x_{1}(t)+\beta x_{2}(t)\right\}}

for any scalar values α and β, for any input signals x1(t) and x2(t), and for all time t. The system is then defined by the equation H(x(t)) = y(t), where y(t) is some arbitrary function of time, and x(t) is the system state. Given y(t) and H, the system can be solved for x(t).

The behavior of the resulting system subjected to a complex input can be described as a sum of responses to simpler inputs. In nonlinear systems, there is no such relation. This mathematical property makes the solution of modelling equations simpler than for many nonlinear systems. For time-invariant systems this is the basis of the impulse response or the frequency response methods (see LTI system theory), which describe a general input function x(t) in terms of unit impulses or frequency components. Typical differential equations of linear time-invariant systems are well adapted to analysis using the Laplace transform in the continuous case, and the Z-transform in the discrete case (especially in computer implementations). Another perspective is that solutions to linear systems comprise a system of functions which act like vectors in the geometric sense. A common use of linear models is to describe a nonlinear system by linearization. This is usually done for mathematical convenience.

The previous definition of a linear system is applicable to SISO (single-input single-output) systems. For MIMO (multiple-input multiple-output) systems, input and output signal vectors ({\displaystyle {\mathbf {x} }_{1}(t)}, {\displaystyle {\mathbf {x} }_{2}(t)}, {\displaystyle {\mathbf {y} }_{1}(t)}, {\displaystyle {\mathbf {y} }_{2}(t)}) are considered instead of input and
output signals ({\displaystyle x_{1}(t)}, {\displaystyle x_{2}(t)}, {\displaystyle y_{1}(t)}, {\displaystyle y_{2}(t)}). This definition of a linear system is analogous to the definition of a linear differential equation in calculus, and a linear transformation in linear algebra.

=== Examples ===
A simple harmonic oscillator obeys the differential equation

{\displaystyle m{\frac {d^{2}(x)}{dt^{2}}}=-kx.}

If

{\displaystyle H(x(t))=m{\frac {d^{2}(x(t))}{dt^{2}}}+kx(t),}

then H is a linear operator. Letting y(t) = 0, we can rewrite the differential equation as H(x(t)) = y(t), which shows that a simple harmonic oscillator is a linear system. Other examples of linear systems include those described by {\displaystyle y(t)=k\,x(t)}, {\displaystyle y(t)=k\,{\frac {\mathrm {d} x(t)}{\mathrm {d} t}}}, {\displaystyle y(t)=k\,\int _{-\infty }^{t}x(\tau )\mathrm {d} \tau }, and any system described by ordinary linear differential equations. Systems described by {\displaystyle y(t)=k}, {\displaystyle y(t)=k\,x(t)+k_{0}}, {\displaystyle y(t)=\sin {[x(t)]}}, {\displaystyle y(t)=\cos {[x(t)]}}, {\displaystyle
y(t)=x^{2}(t)}, {\textstyle y(t)={\sqrt {x(t)}}}, {\displaystyle y(t)=|x(t)|}, and a system with odd-symmetry output consisting of a linear region and a saturation (constant) region, are non-linear because they do not always satisfy the superposition principle.

The output versus input graph of a linear system need not be a straight line through the origin. For example, consider a system described by {\displaystyle y(t)=k\,{\frac {\mathrm {d} x(t)}{\mathrm {d} t}}} (such as a constant-capacitance capacitor or a constant-inductance inductor). It is linear because it satisfies the superposition principle. However, when the input is a sinusoid, the output is also a sinusoid, and so its output-input plot is an ellipse centered at the origin rather than a straight line passing through the origin.

Also, the output of a linear system can contain harmonics (and have a smaller fundamental frequency than the input) even when the input is a sinusoid. For example, consider a system described by {\displaystyle y(t)=(1.5+\cos {(t)})\,x(t)}. It is linear because it satisfies the superposition principle. However, when the input is a sinusoid of the form {\displaystyle x(t)=\cos {(3t)}}, using product-to-sum trigonometric identities it can be easily shown that the output is {\displaystyle y(t)=1.5\cos {(3t)}+0.5\cos {(2t)}+0.5\cos {(4t)}}; that is, the output does not consist only of sinusoids of the same frequency
as the input (3 rad/s), but also of sinusoids of frequencies 2 rad/s and 4 rad/s; furthermore, taking the least common multiple of the fundamental periods of the output sinusoids, it can be shown that the fundamental angular frequency of the output is 1 rad/s, which is different from that of the input.

== Time-varying impulse response ==
The time-varying impulse response h(t2, t1) of a linear system is defined as the response of the system at time t = t2 to a single impulse applied at time t = t1. In other words, if the input x(t) to a linear system is

{\displaystyle x(t)=\delta (t-t_{1})}

where δ(t) represents the Dirac delta function, and the corresponding response y(t) of the system is

{\displaystyle y(t=t_{2})=h(t_{2},t_{1})}

then the function h(t2, t1) is the time-varying impulse response of the system. Since the system cannot respond before the input is applied, the following causality condition must be satisfied:

{\displaystyle h(t_{2},t_{1})=0,\quad t_{2}<t_{1}}

== The convolution integral ==
The output of any general continuous-time linear system is related to the input by an integral which may be written over a doubly infinite range because of the causality condition:

{\displaystyle y(t)=\int _{-\infty }^{t}h(t,t')x(t')dt'=\int _{-\infty }^{\infty }h(t,t')x(t')dt'}

If the properties of the system do not depend on the time
at which it is operated, then it is said to be time-invariant, and h is a function only of the time difference τ = t − t′, which is zero for τ < 0 (namely t < t′). By redefinition of h it is then possible to write the input-output relation equivalently in any of the ways

{\displaystyle y(t)=\int _{-\infty }^{t}h(t-t')x(t')dt'=\int _{-\infty }^{\infty }h(t-t')x(t')dt'=\int _{-\infty }^{\infty }h(\tau )x(t-\tau )d\tau =\int _{0}^{\infty }h(\tau )x(t-\tau )d\tau }

Linear time-invariant systems are most commonly characterized by the Laplace transform of the impulse response function, called the transfer function, which is

{\displaystyle H(s)=\int _{0}^{\infty }h(t)e^{-st}\,dt.}

In applications this is usually a rational algebraic function of s. Because h(t) is zero for negative t, the integral may equally be written over the doubly infinite range, and putting s = iω yields the formula for the frequency response function:

{\displaystyle H(i\omega )=\int _{-\infty }^{\infty }h(t)e^{-i\omega t}dt}

== Discrete-time systems ==
The output of any discrete-time linear system is related to the input by the time-varying convolution sum:
{\displaystyle y[n]=\sum _{m=-\infty }^{n}{h[n,m]x[m]}=\sum _{m=-\infty }^{\infty }{h[n,m]x[m]}}

or equivalently, for a time-invariant system on redefining h,

{\displaystyle y[n]=\sum _{k=0}^{\infty }{h[k]x[n-k]}=\sum _{k=-\infty }^{\infty }{h[k]x[n-k]}}

where {\displaystyle k=n-m} represents the lag time between the stimulus at time m and the response at time n.

== See also ==
Shift invariant system
Linear control
Linear time-invariant system
Nonlinear system
System analysis
System of linear equations

== References ==
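The time-invariant convolution sum above is easy to exercise numerically. A minimal sketch in plain Python (the first-order impulse response h[k] = 0.5^k is an assumed example, not taken from the text), which also spot-checks the superposition property from the definition:

```python
def convolve(h, x):
    """Causal convolution sum y[n] = sum_k h[k] x[n-k] for finite-length
    sequences (taken as zero outside the given index ranges)."""
    y = [0.0] * (len(h) + len(x) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y

# Assumed example: impulse response of the recursion y[n] = x[n] + 0.5 y[n-1].
h = [0.5 ** k for k in range(16)]

# Convolving with a unit impulse reproduces the impulse response itself.
delta = [1.0, 0.0, 0.0]
assert convolve(h, delta)[:16] == h

# Superposition: H{a x1 + b x2} equals a H{x1} + b H{x2}.
x1 = [1.0, 2.0, 3.0]
x2 = [0.5, -1.0, 4.0]
a, b = 2.0, -3.0
lhs = convolve(h, [a * u + b * v for u, v in zip(x1, x2)])
rhs = [a * u + b * v for u, v in zip(convolve(h, x1), convolve(h, x2))]
assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))
```

The double loop is the direct O(NM) form of the sum; library routines compute the same quantity more efficiently, but the naive version makes the index bookkeeping k = n − m explicit.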
{
"page_id": 722503,
"source": null,
"title": "Linear system"
}
Prehydrated electrons are free electrons that occur in water under irradiation. Usually they form complexes with water molecules and become hydrated electrons. They can also react with the bases of the nucleotides dGMP and dTMP in aqueous solution. This suggests they may also react with the bases of the DNA double helix, ultimately breaking molecular bonds and causing DNA damage. This mechanism is hypothesized to be a cause of radiation damage to DNA. == References ==
{
"page_id": 43845195,
"source": null,
"title": "Prehydrated electrons"
}
Downstream processing refers to the recovery and purification of biosynthetic products, particularly pharmaceuticals, from natural sources such as animal tissue, plant tissue or fermentation broth, including the recycling of salvageable components as well as the proper treatment and disposal of waste. It is an essential step in the manufacture of pharmaceuticals such as antibiotics, hormones (e.g. insulin and human growth hormone), antibodies (e.g. infliximab and abciximab) and vaccines; antibodies and enzymes used in diagnostics; industrial enzymes; and natural fragrance and flavor compounds.

Downstream processing is usually considered a specialized field in biochemical engineering, which is itself a specialization within chemical engineering. Many of the key technologies were developed by chemists and biologists for laboratory-scale separation of biological and synthetic products, whilst the role of biochemical and chemical engineers is to develop the technologies towards larger production capacities.

Downstream processing and analytical bioseparation both refer to the separation or purification of biological products, but at different scales of operation and for different purposes. Downstream processing implies manufacture of a purified product fit for a specific use, generally in marketable quantities, while analytical bioseparation refers to purification for the sole purpose of measuring a component or components of a mixture, and may deal with sample sizes as small as a single cell.

== Stages ==
A widely recognized heuristic for categorizing downstream processing operations divides them into four groups, which are applied in order to bring a product from its natural state as a component of a tissue, cell or fermentation broth through progressive improvements in purity and concentration.
Removal of insolubles is the first step and involves the capture of the product as a solute in a particulate-free liquid, for example the separation of cells, cell debris or other particulate matter from fermentation broth containing an antibiotic. Typical operations to
achieve this are filtration, centrifugation, sedimentation, precipitation, flocculation, electro-precipitation, and gravity settling. Additional operations such as grinding, homogenization, or leaching, required to recover products from solid sources such as plant and animal tissues, are usually included in this group.

Product isolation is the removal of those components whose properties vary considerably from those of the desired product. For most products, water is the chief impurity, and isolation steps are designed to remove most of it, reducing the volume of material to be handled and concentrating the product. Solvent extraction, adsorption, ultrafiltration, and precipitation are some of the unit operations involved.

Product purification is done to separate those contaminants that resemble the product very closely in physical and chemical properties. Consequently, steps in this stage are expensive to carry out and require sensitive and sophisticated equipment. This stage contributes a significant fraction of the entire downstream processing expenditure. Examples of operations include affinity, size-exclusion, reversed-phase and ion-exchange chromatography, crystallization and fractional precipitation.

Product polishing describes the final processing steps, which end with packaging of the product in a form that is stable, easily transportable and convenient. Crystallization, desiccation, lyophilization and spray drying are typical unit operations. Depending on the product and its intended use, polishing may also include operations to sterilize the product and remove or deactivate trace contaminants which might compromise product safety. Such operations might include the removal of viruses or depyrogenation.

A few product recovery methods may be considered to combine two or more stages. For example, expanded bed adsorption (Vennapusa et al. 2008) accomplishes removal of insolubles and product isolation in a single step.
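Because the four groups above are applied in sequence, per-stage losses compound: the overall recovery is the product of the individual stage yields. A small bookkeeping sketch (all yield numbers are hypothetical, chosen only for illustration):

```python
# Hypothetical per-stage recoveries for the four-group heuristic above.
stages = [
    ("removal of insolubles", 0.95),
    ("product isolation", 0.90),
    ("product purification", 0.80),
    ("product polishing", 0.92),
]

def overall_yield(stage_yields):
    """Overall recovery is the product of the individual stage yields."""
    total = 1.0
    for _name, stage_yield in stage_yields:
        total *= stage_yield
    return total

# Even though no single stage above loses more than 20% of the product,
# the cumulative recovery is only about 63%.
recovery = overall_yield(stages)
```

This multiplicative bookkeeping is why minimizing the number of steps, or combining stages as in expanded bed adsorption, matters for process economics.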
Affinity chromatography often isolates and purifies in a single step.

== See also ==
Fermentation (biochemistry)
Separation process
Unit operation
Validation (drug manufacture)
Biorefinery

== References ==
Ladisch, Michael R. (2001).
Bioseparations Engineering: Principles, Practice, and Economics. Wiley. ISBN 0-471-24476-7.
Harrison, Roger G.; Paul W. Todd; Scott R. Rudge; Demetri Petrides (2003). Bioseparations Science and Engineering. Oxford University Press. ISBN 0-19-512340-9.
Krishna Prasad, Nooralabettu (2010). Downstream Processing - A New Horizon in Biotechnology. Prentice Hall of India Pvt. Ltd, New Delhi. ISBN 978-81-203-4040-4.
{
"page_id": 2819660,
"source": null,
"title": "Downstream processing"
}
|
Vomiting agents are chemical weapon agents that cause vomiting. Prolonged exposure can be lethal. They were first used during World War I. == Examples == Adamsite Chloropicrin Diphenylchlorarsine Diphenylcyanoarsine Diphenylamincyanoarsine == References ==
|
{
"page_id": 56821329,
"source": null,
"title": "Vomiting agent"
}
|
Fouling communities are communities of organisms found on artificial surfaces like the sides of docks, marinas, harbors, and boats. Settlement panels made from a variety of substances have been used to monitor settlement patterns and to examine several community processes (e.g., succession, recruitment, predation, competition, and invasion resistance). These communities are characterized by the presence of a variety of sessile organisms including ascidians, bryozoans, mussels, tube-building polychaetes, sea anemones, sponges, barnacles, and more. Common predators on and around fouling communities include small crabs, starfish, fish, limpets, chitons, other gastropods, and a variety of worms. == Ecology == Fouling communities follow a distinct succession pattern in a natural environment. == Environmental impact == === Impacts on humans === Fouling communities can have a negative economic impact on humans by damaging the bottoms of boats, docks, and other marine human-made structures. This effect is known as biofouling, and it has been combated with anti-fouling paint, which is now known to introduce toxic metals to the marine environment. Fouling communities contain a variety of species, many of which are filter feeders, meaning that organisms in the fouling community can also improve water clarity. === Invasive species === Fouling communities do grow on natural structures; however, these natural communities are largely made up of native species, whereas the communities growing on man-made structures have larger populations of invasive species. This difference in species diversity between human structures and natural substrate is likely dependent on human pollution, which is known to weaken native species and create a community and environment dominated by non-indigenous species. These largely non-indigenous communities living on docks and boats usually have a higher resistance to anthropogenic disturbances. 
This effect is sorely felt in untouched native marine communities, as non-indigenous species growing on boat hulls are transported across the world,
to wherever the boat anchors. == Research history == Fouling communities were highlighted particularly in the literature of marine ecology as a potential example of alternate stable states through the work of John Sutherland in the 1970s at Duke University, although this was later called into question by Connell and Sousa. Fouling communities have been used to test the ecological effectiveness of artificial coral reefs. == See also == Biofouling Ecological succession Didemnum vexillum == References == == External links == http://research.ncl.ac.uk/biofouling/ is the Newcastle University barnacle and biofouling information site. http://www.imo.org/en/OurWork/Environment/Biofouling/Pages/default.aspx is the International Maritime Organization information about biofouling which includes a comprehensive list of invasive species in the fouling community. https://darchive.mblwhoilibrary.org/bitstream/handle/1912/191/chapter%203.pdf?sequence=11 https://pdxscholar.library.pdx.edu/cgi/viewcontent.cgi?article=4896&context=open_access_etds
|
{
"page_id": 8980050,
"source": null,
"title": "Fouling community"
}
|
Environmental chemistry is the scientific study of the chemical and biochemical phenomena that occur in natural places. It should not be confused with green chemistry, which seeks to reduce potential pollution at its source. It can be defined as the study of the sources, reactions, transport, effects, and fates of chemical species in the air, soil, and water environments, and the effect of human activity and biological activity on these. Environmental chemistry is an interdisciplinary science that includes atmospheric, aquatic and soil chemistry, as well as relying heavily on analytical chemistry; it is related to environmental and other areas of science. Environmental chemistry involves first understanding how the uncontaminated environment works, which chemicals are naturally present in what concentrations, and with what effects. Without this it would be impossible to accurately study the effects humans have on the environment through the release of chemicals. Environmental chemists draw on a range of concepts from chemistry and various environmental sciences to assist in their study of what is happening to a chemical species in the environment. Important general concepts from chemistry include understanding chemical reactions and equations, solutions, units, sampling, and analytical techniques. == Contaminant == A contaminant is a substance present in nature at a level higher than fixed levels or that would not otherwise be there. This may be due to human activity and bioactivity. The term contaminant is often used interchangeably with pollutant, which is a substance that detrimentally impacts the surrounding environment. While a contaminant is sometimes a substance present in the environment as a result of human activity but without harmful effects, it is sometimes the case that toxic or harmful effects from contamination only become apparent at a later date. The "medium" such as soil or organism such as fish affected by the pollutant or contaminant is called
a receptor, whilst a sink is a chemical medium or species that retains and interacts with the pollutant, such as a carbon sink, where it may be transformed by microbes. == Environmental indicators == Chemical measures of water quality include dissolved oxygen (DO), chemical oxygen demand (COD), biochemical oxygen demand (BOD), total dissolved solids (TDS), pH, nutrients (nitrates and phosphorus), heavy metals, soil chemicals (including copper, zinc, cadmium, lead and mercury), and pesticides. == Applications == Environmental chemistry is used by the Environment Agency in England, Natural Resources Wales, the United States Environmental Protection Agency, the Association of Public Analysts, and other environmental agencies and research bodies around the world to detect and identify the nature and source of pollutants. These can include: Heavy metal contamination of land by industry. These metals can then be transported into water bodies and be taken up by living organisms such as animals and plants. PAHs (polycyclic aromatic hydrocarbons) in large bodies of water contaminated by oil spills or leaks. Many of the PAHs are carcinogens and are extremely toxic. They are regulated by concentration (ppb) using environmental chemistry and chromatography laboratory testing. Nutrients leaching from agricultural land into water courses, which can lead to algal blooms and eutrophication. Urban runoff of pollutants washing off impervious surfaces (roads, parking lots, and rooftops) during rain storms. Typical pollutants include gasoline, motor oil and other hydrocarbon compounds, metals, nutrients and sediment (soil). Organometallic compounds. == Methods == Quantitative chemical analysis is a key part of environmental chemistry, since it provides the data that frame most environmental studies. Common analytical techniques used for quantitative determinations in environmental chemistry include classical wet chemistry, such as gravimetric, titrimetric and electrochemical methods. 
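As a minimal illustration of the titrimetric wet-chemistry calculations mentioned above, the sketch below back-calculates an analyte concentration from the titrant volume and molarity at the endpoint; the function name and all sample values are hypothetical placeholders, and a simple 1:1 analyte:titrant stoichiometry is assumed unless overridden.

```python
def titration_concentration(v_titrant_ml: float, m_titrant: float,
                            v_sample_ml: float, stoich_ratio: float = 1.0) -> float:
    """Analyte molarity (mol/L) from the titrant volume/molarity at the endpoint.

    Assumes a reaction with the given analyte:titrant mole ratio.
    """
    moles_titrant = m_titrant * v_titrant_ml / 1000.0   # mol of titrant delivered
    moles_analyte = moles_titrant * stoich_ratio        # mol of analyte present
    return moles_analyte / (v_sample_ml / 1000.0)       # mol per litre of sample

# Hypothetical example: a 25.0 mL acid sample neutralized by 12.5 mL of 0.100 M NaOH
c = titration_concentration(v_titrant_ml=12.5, m_titrant=0.100, v_sample_ml=25.0)
print(f"{c:.3f} M")  # 0.050 M
```

The same pattern extends to other stoichiometries by passing `stoich_ratio`, e.g. 0.5 for a diprotic acid titrated with a monobasic base.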
More sophisticated approaches are used in the determination of trace metals and organic compounds. Metals are commonly measured by atomic spectroscopy and
mass spectrometry: atomic absorption spectrophotometry (AAS) and inductively coupled plasma atomic emission (ICP-AES) or inductively coupled plasma mass spectrometry (ICP-MS) techniques. Organic compounds, including PAHs, are also commonly measured using mass spectrometric methods, such as gas chromatography–mass spectrometry (GC/MS) and liquid chromatography–mass spectrometry (LC/MS). Tandem mass spectrometry (MS/MS) and high-resolution/accurate-mass spectrometry (HR/AM) offer sub-part-per-trillion detection. Non-MS methods using GCs and LCs with universal or specific detectors are still staples in the arsenal of available analytical tools. Other parameters often measured in environmental chemistry are radiochemicals. These are pollutants which emit radioactive materials, such as alpha and beta particles, posing danger to human health and the environment. Particle counters and scintillation counters are most commonly used for these measurements. Bioassays and immunoassays are utilized for toxicity evaluations of chemical effects on various organisms. Polymerase chain reaction (PCR) is able to identify species of bacteria and other organisms through specific DNA and RNA gene isolation and amplification, and is showing promise as a valuable technique for identifying environmental microbial contamination. == Published analytical methods == Peer-reviewed test methods have been published by government agencies and private research organizations. Approved published methods must be used when testing to demonstrate compliance with regulatory requirements. == Notable environmental chemists == Joan Berkowitz Paul Crutzen (Nobel Prize in Chemistry, 1995) Philip Gschwend Alice Hamilton John M. Hayes Charles David Keeling Ralph Keeling Mario Molina (Nobel Prize in Chemistry, 1995) James J. 
Morgan Clair Patterson Roger Revelle F. Sherwood Rowland (Nobel Prize in Chemistry, 1995) Robert Angus Smith Susan Solomon Werner Stumm Ellen Swallow Richards Hans Suess John Tyndall == See also == Environmental monitoring Freshwater environmental quality parameters Green chemistry Green Chemistry Journal Journal of Environmental Monitoring Important publications in Environmental chemistry List of chemical analysis methods == References == == Further reading == Johan Alfredo Linthorst, "Notes on Environmental Engagement within the American Chemical Society, 1960-1990," Bulletin for the History of Chemistry 50 (1), pp. 52-56, 2025. NCERT Class XI Chemistry textbook (Unit 14). == External links == List of links for Environmental Chemistry - from the WWW Virtual Library International Journal of Environmental Analytical Chemistry
|
{
"page_id": 656979,
"source": null,
"title": "Environmental chemistry"
}
|
Sucrose acetate isobutyrate (SAIB) is an emulsifier with E number E444. In the United States, SAIB is categorized as generally recognized as safe (GRAS) as a food additive in cocktail mixers, beer, malt beverages, or wine coolers, and is a potential replacement for brominated vegetable oil. == Chemistry == SAIB can be prepared by esterification of sucrose with acetic anhydride and isobutyric anhydride. == Uses == Beverage emulsions - weighting agent Color cosmetics and skin care Flavorings (orange flavor) Fragrance fixative Hair care Horse styling products == References == == External links == InChem
|
{
"page_id": 11798101,
"source": null,
"title": "Sucrose acetate isobutyrate"
}
|
The molecular formula C2H2O3 (molar mass: 74.04 g/mol, exact mass: 74.0004 u) may refer to: Formic anhydride, or methanoic anhydride Glyoxylic acid, or oxoacetic acid
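The molar and exact masses quoted above are straightforward to reproduce from standard atomic masses; a quick sketch, using atomic-weight and principal-isotope mass values rounded from the IUPAC tables:

```python
# Standard atomic weights (g/mol) and exact masses of the principal
# isotopes (u), rounded from IUPAC tables.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}
EXACT_MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915}

FORMULA = {"C": 2, "H": 2, "O": 3}  # C2H2O3

molar = sum(n * ATOMIC_WEIGHT[el] for el, n in FORMULA.items())
exact = sum(n * EXACT_MASS[el] for el, n in FORMULA.items())

print(f"molar mass = {molar:.3f} g/mol")  # 74.035, i.e. 74.04 when rounded to 2 dp
print(f"exact mass = {exact:.4f} u")      # 74.0004
```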
|
{
"page_id": 12387929,
"source": null,
"title": "C2H2O3"
}
|
Counterion condensation is a phenomenon described by Manning's theory (Manning 1969), which assumes that counterions can condense onto polyions until the charge density between neighboring monomer charges along the polyion chain is reduced below a certain critical value. In the model the real polyion chain is replaced by an idealized line charge, where the polyion is represented by a uniformly charged thread of zero radius, infinite length and finite charge density, and the condensed counterion layer is assumed to be in physical equilibrium with the ionic atmosphere surrounding the polyion. The uncondensed mobile ions in the ionic atmosphere are treated within the Debye–Hückel (DH) approximation. The phenomenon of counterion condensation takes place when the dimensionless Coulomb coupling strength Γ = λ B / l c h a r g e > 1 {\displaystyle \Gamma =\lambda _{B}/l_{charge}>1} , where λ B {\displaystyle \lambda _{B}} represents the Bjerrum length and l c h a r g e {\displaystyle l_{charge}} the distance between neighboring charged monomers. In this case the Coulomb interactions dominate over the thermal interactions and counterion condensation is favored. For many standard polyelectrolytes, this phenomenon is relevant, since the distance between neighboring monomer charges typically ranges between 2 and 3 Å and λ B ≈ {\displaystyle \lambda _{B}\approx } 7 Å in water. The Manning theory states that the fraction of "condensed" counterions is 1 − 1 / Γ {\displaystyle 1-1/\Gamma } , where "condensed" means that the counterions are located within the Manning radius R M {\displaystyle R_{M}} . At infinite dilution the Manning radius diverges and the actual concentration of ions close to the charged rod is reduced (in agreement with the law of dilution). == Criticism == The counterion condensation model originally only describes the behaviour of a charged rod. It competes here with Poisson-Boltzmann
theory, which was shown to give less artificial results than the counterion condensation theories. == References == Manning, G.S. (1969). "Limiting Laws and Counterion Condensation in Polyelectrolyte Solutions I. Colligative Properties". J. Chem. Phys. 51 (3): 924–933. Bibcode:1969JChPh..51..924M. doi:10.1063/1.1672157. Archived from the original on 2011-07-19. Retrieved 2010-06-09. Bathe, M.; Rutledge, G.C.; Grodzinsky, A.J.; Tidor, B. (2005). "A Coarse-Grained Molecular Model for Glycosaminoglycans: Application to Chondroitin, Chondroitin Sulfate, and Hyaluronic Acid". Biophys. J. 88 (6): 3870–87. Bibcode:2005BpJ....88.3870B. doi:10.1529/biophysj.104.058800. PMC 1305620. PMID 15805173.
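The Manning criterion described above is easy to evaluate numerically. The sketch below computes the Bjerrum length in water at 25 °C from physical constants (λB = e²/4πε0εr kBT, which gives the ≈7 Å value quoted above) and then the condensed-counterion fraction 1 − 1/Γ for a hypothetical charge spacing of 2.5 Å, within the 2–3 Å range typical of standard polyelectrolytes:

```python
import math

# Physical constants (SI, CODATA values)
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
kB = 1.380649e-23        # Boltzmann constant, J/K

def bjerrum_length(T: float = 298.15, eps_r: float = 78.4) -> float:
    """Bjerrum length lambda_B = e^2 / (4*pi*eps0*eps_r*kB*T), in metres."""
    return e**2 / (4 * math.pi * eps0 * eps_r * kB * T)

lam_B = bjerrum_length()    # ~7.1e-10 m, i.e. ~7 Å in water, as quoted above
l_charge = 2.5e-10          # hypothetical spacing between monomer charges, m
Gamma = lam_B / l_charge    # dimensionless Coulomb coupling strength

if Gamma > 1:  # condensation regime
    condensed_fraction = 1 - 1 / Gamma  # Manning's condensed counterion fraction
    print(f"Gamma = {Gamma:.2f}, condensed fraction = {condensed_fraction:.2f}")
```

With these inputs roughly 65% of the counterions are predicted to condense, which is why the effect is significant for most standard polyelectrolytes in water.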
|
{
"page_id": 27657824,
"source": null,
"title": "Counterion condensation"
}
|
Cold filter plugging point (CFPP) is the lowest temperature, expressed in degrees Celsius (°C), at which a given volume of a diesel-type fuel still passes through a standardized filtration device in a specified time when cooled under certain conditions. This test gives an estimate of the lowest temperature at which a fuel will flow trouble-free in certain fuel systems. This is important because, in cold temperate countries, a fuel with a high cold filter plugging point will clog vehicle fuel systems more easily. The test is important in relation to the use of additives that allow extending the usage of winter diesel to temperatures below the cloud point. Tests according to EN 590 show that a fuel with a cloud point of +1 °C can have a CFPP of −10 °C. Current additives allow a CFPP of −20 °C based on diesel fuel with a cloud point of −7 °C. The trustworthiness of EN 590 has been criticized as too low for modern diesel engines – the German ADAC has run a test series on customary winter diesel in a cold chamber. All diesel brands exceeded the legal minimum by 3 to 11 degrees in the laboratory according to the legal DIN test. One of the real diesel engines, however, stopped working even before the legal minimum was reached, presumably due to an undersized filter heater. Notably, the experiments did not show a direct correlation between the CFPP value of the mineral oil and the cold-start capability of the diesel engines – hence the automobile club suggests the creation of a new test standard. == Test method == The ASTM number for the test method to define cold filter plugging point is ASTM D6371. == See also == Cloud point Petroleum Pour point == References == == External links == BP information
|
{
"page_id": 13502050,
"source": null,
"title": "Cold filter plugging point"
}
|
Solar reforming is the sunlight-driven conversion of diverse carbon waste resources (including solid, liquid, and gaseous waste streams such as biomass, plastics, industrial by-products, atmospheric carbon dioxide, etc.) into sustainable fuels (or energy vectors) and value-added chemicals. It encompasses a set of ideas centred on the use of solar energy. Solar reforming offers an attractive and unifying solution to address the contemporary challenges of climate change and environmental pollution by creating a sustainable circular network of waste upcycling, clean fuel (and chemical) generation and the consequent mitigation of greenhouse emissions (in alignment with the United Nations Sustainable Development Goals). == Background == The earliest sunlight-driven reforming (now referred to as photoreforming or PC reforming, which forms a small sub-section of solar reforming; see Definition and classifications section) of waste-derived substrates involved the use of a TiO2 semiconductor photocatalyst (generally loaded with a hydrogen evolution co-catalyst such as Pt). Kawai and Sakata from the Institute for Molecular Science, Okazaki, Japan in the 1980s reported that the organics derived from different solid waste matter could be used as electron donors to drive the generation of hydrogen gas over TiO2 photocatalyst composites. In 2017, Wakerley, Kuehnel and Reisner at the University of Cambridge, UK demonstrated the photocatalytic production of hydrogen using raw lignocellulosic biomass substrates in the presence of visible-light responsive CdS|CdOx quantum dots under alkaline conditions. This was followed by the utilization of less-toxic, carbon-based, visible-light absorbing photocatalyst composites (for example carbon-nitride based systems) for biomass and plastics photoreforming to hydrogen and organics by Kasap, Uekert and Reisner. 
In addition to variations of carbon nitride, other photocatalyst composite systems based on graphene oxides, MXenes, co-ordination polymers and metal chalcogenides were reported during this period. A major limitation of PC reforming is the use of conventional harsh alkaline pre-treatment conditions (pH >13 and high temperatures) for
polymeric substrates such as condensation plastics, accounting for more than 80% of the operation costs. This was circumvented with the introduction of a new chemoenzymatic reforming pathway in 2023 by Bhattacharjee, Guo, Reisner and Hollfelder, which employed near-neutral pH and moderate temperatures for pre-treating plastics and nanoplastics. In 2020, Jiao and Xie reported the photocatalytic conversion of addition plastics such as polyethylene and polypropylene to high-energy-density C2 fuels over a Nb2O5 catalyst under natural conditions. The photocatalytic process (referred to as PC reforming; see Categorization and configurations section below) offers a simple, one-pot and facile deployment scope, but has several major limitations, making it challenging for commercial implementation. In 2021, sunlight-driven photoelectrochemical (PEC) systems/technologies operating with no external bias or voltage input were introduced by Bhattacharjee and Reisner at the University of Cambridge. These PEC reforming (see Categorization and configurations section) systems reformed diverse pre-treated waste streams (such as lignocellulose and PET plastics) to selective value-added chemicals with the simultaneous generation of green hydrogen, achieving areal production rates 100-10000 times higher than conventional photocatalytic processes. In 2023, Bhattacharjee, Rahaman and Reisner extended the PEC platform to a solar reactor which could reduce the greenhouse gas CO2 to different energy vectors (CO, syngas, or formate, depending on the type of catalyst integrated) and convert waste PET plastics to glycolic acid at the same time. This further inspired the direct capture and conversion of CO2 to products from flue gas and air (direct air capture) in a PEC reforming process (with simultaneous plastic conversion). Choi and Ryu demonstrated a polyoxometalate-mediated PEC process to achieve biomass conversion with unassisted hydrogen production in 2022. 
Similarly, Pan and Chu, in 2023 reported a PEC cell for renewable formate production from sunlight, CO2 and biomass-derived sugars. In 2025, Andrei, Roh and Yang demonstrated solar-driven hydrocarbon synthesis
by interfacing copper nanoflower catalysts on perovskite-based artificial leaves at the University of California, Berkeley. The devices can produce ethane and ethylene at high rates by coupling CO2 reduction with the oxidation of glycerol to value-added chemicals, which replaces the thermodynamically demanding O2 evolution reaction. These developments have led solar reforming (and electroreforming, where renewable electricity drives the redox processes) to gradually emerge as an active area of exploration. == Concept and considerations == === Definition and classifications === Solar reforming is the sunlight-driven transformation of waste substrates to valuable products (such as sustainable fuels and chemicals) as defined by scientists Subhajit Bhattacharjee, Stuart Linley, and Erwin Reisner in their 2024 Nature Reviews Chemistry article, where they conceptualized and formalized the field by introducing its concepts, classification, configurations and metrics. It generally operates without external heating and pressure, and also introduces a thermodynamic advantage over traditional green hydrogen or CO2-reduction fuel-producing methods such as water splitting or CO2 splitting, respectively. Depending on solar spectrum utilization, solar reforming can be classified into two categories: "solar catalytic reforming" and "solar thermal reforming". Solar catalytic reforming refers to transformation processes primarily driven by ultraviolet (UV) or visible light. It also includes the subset of 'photoreforming' encompassing utilization of high energy photons in the UV or near-UV region of the solar spectrum (for example, by semiconductor photocatalysts such as TiO2). Solar thermal reforming, on the other hand, exploits the infrared (IR) region for waste upcycling to generate products of high economic value. An important aspect of solar reforming is value creation, which means that the overall value creation from product formation must be greater than substrate value destruction. 
In terms of deployment architectures, solar catalytic reforming can be further categorized into: photocatalytic reforming (PC reforming), photoelectrochemical reforming (PEC reforming), and photovoltaic-electrochemical reforming (PV-EC reforming). === Advantages over conventional waste recycling and upcycling processes === Solar reforming offers several advantages over conventional methods of waste management or fuel/chemical production. It offers a less energy-intensive and low-carbon alternative to methods of waste reforming such as pyrolysis and gasification which require high energy input. Solar reforming also provides several benefits over traditional green hydrogen production methods such as water splitting (H2O → H2 + 1/2O2, ΔG° = 237 kJ mol−1). It offers a thermodynamic advantage over water splitting by circumventing the energetically and kinetically demanding water oxidation half reaction (E0 = +1.23 V vs. reversible hydrogen electrode (RHE)) by energetically neutral oxidation of waste-derived organics (CxHyOz + (2x−z)H2O → (2x−z+y/2)H2 + xCO2; ΔG° ~0 kJ mol−1). This results in better performance in terms of higher production rates, and also translates to other similar processes which depend on water oxidation as the counter-reaction such as CO2 splitting. Furthermore, concentrated streams of hydrogen produced from solar reforming are safer than explosive mixtures of oxygen and hydrogen (from traditional water splitting), which otherwise require additional separation costs. The added economic advantage of forming two different valuable products (for example, gaseous reductive fuels and liquid oxidative chemicals) simultaneously makes solar reforming suitable for commercial applications. === Solar reforming metrics === Solar reforming encompasses a range of technological processes and configurations and therefore, suitable performance metrics can evaluate the commercial viability. In artificial photosynthesis, the most common metric is the solar-to-fuel conversion efficiency (ηSTF) as shown below, where 'r' is the product formation rate, 'ΔG' is the Gibbs free energy change during the process, 'A' is the sunlight irradiation area and 'P' is the total light intensity flux. 
The ηSTF can be adopted as a metric for solar reforming but with certain considerations. Since the ΔG values for solar reforming processes are very low (ΔG ~0
kJ mol‒1), this makes the ηSTF per definition close to zero, despite the high production rates and quantum yields. However, replacing the ΔG for product formation (during solar reforming) with that of product utilisation (|ΔGuse|; such as combustion of the hydrogen fuel generated) can give a better representation of the process efficiency. η S T F = r S R ( m o l ⋅ s − 1 ) × Δ G S R ( J ⋅ m o l − 1 ) P total ( W ⋅ m − 2 ) × A ( m 2 ) {\displaystyle \eta _{\mathrm {STF} }={\frac {\mathrm {r} _{\mathrm {SR} }\left(\mathrm {mol} \cdot \mathrm {s} ^{-1}\right)\times \Delta \mathrm {G} _{\mathrm {SR} }\left(\mathrm {J} \cdot \mathrm {mol} ^{-1}\right)}{\mathrm {P} _{\text{total }}\left(\mathrm {W} \cdot \mathrm {m} ^{-2}\right)\times \mathrm {A} \left(\mathrm {m} ^{2}\right)}}} Since solar reforming is highly dependent on the light harvester and its area of photon collection, a more technologically relevant metric is the areal production rate (rareal) as shown, where 'n' is the moles of product formed, 'A' is the sunlight irradiation area and 't' is the time. r areal = n product ( m o l ) A ( m 2 ) × t ( h ) {\displaystyle \mathrm {r} _{\text{areal}}={\frac {\mathrm {n} _{\text{product}}(\mathrm {mol} )}{\mathrm {A} \left(\mathrm {m} ^{2}\right)\times \mathrm {t} (\mathrm {h} )}}} Although rareal is a more consistent metric for solar reforming, it neglects some key parameters such as type of waste utilized, pre-treatment costs, product value, scaling, other process and separation costs, deployment variables, etc. Therefore, a more adaptable and robust metric is the solar-to-value creation rate (rSTV) which can encompass all these factors and provide a more holistic and practical picture from the economic or commercial point of view. The simplified equation for rSTV is shown below,
where Ci and Ck are the costs of the product 'i' and substrate 'k', respectively. Cp is the pre-treatment cost for the waste substrate 'k', and ni and nk are amounts (in moles) of the product 'i' formed and substrate 'k' consumed during solar reforming, respectively. Note that the metric is adaptable and can be expanded to include other relevant parameters as applicable. r S T V = ∑ i = 1 M C i ( $ m o l − 1 ) × n i ( m o l ) − ∑ k = 1 N ( C k + C p ) ( $ m o l − 1 ) × n k ( m o l ) A ( m 2 ) × t ( h ) {\displaystyle r_{\mathrm {STV} }={\frac {{\textstyle \sum _{i=1}^{M}\displaystyle C_{i}(\$mol^{-1})\times n_{i}(mol)}-{\textstyle \sum _{k=1}^{N}\displaystyle {\bigl (}C_{k}+C_{p}{\bigr )}(\$mol^{-1})\times n_{k}(mol)}}{A(m^{2})\times t(h)}}{}} === Categorization and configurations === Solar reforming depends on the properties of the light absorber and the catalysts involved, and their selection, screening, and integration to generate maximum value. The design and deployment of solar reforming technologies dictate the efficiency, scale, and target substrates/products. In this context, solar reforming (more specifically, solar catalytic reforming) can be classified into three architectures: Photocatalytic (PC) reforming - PC reforming is a one-pot process involving homogeneous or heterogenous photocatalyst suspensions (or immobilized photocatalysts on sheets or floating materials for easy recovery), which, under sunlight irradiation generate charge carriers (electron-hole pairs) to catalyze redox reactions (UV or near-UV based photoreforming systems generally also come under PC reforming). Despite the low cost and simplicity of PC reforming, there are major drawbacks of this approach which includes low product formation rates, poor selectivity of oxidation products or overoxidation to release CO2, challenging catalyst/process optimization and harsh pre-treatment conditions. Photoelectrochemical (PEC) reforming
- PEC reforming involves the use of PEC systems/assemblies which consist of separated (photo)electrodes generally connected using a wire and submerged in solution (electrolyte). A photoelectrode consists of a light-absorber and additional charge transport and catalyst layers to facilitate the redox processes. While conventional PEC systems typically require a bias or voltage input in addition to the energy obtained from incident light irradiation, PEC reforming ideally operates with a single light absorber without any external bias or voltage (that is, completely driven by sunlight). PEC reforming can already produce clean fuels and valuable chemicals with high selectivity and achieve production rates which are 2-4 orders of magnitude higher than conventional PC processes. The spatial separation between the redox processes offered by PEC systems allows flexibility in the screening and integration of light-absorbers and catalysts, and also better product separation. They can also benefit from better spectral utilization, for example using solar concentrators or thermoelectric modules to harvest heat, thereby improving reaction kinetics and performance. The versatility and high performance of these new PEC arrangements therefore offers wide scope for further exploitation and research. PV-EC reforming and extension to 'electroreforming' systems - PV-EC reforming refers to the use of electricity generated from photovoltaic panels (and therefore driven by sunlight) to drive electrochemical (electrolysis) reactions for waste reforming. The concept of PV-EC reforming can be further extended to 'electroreforming', where renewable electricity from sources other than the sun (for example, wind, hydro, nuclear, among others) is used to power the electrochemical reactions, achieving valuable fuel and chemical production from waste feedstocks. 
While traditionally most electrolysers, including commercial ones focus on water splitting to produce hydrogen, new electrochemical systems, catalysts and concepts have emerged which have started to look into waste substrates for utilisation as sustainable feedstocks. == Introduction of 'Photon Economy' ==
An important concept introduced in the context of solar reforming is the 'photon economy', which, as defined by Bhattacharjee, Linley and Reisner, is the maximum utilization of all incident photons for maximizing product formation and value creation. An ideal solar reforming process is one where the light absorber can absorb incident UV and visible light photons with maximum quantum yield, generating a high charge carrier concentration to drive the redox half-reactions at maximum rate. On the other hand, the residual, non-absorbed low-energy IR photons may be used for boosting reaction kinetics, waste pre-treatment or other means of value creation (for example, desalination, etc.). Therefore, proper light and thermal management through various means (such as using solar concentrators, thermoelectric modules, among others) is encouraged to achieve both an atom-economical and photon-economical approach to extract maximum value from solar reforming processes. == Outlook and future scope == Deployment of any solar reforming technology (PC, PEC, or PV-EC) is speculative and depends on many factors. Solar reforming may not be only limited to the conventional chemical pathways discussed, and may also include other relevant industrial processes such as light-driven organic transformations, flow photochemistry, and integration with industrial electrolysis, among others. The products from conventional solar reforming, such as green hydrogen or other platform chemicals, have a broad value-chain. It is also now understood that sustainable fuel/chemical-producing technologies of the future will rely on biomass, plastics, and CO2 as key carbon feedstocks to replace fossil fuels. Therefore, with sunlight being abundant and the cheapest source of energy, solar reforming is well-positioned to drive decarbonization and facilitate the transition from a linear to a circular economy in the coming decades. 
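The metrics defined earlier (ηSTF, rareal and the simplified rSTV) can be illustrated with a short numerical sketch. Every input below (device area, irradiance, production rate, prices and costs) is a hypothetical placeholder rather than data from the literature; note the use of |ΔGuse| (~237 kJ mol−1 for combustion of H2 back to liquid water) in place of the near-zero reforming ΔG, as suggested above.

```python
# Hypothetical PEC reforming device: all numbers are illustrative placeholders.
A = 1.0                   # irradiated area, m^2
P = 1000.0                # solar flux, W/m^2 (AM1.5G standard intensity)
r_H2 = 2.0e-3 / 3600.0    # H2 production rate, mol/s (2 mmol/h, hypothetical)
dG_use = 237_000.0        # |dG_use|, J/mol: combustion of H2 to liquid water

# Solar-to-fuel efficiency using |dG_use| instead of the ~0 reforming dG
eta_STF = r_H2 * dG_use / (P * A)

# Areal production rate, mol m^-2 h^-1
r_areal = (r_H2 * 3600.0) / A

# Simplified solar-to-value creation rate, $ m^-2 h^-1 (hypothetical prices)
C_product = 0.50      # $/mol of product formed
C_substrate = 0.05    # $/mol of waste substrate consumed
C_pretreat = 0.02     # $/mol pre-treatment cost of the substrate
n_product = 2.0e-3    # mol of product formed in one hour
n_substrate = 1.0e-3  # mol of substrate consumed in one hour
t = 1.0               # h
r_STV = (C_product * n_product - (C_substrate + C_pretreat) * n_substrate) / (A * t)

print(f"eta_STF ~ {eta_STF:.2e}")
print(f"r_areal = {r_areal:.1e} mol m^-2 h^-1")
print(f"r_STV = {r_STV:.2e} $ m^-2 h^-1")
```

With these placeholder inputs ηSTF comes out around 1.3 × 10⁻⁴, illustrating why the areal rate and value-creation metrics are more informative for reforming than the efficiency alone.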
== See also == Artificial photosynthesis Circular economy Conference of the parties Electrochemical reduction of carbon dioxide Electrochemistry Hydrogen economy Net zero emissions Photocatalysis Photoelectrochemistry Solar
|
{
"page_id": 76088934,
"source": null,
"title": "Solar reforming"
}
|
fuel == References ==
|
{
"page_id": 76088934,
"source": null,
"title": "Solar reforming"
}
|
Most vaccines consist of viruses that have been attenuated, disabled, weakened or killed in some way so that their virulent properties are no longer effective. A simple genetically modified vaccine, based on a thymidine kinase-deficient mutant of pseudorabies virus, was reportedly available as early as 2001 as a commercial vaccine to control Aujeszky's disease in Europe, North America and Japan. == References ==
|
{
"page_id": 65865322,
"source": null,
"title": "Genetically modified vaccine"
}
|
Marek Mlodzik is the Chair of the Department of Molecular, Cell and Developmental Biology and also holds professorships in Oncological Sciences and Ophthalmology at the Mount Sinai School of Medicine in New York City. Prior to this (from 1991 to 2000) he was a Group Leader at EMBL Heidelberg. In 1997, Mlodzik was elected as a member of the European Molecular Biology Organization. He is known for his contributions to the generation of planar cell polarity in the Drosophila melanogaster epithelium. == References ==
|
{
"page_id": 4261485,
"source": null,
"title": "Marek Mlodzik"
}
|
The Gausson is a soliton solution of the logarithmic Schrödinger equation, which describes a quantum particle in a possible nonlinear quantum mechanics. The logarithmic Schrödinger equation preserves the dimensional homogeneity of the equation, i.e. the product of independent one-dimensional solutions remains a solution in multiple dimensions. While the nonlinearity alone cannot cause quantum entanglement between dimensions, the logarithmic Schrödinger equation can be solved by separation of variables. Let the nonlinear logarithmic Schrödinger equation in one dimension be given by ( ℏ = 1 {\displaystyle \hbar =1} , unit mass m = 1 {\displaystyle m=1} ): i ∂ ψ ∂ t = − 1 2 ∂ 2 ψ ∂ x 2 − a ln | ψ | 2 ψ {\displaystyle i{\partial \psi \over \partial t}=-{\frac {1}{2}}{\frac {\partial ^{2}\psi }{\partial x^{2}}}-a\ln |\psi |^{2}\psi } Assume Galilean invariance, i.e. ψ ( x , t ) = e − i E t ψ ( x − k t ) {\displaystyle {\frac {}{}}\psi (x,t)=e^{-iEt}\psi (x-kt)} Substituting y = x − k t {\displaystyle {\frac {}{}}y=x-kt} , the first equation can be written as − 1 2 [ ∂ ∂ y + i k ] 2 ψ − a ln | ψ | 2 ψ = ( E + k 2 2 ) ψ {\displaystyle -{\frac {1}{2}}\left[{{\frac {\partial }{\partial y}}+ik}\right]^{2}\psi -a\ln |\psi |^{2}\psi =\left(E+{\frac {k^{2}}{2}}\right)\psi } Substituting additionally Ψ ( y ) = e − i k y ψ ( y ) {\displaystyle {\frac {}{}}\Psi (y)=e^{-iky}\psi (y)} and assuming Ψ ( y ) = N e − ω y 2 / 2 {\displaystyle \Psi (y)=Ne^{-\omega y^{2}/2}} we get the normal Schrödinger equation for the quantum harmonic oscillator: − 1 2 ∂ 2 Ψ ∂ y 2 + a ω y 2 Ψ = (
|
{
"page_id": 42272365,
"source": null,
"title": "Gausson (physics)"
}
|
E + k 2 2 + a ln N 2 ) Ψ {\displaystyle -{\frac {1}{2}}{\frac {\partial ^{2}\Psi }{\partial y^{2}}}+a\omega y^{2}\Psi =\left(E+{\frac {k^{2}}{2}}+a\ln N^{2}\right)\Psi } (the constant a ln N 2 {\displaystyle a\ln N^{2}} comes from evaluating − a ln | Ψ | 2 {\displaystyle -a\ln |\Psi |^{2}} on the Gaussian). The solution is therefore the normal ground state of the harmonic oscillator, provided that ( a > 0 ) {\displaystyle (a>0)} and a ω = ω 2 / 2 {\displaystyle {\frac {}{}}a\omega =\omega ^{2}/2} , i.e. ω = 2 a {\displaystyle {\frac {}{}}\omega =2a} The full solitonic solution is therefore given by ψ ( x , t ) = N e − i E t e i k ( x − k t ) e − a ( x − k t ) 2 {\displaystyle {\frac {}{}}\psi (x,t)=Ne^{-iEt}e^{ik{(x-kt)}}e^{-a({x-kt})^{2}}} where E = a ( 1 − ln N 2 ) − k 2 / 2 {\displaystyle {\frac {}{}}E=a(1-\ln N^{2})-k^{2}/2} This solution describes a soliton moving with constant velocity and not changing the shape (modulus) of the Gaussian function. When a potential is added, not only can a single Gausson provide an exact solution to a number of cases of the logarithmic Schrödinger equation, it has been found that a linear combination of Gaussons can very accurately approximate excited states as well. This superposition property of Gaussons has been demonstrated for quadratic potentials. == References ==
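As a sanity check, the Gausson can be substituted back into the logarithmic Schrödinger equation numerically. This is a minimal sketch (not from the source) using arbitrary test parameters; the energy E = a(1 − ln N²) − k²/2 follows from evaluating the −a ln|ψ|² term on the Gaussian:

```python
import cmath
import math

# Arbitrary test parameters (a > 0 required for a normalizable Gausson)
a, k, N = 0.7, 1.3, 0.9
# Energy obtained by substituting the Gausson into the equation:
# the -a*ln|psi|^2 term contributes the constant a*ln(N^2).
E = a * (1 - math.log(N**2)) - k**2 / 2

def psi(x, t):
    """Gausson: N * exp(-iEt) * exp(ik(x-kt)) * exp(-a(x-kt)^2)."""
    y = x - k * t
    return N * cmath.exp(-1j * E * t) * cmath.exp(1j * k * y) * cmath.exp(-a * y * y)

def residual(x, t, h=1e-3):
    """i dpsi/dt + (1/2) d2psi/dx2 + a*ln|psi|^2 * psi, via central differences.
    Should vanish (up to finite-difference error) if psi solves the equation."""
    dpsi_dt = (psi(x, t + h) - psi(x, t - h)) / (2 * h)
    d2psi_dx2 = (psi(x + h, t) - 2 * psi(x, t) + psi(x - h, t)) / h**2
    p = psi(x, t)
    return 1j * dpsi_dt + 0.5 * d2psi_dx2 + a * math.log(abs(p) ** 2) * p

# Largest residual over a grid of sample points in x and t
worst = max(abs(residual(x / 4, t / 4)) for x in range(-8, 9) for t in range(-4, 5))
print(worst)  # limited only by finite-difference error
```

With h = 1e-3 the residual is dominated by the O(h²) truncation error of the central differences, so a worst-case value far below the O(1) scale of the individual terms confirms the solution.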
|
{
"page_id": 42272365,
"source": null,
"title": "Gausson (physics)"
}
|
Ḥasan Ibn al-Haytham (Latinized as Alhazen; full name Abū ʿAlī al-Ḥasan ibn al-Ḥasan ibn al-Haytham أبو علي، الحسن بن الحسن بن الهيثم; c. 965 – c. 1040) was a medieval mathematician, astronomer, and physicist of the Islamic Golden Age from present-day Iraq. Referred to as "the father of modern optics", he made significant contributions to the principles of optics and visual perception in particular. His most influential work is titled Kitāb al-Manāẓir (Arabic: كتاب المناظر, "Book of Optics"), written during 1011–1021, which survived in a Latin edition. The works of Alhazen were frequently cited during the scientific revolution by Isaac Newton, Johannes Kepler, Christiaan Huygens, and Galileo Galilei. Ibn al-Haytham was the first to correctly explain the theory of vision, and to argue that vision occurs in the brain, pointing to observations that it is subjective and affected by personal experience. He also stated the principle of least time for refraction, which would later become Fermat's principle. He made major contributions to catoptrics and dioptrics by studying reflection, refraction, and the nature of images formed by light rays. Ibn al-Haytham was an early proponent of the concept that a hypothesis must be supported by experiments based on confirmable procedures or mathematical reasoning. An early pioneer in the scientific method five centuries before Renaissance scientists, he is sometimes described as the world's "first true scientist". He was also a polymath, writing on philosophy, theology and medicine. Born in Basra, he spent most of his productive period in the Fatimid capital of Cairo and earned his living authoring various treatises and tutoring members of the nobility. Ibn al-Haytham is sometimes given the byname al-Baṣrī after his birthplace, or al-Miṣrī ("the Egyptian"). Al-Haytham was dubbed the "Second Ptolemy" by Abu'l-Hasan Bayhaqi and "The Physicist" by John Peckham. Ibn al-Haytham paved the way
|
{
"page_id": 1645,
"source": null,
"title": "Ibn al-Haytham"
}
|
for the modern science of physical optics. == Biography == Ibn al-Haytham (Alhazen) was born c. 965 to a family of Arab or Persian origin in Basra, Iraq, which was at the time part of the Buyid emirate. His initial influences were in the study of religion and service to the community. At the time, society had a number of conflicting views of religion that he ultimately sought to step aside from. This led him to delve into the study of mathematics and science. He held a position with the title of vizier in his native Basra, and became famous for his knowledge of applied mathematics, as evidenced by his attempt to regulate the flooding of the Nile. Upon his return to Cairo, he was given an administrative post. After he proved unable to fulfill this task as well, he incurred the ire of the caliph Al-Hakim, and is said to have been forced into hiding until the caliph's death in 1021, after which his confiscated possessions were returned to him. Legend has it that Alhazen feigned madness and was kept under house arrest during this period. During this time, he wrote his influential Book of Optics. Alhazen continued to live in Cairo, in the neighborhood of the famous University of al-Azhar, and lived from the proceeds of his literary production until his death in c. 1040. (A copy of Apollonius' Conics, written in Ibn al-Haytham's own handwriting, exists in Aya Sofya (MS Aya Sofya 2762, 307 fob., dated Safar 415 A.H. [1024]).) Among his students were Sorkhab (Sohrab), a Persian from Semnan, and Abu al-Wafa Mubashir ibn Fatek, an Egyptian prince. == Book of Optics == Alhazen's most famous work is his seven-volume treatise on optics Kitab al-Manazir (Book of Optics), written from 1011 to 1021.
|
{
"page_id": 1645,
"source": null,
"title": "Ibn al-Haytham"
}
|
In it, Ibn al-Haytham was the first to explain that vision occurs when light reflects from an object and then passes to one's eyes, and to argue that vision occurs in the brain, pointing to observations that it is subjective and affected by personal experience. Optics was translated into Latin by an unknown scholar at the end of the 12th century or the beginning of the 13th century. This work enjoyed a great reputation during the Middle Ages. The Latin version of De aspectibus was translated at the end of the 14th century into the Italian vernacular, under the title De li aspecti. It was printed by Friedrich Risner in 1572, with the title Opticae thesaurus: Alhazeni Arabis libri septem, nunc primum editi; Eiusdem liber De Crepusculis et nubium ascensionibus (English: Treasury of Optics: seven books by the Arab Alhazen, first edition; by the same, on twilight and the height of clouds). Risner is also the author of the name variant "Alhazen"; before Risner he was known in the West as Alhacen. Works by Alhazen on geometric subjects were discovered in the Bibliothèque nationale in Paris in 1834 by E. A. Sedillot. In all, A. Mark Smith has accounted for 18 full or near-complete manuscripts, and five fragments, which are preserved in 14 locations, including one in the Bodleian Library at Oxford, and one in the library of Bruges. === Theory of optics === Two major theories on vision prevailed in classical antiquity. The first theory, the emission theory, was supported by such thinkers as Euclid and Ptolemy, who believed that sight worked by the eye emitting rays of light. The second theory, the intromission theory supported by Aristotle and his followers, had physical forms entering the eye from an object. Previous Islamic writers (such as al-Kindi) had argued essentially on Euclidean,
|
{
"page_id": 1645,
"source": null,
"title": "Ibn al-Haytham"
}
|
Galenist, or Aristotelian lines. The strongest influence on the Book of Optics was from Ptolemy's Optics, while the description of the anatomy and physiology of the eye was based on Galen's account. Alhazen's achievement was to come up with a theory that successfully combined parts of the mathematical ray arguments of Euclid, the medical tradition of Galen, and the intromission theories of Aristotle. Alhazen's intromission theory followed al-Kindi (and broke with Aristotle) in asserting that "from each point of every colored body, illuminated by any light, issue light and color along every straight line that can be drawn from that point". This left him with the problem of explaining how a coherent image was formed from many independent sources of radiation; in particular, every point of an object would send rays to every point on the eye. What Alhazen needed was for each point on an object to correspond to one point only on the eye. He attempted to resolve this by asserting that the eye would only perceive perpendicular rays from the object – for any one point on the eye, only the ray that reached it directly, without being refracted by any other part of the eye, would be perceived. He argued, using a physical analogy, that perpendicular rays were stronger than oblique rays: in the same way that a ball thrown directly at a board might break the board, whereas a ball thrown obliquely at the board would glance off, perpendicular rays were stronger than refracted rays, and it was only perpendicular rays which were perceived by the eye. As there was only one perpendicular ray that would enter the eye at any one point, and all these rays would converge on the centre of the eye in a cone, this allowed him to resolve the problem
|
{
"page_id": 1645,
"source": null,
"title": "Ibn al-Haytham"
}
|
of each point on an object sending many rays to the eye; if only the perpendicular ray mattered, then he had a one-to-one correspondence and the confusion could be resolved. He later asserted (in book seven of the Optics) that other rays would be refracted through the eye and perceived as if perpendicular. His arguments regarding perpendicular rays do not clearly explain why only perpendicular rays were perceived; why would the weaker oblique rays not be perceived more weakly? His later argument that refracted rays would be perceived as if perpendicular does not seem persuasive. However, despite its weaknesses, no other theory of the time was so comprehensive, and it was enormously influential, particularly in Western Europe. Directly or indirectly, his De Aspectibus (Book of Optics) inspired much activity in optics between the 13th and 17th centuries. Kepler's later theory of the retinal image (which resolved the problem of the correspondence of points on an object and points in the eye) built directly on the conceptual framework of Alhazen. Alhazen showed through experiment that light travels in straight lines, and carried out various experiments with lenses, mirrors, refraction, and reflection. His analyses of reflection and refraction considered the vertical and horizontal components of light rays separately. Alhazen studied the process of sight, the structure of the eye, image formation in the eye, and the visual system. Ian P. Howard argued in a 1996 Perception article that Alhazen should be credited with many discoveries and theories previously attributed to Western Europeans writing centuries later. For example, he described what became in the 19th century Hering's law of equal innervation. He wrote a description of vertical horopters 600 years before Aguilonius that is actually closer to the modern definition than Aguilonius's – and his work on binocular disparity was repeated by Panum
|
{
"page_id": 1645,
"source": null,
"title": "Ibn al-Haytham"
}
|
in 1858. Craig Aaen-Stockdale, while agreeing that Alhazen should be credited with many advances, has expressed some caution, especially when considering Alhazen in isolation from Ptolemy, with whom Alhazen was extremely familiar. Alhazen corrected a significant error of Ptolemy regarding binocular vision, but otherwise his account is very similar; Ptolemy also attempted to explain what is now called Hering's law. In general, Alhazen built on and expanded the optics of Ptolemy. In a more detailed account of Ibn al-Haytham's contribution to the study of binocular vision based on Lejeune and Sabra, Raynaud showed that the concepts of correspondence, homonymous and crossed diplopia were in place in Ibn al-Haytham's optics. But contrary to Howard, he explained why Ibn al-Haytham did not give the circular figure of the horopter and why, by reasoning experimentally, he was in fact closer to the discovery of Panum's fusional area than that of the Vieth-Müller circle. In this regard, Ibn al-Haytham's theory of binocular vision faced two main limits: the lack of recognition of the role of the retina, and obviously the lack of an experimental investigation of ocular tracts. Alhazen's most original contribution was that, after describing how he thought the eye was anatomically constructed, he went on to consider how this anatomy would behave functionally as an optical system. His understanding of pinhole projection from his experiments appears to have influenced his consideration of image inversion in the eye, which he sought to avoid. He maintained that the rays that fell perpendicularly on the lens (or glacial humor as he called it) were further refracted outward as they left the glacial humor and the resulting image thus passed upright into the optic nerve at the back of the eye. He followed Galen in believing that the lens was the receptive organ of sight, although
|
{
"page_id": 1645,
"source": null,
"title": "Ibn al-Haytham"
}
|
some of his work hints that he thought the retina was also involved. Alhazen's synthesis of light and vision adhered to the Aristotelian scheme, exhaustively describing the process of vision in a logical, complete fashion. His research in catoptrics (the study of optical systems using mirrors) was centred on spherical and parabolic mirrors and spherical aberration. He made the observation that the ratio between the angle of incidence and refraction does not remain constant, and investigated the magnifying power of a lens. === Law of reflection === Alhazen was the first physicist to give a complete statement of the law of reflection. He was the first to state that the incident ray, the reflected ray, and the normal to the surface all lie in the same plane, perpendicular to the reflecting plane. === Alhazen's problem === His work on catoptrics in Book V of the Book of Optics contains a discussion of what is now known as Alhazen's problem, first formulated by Ptolemy in 150 AD. It comprises drawing lines from two points in the plane of a circle meeting at a point on the circumference and making equal angles with the normal at that point. This is equivalent to finding the point on the edge of a circular billiard table at which a player must aim a cue ball at a given point to make it bounce off the table edge and hit another ball at a second given point. Thus, its main application in optics is to solve the problem, "Given a light source and a spherical mirror, find the point on the mirror where the light will be reflected to the eye of an observer." This leads to an equation of the fourth degree. This eventually led Alhazen to derive a formula for the sum of fourth powers, where previously
|
{
"page_id": 1645,
"source": null,
"title": "Ibn al-Haytham"
}
|
only the formulas for the sums of squares and cubes had been stated. His method can be readily generalized to find the formula for the sum of any integral powers, although he did not himself do this (perhaps because he only needed the fourth power to calculate the volume of the paraboloid he was interested in). He used his result on sums of integral powers to perform what would now be called an integration, where the formulas for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid. Alhazen eventually solved the problem using conic sections and a geometric proof. His solution was extremely long and complicated and may not have been understood by mathematicians reading him in Latin translation. Later mathematicians used Descartes' analytical methods to analyse the problem. An algebraic solution to the problem was finally found in 1965 by Jack M. Elkin, an actuary. Other solutions were discovered in 1989 by Harald Riede and in 1997 by the Oxford mathematician Peter M. Neumann. Recently, Mitsubishi Electric Research Laboratories (MERL) researchers solved the extension of Alhazen's problem to general rotationally symmetric quadric mirrors, including hyperbolic, parabolic and elliptical mirrors. === Camera Obscura === The camera obscura was known to the ancient Chinese, and was described by the Han Chinese polymath Shen Kuo in his scientific book Dream Pool Essays, published in the year 1088 C.E. Aristotle had discussed the basic principle behind it in his Problems, but Alhazen's work contained the first clear description and early analysis of the camera obscura device. Ibn al-Haytham used a camera obscura mainly to observe a partial solar eclipse. In his essay, Ibn al-Haytham writes that he observed the sickle-like shape of the sun at the time of an eclipse. The introduction reads as
|
{
"page_id": 1645,
"source": null,
"title": "Ibn al-Haytham"
}
|
follows: "The image of the sun at the time of the eclipse, unless it is total, demonstrates that when its light passes through a narrow, round hole and is cast on a plane opposite to the hole it takes on the form of a moonsickle." His findings solidified the importance of this treatise in the history of the camera obscura, but the treatise is important in many other respects. Ancient optics and medieval optics were divided into optics and burning mirrors. Optics proper mainly focused on the study of vision, while burning mirrors focused on the properties of light and luminous rays. On the shape of the eclipse is probably one of the first attempts made by Ibn al-Haytham to articulate these two sciences. Very often Ibn al-Haytham's discoveries benefited from the intersection of mathematical and experimental contributions. This is the case with On the shape of the eclipse. Besides the fact that this treatise allowed more people to study partial eclipses of the sun, it especially allowed a better understanding of how the camera obscura works. This treatise is a physico-mathematical study of image formation inside the camera obscura. Ibn al-Haytham takes an experimental approach, and determines the result by varying the size and the shape of the aperture, the focal length of the camera, and the shape and intensity of the light source. In his work he explains the inversion of the image in the camera obscura, the fact that the image is similar to the source when the hole is small, but also the fact that the image can differ from the source when the hole is large. All these results are produced by using a point analysis of the image. === Refractometer === In the seventh tract of his book of optics, Alhazen described an apparatus for
|
{
"page_id": 1645,
"source": null,
"title": "Ibn al-Haytham"
}
|
experimenting with various cases of refraction, in order to investigate the relations between the angle of incidence, the angle of refraction and the angle of deflection. This apparatus was a modified version of an apparatus used by Ptolemy for similar purpose. === Unconscious inference === Alhazen basically states the concept of unconscious inference in his discussion of colour before adding that the inferential step between sensing colour and differentiating it is shorter than the time taken between sensing and any other visible characteristic (aside from light), and that "time is so short as not to be clearly apparent to the beholder." Naturally, this suggests that the colour and form are perceived elsewhere. Alhazen goes on to say that information must travel to the central nerve cavity for processing and:the sentient organ does not sense the forms that reach it from the visible objects until after it has been affected by these forms; thus it does not sense color as color or light as light until after it has been affected by the form of color or light. Now the affectation received by the sentient organ from the form of color or of light is a certain change; and change must take place in time; .....and it is in the time during which the form extends from the sentient organ's surface to the cavity of the common nerve, and in (the time) following that, that the sensitive faculty, which exists in the whole of the sentient body will perceive color as color...Thus the last sentient's perception of color as such and of light as such takes place at a time following that in which the form arrives from the surface of the sentient organ to the cavity of the common nerve. === Color constancy === Alhazen explained color constancy by observing
|
{
"page_id": 1645,
"source": null,
"title": "Ibn al-Haytham"
}
|
that the light reflected from an object is modified by the object's color. He explained that the quality of the light and the color of the object are mixed, and the visual system separates light and color. In Book II, Chapter 3 he writes:Again the light does not travel from the colored object to the eye unaccompanied by the color, nor does the form of the color pass from the colored object to the eye unaccompanied by the light. Neither the form of the light nor that of the color existing in the colored object can pass except as mingled together and the last sentient can only perceive them as mingled together. Nevertheless, the sentient perceives that the visible object is luminous and that the light seen in the object is other than the color and that these are two properties. === Other contributions === The Kitab al-Manazir (Book of Optics) describes several experimental observations that Alhazen made and how he used his results to explain certain optical phenomena using mechanical analogies. He conducted experiments with projectiles and concluded that only the impact of perpendicular projectiles on surfaces was forceful enough to make them penetrate, whereas surfaces tended to deflect oblique projectile strikes. For example, to explain refraction from a rare to a dense medium, he used the mechanical analogy of an iron ball thrown at a thin slate covering a wide hole in a metal sheet. A perpendicular throw breaks the slate and passes through, whereas an oblique one with equal force and from an equal distance does not. He also used this result to explain how intense, direct light hurts the eye, using a mechanical analogy: Alhazen associated 'strong' lights with perpendicular rays and 'weak' lights with oblique ones. The obvious answer to the problem of multiple rays
|
{
"page_id": 1645,
"source": null,
"title": "Ibn al-Haytham"
}
|
and the eye was in the choice of the perpendicular ray, since only one such ray from each point on the surface of the object could penetrate the eye. Sudanese psychologist Omar Khaleefa has argued that Alhazen should be considered the founder of experimental psychology, for his pioneering work on the psychology of visual perception and optical illusions. Khaleefa has also argued that Alhazen should also be considered the "founder of psychophysics", a sub-discipline and precursor to modern psychology. Although Alhazen made many subjective reports regarding vision, there is no evidence that he used quantitative psychophysical techniques and the claim has been rebuffed. Alhazen offered an explanation of the Moon illusion, an illusion that played an important role in the scientific tradition of medieval Europe. Many authors repeated explanations that attempted to solve the problem of the Moon appearing larger near the horizon than it does when higher up in the sky. Alhazen argued against Ptolemy's refraction theory, and defined the problem in terms of perceived, rather than real, enlargement. He said that judging the distance of an object depends on there being an uninterrupted sequence of intervening bodies between the object and the observer. When the Moon is high in the sky there are no intervening objects, so the Moon appears close. The perceived size of an object of constant angular size varies with its perceived distance. Therefore, the Moon appears closer and smaller high in the sky, and further and larger on the horizon. Through works by Roger Bacon, John Pecham and Witelo based on Alhazen's explanation, the Moon illusion gradually came to be accepted as a psychological phenomenon, with the refraction theory being rejected in the 17th century. Although Alhazen is often credited with the perceived distance explanation, he was not the first author to offer it.
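The circular-billiard form of Alhazen's problem described earlier reduces to a quartic equation, but it is straightforward to solve numerically by scanning the rim for points where the law of reflection holds. A minimal sketch (function name and sampling scheme are illustrative, not from the source), assuming a unit-radius table:

```python
import math

def reflect_point(A, B, n=999):
    """Find points P on the unit circle where a ray from A reflects to B,
    i.e. the mirror image of the incoming direction about the radial normal
    at P is parallel to B - P. Scans n samples for sign changes, then bisects."""
    def defect(theta):
        px, py = math.cos(theta), math.sin(theta)
        # Incoming direction P - A, reflected about the normal (px, py)
        dx, dy = px - A[0], py - A[1]
        d_dot_n = dx * px + dy * py
        rx, ry = dx - 2 * d_dot_n * px, dy - 2 * d_dot_n * py
        bx, by = B[0] - px, B[1] - py
        # cross = 0 means parallel; dot > 0 selects the physical (forward) ray
        return rx * by - ry * bx, rx * bx + ry * by

    solutions = []
    prev = defect(0.0)
    for i in range(1, n + 1):
        t = 2 * math.pi * i / n
        cur = defect(t)
        if prev[0] * cur[0] < 0 and prev[1] > 0 and cur[1] > 0:
            lo, hi = 2 * math.pi * (i - 1) / n, t  # bracket, then bisect
            for _ in range(60):
                mid = (lo + hi) / 2
                if defect(lo)[0] * defect(mid)[0] <= 0:
                    hi = mid
                else:
                    lo = mid
            solutions.append((lo + hi) / 2)
        prev = cur
    return solutions

# Symmetric test case: for points mirrored across the y-axis, one reflection
# point lies straight "above" both, at theta = pi/2.
print(reflect_point((0.5, 0.3), (-0.5, 0.3)))
```

The quartic can have up to four roots; the positive-dot filter keeps only genuine reflections (incoming and outgoing rays on the same side of the mirror), so the symmetric case above yields reflection points near the top and bottom of the circle.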
|
{
"page_id": 1645,
"source": null,
"title": "Ibn al-Haytham"
}
|
Cleomedes (c. 2nd century) gave this account (in addition to refraction), and he credited it to Posidonius (c. 135–50 BCE). Ptolemy may also have offered this explanation in his Optics, but the text is obscure. Alhazen's writings were more widely available in the Middle Ages than those of these earlier authors, and that probably explains why Alhazen received the credit. == Scientific method == Therefore, the seeker after the truth is not one who studies the writings of the ancients and, following his natural disposition, puts his trust in them, but rather the one who suspects his faith in them and questions what he gathers from them, the one who submits to argument and demonstration, and not to the sayings of a human being whose nature is fraught with all kinds of imperfection and deficiency. The duty of the man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads, and ... attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency. An aspect associated with Alhazen's optical research is related to systemic and methodological reliance on experimentation (i'tibar)(Arabic: اختبار) and controlled testing in his scientific inquiries. Moreover, his experimental directives rested on combining classical physics (ilm tabi'i) with mathematics (ta'alim; geometry in particular). This mathematical-physical approach to experimental science supported most of his propositions in Kitab al-Manazir (The Optics; De aspectibus or Perspectivae) and grounded his theories of vision, light and colour, as well as his research in catoptrics and dioptrics (the study of the reflection and refraction of light, respectively). According to Matthias Schramm, Alhazen "was the first to make a systematic use of the
|
{
"page_id": 1645,
"source": null,
"title": "Ibn al-Haytham"
}
|
method of varying the experimental conditions in a constant and uniform manner, in an experiment showing that the intensity of the light-spot formed by the projection of the moonlight through two small apertures onto a screen diminishes constantly as one of the apertures is gradually blocked up." G. J. Toomer expressed some skepticism regarding Schramm's view, partly because at the time (1964) the Book of Optics had not yet been fully translated from Arabic, and Toomer was concerned that without context, specific passages might be read anachronistically. While acknowledging Alhazen's importance in developing experimental techniques, Toomer argued that Alhazen should not be considered in isolation from other Islamic and ancient thinkers. Toomer concluded his review by saying that it would not be possible to assess Schramm's claim that Ibn al-Haytham was the true founder of modern physics without translating more of Alhazen's work and fully investigating his influence on later medieval writers. == Other works on physics == === Optical treatises === Besides the Book of Optics, Alhazen wrote several other treatises on the same subject, including his Risala fi l-Daw' (Treatise on Light). He investigated the properties of luminance, the rainbow, eclipses, twilight, and moonlight. Experiments with mirrors and the refractive interfaces between air, water, and glass cubes, hemispheres, and quarter-spheres provided the foundation for his theories on catoptrics. === Celestial physics === Alhazen discussed the physics of the celestial region in his Epitome of Astronomy, arguing that Ptolemaic models must be understood in terms of physical objects rather than abstract hypotheses – in other words that it should be possible to create physical models where (for example) none of the celestial bodies would collide with each other. The suggestion of mechanical models for the Earth centred Ptolemaic model "greatly contributed to the eventual triumph of the Ptolemaic system
|
{
"page_id": 1645,
"source": null,
"title": "Ibn al-Haytham"
}
|
among the Christians of the West". Alhazen's determination to root astronomy in the realm of physical objects was important, however, because it meant astronomical hypotheses "were accountable to the laws of physics", and could be criticised and improved upon in those terms. He also wrote Maqala fi daw al-qamar (On the Light of the Moon). === Mechanics === In his work, Alhazen discussed theories on the motion of a body. == Astronomical works == === On the Configuration of the World === In his On the Configuration of the World Alhazen presented a detailed description of the physical structure of the earth:The earth as a whole is a round sphere whose center is the center of the world. It is stationary in its [the world's] middle, fixed in it and not moving in any direction nor moving with any of the varieties of motion, but always at rest. The book is a non-technical explanation of Ptolemy's Almagest, which was eventually translated into Hebrew and Latin in the 13th and 14th centuries and subsequently had an influence on astronomers such as Georg von Peuerbach during the European Middle Ages and Renaissance. === Doubts Concerning Ptolemy === In his Al-Shukūk ‛alā Batlamyūs, variously translated as Doubts Concerning Ptolemy or Aporias against Ptolemy, published at some time between 1025 and 1028, Alhazen criticized Ptolemy's Almagest, Planetary Hypotheses, and Optics, pointing out various contradictions he found in these works, particularly in astronomy. Ptolemy's Almagest concerned mathematical theories regarding the motion of the planets, whereas the Hypotheses concerned what Ptolemy thought was the actual configuration of the planets. Ptolemy himself acknowledged that his theories and configurations did not always agree with each other, arguing that this was not a problem provided it did not result in noticeable error, but Alhazen was particularly scathing in his
criticism of the inherent contradictions in Ptolemy's works. He considered that some of the mathematical devices Ptolemy introduced into astronomy, especially the equant, failed to satisfy the physical requirement of uniform circular motion, and noted the absurdity of relating actual physical motions to imaginary mathematical points, lines and circles: Ptolemy assumed an arrangement (hay'a) that cannot exist, and the fact that this arrangement produces in his imagination the motions that belong to the planets does not free him from the error he committed in his assumed arrangement, for the existing motions of the planets cannot be the result of an arrangement that is impossible to exist... [F]or a man to imagine a circle in the heavens, and to imagine the planet moving in it does not bring about the planet's motion. Having pointed out these problems, Alhazen appears to have intended to resolve the contradictions in a later work. Alhazen believed there was a "true configuration" of the planets that Ptolemy had failed to grasp. He intended to complete and repair Ptolemy's system, not to replace it completely. In the Doubts Concerning Ptolemy Alhazen set out his views on the difficulty of attaining scientific knowledge and the need to question existing authorities and theories: Truth is sought for itself [but] the truths, [he warns] are immersed in uncertainties [and the scientific authorities (such as Ptolemy, whom he greatly respected) are] not immune from error... He held that the criticism of existing theories – which dominated this book – holds a special place in the growth of scientific knowledge. === Model of the Motions of Each of the Seven Planets === Alhazen's The Model of the Motions of Each of the Seven Planets was written c. 1038. Only one damaged manuscript has been found, with only
the introduction and the first section, on the theory of planetary motion, surviving. (There was also a second section on astronomical calculation, and a third section, on astronomical instruments.) Following on from his Doubts on Ptolemy, Alhazen described a new, geometry-based planetary model, describing the motions of the planets in terms of spherical geometry, infinitesimal geometry and trigonometry. He kept a geocentric universe and assumed that celestial motions are uniformly circular, which required the inclusion of epicycles to explain observed motion, but he managed to eliminate Ptolemy's equant. In general, his model did not try to provide a causal explanation of the motions, but concentrated on providing a complete, geometric description that could explain observed motions without the contradictions inherent in Ptolemy's model. === Other astronomical works === Alhazen wrote a total of twenty-five astronomical works, some concerning technical issues such as Exact Determination of the Meridian, a second group concerning accurate astronomical observation, a third group concerning various astronomical problems and questions such as the location of the Milky Way. Alhazen made the first systematic effort to evaluate the Milky Way's parallax, combining Ptolemy's data and his own. He concluded that the parallax is (probably very much) smaller than the lunar parallax, and that the Milky Way should therefore be a celestial object. Though he was not the first to argue that the Milky Way does not belong to the atmosphere, he was the first to carry out a quantitative analysis of the claim. The fourth group consists of ten works on astronomical theory, including the Doubts and Model of the Motions discussed above. == Mathematical works == In mathematics, Alhazen built on the mathematical works of Euclid and Thabit ibn Qurra and worked on "the beginnings of the link between algebra and geometry". Alhazen made developments in conic sections and number theory. He developed
a formula for summing the first 100 natural numbers, using a geometric proof of the formula. === Geometry === Alhazen explored what is now known as the Euclidean parallel postulate, the fifth postulate in Euclid's Elements, using a proof by contradiction, and in effect introducing the concept of motion into geometry. He formulated the Lambert quadrilateral, which Boris Abramovich Rozenfeld names the "Ibn al-Haytham–Lambert quadrilateral". He was criticised by Omar Khayyam, who pointed out that Aristotle had condemned the use of motion in geometry. In elementary geometry, Alhazen attempted to solve the problem of squaring the circle using the area of lunes (crescent shapes), but later gave up on the impossible task. The two lunes formed from a right triangle by erecting a semicircle on each of the triangle's sides, inward for the hypotenuse and outward for the other two sides, are known as the lunes of Alhazen; they have the same total area as the triangle itself. === Number theory === Alhazen's contributions to number theory include his work on perfect numbers. In his Analysis and Synthesis, he may have been the first to state that every even perfect number is of the form 2^(n−1)(2^n − 1), where 2^n − 1 is prime, but he was not able to prove this result; Euler later proved it in the 18th century, and it is now called the Euclid–Euler theorem. Alhazen solved problems involving congruences using what is now called Wilson's theorem. In his Opuscula, Alhazen considers the solution of a system of congruences, and gives two general methods of solution. His first method, the canonical method, involved Wilson's theorem, while his second method involved a version of the Chinese remainder theorem. === Calculus === Alhazen discovered the sum formula for the fourth power, using a method that could be generally
used to determine the sum for any integral power. He used this to find the volume of a paraboloid. He could find the integral formula for any polynomial without having developed a general formula. == Other works == === Influence of Melodies on the Souls of Animals === Alhazen also wrote a Treatise on the Influence of Melodies on the Souls of Animals, although no copies have survived. It appears to have been concerned with the question of whether animals could react to music, for example whether a camel would increase or decrease its pace. === Engineering === In engineering, one account of his career as a civil engineer has him summoned to Egypt by the Fatimid Caliph, Al-Hakim bi-Amr Allah, to regulate the flooding of the Nile River. He carried out a detailed scientific study of the annual inundation of the Nile River, and he drew plans for building a dam, at the site of the modern-day Aswan Dam. His field work, however, later made him aware of the impracticality of this scheme, and he soon feigned madness so he could avoid punishment from the Caliph. === Philosophy === In his Treatise on Place, Alhazen disagreed with Aristotle's view that nature abhors a void, and he used geometry in an attempt to demonstrate that place (al-makan) is the imagined three-dimensional void between the inner surfaces of a containing body. Abd-el-latif, a supporter of Aristotle's philosophical view of place, later criticized the work in Fi al-Radd 'ala Ibn al-Haytham fi al-makan (A refutation of Ibn al-Haytham's place) for its geometrization of place. Alhazen also discussed space perception and its epistemological implications in his Book of Optics. In "tying the visual perception of space to prior bodily experience, Alhazen unequivocally rejected the intuitiveness of spatial perception and, therefore, the autonomy of
vision. Without tangible notions of distance and size for correlation, sight can tell us next to nothing about such things." === Theology === Alhazen was a Muslim and most sources report that he was a Sunni and a follower of the Ash'ari school. Ziauddin Sardar says that some of the greatest Muslim scientists, such as Ibn al-Haytham and Abū Rayhān al-Bīrūnī, who were pioneers of the scientific method, were themselves followers of the Ashʿari school of Islamic theology. Like other Ashʿarites, who believed that faith or taqlid should apply only to Islam and not to any ancient Hellenistic authorities, Ibn al-Haytham held that taqlid should apply only to prophets of Islam and not to any other authorities; this view formed the basis for much of his scientific skepticism and criticism of Ptolemy and other ancient authorities in his Doubts Concerning Ptolemy and Book of Optics. Alhazen wrote a work on Islamic theology in which he discussed prophethood and developed a system of philosophical criteria to discern its false claimants in his time. He also wrote a treatise entitled Finding the Direction of Qibla by Calculation, in which he discussed how to determine mathematically the Qibla, the direction towards which prayers (salat) are directed. There are occasional references to theology or religious sentiment in his technical works, e.g. in Doubts Concerning Ptolemy: Truth is sought for its own sake ... Finding the truth is difficult, and the road to it is rough. For the truths are plunged in obscurity. ... God, however, has not preserved the scientist from error and has not safeguarded science from shortcomings and faults. If this had been the case, scientists would not have disagreed upon any point of science... In The Winding Motion: From the statements made by the noble Shaykh, it is clear that he believes in Ptolemy's words in everything
he says, without relying on a demonstration or calling on a proof, but by pure imitation (taqlid); that is how experts in the prophetic tradition have faith in Prophets, may the blessing of God be upon them. But it is not the way that mathematicians have faith in specialists in the demonstrative sciences. Regarding the relation of objective truth and God: I constantly sought knowledge and truth, and it became my belief that for gaining access to the effulgence and closeness to God, there is no better way than that of searching for truth and knowledge. == Legacy == Alhazen made significant contributions to optics, number theory, geometry, astronomy and natural philosophy. Alhazen's work on optics is credited with contributing a new emphasis on experiment. His main work, Kitab al-Manazir (Book of Optics), was known in the Muslim world mainly, but not exclusively, through the thirteenth-century commentary by Kamāl al-Dīn al-Fārisī, the Tanqīḥ al-Manāẓir li-dhawī l-abṣār wa l-baṣā'ir. In al-Andalus, it was used by the eleventh-century prince of the Banu Hud dynasty of Zaragoza and author of an important mathematical text, al-Mu'taman ibn Hūd. A Latin translation of the Kitab al-Manazir was made probably in the late twelfth or early thirteenth century. This translation was read by and greatly influenced a number of scholars in Christian Europe, including Roger Bacon, Robert Grosseteste, Witelo, Giambattista della Porta, Leonardo da Vinci, Galileo Galilei, Christiaan Huygens, René Descartes, and Johannes Kepler. Meanwhile, in the Islamic world, Alhazen's work influenced Averroes' writings on optics, and his legacy was further advanced through the 'reforming' of his Optics by Persian scientist Kamal al-Din al-Farisi (died c. 1320) in the latter's Kitab Tanqih al-Manazir (The Revision of [Ibn al-Haytham's] Optics). Alhazen wrote as many as 200 books, although only 55 have survived. Some of his treatises on
optics survived only through Latin translation. During the Middle Ages his books on cosmology were translated into Latin, Hebrew and other languages. H. J. J. Winter, a British historian of science, summing up the importance of Ibn al-Haytham in the history of physics, wrote: After the death of Archimedes no really great physicist appeared until Ibn al-Haytham. If, therefore, we confine our interest only to the history of physics, there is a long period of over twelve hundred years during which the Golden Age of Greece gave way to the era of Muslim Scholasticism, and the experimental spirit of the noblest physicist of Antiquity lived again in the Arab Scholar from Basra. Although only one commentary on Alhazen's optics has survived the Islamic Middle Ages, Geoffrey Chaucer mentions the work in The Canterbury Tales. The impact crater Alhazen on the Moon is named in his honour, as was the asteroid 59239 Alhazen. In honour of Alhazen, the Aga Khan University (Pakistan) named its Ophthalmology endowed chair as "The Ibn-e-Haitham Associate Professor and Chief of Ophthalmology". The 2015 International Year of Light celebrated the 1000th anniversary of the works on optics by Ibn Al-Haytham. In 2014, the "Hiding in the Light" episode of Cosmos: A Spacetime Odyssey, presented by Neil deGrasse Tyson, focused on the accomplishments of Ibn al-Haytham. He was voiced by Alfred Molina in the episode. Over forty years previously, Jacob Bronowski presented Alhazen's work in a similar television documentary (and the corresponding book), The Ascent of Man. In episode 5 (The Music of the Spheres), Bronowski remarked that in his view, Alhazen was "the one really original scientific mind that Arab culture produced", whose theory of optics was not improved on till the time of Newton and Leibniz. UNESCO declared 2015 the International Year of Light and its
Director-General Irina Bokova dubbed Ibn al-Haytham 'the father of optics'. Amongst others, this was to celebrate Ibn Al-Haytham's achievements in optics, mathematics and astronomy. An international campaign created by the 1001 Inventions organisation, titled 1001 Inventions and the World of Ibn Al-Haytham, featured a series of interactive exhibits, workshops and live shows about his work, partnering with science centers, science festivals, museums, and educational institutions, as well as digital and social media platforms. The campaign also produced and released the short educational film 1001 Inventions and the World of Ibn Al-Haytham. Ibn al-Haytham appears on the 10,000-dinar Iraqi banknote, series 2003. == List of works == According to medieval biographers, Alhazen wrote more than 200 works on a wide range of subjects, of which at least 96 scientific works are known. Most of his works are now lost, but more than 50 of them have survived to some extent. Nearly half of his surviving works are on mathematics, 23 of them are on astronomy, and 14 of them are on optics, with a few on other subjects. Not all his surviving works have yet been studied, but some of the ones that have are given below. === Lost works === A Book in which I have Summarized the Science of Optics from the Two Books of Euclid and Ptolemy, to which I have added the Notions of the First Discourse which is Missing from Ptolemy's Book Treatise on Burning Mirrors Treatise on the Nature of [the Organ of] Sight and on How Vision is Achieved Through It == See also == == Notes == == References == == Sources == == Further reading == === Primary === === Secondary === == External links == Works by Ibn al-Haytham at Open Library Langermann, Y. Tzvi
(2007). "Ibn al-Haytham: Abū ʿAlī al-Ḥasan ibn al-Ḥasan". In Thomas Hockey; et al. (eds.). The Biographical Encyclopedia of Astronomers. New York: Springer. pp. 556–557. ISBN 978-0-387-31022-0. (PDF version) 'A Brief Introduction on Ibn al-Haytham' based on a lecture delivered at the Royal Society in London by Nader El-Bizri Ibn al-Haytham on two Iraqi banknotes Archived 3 August 2018 at the Wayback Machine The Miracle of Light – a UNESCO article on Ibn al-Haytham Biography from Malaspina Global Portal Short biographies on several "Muslim Heroes and Personalities" including Ibn al-Haytham Biography from ioNET at the Wayback Machine (archived 13 October 1999) "Biography from the BBC". Archived from the original on 11 February 2006. Retrieved 16 September 2008. Biography from Trinity College (Connecticut) Biography from Molecular Expressions The First True Scientist from BBC News Over the Moon From The UNESCO Courier on the occasion of the International Year of Astronomy 2009 The Mechanical Water Clock Of Ibn Al-Haytham, Muslim Heritage Alhazen's (1572) Opticae thesaurus Archived 24 September 2018 at the Wayback Machine (English) – digital facsimile from the Linda Hall Library
This is a list of the tallest people, verified by Guinness World Records or other reliable sources. According to Guinness World Records, Robert Wadlow of the United States (1918–1940) was the tallest person in recorded history, measuring 272 cm (8 ft 11 in) at the time of his death. There are reports about even taller people but most claims are unverified or erroneous. Since antiquity, discoveries have been reported of gigantic human skeletons. Originally thought to belong to mythical giants, these bones were later identified as the exaggerated remains of prehistoric animals, usually whales or elephants. Regular reports in American newspapers in the 18th and 19th centuries of giant human skeletons may have inspired the case of the "petrified" Cardiff Giant, an archaeological hoax. == Men == Living Deceased Height disputed No growth-related pathological disorder (gigantism, acromegaly) == Women == Living Deceased Height disputed No growth-related pathological disorder (gigantism, acromegaly) == Disputed and unverified claims == Found to be non-human Likely mythical or legendary == Tallest people without gigantism or acromegaly == Living Deceased == Tallest in various sports == == See also == Giant Gigantism Giant human skeletons Goliath Human height Sotos syndrome List of tallest players in National Basketball Association history List of heaviest people List of the verified shortest people List of people with dwarfism == References == == External links == Media related to Tall people at Wikimedia Commons Scientific American, "Ancient American Giants". Munn & Company. 14 August 1880. p. 106. "The giant of the world ランキング (female)". Archived from the original on 17 February 2009. Retrieved 3 October 2022. Valerio Agnesi, Di Patti C., B. Truden (January 2007). "Giants and elephants of Sicily". Geological Society London Special Publications. 273 (1): 263–270. Bibcode:2007GSLSP.273..263A. doi:10.1144/GSL.SP.2007.273.01.20. 
S2CID 129843177. Marina
|
{
"page_id": 11994740,
"source": null,
"title": "List of tallest people"
}
|
Milićević Bradać, Ivor Karavanić (December 2015). "Phlegon of Tralles and fossils from Dalmatia". Vjesnik Za Arheologiju I Povijest Dalmatinsku. 108 (1): 109–118. ISSN 1845-7789. Marco Romano, Marco Avanzini (26 June 2017). "The skeletons of Cyclops and Lestrigons: misinterpretation of Quaternary vertebrates as remains of the mythological giants". Historical Biology. 31 (2): 117–139. doi:10.1080/08912963.2017.1342640. S2CID 89912123.
Social behavior is behavior among two or more organisms within the same species; it encompasses any behavior in which one member affects another. Social behavior can be seen as similar to an exchange of goods, with the expectation that when you give, you will receive something similar in return. This behavior can be affected by both the qualities of the individual and environmental (situational) factors. Therefore, social behavior arises as a result of an interaction between the two: the organism and its environment. This means that, with regard to humans, social behavior can be determined by both the individual characteristics of the person and the situation they are in. A major aspect of social behavior is communication, which is the basis for survival and reproduction. Social behavior is said to be determined by two different processes that can either work together or oppose one another. The dual-systems model of reflective and impulsive determinants of social behavior came out of the realization that behavior cannot be determined by one single factor. Instead, behavior can arise consciously (with awareness and intent) or from pure impulse. These determining factors can operate in different situations and moments, and can even oppose one another. While at times one can behave with a specific goal in mind, at other times one can behave without rational control, driven by impulse instead. There are also distinctions between different types of social behavior, such as mundane versus defensive social behavior. Mundane social behavior is a result of interactions in day-to-day life, and consists of behaviors learned as one is exposed to those different situations. On the other hand, defensive behavior arises out of impulse, when one is faced with conflicting desires. == Development == Social behavior constantly changes as one continues
|
{
"page_id": 1967733,
"source": null,
"title": "Social behavior"
}
|
to grow and develop, reaching different stages of life. The development of behavior is deeply tied to the biological and cognitive changes one is experiencing at any given time. This creates general patterns of social behavior development in humans. Just as social behavior is influenced by both the situation and an individual's characteristics, the development of behavior is due to the combination of the two as well: the temperament of the child along with the settings they are exposed to. Culture (the parents and individuals that influence socialization in children) plays a large role in the development of a child's social behavior, as the parents or caregivers are typically those who decide the settings and situations that the child is exposed to. The various settings the child is placed in (for example, the playground and classroom) form habits of interaction and behavior, inasmuch as the child is exposed to certain settings more frequently than others. What takes particular precedence in the influence of the setting are the people that the child must interact with: their age, sex, and at times culture. Emotions also play a large role in the development of social behavior, as they are intertwined with the way an individual behaves. Through social interactions, emotion is understood through various verbal and nonverbal displays, and thus plays a large role in communication. Many of the processes that occur in the brain and underlie emotion often greatly correlate with the processes that are needed for social behavior as well. A major aspect of interaction is understanding how the other person thinks and feels, and being able to detect emotional states becomes necessary for individuals to effectively interact with one another and behave socially. As the child continues to gain social information, their behavior develops accordingly. One must learn how to behave according
to the interactions and people relevant to a certain setting, and therefore begins to intuitively know the appropriate form of social interaction depending on the situation. Behavior thus constantly changes as required, and maturity brings this about. A child must learn to balance their own desires with those of the people they interact with, and this ability to correctly respond to contextual cues and understand the intentions and desires of another person improves with age. That being said, the individual characteristics of the child (their temperament) are important to understanding how the individual learns social behaviors and cues given to them, and this learnability is not consistent across all children. === Patterns of development across the lifespan === Certain patterns of biological development are well maintained across the human lifespan. These patterns often correspond with social development, and biological changes lead to respective changes in interactions. In pre- and post-natal infancy, the behavior of the infant is correlated with that of the caregiver. Infants' social development is influenced by their mothers' reactions to their emotional displays. Already in infancy, an awareness of strangers develops, as the individual becomes able to identify and distinguish between people. Come childhood, the individual begins to attend more to their peers, and communication begins to take a verbal form. Children also begin to classify themselves on the basis of their gender and other qualities salient about themselves, like race and age. When the child reaches school age, they typically become more aware of the structure of society with regard to gender, and how their own gender plays a role in this. They become more and more reliant on verbal forms of communication, and more likely to form
groups and become aware of their own role within the group. By puberty, general relations among same- and opposite-sex individuals are much more salient, and individuals begin to behave according to the norms of these situations. With increasing awareness of their sex and the stereotypes that go along with it, the individual begins to choose how much they align with these stereotypes, and behaves either according to those stereotypes or not. This is also the time that individuals more often form sexual pairs. Once the individual reaches child-rearing age, they must begin to change their own behavior in accordance with the major life changes of a developing family. The potential new child requires the parent to modify their behavior to accommodate a new member of the family. Come senescence and retirement, behavior is more stable as the individual has often established their social circle (whatever it may be) and is more committed to their social structure. == Neural and biological correlates == === Neural correlates === With the advent of the field of social cognitive neuroscience came interest in studying social behavior's correlates within the brain, to see what is happening beneath the surface as organisms act in a social manner. Although there is debate on which particular regions of the brain are responsible for social behavior, some have claimed that the paracingulate cortex is activated when one person is thinking about the motives or aims of another, a means of understanding the social world and behaving accordingly. The medial prefrontal lobe has also been seen to show activation during social cognition. Research on rhesus monkeys has discovered that the amygdala, a region known for expressing fear, was activated specifically when the monkeys were faced with a social situation they had never encountered before. This region of the
brain was shown to be sensitive to the fear that comes with a novel social situation, inhibiting social interaction. Another way of studying the brain regions that may be responsible for social behavior has been to look at patients with brain injuries who have an impairment in social behavior. Lesions in the prefrontal cortex that occurred in adulthood can affect the functioning of social behavior. When these lesions or a dysfunction in the prefrontal cortex occur in infancy or early in life, the development of proper moral and social behavior is affected and thus atypical. === Biological correlates === Along with neural correlates, research has investigated what happens within the body that may modulate social behavior. Vasopressin is a posterior pituitary hormone that may play a role in affiliation in young rats. Along with young rats, vasopressin has also been associated with paternal behavior in prairie voles. Efforts to connect animal research to humans have found that vasopressin may play a role in the social responses of human males. Oxytocin has also been seen to be correlated with positive social behavior, and elevated levels have been shown to potentially help improve social behavior that may have been suppressed due to stress. Thus, targeting levels of oxytocin may play a role in interventions for disorders that involve atypical social behavior. Along with vasopressin, serotonin has also been examined in relation to social behavior in humans. It was found to be associated with human feelings of social connection, and there is a drop in serotonin when one is socially isolated or has feelings of social isolation. Serotonin has also been associated with social confidence. == Affect == Positive affect (emotion) has been seen to have a large impact on social behavior, particularly by inducing
more helping behavior, cooperation, and sociability. Studies have shown that even subtly inducing positive affect within individuals caused greater social behavior and helping. This phenomenon, however, is not one-directional. Just as positive affect can influence social behavior, social behavior can have an influence on positive affect. == Electronic media == Social behavior has typically been seen as a changing of behaviors relevant to the situation at hand, acting appropriately with the setting one is in. However, with the advent of electronic media, people began to find themselves in situations they may have not been exposed to in everyday life. Novel situations and information presented through electronic media has formed interactions that are completely new to people. While people typically behaved in line with their setting in face-to-face interaction, the lines have become blurred when it comes to electronic media. This has led to a cascade of results, as gender norms started to merge, and people were coming in contact with information they had never been exposed to through face-to-face interaction. A political leader could no longer tailor a speech to just one audience, for their speech would be translated and heard by anyone through the media. People can no longer play drastically different roles when put in different situations, because the situations overlap more as information is more readily available. Communication flows more quickly and fluidly through media, causing behavior to merge accordingly. Media has also been shown to have an impact on promoting different types of social behavior, such as prosocial and aggressive behavior. For example, violence shown through the media has been seen to lead to more aggressive behavior in its viewers. Research has also been done investigating how media portraying positive social acts, prosocial behavior, could lead to more helping behavior in its viewers. The general learning
model was established to study how this process of translating media into behavior works, and why. This model links positive media with prosocial behavior and violent media with aggressive behavior, and posits that this link is mediated by the characteristics of the individual watching along with the situation they are in. This model also presents the notion that when one is exposed to the same type of media for long periods of time, this could even lead to changes in their personality traits, as they are forming different sets of knowledge and may be behaving accordingly. In various studies looking specifically at how video games with prosocial content affect behavior, it was shown that exposure influenced subsequent helping behavior in the video-game player. The processes that underlie this effect point to prosocial thoughts being more readily available after playing a video game related to this, and thus the person playing the game is more likely to behave accordingly. These effects were not only found with video games, but also with music, as people listening to songs involving aggression and violence in the lyrics were more likely to act in an aggressive manner. Likewise, people listening to songs related to prosocial acts (relative to a song with neutral lyrics) were shown to express greater helping behaviors and more empathy afterwards. When these songs were played at restaurants, they even led to an increase in tips given (relative to those who heard neutral lyrics). == Individual and group behavior == Conformity refers to behavior in which an individual, under unconscious pressure from the group, brings their behavior in line with that of the majority of the group. Generally speaking, the larger the group size, the easier it is for individuals to display conformity behaviors. Individuals may submit
to the group for two reasons: first, to gain acceptance from the group (normative social influence); second, to obtain important information from the group (informational social influence). == Aggressive and violent behavior == Aggression is an important social behavior that can have both negative consequences (in a social interaction) and adaptive consequences (in humans and other primates, for survival). There are many differences in aggressive behavior, and many of them are based on sex. == Verbal, coverbal, and nonverbal social behavior == === Verbal and coverbal behaviors === Although most animals can communicate nonverbally, humans have the ability to communicate with both verbal and nonverbal behavior. Verbal behavior is the content of one's spoken words. Verbal and nonverbal behavior intersect in what is known as coverbal behavior: nonverbal behavior that contributes to the meaning of verbal speech (e.g. hand gestures used to emphasize the importance of what someone is saying). Although spoken words convey meaning in and of themselves, one cannot dismiss the coverbal behaviors that accompany them, as they place great emphasis on the thought and importance behind the verbal speech. The verbal behaviors and the gestures that accompany them therefore work together to make up a conversation. Although many have posited that nonverbal behavior accompanying speech serves an important role in communication, not all researchers agree. In most of the literature on gestures, however, unlike body language, gestures can accompany speech in ways that bring inner thoughts to life (often thoughts unable to be expressed verbally). Gestures (coverbal behaviors) and speech occur simultaneously, and develop along the same trajectory in children as well. 
=== Nonverbal behaviors === Nonverbal behavior comprises any change in facial expression or body movement. Communicative nonverbal
behavior includes facial and body expressions that are intentionally meant to convey a message to those who are meant to receive it. Nonverbal behavior can serve a specific purpose (i.e. to convey a message), or can be more of an impulse or reflex. Paul Ekman, an influential psychologist, investigated both verbal and nonverbal behavior (and their role in communication) a great deal, emphasizing how difficult it is to test such behaviors empirically. Nonverbal cues can serve the function of conveying a message, thought, or emotion both to the person viewing the behavior and to the person sending the cues. == Disorders involving impairments in social behavior == A number of mental disorders affect social behavior. Social anxiety disorder is a phobic disorder characterized by a fear of being judged by others, which manifests itself as a fear of people in general. This pervasive fear of embarrassing oneself in front of others causes those affected to avoid interactions with other people. Attention deficit hyperactivity disorder is a neurodevelopmental disorder mainly identified by its symptoms of inattention, hyperactivity, and impulsivity. Hyperactivity and impulsivity may hamper social interactions, as one who displays these symptoms may be socially intrusive, unable to maintain personal space, and prone to talking over others. The majority of children who display symptoms of ADHD also have problems with their social behavior. Autism spectrum disorder is a neurodevelopmental disorder that affects the functioning of social interaction and communication. Autistic people may have difficulties in understanding social cues and the emotional states of others. Learning disabilities are often defined as a specific deficit in academic achievement; however, research has shown that a learning disability can be accompanied by social skill deficits as well. 
== See also == Aggression Health behavior Collective animal behavior Expectancy challenge sociological method Herd behavior Social behavior in education Social
learning theory Social science Sociality Socialization Violent Behavior == References ==
The Cowardin classification system is a system for classifying wetlands, devised by Lewis M. Cowardin et al. in 1979 for the United States Fish and Wildlife Service. The system includes five main types of wetlands: Marine wetlands, which are areas exposed to the open ocean Estuarine wetlands, which are partially enclosed by land and exposed to a mixture of fresh and salt water Riverine wetlands, associated with flowing water Lacustrine wetlands, associated with a lake or other body of fresh water Palustrine wetlands, freshwater wetlands not associated with a river or lake. The primary purpose of this ecological classification system was to establish consistent terms and definitions for use in wetland inventories and to provide standard measurements for mapping these lands. == See also == Wetland conservation Wetlands of the United States == References == Cowardin, L. M.; Carter, V.; Golet, F. C.; LaRoe, E. T. "Classification of wetlands and deepwater habitats of the United States". U.S. Department of the Interior, Fish and Wildlife Service. Archived from the original on 21 January 2014. Retrieved 26 April 2015. Notes
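As an illustration only, the five top-level Cowardin systems described above can be sketched as a small lookup table. The class names follow the standard, but the short descriptions and the helper function are this sketch's own, not part of any official Fish and Wildlife Service data format:

```python
# Hypothetical sketch of the five top-level Cowardin wetland systems.
# Descriptions paraphrase the summary above; this is not an official
# U.S. Fish and Wildlife Service data structure.
COWARDIN_SYSTEMS = {
    "Marine": "areas exposed to the open ocean",
    "Estuarine": "partially enclosed by land; mixed fresh and salt water",
    "Riverine": "associated with flowing water",
    "Lacustrine": "associated with a lake or other body of fresh water",
    "Palustrine": "freshwater wetlands not associated with a river or lake",
}

def describe(system: str) -> str:
    """Return the short description for a top-level system name."""
    return COWARDIN_SYSTEMS[system]
```

A consistent vocabulary like this is exactly what the classification was designed to provide: any wetland inventory entry maps to one of the five systems, so records from different surveys can be compared directly.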
Recurring cultural, political, and theological rejection of evolution by religious groups exists regarding the origins of the Earth, of humanity, and of other life. In accordance with creationism, species were once widely believed to be fixed products of divine creation, but since the mid-19th century, evolution by natural selection has been established by the scientific community as an empirical scientific fact. Any such debate is universally considered religious, not scientific, by professional scientific organizations worldwide: in the scientific community, evolution is accepted as fact, and efforts to sustain the traditional view are widely regarded as pseudoscience. While the controversy has a long history, today it has retreated to be mainly over what constitutes good science education, with the politics of creationism primarily focusing on the teaching of creationism in public education. Among majority-Christian countries, the debate is most prominent in the United States, where it may be portrayed as part of a culture war. Parallel controversies also exist in some other religious communities, such as the more fundamentalist branches of Judaism and Islam. In Europe and elsewhere, creationism is less widespread (notably, the Catholic Church and Anglican Communion both accept evolution), and there is much less pressure to teach it as fact. Christian fundamentalists reject the evidence of common descent of humans and other animals as demonstrated in modern paleontology, genetics, histology and cladistics and those other sub-disciplines which are based upon the conclusions of modern evolutionary biology, geology, cosmology, and other related fields. They argue for the Abrahamic accounts of creation, and, in order to attempt to gain a place alongside evolutionary biology in the science classroom, have developed a rhetorical framework of "creation science". In the landmark Kitzmiller v. 
Dover, the purported basis of scientific creationism was judged to be a wholly religious construct without scientific merit. The
Catholic Church holds no official position on creation or evolution (see Evolution and the Catholic Church). However, Pope Francis has stated: "God is not a demiurge or a magician, but the Creator who brought everything to life...Evolution in nature is not inconsistent with the notion of creation, because evolution requires the creation of beings that evolve." The rules of genetic inheritance were discovered by the Augustinian friar Gregor Mendel, who is known today as the founder of modern genetics. == History == The creation–evolution controversy began in Europe and North America in the late 18th century, when new interpretations of geological evidence led to various theories of an ancient Earth, and findings of extinctions demonstrated in the fossil geological sequence prompted early ideas of evolution, notably Lamarckism. In England these ideas of continuing change were at first seen as a threat to the existing "fixed" social order, and both church and state sought to repress them. Conditions gradually eased, and in 1844 Robert Chambers's controversial Vestiges of the Natural History of Creation popularized the idea of gradual transmutation of species. The scientific establishment at first dismissed it scornfully and the Church of England reacted with fury, but many Unitarians, Quakers and Baptists—groups opposed to the privileges of the established church—favoured its ideas of God acting through such natural laws. === Contemporary reaction to Darwin === By the end of the 19th century, there was no serious scientific opposition to the basic evolutionary tenets of descent with modification and the common ancestry of all forms of life. The publication of Darwin's On the Origin of Species in 1859 brought scientific credibility to evolution, and made it a respectable field of study. Despite the intense interest in the religious implications of Darwin's book, theological controversy over higher criticism set out in Essays
and Reviews (1860) largely diverted the Church of England's attention. Some of the liberal Christian authors of that work expressed support for Darwin, as did many Nonconformists. The Reverend Charles Kingsley, for instance, openly supported the idea of God working through evolution. Other Christians opposed the idea, and even some of Darwin's close friends and supporters—including Charles Lyell and Asa Gray—initially expressed reservations about some of his ideas. Gray later became a staunch supporter of Darwin in America, and collected together a number of his own writings to produce an influential book, Darwiniana (1876). These essays argued for a conciliation between Darwinian evolution and the tenets of theism, at a time when many on both sides perceived the two as mutually exclusive. Gray said that investigation of physical causes was not opposed to the theological view and the study of the harmonies between mind and Nature, and thought it "most presumable that an intellectual conception realized in Nature would be realized through natural agencies." Thomas Huxley, who strongly promoted Darwin's ideas while campaigning to end the dominance of science by the clergy, coined the term agnostic to describe his position that God's existence is unknowable. Darwin also took this position, but prominent atheists including Edward Aveling and Ludwig Büchner also took up evolution and it was criticized, in the words of one reviewer, as "tantamount to atheism." Following the lead of figures such as St. George Jackson Mivart and John Augustine Zahm, Roman Catholics in the United States became accepting of evolution itself while ambivalent towards natural selection and stressing humanity's divinely imbued soul. The Catholic Church never condemned evolution, and initially the more conservative-leaning Catholic leadership in Rome held back, but gradually adopted a similar position. During the late 19th century evolutionary ideas were most strongly disputed by the
premillennialists, who held to a prophecy of the imminent return of Christ based on a form of Biblical literalism, and were convinced that the Bible would be invalidated if any error in the Scriptures was conceded. However, hardly any of the critics of evolution at that time were concerned about geology, freely granting scientists any time they needed before the Edenic creation to account for scientific observations, such as fossils and geological findings. In the immediate post-Darwinian era, few scientists or clerics rejected the antiquity of the earth or the progressive nature of the fossil record. Likewise, few attached geological significance to the Biblical flood, unlike subsequent creationists. Evolutionary skeptics, creationist leaders and skeptical scientists were usually either willing to adopt a figurative reading of the first chapter of the Book of Genesis, or allowed that the six days of creation were not necessarily 24-hour days. Science professors at liberal northeastern universities almost immediately embraced the theory of evolution and introduced it to their students. However, some people in parts of the south and west of the United States, who had been influenced by the preachings of Christian fundamentalist evangelicals, rejected the theory as immoral. In the United Kingdom, Evangelical creationists were in a tiny minority. The Victoria Institute was formed in 1865 in response to Essays and Reviews and Darwin's On the Origin of Species. It was not officially opposed to evolution theory, but its main founder James Reddie objected to Darwin's work as "inharmonious" and "utterly incredible", and Philip Henry Gosse, author of Omphalos, was a vice-president. The institute's membership increased until 1897, then declined sharply. In the 1920s George McCready Price attended and made several presentations of his creationist views, which found little support among the members. In 1927 John Ambrose Fleming was made president; while
he insisted on creation of the soul, his acceptance of divinely guided development and of Pre-Adamite humanity meant he was thought of as a theistic evolutionist. === Creationism in theology === At the beginning of the 19th century debate had started to develop over applying historical methods to Biblical criticism, suggesting a less literal account of the Bible. Simultaneously, the developing science of geology indicated the Earth was ancient, and religious thinkers sought to accommodate this by day-age creationism or gap creationism. Neptunianist catastrophism, which had in the 17th and 18th centuries proposed that a universal flood could explain all geological features, gave way to ideas of geological gradualism (introduced in 1795 by James Hutton) based upon the erosion and depositional cycle over millions of years, which gave a better explanation of the sedimentary column. Biology and the discovery of extinction (first described in the 1750s and put on a firm footing by Georges Cuvier in 1796) challenged ideas of a fixed immutable Aristotelian "great chain of being." Natural theology had earlier expected that scientific findings based on empirical evidence would help religious understanding. Emerging differences led some to increasingly regard science and theology as concerned with different, non-competitive domains. When most scientists came to accept evolution (by around 1875), European theologians generally came to accept evolution as an instrument of God. For instance, Pope Leo XIII (in office 1878–1903) referred to longstanding Christian thought that scriptural interpretations could be reevaluated in the light of new knowledge, and Roman Catholics came around to acceptance of human evolution subject to direct creation of the soul. In the United States the development of the racist Social Darwinian eugenics movement by certain circles led a number of Catholics to reject evolution. 
In this enterprise they received little aid from conservative Christians in Great
Britain and Europe. In Britain this has been attributed to their minority status leading to a more tolerant, less militant theological tradition. This continues to the present. In his speech at the Pontifical Academy of Sciences in 2014, Pope Francis declared that he accepted the Big Bang theory and the theory of evolution and that God was not "a magician with a magic wand". ==== Development of creationism in the United States ==== At first in the U.S., evangelical Christians paid little attention to the developments in geology and biology, being more concerned with the rise of European higher Biblical criticism which questioned the belief in the Bible as literal truth. Those criticizing these approaches took the name "fundamentalist"—originally coined by its supporters to describe a specific package of theological beliefs that developed into a movement within the Protestant community of the United States in the early part of the 20th century, and which had its roots in the Fundamentalist–Modernist Controversy of the 1920s and 1930s. The term in a religious context generally indicates unwavering attachment to a set of irreducible beliefs. Until the mid-20th century, mainline Christian denominations within the United States showed little official resistance to evolution. Around the start of the 20th century some evangelical scholars had ideas accommodating evolution, such as B. B. Warfield who saw it as a natural law expressing God's will. By then most U.S. high-school and college biology classes taught scientific evolution, but several factors, including the rise of Christian fundamentalism and social factors of changes and insecurity in more traditionalist Bible Belt communities, led to a backlash. The numbers of children receiving secondary education increased rapidly, and parents who had fundamentalist tendencies or who opposed social ideas of what was called "survival of the fittest" had real concerns about
what their children were learning about evolution. ==== British creationism ==== The main British creationist movement in this period, the Evolution Protest Movement (EPM), formed in the 1930s out of the Victoria Institute, or Philosophical Society of Great Britain (founded in 1865 in response to the publication of Darwin's On the Origin of Species in 1859 and of Essays and Reviews in 1860). The Victoria Institute had the stated objective of defending "the great truths revealed in Holy Scripture ... against the opposition of Science falsely so called". Although it did not officially oppose evolution, it attracted a number of scientists skeptical of Darwinism, including John William Dawson and Arnold Guyot. It reached a high point of 1,246 members in 1897, but quickly plummeted to less than one third of that figure in the first two decades of the twentieth century. Although it opposed evolution at first, the institute joined the theistic evolution camp by the 1920s, which led to the development of the Evolution Protest Movement in reaction. Amateur ornithologist Douglas Dewar, the main driving-force within the EPM, published a booklet entitled Man: A Special Creation (1936) and engaged in public speaking and debates with supporters of evolution. In the late 1930s he resisted American creationists' call for acceptance of flood geology, which later led to conflict within the organization. Despite trying to win the public endorsement of C. S. Lewis (1898–1963), the most prominent Christian apologist of his day, by the mid-1950s the EPM came under control of schoolmaster/pastor Albert G. Tilney, whose dogmatic and authoritarian style ran the organization "as a one-man band", rejecting flood geology, unwaveringly promoting gap creationism, and reducing the membership to lethargic inactivity. It was renamed the Creation Science Movement (CSM) in 1980, under the chairmanship of David Rosevear, who holds a Ph.D.
in organometallic chemistry from the University of Bristol. By the mid-1980s the CSM had formally incorporated flood geology into its "Deed of Trust" (which all officers had to sign) and condemned gap creationism and day-age creationism as unscriptural. == United States legal challenges and their consequences == In 1925 Tennessee passed a statute, the Butler Act, which prohibited the teaching of the theory of evolution in all schools in the state. Later that year Mississippi passed a similar law, as did Arkansas in 1927. In 1968 the Supreme Court of the United States struck down these "anti-monkey" laws as unconstitutional, because they established a religious doctrine violating both the First and Fourteenth Amendments to the United States Constitution. In more recent times religious fundamentalists who accept creationism have struggled to get their rejection of evolution accepted as legitimate science within education institutions in the U.S. A series of important court cases has resulted. === Butler Act and the Scopes monkey trial (1925) === After 1918, in the aftermath of World War I, the Fundamentalist–Modernist controversy had brought a surge of opposition to the idea of evolution, and following the campaigning of William Jennings Bryan several states introduced legislation prohibiting the teaching of evolution. By 1925, such legislation was being considered in 15 states, and had passed in some states, such as Tennessee. The American Civil Liberties Union offered to defend anyone who wanted to bring a test case against one of these laws. John T. Scopes accepted, and confessed to teaching his Tennessee class evolution in defiance of the Butler Act, using the textbook by George William Hunter: A Civic Biology: Presented in Problems (1914). The trial, widely publicized by H. L. Mencken among others, is commonly referred to as the Scopes Monkey Trial. The court convicted Scopes, but the
widespread publicity galvanized proponents of evolution. Following an appeal of the case to the Tennessee Supreme Court, the Court overturned the decision on a technicality (the judge had assessed the minimum $100 fine instead of allowing the jury to assess the fine). The statute required a minimum fine of $100, and the state Constitution required all fines over $50 to be assessed by a jury. Although it overturned the conviction, the Court decided that the Butler Act was not in violation of the Religious Preference provisions of the Tennessee Constitution (Section 3 of Article 1), which stated "that no preference shall ever be given, by law, to any religious establishment or mode of worship". The Court, applying that state constitutional language, held: We are not able to see how the prohibition of teaching the theory that man has descended from a lower order of animals gives preference to any religious establishment or mode of worship. So far as we know, there is no religious establishment or organized body that has in its creed or confession of faith any article denying or affirming such a theory.... Protestants, Catholics, and Jews are divided among themselves in their beliefs, and that there is no unanimity among the members of any religious establishment as to this subject. Belief or unbelief in the theory of evolution is no more a characteristic of any religious establishment or mode of worship than is belief or unbelief in the wisdom of the prohibition laws. It would appear that members of the same churches quite generally disagree as to these things. ... Furthermore, [the Butler Act] requires the teaching of nothing. It only forbids the teaching of evolution of man from a lower order of animals.... As the law thus stands, while the theory of evolution of man may
not be taught in the schools of the State, nothing contrary to that theory [such as Creationism] is required to be taught. ... It is not necessary now to determine the exact scope of the Religious Preference clause of the Constitution ... Section 3 of Article 1 is binding alike on the Legislature and the school authorities. So far we are clear that the Legislature has not crossed these constitutional limitations. The interpretation of the Establishment Clause of the United States Constitution up to that time held that the government could not establish a particular religion as the State religion. The Tennessee Supreme Court's decision held in effect that the Butler Act was constitutional under the state Constitution's Religious Preference Clause, because the Act did not establish one religion as the "State religion". As a result of the holding, the teaching of evolution remained illegal in Tennessee, and continued campaigning succeeded in removing evolution from school textbooks throughout the United States. === Epperson v. Arkansas (1968) === In 1968 the United States Supreme Court invalidated a forty-year-old Arkansas statute that prohibited the teaching of evolution in the public schools. A Little Rock, Arkansas, high-school-biology teacher, Susan Epperson, filed suit, charging that the law violated the federal constitutional prohibition against establishment of religion as set forth in the Establishment Clause. The Little Rock Ministerial Association supported Epperson's challenge, declaring, "to use the Bible to support an irrational and an archaic concept of static and undeveloping creation is not only to misunderstand the meaning of the Book of Genesis, but to do God and religion a disservice by making both enemies of scientific advancement and academic freedom". The Court held that the United States Constitution prohibits a state from requiring, in the words of the majority opinion, "that teaching and learning must
be tailored to the principles or prohibitions of any religious sect or dogma". But the Supreme Court decision also suggested that creationism could be taught in addition to evolution. === Daniel v. Waters (1975) === Daniel v. Waters was a 1975 legal case in which the United States Court of Appeals for the Sixth Circuit struck down Tennessee's law mandating "equal time" for evolution and creationism in public-school science classes, because it violated the Establishment Clause. Following this ruling, creationism was stripped of overt biblical references and rebranded "Creation Science", and several states passed legislative acts requiring that this be given equal time with the teaching of evolution. === Creation science === As biologists grew more and more confident in evolution as the central defining principle of biology, American membership in churches favoring increasingly literal interpretations of scripture also rose, with the Southern Baptist Convention and Lutheran Church–Missouri Synod outpacing all other denominations. With growth and increased finances, these churches became better equipped to promulgate a creationist message, with their own colleges, schools, publishing houses, and broadcast media. In 1961 Presbyterian and Reformed Publishing released the first major modern creationist book: John C. Whitcomb and Henry M. Morris' influential The Genesis Flood: The Biblical Record and Its Scientific Implications. The authors argued that creation was literally six days long, that humans lived concurrently with dinosaurs, and that God created each "kind" of life individually. On the strength of this, Morris became a popular speaker, spreading anti-evolutionary ideas at fundamentalist churches, colleges, and conferences. Morris' Creation Science Research Center (CSRC) rushed publication of biology textbooks that promoted creationism. 
Ultimately, the CSRC broke up over a divide between sensationalism and a more intellectual approach, and Morris founded the Institute for Creation Research, which he promised would be controlled and
operated by scientists. During this time, Morris and others who supported flood geology adopted the terms "scientific creationism" and "creation science". The "flood geology" theory effectively co-opted "the generic creationist label for their hyperliteralist views." ==== Court cases ==== ===== McLean v. Arkansas ===== In 1982, a federal court in Arkansas ruled that the Arkansas "Balanced Treatment for Creation-Science and Evolution-Science Act" (Act 590) was unconstitutional because it violated the Establishment Clause. Much of the transcript of the case was lost, including evidence from Francisco Ayala. ===== Edwards v. Aguillard ===== In the early 1980s, the Louisiana legislature passed a law titled the "Balanced Treatment for Creation-Science and Evolution-Science Act". The act did not require teaching either evolution or creationism as such, but did require that when evolutionary science was taught, creation science had to be taught as well. Creationists had lobbied aggressively for the law, arguing that the act was about academic freedom for teachers, an argument adopted by the state in support of the act. Lower courts ruled that the State's actual purpose was to promote the religious doctrine of creation science, but the State appealed to the Supreme Court. In the similar case of McLean v. Arkansas (see above) the federal trial court had also decided against creationism. McLean v. Arkansas was not appealed to the federal Circuit Court of Appeals, as creationists thought they had better chances with Edwards v. Aguillard. In 1987 the United States Supreme Court ruled that the Louisiana act was also unconstitutional, because the law was specifically intended to advance a particular religion. At the same time, it stated its opinion that "teaching a variety of scientific theories about the origins of humankind to school children might be validly done with the clear secular intent of enhancing the effectiveness of science instruction",
leaving open the door for a handful of proponents of creation science to evolve their arguments into the iteration of creationism that later came to be known as intelligent design. === Intelligent design === In response to Edwards v. Aguillard, the neo-creationist intelligent design movement was formed around the Discovery Institute's Center for Science and Culture. It makes the claim that "certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection." It has been viewed as a "scientific" approach to creationism by creationists, but is widely rejected as pseudoscience by the science community—primarily because intelligent design cannot be tested and rejected like scientific hypotheses (see for example, List of scientific bodies explicitly rejecting intelligent design). ==== Kansas evolution hearings ==== In the push by intelligent design advocates to introduce intelligent design in public school science classrooms, the hub of the intelligent design movement, the Discovery Institute, arranged to conduct hearings to review the evidence for evolution in the light of its Critical Analysis of Evolution lesson plans. The Kansas evolution hearings were a series of hearings held in Topeka, Kansas, May 5 to May 12, 2005. The Kansas State Board of Education eventually adopted the institute's Critical Analysis of Evolution lesson plans over the objections of the State Board Science Hearing Committee, amid electioneering on behalf of conservative Republican Party candidates for the Board. On August 1, 2006, four of the six conservative Republicans who approved the Critical Analysis of Evolution classroom standards lost their seats in a primary election. The moderate Republicans and Democrats gaining seats vowed to overturn the 2005 school science standards and adopt those recommended by a State Board Science Hearing Committee that had been rejected by the previous board, and on February 13,
2007, the Board voted 6 to 4 to reject the amended science standards enacted in 2005. The definition of science was once again limited to "the search for natural explanations for what is observed in the universe." ==== Dover trial ==== Following the Edwards v. Aguillard decision by the United States Supreme Court, in which the Court held that a Louisiana law requiring that creation science be taught in public schools whenever evolution was taught was unconstitutional, because the law was specifically intended to advance a particular religion, creationists renewed their efforts to introduce creationism into public school science classes. This effort resulted in intelligent design, which sought to avoid legal prohibitions by leaving the source of creation to an unnamed and undefined intelligent designer, as opposed to God. This ultimately resulted in the "Dover Trial," Kitzmiller v. Dover Area School District, which went to trial on 26 September 2005 and was decided on 20 December 2005 in favor of the plaintiffs, who charged that a mandate that intelligent design be taught in public school science classrooms was an unconstitutional establishment of religion. The Kitzmiller v. Dover decision held that intelligent design was not a subject of legitimate scientific research, and that it "cannot uncouple itself from its creationist, and hence religious, antecedents." The December 2005 ruling in the Kitzmiller v. Dover Area School District trial supported the viewpoint of the American Association for the Advancement of Science and other science and education professional organizations who say that proponents of Teach the Controversy seek to undermine the teaching of evolution while promoting intelligent design, and to advance an education policy for U.S. public schools that introduces creationist explanations for the origin of life to public-school science curricula. ==== Texas Board of Education support for intelligent design ==== On March 27,
2009, the Texas Board of Education, by a vote of 13 to 2, voted that at least in Texas, textbooks must teach intelligent design alongside evolution, and question the validity of the fossil record. Don McLeroy, a dentist and chair of the board, said, "I think the new standards are wonderful ... dogmatism about evolution [has sapped] America's scientific soul." According to Science magazine, "Because Texas is the second-largest textbook market in the United States, publishers have a strong incentive to be certified by the board as 'conforming 100% to the state's standards'." The 2009 Texas Board of Education hearings were chronicled in the 2012 documentary The Revisionaries. ==== Recent developments ==== The scientific consensus on the origins and evolution of life continues to be challenged by creationist organizations and religious groups who desire to uphold some form of creationism (usually Young Earth creationism, creation science, Old Earth creationism or intelligent design) as an alternative. Most of these groups are literalist Christians who believe the biblical account is inerrant, and more than one sees the debate as part of the Christian mandate to evangelize. Some groups see science and religion as being diametrically opposed views that cannot be reconciled. More accommodating viewpoints, held by many mainstream churches and many scientists, consider science and religion to be separate categories of thought (non-overlapping magisteria), which ask fundamentally different questions about reality and posit different avenues for investigating it. This idea has received criticism from both the non-religious, like the zoologist, evolutionary biologist and religion critic Richard Dawkins, and fundamentalists, who see the idea as both underestimating the ability of methodological naturalism to result in moral conclusions and as ignoring or downplaying the fact claims of religions and scriptures.
Studies of the religious beliefs of scientists do support the existence of a rift
between traditional literal fundamentalist religion and experimental science. Three studies of scientific attitudes since 1904 have shown that over 80% of scientists do not believe in a traditional god or the traditional belief in immortality, with disbelief stronger amongst biological scientists than physical scientists. Amongst those not registering such attitudes, a high percentage indicated a preference for adhering to a belief concerning mystery rather than to any dogmatic or faith-based view. But only 10% of scientists stated that they saw a fundamental clash between science and religion. This study of trends over time suggests that the "culture wars" between creationism and evolution are held more strongly by religious literalists than by scientists themselves and are likely to continue, fostering anti-scientific or pseudoscientific attitudes amongst fundamentalist believers. More recently, the intelligent design movement has attempted an anti-evolution position that avoids any direct appeal to religion. Scientists have argued that intelligent design is pseudoscience and does not represent any research program within the mainstream scientific community, and is still essentially creationism. Its leading proponent, the Discovery Institute, made widely publicized claims that it was a new science, although the only paper arguing for it published in a scientific journal was accepted in questionable circumstances and quickly disavowed in the Sternberg peer review controversy, with the Biological Society of Washington stating that it did not meet the journal's scientific standards, was a "significant departure" from the journal's normal subject area and was published at the former editor's sole discretion, "contrary to typical editorial practices." On August 1, 2005, U.S. president George W. Bush commented, endorsing the teaching of intelligent design alongside evolution: "I felt like both sides ought to be properly taught ...
so people can understand what the debate is about." == Points of view == In the controversy a number of divergent
opinions have crystallized regarding both the acceptance of scientific theories and religious doctrine and practice. === Young-Earth creationism === Young-Earth creationism (YEC) involves the religiously based belief that God created the Earth within the last 10,000 years, literally as described in Genesis, within the approximate timeframe of biblical genealogies (detailed - for example - in the Ussher chronology). Young-Earth creationists often believe that the universe has a similar age to that of the Earth. Creationist cosmologies result from attempts by some creationists to assign the universe an age consistent with the Ussher chronology and other Young-Earth timeframes based on the genealogies. This belief generally has a basis in biblical literalism and completely rejects the scientific methodology of evolutionary biology. Creation science is agreed by the scientific community to be a pseudoscience that attempts to prove that Young Earth creationism is consistent with science. === Old-Earth creationism === Old-Earth creationism holds that God created the physical universe, but that one should not take the creation event of Genesis within 6 days strictly literally. This group generally accepts the age of the Universe and the age of the Earth as described by astronomers and geologists, but regards details of the evolutionary theory as questionable. Old-Earth creationists interpret the Genesis creation-narrative in a number of ways, each differing from the six, consecutive, 24-hour day creation of the Young-Earth creationist view. === Neo-creationism and "intelligent design" === Neo-creationists intentionally distance themselves from other forms of creationism, preferring to be known as wholly separate from creationism as a philosophy. They wish to re-frame the debate over the origins of life in non-religious terms and without appeals to scripture, and to bring the debate before the public. 
Neo-creationists may be either Young Earth or Old Earth creationists, and hold a range of underlying theological viewpoints (e.g.