In physics and probability theory, mean-field theory (MFT) or self-consistent field theory studies the behavior of high-dimensional random (stochastic) models by studying a simpler model that approximates the original by averaging over degrees of freedom (the number of values in the final calculation of a statistic that are free to vary). Such models consider many individual components that interact with each other. The main idea of MFT is to replace all interactions acting on any one body with an average or effective interaction, sometimes called a molecular field. This reduces any many-body problem into an effective one-body problem. The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at a lower computational cost. MFT has since been applied to a wide range of fields outside of physics, including statistical inference, graphical models, neuroscience, artificial intelligence, epidemic models, queueing theory, computer-network performance and game theory, as in the quantal response equilibrium. == Origins == The idea first appeared in physics (statistical mechanics) in the work of Pierre Curie and Pierre Weiss to describe phase transitions. MFT has been used in the Bragg–Williams approximation, models on the Bethe lattice, Landau theory, the Curie–Weiss law for magnetic susceptibility, Flory–Huggins solution theory, and Scheutjens–Fleer theory. Systems with many (sometimes infinite) degrees of freedom are generally hard to solve exactly or compute in closed, analytic form, except for some simple cases (e.g. certain Gaussian random-field theories, the 1D Ising model). Often combinatorial problems arise that make things like computing the partition function of a system difficult. MFT is an approximation method that often renders the original problem solvable and open to calculation, and in some cases MFT may give very accurate approximations. In field theory, the Hamiltonian may be expanded in terms of the magnitude of fluctuations around the mean of the field. In this context, MFT can be viewed as the "zeroth-order" expansion of the Hamiltonian in fluctuations. Physically, this means that an MFT system has no fluctuations, but this coincides with the idea that one is replacing all interactions with a "mean field". Quite often, MFT provides a convenient launch point for studying higher-order fluctuations. For example, when computing the partition function, studying the combinatorics of the interaction terms in the Hamiltonian can sometimes at best produce perturbation results or Feynman diagrams that correct the mean-field approximation. == Validity == In general, dimensionality plays an active role in determining whether a mean-field approach will work for any particular problem. There is sometimes a critical dimension above which MFT is valid and below which it is not. Heuristically, many interactions are replaced in MFT by one effective interaction. So if the field or particle exhibits many random interactions in the original system, they tend to cancel each other out, so the mean effective interaction and MFT will be more accurate. This is true in cases of high dimensionality, when the Hamiltonian includes long-range forces, or when the particles are extended (e.g. polymers). The Ginzburg criterion is the formal expression of how fluctuations render MFT a poor approximation, often depending upon the number of spatial dimensions in the system of interest.
== Formal approach (Hamiltonian) == The formal basis for mean-field theory is the Bogoliubov inequality. This inequality states that the free energy of a system with Hamiltonian {\displaystyle {\mathcal {H}}={\mathcal {H}}_{0}+\Delta {\mathcal {H}}} has the following upper bound: {\displaystyle F\leq F_{0}\ {\stackrel {\mathrm {def} }{=}}\ \langle {\mathcal {H}}\rangle _{0}-TS_{0},} where {\displaystyle S_{0}} is the entropy, and {\displaystyle F} and {\displaystyle F_{0}} are Helmholtz free energies. The average is taken over the equilibrium ensemble of the reference system with Hamiltonian {\displaystyle {\mathcal {H}}_{0}}. In the special case that the reference Hamiltonian is that of a non-interacting system and can thus be written as {\displaystyle {\mathcal {H}}_{0}=\sum _{i=1}^{N}h_{i}(\xi _{i}),} where {\displaystyle \xi _{i}} are the degrees of freedom of the individual components of our statistical system (atoms, spins and so forth), one can consider sharpening the upper bound by minimising the right side of the inequality. The minimising reference system is then the "best" approximation to the true system using non-correlated degrees of freedom and is known as the mean field approximation. For the most common case that the target Hamiltonian contains only pairwise interactions, i.e., {\displaystyle {\mathcal {H}}=\sum _{(i,j)\in {\mathcal {P}}}V_{i,j}(\xi _{i},\xi _{j}),} where {\displaystyle {\mathcal {P}}} is the set of pairs that interact, the minimising procedure can be carried out formally. Define {\displaystyle \operatorname {Tr} _{i}f(\xi _{i})} as the generalized sum of the observable {\displaystyle f} over the degrees of freedom of the single component (sum for discrete variables, integrals for continuous ones). The approximating free energy is given by {\displaystyle {\begin{aligned}F_{0}&=\operatorname {Tr} _{1,2,\ldots ,N}{\mathcal {H}}(\xi _{1},\xi _{2},\ldots ,\xi _{N})P_{0}^{(N)}(\xi _{1},\xi _{2},\ldots ,\xi _{N})\\&+kT\,\operatorname {Tr} _{1,2,\ldots ,N}P_{0}^{(N)}(\xi _{1},\xi _{2},\ldots ,\xi _{N})\log P_{0}^{(N)}(\xi _{1},\xi _{2},\ldots ,\xi _{N}),\end{aligned}}} where {\displaystyle P_{0}^{(N)}(\xi _{1},\xi _{2},\dots ,\xi _{N})} is the probability to find the reference system in the state specified by the variables {\displaystyle (\xi _{1},\xi _{2},\dots ,\xi _{N})}. This probability is given by the normalized Boltzmann factor {\displaystyle {\begin{aligned}P_{0}^{(N)}(\xi _{1},\xi _{2},\ldots ,\xi _{N})&={\frac {1}{Z_{0}^{(N)}}}e^{-\beta {\mathcal {H}}_{0}(\xi _{1},\xi _{2},\ldots ,\xi _{N})}\\&=\prod _{i=1}^{N}{\frac {1}{Z_{0}}}e^{-\beta h_{i}(\xi _{i})}\ {\stackrel {\mathrm {def} }{=}}\ \prod _{i=1}^{N}P_{0}^{(i)}(\xi _{i}),\end{aligned}}} where {\displaystyle Z_{0}} is the partition function. Thus
{\displaystyle {\begin{aligned}F_{0}&=\sum _{(i,j)\in {\mathcal {P}}}\operatorname {Tr} _{i,j}V_{i,j}(\xi _{i},\xi _{j})P_{0}^{(i)}(\xi _{i})P_{0}^{(j)}(\xi _{j})\\&+kT\sum _{i=1}^{N}\operatorname {Tr} _{i}P_{0}^{(i)}(\xi _{i})\log P_{0}^{(i)}(\xi _{i}).\end{aligned}}} In order to minimise, we take the derivative with respect to the single-degree-of-freedom probabilities {\displaystyle P_{0}^{(i)}} using a Lagrange multiplier to ensure proper normalization. The end result is the set of self-consistency equations {\displaystyle P_{0}^{(i)}(\xi _{i})={\frac {1}{Z_{0}}}e^{-\beta h_{i}^{MF}(\xi _{i})},\quad i=1,2,\ldots ,N,} where the mean field is given by {\displaystyle h_{i}^{\text{MF}}(\xi _{i})=\sum _{\{j\mid (i,j)\in {\mathcal {P}}\}}\operatorname {Tr} _{j}V_{i,j}(\xi _{i},\xi _{j})P_{0}^{(j)}(\xi _{j}).} == Applications == Mean field theory can be applied to a number of physical systems so as to study phenomena such as phase transitions. === Ising model === ==== Formal derivation ==== The Bogoliubov inequality, shown above, can be used to find the dynamics of a mean field model of the two-dimensional Ising lattice. A magnetisation function can be calculated from the resultant approximate free energy. The first step is choosing a more tractable approximation of the true Hamiltonian. Using a non-interacting or effective-field Hamiltonian, {\displaystyle -m\sum _{i}s_{i}}, the variational free energy is {\displaystyle F_{V}=F_{0}+\left\langle \left(-J\sum s_{i}s_{j}-h\sum s_{i}\right)-\left(-m\sum s_{i}\right)\right\rangle _{0}.} By the Bogoliubov inequality, simplifying this quantity and calculating the magnetisation function that minimises the variational free energy yields the best approximation to the actual magnetisation. The minimiser is {\displaystyle m=J\sum \langle s_{j}\rangle _{0}+h,} which is the ensemble average of spin. This simplifies to {\displaystyle m=zJ\tanh(\beta m)+h.} Equating the effective field felt by all spins to a mean spin value relates the variational approach to the suppression of fluctuations. The physical interpretation of the magnetisation function is then a field of mean values for individual spins. ==== Non-interacting spins approximation ==== Consider the Ising model on a {\displaystyle d}-dimensional lattice. The Hamiltonian is given by {\displaystyle H=-J\sum _{\langle i,j\rangle }s_{i}s_{j}-h\sum _{i}s_{i},} where {\displaystyle \sum _{\langle i,j\rangle }} indicates summation over the pairs of nearest neighbors {\displaystyle \langle i,j\rangle }, and {\displaystyle s_{i},s_{j}=\pm 1} are neighboring Ising spins. Let us transform our spin variable by introducing the fluctuation from its mean value {\displaystyle m_{i}\equiv \langle s_{i}\rangle }. We may rewrite the Hamiltonian as {\displaystyle H=-J\sum _{\langle i,j\rangle }(m_{i}+\delta s_{i})(m_{j}+\delta s_{j})-h\sum _{i}s_{i},} where we define {\displaystyle \delta s_{i}\equiv s_{i}-m_{i}}; this is the fluctuation of the spin.
If we expand the right side, we obtain one term that is entirely dependent on the mean values of the spins and independent of the spin configurations. This is the trivial term, which does not affect the statistical properties of the system. The next term is the one involving the product of the mean value of the spin and the fluctuation value. Finally, the last term involves a product of two fluctuation values. The mean field approximation consists of neglecting this second-order fluctuation term: {\displaystyle H\approx H^{\text{MF}}\equiv -J\sum _{\langle i,j\rangle }(m_{i}m_{j}+m_{i}\delta s_{j}+m_{j}\delta s_{i})-h\sum _{i}s_{i}.} These fluctuations are enhanced at low dimensions, making MFT a better approximation for high dimensions. Again, the summand can be re-expanded. In addition, we expect that the mean value of each spin is site-independent, since the Ising lattice is translationally invariant. This yields {\displaystyle H^{\text{MF}}=-J\sum _{\langle i,j\rangle }{\big (}m^{2}+2m(s_{i}-m){\big )}-h\sum _{i}s_{i}.} The summation over neighboring spins can be rewritten as {\displaystyle \sum _{\langle i,j\rangle }={\frac {1}{2}}\sum _{i}\sum _{j\in nn(i)}}, where {\displaystyle nn(i)} means "nearest neighbor of {\displaystyle i}", and the {\displaystyle 1/2} prefactor avoids double counting, since each bond is shared by two spins. Simplifying leads to the final expression {\displaystyle H^{\text{MF}}={\frac {Jm^{2}Nz}{2}}-\underbrace {(h+mJz)} _{h^{\text{eff.}}}\sum _{i}s_{i},} where {\displaystyle z} is the coordination number. At this point, the Ising Hamiltonian has been decoupled into a sum of one-body Hamiltonians with an effective mean field {\displaystyle h^{\text{eff.}}=h+Jzm}, which is the sum of the external field {\displaystyle h} and of the mean field induced by the neighboring spins. It is worth noting that this mean field directly depends on the number of nearest neighbors and thus on the dimension of the system (for instance, for a hypercubic lattice of dimension {\displaystyle d}, {\displaystyle z=2d}). Substituting this Hamiltonian into the partition function and solving the effective 1D problem, we obtain {\displaystyle Z=e^{-{\frac {\beta Jm^{2}Nz}{2}}}\left[2\cosh \left({\frac {h+mJz}{k_{\text{B}}T}}\right)\right]^{N},} where {\displaystyle N} is the number of lattice sites. This is a closed and exact expression for the partition function of the system. We may obtain the free energy of the system and calculate critical exponents. In particular, we can obtain the magnetization {\displaystyle m} as a function of {\displaystyle h^{\text{eff.}}}. We thus have two equations relating {\displaystyle m} and {\displaystyle h^{\text{eff.}}}, allowing us to determine {\displaystyle m} as a function of temperature. This leads to the following observation: For temperatures greater than a certain value {\displaystyle T_{\text{c}}}, the only solution is {\displaystyle m=0}. The system is paramagnetic. For {\displaystyle T<T_{\text{c}}}, there are two non-zero solutions: {\displaystyle m=\pm m_{0}}. The system is ferromagnetic.
{\displaystyle T_{\text{c}}} is given by the following relation: {\displaystyle T_{\text{c}}={\frac {Jz}{k_{B}}}}. This shows that MFT can account for the ferromagnetic phase transition. === Application to other systems === Similarly, MFT can be applied to other types of Hamiltonian, as in the following cases: To study the metal–superconductor transition. In this case, the analog of the magnetization is the superconducting gap {\displaystyle \Delta }. The molecular field of a liquid crystal that emerges when the Laplacian of the director field is non-zero. To determine the optimal amino acid side chain packing given a fixed protein backbone in protein structure prediction (see Self-consistent mean field (biology)). To determine the elastic properties of a composite material. Variational minimisation like mean field theory can also be used in statistical inference. == Extension to time-dependent mean fields == In mean field theory, the mean field appearing in the single-site problem is a time-independent scalar or vector quantity. However, this isn't always the case: in a variant of mean field theory called dynamical mean field theory (DMFT), the mean field becomes a time-dependent quantity. For instance, DMFT can be applied to the Hubbard model to study the metal–Mott-insulator transition. == See also == Dynamical mean field theory Mean field game theory == References ==
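The two relations between {\displaystyle m} and {\displaystyle h^{\text{eff.}}} combine into the self-consistency condition m = tanh(β(h + Jzm)), which has no closed-form solution but is easy to solve numerically. A minimal fixed-point sketch in Python (the parameter values are illustrative, not taken from the article):

```python
import math

def mean_field_magnetization(T, J=1.0, z=4, h=0.0, kB=1.0, tol=1e-12):
    """Solve m = tanh((h + J*z*m) / (kB*T)) by fixed-point iteration."""
    beta = 1.0 / (kB * T)
    m = 1.0  # start from the fully ordered state to pick the m >= 0 branch
    for _ in range(10_000):
        m_new = math.tanh(beta * (h + J * z * m))
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

# With J = 1, z = 4, h = 0 we expect Tc = J*z/kB = 4:
# m vanishes above Tc and is non-zero below it.
for T in (2.0, 3.9, 4.1, 6.0):
    print(f"T = {T}: m = {mean_field_magnetization(T):.6f}")
```

With these assumed parameters the iteration reproduces the mean-field transition at Tc = Jz/kB = 4.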
Wikipedia/Mean-field_model
In queueing theory, a discipline within the mathematical theory of probability, a BCMP network is a class of queueing network for which a product-form equilibrium distribution exists. It is named after the authors of the paper where the network was first described: Baskett, Chandy, Muntz, and Palacios. The theorem is a significant extension to a Jackson network, allowing virtually arbitrary customer routing and service time distributions, subject to particular service disciplines. The paper is well known, and the theorem was described in 1990 as "one of the seminal achievements in queueing theory in the last 20 years" by J. Michael Harrison and Ruth J. Williams. == Definition of a BCMP network == A network of m interconnected queues is known as a BCMP network if each of the queues is of one of the following four types: FCFS discipline where all customers have the same negative exponential service time distribution. The service rate can be state dependent, so write {\displaystyle \scriptstyle {\mu _{j}}} for the service rate when the queue length is j. Processor sharing queues Infinite-server queues LCFS with pre-emptive resume (work is not lost) In the final three cases, service time distributions must have rational Laplace transforms; that is, the Laplace transform must be a ratio of polynomials, {\displaystyle L(s)={\frac {N(s)}{D(s)}}.} Also, the following conditions must be met: external arrivals to node i (if any) form a Poisson process; a customer completing service at queue i will either move to some new queue j with (fixed) probability {\displaystyle P_{ij}} or leave the system with probability {\displaystyle 1-\sum _{j=1}^{m}P_{ij}}, which is non-zero for some subset of the queues. == Theorem == For a BCMP network of m queues which is open, closed or mixed in which each queue is of type 1, 2, 3 or 4, the equilibrium state probabilities are given by {\displaystyle \pi (x_{1},x_{2},\ldots ,x_{m})=C\pi _{1}(x_{1})\pi _{2}(x_{2})\cdots \pi _{m}(x_{m}),} where C is a normalizing constant chosen to make the equilibrium state probabilities sum to 1 and {\displaystyle \scriptstyle {\pi _{i}(\cdot )}} represents the equilibrium distribution for queue i. === Proof === The original proof of the theorem was given by checking that the independent balance equations were satisfied. Peter G. Harrison offered an alternative proof by considering reversed processes. == References ==
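As a concrete special case of the theorem, an open network of type-1 (FCFS exponential) queues reduces to a Jackson network, where each factor πi is geometric once the traffic equations are solved. A rough numerical sketch with invented rates and routing probabilities (not from the article):

```python
import numpy as np

# Hypothetical open network: external Poisson arrival rates gamma_i,
# routing matrix P[i][j], service rates mu_i at type-1 FCFS nodes.
gamma = np.array([1.0, 0.0])
P = np.array([[0.0, 0.6],    # queue 0 -> queue 1 w.p. 0.6, leaves w.p. 0.4
              [0.2, 0.0]])   # queue 1 -> queue 0 w.p. 0.2, leaves w.p. 0.8
mu = np.array([3.0, 2.5])

# Traffic equations: lambda_j = gamma_j + sum_i lambda_i * P[i][j]
lam = np.linalg.solve(np.eye(2) - P.T, gamma)
rho = lam / mu
assert (rho < 1).all(), "every node must be stable"

def pi(x):
    """Product-form equilibrium probability of the joint state x = (x1, x2)."""
    return float(np.prod((1 - rho) * rho ** np.array(x)))

print("lambda =", lam, "rho =", rho)
print("pi(0,0) =", pi((0, 0)), " pi(2,1) =", pi((2, 1)))
```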
Wikipedia/BCMP_network
In probability theory, the Lindley equation, Lindley recursion or Lindley process is a discrete-time stochastic process An where n takes integer values and: An + 1 = max(0, An + Bn). Processes of this form can be used to describe the waiting time of customers in a queue or the evolution of a queue length over time. The idea was first proposed in the discussion following Kendall's 1951 paper. == Waiting times == In Dennis Lindley's first paper on the subject the equation is used to describe the waiting times experienced by customers in a queue with the First-In First-Out (FIFO) discipline: Wn + 1 = max(0, Wn + Un), where Tn is the time between the nth and (n+1)th arrivals, Sn is the service time of the nth customer, Un = Sn − Tn, and Wn is the waiting time of the nth customer. The first customer does not need to wait, so W1 = 0. Subsequent customers will have to wait if they arrive at a time before the previous customer has been served. == Queue lengths == The evolution of the queue length process can also be written in the form of a Lindley equation. == Integral equation == Lindley's integral equation is a relationship satisfied by the stationary waiting time distribution F(x) in a G/G/1 queue: {\displaystyle F(x)=\int _{0^{-}}^{\infty }K(x-y)F({\text{d}}y)\quad x\geq 0,} where K(x) is the distribution function of the random variable denoting the difference between the (k − 1)th customer's service time and the inter-arrival time between the (k − 1)th and kth customers. The Wiener–Hopf method can be used to solve this expression. == Notes ==
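The recursion is easy to simulate directly. A short sketch for an M/M/1 queue (illustrative rates), comparing the simulated mean waiting time with the known steady-state value ρ/(μ − λ):

```python
import random

random.seed(1)
lam, mu, n = 0.8, 1.0, 200_000          # illustrative arrival/service rates

W, total = 0.0, 0.0                      # W1 = 0: the first customer does not wait
for _ in range(n):
    total += W
    S = random.expovariate(mu)           # service time S_n
    T = random.expovariate(lam)          # inter-arrival time T_n
    W = max(0.0, W + S - T)              # W_{n+1} = max(0, W_n + U_n), U_n = S_n - T_n

print("simulated mean wait:", total / n)
print("M/M/1 steady state :", (lam / mu) / (mu - lam))   # rho / (mu - lambda)
```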
Wikipedia/Lindley_equation
In probability theory, the matrix analytic method is a technique to compute the stationary probability distribution of a Markov chain which has a repeating structure (after some point) and a state space which grows unboundedly in no more than one dimension. Such models are often described as M/G/1 type Markov chains because they can describe transitions in an M/G/1 queue. The method is a more complicated version of the matrix geometric method and is the classical solution method for M/G/1 chains. == Method description == An M/G/1-type stochastic matrix is one of the form {\displaystyle P={\begin{pmatrix}B_{0}&B_{1}&B_{2}&B_{3}&\cdots \\A_{0}&A_{1}&A_{2}&A_{3}&\cdots \\&A_{0}&A_{1}&A_{2}&\cdots \\&&A_{0}&A_{1}&\cdots \\\vdots &\vdots &\vdots &\vdots &\ddots \end{pmatrix}}} where Bi and Ai are k × k matrices. (Note that unmarked matrix entries represent zeroes.) Such a matrix describes the embedded Markov chain in an M/G/1 queue. If P is irreducible and positive recurrent then the stationary distribution is given by the solution to the equations {\displaystyle P\pi =\pi \quad {\text{ and }}\quad \mathbf {e} ^{\text{T}}\pi =1,} where e represents a vector of suitable dimension with all values equal to 1. Matching the structure of P, π is partitioned to π1, π2, π3, …. To compute these probabilities the column stochastic matrix G is computed such that {\displaystyle G=\sum _{i=0}^{\infty }G^{i}A_{i}.} G is called the auxiliary matrix. Matrices are defined {\displaystyle {\begin{aligned}{\overline {A}}_{i+1}&=\sum _{j=i+1}^{\infty }G^{j-i-1}A_{j}\\{\overline {B}}_{i}&=\sum _{j=i}^{\infty }G^{j-i}B_{j}\end{aligned}}} then π0 is found by solving {\displaystyle {\begin{aligned}{\overline {B}}_{0}\pi _{0}&=\pi _{0}\\\quad \left(\mathbf {e} ^{\text{T}}+\mathbf {e} ^{\text{T}}\left(I-\sum _{i=1}^{\infty }{\overline {A}}_{i}\right)^{-1}\sum _{i=1}^{\infty }{\overline {B}}_{i}\right)\pi _{0}&=1\end{aligned}}} and the πi are given by Ramaswami's formula, a numerically stable relationship first published by Vaidyanathan Ramaswami in 1988: {\displaystyle \pi _{i}=(I-{\overline {A}}_{1})^{-1}\left[{\overline {B}}_{i+1}\pi _{0}+\sum _{j=1}^{i-1}{\overline {A}}_{i+1-j}\pi _{j}\right],\quad i\geq 1.} == Computation of G == There are two popular iterative methods for computing G: functional iteration and cyclic reduction. == Tools == MAMSolver == References ==
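As an illustration of functional iteration, the fixed-point equation for G can be iterated directly when only finitely many of the Ai are non-zero. A toy sketch with invented 2 × 2 blocks, chosen so that A0 + A1 + A2 is column-stochastic and the level drift is negative, in which case G should come out column-stochastic:

```python
import numpy as np

# Invented example: A_i = c_i * C with sum(c_i) = 1, so column sums are 1.
C = np.array([[0.7, 0.4],
              [0.3, 0.6]])
A = [0.5 * C, 0.3 * C, 0.2 * C]   # A_0, A_1, A_2; mean drift 0.2 - 0.5 < 0

# Functional iteration on G = A_0 + G A_1 + G^2 A_2, starting from G = 0.
G = np.zeros((2, 2))
for _ in range(500):
    G_new = A[0] + G @ A[1] + G @ G @ A[2]
    done = np.max(np.abs(G_new - G)) < 1e-12
    G = G_new
    if done:
        break

print(G)
print("column sums:", G.sum(axis=0))   # ~1: G is column-stochastic here
```

Starting from the zero matrix, the iterates increase monotonically to the minimal non-negative solution, which is the desired G.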
Wikipedia/Matrix_analytic_method
In queueing theory, a discipline within the mathematical theory of probability, a layered queueing network (or rendezvous network) is a queueing network model where the service time for each job at each service node is given by the response time of a queueing network (and those service times in turn may also be determined by further nested networks). Resources can be nested and queues form along the nodes of the nesting structure. The nesting structure thus defines "layers" within the queueing model. Layered queueing has applications in a wide range of distributed systems which involve different master/slave, replicated services and client-server components, allowing each local node to be represented by a specific queue, then orchestrating the evaluation of these queues. For large populations of jobs, a fluid limit has been shown in PEPA to give a good approximation of performance measures. == External links == Tutorial Introduction to Layered Modeling of Software Performance by Murray Woodside, Carleton University == References ==
Wikipedia/Layered_queueing_network
In queueing theory, a discipline within the mathematical theory of probability, a heavy traffic approximation (sometimes called heavy traffic limit theorem or diffusion approximation) involves the matching of a queueing model with a diffusion process under some limiting conditions on the model's parameters. The first such result was published by John Kingman, who showed that when the utilisation parameter of an M/M/1 queue is near 1, a scaled version of the queue length process can be accurately approximated by a reflected Brownian motion. == Heavy traffic condition == Heavy traffic approximations are typically stated for the process X(t) describing the number of customers in the system at time t. They are arrived at by considering the model under the limiting values of some model parameters, and therefore for the result to be finite the model must be rescaled by a factor n, denoted {\displaystyle {\hat {X}}_{n}(t)={\frac {X(nt)-\mathbb {E} (X(nt))}{\sqrt {n}}},} and the limit of this process is considered as n → ∞. There are three classes of regime under which such approximations are generally considered. The number of servers is fixed and the traffic intensity (utilization) is increased to 1 (from below). The queue length approximation is a reflected Brownian motion. Traffic intensity is fixed and the number of servers and arrival rate are increased to infinity. Here the queue length limit converges to the normal distribution. A quantity β is fixed where {\displaystyle \beta =(1-\rho ){\sqrt {s}},} with ρ representing the traffic intensity and s the number of servers. Traffic intensity and the number of servers are increased to infinity and the limiting process is a hybrid of the above results. This case, first published by Halfin and Whitt, is often known as the Halfin–Whitt regime or quality-and-efficiency-driven (QED) regime. == Results for a G/G/1 queue == Theorem 1. Consider a sequence of G/G/1 queues indexed by {\displaystyle j}. For queue {\displaystyle j}, let {\displaystyle T_{j}} denote the random inter-arrival time and {\displaystyle S_{j}} denote the random service time; let {\displaystyle \rho _{j}={\frac {\lambda _{j}}{\mu _{j}}}} denote the traffic intensity with {\displaystyle {\frac {1}{\lambda _{j}}}=E(T_{j})} and {\displaystyle {\frac {1}{\mu _{j}}}=E(S_{j})}; let {\displaystyle W_{q,j}} denote the waiting time in queue for a customer in steady state; let {\displaystyle \alpha _{j}=-E[S_{j}-T_{j}]} and {\displaystyle \beta _{j}^{2}=\operatorname {var} [S_{j}-T_{j}].} Suppose that {\displaystyle T_{j}{\xrightarrow {d}}T}, {\displaystyle S_{j}{\xrightarrow {d}}S}, and {\displaystyle \rho _{j}\rightarrow 1}. Then {\displaystyle {\frac {2\alpha _{j}}{\beta _{j}^{2}}}W_{q,j}{\xrightarrow {d}}\exp(1)} provided that: (a) {\displaystyle \operatorname {Var} [S-T]>0}; (b) for some {\displaystyle \delta >0}, {\displaystyle E[S_{j}^{2+\delta }]} and {\displaystyle E[T_{j}^{2+\delta }]} are both less than some constant {\displaystyle C} for all {\displaystyle j}.
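Theorem 1 can be sanity-checked by simulation: for an M/M/1 queue with ρ close to 1, the scaled steady-state wait (2α/β²)·Wq should be approximately exp(1)-distributed, so in particular its mean should be near 1. A rough Monte Carlo sketch (illustrative parameters; the estimate is noisy because waits are strongly autocorrelated at high utilisation):

```python
import random

random.seed(0)
lam, mu, n = 0.95, 1.0, 1_000_000        # illustrative: rho = 0.95
alpha = 1/lam - 1/mu                      # alpha = -E[S - T]
beta2 = 1/mu**2 + 1/lam**2                # beta^2 = var[S - T] for exponentials

W, total = 0.0, 0.0
for _ in range(n):                        # Lindley recursion for waiting times
    total += W
    W = max(0.0, W + random.expovariate(mu) - random.expovariate(lam))

print("mean of scaled wait (should be near 1):", 2 * alpha / beta2 * total / n)
```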
== Heuristic argument == Waiting time in queue Let {\displaystyle U^{(n)}=S^{(n)}-T^{(n)}} be the difference between the nth service time and the nth inter-arrival time, and let {\displaystyle W_{q}^{(n)}} be the waiting time in queue of the nth customer. Then by definition {\displaystyle W_{q}^{(n)}=\max(W_{q}^{(n-1)}+U^{(n-1)},0).} After recursive calculation, we have {\displaystyle W_{q}^{(n)}=\max(U^{(1)}+\cdots +U^{(n-1)},U^{(2)}+\cdots +U^{(n-1)},\ldots ,U^{(n-1)},0).} Random walk Let {\displaystyle P^{(k)}=\sum _{i=1}^{k}U^{(n-i)}}, where the {\displaystyle U^{(i)}} are i.i.d. Define {\displaystyle \alpha =-E[U^{(i)}]} and {\displaystyle \beta ^{2}=\operatorname {var} [U^{(i)}]}. Then we have {\displaystyle E[P^{(k)}]=-k\alpha ,} {\displaystyle \operatorname {var} [P^{(k)}]=k\beta ^{2},} and {\displaystyle W_{q}^{(n)}=\max _{n-1\geq k\geq 0}P^{(k)};} we get {\displaystyle W_{q}^{(\infty )}=\sup _{k\geq 0}P^{(k)}} by taking the limit over {\displaystyle n}. Thus the waiting time in queue of the nth customer {\displaystyle W_{q}^{(n)}} is the supremum of a random walk with a negative drift. Brownian motion approximation A random walk can be approximated by a Brownian motion when the jump sizes approach 0 and the times between jumps approach 0. We have {\displaystyle P^{(0)}=0} and {\displaystyle P^{(k)}} has independent and stationary increments. When the traffic intensity {\displaystyle \rho } approaches 1 and {\displaystyle k} tends to {\displaystyle \infty }, we have {\displaystyle P^{(t)}\ \sim \ {\mathcal {N}}(-\alpha t,\beta ^{2}t)} after replacing {\displaystyle k} with the continuous value {\displaystyle t}, according to the functional central limit theorem. Thus the waiting time in queue of the {\displaystyle n}th customer can be approximated by the supremum of a Brownian motion with a negative drift. Supremum of Brownian motion Theorem 2. Let {\displaystyle X} be a Brownian motion with drift {\displaystyle \mu } and standard deviation {\displaystyle \sigma } starting at the origin, and let {\displaystyle M_{t}=\sup _{0\leq s\leq t}X(s)}. If {\displaystyle \mu \leq 0}, then {\displaystyle \lim _{t\rightarrow \infty }P(M_{t}>x)=\exp(2\mu x/\sigma ^{2}),\quad x\geq 0;} otherwise {\displaystyle \lim _{t\rightarrow \infty }P(M_{t}\geq x)=1,\quad x\geq 0.} == Conclusion == {\displaystyle W_{q}^{(\infty )}\thicksim \exp \left({\frac {2\alpha }{\beta ^{2}}}\right)} under the heavy traffic condition. Thus, the heavy traffic limit theorem (Theorem 1) is heuristically argued. Formal proofs usually follow a different approach involving characteristic functions. == Example == Consider an M/G/1 queue with arrival rate {\displaystyle \lambda }, mean service time {\displaystyle E[S]={\frac {1}{\mu }}}, and service time variance {\displaystyle \operatorname {var} [S]=\sigma _{B}^{2}}. What is the average waiting time in queue in the steady state?
The exact average waiting time in queue in steady state is given by {\displaystyle W_{q}={\frac {\rho ^{2}+\lambda ^{2}\sigma _{B}^{2}}{2\lambda (1-\rho )}}.} The corresponding heavy traffic approximation is {\displaystyle W_{q}^{(H)}={\frac {\lambda ({\frac {1}{\lambda ^{2}}}+\sigma _{B}^{2})}{2(1-\rho )}}.} The relative error of the heavy traffic approximation is {\displaystyle {\frac {W_{q}^{(H)}-W_{q}}{W_{q}}}={\frac {1-\rho ^{2}}{\rho ^{2}+\lambda ^{2}\sigma _{B}^{2}}}.} Thus when {\displaystyle \rho \rightarrow 1}, we have {\displaystyle {\frac {W_{q}^{(H)}-W_{q}}{W_{q}}}\rightarrow 0.} == External links == The G/G/1 queue by Sergey Foss == References ==
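This comparison is simple to reproduce numerically. A small sketch evaluating the exact mean wait against the heavy traffic approximation for exponential service times (so σB² = 1/μ²) at several utilisations:

```python
# Exact vs heavy-traffic mean waiting time for an M/G/1 queue.
def wq_exact(lam, mu, var_s):
    rho = lam / mu
    return (rho**2 + lam**2 * var_s) / (2 * lam * (1 - rho))

def wq_heavy_traffic(lam, mu, var_s):
    rho = lam / mu
    return lam * (1 / lam**2 + var_s) / (2 * (1 - rho))

mu = 1.0
var_s = 1.0 / mu**2          # exponential service times: var[S] = 1/mu^2
for lam in (0.5, 0.8, 0.95, 0.99):
    e = wq_exact(lam, mu, var_s)
    h = wq_heavy_traffic(lam, mu, var_s)
    print(f"rho = {lam/mu:.2f}: exact = {e:8.3f}  approx = {h:8.3f}  rel. err = {(h - e) / e:.3f}")
```

The printed relative error shrinks toward 0 as ρ approaches 1, matching the formula above.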
Wikipedia/Heavy_traffic_approximation
In control theory, a control-Lyapunov function (CLF) is an extension of the idea of Lyapunov function {\displaystyle V(x)} to systems with control inputs. The ordinary Lyapunov function is used to test whether a dynamical system is (Lyapunov) stable or (more restrictively) asymptotically stable. Lyapunov stability means that if the system starts in a state {\displaystyle x\neq 0} in some domain D, then the state will remain in D for all time. For asymptotic stability, the state is also required to converge to {\displaystyle x=0}. A control-Lyapunov function is used to test whether a system is asymptotically stabilizable, that is, whether for any state x there exists a control {\displaystyle u(x,t)} such that the system can be brought to the zero state asymptotically by applying the control u. The theory and application of control-Lyapunov functions were developed by Zvi Artstein and Eduardo D. Sontag in the 1980s and 1990s. == Definition == Consider an autonomous dynamical system with inputs {\displaystyle {\dot {x}}=f(x,u),\qquad (1)} where {\displaystyle x\in \mathbb {R} ^{n}} is the state vector and {\displaystyle u\in \mathbb {R} ^{m}} is the control vector. Suppose our goal is to drive the system to an equilibrium {\displaystyle x_{*}\in \mathbb {R} ^{n}} from every initial state in some domain {\displaystyle D\subset \mathbb {R} ^{n}}. Without loss of generality, suppose the equilibrium is at {\displaystyle x_{*}=0} (for an equilibrium {\displaystyle x_{*}\neq 0}, it can be translated to the origin by a change of variables). Definition. A control-Lyapunov function (CLF) is a function {\displaystyle V:D\to \mathbb {R} } that is continuously differentiable, positive-definite (that is, {\displaystyle V(x)} is positive for all {\displaystyle x\in D} except at {\displaystyle x=0} where it is zero), and such that for all {\displaystyle x\in \mathbb {R} ^{n}(x\neq 0),} there exists {\displaystyle u\in \mathbb {R} ^{m}} such that {\displaystyle {\dot {V}}(x,u):=\langle \nabla V(x),f(x,u)\rangle <0,} where {\displaystyle \langle u,v\rangle } denotes the inner product of {\displaystyle u,v\in \mathbb {R} ^{n}}. The last condition is the key condition; in words it says that for each state x we can find a control u that will reduce the "energy" V. Intuitively, if in each state we can always find a way to reduce the energy, we should eventually be able to bring the energy asymptotically to zero, that is, to bring the system to a stop. This is made rigorous by Artstein's theorem. Some results apply only to control-affine systems, i.e., control systems of the form {\displaystyle {\dot {x}}=f(x)+\sum _{i=1}^{m}g_{i}(x)u_{i},\qquad (2)} where {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} ^{n}} and {\displaystyle g_{i}:\mathbb {R} ^{n}\to \mathbb {R} ^{n}} for {\displaystyle i=1,\dots ,m}. == Theorems == Eduardo Sontag showed that for a given control system, there exists a continuous CLF if and only if the origin is asymptotically stabilizable. It was later shown by Francis H. Clarke, Yuri Ledyaev, Eduardo Sontag, and A.I. Subbotin that every asymptotically controllable system can be stabilized by a (generally discontinuous) feedback. Artstein proved that the dynamical system (2) has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback u(x).
=== Constructing the Stabilizing Input === It is often difficult to find a control-Lyapunov function for a given system, but if one is found, then the feedback stabilization problem simplifies considerably. For the control-affine system (2), Sontag's formula (or Sontag's universal formula) gives the feedback law {\displaystyle k:\mathbb {R} ^{n}\to \mathbb {R} ^{m}} directly in terms of the derivatives of the CLF. In the special case of a single-input system {\displaystyle (m=1)}, Sontag's formula is written as {\displaystyle k(x)={\begin{cases}\displaystyle -{\frac {L_{f}V(x)+{\sqrt {\left[L_{f}V(x)\right]^{2}+\left[L_{g}V(x)\right]^{4}}}}{L_{g}V(x)}}&{\text{ if }}L_{g}V(x)\neq 0\\0&{\text{ if }}L_{g}V(x)=0\end{cases}}} where {\displaystyle L_{f}V(x):=\langle \nabla V(x),f(x)\rangle } and {\displaystyle L_{g}V(x):=\langle \nabla V(x),g(x)\rangle } are the Lie derivatives of {\displaystyle V} along {\displaystyle f} and {\displaystyle g}, respectively. For the general nonlinear system (1), the input {\displaystyle u} can be found by solving a static non-linear programming problem {\displaystyle u^{*}(x)={\underset {u}{\operatorname {arg\,min} }}\nabla V(x)\cdot f(x,u)} for each state x. == Example == Here is a characteristic example of applying a Lyapunov candidate function to a control problem. Consider the non-linear system, a mass-spring-damper system with spring hardening and position-dependent mass, described by {\displaystyle m(1+q^{2}){\ddot {q}}+b{\dot {q}}+K_{0}q+K_{1}q^{3}=u.} Now, given the desired state {\displaystyle q_{d}} and actual state {\displaystyle q}, with error {\displaystyle e=q_{d}-q}, define a function {\displaystyle r} as {\displaystyle r={\dot {e}}+\alpha e.} A control-Lyapunov candidate is then {\displaystyle r\mapsto V(r):={\frac {1}{2}}r^{2},} which is positive for all {\displaystyle r\neq 0}. Now taking the time derivative of {\displaystyle V}: {\displaystyle {\dot {V}}=r{\dot {r}}=({\dot {e}}+\alpha e)({\ddot {e}}+\alpha {\dot {e}}).} The goal is to get the time derivative to be {\displaystyle {\dot {V}}=-\kappa V,} which is globally exponentially stable if {\displaystyle V} is globally positive definite (which it is).
Hence we want the rightmost bracket of {\displaystyle {\dot {V}}}, {\displaystyle ({\ddot {e}}+\alpha {\dot {e}})=({\ddot {q}}_{d}-{\ddot {q}}+\alpha {\dot {e}}),} to fulfill the requirement {\displaystyle ({\ddot {q}}_{d}-{\ddot {q}}+\alpha {\dot {e}})=-{\frac {\kappa }{2}}({\dot {e}}+\alpha e),} which upon substitution of the dynamics, {\displaystyle {\ddot {q}}}, gives {\displaystyle \left({\ddot {q}}_{d}-{\frac {u-K_{0}q-K_{1}q^{3}-b{\dot {q}}}{m(1+q^{2})}}+\alpha {\dot {e}}\right)=-{\frac {\kappa }{2}}({\dot {e}}+\alpha e).} Solving for {\displaystyle u} yields the control law {\displaystyle u=m(1+q^{2})\left({\ddot {q}}_{d}+\alpha {\dot {e}}+{\frac {\kappa }{2}}r\right)+K_{0}q+K_{1}q^{3}+b{\dot {q}},} with {\displaystyle \kappa } and {\displaystyle \alpha }, both greater than zero, as tunable parameters. This control law will guarantee global exponential stability since, upon substitution into the time derivative, it yields, as expected, {\displaystyle {\dot {V}}=-\kappa V,} which is a linear first-order differential equation with solution {\displaystyle V=V(0)\exp(-\kappa t).} And hence the error and error rate, remembering that {\displaystyle V={\frac {1}{2}}({\dot {e}}+\alpha e)^{2}}, exponentially decay to zero. If you wish to tune a particular response from this, it is necessary to substitute back into the solution we derived for {\displaystyle V} and solve for {\displaystyle e}. This is left as an exercise for the reader, but the first few steps of the solution are: {\displaystyle r{\dot {r}}=-{\frac {\kappa }{2}}r^{2},} {\displaystyle {\dot {r}}=-{\frac {\kappa }{2}}r,} {\displaystyle r=r(0)\exp \left(-{\frac {\kappa }{2}}t\right),} {\displaystyle {\dot {e}}+\alpha e=({\dot {e}}(0)+\alpha e(0))\exp \left(-{\frac {\kappa }{2}}t\right),} which can then be solved using any linear differential equation methods. == References == == See also == Artstein's theorem Lyapunov optimization Drift plus penalty
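The derived control law can be checked in simulation. A minimal sketch with illustrative constants (not from the article) and simple Euler integration, confirming that r = ė + αe decays like r(0)e^(−κt/2):

```python
import math

# Illustrative constants (not from the article).
mass, b, K0, K1 = 1.0, 0.5, 1.0, 0.2
alpha, kappa = 2.0, 4.0
qd = 1.0                          # constant target, so qd_dot = qd_ddot = 0

q, qdot, dt, t_end = 0.0, 0.0, 1e-4, 3.0
r0 = -qdot + alpha * (qd - q)     # r(0) = e_dot(0) + alpha * e(0)
for _ in range(int(t_end / dt)):
    e, edot = qd - q, -qdot
    r = edot + alpha * e
    # u = m(1+q^2)(qdd_d + alpha*edot + (kappa/2) r) + K0 q + K1 q^3 + b qdot,
    # with qdd_d = 0 for a constant target.
    u = mass * (1 + q**2) * (alpha * edot + 0.5 * kappa * r) \
        + K0 * q + K1 * q**3 + b * qdot
    qddot = (u - b * qdot - K0 * q - K1 * q**3) / (mass * (1 + q**2))
    q, qdot = q + dt * qdot, qdot + dt * qddot

print("simulated r(t_end):", -qdot + alpha * (qd - q))
print("predicted decay   :", r0 * math.exp(-0.5 * kappa * t_end))
```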
Wikipedia/Control-Lyapunov_function
Double Diamond is the name of a design process model popularized by the British Design Council in 2005. The process was adapted from the divergence-convergence model proposed in 1996 by Hungarian-American linguist Béla H. Bánáthy. The two diamonds represent a process of exploring an issue more widely or deeply (divergent thinking) and then taking focused action (convergent thinking). It suggests that, as a design method, the design process should have four phases: Discover: Understand the issue rather than merely assuming what it is. This phase involves speaking to and spending time with people who are affected by the issues. Define: With insight gathered from the discovery phase, define the challenge in a different way. Develop: Give different answers to the clearly defined problem, seeking inspiration from elsewhere and co-designing with a range of different people. Deliver: Test different solutions at a small scale. Reject those that will not work and improve the ones that will. To celebrate 20 years of the Double Diamond in 2023, the Design Council released a visual representation under an open license and created a Mural template. The Double Diamond model is useful in design education, and has been adapted to provide additional details for following the model, along with suggesting the iterative nature of design between each diamond. == References ==
Wikipedia/Double_Diamond_(design_process_model)
A conceptual system is a system of abstract concepts, of various kinds. The abstract concepts can range "from numbers, to emotions, and from social roles, to mental states". These abstract concepts are themselves grounded in multiple systems. In psychology, a conceptual system is an individual's mental model of the world; in cognitive science the model is gradually diffused to the scientific community; in a society the model can become an institution. In humans, a conceptual system may be understood as a kind of metaphor for the world. A belief system is composed of beliefs; Jonathan Glover, following Meadows (2008), suggests that tenets of belief, once held, are surprisingly difficult for their holders to reverse, or to unhold, tenet by tenet. Thomas Nagel (1974) identified a thought experiment for non-humans in "What is it like to be a bat?". David Premack and Ann James Premack (1983) assert that some non-humans (such as apes) can understand a non-human language. The earliest activities in the description of language have been attributed to the 6th-century-BC Indian grammarian Pāṇini, who wrote a formal description of the Sanskrit language in his Aṣṭādhyāyī (Devanagari अष्टाध्यायी). Today, modern theories of grammar employ many of the principles that were laid down then. In the formal sciences, formal systems can have an ontological status independent of human thought, cutting across languages. Formal logical systems in a fixed formal language are an object of study. Logical forms can be objects in these formal systems. Abstract rewriting systems can operate on these objects. Axiomatic systems and logic systems build upon axioms and upon logical rules, respectively, for their rewriting actions. Proof assistants are finding acceptance in the mathematical community. Artificial intelligence in machines and systems need not be restricted to hardware, but can confer a relative advantage on the institutions that adopt it and adapt to it. Canonical forms in a suitable format and in a critical mass for acceptance can be monitored, commented upon, adopted, and applied by cooperating institutions in an upward spiral (see best practice). In technology, chiplets are tiny hardware subsystem implementations of SoCs (systems on a chip) which can be interconnected into larger, or more responsive, surroundings. Packaging SoCs into small hardware multi-chip packages allows more effective functions, which confer a competitive advantage in economics, wars, or politics. The thermohaline circulation can occur from the deep oceans to the ocean's surface. But the waters can mix; the thermohaline circulation from the surface of the ocean to the deep ocean occurs only in restricted parts of the world ocean, in a thousand-year cycle. The Wilson Cycle is an explanation of the formation of the Atlantic Ocean; the supercontinent cycles are a theory of the formation of the supercontinent Pangea (335 million years ago) and its predecessor supercontinent Rodinia (1.2 billion years ago to 0.9 billion years ago). == See also == Subcategories of Category:Systems for other such systems Animal cognition Epistemology Ontology System == Notes and references == == Further reading == Lawrence W. Barsalou, "Continuity of the conceptual system across species", Trends in Cognitive Sciences, Vol. 9, Iss. 7, July 2005, pp. 309–311. Harold I. Brown (2006), Conceptual Systems, Routledge, UK, December 2006. George Lakoff, "What is a Conceptual System?", in Willis F. Overton & David Stuart Palermo (eds.), The Nature and Ontogenesis of Meaning, 1994. Thomas Nagel, "What is it like to be a bat?", Philosophical Review, LXXXIII (4): 435–450, October 1974. doi:10.2307/2183914. JSTOR 2183914. Stuart A. Umpleby (1997), "Cybernetics of conceptual systems", Cybernetics & Systems 28 (8), 635–651. == External links == Language and Conceptual Systems, at Berkeley.edu, 2007.
Wikipedia/Conceptual_systems
Cybernetics: Or Control and Communication in the Animal and the Machine is a book written by Norbert Wiener and published in 1948. It is the first public usage of the term "cybernetics" to refer to self-regulating mechanisms. The book laid the theoretical foundation for servomechanisms (whether electrical, mechanical or hydraulic), automatic navigation, analog computing, artificial intelligence, neuroscience, and reliable communications. A second edition with minor changes and two additional chapters was published in 1961. == Reception == The book aroused a considerable amount of public discussion and comment at the time of publication, unusual for a predominantly technical subject. "[A] beautifully written book, lucid, direct, and, despite its complexity, as readable by the layman as the trained scientist, if the former is willing to forego attempts to understand mathematical formulas." "One of the most influential books of the twentieth century, Cybernetics has been acclaimed as one of the 'seminal works' comparable in ultimate importance to Galileo or Malthus or Rousseau or Mill." "Its scope and implications are breathtaking, and leaves the reviewer with the conviction that it is a major contribution to contemporary thought." "Cybernetics... is worthwhile for its historical value alone. But it does much more by inspiring the contemporary roboticist to think broadly and be open to innovative applications." The public interest aroused by this book inspired Wiener to address the sociological and political issues raised in a book targeted at the non-technical reader, resulting in the publication in 1950 of The Human Use of Human Beings. == Table of contents == Introduction 1. Newtonian and Bergsonian Time 2. Groups and Statistical Mechanics 3. Time Series, Information, and Communication 4. Feedback and Oscillation 5. Computing Machines and the Nervous System 6. Gestalt and Universals 7. Cybernetics and Psychopathology 8. Information, Language, and Society === Supplementary chapters in the second edition === 9. On Learning and Self-Reproducing Machines 10. Brain Waves and Self-Organising Systems == Synopsis == === Introduction === Wiener recounts that the origin of the ideas in this book is a ten-year-long series of meetings at the Harvard Medical School where medical scientists and physicians discussed scientific method with mathematicians, physicists and engineers. He details the interdisciplinary nature of his approach and refers to his work with Vannevar Bush and his differential analyzer (a primitive analog computer), as well as his early thoughts on the features and design principles of future digital calculating machines. He traces the origins of cybernetic analysis to the philosophy of Leibniz, citing his work on universal symbolism and a calculus of reasoning. === Newtonian and Bergsonian Time === The theme of this chapter is an exploration of the contrast between time-reversible processes governed by Newtonian mechanics and time-irreversible processes in accordance with the Second Law of Thermodynamics. In the opening section he contrasts the predictable nature of astronomy with the challenges posed in meteorology, anticipating future developments in Chaos theory. He points out that in fact, even in the case of astronomy, tidal forces between the planets introduce a degree of decay over cosmological time spans, and so strictly speaking Newtonian mechanics do not precisely apply. 
=== Groups and Statistical Mechanics === This chapter opens with a review of the – entirely independent and apparently unrelated – work of two scientists in the early 20th century: Willard Gibbs and Henri Lebesgue. Gibbs was a physicist working on a statistical approach to Newtonian dynamics and thermodynamics, and Lebesgue was a pure mathematician working on the theory of trigonometric series. Wiener suggests that the questions asked by Gibbs find their answer in the work of Lebesgue. Wiener claims that the Lebesgue integral had unexpected but important implications in establishing the validity of Gibbs' work on the foundations of statistical mechanics. The notions of average and measure in the sense established by Lebesgue were urgently needed to provide a rigorous proof of Gibbs' ergodic hypothesis. The concept of entropy in statistical mechanics is developed, and its relationship to the way the concept is used in thermodynamics. By an analysis of the thought experiment Maxwell's demon, he relates the concept of entropy to that of information. === Time Series, Information, and Communication === This is one of the more mathematically intensive chapters in the book. It deals with the transmission or recording of a varying analog signal as a sequence of numerical samples, and lays much of the groundwork for the development of digital audio and telemetry over the past six decades. It also examines the relationship between bandwidth, noise, and information capacity, as developed by Wiener in collaboration with Claude Shannon. This chapter and the next one form the core of the foundational principles for the developments of automation systems, digital communications and data processing which have taken place over the decades since the book was published. === Feedback and Oscillation === This chapter lays down the foundations for the mathematical treatment of negative feedback in automated control systems. The opening passage illustrates the effect of faulty feedback mechanisms by the example of patients with various forms of ataxia. He then discusses railway signalling, the operation of a thermostat, and a steam engine centrifugal governor. The rest of the chapter is mostly taken up with the development of a mathematical formulation of the operation of the principles underlying all of these processes. More complex systems are then discussed such as automated navigation, and the control of non-linear situations such as steering on an icy road. He concludes with a reference to the homeostatic processes in living organisms. === Computing Machines and the Nervous System === This chapter opens with a discussion of the relative merits of analog computers and digital computers (which Wiener referred to as analogy machines and numerical machines), and maintains that digital machines will be more accurate, electronic implementations will be superior to mechanical or electro-mechanical ones, and that the binary system is preferable to other numerical scales. After discussing the need to store both the data to be processed and the algorithms which are employed for processing that data, and the challenges involved in implementing a suitable memory system, he goes on to draw the parallels between binary digital computers and the nerve structures in organisms. 
Among the mechanisms that he speculated for implementing a computer memory system was "a large array of small condensers [i.e. capacitors in today's terminology] which could be rapidly charged or discharged", thus prefiguring the essential technology of modern dynamic random-access memory chips. Virtually all of the principles which Wiener enumerated as being desirable characteristics of calculating and data processing machines have been adopted in the design of digital computers, from the early mainframes of the 1950s to the latest microchips. === Gestalt and Universals === This brief chapter is a philosophical enquiry into the relationship between the physical events in the central nervous system and the subjective experiences of the individual. It concentrates principally on the processes whereby nervous signals from the retina are transformed into a representation of the visual field. It also explores the various feedback loops involved in the operation of the eyes: the homeostatic operation of the iris to control light levels, the adjustment of the lens to bring objects into focus, and the complex set of reflex movements to bring an object of attention into the detailed vision area of the fovea. The chapter concludes with an outline of the challenges presented by attempts to implement a reading machine for the blind. === Cybernetics and Psychopathology === Wiener opens this chapter with the disclaimers that he is neither a psychopathologist nor a psychiatrist, and that he is not asserting that mental problems are failings of the brain to operate as a computing machine. However, he suggests that there might be fruitful lines of enquiry opened by considering the parallels between the brain and a computer. (He employed the archaic-sounding phrase "computing machine", because at the time of writing the word "computer" referred to a person who was employed to perform routine calculations). He then discusses the concept of 'redundancy' in the sense of having two or three computing mechanisms operating simultaneously on the same problem, so that errors may be recognised and corrected. === Information, Language, and Society === Starting with an outline of the hierarchical nature of living organisms, and a discussion of the structure and organisation of colonies of symbiotic organisms, such as the Portuguese Man o' War, this chapter explores the parallels with the structure of human societies, and the challenges faced as the scale and complexity of society increase. The chapter closes with speculation about the possibility of constructing a chess-playing machine, and concludes that it would be conceivable to build a machine capable of a standard of play better than most human players but not at expert level. Such a possibility seemed entirely fanciful to most commentators in the 1940s, bearing in mind the state of computing technology at the time, although events have turned out to vindicate the prediction – and even to exceed it. === On Learning and Self-Reproducing Machines === Starting with an examination of the learning process in organisms, Wiener expands the discussion to John von Neumann's theory of games, and the application to military situations. He then speculates about the manner in which a chess-playing computer could be programmed to analyse its past performances and improve its performance. This proceeds to a discussion of the evolution of conflict, as in the examples of matador and bull, or mongoose and cobra, or between opponents in a tennis game.
He discusses various stories such as The Sorcerer's Apprentice, which illustrate the view that the literal-minded reliance on "magical" processes may turn out to be counter-productive or catastrophic. The context of this discussion was to draw attention to the need for caution in delegating to machines the responsibility for warfare strategy in an age of nuclear weapons. The chapter concludes with a discussion of the possibility of self-replicating machines and the work of Professor Dennis Gabor in this area. === Brain Waves and Self-Organising Systems === This chapter opens with a discussion of the mechanism of evolution by natural selection, which he refers to as "phylogenetic learning", since it is driven by a feedback mechanism caused by the success or otherwise in surviving and reproducing; and modifications of behaviour over a lifetime in response to experience, which he calls "ontogenetic learning". He suggests that both processes involve non-linear feedback, and speculates that the learning process is correlated with changes in patterns of the rhythms of the waves of electrical activity that can be observed on an electroencephalograph. After a discussion of the technical limitations of earlier designs of such equipment, he suggests that the field will become more fruitful as more sensitive interfaces and higher performance amplifiers are developed and the readings are stored in digital form for numerical analysis, rather than recorded by pen galvanometers in real time, which was the only available technique at the time of writing. He then develops suggestions for a mathematical treatment of the waveforms by Fourier analysis, and draws a parallel with the processing of the results of the Michelson–Morley experiment which confirmed the constancy of the velocity of light, which in turn led Albert Einstein to develop the theory of Special Relativity. As with much of the other material in this book, these pointers have been both prophetic of future developments and suggestive of fruitful lines of research and enquiry. == Influence == The book provided a foundation for research into electronic engineering, computing (both analog and digital), servomechanisms, automation, telecommunications and neuroscience. It also created widespread public debates on the technical, philosophical and sociological issues it discussed. And it inspired a wide range of books on various subjects peripherally related to its content. The book introduced the word 'cybernetics' itself into public discourse. Maxwell Maltz titled his pioneering self-development work "Psycho-Cybernetics" in reference to the process of steering oneself towards a pre-defined goal by making corrections to behaviour. Much of the personal development industry and the Human potential movement is said to be derived from Maltz's work. Cybernetics became a surprise bestseller and was widely read beyond the technical audience that Wiener had expected. In response he wrote The Human Use of Human Beings in which he further explored the social and psychological implications in a format more suited to the non-technical reader. In 1954, Marie Neurath produced a children's book Machines which seem to Think, which introduced the concepts of cybernetics, control systems and negative feedback in an accessible format. == References ==
Wikipedia/Cybernetics:_Or_the_Control_and_Communication_in_the_Animal_and_the_Machine
Thinking in Systems is an introduction to systems thinking by Donella Meadows, the main author of the 1972 report The Limits to Growth, and describes some of the ideas behind the analysis used in that report. The book was originally written as a draft in 1993, and versions of this draft circulated informally within the system dynamics community for years. After the death of Meadows in 2001, the book was restructured by her colleagues at the Sustainability Institute, edited by Diana Wright, and finally published in 2008. The work is heavily influenced by the work of Jay Forrester and the MIT System Dynamics Group, whose World3 model formed the basis of the analysis in Limits to Growth. In addition, Meadows drew on a wide range of other sources for examples and illustrations, including ecology, management, farming and demographics, as well as taking several examples from one week's reading of the International Herald Tribune in 1992. == Influence of Thinking in Systems == The Post Growth Institute has ranked Donella Meadows 3rd in its list of the top 100 sustainability thinkers. Thinking in Systems is frequently cited as a key influence by programmers and computer scientists, as well as by people working in other disciplines. == Key Concepts == In Meadows' own words: "This book is about that different way of seeing and thinking. It is intended for people who may be wary of the word 'systems' and the field of systems analysis, even though they may have been doing systems thinking all their lives. I have kept the discussion nontechnical because I want to show what a long way you can go toward understanding systems without turning to mathematics or computers." The central concept is that system behaviors are not caused by exogenous events, but rather are intrinsic to the system itself. The connections and feedback loops within a system dictate the range of behaviors the system is capable of exhibiting. Therefore, it is more important to understand the internal structure of the system than to focus on specific events that perturb it. The main part of the book walks through basic systems concepts, types of systems, and the range of behaviors they exhibit. In particular, it focuses on the roles of feedback loops and the build-up of "stocks" in the system, which can interact in highly complex and unexpected ways. The final section of the book explores how to improve the effectiveness of interventions in systems. A range of common errors or policy traps is discussed, such as "the tragedy of the commons" and "rule beating", that prevent effective intervention or lead to good intentions causing greater damage. By contrast, the key to successful intervention is identifying the leverage points where relatively minor alterations can effect a substantial change in a system's behavior. This section expands on an influential essay, "Leverage Points: Places to Intervene in a System", that Meadows originally published in Whole Earth in 1997. == See also == Systems thinking == References ==
Wikipedia/Thinking_In_Systems:_A_Primer
In logic and computer science, specifically automated reasoning, unification is an algorithmic process of solving equations between symbolic expressions, each of the form Left-hand side = Right-hand side. For example, using x,y,z as variables, and taking f to be an uninterpreted function, the singleton equation set { f(1,y) = f(x,2) } is a syntactic first-order unification problem that has the substitution { x ↦ 1, y ↦ 2 } as its only solution. Conventions differ on what values variables may assume and which expressions are considered equivalent. In first-order syntactic unification, variables range over first-order terms and equivalence is syntactic. This version of unification has a unique "best" answer and is used in logic programming and programming language type system implementation, especially in Hindley–Milner based type inference algorithms. In higher-order unification, possibly restricted to higher-order pattern unification, terms may include lambda expressions, and equivalence is up to beta-reduction. This version is used in proof assistants and higher-order logic programming, for example Isabelle, Twelf, and lambdaProlog. Finally, in semantic unification or E-unification, equality is subject to background knowledge and variables range over a variety of domains. This version is used in SMT solvers, term rewriting algorithms, and cryptographic protocol analysis. == Formal definition == A unification problem is a finite set E={ l1 ≐ r1, ..., ln ≐ rn } of equations to solve, where li, ri are in the set T {\displaystyle T} of terms or expressions. Depending on which expressions or terms are allowed to occur in an equation set or unification problem, and which expressions are considered equal, several frameworks of unification are distinguished. If higher-order variables, that is, variables representing functions, are allowed in an expression, the process is called higher-order unification, otherwise first-order unification. If a solution is required to make both sides of each equation literally equal, the process is called syntactic or free unification, otherwise semantic or equational unification, or E-unification, or unification modulo theory. If the right side of each equation is closed (no free variables), the problem is called (pattern) matching. The left side (with variables) of each equation is called the pattern. === Prerequisites === Formally, a unification approach presupposes An infinite set V {\displaystyle V} of variables. For higher-order unification, it is convenient to choose V {\displaystyle V} disjoint from the set of lambda-term bound variables. A set T {\displaystyle T} of terms such that V ⊆ T {\displaystyle V\subseteq T} . For first-order unification, T {\displaystyle T} is usually the set of first-order terms (terms built from variable and function symbols). For higher-order unification T {\displaystyle T} consists of first-order terms and lambda terms (terms containing some higher-order variables). A mapping vars : T → {\displaystyle {\text{vars}}\colon T\rightarrow } P {\displaystyle \mathbb {P} } ( V ) {\displaystyle (V)} , assigning to each term t {\displaystyle t} the set vars ( t ) ⊊ V {\displaystyle {\text{vars}}(t)\subsetneq V} of free variables occurring in t {\displaystyle t} . A theory or equivalence relation ≡ {\displaystyle \equiv } on T {\displaystyle T} , indicating which terms are considered equal. 
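The prerequisites above translate directly into a small data model. The following Python sketch is ours, not part of the formal definition, and all names in it are invented; it represents first-order terms and computes the mapping vars:

```python
from dataclasses import dataclass
from typing import Tuple, Union

# A variable is identified by its name; a compound term has a function
# symbol and a tuple of argument terms. Constants are simply function
# symbols with zero arguments.

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Fn:
    symbol: str
    args: Tuple["Term", ...] = ()

Term = Union[Var, Fn]

def vars_of(t: Term) -> set:
    """The mapping vars: the set of variables occurring in term t."""
    if isinstance(t, Var):
        return {t}
    return set().union(*(vars_of(a) for a in t.args)) if t.args else set()

# Example: the term f(1, y) contains exactly the variable y.
y = Var("y")
assert vars_of(Fn("f", (Fn("1"), y))) == {y}
```

Lambda-bound variables for higher-order unification are not modelled here; the sketch covers only the first-order case.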
For first-order E-unification, ≡ {\displaystyle \equiv } reflects the background knowledge about certain function symbols; for example, if ⊕ {\displaystyle \oplus } is considered commutative, t ≡ u {\displaystyle t\equiv u} if u {\displaystyle u} results from t {\displaystyle t} by swapping the arguments of ⊕ {\displaystyle \oplus } at some (possibly all) occurrences. In the most typical case, where there is no background knowledge at all, only literally, or syntactically, identical terms are considered equal. In this case, ≡ is called the free theory (because it is a free object), the empty theory (because the set of equational sentences, or the background knowledge, is empty), the theory of uninterpreted functions (because unification is done on uninterpreted terms), or the theory of constructors (because all function symbols just build up data terms, rather than operating on them). For higher-order unification, usually t ≡ u {\displaystyle t\equiv u} if t {\displaystyle t} and u {\displaystyle u} are alpha equivalent. As an example of how the set of terms and theory affects the set of solutions, the syntactic first-order unification problem { y = cons(2,y) } has no solution over the set of finite terms. However, it has the single solution { y ↦ cons(2,cons(2,cons(2,...))) } over the set of infinite tree terms. Similarly, the semantic first-order unification problem { a⋅x = x⋅a } has each substitution of the form { x ↦ a⋅...⋅a } as a solution in a semigroup, i.e. if (⋅) is considered associative. But the same problem, viewed in an abelian group, where (⋅) is considered also commutative, has any substitution at all as a solution. As an example of higher-order unification, the singleton set { a = y(x) } is a syntactic second-order unification problem, since y is a function variable. One solution is { x ↦ a, y ↦ (identity function) }; another one is { y ↦ (constant function mapping each value to a), x ↦ (any value) }. === Substitution === A substitution is a mapping σ : V → T {\displaystyle \sigma :V\rightarrow T} from variables to terms; the notation { x 1 ↦ t 1 , . . . , x k ↦ t k } {\displaystyle \{x_{1}\mapsto t_{1},...,x_{k}\mapsto t_{k}\}} refers to a substitution mapping each variable x i {\displaystyle x_{i}} to the term t i {\displaystyle t_{i}} , for i = 1 , . . . , k {\displaystyle i=1,...,k} , and every other variable to itself; the x i {\displaystyle x_{i}} must be pairwise distinct. Applying that substitution to a term t {\displaystyle t} is written in postfix notation as t { x 1 ↦ t 1 , . . . , x k ↦ t k } {\displaystyle t\{x_{1}\mapsto t_{1},...,x_{k}\mapsto t_{k}\}} ; it means to (simultaneously) replace every occurrence of each variable x i {\displaystyle x_{i}} in the term t {\displaystyle t} by t i {\displaystyle t_{i}} . The result t τ {\displaystyle t\tau } of applying a substitution τ {\displaystyle \tau } to a term t {\displaystyle t} is called an instance of that term t {\displaystyle t} . As a first-order example, applying the substitution { x ↦ h(a,y), z ↦ b } to the term f(x,a,g(z),y) yields the instance f(h(a,y),a,g(b),y). === Generalization, specialization === If a term t {\displaystyle t} has an instance equivalent to a term u {\displaystyle u} , that is, if t σ ≡ u {\displaystyle t\sigma \equiv u} for some substitution σ {\displaystyle \sigma } , then t {\displaystyle t} is called more general than u {\displaystyle u} , and u {\displaystyle u} is called more special than, or subsumed by, t {\displaystyle t} .
For example, x ⊕ a {\displaystyle x\oplus a} is more general than a ⊕ b {\displaystyle a\oplus b} if ⊕ is commutative, since then ( x ⊕ a ) { x ↦ b } = b ⊕ a ≡ a ⊕ b {\displaystyle (x\oplus a)\{x\mapsto b\}=b\oplus a\equiv a\oplus b} . If ≡ is literal (syntactic) identity of terms, a term may be both more general and more special than another one only if both terms differ just in their variable names, not in their syntactic structure; such terms are called variants, or renamings of each other. For example, f ( x 1 , a , g ( z 1 ) , y 1 ) {\displaystyle f(x_{1},a,g(z_{1}),y_{1})} is a variant of f ( x 2 , a , g ( z 2 ) , y 2 ) {\displaystyle f(x_{2},a,g(z_{2}),y_{2})} , since f ( x 1 , a , g ( z 1 ) , y 1 ) { x 1 ↦ x 2 , y 1 ↦ y 2 , z 1 ↦ z 2 } = f ( x 2 , a , g ( z 2 ) , y 2 ) {\displaystyle f(x_{1},a,g(z_{1}),y_{1})\{x_{1}\mapsto x_{2},y_{1}\mapsto y_{2},z_{1}\mapsto z_{2}\}=f(x_{2},a,g(z_{2}),y_{2})} and f ( x 2 , a , g ( z 2 ) , y 2 ) { x 2 ↦ x 1 , y 2 ↦ y 1 , z 2 ↦ z 1 } = f ( x 1 , a , g ( z 1 ) , y 1 ) . {\displaystyle f(x_{2},a,g(z_{2}),y_{2})\{x_{2}\mapsto x_{1},y_{2}\mapsto y_{1},z_{2}\mapsto z_{1}\}=f(x_{1},a,g(z_{1}),y_{1}).} However, f ( x 1 , a , g ( z 1 ) , y 1 ) {\displaystyle f(x_{1},a,g(z_{1}),y_{1})} is not a variant of f ( x 2 , a , g ( x 2 ) , x 2 ) {\displaystyle f(x_{2},a,g(x_{2}),x_{2})} , since no substitution can transform the latter term into the former one. The latter term is therefore properly more special than the former one. For arbitrary ≡ {\displaystyle \equiv } , a term may be both more general and more special than a structurally different term. For example, if ⊕ is idempotent, that is, if always x ⊕ x ≡ x {\displaystyle x\oplus x\equiv x} , then the term x ⊕ y {\displaystyle x\oplus y} is more general than z {\displaystyle z} , and vice versa, although x ⊕ y {\displaystyle x\oplus y} and z {\displaystyle z} are of different structure. A substitution σ {\displaystyle \sigma } is more special than, or subsumed by, a substitution τ {\displaystyle \tau } if t σ {\displaystyle t\sigma } is subsumed by t τ {\displaystyle t\tau } for each term t {\displaystyle t} . We also say that τ {\displaystyle \tau } is more general than σ {\displaystyle \sigma } . More formally, take a nonempty infinite set V {\displaystyle V} of auxiliary variables such that no equation l i ≐ r i {\displaystyle l_{i}\doteq r_{i}} in the unification problem contains variables from V {\displaystyle V} . Then a substitution σ {\displaystyle \sigma } is subsumed by another substitution τ {\displaystyle \tau } if there is a substitution θ {\displaystyle \theta } such that for all terms X ∉ V {\displaystyle X\notin V} , X σ ≡ X τ θ {\displaystyle X\sigma \equiv X\tau \theta } . For instance { x ↦ a , y ↦ a } {\displaystyle \{x\mapsto a,y\mapsto a\}} is subsumed by τ = { x ↦ y } {\displaystyle \tau =\{x\mapsto y\}} , using θ = { y ↦ a } {\displaystyle \theta =\{y\mapsto a\}} , but σ = { x ↦ a } {\displaystyle \sigma =\{x\mapsto a\}} is not subsumed by τ = { x ↦ y } {\displaystyle \tau =\{x\mapsto y\}} , as f ( x , y ) σ = f ( a , y ) {\displaystyle f(x,y)\sigma =f(a,y)} is not an instance of f ( x , y ) τ = f ( y , y ) {\displaystyle f(x,y)\tau =f(y,y)} . === Solution set === A substitution σ is a solution of the unification problem E if liσ ≡ riσ for i = 1 , . . . , n {\displaystyle i=1,...,n} . Such a substitution is also called a unifier of E. 
For example, if ⊕ is associative, the unification problem { x ⊕ a ≐ a ⊕ x } has the solutions {x ↦ a}, {x ↦ a ⊕ a}, {x ↦ a ⊕ a ⊕ a}, etc., while the problem { x ⊕ a ≐ a } has no solution. For a given unification problem E, a set S of unifiers is called complete if each solution substitution is subsumed by some substitution in S. A complete substitution set always exists (e.g. the set of all solutions), but in some frameworks (such as unrestricted higher-order unification) the problem of determining whether any solution exists (i.e., whether the complete substitution set is nonempty) is undecidable. The set S is called minimal if none of its members subsumes another one. Depending on the framework, a complete and minimal substitution set may have zero, one, finitely many, or infinitely many members, or may not exist at all due to an infinite chain of redundant members. Thus, in general, unification algorithms compute a finite approximation of the complete set, which may or may not be minimal, although most algorithms avoid redundant unifiers when possible. For first-order syntactical unification, Martelli and Montanari gave an algorithm that reports unsolvability or computes a single unifier that by itself forms a complete and minimal substitution set, called the most general unifier. == Syntactic unification of first-order terms == Syntactic unification of first-order terms is the most widely used unification framework. It is based on T being the set of first-order terms (over some given set V of variables, C of constants and Fn of n-ary function symbols) and on ≡ being syntactic equality. In this framework, each solvable unification problem {l1 ≐ r1, ..., ln ≐ rn} has a complete, and obviously minimal, singleton solution set {σ}. Its member σ is called the most general unifier (mgu) of the problem. The terms on the left and the right hand side of each potential equation become syntactically equal when the mgu is applied, i.e. l1σ = r1σ ∧ ... ∧ lnσ = rnσ. Any unifier of the problem is subsumed by the mgu σ. The mgu is unique up to variants: if S1 and S2 are both complete and minimal solution sets of the same syntactical unification problem, then S1 = { σ1 } and S2 = { σ2 } for some substitutions σ1 and σ2, and xσ1 is a variant of xσ2 for each variable x occurring in the problem. For example, the unification problem { x ≐ z, y ≐ f(x) } has the unifier { x ↦ z, y ↦ f(z) }, because applying it makes both sides of each equation syntactically equal: x{ x ↦ z, y ↦ f(z) } = z = z{ x ↦ z, y ↦ f(z) }, and y{ x ↦ z, y ↦ f(z) } = f(z) = f(x){ x ↦ z, y ↦ f(z) }. This is also the most general unifier. Other unifiers for the same problem are e.g. { x ↦ f(x1), y ↦ f(f(x1)), z ↦ f(x1) }, { x ↦ f(f(x1)), y ↦ f(f(f(x1))), z ↦ f(f(x1)) }, and so on; there are infinitely many similar unifiers. As another example, the problem g(x,x) ≐ f(y) has no solution with respect to ≡ being literal identity, since any substitution applied to the left and right hand side will keep the outermost g and f, respectively, and terms with different outermost function symbols are syntactically different. === Unification algorithms === Jacques Herbrand discussed the basic concepts of unification and sketched an algorithm in 1930, but most authors attribute the first unification algorithm to John Alan Robinson. Robinson's algorithm had worst-case exponential behavior in both time and space. Numerous authors have proposed more efficient unification algorithms.
Algorithms with worst-case linear-time behavior were discovered independently by Martelli & Montanari (1976) and Paterson & Wegman (1976). Baader & Snyder (2001) use a technique similar to Paterson–Wegman's, hence are also linear, but, like most linear-time unification algorithms, slower than the Robinson version on small inputs due to the overhead of preprocessing the inputs and postprocessing the output, such as construction of a DAG representation. de Champeaux (2022) is also of linear complexity in the input size but is competitive with the Robinson algorithm on small inputs. The speedup is obtained by using an object-oriented representation of the predicate calculus that avoids the need for pre- and post-processing, instead making variable objects responsible for creating a substitution and for dealing with aliasing. de Champeaux claims that the ability to add functionality to predicate calculus represented as programmatic objects provides opportunities for optimizing other logic operations as well. The following algorithm is commonly presented and originates from Martelli & Montanari (1982). Given a finite set G = { s 1 ≐ t 1 , . . . , s n ≐ t n } {\displaystyle G=\{s_{1}\doteq t_{1},...,s_{n}\doteq t_{n}\}} of potential equations, the algorithm applies rules to transform it to an equivalent set of equations of the form { x1 ≐ u1, ..., xm ≐ um } where x1, ..., xm are distinct variables and u1, ..., um are terms containing none of the xi. A set of this form can be read as a substitution. The rules are: delete: G ∪ { t ≐ t } ⇒ G; decompose: G ∪ { f(s1,...,sk) ≐ f(t1,...,tk) } ⇒ G ∪ { s1 ≐ t1, ..., sk ≐ tk }; conflict: G ∪ { f(s1,...,sk) ≐ g(t1,...,tm) } ⇒ ⊥ if f ≠ g or k ≠ m; swap: G ∪ { f(s1,...,sk) ≐ x } ⇒ G ∪ { x ≐ f(s1,...,sk) }; eliminate: G ∪ { x ≐ t } ⇒ G{x ↦ t} ∪ { x ≐ t } if x ∈ vars(G) and x ∉ vars(t); check: G ∪ { x ≐ f(s1,...,sk) } ⇒ ⊥ if x ∈ vars(f(s1,...,sk)). If there is no solution, the algorithm terminates with ⊥; other authors use "Ω" or "fail" in that case. The operation of substituting all occurrences of variable x in problem G with term t is denoted G{x ↦ t}. For simplicity, constant symbols are regarded as function symbols having zero arguments. ==== Occurs check ==== An attempt to unify a variable x with a term containing x as a strict subterm, x ≐ f(..., x, ...), would lead to an infinite term as solution for x, since x would occur as a subterm of itself. In the set of (finite) first-order terms as defined above, the equation x ≐ f(..., x, ...) has no solution; hence the eliminate rule may only be applied if x ∉ vars(t). Since that additional check, called the occurs check, slows down the algorithm, it is omitted e.g. in most Prolog systems. From a theoretical point of view, omitting the check amounts to solving equations over infinite trees; see the section on unification of infinite terms below. ==== Proof of termination ==== For the proof of termination of the algorithm consider a triple ⟨ n v a r , n l h s , n e q n ⟩ {\displaystyle \langle n_{var},n_{lhs},n_{eqn}\rangle } where nvar is the number of variables that occur more than once in the equation set, nlhs is the number of function symbols and constants on the left hand sides of potential equations, and neqn is the number of equations. When rule eliminate is applied, nvar decreases, since x is eliminated from G and kept only in { x ≐ t }. Applying any other rule can never increase nvar again. When rule decompose, conflict, or swap is applied, nlhs decreases, since at least the left hand side's outermost f disappears. Applying any of the remaining rules delete or check can't increase nlhs, but decreases neqn. Hence, any rule application decreases the triple ⟨ n v a r , n l h s , n e q n ⟩ {\displaystyle \langle n_{var},n_{lhs},n_{eqn}\rangle } with respect to the lexicographical order, which is possible only a finite number of times.
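The rules above fit in a few dozen lines of code. The following sketch is ours; it reuses the Var/Fn representation from the earlier sketch and works eagerly on one equation at a time, rather than on the multiset G as the rule formulation does. It returns a most general unifier as a dict, or None on conflict or occurs-check failure:

```python
# Sketch (ours) of syntactic first-order unification with occurs check.

def apply(subst, t):
    """Apply a substitution to a term, following chains of bindings."""
    if isinstance(t, Var):
        return apply(subst, subst[t]) if t in subst else t
    return Fn(t.symbol, tuple(apply(subst, a) for a in t.args))

def occurs(x, t, subst):
    """Occurs check: does variable x occur in t under the current bindings?"""
    t = apply(subst, t)
    if isinstance(t, Var):
        return t == x
    return any(occurs(x, a, subst) for a in t.args)

def unify(eqs, subst=None):
    """Solve a list of equations (s, t); the cases mirror the rules
    delete, check, eliminate, swap, conflict, and decompose."""
    subst = {} if subst is None else subst
    if not eqs:
        return subst
    (s, t), rest = eqs[0], eqs[1:]
    s, t = apply(subst, s), apply(subst, t)
    if s == t:                                             # delete
        return unify(rest, subst)
    if isinstance(s, Var):
        if occurs(s, t, subst):                            # check: fail
            return None
        return unify(rest, {**subst, s: t})                # eliminate
    if isinstance(t, Var):                                 # swap
        return unify([(t, s)] + rest, subst)
    if s.symbol != t.symbol or len(s.args) != len(t.args):
        return None                                        # conflict
    return unify(list(zip(s.args, t.args)) + rest, subst)  # decompose

# The introductory problem { f(1,y) = f(x,2) } yields { x -> 1, y -> 2 }.
x, y = Var("x"), Var("y")
one, two = Fn("1"), Fn("2")
assert unify([(Fn("f", (one, y)), Fn("f", (x, two)))]) == {x: one, y: two}
```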
Conor McBride observes that "by expressing the structure which unification exploits" in a dependently typed language such as Epigram, Robinson's unification algorithm can be made recursive on the number of variables, in which case a separate termination proof becomes unnecessary. === Examples of syntactic unification of first-order terms === In the Prolog syntactical convention, a symbol starting with an upper-case letter is a variable name; a symbol that starts with a lowercase letter is a function symbol; the comma is used as the logical and operator. For mathematical notation, x,y,z are used as variables, f,g as function symbols, and a,b as constants. The most general unifier of a syntactic first-order unification problem of size n may have a size exponential in n. For example, the problem ( ( ( a ∗ z ) ∗ y ) ∗ x ) ∗ w ≐ w ∗ ( x ∗ ( y ∗ ( z ∗ a ) ) ) {\displaystyle (((a*z)*y)*x)*w\doteq w*(x*(y*(z*a)))} has the most general unifier { z ↦ a , y ↦ a ∗ a , x ↦ ( a ∗ a ) ∗ ( a ∗ a ) , w ↦ ( ( a ∗ a ) ∗ ( a ∗ a ) ) ∗ ( ( a ∗ a ) ∗ ( a ∗ a ) ) } {\displaystyle \{z\mapsto a,y\mapsto a*a,x\mapsto (a*a)*(a*a),w\mapsto ((a*a)*(a*a))*((a*a)*(a*a))\}} . In order to avoid the exponential time complexity caused by such blow-up, advanced unification algorithms work on directed acyclic graphs (dags) rather than trees. === Application: unification in logic programming === The concept of unification is one of the main ideas behind logic programming. Specifically, unification is a basic building block of resolution, a rule of inference for determining formula satisfiability. In Prolog, the equality symbol = implies first-order syntactic unification. It represents the mechanism of binding the contents of variables and can be viewed as a kind of one-time assignment. In Prolog: A variable can be unified with a constant, a term, or another variable, thus effectively becoming its alias. In many modern Prolog dialects and in first-order logic, a variable cannot be unified with a term that contains it; this is the so-called occurs check. Two constants can be unified only if they are identical. Similarly, a term can be unified with another term if the top function symbols and arities of the terms are identical and if the parameters can be unified simultaneously. Note that this is a recursive behavior. Most operations, including +, -, *, /, are not evaluated by =. So, for example, 1+2 = 3 is not satisfiable, because the two sides are syntactically different. The use of integer arithmetic constraints #= introduces a form of E-unification for which these operations are interpreted and evaluated. === Application: type inference === Type inference algorithms are typically based on unification, particularly Hindley–Milner type inference, which is used by the functional languages Haskell and ML. For example, when attempting to infer the type of the Haskell expression True : ['x'], the compiler will use the type a -> [a] -> [a] of the list construction function (:), the type Bool of the first argument True, and the type [Char] of the second argument ['x']. The polymorphic type variable a will be unified with Bool, and the second argument [a] will be unified with [Char]. a cannot be both Bool and Char at the same time, therefore this expression is not correctly typed. As for Prolog, an algorithm for type inference can be given: Any type variable unifies with any type expression, and is instantiated to that expression. A specific theory might restrict this rule with an occurs check.
Two type constants unify only if they are the same type. Two type constructions unify only if they are applications of the same type constructor and all of their component types recursively unify. === Application: Feature Structure Unification === Unification has been used in different research areas of computational linguistics. == Order-sorted unification == Order-sorted logic allows one to assign a sort, or type, to each term, and to declare a sort s1 a subsort of another sort s2, commonly written as s1 ⊆ s2. For example, when reasoning about biological creatures, it is useful to declare a sort dog to be a subsort of a sort animal. Wherever a term of some sort s is required, a term of any subsort of s may be supplied instead. For example, assuming a function declaration mother: animal → animal, and a constant declaration lassie: dog, the term mother(lassie) is perfectly valid and has the sort animal. In order to supply the information that the mother of a dog is a dog in turn, another declaration mother: dog → dog may be issued; this is called function overloading, similar to overloading in programming languages. Walther gave a unification algorithm for terms in order-sorted logic, requiring for any two declared sorts s1, s2 their intersection s1 ∩ s2 to be declared, too: if x1 and x2 are variables of sorts s1 and s2, respectively, the equation x1 ≐ x2 has the solution { x1 ↦ x, x2 ↦ x }, where x: s1 ∩ s2. After incorporating this algorithm into a clause-based automated theorem prover, he could solve a benchmark problem by translating it into order-sorted logic, thereby reducing it by an order of magnitude, as many unary predicates turned into sorts. Smolka generalized order-sorted logic to allow for parametric polymorphism. In his framework, subsort declarations are propagated to complex type expressions. As a programming example, a parametric sort list(X) may be declared (with X being a type parameter as in a C++ template), and from a subsort declaration int ⊆ float the relation list(int) ⊆ list(float) is automatically inferred, meaning that each list of integers is also a list of floats. Schmidt-Schauß generalized order-sorted logic to allow for term declarations. As an example, assuming subsort declarations even ⊆ int and odd ⊆ int, a term declaration like ∀ i : int. (i + i) : even allows one to declare a property of integer addition that could not be expressed by ordinary overloading. == Unification of infinite terms == Background on infinite trees:
B. Courcelle (1983). "Fundamental Properties of Infinite Trees". Theoret. Comput. Sci. 25 (2): 95–169. doi:10.1016/0304-3975(83)90059-2.
Michael J. Maher (Jul 1988). "Complete Axiomatizations of the Algebras of Finite, Rational and Infinite Trees". Proc. IEEE 3rd Annual Symp. on Logic in Computer Science, Edinburgh. pp. 348–357.
Joxan Jaffar; Peter J. Stuckey (1986). "Semantics of Infinite Tree Logic Programming". Theoretical Computer Science. 46: 141–158. doi:10.1016/0304-3975(86)90027-7.
Unification algorithm, Prolog II:
A. Colmerauer (1982). K.L. Clark; S.-A. Tarnlund (eds.). Prolog and Infinite Trees. Academic Press.
Alain Colmerauer (1984). "Equations and Inequations on Finite and Infinite Trees". In ICOT (ed.). Proc. Int. Conf. on Fifth Generation Computer Systems. pp. 85–99.
Applications:
Francis Giannesini; Jacques Cohen (1984). "Parser Generation and Grammar Manipulation using Prolog's Infinite Trees". Journal of Logic Programming. 1 (3): 253–265. doi:10.1016/0743-1066(84)90013-X.
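The works listed above concern unification over infinite (rational) trees, where the occurs check is dropped. As a rough illustration of the idea (ours, not an algorithm from these papers), removing the occurs check from the earlier unify sketch lets { y ≐ cons(2,y) } succeed with a finite, cyclic binding denoting the infinite tree cons(2,cons(2,...)):

```python
# Sketch (ours): unification WITHOUT the occurs check, in the style of
# Prolog systems that omit it. The binding y -> cons(2, y) is a finite,
# cyclic representation of the infinite rational tree cons(2, cons(2, ...)).
# A real rational-tree unifier must also memoize visited term pairs to
# guarantee termination on all inputs; this naive sketch does not.

def unify_rational(eqs, subst=None):
    subst = {} if subst is None else subst
    if not eqs:
        return subst
    (s, t), rest = eqs[0], eqs[1:]
    if isinstance(s, Var):
        if s == t:                        # delete
            return unify_rational(rest, subst)
        if s in subst:                    # follow an existing binding
            return unify_rational([(subst[s], t)] + rest, subst)
        return unify_rational(rest, {**subst, s: t})   # bind, no occurs check
    if isinstance(t, Var):                # swap
        return unify_rational([(t, s)] + rest, subst)
    if s.symbol != t.symbol or len(s.args) != len(t.args):
        return None                       # conflict
    return unify_rational(list(zip(s.args, t.args)) + rest, subst)

y = Var("y")
cons2y = Fn("cons", (Fn("2"), y))
assert unify_rational([(y, cons2y)]) == {y: cons2y}
```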
== E-unification == E-unification is the problem of finding solutions to a given set of equations, taking into account some equational background knowledge E. The latter is given as a set of universal equalities. For some particular sets E, equation solving algorithms (a.k.a. E-unification algorithms) have been devised; for others it has been proven that no such algorithms can exist. For example, if a and b are distinct constants, the equation x ∗ a ≐ y ∗ b {\displaystyle x*a\doteq y*b} has no solution with respect to purely syntactic unification, where nothing is known about the operator ∗ {\displaystyle *} . However, if ∗ {\displaystyle *} is known to be commutative, then the substitution {x ↦ b, y ↦ a} solves the above equation, since (x ∗ a) {x ↦ b, y ↦ a} = b ∗ a ≡ a ∗ b = (y ∗ b) {x ↦ b, y ↦ a}. The background knowledge E could state the commutativity of ∗ {\displaystyle *} by the universal equality " u ∗ v = v ∗ u {\displaystyle u*v=v*u} for all u, v". === Particular background knowledge sets E === It is said that unification is decidable for a theory if a unification algorithm has been devised for it that terminates for any input problem. It is said that unification is semi-decidable for a theory if a unification algorithm has been devised for it that terminates for any solvable input problem, but may keep searching forever for solutions of an unsolvable input problem. Unification is decidable for the following theories (here A abbreviates associativity of ∗, C commutativity, Dl and Dr left and right distributivity of ∗ over +, I idempotence, and Nl and Nr a left and right neutral element, respectively):
A
A,C
A,C,I
A,C,Nl
A,I
A,Nl,Nr (monoid)
C
Boolean rings
Abelian groups, even if the signature is expanded by arbitrary additional symbols (but not axioms)
K4 modal algebras
Unification is semi-decidable for the following theories:
A,Dl,Dr
A,C,Dl
Commutative rings
=== One-sided paramodulation === If there is a convergent term rewriting system R available for E, the one-sided paramodulation algorithm can be used to enumerate all solutions of given equations. Starting with G being the unification problem to be solved and S being the identity substitution, rules are applied nondeterministically until the empty set appears as the actual G, in which case the actual S is a unifying substitution. Depending on the order in which the paramodulation rules are applied, on the choice of the actual equation from G, and on the choice of R's rules in mutate, different computation paths are possible. Only some lead to a solution, while others end at a G ≠ {} where no further rule is applicable (e.g. G = { f(...) ≐ g(...) }). As an example, a term rewrite system R is used defining the append operator for lists built from cons and nil, where cons(x,y) is written in infix notation as x.y for brevity; its rules are (1) app(nil,z) → z and (2) app(x.y,z) → x.app(y,z). For example, app(a.b.nil,c.d.nil) → a.app(b.nil,c.d.nil) → a.b.app(nil,c.d.nil) → a.b.c.d.nil demonstrates the concatenation of the lists a.b.nil and c.d.nil, employing rewrite rules 2, 2, and 1. The equational theory E corresponding to R is the congruence closure of R, both viewed as binary relations on terms. For example, app(a.b.nil,c.d.nil) ≡ a.b.c.d.nil ≡ app(a.b.c.d.nil,nil). The paramodulation algorithm enumerates solutions to equations with respect to that E when fed with the example R. A successful computation path exists for the unification problem { app(x,app(y,x)) ≐ a.a.nil }: at each step, one equation is chosen from G and a rule is applied to it. To avoid variable name clashes, rewrite rules are consistently renamed each time before their use by rule mutate; v2, v3, ... are computer-generated variable names for this purpose.
Each application of the mutate rule uses one of the rewrite rules, (1) or (2). The path ends with the unifying substitution S = { y ↦ nil, x ↦ a.nil }. In fact, app(x,app(y,x)) {y ↦ nil, x ↦ a.nil} = app(a.nil,app(nil,a.nil)) ≡ app(a.nil,a.nil) ≡ a.app(nil,a.nil) ≡ a.a.nil solves the given problem. A second successful computation path, obtainable by choosing "mutate(1), mutate(2), mutate(2), mutate(1)", leads to the substitution S = { y ↦ a.a.nil, x ↦ nil }. No other path leads to a success. === Narrowing === If R is a convergent term rewriting system for E, an approach alternative to the previous section consists in the successive application of "narrowing steps"; this will eventually enumerate all solutions of a given equation. A narrowing step consists in choosing a nonvariable subterm of the current term, syntactically unifying it with the left-hand side of a rule from R, and replacing the instantiated rule's right-hand side into the instantiated term. Formally, if l → r is a renamed copy of a rewrite rule from R, having no variables in common with a term s, and the subterm s|p is not a variable and is unifiable with l via the mgu σ, then s can be narrowed to the term t = sσ[rσ]p, i.e. to the term sσ, with the subterm at p replaced by rσ. The situation that s can be narrowed to t is commonly denoted as s ↝ t. Intuitively, a sequence of narrowing steps t1 ↝ t2 ↝ ... ↝ tn can be thought of as a sequence of rewrite steps t1 → t2 → ... → tn, but with the initial term t1 being further and further instantiated, as necessary to make each of the used rules applicable. The above example paramodulation computation corresponds to a narrowing sequence whose last term, v2.v2.nil, can be syntactically unified with the original right-hand side term a.a.nil. The narrowing lemma ensures that whenever an instance of a term s can be rewritten to a term t by a convergent term rewriting system, then s and t can be narrowed and rewritten to a term s′ and t′, respectively, such that t′ is an instance of s′. Formally: whenever sσ →∗ t holds for some substitution σ, then there exist terms s′, t′ such that s ↝∗ s′ and t →∗ t′ and s′τ = t′ for some substitution τ. == Higher-order unification == Many applications require one to consider the unification of typed lambda-terms instead of first-order terms. Such unification is often called higher-order unification. Higher-order unification is undecidable, and such unification problems do not have most general unifiers. For example, the unification problem { f(a,b,a) ≐ d(b,a,c) }, where the only variable is f, has the solutions {f ↦ λx.λy.λz. d(y,x,c) }, {f ↦ λx.λy.λz. d(y,z,c) }, {f ↦ λx.λy.λz. d(y,a,c) }, {f ↦ λx.λy.λz. d(b,x,c) }, {f ↦ λx.λy.λz. d(b,z,c) } and {f ↦ λx.λy.λz. d(b,a,c) }. A well-studied branch of higher-order unification is the problem of unifying simply typed lambda terms modulo the equality determined by αβη conversions. Gérard Huet gave a semi-decidable (pre-)unification algorithm that allows a systematic search of the space of unifiers (generalizing the unification algorithm of Martelli–Montanari with rules for terms containing higher-order variables) and that seems to work sufficiently well in practice. Huet and Gilles Dowek have written articles surveying this topic. Several subsets of higher-order unification are well-behaved, in that they are decidable and have a most general unifier for solvable problems.
One such subset is the previously described first-order terms. Higher-order pattern unification, due to Dale Miller, is another such subset. The higher-order logic programming languages λProlog and Twelf have switched from full higher-order unification to implementing only the pattern fragment; surprisingly, pattern unification is sufficient for almost all programs, if each non-pattern unification problem is suspended until a subsequent substitution puts the unification into the pattern fragment. A superset of pattern unification called functions-as-constructors unification is also well-behaved. The Zipperposition theorem prover has an algorithm integrating these well-behaved subsets into a full higher-order unification algorithm. In computational linguistics, one of the most influential theories of elliptical construction is that ellipses are represented by free variables whose values are then determined using higher-order unification. For instance, the semantic representation of "Jon likes Mary and Peter does too" is like(j, m) ∧ R(p), and the value of R (the semantic representation of the ellipsis) is determined by the equation like(j, m) = R(j). The process of solving such equations is called higher-order unification. Wayne Snyder gave a generalization of both higher-order unification and E-unification, i.e. an algorithm to unify lambda-terms modulo an equational theory. == See also ==
Rewriting
Admissible rule
Explicit substitution in lambda calculus
Mathematical equation solving
Dis-unification: solving inequations between symbolic expressions
Anti-unification: computing a least general generalization (lgg) of two terms, dual to computing a most general common instance
Subsumption lattice, a lattice having unification as meet and anti-unification as join
Ontology alignment (uses unification with semantic equivalence)
== Notes == == References == == Further reading ==
Franz Baader and Wayne Snyder (2001). "Unification Theory". In John Alan Robinson and Andrei Voronkov, editors, Handbook of Automated Reasoning, volume I, pages 447–533. Elsevier Science Publishers.
Gilles Dowek (2001). "Higher-order Unification and Matching". In Handbook of Automated Reasoning.
Franz Baader and Tobias Nipkow (1998). Term Rewriting and All That. Cambridge University Press.
Franz Baader and Jörg H. Siekmann (1993). "Unification Theory". In Handbook of Logic in Artificial Intelligence and Logic Programming.
Jean-Pierre Jouannaud and Claude Kirchner (1991). "Solving Equations in Abstract Algebras: A Rule-Based Survey of Unification". In Computational Logic: Essays in Honor of Alan Robinson.
Nachum Dershowitz and Jean-Pierre Jouannaud (1990). "Rewrite Systems". In Jan van Leeuwen (ed.), Handbook of Theoretical Computer Science, volume B: Formal Models and Semantics. Elsevier, pp. 243–320.
Jörg H. Siekmann (1990). "Unification Theory". In Claude Kirchner (editor), Unification. Academic Press.
Kevin Knight (Mar 1989). "Unification: A Multidisciplinary Survey". ACM Computing Surveys. 21 (1): 93–124. doi:10.1145/62029.62030.
Gérard Huet and Derek C. Oppen (1980). "Equations and Rewrite Rules: A Survey". Technical report, Stanford University.
Raulefs, Peter; Siekmann, Jörg; Szabó, P.; Unvericht, E. (1979). "A short survey on the state of the art in matching and unification problems". ACM SIGSAM Bulletin. 13 (2): 14–20. doi:10.1145/1089208.1089210.
Claude Kirchner and Hélène Kirchner. Rewriting, Solving, Proving.
In preparation.
Wikipedia/Unification_algorithm
A closed-loop controller or feedback controller is a control loop which incorporates feedback, in contrast to an open-loop controller or non-feedback controller. A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop. In the case of linear feedback systems, a control loop including sensors, control algorithms, and actuators is arranged in an attempt to regulate a variable at a setpoint (SP). An everyday example is the cruise control on a road vehicle, where external influences such as hills would cause speed changes, and the driver has the ability to alter the desired set speed. The PID algorithm in the controller restores the actual speed to the desired speed in an optimal way, with minimal delay or overshoot, by controlling the power output of the vehicle's engine. Control systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent. Open-loop control systems do not make use of feedback, and run only in pre-arranged ways. Closed-loop controllers have the following advantages over open-loop controllers:
disturbance rejection (such as hills in the cruise control example above)
guaranteed performance even with model uncertainties, when the model structure does not match perfectly the real process and the model parameters are not exact
unstable processes can be stabilized
reduced sensitivity to parameter variations
improved reference tracking performance
improved rectification of random fluctuations
In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed feedforward and serves to further improve reference tracking performance. A common closed-loop controller architecture is the PID controller. == Open-loop and closed-loop == == Closed-loop transfer function == The output of the system y(t) is fed back through a sensor measurement F to a comparison with the reference value r(t). The controller C then takes the error e (difference) between the reference and the output to change the inputs u to the system under control P. This kind of controller is a closed-loop controller or feedback controller. This is called a single-input-single-output (SISO) control system; MIMO (i.e., multi-input-multi-output) systems, with more than one input/output, are common. In such cases variables are represented through vectors instead of simple scalar values. For some distributed parameter systems the vectors may be infinite-dimensional (typically functions). If we assume the controller C, the plant P, and the sensor F are linear and time-invariant (i.e., the elements of their transfer functions C(s), P(s), and F(s) do not depend on time), the systems above can be analysed using the Laplace transform on the variables. This gives the following relations: Y ( s ) = P ( s ) U ( s ) {\displaystyle Y(s)=P(s)U(s)} U ( s ) = C ( s ) E ( s ) {\displaystyle U(s)=C(s)E(s)} E ( s ) = R ( s ) − F ( s ) Y ( s ) .
{\displaystyle E(s)=R(s)-F(s)Y(s).} Solving for Y(s) in terms of R(s) gives Y ( s ) = ( P ( s ) C ( s ) 1 + P ( s ) C ( s ) F ( s ) ) R ( s ) = H ( s ) R ( s ) . {\displaystyle Y(s)=\left({\frac {P(s)C(s)}{1+P(s)C(s)F(s)}}\right)R(s)=H(s)R(s).} The expression H ( s ) = P ( s ) C ( s ) 1 + F ( s ) P ( s ) C ( s ) {\displaystyle H(s)={\frac {P(s)C(s)}{1+F(s)P(s)C(s)}}} is referred to as the closed-loop transfer function of the system. The numerator is the forward (open-loop) gain from r to y, and the denominator is one plus the gain in going around the feedback loop, the so-called loop gain. If | P ( s ) C ( s ) | ≫ 1 {\displaystyle |P(s)C(s)|\gg 1} , i.e., it has a large norm for each value of s, and if | F ( s ) | ≈ 1 {\displaystyle |F(s)|\approx 1} , then Y(s) is approximately equal to R(s) and the output closely tracks the reference input. == PID feedback control == A proportional–integral–derivative controller (PID controller) is a control loop feedback mechanism widely used in control systems. A PID controller continuously calculates an error value e(t) as the difference between a desired setpoint and a measured process variable and applies a correction based on proportional, integral, and derivative terms. PID is an initialism for Proportional-Integral-Derivative, referring to the three terms operating on the error signal to produce a control signal. The theoretical understanding and application date from the 1920s, and PID controllers are implemented in nearly all analogue control systems: originally in mechanical controllers, then using discrete electronics, and later in industrial process computers. The PID controller is probably the most-used feedback control design. If u(t) is the control signal sent to the system, y(t) is the measured output, r(t) is the desired output, and e(t) = r(t) − y(t) is the tracking error, a PID controller has the general form u ( t ) = K P e ( t ) + K I ∫ t e ( τ ) d τ + K D d e ( t ) d t . {\displaystyle u(t)=K_{P}e(t)+K_{I}\int ^{t}e(\tau ){\text{d}}\tau +K_{D}{\frac {{\text{d}}e(t)}{{\text{d}}t}}.} The desired closed-loop dynamics are obtained by adjusting the three parameters KP, KI and KD, often iteratively by "tuning" and without specific knowledge of a plant model. Stability can often be ensured using only the proportional term. The integral term permits the rejection of a step disturbance (often a striking specification in process control). The derivative term is used to provide damping or shaping of the response. PID controllers are the most well-established class of control systems; however, they cannot be used in several more complicated cases, especially if MIMO systems are considered. Applying the Laplace transformation results in the transformed PID controller equation u ( s ) = K P e ( s ) + K I 1 s e ( s ) + K D s e ( s ) {\displaystyle u(s)=K_{P}\,e(s)+K_{I}\,{\frac {1}{s}}\,e(s)+K_{D}\,s\,e(s)} u ( s ) = ( K P + K I 1 s + K D s ) e ( s ) {\displaystyle u(s)=\left(K_{P}+K_{I}\,{\frac {1}{s}}+K_{D}\,s\right)e(s)} with the PID controller transfer function C ( s ) = ( K P + K I 1 s + K D s ) . {\displaystyle C(s)=\left(K_{P}+K_{I}\,{\frac {1}{s}}+K_{D}\,s\right).} As an example of tuning a PID controller in the closed-loop system H(s), consider a first-order plant given by P ( s ) = A 1 + s T P {\displaystyle P(s)={\frac {A}{1+sT_{P}}}} where A and TP are some constants. The plant output is fed back through F ( s ) = 1 1 + s T F {\displaystyle F(s)={\frac {1}{1+sT_{F}}}} where TF is also a constant.
Now if we set K P = K ( 1 + T D T I ) {\displaystyle K_{P}=K\left(1+{\frac {T_{D}}{T_{I}}}\right)} , KD = KTD, and K I = K T I {\displaystyle K_{I}={\frac {K}{T_{I}}}} , we can express the PID controller transfer function in series form as C ( s ) = K ( 1 + 1 s T I ) ( 1 + s T D ) {\displaystyle C(s)=K\left(1+{\frac {1}{sT_{I}}}\right)(1+sT_{D})} Plugging P(s), F(s), and C(s) into the closed-loop transfer function H(s), we find that by setting K = 1 A , T I = T F , T D = T P {\displaystyle K={\frac {1}{A}},T_{I}=T_{F},T_{D}=T_{P}} we obtain H(s) = 1. With this tuning in this example, the system output follows the reference input exactly. However, in practice, a pure differentiator is neither physically realizable nor desirable due to amplification of noise and resonant modes in the system. Therefore, a phase-lead compensator type approach or a differentiator with low-pass roll-off is used instead.
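The tuning example above can also be checked numerically. The sketch below is ours: the gains, plant parameters, and forward-Euler discretization are illustrative choices, not from the source. It simulates a discrete-time PID loop around the first-order plant P(s) = A/(1 + sTP), with unity feedback (F = 1) for simplicity:

```python
# Sketch (ours): discrete-time PID control of the first-order plant
# P(s) = A / (1 + s*Tp), i.e. the ODE  Tp * dy/dt + y = A * u,
# with unity feedback. All numeric values are illustrative.

def simulate_pid(Kp, Ki, Kd, A=2.0, Tp=1.0, r=1.0, dt=0.001, steps=20000):
    y = 0.0            # plant output, starting at rest
    integral = 0.0     # running integral of the error
    prev_e = r - y     # previous error; makes the first derivative term zero
    for _ in range(steps):
        e = r - y                                      # tracking error
        integral += e * dt
        derivative = (e - prev_e) / dt
        u = Kp * e + Ki * integral + Kd * derivative   # PID control law
        y += dt * (A * u - y) / Tp                     # forward-Euler plant step
        prev_e = e
    return y

# With these illustrative gains, the integral term drives the
# steady-state error to zero and the output settles near r = 1.
assert abs(simulate_pid(Kp=2.0, Ki=1.0, Kd=0.05) - 1.0) < 0.05
```

== References ==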
Wikipedia/Feedback_control_system
Soil functions are general capabilities of soils that are important for various agricultural, environmental, nature protection, landscape architecture and urban applications. Soil can perform many functions, including functions related to natural ecosystems, agricultural productivity, environmental quality, sources of raw materials, and bases for buildings. Six key soil functions are:
Food and other biomass production
Environmental interaction
Biological habitat and gene pool
Source of raw materials
Physical and cultural heritage
Platform for man-made structures
== Food and other biomass production == Soil acts as an anchor for plant roots. It provides a hospitable place for a plant to live in while storing and supplying nutrients to plants. Soil also functions by maintaining the quantity and quality of air by allowing CO2 to escape and fresh O2 to enter the root zone. Pore spaces within soil can also absorb water and hold it until plant roots need it. The soil also moderates temperature fluctuation, providing a suitable temperature for the roots to function normally. A fertile soil will also provide dissolved mineral nutrients for optimal plant growth. The combination of these activities supports plant growth for food and other biomass production. == Environmental interaction == Environmental interactions such as regulating water supply, water loss, utilization, contamination, and purification are all affected by the soil. Soils can filter, buffer, and transform materials between the atmosphere, the plant cover, and the water table. Soil interacts with the environment to transform and decompose waste materials into new materials. Acting as a filter, soil captures contaminants on soil particles, so that water comes out cleaner in the aquifers and rivers. Lastly, it can accumulate large amounts of carbon as soil organic matter, thus reducing the concentration of atmospheric carbon dioxide, which can mitigate global climate change. == Biological habitat and gene pool == Soil also acts as a biological habitat and a gene reserve for a large variety of organisms. Soils are the environment in which seeds grow; they provide heat, nutrients and water that are available to nurture plants and animals. Soil assists in the decomposition of dead plants, animals, and other organisms by transforming their remains into simpler mineral forms, which can be utilized by other living things. == Source of raw materials == Soil provides raw materials for human use and impacts human health directly. The composition of human food reflects the nature of the soil in which it was grown. An example of soil as a source of raw material can be found in ancient ceramic production: Maya ceramics showed traits inherited from the soils and sediments used as raw material. An understanding of soil formation processes can help define certain types of soil and reflect the composition of soil minerals. However, the natural area of productive soils is limited, and due to the increasing pressure of cropping, forestry, and urbanization, the extraction of soil as a raw material needs to be controlled. == Physical and cultural heritage == Soil also has more general cultural functions, as it forms part of the cultural landscape of our minds as well as the physical world around us. An attachment to home soils, or a sense of place, is a cultural attribute developed more strongly in certain people.
Soils have been around since the early history of the Earth, and they can act as a factor in determining how humans have migrated in the past. Soil also acts as an earth cover that protects and preserves the physical artifacts of the past, allowing us to better understand cultural heritage. Moreover, soil has been an important factor in determining where people settle, as it is an essential resource for human productivity. == Platform for man-made structures == Soil can act as a deposit of raw materials and is widely used in building materials. Approximately 50% of the people on the planet live in houses constructed from soil. The soil must be firm and solid to provide a good base for roads and highways to be built on. Additionally, since these structures rest on the soil, factors such as its bearing strength, compressibility, stability, and shear strength all need to be considered. Testing the physical properties allows a better application of soil to engineering uses. == Mapping soil functions == Soil mapping is the identification, description, and delineation on a map of different types of soil, based on direct field observations or on indirect inferences from such sources as aerial photographs. Soil maps can depict soil properties and functions in the context of specific soil functions such as agricultural food production, environmental protection, and civil engineering considerations. Maps can depict functional interpretations of specific properties, such as critical nutrient levels or heavy-metal levels, or interpretations of multiple properties, such as a map of an erosion risk index. Mapping of function-specific soil properties is an extension of soil survey, using maps of soil components together with auxiliary information (including pedotransfer functions and soil inference models) to depict inferences about the specific performance of soil mapping units. Other functions of soil in ecosystems:
source of building materials (clay, sand, rocks)
carbon recycler
fiber production
== See also ==
Digital soil mapping – computer-assisted production of maps of soil properties
Ecosystem services – benefits provided by intact ecosystems
Pedotransfer function – predictive functions of soil properties
== References ==
Wikipedia/Soil_functions
Soil production function refers to the rate of bedrock weathering into soil as a function of soil thickness. A general model suggests that the rate of physical weathering of bedrock (de/dt) can be represented as an exponential decline with soil thickness: d e / d t = P 0 exp ⁡ [ − k h ] {\displaystyle de/dt=P_{0}\exp {[-kh]}} where h is soil thickness [m], P0 [mm/year] is the potential (or maximum) weathering rate of bedrock and k [m−1] is an empirical constant. The reduction of the weathering rate with thickening of soil is related to the exponential decrease of temperature amplitude with increasing depth below the soil surface, and also the exponential decrease in average water penetration (for freely-drained soils). Parameters P0 and k are related to the climate and type of parent material. The value of P0 was found to range from 0.08 to 2.0 mm/yr for sites in northern California, and 0.05–0.14 mm/yr for sites in southeastern Australia. Meanwhile, values of k do not vary significantly, ranging from 2 to 4 m−1; a brief numerical sketch of the function follows the reference list below. Several landscape evolution models have adopted the so-called humped model. This model dates back to G.K. Gilbert's Report on the Geology of the Henry Mountains (1877). Gilbert reasoned that the weathering of bedrock was fastest under an intermediate thickness of soil and slower under exposed bedrock or under thickly mantled soil. This is because chemical weathering requires the presence of water. Under thin soil or exposed bedrock, water tends to run off, reducing the chance of the decomposition of bedrock. == See also ==
Biorhexistasy
Hillslope evolution
Pedogenesis
Soil functions
== Notes and references == == Further reading ==
Ahnert, F. (1977). "Some comments on the quantitative formulation of geomorphological process in a theoretical model". Earth Surface Processes. 2 (2–3): 191–201. doi:10.1002/esp.3290020211.
Humphreys, G. S.; Wilkinson, M. T. (2007). "The soil production function: a brief history and its rediscovery". Geoderma. 139 (1–2): 73–78. doi:10.1016/j.geoderma.2007.01.004.
Wilkinson, M. T.; Chappell, J.; Humphreys, G. S.; Fifield, K.; Smith, B.; Hesse, P. P. (2005). "Soil production in heath and forest, Blue Mountains, Australia: influence of lithology and palaeoclimate". Earth Surface Processes and Landforms. 30 (8): 923–934. doi:10.1002/esp.1254.
Wilkinson, M. T.; Humphreys, G. S. (2005). "Exploring pedogenesis via nuclide-based soil production rates and OSL-based bioturbation rates". Australian Journal of Soil Research. 43 (6): 767–779. doi:10.1071/SR04158.
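As promised above, a brief numerical sketch of the soil production function (ours; P0 and k are mid-range picks from the ranges quoted in the text, purely illustrative):

```python
import math

def weathering_rate(h, P0=0.5, k=3.0):
    """Soil production rate de/dt = P0 * exp(-k * h) in mm/yr, for soil
    thickness h in metres; P0 = 0.5 mm/yr and k = 3 m^-1 are mid-range
    values from the ranges quoted above (illustrative choice)."""
    return P0 * math.exp(-k * h)

print(weathering_rate(0.0))   # bare bedrock: the full potential rate, 0.5 mm/yr
print(weathering_rate(0.5))   # under 0.5 m of soil: ~0.11 mm/yr, over 4x slower
```

Note this sketch encodes only the exponential model; the humped model discussed above would instead peak at an intermediate soil thickness.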
Wikipedia/Soil_production_function
Agricultural science (or agriscience for short) is a broad multidisciplinary field of biology that encompasses the parts of exact, natural, economic and social sciences that are used in the practice and understanding of agriculture. Professionals of agricultural science are called agricultural scientists or agriculturists. == History == In the 18th century, Johann Friedrich Mayer conducted experiments on the use of gypsum (hydrated calcium sulfate) as a fertilizer. In 1843, John Bennet Lawes and Joseph Henry Gilbert began a set of long-term field experiments at Rothamsted Research in England, some of which are still running as of 2018. In the United States, a scientific revolution in agriculture began with the Hatch Act of 1887, which used the term "agricultural science". The Hatch Act was driven by farmers' interest in knowing the constituents of early artificial fertilizer. The Smith–Hughes Act of 1917 shifted agricultural education back to its vocational roots, but the scientific foundation had been built. For the next 44 years after 1906, federal expenditures on agricultural research in the United States outpaced private expenditures. == Prominent agricultural scientists ==
Wilbur Olin Atwater
Robert Bakewell
Norman Borlaug
Luther Burbank
George Washington Carver
Carl Henry Clerk
George C. Clerk
René Dumont
Sir Albert Howard
Kailas Nath Kaul
Thomas Lecky
Justus von Liebig
Jay Laurence Lush
Gregor Mendel
Louis Pasteur
M. S. Swaminathan
Jethro Tull
Artturi Ilmari Virtanen
Sewall Wright
== Fields or related disciplines == == Scope == Agriculture, agricultural science, and agronomy are closely related. However, they cover different concepts: Agriculture is the set of activities that transform the environment for the production of animals and plants for human use. Agriculture concerns techniques, including the application of agronomic research. Agronomy is research and development related to studying and improving plant-based crops. Geoponics is the science of cultivating the earth. Hydroponics involves growing plants without soil, by using water-based mineral nutrient solutions in an artificial environment. == Research topics == Agricultural sciences include research and development on:
Improving agricultural productivity in terms of quantity and quality (e.g., selection of drought-resistant crops and animals, development of new pesticides, yield-sensing technologies, simulation models of crop growth, in-vitro cell culture techniques)
Minimizing the effects of pests (weeds, insects, pathogens, mollusks, nematodes) on crop or animal production systems
Transformation of primary products into end-consumer products (e.g., production, preservation, and packaging of dairy products)
Prevention and correction of adverse environmental effects (e.g., soil degradation, waste management, bioremediation)
Theoretical production ecology, relating to crop production modeling
Traditional agricultural systems, sometimes termed subsistence agriculture, which feed most of the poorest people in the world. These systems are of interest as they sometimes retain a level of integration with natural ecological systems greater than that of industrial agriculture, which may be more sustainable than some modern agricultural systems.
Food production and demand globally, with particular attention paid to the primary producers, such as China, India, Brazil, the US, and the EU.
Various sciences relating to agricultural resources and the environment (e.g.
soil science, agroclimatology); biology of agricultural crops and animals (e.g. crop science, animal science and their included sciences, e.g. ruminant nutrition, farm animal welfare); such fields as agricultural economics and rural sociology; various disciplines encompassed in agricultural engineering. == See also == Agricultural Research Council Agricultural sciences basic topics Agriculture ministry Agroecology American Society of Agronomy Consultative Group on International Agricultural Research (CGIAR) Crop Science Society of America Genomics of domestication History of agricultural science Indian Council of Agricultural Research Institute of Food and Agricultural Sciences International Assessment of Agricultural Science and Technology for Development International Food Policy Research Institute, IFPRI International Institute of Tropical Agriculture International Livestock Research Institute List of agriculture topics National Agricultural Library (NAL) National FFA Organization Research Institute of Crop Production (RICP) (in the Czech Republic) Soil Science Society of America USDA Agricultural Research Service University of Agricultural Sciences == References == == Further reading == Agricultural Research, Livelihoods, and Poverty: Studies of Economic and Social Impacts in Six Countries, edited by Michelle Adato and Ruth Meinzen-Dick (2007), Johns Hopkins University Press Food Policy Report Claude Bourguignon, Regenerating the Soil: From Agronomy to Agrology, Other India Press, 2005 Pimentel David, Pimentel Marcia, Compter les kilocalories, Cérès, n. 59, sept.–oct. 1977 Russell E. Walter, Soil conditions and plant growth, Longman group, London, New York 1973 Salamini, Francesco; Özkan, Hakan; Brandolini, Andrea; Schäfer-Pregl, Ralf; Martin, William (2002). "Genetics and geography of wild cereal domestication in the Near East". Nature Reviews Genetics. 3 (6): 429–441. doi:10.1038/nrg817. PMID 12042770. S2CID 25166879. Saltini Antonio, Storia delle scienze agrarie, 4 vols, Bologna 1984–89, ISBN 88-206-2412-5, ISBN 88-206-2413-3, ISBN 88-206-2414-1, ISBN 88-206-2415-X Vavilov Nicolai I. (Starr Chester K. editor), The Origin, Variation, Immunity and Breeding of Cultivated Plants. Selected Writings, in Chronica botanica, 13: 1–6, Waltham, Mass., 1949–50 Vavilov Nicolai I., World Resources of Cereals, Leguminous Seed Crops and Flax, Academy of Sciences of the USSR, National Science Foundation, Washington, Israel Program for Scientific Translations, Jerusalem 1960 Winogradsky Serge, Microbiologie du sol. Problèmes et méthodes. Cinquante ans de recherches, Masson & Cie, Paris 1949
Wikipedia/Agricultural_sciences
In soil science, pedotransfer functions (PTF) are predictive functions of certain soil properties using data from soil surveys. The term pedotransfer function was coined by Johan Bouma as "translating data we have into what we need". The most readily available data come from a soil survey, such as the field morphology, soil texture, structure and pH. Pedotransfer functions add value to this basic information by translating it into estimates of other, more laboriously and expensively determined soil properties. These functions fill the gap between the available soil data and the properties which are more useful or required for a particular model or quality assessment. Pedotransfer functions utilize various regression analysis and data mining techniques to extract rules associating basic soil properties with more difficult to measure properties. Although not formally recognized and named until 1989, the concept of the pedotransfer function has long been applied to estimate soil properties that are difficult to determine. Many soil science agencies have their own (unofficial) rules of thumb for estimating difficult-to-measure soil properties. Probably because of the particular difficulty, cost of measurement, and availability of large databases, the most comprehensive research in developing PTFs has been for the estimation of the water retention curve and hydraulic conductivity. == History == The first PTF came from the study of Lyman Briggs and McLane (1907). They determined the wilting coefficient, defined as the percentage water content of a soil when the plants growing in that soil are first reduced to a wilted condition from which they cannot recover in an approximately saturated atmosphere without the addition of water to the soil, as a function of particle size: Wilting coefficient = 0.01 sand + 0.12 silt + 0.57 clay With the introduction of the field capacity (FC) and permanent wilting point (PWP) concepts by Frank Veihmeyer and Arthur Hendricksen (1927), research during the period 1950–1980 attempted to correlate particle-size distribution, bulk density and organic matter content with water content at field capacity (FC), permanent wilting point (PWP), and available water capacity (AWC). In the 1960s various papers dealt with the estimation of FC, PWP, and AWC, notably in a series of papers by Salter and Williams (1965 etc.). They explored relationships between texture classes and available water capacity, which are now known as class PTFs. They also developed functions relating the particle-size distribution to AWC, now known as continuous PTFs. They asserted that their functions could predict AWC to a mean accuracy of 16%. In the 1970s more comprehensive research using large databases was developed. A particularly good example is the study by Hall et al. (1977) from soils in England and Wales; they established field capacity, permanent wilting point, available water content, and air capacity as functions of textural class, as well as deriving continuous functions estimating these soil-water properties. In the USA, Gupta and Larson (1979) developed 12 functions relating particle-size distribution and organic matter content to water content at potentials ranging from −4 kPa to −1500 kPa. With the flourishing development of models describing soil hydraulic properties and computer modelling of soil-water and solute transport, the need for hydraulic properties as inputs to these models became more evident. 
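The Briggs and McLane (1907) relation above already has the typical form of a continuous pedotransfer function, a linear combination of readily measured texture fractions; the short sketch below simply evaluates it, with the texture values invented for the example.

```python
def wilting_coefficient(sand, silt, clay):
    """Briggs and McLane (1907) pedotransfer function.

    sand, silt, clay -- texture fractions in percent (should sum to ~100)
    Returns the wilting coefficient (percent water content).
    """
    return 0.01 * sand + 0.12 * silt + 0.57 * clay

# Illustrative (made-up) soil: a clay loam with 30% sand, 35% silt, 35% clay.
print(wilting_coefficient(30.0, 35.0, 35.0))  # -> 24.45
```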
Clapp and Hornberger (1978) derived average values for the parameters of a power-function water retention curve, sorptivity and saturated hydraulic conductivity for different texture classes. In probably the first research of its kind, Bloemen (1977) derived empirical equations relating parameters of the Brooks and Corey hydraulic model to particle-size distribution. Jurgen Lamp and Kneib (1981) from Germany introduced the term pedofunction, while Bouma and van Lanen (1986) used the term transfer function. To avoid confusion with the term transfer function used in soil physics and in many other disciplines, Johan Bouma (1989) later called it pedotransfer function. (A personal anecdote suggests that Arnold Bregt from Wageningen University proposed the term.) Since then, the development of hydraulic PTFs has become a booming research topic, first in the US and Europe, and later in South America, Australia and the rest of the world. Although most PTFs have been developed to predict soil hydraulic properties, they are not restricted to hydraulic properties. PTFs for estimating soil physical, mechanical, chemical and biological properties have also been developed. == Software == There are several available programs that aid in determining hydraulic properties of soils using pedotransfer functions, among them: SOILPAR, by Acutis and Donatelli; and ROSETTA, by Schaap et al. of the USDA, which uses artificial neural networks. == Soil inference systems == McBratney et al. (2002) introduced the concept of a soil inference system, SINFERS, where pedotransfer functions are the knowledge rules for soil inference engines. A soil inference system takes measurements with a given level of certainty (source) and, by means of logically linked pedotransfer functions (organiser), infers data that are not known with minimal inaccuracy (predictor). == See also == Moisture equivalent Nonlimiting water range Soil functions == References ==
Wikipedia/Pedotransfer_function
Chemical engineering is an engineering field which deals with the study of the operation and design of chemical plants as well as methods of improving production. Chemical engineers develop economical commercial processes to convert raw materials into useful products. Chemical engineering uses principles of chemistry, physics, mathematics, biology, and economics to efficiently use, produce, design, transport and transform energy and materials. The work of chemical engineers can range from the utilization of nanotechnology and nanomaterials in the laboratory to large-scale industrial processes that convert chemicals, raw materials, living cells, microorganisms, and energy into useful forms and products. Chemical engineers are involved in many aspects of plant design and operation, including safety and hazard assessments, process design and analysis, modeling, control engineering, chemical reaction engineering, nuclear engineering, biological engineering, construction specification, and operating instructions. Chemical engineers typically hold a degree in Chemical Engineering or Process Engineering. Practicing engineers may have professional certification and be accredited members of a professional body. Such bodies include the Institution of Chemical Engineers (IChemE) or the American Institute of Chemical Engineers (AIChE). A degree in chemical engineering is directly linked with all of the other engineering disciplines, to various extents. == Etymology == A 1996 article cites James F. Donnelly for mentioning an 1839 reference to chemical engineering in relation to the production of sulfuric acid. In the same paper, however, George E. Davis, an English consultant, was credited with having coined the term. Davis also tried to found a Society of Chemical Engineering, but instead, it was named the Society of Chemical Industry (1881), with Davis as its first secretary. The History of Science in the United States: An Encyclopedia puts the use of the term around 1890. "Chemical engineering", describing the use of mechanical equipment in the chemical industry, became common vocabulary in England after 1850. By 1910, the profession, "chemical engineer," was already in common use in Britain and the United States. == History == === New concepts and innovations === In the 1940s, it became clear that unit operations alone were insufficient in developing chemical reactors. While the predominance of unit operations in chemical engineering courses in Britain and the United States continued until the 1960s, transport phenomena started to receive greater focus. Along with other novel concepts, such as process systems engineering (PSE), a "second paradigm" was defined. Transport phenomena gave an analytical approach to chemical engineering while PSE focused on its synthetic elements, such as those of a control system and process design. Developments in chemical engineering before and after World War II were mainly incited by the petrochemical industry; however, advances in other fields were made as well. Advancements in biochemical engineering in the 1940s, for example, found application in the pharmaceutical industry, and allowed for the mass production of various antibiotics, including penicillin and streptomycin. Meanwhile, progress in polymer science in the 1950s paved the way for the "age of plastics". === Safety and hazard developments === Concerns regarding large-scale chemical manufacturing facilities' safety and environmental impact were also raised during this period. 
Silent Spring, published in 1962, alerted its readers to the harmful effects of DDT, a potent insecticide. The 1974 Flixborough disaster in the United Kingdom resulted in 28 deaths, as well as damage to a chemical plant and three nearby villages. The 1984 Bhopal disaster in India resulted in almost 4,000 deaths. These and other incidents affected the reputation of the trade as industrial safety and environmental protection were given more focus. In response, the IChemE required safety to be part of every degree course that it accredited after 1982. By the 1970s, legislation and monitoring agencies were instituted in various countries, such as France, Germany, and the United States. In time, the systematic application of safety principles to chemical and other process plants began to be considered a specific discipline, known as process safety. === Recent progress === Advancements in computer science found applications for designing and managing plants, simplifying calculations and drawings that previously had to be done manually. The completion of the Human Genome Project is also seen as a major development, not only advancing chemical engineering but genetic engineering and genomics as well. Chemical engineering principles were used to produce DNA sequences in large quantities. == Concepts == Chemical engineering involves the application of several principles. Key concepts are presented below. === Plant design and construction === Chemical engineering design concerns the creation of plans, specifications, and economic analyses for pilot plants, new plants, or plant modifications. Design engineers often work in a consulting role, designing plants to meet clients' needs. Design is limited by several factors, including funding, government regulations, and safety standards. These constraints dictate a plant's choice of process, materials, and equipment. Plant construction is coordinated by project engineers and project managers, depending on the size of the investment. A chemical engineer may do the job of project engineer full-time or part of the time, which requires additional training and job skills, or act as a consultant to the project group. In the USA the education of chemical engineering graduates from Baccalaureate programs accredited by ABET does not usually stress project engineering, which can be obtained by specialized training, as electives, or from graduate programs. Project engineering jobs are some of the largest employers for chemical engineers. === Process design and analysis === A unit operation is a physical step in an individual chemical engineering process. Unit operations (such as crystallization, filtration, drying and evaporation) are used to prepare reactants, purify and separate products, recycle unspent reactants, and control energy transfer in reactors. On the other hand, a unit process is the chemical equivalent of a unit operation. Along with unit operations, unit processes constitute a process operation. Unit processes (such as nitration, hydrogenation, and oxidation) involve the conversion of materials by biochemical, thermochemical and other means. Chemical engineers responsible for these are called process engineers. Process design requires the definition of equipment types and sizes as well as how they are connected and the materials of construction. Details are often printed on a Process Flow Diagram which is used to control the capacity and reliability of a new or existing chemical factory. 
Education for chemical engineers in the first college degree, typically three or four years of study, stresses the principles and practices of process design. The same skills are used in existing chemical plants to evaluate efficiency and make recommendations for improvements. === Transport phenomena === Modeling and analysis of transport phenomena is essential for many industrial applications. Transport phenomena involve fluid dynamics, heat transfer and mass transfer, which are governed mainly by momentum transfer, energy transfer and transport of chemical species, respectively. Models often involve separate considerations for macroscopic, microscopic and molecular level phenomena. Modeling of transport phenomena, therefore, requires an understanding of applied mathematics. == Applications and practice == Chemical engineers develop economic ways of using materials and energy. Chemical engineers use chemistry and engineering to turn raw materials into usable products, such as medicine, petrochemicals, and plastics, in a large-scale industrial setting. They are also involved in waste management and research. Both applied and research facets could make extensive use of computers. Chemical engineers may be involved in industry or university research where they are tasked with designing and performing experiments, by scaling up theoretical chemical reactions, to create better and safer methods for production, pollution control, and resource conservation. They may be involved in designing and constructing plants as a project engineer. Chemical engineers serving as project engineers use their knowledge in selecting optimal production methods and plant equipment to minimize costs and maximize safety and profitability. After plant construction, chemical engineering project managers may be involved in equipment upgrades, troubleshooting, and daily operations in either full-time or consulting roles. == See also == === Related topics === === Related fields and concepts === === Associations === == References == == Bibliography ==
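To make the transport-phenomena remark above concrete, the following minimal sketch solves one-dimensional transient heat conduction (dT/dt = α d²T/dx²) with an explicit finite-difference scheme, a classic exercise in this field; the rod length, diffusivity, grid, and boundary temperatures are illustrative assumptions, not values from the text.

```python
# Explicit finite-difference solution of dT/dt = alpha * d2T/dx2
# for a 1 m rod, ends held at 0 degC, initially at 100 degC.
alpha = 1e-4               # thermal diffusivity [m^2/s], roughly that of copper (assumed)
n = 21                     # number of grid points
dx = 1.0 / (n - 1)
dt = 0.4 * dx * dx / alpha # satisfies the explicit stability limit dt <= dx^2 / (2 * alpha)

T = [100.0] * n
T[0] = T[-1] = 0.0         # fixed-temperature boundary conditions

for _ in range(200):
    T_new = T[:]
    for i in range(1, n - 1):
        # Central difference in space, forward difference in time.
        T_new[i] = T[i] + alpha * dt / dx**2 * (T[i - 1] - 2 * T[i] + T[i + 1])
    T = T_new

print(f"Midpoint temperature after {200 * dt:.0f} s: {T[n // 2]:.1f} degC")
```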
Wikipedia/chemical_engineering
Chemical reactor materials selection is an important aspect in the design of a chemical reactor. There are four main groups of chemical reactors - CSTR, PFR, semi-batch, and catalytic - with variations on each. Depending on the nature of the chemicals involved in the reaction, as well as the operating conditions (e.g. temperature and pressure), certain materials will perform better than others. == Material Options == There are several broad classes of materials available for use in creating a chemical reactor. Some examples include metals, glasses, ceramics, polymers, carbon, and composites. Metals are the most common class of materials for chemical engineering equipment as they are comparatively easy to manufacture, have high strength, and are resistant to fracture. Glass is common in chemical laboratory equipment, but highly prone to fracture and so is not useful in large-scale industrial use. Ceramics are not that common a material for chemical reactors as they are brittle and difficult to manufacture. Polymers have begun to gain more popularity in piping and valves as they aid in temperature stability. There are several forms of carbon, but the most useful form for reactors is carbon or graphite fibers in composites. == Criteria for Selection == An essential criterion for a particular material is its safety. Engineers have a responsibility to ensure the safety of those who handle equipment or utilize a building or road, for example, by minimizing the risks of injuries or casualties. Other considerations include strength, resistance to sudden failure from either mechanical or thermal shock, corrosion resistance, and cost, to name a few. To compare different materials to each other, it may prove useful to consult an Ashby diagram and the ASME Pressure Vessel Codes. The material choice would ideally be drawn from known data as well as experience. Having a deeper understanding of the component requirements and the corrosion and degradation behavior will aid in materials selection. Additionally, knowing the performance of past systems, whether good or bad, will benefit the user in deciding on alternative alloys or using a coated system; if previous information is not available, then performing tests is recommended. == High Temperature Operation == High temperature reactor operation includes a host of problems such as distortion and cracking due to thermal expansion and contraction, and high temperature corrosion. Some indications that the latter is occurring include burnt or charred surfaces, molten phases, distortion, thick scales, and grossly thinned metal. Some typical high-temperature alloys are based on iron, nickel, or cobalt and contain >20% chromium, which forms a protective oxide against further oxidation. Various other elements are added to aid corrosion resistance, such as aluminum, silicon, and rare earth elements such as yttrium, cerium, and lanthanum. Other additions, such as reactive or refractory metals, can improve the mechanical properties of the reactor. Refractory metals can experience catastrophic oxidation, which turns metals into a powdery oxide with little use. This damage is worse in stagnant conditions; however, silicide coatings have been shown to offer some resistance. == References ==
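In the spirit of the Ashby diagram mentioned above, a first shortlist can be generated by ranking candidate material classes on a merit index such as specific strength; the property values in the sketch below are rough, illustrative figures only, not design data, and a real selection would also weigh corrosion behavior, shock resistance, cost, and the ASME code allowables discussed in this article.

```python
# Rough, illustrative property values only -- real selection would use
# measured data, ASME code allowables, and corrosion/degradation behavior.
candidates = {
    #                  strength [MPa], density [kg/m^3]  (assumed figures)
    "carbon steel":    (400, 7850),
    "stainless steel": (520, 8000),
    "glass":           (50, 2500),
    "CFRP composite":  (600, 1600),
}

# Merit index: specific strength (strength / density), as on an Ashby chart.
ranked = sorted(candidates.items(),
                key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (sigma, rho) in ranked:
    print(f"{name:16s} strength/density = {sigma / rho:.3f} MPa/(kg/m^3)")
```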
Wikipedia/Chemical_reactor_materials_selection
The chemical industry comprises the companies and other organizations that develop and produce industrial, specialty and other chemicals. Central to the modern world economy, the chemical industry converts raw materials (oil, natural gas, air, water, metals, and minerals) into commodity chemicals for industrial and consumer products. It includes industries for petrochemicals such as polymers for plastics and synthetic fibers; inorganic chemicals such as acids and alkalis; agricultural chemicals such as fertilizers, pesticides and herbicides; and other categories such as industrial gases, specialty chemicals and pharmaceuticals. Various professionals are involved in the chemical industry, including chemical engineers, chemists and lab technicians. == History == Although chemicals were made and used throughout history, the birth of the heavy chemical industry (production of chemicals in large quantities for a variety of uses) coincided with the beginnings of the Industrial Revolution. === Industrial Revolution === One of the first chemicals to be produced in large amounts through industrial processes was sulfuric acid. In 1736 pharmacist Joshua Ward developed a process for its production that involved heating sulfur with saltpeter, allowing the sulfur to oxidize and combine with water. It was the first practical production of sulfuric acid on a large scale. John Roebuck and Samuel Garbett were the first to establish a large-scale factory in Prestonpans, Scotland, in 1749, which used leaden condensing chambers for the manufacture of sulfuric acid. In the early 18th century, cloth was bleached by treating it with stale urine or sour milk and exposing it to sunlight for long periods of time, which created a severe bottleneck in production. Sulfuric acid began to be used as a more efficient agent, as well as lime, by the middle of the century, but it was the discovery of bleaching powder by Charles Tennant that spurred the creation of the first great chemical industrial enterprise. His powder was made by reacting chlorine with dry slaked lime and proved to be a cheap and successful product. He opened the St Rollox Chemical Works, north of Glasgow, and production went from just 52 tons in 1799 to almost 10,000 tons just five years later. Soda ash had been used since ancient times in the production of glass, textiles, soap, and paper, and the source of the potash had traditionally been wood ashes in Western Europe. By the 18th century, this source was becoming uneconomical due to deforestation, and the French Academy of Sciences offered a prize of 2400 livres for a method to produce alkali from sea salt (sodium chloride). The Leblanc process was patented in 1791 by Nicolas Leblanc, who then built a Leblanc plant at Saint-Denis. He was denied his prize money because of the French Revolution. In Britain, the Leblanc process became popular. William Losh built the first soda works in Britain at the Losh, Wilson and Bell works on the River Tyne in 1816, but it remained on a small scale due to large tariffs on salt production until 1824. When these tariffs were repealed, the British soda industry was able to expand rapidly. James Muspratt's chemical works in Liverpool and Charles Tennant's complex near Glasgow became the largest chemical production centres anywhere. By the 1870s, the British soda output of 200,000 tons annually exceeded that of all other nations in the world combined. These huge factories began to produce a greater diversity of chemicals as the Industrial Revolution matured. 
Originally, large quantities of alkaline waste were vented into the environment from the production of soda, provoking one of the first pieces of environmental legislation to be passed in 1863. This provided for close inspection of the factories and imposed heavy fines on those exceeding the limits on pollution. Methods were devised to make useful byproducts from the alkali. The Solvay process was developed by the Belgian industrial chemist Ernest Solvay in 1861. In 1864, Solvay and his brother Alfred constructed a plant in Charleroi, Belgium. In 1874, they expanded into a larger plant in Nancy, France. The new process proved more economical and less polluting than the Leblanc method, and its use spread. In the same year, Ludwig Mond visited Solvay to acquire the rights to use his process, and he and John Brunner formed Brunner, Mond & Co., and built a Solvay plant at Winnington, England. Mond was instrumental in making the Solvay process a commercial success. He made several refinements between 1873 and 1880 that removed byproducts that could inhibit the production of sodium carbonate in the process. The manufacture of chemical products from fossil fuels began at scale in the early 19th century. The coal tar and ammoniacal liquor residues of coal gas manufacture for gas lighting began to be processed in 1822 at the Bonnington Chemical Works in Edinburgh to make naphtha, pitch oil (later called creosote), pitch, lampblack (carbon black) and sal ammoniac (ammonium chloride). Ammonium sulphate fertiliser, asphalt road surfacing, coke oil and coke were later added to the product line. === Expansion and maturation === The late 19th century saw an explosion in both the quantity of production and the variety of chemicals that were manufactured. Large chemical industries arose in Germany and later in the United States. Production of artificially manufactured fertilizer for agriculture was pioneered by Sir John Lawes at his purpose-built Rothamsted Research facility. In the 1840s he established large works near London for the manufacture of superphosphate of lime. Processes for the vulcanization of rubber were patented by Charles Goodyear in the United States and Thomas Hancock in England in the 1840s. The first synthetic dye was discovered by William Henry Perkin in London. He partly transformed aniline into a crude mixture which, when extracted with alcohol, produced a substance with an intense purple colour. He also developed the first synthetic perfumes. German industry quickly began to dominate the field of synthetic dyes. The three major firms BASF, Bayer, and Hoechst produced several hundred different dyes. By 1913, German industries produced almost 90% of the world's supply of dyestuffs and sold approximately 80% of their production abroad. In the United States, Herbert Henry Dow's use of electrochemistry to produce chemicals from brine was a commercial success that helped to promote the country's chemical industry. The petrochemical industry can be traced back to the oil works of Scottish chemist James Young, and Canadian Abraham Pineo Gesner. The first plastic was invented by Alexander Parkes, an English metallurgist. In 1856, he patented Parkesine, a celluloid based on nitrocellulose treated with a variety of solvents. This material, exhibited at the 1862 London International Exhibition, anticipated many of the modern aesthetic and utility uses of plastics. 
The industrial production of soap from vegetable oils was started by William Lever and his brother James in 1885 in Lancashire, based on a modern chemical process invented by William Hough Watson that used glycerin and vegetable oils. By the 1920s, chemical firms had consolidated into large conglomerates: IG Farben in Germany, Rhône-Poulenc in France and Imperial Chemical Industries in Britain. DuPont became a major chemicals firm in the early 20th century in America. == Products == Polymers and plastics such as polyethylene, polypropylene, polyvinyl chloride, polyethylene terephthalate, polystyrene and polycarbonate comprise about 80% of the industry's output worldwide. Chemicals are used in many different consumer goods, and are also used in many different sectors, including agriculture, manufacturing, construction, and service industries. Major industrial customers include rubber and plastic products, textiles, apparel, petroleum refining, pulp and paper, and primary metals. Chemicals are nearly a $5 trillion global enterprise, and the EU and U.S. chemical companies are the world's largest producers. Sales of the chemical business can be divided into a few broad categories, including basic chemicals (about 35–37% of dollar output), life sciences (30%), specialty chemicals (20–25%) and consumer products (about 10%). === Overview === Basic chemicals, or "commodity chemicals", are a broad chemical category including polymers, bulk petrochemicals and intermediates, other derivatives and basic industrials, inorganic chemicals, and fertilizers. Polymers, the largest revenue segment, include all categories of plastics and human-made fibers. The major markets for plastics are packaging, followed by home construction, containers, appliances, pipe, transportation, toys, and games. The largest-volume polymer product, polyethylene (PE), is used mainly in packaging films and other markets such as milk bottles, containers, and pipe. Polyvinyl chloride (PVC), another large-volume product, is principally used to make piping for construction markets as well as siding and, to a much smaller extent, transportation and packaging materials. Polypropylene (PP), similar in volume to PVC, is used in markets ranging from packaging, appliances, and containers to clothing and carpeting. Polystyrene (PS), another large-volume plastic, is used principally for appliances and packaging as well as toys and recreation. The leading human-made fibers include polyester, nylon, polypropylene, and acrylics, with applications including apparel, home furnishings, and other industrial and consumer uses. Principal raw materials for polymers are bulk petrochemicals like ethylene, propylene and benzene. Petrochemicals and intermediate chemicals are primarily made from liquefied petroleum gas (LPG), natural gas and crude oil fractions. Large volume products include ethylene, propylene, benzene, toluene, xylenes, methanol, vinyl chloride monomer (VCM), styrene, butadiene, and ethylene oxide. These basic or commodity chemicals are the starting materials used to manufacture many polymers and other more complex organic chemicals, particularly those that are made for use in the specialty chemicals category. Other derivatives and basic industrials include synthetic rubber, surfactants, dyes and pigments, turpentine, resins, carbon black, explosives, and rubber products, and contribute about 20 percent of the basic chemicals' external sales. 
Inorganic chemicals (about 12% of the revenue output) make up the oldest of the chemical categories. Products include salt, chlorine, caustic soda, soda ash, acids (such as nitric acid, phosphoric acid, and sulfuric acid), titanium dioxide, and hydrogen peroxide. Fertilizers are the smallest category (about 6 percent) and include phosphates, ammonia, and potash chemicals. === Life sciences === Life sciences (about 30% of the dollar output of the chemistry business) include differentiated chemical and biological substances, pharmaceuticals, diagnostics, animal health products, vitamins, and pesticides. While much smaller in volume than other chemical sectors, their products tend to have high prices – over ten dollars per pound – growth rates of 1.5 to 6 times GDP, and research and development spending at 15 to 25% of sales. Life science products are usually produced with high specifications and are closely scrutinized by government agencies such as the Food and Drug Administration. Pesticides, also called "crop protection chemicals", are about 10% of this category and include herbicides, insecticides, and fungicides. === Specialty chemicals === Specialty chemicals are a category of relatively high-valued, rapidly growing chemicals with diverse end product markets. Typical growth rates are one to three times GDP, with prices over a dollar per pound. They are generally characterized by their innovative aspects. Products are sold for what they can do rather than for what chemicals they contain. Products include electronic chemicals, industrial gases, adhesives and sealants as well as coatings, industrial and institutional cleaning chemicals, and catalysts. In 2012, excluding fine chemicals, the $546 billion global specialty chemical market was 33% paints, coatings and surface treatments, 27% advanced polymers, 14% adhesives and sealants, 13% additives, and 13% pigments and inks. Specialty chemicals are sold as effect or performance chemicals. Sometimes they are mixtures of formulations, unlike "fine chemicals", which are almost always single-molecule products. === Consumer products === Consumer products include direct product sales of chemicals such as soaps, detergents, and cosmetics. Typical growth rates are 0.8 to 1.0 times GDP. Consumers rarely come into contact with basic chemicals, but polymers and specialty chemicals are materials that they encounter everywhere daily. Examples are plastics, cleaning materials, cosmetics, paints and coatings, electronics, automobiles and the materials used in home construction. These specialty products are marketed by chemical companies to the downstream manufacturing industries as pesticides, specialty polymers, electronic chemicals, surfactants, construction chemicals, industrial cleaners, flavours and fragrances, specialty coatings, printing inks, water-soluble polymers, food additives, paper chemicals, oil field chemicals, plastic adhesives, adhesives and sealants, cosmetic chemicals, water management chemicals, catalysts, and textile chemicals. Chemical companies rarely supply these products directly to the consumer. Annually the American Chemistry Council tabulates the US production volume of the top 100 chemicals. In 2000, the aggregate production volume of the top 100 chemicals totaled 502 million tons, up from 397 million tons in 1990. Inorganic chemicals tend to be the largest volume, but much smaller in dollar revenue due to their low prices. 
The top 11 of the 100 chemicals in 2000 were sulfuric acid (44 million tons), nitrogen (34), ethylene (28), oxygen (27), lime (22), ammonia (17), propylene (16), polyethylene (15), chlorine (13), phosphoric acid (13) and diammonium phosphates (12). == Companies == The largest chemical producers today are global companies with international operations and plants in numerous countries. The top 25 companies are commonly ranked by chemical sales, as in 2015 rankings; note that chemical sales represent only a portion of total sales for some companies. == Technology == From the perspective of chemical engineers, the chemical industry involves the use of chemical processes such as chemical reactions and refining methods to produce a wide variety of solid, liquid, and gaseous materials. Most of these products serve to manufacture other items, although a smaller number go directly to consumers. Solvents, pesticides, lye, washing soda, and portland cement provide a few examples of products used by consumers. The industry includes manufacturers of inorganic and organic industrial chemicals, ceramic products, petrochemicals, agrochemicals, polymers and rubber (elastomers), oleochemicals (oils, fats, and waxes), explosives, and fragrances and flavors. Related industries include petroleum, glass, paint, ink, sealant, adhesive, pharmaceuticals and food processing. Chemical processes such as chemical reactions operate in chemical plants to form new substances in various types of reaction vessels. In many cases, the reactions take place in special corrosion-resistant equipment at elevated temperatures and pressures with the use of catalysts. The products of these reactions are separated using a variety of techniques, including distillation (especially fractional distillation), precipitation, crystallization, adsorption, filtration, sublimation, and drying. The processes and products are usually tested during and after manufacture by dedicated instruments and on-site quality control laboratories to ensure safe operation and to assure that the product will meet required specifications. More organizations within the industry are implementing chemical compliance software to maintain quality products and manufacturing standards. The products are packaged and delivered by many methods, including pipelines, tank-cars, and tank-trucks (for both solids and liquids), cylinders, drums, bottles, and boxes. Chemical companies often have a research-and-development laboratory for developing and testing products and processes. These facilities may include pilot plants, and such research facilities may be located at a site separate from the production plant(s). == World chemical production == The scale of chemical manufacturing tends to be organized from largest in volume (petrochemicals and commodity chemicals), to specialty chemicals, and the smallest, fine chemicals. The petrochemical and commodity chemical manufacturing units are on the whole single-product continuous processing plants. Not all petrochemical or commodity chemical materials are made in one single location, but groups of related materials often are, to induce industrial symbiosis as well as material, energy and utility efficiency and other economies of scale. 
Those chemicals made on the largest of scales are made in a few manufacturing locations around the world, for example in Texas and Louisiana along the Gulf Coast of the United States, on Teesside (United Kingdom), and in Rotterdam in the Netherlands. The large-scale manufacturing locations often have clusters of manufacturing units that share utilities and large-scale infrastructure such as power stations, port facilities, and road and rail terminals. To demonstrate the clustering and integration mentioned above, some 50% of the United Kingdom's petrochemical and commodity chemicals are produced by the Northeast of England Process Industry Cluster on Teesside. Specialty and fine chemical manufacturing is mostly carried out in discrete batch processes. These manufacturers are often found in similar locations, but in many cases they are to be found in multi-sector business parks. === Continents and countries === In the U.S. there are 170 major chemical companies. They operate internationally, with more than 2,800 facilities outside the U.S. and 1,700 foreign subsidiaries or affiliates. The U.S. chemical output is $750 billion a year. The U.S. industry records large trade surpluses and employs more than a million people in the United States alone. The chemical industry is also the second largest consumer of energy in manufacturing and spends over $5 billion annually on pollution abatement. In Europe, the chemical, plastics, and rubber sectors are among the largest industrial sectors. Together they generate about 3.2 million jobs in more than 60,000 companies. Since 2000 the chemical sector alone has represented 2/3 of the entire manufacturing trade surplus of the EU. In 2012, the chemical sector accounted for 12% of the EU manufacturing industry's added value. Europe remains the world's biggest chemical trading region, with 43% of the world's exports and 37% of the world's imports, although the latest data show that Asia is catching up with 34% of the exports and 37% of the imports. Even so, Europe still has a trading surplus with all regions of the world except Japan and China, where in 2011 there was a chemical trade balance. Europe's trade surplus with the rest of the world today amounts to 41.7 billion euros. Over the 20 years between 1991 and 2011, the European chemical industry saw its sales increase from 295 billion euros to 539 billion euros, a picture of constant growth. Despite this, the European industry's share of the world chemical market has fallen from 36% to 20%. This has resulted from the huge increase in production and sales in emerging markets like India and China. The data suggest that 95% of this impact is from China alone. In 2012 data from the European Chemical Industry Council show that five European countries account for 71% of the EU's chemicals sales: Germany, France, the United Kingdom, Italy and the Netherlands. The chemical industry has seen growth in China, India, Korea, the Middle East, South East Asia, Nigeria and Brazil. The growth is driven by changes in feedstock availability and price, labor and energy costs, differential rates of economic growth and environmental pressures. Just as companies emerge as the main producers of the chemical industry, we can also look on a more global scale at how industrialized countries rank with regard to the billions of dollars' worth of production a country or region could export. 
Though the business of chemistry is worldwide in scope, the bulk of the world's $3.7 trillion chemical output is accounted for by only a handful of industrialized nations. The United States alone produced $689 billion, 18.6 percent of the total world chemical output, in 2008. == See also == Chemical engineering Chemical leasing Pharmaceutical industry Industrial gas Prices of chemical elements Responsible Care Northeast of England Process Industry Cluster (NEPIC) == References == Aftalion, Fred (1991). A History of the International Chemical Industry. University of Pennsylvania Press. ISBN 978-0-8122-1297-6. Online version Archived 2011-06-04 at the Wayback Machine. Brandt, E. N. (1997). Growth Company: Dow Chemical's First Century. Michigan State University Press. ISBN 0-87013-426-4. Online review. Chandler, Alfred D. (2005). Shaping the Industrial Century: The Remarkable Story of the Evolution of the Modern Chemical and Pharmaceutical Industries. Harvard University Press. ISBN 0-674-01720-X. Chapters 3–6 deal with DuPont, Dow Chemicals, Monsanto, American Cyanamid, Union Carbide, and Allied in the US, and with the European chemical producers Bayer, Farben, and ICI. McCoy, Michael; et al. (July 10, 2006). "Facts & Figures of the Chemical Industry". Chemical & Engineering News. 84 (28): 35–72. Shreve, R. Norris; Brink, Joseph A. Jr. (1977). The Chemical Process Industries (4th ed.). New York: McGraw Hill. Woytinsky, W. S.; Woytinsky, E. S. (1953). World Population and Production Trends and Outlooks. pp. 1176–1205. Contains many tables and maps on the worldwide chemical industry in 1950. == External links == Chemical refinery resources: ccc-group.com
Wikipedia/Industrial_chemistry
In chemical engineering, process design is the choice and sequencing of units for desired physical and/or chemical transformation of materials. Process design is central to chemical engineering, and it can be considered to be the summit of that field, bringing together all of the field's components. Process design can be the design of new facilities or it can be the modification or expansion of existing facilities. The design starts at a conceptual level and ultimately ends in the form of fabrication and construction plans. Process design is distinct from equipment design, which is closer in spirit to the design of unit operations. Processes often include many unit operations. == Documentation == Process design documents serve to define the design and they ensure that the design components fit together. They are useful in communicating ideas and plans to other engineers involved with the design, to external regulatory agencies, to equipment vendors, and to construction contractors. In order of increasing detail, process design documents include: Block flow diagrams (BFD): Very simple diagrams composed of rectangles and lines indicating major material or energy flows. Process flow diagrams (PFD): Typically more complex diagrams of major unit operations as well as flow lines. They usually include a material balance, and sometimes an energy balance, showing typical or design flowrates, stream compositions, and stream and equipment pressures and temperatures. The PFD is the key document in process design. Piping and instrumentation diagrams (P&ID): Diagrams showing each and every pipeline with piping class (carbon steel or stainless steel) and pipe size (diameter). They also show valving along with instrument locations and process control schemes. Specifications: Written design requirements of all major equipment items. Process designers typically write operating manuals on how to start up, operate and shut down the process. They often also develop accident plans and projections of process operation on the environment. Documents are maintained after construction of the process facility for the operating personnel to refer to. The documents also are useful when modifications to the facility are planned. A primary method of developing the process documents is process flowsheeting. == Design considerations == Design conceptualization and considerations can begin once objectives are defined and constraints identified. Objectives that a design may strive to meet include: Throughput rate Process yield Product purity Constraints include: Capital cost: investment required to implement the design, including the cost of new equipment and disposal of obsolete equipment. Available space: the area of land or room in a building to place new or modified equipment. Safety concerns: risks of accidents and dangers posed by hazardous materials. Environmental impact and projected effluents, emissions, and waste production. Operating and maintenance costs. Other factors that designers may include are: Reliability Redundancy Flexibility Anticipated variability in feedstock and allowable variability in product. == Sources of design information == Designers usually do not start from scratch, especially for complex projects. Often the engineers have pilot plant data available or data from full-scale operating facilities. Other sources of information include proprietary design criteria provided by process licensors, published scientific data, laboratory experiments, and suppliers of feedstocks and utilities. 
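As a small illustration of the material balance that a process flow diagram carries (see Documentation above), the sketch below closes a steady-state component balance around a single separation unit; the stream names, component list, and split fractions are invented for the example, not taken from any real design.

```python
# Steady-state component material balance around one separation unit:
# feed -> overhead + bottoms. All flows in kmol/h; values are illustrative.
feed = {"ethanol": 40.0, "water": 60.0}

# Assumed split fractions to the overhead stream for each component.
split_to_overhead = {"ethanol": 0.90, "water": 0.10}

overhead = {c: feed[c] * split_to_overhead[c] for c in feed}
bottoms = {c: feed[c] - overhead[c] for c in feed}

for stream, flows in (("feed", feed), ("overhead", overhead), ("bottoms", bottoms)):
    total = sum(flows.values())
    comp = ", ".join(f"{c}: {f:.1f}" for c, f in flows.items())
    print(f"{stream:9s} total = {total:6.1f} kmol/h ({comp})")

# The balance closes: in = out for every component.
assert all(abs(feed[c] - overhead[c] - bottoms[c]) < 1e-9 for c in feed)
```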
== Design process == Design starts with process synthesis - the choice of technology and combinations of industrial units to achieve goals. More detailed design proceeds as other engineers and stakeholders sign off on each stage, from conceptual design through detailed design. Simulation software is often used by design engineers. Simulations can identify weaknesses in designs and allow engineers to choose better alternatives. However, engineers still rely on heuristics, intuition, and experience when designing a process. Human creativity is an element in complex designs. == See also == == Recommended chemical engineering books == Sinnott and Towler (2009). Chemical Engineering Design: Principles, Practice and Economics of Plant and Process Design (5th ed.). Butterworth-Heinemann. ISBN 978-0750685511. Ullmann's (2004). Chemical Engineering and Plant Design. Wiley-VCH. ISBN 978-3-527-31111-8. Moran, Sean (2015). An Applied Guide to Process and Plant Design (1st ed.). Butterworth-Heinemann. ISBN 978-0128002421. Moran, Sean (2016). Process Plant Layout (2nd ed.). Butterworth-Heinemann. ISBN 978-0128033555. Peter, Frank (2008). Process Plant Design. Wiley. ISBN 9783527313136. Kister, Henry Z. (1992). Distillation Design (1st ed.). McGraw-Hill. ISBN 0-07-034909-6. Perry, Robert H. & Green, Don W. (1984). Perry's Chemical Engineers' Handbook (6th ed.). McGraw-Hill. ISBN 0-07-049479-7. Bird, R. B.; Stewart, W. E.; Lightfoot, E. N. (August 2001). Transport Phenomena (2nd ed.). John Wiley & Sons. ISBN 0-471-41077-2. McCabe, W.; Smith, J.; Harriott, P. (2004). Unit Operations of Chemical Engineering (7th ed.). McGraw Hill. ISBN 0-07-284823-5. Seader, J. D. & Henley, Ernest J. (1998). Separation Process Principles. New York: Wiley. ISBN 0-471-58626-9. Chopey, Nicholas P. (2004). Handbook of Chemical Engineering Calculations (3rd ed.). McGraw-Hill. ISBN 0-07-136262-2. Himmelblau, David M. (1996). Basic Principles and Calculations in Chemical Engineering (6th ed.). Prentice-Hall. ISBN 0-13-305798-4. Kroschwitz, Jacqueline I.; Seidel, Arza, eds. (2004). Kirk-Othmer Encyclopedia of Chemical Technology (5th ed.). Hoboken, NJ: Wiley-Interscience. ISBN 0-471-48810-0. King, C. J. (1980). Separation Processes (2nd ed.). McGraw Hill. ISBN 0-07-034612-7. Peters, M. S. & Timmerhaus, K. D. (1991). Plant Design and Economics for Chemical Engineers (4th ed.). McGraw Hill. ISBN 0-07-100871-3. Smith, J. M.; Van Ness, H. C.; Abbott, M. M. (2001). Introduction to Chemical Engineering Thermodynamics (6th ed.). McGraw Hill. ISBN 0-07-240296-2. == References == == External links == Chemical Process Design Open Textbook (Northwestern University by Fengqi You) A General Framework for Process Synthesis, Integration, and Intensification (OSTI / Texas A&M University)
Wikipedia/Process_design_(chemical_engineering)
The first time a catalyst was used in industry was in 1746 by J. Roebuck in the manufacture of lead chamber sulfuric acid. Since then catalysts have been in use in a large portion of the chemical industry. In the beginning only pure components were used as catalysts, but after the year 1900 multicomponent catalysts were studied and are now commonly used in the industry. In the chemical industry and industrial research, catalysis plays an important role. Different catalysts are in constant development to fulfil economic, political and environmental demands. When using a catalyst, it is possible to replace a polluting chemical reaction with a more environmentally friendly alternative. Today, and in the future, this can be vital for the chemical industry. In addition, it is important for a company or researcher to pay attention to market development. If a company's catalyst is not continually improved, another company can make progress in research on that particular catalyst and gain market share. For a company, a new and improved catalyst can be a huge advantage for a competitive manufacturing cost. It is extremely expensive for a company to shut down the plant because of an error in the catalyst, so the correct selection of a catalyst or a new improvement can be key to industrial success. To achieve the best understanding and development of a catalyst, it is important that different special fields work together. These fields can be: organic chemistry, analytical chemistry, inorganic chemistry, chemical engineering and surface chemistry. The economics must also be taken into account. One of the issues that must be considered is whether the company should spend money on doing the catalyst research itself or buy the technology from someone else. As the analytical tools are becoming more advanced, the catalysts used in the industry are improving. One example of an improvement can be to develop a catalyst with a longer lifetime than the previous version. Some of the advantages an improved catalyst gives, that affect people's lives, are: cheaper and more effective fuel, new drugs and medications, and new polymers. Some of the large chemical processes that use catalysis today are the production of methanol and ammonia. Both methanol and ammonia synthesis take advantage of the water-gas shift reaction and heterogeneous catalysis, while other chemical industries use homogeneous catalysis. If the catalyst exists in the same phase as the reactants it is said to be homogeneous; otherwise it is heterogeneous. == Water gas shift reaction == The water gas shift (WGS) reaction was first used industrially at the beginning of the 20th century. Today the WGS reaction is used primarily to produce hydrogen that can be used for further production of methanol and ammonia. In the WGS reaction, carbon monoxide (CO) reacts with water (H2O) to form carbon dioxide (CO2) and hydrogen (H2): CO + H2O ⇌ CO2 + H2 The reaction is exothermic, with ΔH = −41.1 kJ/mol, and has an adiabatic temperature rise of 8–10 °C per percent CO converted to CO2 and H2. The most common catalysts used in the water-gas shift reaction are the high temperature shift (HTS) catalyst and the low temperature shift (LTS) catalyst. The HTS catalyst consists of iron oxide stabilized by chromium oxide, while the LTS catalyst is based on copper. The main purpose of the LTS catalyst is to reduce the CO content of the reformate, which is especially important in ammonia production for a high yield of H2. 
Both catalysts are necessary for thermal stability, since using the LTS reactor alone increases exit-stream temperatures to unacceptable levels. The equilibrium constant for the reaction is given as Kp = (pCO2 · pH2) / (pCO · pH2O). Low temperatures will therefore shift the reaction to the right, and more products will be produced. The equilibrium constant is extremely dependent on the reaction temperature: for example, Kp is 228 at 200 °C but only 11.8 at 400 °C. The WGS reaction can be performed both homogeneously and heterogeneously, but only the heterogeneous method is used commercially. === High temperature shift (HTS) catalyst === The first step in the WGS reaction is the high temperature shift, which is carried out at temperatures between 320 °C and 450 °C. As mentioned before, the catalyst is a composition of iron oxide, Fe2O3 (90–95%), and chromium oxide, Cr2O3 (5–10%), which has an ideal activity and selectivity at these temperatures. When preparing this catalyst, one of the most important steps is washing to remove sulfate, which can turn into hydrogen sulfide and poison the LTS catalyst later in the process. Chromium is added to the catalyst to stabilize the catalyst activity over time and to delay sintering of the iron oxide. Sintering decreases the active catalyst area, so by decreasing the sintering rate the lifetime of the catalyst is extended. The catalyst is usually used in pellet form, and the size plays an important role. Large pellets will be strong, but the reaction rate will be limited. In the end, the dominant phase in the catalyst consists of Cr3+ in α-Fe2O3, but the catalyst is still not active. To be active, α-Fe2O3 must be reduced to Fe and CrO3 must be reduced to Cr in the presence of H2. This usually happens in the reactor start-up phase, and because the reduction reactions are exothermic the reduction should happen under controlled circumstances. The lifetime of the iron–chrome catalyst is approximately 3–5 years, depending on how the catalyst is handled. Even though the mechanism for the HTS catalyst has been researched extensively, there is no final agreement on the kinetics/mechanism. Research has narrowed it down to two possible mechanisms: a regenerative redox mechanism and an adsorptive (associative) mechanism. In the redox mechanism, a CO molecule first reduces a surface oxygen centre, yielding CO2 and a vacant surface site: CO + O(s) → CO2 + vacancy. The vacant site is then reoxidized by water, and the oxide centre is regenerated: H2O + vacancy → O(s) + H2. The adsorptive mechanism assumes that formate species are produced when an adsorbed CO molecule reacts with a surface hydroxyl group: CO(ads) + OH(ads) → HCOO(ads). The formate then decomposes in the presence of steam to CO2 and H2. === Low temperature shift (LTS) catalyst === The low temperature shift is the second stage in the process, and is designed to take advantage of the higher hydrogen equilibrium at low temperatures. The reaction is carried out between 200 °C and 250 °C, and the most commonly used catalyst is based on copper. While the HTS reactor uses an iron–chrome based catalyst, the copper catalyst is more active at lower temperatures, thereby yielding a lower equilibrium concentration of CO and a higher equilibrium concentration of H2. The disadvantage of copper catalysts is that they are very sensitive to sulfide poisoning; a future use of, for example, a cobalt–molybdenum catalyst could solve this problem. The catalyst mainly used in the industry today is a copper–zinc–alumina (Cu/ZnO/Al2O3) based catalyst. 
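The strong temperature dependence of the WGS equilibrium constant quoted above can be reproduced with an empirical correlation commonly attributed to Moe (1962); the correlation form in the sketch below is an assumption of this illustration, not taken from the text, but it returns values close to the 228 and 11.8 cited for 200 °C and 400 °C.

```python
import math

def wgs_kp(T_celsius):
    """Approximate WGS equilibrium constant, Kp = exp(4577.8/T - 4.33), T in K.

    Empirical correlation commonly attributed to Moe (1962); an approximation
    only, assumed here for illustration.
    """
    T = T_celsius + 273.15
    return math.exp(4577.8 / T - 4.33)

for t in (200, 250, 320, 400, 450):
    print(f"{t} degC -> Kp = {wgs_kp(t):.1f}")
# At 200 degC this gives Kp a little over 200, and at 400 degC about 11.8 --
# the same order as the values quoted in the text, showing why the low
# temperature shift stage favours higher hydrogen yields.
```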
Like the HTS catalyst, the LTS catalyst has to be activated by reduction before it can be used. The reduction reaction CuO + H2 → Cu + H2O is highly exothermic and should be conducted in dry gas for an optimal result. As for the HTS catalyst mechanism, two similar reaction mechanisms are suggested. The first mechanism that was proposed for the LTS reaction was a redox mechanism, but later evidence showed that the reaction can proceed via associated intermediates. The different intermediates that have been suggested are HOCO, HCO and HCOO. As of 2009, three mechanisms in total have been proposed for the water-gas shift reaction over Cu(111), given below. Intermediate mechanism (usually called associative mechanism): an intermediate is first formed and then decomposes into the final products. Associative mechanism: CO2 is produced from the reaction of CO with OH without the formation of an intermediate. Redox mechanism: water dissociation yields surface oxygen atoms, which react with CO to produce CO2. It is not said that just one of these mechanisms controls the reaction; it is possible that several of them are active. Q.-L. Tang et al. have suggested that the most favorable mechanism is the intermediate mechanism (with HOCO as intermediate), followed by the redox mechanism, with the rate-determining step being the water dissociation. For both the HTS catalyst and the LTS catalyst the redox mechanism is the oldest theory, and most published articles support this theory, but as technology has developed the adsorptive mechanism has become of more interest. One of the reasons the literature does not agree on a single mechanism may be that experiments are carried out under different assumptions. === Carbon monoxide === CO must be produced for the WGS reaction to take place. This can be done in different ways from a variety of carbon sources, such as passing steam over coal (C + H2O → CO + H2), steam reforming of methane over a nickel catalyst (CH4 + H2O → CO + 3 H2), or by using biomass. Both of the reactions shown above are highly endothermic and can be coupled to an exothermic partial oxidation. The products, CO and H2, are known as syngas. When dealing with a catalyst and CO, it is common to assume that a CO–metal intermediate is formed before the intermediate reacts further into the products. When designing a catalyst this is important to remember. The strength of the interaction between the CO molecule and the metal should be strong enough to provide a sufficient concentration of the intermediate, but not so strong that the reaction will not continue. CO is a common molecule to use in a catalytic reaction, and when it interacts with a metal surface it is actually the molecular orbitals of CO that interact with the d-band of the metal surface. When considering a molecular orbital (MO) diagram, CO can act as a σ-donor via the lone pair of electrons on C, and as a π-acceptor ligand in transition metal complexes. When a CO molecule is adsorbed on a metal surface, the d-band of the metal interacts with the molecular orbitals of CO. It is possible to look at a simplified picture and consider only the LUMO (2π*) and HOMO (5σ) of CO. The overall effect of the σ-donation and the π-back-donation is that a strong bond between C and the metal is formed and, in addition, the bond between C and O is weakened. The latter effect is due to charge depletion of the CO 5σ bonding orbital and charge increase of the CO 2π* antibonding orbital. When looking at chemical surfaces, many researchers seem to agree that the surface of Cu/Al2O3/ZnO is most similar to the Cu(111) surface. 
Since copper is the main catalyst and the active phase in the LTS catalyst, many experiments have been done with copper. Experiments on the Cu(110) and Cu(111) single-crystal surfaces, compared by means of Arrhenius plots derived from the measured reaction rates, show that Cu(110) has a faster reaction rate and a lower activation energy. This may be because Cu(111) is more closely packed than Cu(110). == Methanol production == Production of methanol is an important industry today, and methanol is one of the largest-volume carbonylation products. The process uses syngas as feedstock, and for that reason the water-gas shift reaction is important for this synthesis. The most important reaction based on methanol is the decomposition of methanol to yield carbon monoxide and hydrogen; methanol is therefore an important raw material for the production of CO and H2 that can be used in fuel generation. BASF was the first company (in 1923) to produce methanol on a large scale, then using a sulfur-resistant ZnO/Cr2O3 catalyst. The feed gas was produced by gasification of coal. Today the synthesis gas is usually manufactured via steam reforming of natural gas. The most effective catalysts for methanol synthesis are Cu, Ni, Pd and Pt, while the most common elements used for supports are Al and Si. In 1966 ICI (Imperial Chemical Industries) developed a process that is still in use today. It is a low-pressure process that uses a Cu/ZnO/Al2O3 catalyst in which copper is the active material; this is in fact the same catalyst that is used for the low-temperature shift in the WGS reaction. The synthesis reactions, CO + 2H2 → CH3OH and CO2 + 3H2 → CH3OH + H2O, are carried out at 250 °C and 5–10 MPa. Both of these reactions are exothermic and proceed with volume contraction. The maximum yield of methanol is therefore obtained at low temperatures and high pressure, with a catalyst that has high activity under these conditions. A catalyst with sufficiently high activity at low temperatures does not yet exist, and this is one of the main reasons that companies continue research and catalyst development. A reaction mechanism for methanol synthesis over this catalyst has been suggested by Chinchen et al. Today there are four different ways to obtain hydrogen catalytically from methanol (steam reforming, methanol decomposition, partial oxidation and combined reforming), and all of these reactions can be carried out using a transition-metal catalyst (Cu, Pd). === Steam reforming === The reaction is given as CH3OH + H2O → CO2 + 3H2. Steam reforming is a good source for the production of hydrogen, but the reaction is endothermic. The reaction can be carried out over a copper-based catalyst, but the reaction mechanism depends on the catalyst. For a copper-based catalyst, two different reaction mechanisms have been proposed: a decomposition–water-gas shift sequence and a mechanism that proceeds via dehydrogenation of methanol to methyl formate. The first mechanism involves methanol decomposition followed by the WGS reaction and has been proposed for Cu/ZnO/Al2O3. The mechanism of the methyl formate route can depend on the composition of the catalyst, and a corresponding mechanism has likewise been proposed over Cu/ZnO/Al2O3. When methanol is almost completely converted, CO is produced as a secondary product via the reverse water-gas shift reaction. === Methanol decomposition === The second way to produce hydrogen from methanol is by methanol decomposition: CH3OH → CO + 2H2. The reaction is endothermic, and this can be taken advantage of in industry.
This reaction is the opposite of methanol synthesis from syngas, and the most effective catalysts seem to be Cu, Ni, Pd and Pt, as mentioned before. Often a Cu/ZnO-based catalyst is used at temperatures between 200 and 300 °C, but by-products such as dimethyl ether, methyl formate, methane and water are common. The reaction mechanism is not fully understood, and two possible mechanisms had been proposed as of 2002: one producing CO2 and H2 by decomposition of formate intermediates, and the other producing CO and H2 via a methyl formate intermediate. === Partial oxidation === Partial oxidation is a third way of producing hydrogen from methanol. The reaction, CH3OH + 1/2 O2 → CO2 + 2H2, is often carried out with air or oxygen as the oxidant. It is exothermic and has, under favorable conditions, a higher reaction rate than steam reforming. The catalyst used is often Cu (Cu/ZnO) or Pd, and the two differ in qualities such as by-product formation, product distribution and the effect of the oxygen partial pressure. === Combined reforming === Combined reforming is a combination of partial oxidation and steam reforming and is the last of the reactions used for hydrogen production. The general equation can be written as CH3OH + sH2O + (p/2)O2 → CO2 + (2 + s)H2 (with s + p = 1 for a balanced equation), where s and p are the stoichiometric coefficients for steam reforming and partial oxidation, respectively. The reaction can be either endothermic or exothermic depending on the conditions, and it combines the advantages of steam reforming and partial oxidation. == Ammonia synthesis == Ammonia synthesis was discovered by Fritz Haber, using iron catalysts. Ammonia synthesis advanced between 1909 and 1913, and two important concepts were developed: the benefits of a promoter and the poisoning effect (see catalysis for more details). Ammonia production was one of the first commercial processes that required the production of hydrogen, and the cheapest and best way to obtain hydrogen was via the water-gas shift reaction. The Haber–Bosch process is the most common process used in the ammonia industry. A great deal of research has been done on the catalyst used in the ammonia process, but the main catalyst used today is not that dissimilar to the one that was first developed: a promoted iron catalyst, in which the promoters can be K2O (potassium oxide), Al2O3 (aluminium oxide) and CaO (calcium oxide) and the basic catalytic material is iron. Fixed-bed reactors are most commonly used for the synthesis catalyst. The main ammonia reaction is N2 + 3H2 ⇌ 2NH3. The ammonia produced can be used further in the production of nitric acid via the Ostwald process. == See also == Ammonia Chemical plant Chemical industry == References ==
Wikipedia/Industrial_catalysts
Occupational safety and health (OSH) or occupational health and safety (OHS) is a multidisciplinary field concerned with the safety, health, and welfare of people at work (i.e., while performing duties required by one's occupation). OSH is related to the fields of occupational medicine and occupational hygiene and aligns with workplace health promotion initiatives. OSH also protects members of the general public who may be affected by the occupational environment. According to the official estimates of the United Nations, the WHO/ILO Joint Estimate of the Work-related Burden of Disease and Injury, almost 2 million people die each year due to exposure to occupational risk factors. Globally, more than 2.78 million people die annually as a result of workplace-related accidents or diseases, corresponding to one death every fifteen seconds. There are an additional 374 million non-fatal work-related injuries annually. It is estimated that the economic burden of occupational-related injury and death is nearly four per cent of the global gross domestic product each year. The human cost of this adversity is enormous. In common-law jurisdictions, employers have a common-law duty (also called a duty of care) to take reasonable care of the safety of their employees. Statute law may, in addition, impose other general duties, introduce specific duties, and create government bodies with powers to regulate occupational safety issues. Details of this vary from jurisdiction to jurisdiction. Prevention of workplace incidents and occupational diseases is addressed through the implementation of occupational safety and health programs at company level. == Definitions == The International Labour Organization (ILO) and the World Health Organization (WHO) share a common definition of occupational health. It was first adopted by the Joint ILO/WHO Committee on Occupational Health at its first session in 1950: Occupational health should aim at the promotion and maintenance of the highest degree of physical, mental and social well-being of workers in all occupations; the prevention amongst workers of departures from health caused by their working conditions; the protection of workers in their employment from risks resulting from factors adverse to health; the placing and maintenance of the worker in an occupational environment adapted to his physiological and psychological capabilities; and, to summarize: the adaptation of work to man and of each man to his job. In 1995, a consensus statement was added: The main focus in occupational health is on three different objectives: (i) the maintenance and promotion of workers' health and working capacity; (ii) the improvement of working environment and work to become conducive to safety and health; and (iii) the development of work organizations and working cultures in a direction which supports health and safety at work and in doing so also promotes a positive social climate and smooth operation and may enhance productivity of the undertakings. The concept of working culture is intended in this context to mean a reflection of the essential value systems adopted by the undertaking concerned. Such a culture is reflected in practice in the managerial systems, personnel policy, principles for participation, training policies and quality management of the undertaking. An alternative definition for occupational health given by the WHO is: "occupational health deals with all aspects of health and safety in the workplace and has a strong focus on primary prevention of hazards."
The expression "occupational health", as originally adopted by the WHO and the ILO, refers to both short- and long-term adverse health effects. In more recent times, the expressions "occupational safety and health" and "occupational health and safety" have come into use (and have also been adopted in works by the ILO), based on the general understanding that occupational health refers to hazards associated with disease and long-term effects, while occupational safety hazards are those associated with work accidents causing injury and sudden severe conditions. == History == Research and regulation of occupational safety and health are a relatively recent phenomenon. As labor movements arose in response to worker concerns in the wake of the industrial revolution, workers' safety and health entered consideration as a labor-related issue. === Beginnings === Written works on occupational diseases began to appear by the end of the 15th century, when demand for gold and silver was rising due to the increase in trade, and iron, copper, and lead were also in demand from the nascent firearms market. Deeper mining became common as a consequence. In 1473, Ulrich Ellenbog, a German physician, wrote a short treatise, On the Poisonous Wicked Fumes and Smokes, focused on coal, nitric acid, lead, and mercury fumes encountered by metal workers and goldsmiths. In 1567, the first work on the diseases of mine and smelter workers, by Paracelsus (1493–1541), was published posthumously. In it, he gave accounts of miners' "lung sickness". In 1556, Georgius Agricola's (1494–1555) De re metallica, a treatise on metallurgy published shortly after his death, described accidents and diseases prevalent among miners and recommended practices to prevent them. Like Paracelsus, Agricola mentioned the dust that "eats away the lungs, and implants consumption." The seeds of state intervention to correct social ills were sown during the reign of Elizabeth I by the Poor Laws, which originated in attempts to alleviate hardship arising from widespread poverty. While they were perhaps motivated more by a need to contain unrest than by morals, they were significant in transferring responsibility for helping the needy from private hands to the state. In 1713, Bernardino Ramazzini (1633–1714), often described as the father of occupational medicine and a precursor to occupational health, published his De morbis artificum diatriba (Dissertation on Workers' Diseases), which outlined the health hazards of chemicals, dust, metals, repetitive or violent motions, odd postures, and other disease-causative agents encountered by workers in more than fifty occupations. It was the first broad-ranging presentation of occupational diseases. Percivall Pott (1714–1788), an English surgeon, described cancer in chimney sweeps (chimney sweeps' carcinoma), the first recognition of an occupational cancer in history. === The Industrial Revolution in Britain === The United Kingdom was the first nation to industrialize. Soon shocking evidence emerged of serious physical and moral harm suffered by children and young persons in the cotton textile mills, as a result of the exploitation of cheap labor in the factory system. Responding to calls for remedial action from philanthropists and some of the more enlightened employers, in 1802 Sir Robert Peel, himself a mill owner, introduced a bill to Parliament with the aim of improving their conditions. This would engender the Health and Morals of Apprentices Act 1802, generally believed to be the first attempt to regulate conditions of work in the United Kingdom.
The act applied only to cotton textile mills and required employers to keep premises clean and healthy by twice-yearly washings with quicklime, to ensure there were sufficient windows to admit fresh air, and to supply "apprentices" (i.e., pauper and orphan employees) with "sufficient and suitable" clothing and accommodation for sleeping. It was the first of the 19th-century Factory Acts. Charles Thackrah (1795–1833), another pioneer of occupational medicine, wrote a report on The State of Children Employed in Cotton Factories, which was sent to Parliament in 1818. Thackrah recognized issues of inequalities of health in the workplace, with manufacturing in towns causing higher mortality than agriculture. The Factory Act 1833 created a dedicated professional Factory Inspectorate. The initial remit of the Inspectorate was to police restrictions on the working hours of children and young persons in the textile industry (introduced to prevent chronic overwork, identified as leading directly to ill-health and deformation, and indirectly to a high accident rate). In 1840 a royal commission published its findings on the working conditions in the mining industry, documenting the appallingly dangerous environment in which miners had to work and the high frequency of accidents. The commission sparked public outrage, which resulted in the Mines and Collieries Act 1842. The act set up an inspectorate for mines and collieries, which resulted in many prosecutions and safety improvements, and by 1850 inspectors were able to enter and inspect premises at their discretion. At the urging of the Factory Inspectorate, a further Factories Act in 1844, which placed similar restrictions on the working hours of women in the textile industry, introduced a requirement for machinery guarding (but only in the textile industry, and only in areas that might be accessed by women or children). The latter act was the first to take a significant step toward the improvement of workers' safety, as the earlier acts had focused on health aspects alone. The first decennial British Registrar-General's mortality report was issued in 1851. Deaths were categorized by social class, with class I corresponding to professionals and executives and class V representing unskilled workers. The report showed that mortality rates increased with the class number. === Continental Europe === Otto von Bismarck inaugurated the first social insurance legislation in 1883 and the first worker's compensation law in 1884 – the first of their kind in the Western world. Similar acts followed in other countries, partly in response to labor unrest. === United States === The United States was responsible for the first health program focusing on workplace conditions. This was the Marine Hospital Service, inaugurated in 1798 and providing care for merchant seamen. It was the beginning of what would become the US Public Health Service (USPHS). The first worker compensation acts in the United States were passed in New York in 1910 and in Washington and Wisconsin in 1911. Later rulings included occupational diseases in the scope of the compensation, which was initially restricted to accidents. In 1914 the USPHS set up the Office of Industrial Hygiene and Sanitation, the ancestor of the current National Institute for Occupational Safety and Health (NIOSH). In the early 20th century, workplace disasters were still common. For example, in 1911 a fire at the Triangle Shirtwaist Company in New York killed 146 workers, mostly women and immigrants.
Most died trying to open exits that had been locked. Radium dial painter cancers, "phossy jaw", mercury and lead poisonings, silicosis, and other pneumoconioses were extremely common. The enactment of the Federal Coal Mine Health and Safety Act of 1969 was quickly followed by the 1970 Occupational Safety and Health Act, which established the Occupational Safety and Health Administration (OSHA) and NIOSH in their current form. == Workplace hazards == A wide array of workplace hazards can damage the health and safety of people at work. These include, but are not limited to, "chemicals, biological agents, physical factors, adverse ergonomic conditions, allergens, a complex network of safety risks," as well as a broad range of psychosocial risk factors. Personal protective equipment can help protect against many of these hazards. A landmark study conducted by the World Health Organization and the International Labour Organization found that exposure to long working hours is the occupational risk factor with the largest attributable burden of disease, causing an estimated 745,000 fatalities from ischemic heart disease and stroke events in 2016. This makes overwork the leading occupational health risk factor globally. Physical hazards affect many people in the workplace. Occupational hearing loss is the most common work-related injury in the United States, with 22 million workers exposed to hazardous occupational noise levels at work and an estimated $242 million spent annually on worker's compensation for hearing loss disability. Falls are also a common cause of occupational injuries and fatalities, especially in construction, extraction, transportation, healthcare, and building cleaning and maintenance. Machines have moving parts, sharp edges, hot surfaces and other hazards with the potential to crush, burn, cut, shear, stab or otherwise strike or wound workers if used unsafely. Biological hazards (biohazards) include infectious microorganisms such as viruses and bacteria, as well as toxins produced by those organisms, such as anthrax toxin. Biohazards affect workers in many industries; influenza, for example, affects a broad population of workers. Outdoor workers, including farmers, landscapers, and construction workers, risk exposure to numerous biohazards, including animal bites and stings, urushiol from poisonous plants, and diseases transmitted through animals such as West Nile virus and Lyme disease. Health care workers, including veterinary health workers, risk exposure to blood-borne pathogens and various infectious diseases, especially those that are emerging. Dangerous chemicals can pose a chemical hazard in the workplace. There are many classifications of hazardous chemicals, including neurotoxins, immune agents, dermatologic agents, carcinogens, reproductive toxins, systemic toxins, asthmagens, pneumoconiotic agents, and sensitizers. Authorities such as regulatory agencies set occupational exposure limits to mitigate the risk of chemical hazards. International investigations are ongoing into the health effects of mixtures of chemicals, given that toxins can interact synergistically instead of merely additively. For example, there is some evidence that certain chemicals are harmful at low levels when mixed with one or more other chemicals. Such synergistic effects may be particularly important in causing cancer.
Additionally, some substances (such as heavy metals and organohalogens) can accumulate in the body over time, thereby enabling small incremental daily exposures to eventually add up to dangerous levels with little overt warning. Psychosocial hazards include risks to the mental and emotional well-being of workers, such as feelings of job insecurity, long work hours, and poor work-life balance. Previous research has documented the presence of psychological abuse in the workplace. A study by Gary Namie on workplace emotional abuse found that 31% of women and 21% of men who reported workplace emotional abuse exhibited three key symptoms of post-traumatic stress disorder (hypervigilance, intrusive imagery, and avoidance behaviors). Sexual harassment is another serious hazard that can be found in workplaces. == By industry == Specific occupational safety and health risk factors vary depending on the specific sector and industry. Construction workers might be particularly at risk of falls, for instance, whereas fishermen might be particularly at risk of drowning. Similarly, psychosocial risks such as workplace violence are more pronounced for certain occupational groups, such as health care employees, police, correctional officers and teachers. === Primary sector === ==== Agriculture ==== Agriculture workers are often at risk of work-related injuries, lung disease, noise-induced hearing loss, and skin disease, as well as certain cancers related to chemical use or prolonged sun exposure. On industrialized farms, injuries frequently involve the use of agricultural machinery. The most common cause of fatal agricultural injuries in the United States is tractor rollovers, which can be prevented by the use of rollover protection structures, which limit the risk of injury in case a tractor rolls over. Pesticides and other chemicals used in farming can also be hazardous to worker health, and workers exposed to pesticides may experience illnesses or birth defects. As an industry in which family members, including children, commonly work alongside one another, agriculture is a common source of occupational injuries and illnesses among younger workers. Common causes of fatal injuries among young farm workers include drowning, machinery and motor vehicle-related accidents. The 2010 National Health Interview Survey Occupational Health Supplement (NHIS-OHS) found elevated prevalence rates of several occupational exposures in the agriculture, forestry, and fishing sector which may negatively impact health. These workers often worked long hours. The prevalence rate of working more than 48 hours a week among workers employed in these industries was 37%, and 24% worked more than 60 hours a week. Of all workers in these industries, 85% frequently worked outdoors, compared to 25% of all US workers. Additionally, 53% were frequently exposed to vapors, gas, dust, or fumes, compared to 25% of all US workers. ==== Mining and oil and gas extraction ==== The mining industry still has one of the highest rates of fatalities of any industry. There are a range of hazards present in surface and underground mining operations. In surface mining, leading hazards include geological instability, contact with plant and equipment, rock blasting, thermal environments (heat and cold), respiratory hazards (such as black lung), etc.
In underground mining, operational hazards include respiratory hazards, explosions and gas (particularly in coal mine operations), geological instability, electrical equipment, contact with plant and equipment, heat stress, inrush of bodies of water, falls from height, confined spaces, ionising radiation, etc. According to data from the 2010 NHIS-OHS, workers employed in the mining and oil and gas extraction industries had high prevalence rates of exposure to potentially harmful work organization characteristics and hazardous chemicals. Many of these workers worked long hours: 50% worked more than 48 hours a week and 25% worked more than 60 hours a week in 2010. Additionally, 42% worked non-standard shifts (not a regular day shift). These workers also had a high prevalence of exposure to physical/chemical hazards. In 2010, 39% had frequent skin contact with chemicals. Among nonsmoking workers, 28% of those in the mining and oil and gas extraction industries had frequent exposure to secondhand smoke at work. About two-thirds were frequently exposed to vapors, gas, dust, or fumes at work. === Secondary sector === ==== Construction ==== Construction is one of the most dangerous occupations in the world, incurring more occupational fatalities than any other sector in both the United States and the European Union. In 2009, the fatal occupational injury rate among construction workers in the United States was nearly three times that for all workers. Falls are one of the most common causes of fatal and non-fatal injuries among construction workers. Proper safety equipment, such as harnesses and guardrails, and procedures such as securing ladders and inspecting scaffolding can curtail the risk of occupational injuries in the construction industry. Because accidents may have disastrous consequences for employees as well as organizations, it is of utmost importance to ensure the health and safety of workers and compliance with HSE construction requirements. Health and safety legislation in the construction industry involves many rules and regulations. For example, the required role of the Construction (Design and Management) (CDM) Coordinator has been aimed at improving health and safety on-site. The 2010 NHIS-OHS identified work organization factors and occupational psychosocial and chemical/physical exposures which may increase some health risks. Among all US workers in the construction sector, 44% had non-standard work arrangements (were not regular permanent employees) compared to 19% of all US workers, 15% had temporary employment compared to 7% of all US workers, and 55% experienced job insecurity compared to 32% of all US workers. Prevalence rates for exposure to physical/chemical hazards were especially high for the construction sector. Among nonsmoking workers, 24% of construction workers were exposed to secondhand smoke, while only 10% of all US workers were so exposed. Other physical/chemical hazards with high prevalence rates in the construction industry were frequently working outdoors (73%) and frequent exposure to vapors, gas, dust, or fumes (51%). === Tertiary sector === The service sector comprises diverse workplaces. Each type of workplace has its own health risks. While some occupations have become mobile, others still require desk work.
As the number of service sector jobs has risen in developed countries, many jobs have turned sedentary, presenting an array of health problems that differ from the previous health concerns associated with manufacturing and the primary sector. Contemporary health problems include obesity. Some working conditions, such as occupational stress, workplace bullying, and overwork, have negative consequences for physical and mental health. Tipped wage workers are at a higher risk of negative mental health outcomes such as addiction or depression. The higher rates of mental health issues may be attributed to the precarious nature of their employment, characterized by low and unpredictable incomes, inadequate access to benefits, wage exploitation, and minimal control over work schedules and assigned shifts. Close to 70% of tipped wage workers are women. Additionally, "almost 40 percent of people who work for tips are people of color: 18 percent are Latino, 10 percent are African American, and 9 percent are Asian. Immigrants are also overrepresented in the tipped workforce." According to data from the 2010 NHIS-OHS, hazardous physical and chemical exposures in the service sector were lower than national averages. However, harmful organizational practices and psychosocial risks were fairly prevalent in this sector. Among all workers in the service industry, 30% experienced job insecurity in 2010, 27% worked non-standard shifts (not a regular day shift), and 21% had non-standard work arrangements (were not regular permanent employees). In addition to these organizational risks, some industries pose significant physical dangers due to the manual labor involved. For instance, on a per-employee basis, the US Postal Service, UPS and FedEx are the 4th, 5th and 7th most dangerous companies to work for in the United States, respectively. ==== Healthcare and social assistance ==== In general, healthcare workers are exposed to many hazards that can adversely affect their health and well-being. Long hours, changing shifts, physically demanding tasks, violence, and exposure to infectious diseases and harmful chemicals are examples of hazards that put these workers at risk for illness and injury. Musculoskeletal injury (MSI) is the most common health hazard for healthcare workers and in workplaces overall. Such injuries can often be prevented by using proper body mechanics. According to the Bureau of Labor Statistics, US hospitals recorded 253,700 work-related injuries and illnesses in 2011, which is 6.8 work-related injuries and illnesses for every 100 full-time employees. The injury and illness rate in hospitals is higher than the rates in construction and manufacturing – two industries that are traditionally thought to be relatively hazardous. == Workplace fatality and injury statistics == === Worldwide === An estimated 2.90 million work-related deaths occurred in 2019, up from 2.78 million deaths in 2015. About one-third of the total work-related deaths (31%) were due to circulatory diseases, while cancer contributed 29%, respiratory diseases 17%, and occupational injuries 11% (or about 319,000 fatalities). Other diseases, such as work-related communicable diseases, contributed 6%, while neuropsychiatric conditions contributed 3% and work-related digestive and genitourinary diseases contributed 1% each. The contribution of cancers and circulatory diseases to total work-related deaths increased from 2015, while deaths due to occupational injuries decreased.
Although the rates of work-related injury deaths and non-fatal injuries were on a decreasing trend, the total numbers of deaths and non-fatal outcomes were on the rise. Cancers represented the most significant cause of mortality in high-income countries. The number of non-fatal occupational injuries for 2019 was estimated at 402 million. Mortality is unevenly distributed, with the male mortality rate (108.3 per 100,000 employed male individuals) being significantly higher than the female rate (48.4 per 100,000). Occupational fatalities account for 6.7% of all deaths globally. === European Union === Certain EU member states admit to lacking quality control in occupational safety services, to situations in which risk analysis takes place without any on-site workplace visits, and to insufficient implementation of certain EU OSH directives. Disparities between member states result in differing impacts of occupational hazards on their economies. In the early 2000s, the total societal costs of work-related health problems and accidents varied from 2.6% to 3.8% of the national GDPs across the member states. In 2021, in the EU-27 as a whole, 93% of deaths due to injury were of males. === Russia === One of the decisions taken by the communist regime under Stalin was to reduce the reported number of accidents and occupational diseases to zero. This tendency toward decline persisted in the Russian Federation in the early 21st century. However, as in previous years, data reporting and publication were incomplete and manipulated, so the actual numbers of work-related diseases and accidents are unknown. The ILO reports that, according to the information provided by the Russian government, there are 190,000 work-related fatalities each year, of which 15,000 are due to occupational accidents. After the demise of the USSR, enterprises became owned by oligarchs who were not interested in upholding safe and healthy conditions in the workplace. Expenditure on equipment modernization was minimal, and the share of harmful workplaces increased. The government did not interfere in this, and sometimes it helped employers. At first, the increase in occupational diseases and accidents was slow, because in the 1990s it was masked by mass deindustrialization. However, in the 2000s deindustrialization slowed, and occupational diseases and injuries started to rise in earnest. Therefore, in the 2010s the Ministry of Labor adopted federal law no. 426-FZ. This piece of legislation has been described as ineffective and based on the superficial assumption that the issuance of personal protective equipment to the employee means a real improvement of working conditions. Meanwhile, the Ministry of Health made significant changes in the methods of risk assessment in the workplace. However, specialists from the Izmerov Research Institute of Occupational Health found that the apparent post-2014 decrease in the share of employees engaged in hazardous working conditions is due to the change in definitions consequent to the Ministry of Health's decision and does not reflect actual improvements. This was most clearly shown in the results for the aluminum industry. Further problems in the accounting of workplace fatalities arise from the fact that multiple Russian federal entities collect and publish records, a practice that should be avoided. In 2008 alone, 2,074 accidents at work may not have been reported in official government sources.
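All of the per-100,000 figures in this section follow the same definition: deaths divided by the employed population, scaled to 100,000. The following Python snippet is a minimal sketch (the back-calculated workforce size is an inference, not a figure from the text), using the UK numbers discussed in the next subsection.

def fatality_rate_per_100k(deaths, employed):
    """Annual workplace fatality rate per 100,000 employed people."""
    return deaths / employed * 100_000

# The UK reported 135 fatal injuries in 2022-2023 at a rate of about 0.41
# per 100,000 workers, which implies a workforce of roughly 33 million:
implied_workforce = 135 / 0.41 * 100_000            # about 32.9 million
print(round(implied_workforce / 1e6, 1))            # 32.9
print(round(fatality_rate_per_100k(135, implied_workforce), 2))  # 0.41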
=== United Kingdom === In the UK there were 135 fatal injuries at work in financial year 2022–2023, compared with 651 in 1974 (the year when the Health and Safety at Work Act was promulgated). The fatal injury rate declined from 2.1 fatalities per 100,000 workers in 1981 to 0.41 in financial year 2022–2023. Over recent decades, reductions in both fatal and non-fatal workplace injuries have been very significant. However, illness statistics have not uniformly improved: while musculoskeletal disorders have diminished, the rate of self-reported work-related stress, depression or anxiety has increased, and the rate of mesothelioma deaths has remained broadly flat (due to past asbestos exposures). === United States === The Occupational Safety and Health Statistics (OSHS) program in the Bureau of Labor Statistics of the United States Department of Labor compiles information about workplace fatalities and non-fatal injuries in the United States. The OSHS program produces three annual reports: counts and rates of nonfatal occupational injuries and illnesses by detailed industry and case type (SOII summary data); case circumstances and worker demographic data for nonfatal occupational injuries and illnesses resulting in days away from work (SOII case and demographic data); and counts and rates of fatal occupational injuries (CFOI data). The Bureau also uses tools like AgInjuryNews.org to identify and compile additional sources of fatality reports for its datasets. Between 1913 and 2013, workplace fatalities dropped by approximately 80%. In 1970, an estimated 14,000 workers were killed on the job. By 2021, in spite of the workforce having since more than doubled, workplace deaths were down to about 5,190. According to the Census of Fatal Occupational Injuries, 5,486 people died on the job in 2022, up from the 2021 total of 5,190. The fatal injury rate was 3.7 per 100,000 full-time equivalent workers. The decrease in the mortality rate is only partly (about 10–15%) explained by the deindustrialization of the US over the last 40 years. About 3.5 million nonfatal workplace injuries and illnesses were reported by private industry employers in 2022, occurring at a rate of 3.0 cases per 100 full-time workers. == Management systems == Companies may adopt a safety and health management system (SMS), either voluntarily or because required by applicable regulations, to deal in a structured and systematic way with the safety and health risks in their workplace. An SMS provides a systematic way to assess and improve the prevention of workplace accidents and incidents, based on the structured management of workplace risks and hazards. It must be adaptable to changes in the organization's business and legislative requirements. It is usually based on the Deming cycle, or plan-do-check-act (PDCA) principle. An effective SMS should: define how the organization is set up to manage risk; identify workplace hazards and implement suitable controls; implement effective communication across all levels of the organization; implement a process to identify and correct non-conformity and non-compliance issues; and implement a continual improvement process. Management standards across a range of business functions, such as environment, quality and safety, are now being designed so that these traditionally disparate elements can be integrated and managed within a single business management system rather than as separate and stand-alone functions.
Therefore, some organizations dovetail other management system functions, such as process safety, environmental resource management or quality management, together with safety management to meet regulatory requirements, industry sector requirements and their own internal and discretionary standard requirements. === Standards === ==== International ==== The ILO published ILO-OSH 2001, Guidelines on Occupational Safety and Health Management Systems, to assist organizations with introducing OSH management systems. These guidelines encouraged continual improvement in employee health and safety, achieved via a constant process of policy, organization, planning and implementation, evaluation, and action for improvement, all supported by constant auditing to determine the success of OSH actions. From 1999 to 2018, OHSAS 18001 was adopted and widely used internationally. It was developed by a selection of national standards bodies, academic bodies, accreditation bodies, certification bodies and occupational health and safety institutions to address a gap where no third-party-certifiable international standard existed. It was designed for integration with ISO 9001 and ISO 14001. OHSAS 18001 was replaced by ISO 45001, which was published in March 2018 and implemented in March 2021. ==== National ==== National management system standards for occupational health and safety include AS/NZS 4801 for Australia and New Zealand (now superseded by ISO 45001), CSA Z1000:14 for Canada (which is due to be discontinued in favor of CSA Z45001:19, the Canadian adoption of ISO 45001) and ANSI/ASSP Z10 for the United States. In Germany, the Bavarian state government, in collaboration with trade associations and private companies, issued its OHRIS standard for occupational health and safety management systems; a new revision was issued in 2018. The Taiwan Occupational Safety and Health Management System (TOSHMS) was issued in 1997 under the auspices of Taiwan's Occupational Safety and Health Administration. == Identifying OSH hazards and assessing risk == === Hazards, risks, outcomes === The terminology used in OSH varies between countries, but generally speaking: a hazard is something that can cause harm if not controlled; the outcome is the harm that results from an uncontrolled hazard; and a risk is a combination of the probability that a particular outcome will occur and the severity of the harm involved. "Hazard", "risk", and "outcome" are used in other fields to describe, e.g., environmental damage or damage to equipment. However, in the context of OSH, "harm" generally describes the direct or indirect degradation, temporary or permanent, of the physical, mental, or social well-being of workers. For example, repetitively carrying out manual handling of heavy objects is a hazard. The outcome could be a musculoskeletal disorder (MSD) or an acute back or joint injury. The risk can be expressed numerically (e.g., a 0.5 or 50/50 chance of the outcome occurring during a year), in relative terms (e.g., "high/medium/low"), or with a multi-dimensional classification scheme (e.g., situation-specific risks). === Hazard identification === Hazard identification is an important step in the overall risk assessment and risk management process. It is where individual work hazards are identified, assessed and controlled or eliminated as close to the source (location of the hazard) as reasonably practicable.
As technology, resources, social expectations or regulatory requirements change, hazard analysis focuses controls more closely toward the source of the hazard. Thus, hazard control is a dynamic program of prevention. Hazard-based programs also have the advantage of not assigning or implying there are "acceptable risks" in the workplace. A hazard-based program may not be able to eliminate all risks, but neither does it accept "satisfactory" – but still risky – outcomes. And since those who calculate and manage the risk are usually managers, while those exposed to the risks are a different group, a hazard-based approach can bypass the conflict inherent in a risk-based approach. The information that needs to be gathered from sources should apply to the specific type of work from which the hazards can arise. Examples of these sources include interviews with people who have worked in the field of the hazard, the history and analysis of past incidents, and official reports of work and the hazards encountered. Of these, the personnel interviews may be the most critical in identifying undocumented practices, events, releases, hazards and other relevant information. Once the information is gathered from a collection of sources, it is recommended that it be digitally archived (to allow for quick searching) and that a physical copy of the same information be kept so that it is more accessible. One innovative way to display complex historical hazard information is with a historical hazards identification map, which distills the hazard information into an easy-to-use graphical format. === Risk assessment === Modern occupational safety and health legislation usually demands that a risk assessment be carried out prior to making an intervention. This assessment should: identify the hazards; identify all who may be affected by the hazard and how; evaluate the risk; and identify and prioritize appropriate control measures. The calculation of risk is based on the likelihood or probability of the harm being realized and the severity of the consequences. This can be expressed mathematically as a quantitative assessment (by assigning low, medium and high likelihood and severity with integers and multiplying them to obtain a risk factor; a minimal scoring sketch is given below), or qualitatively as a description of the circumstances by which the harm could arise. The assessment should be recorded and reviewed periodically and whenever there is a significant change to work practices. The assessment should include practical recommendations to control the risk. Once the recommended controls are implemented, the risk should be re-calculated to determine whether it has been lowered to an acceptable level. Generally speaking, newly introduced controls should lower risk by one level, i.e., from high to medium or from medium to low. == National legislation and public organizations == Occupational safety and health practice varies among nations, with different approaches to legislation, regulation, enforcement, and incentives for compliance. In the EU, for example, some member states promote OSH by providing public monies as subsidies, grants or financing, while others have created tax system incentives for OSH investments. A third group of EU member states has experimented with using workplace accident insurance premium discounts for companies or organizations with strong OSH records.
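As a concrete illustration of the multiplicative scoring scheme described under Risk assessment above, the following Python snippet is a minimal sketch; the numeric levels and band boundaries are illustrative assumptions, not values taken from the text.

# Likelihood and severity are each rated low (1), medium (2) or high (3)
# and multiplied to obtain a risk factor (assumed 1-3 integer scale).
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_factor(likelihood, severity):
    return LEVELS[likelihood] * LEVELS[severity]

def band(factor):
    """Map a risk factor back to a relative band (illustrative cut-offs)."""
    if factor >= 6:
        return "high"
    if factor >= 3:
        return "medium"
    return "low"

# Manual handling example from above: high likelihood, medium severity
# before controls; a newly introduced control should lower risk one level.
before = risk_factor("high", "medium")   # 6
after = risk_factor("medium", "medium")  # 4
print(band(before), "->", band(after))   # high -> medium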
=== Australia === In Australia, four of the six states and both territories have enacted and administer harmonized work health and safety legislation in accordance with the Intergovernmental Agreement for Regulatory and Operational Reform in Occupational Health and Safety. Each of these jurisdictions has enacted work health and safety legislation and regulations based on the Commonwealth Work Health and Safety Act 2011 and common codes of practice developed by Safe Work Australia. Some jurisdictions have also included mine safety under the model approach; however, most have retained separate legislation for the time being. In August 2019, Western Australia committed to joining nearly every other state and territory in implementing the harmonized Model WHS Act, Regulations and other subsidiary legislation. Victoria has retained its own regime, although the Model WHS laws themselves drew heavily on the Victorian approach. === Canada === In Canada, workers are covered by provincial or federal labor codes depending on the sector in which they work. Workers covered by federal legislation (including those in mining, transportation, and federal employment) are covered by the Canada Labour Code; all other workers are covered by the health and safety legislation of the province in which they work. The Canadian Centre for Occupational Health and Safety (CCOHS), an agency of the Government of Canada, was created in 1978 by an act of parliament. CCOHS is mandated to promote safe and healthy workplaces and help prevent work-related injuries and illnesses. There are significant common elements across the relevant provincial OHS legislation. The foundation of each of these legislative frameworks is the belief that all Canadians have "a fundamental right to a healthy and safe working environment." In general, provincial workplace safety laws in Canada are designed to promote shared responsibility, prevent accidents, and ensure accountability at all levels of an organization. Employers, supervisors, and workers are expected to work together to minimize risks. Employers, in particular, are legally obligated to take every reasonable precaution to protect workers. If the workplace has more than a few employees, employers are required to develop written health and safety policies and procedures. Employers must also provide and maintain equipment and machinery in a safe working condition. Additionally, employers must inform, instruct, and supervise workers to ensure safe work practices are followed. Employers are also responsible for supplying necessary protective equipment and ensuring it is used correctly, whether it involves machine guards or personal protective equipment (PPE). Supervisors have a duty to ensure that workers use all required safety devices and comply with established procedures. They must also communicate information about existing or potential hazards and provide guidance on how to work safely. Workers also have the right to refuse work if they believe it is unsafe and poses a danger to themselves or others. In workplaces with a set minimum number of employees (twenty in the case of workplaces under federal jurisdiction), it is mandatory to have a health and safety committee. This committee, made up of both worker and management representatives, meets regularly to identify hazards, investigate incidents, and make recommendations to improve workplace safety. These committees are crucial for fostering collaboration and addressing safety concerns in a timely manner.
The law also requires employers to take defined steps to prevent workplace violence and harassment. They must create a workplace violence policy along with a program that identifies risks and outlines procedures for addressing them. A separate workplace harassment policy must explain how complaints should be reported and investigated. Employers are required to train employees on these policies to ensure awareness and compliance. All incidents involving violence, threats, or persistent harassment must be taken seriously and handled appropriately. In severe cases involving serious injury or death due to negligence, organizations and individuals can be prosecuted under the Criminal Code of Canada through the provisions introduced by Bill C-45. In some provinces, like Ontario, this has introduced serious criminal consequences for safety violations. Workplaces are also subject to federal regulations under WHMIS, the Workplace Hazardous Materials Information System. WHMIS governs the labeling, documentation, and communication of hazardous materials. Employers must ensure that all hazardous substances are properly labeled, that material safety data sheets are readily available, and that workers are trained on how to handle these materials safely. As an example of arrangements at the provincial level, Ontario's primary workplace safety legislation is the Occupational Health and Safety Act (OHSA). This law sets out the responsibilities of employers, supervisors, and workers to promote a safe and healthy work environment. Ontario's occupational health and safety framework is built around the concept known as the "Internal Responsibility System", which means that everyone in the workplace shares responsibility for recognizing and addressing safety concerns. The OHSA is enforced by Ontario's Ministry of Labour, Immigration, Training and Skills Development. Ministry inspectors have the authority to visit workplaces, investigate complaints, and issue orders. Failure to comply with the law can lead to substantial fines and penalties, and individual supervisors or managers may also be held personally liable. === China === In China, the Ministry of Health is responsible for occupational disease prevention, and the State Administration of Work Safety for workplace safety issues. The Work Safety Law (安全生产法) was issued on 1 November 2002. The Occupational Disease Control Act came into force on 1 May 2002. In 2018, the National Health Commission (NHC) was formally established, with responsibility for formulating national health policies. The NHC formulated the "National Occupational Disease Prevention and Control Plan (2021–2025)" in the context of the activities leading to the "Healthy China 2030" initiative. === European Union === The European Agency for Safety and Health at Work was founded in 1994. In the European Union, member states have enforcing authorities to ensure that the basic legal requirements relating to occupational health and safety are met. In many EU countries, there is strong cooperation between employer and worker organizations (e.g., unions) to ensure good OSH performance, as it is recognized that this has benefits for both the worker (through maintenance of health) and the enterprise (through improved productivity and quality). Member states have all transposed into their national legislation a series of directives that establish minimum standards on occupational health and safety.
These directives (of which there are about 20 on a variety of topics) follow a similar structure, requiring the employer to assess workplace risks and put in place preventive measures based on a hierarchy of hazard control. This hierarchy starts with elimination of the hazard and ends with personal protective equipment. ==== Denmark ==== In Denmark, occupational safety and health is regulated by the Danish Act on Working Environment and Cooperation at the Workplace. The Danish Working Environment Authority (Arbejdstilsynet) carries out inspections of companies, draws up more detailed rules on health and safety at work and provides information on health and safety at work. The result of each inspection is made public on the web pages of the Danish Working Environment Authority, so that the general public, current and prospective employees, customers and other stakeholders can inform themselves about whether a given organization has passed the inspection. ==== Netherlands ==== In the Netherlands, the laws for safety and health at work are laid down in the Working Conditions Act (Arbeidsomstandighedenwet). Apart from the laws directed at safety and health in working environments, the private domain has added health and safety rules in Working Conditions Policies (Arbeidsomstandighedenbeleid), which are specified per industry. The Ministry of Social Affairs and Employment (SZW) monitors adherence to the rules through its inspection service. This inspection service investigates industrial accidents, and it can suspend work and impose fines when it deems the Working Conditions Act has been violated. Companies can be certified with a VCA certificate for safety, health and environmental performance. All employees have to obtain a VCA certificate too, with which they can prove that they know how to work according to the current and applicable safety and environmental regulations. ==== Ireland ==== The main health and safety regulation in Ireland is the Safety, Health and Welfare at Work Act 2005, which replaced earlier legislation from 1989. The Health and Safety Authority, based in Dublin, is responsible for enforcing health and safety at work legislation. ==== Spain ==== In Spain, occupational safety and health is regulated by the Spanish Act on Prevention of Labor Risks. The Ministry of Labor is the authority responsible for issues relating to the labor environment. The National Institute for Safety and Health at Work (Instituto Nacional de Seguridad y Salud en el Trabajo, INSST) is the government's scientific and technical organization specialized in occupational safety and health. ==== Sweden ==== In Sweden, occupational safety and health is regulated by the Work Environment Act. The Swedish Work Environment Authority (Arbetsmiljöverket) is the government agency responsible for issues relating to the working environment. The agency works to disseminate information and furnish advice on OSH, has a mandate to carry out inspections, and has a right to issue stipulations and injunctions to any non-compliant employer. === India === In India, the Ministry of Labour and Employment formulates national policies on occupational safety and health in factories and docks, with advice and assistance from its Directorate General Factory Advice Service and Labour Institutes (DGFASLI), and enforces its policies through inspectorates of factories and inspectorates of dock safety.
The DGFASLI provides technical support in formulating rules, conducting occupational safety surveys and administering occupational safety training programs. === Indonesia === In Indonesia, the Ministry of Manpower (Kementerian Ketenagakerjaan, or Kemnaker) is responsible for ensuring the safety, health and welfare of workers. Important OHS acts include the Occupational Safety Act 1970 and the Occupational Health Act 1992. Sanctions, however, are still low (with a maximum fine of 15 million rupiah and/or a maximum of one year in prison), and violations are still very frequent. === Japan === The Japanese Ministry of Health, Labor and Welfare (MHLW) is the governmental agency overseeing occupational safety and health in Japan. The MHLW is responsible for enforcing the Industrial Safety and Health Act of 1972 (the key piece of OSH legislation in Japan), setting regulations and guidelines, supervising labor inspectors who monitor workplaces for compliance with safety and health standards, investigating accidents, and issuing orders to improve safety conditions. The Labor Standards Bureau is an arm of the MHLW tasked with supervising and guiding businesses, inspecting manufacturing facilities for safety and compliance, investigating accidents, collecting statistics, enforcing regulations and administering fines for safety violations, and paying accident compensation for injured workers. The Japan Industrial Safety and Health Association (JISHA) is a non-profit organization established under the Industrial Safety and Health Act of 1972. It works closely with the MHLW, the regulatory body, to promote workplace safety and health. The responsibilities of JISHA include providing education and training on occupational safety and health, conducting research and surveys on workplace safety and health issues, offering technical guidance and consultations to businesses, disseminating information and raising awareness about occupational safety and health, and collaborating with international organizations to share best practices and improve global workplace safety standards. The Japan National Institute of Occupational Safety and Health (JNIOSH) conducts research to support governmental policies in occupational safety and health. The organization categorizes its research into project studies, cooperative research, fundamental research, and government-requested research. Each category focuses on specific themes, from preventing accidents and ensuring workers' health to addressing changes in employment structure. The organization sets clear goals, develops road maps, and collaborates with the Ministry of Health, Labor and Welfare to discuss progress and policy contributions. === Malaysia === In Malaysia, the Department of Occupational Safety and Health (DOSH) under the Ministry of Human Resources is responsible for ensuring that the safety, health and welfare of workers in both the public and private sectors are upheld. DOSH is responsible for enforcing the Factories and Machinery Act 1967 and the Occupational Safety and Health Act 1994. Malaysia has a statutory mechanism for worker involvement through elected health and safety representatives and health and safety committees. This followed a similar approach originally adopted in Scandinavia. === Saudi Arabia === In Saudi Arabia, the Ministry of Human Resources and Social Development administers workers' rights and the labor market as a whole, consistent with human rights rules upheld by the Human Rights Commission of the kingdom.
=== Singapore === In Singapore, the Ministry of Manpower (MOM) is the government agency in charge of OHS policies and enforcement. The key piece of legislation regulating aspects of OHS is the Workplace Safety and Health Act. The MOM promotes and manages campaigns against unsafe work practices, such as unsafe work at height, crane operation and traffic management. Examples include Operation Cormorant and the Falls Prevention Campaign. === South Africa === In South Africa, the Department of Employment and Labour is responsible for occupational health and safety inspection and enforcement in the commercial and industrial sectors, with the exclusion of mining, where the Department of Mineral Resources is responsible. The main statutory legislation on health and safety in the jurisdiction of the Department of Employment and Labour is the OHS Act or OHSA (Act No. 85 of 1993: Occupational Health and Safety Act, as amended by the Occupational Health and Safety Amendment Act, No. 181 of 1993). Regulations implementing the OHS Act include: the General Safety Regulations, 1986; Environmental Regulations for Workplaces, 1987; Driven Machinery Regulations, 1988; General Machinery Regulations, 1988; Noise Induced Hearing Loss Regulations, 2003; Pressure Equipment Regulations, 2004; General Administrative Regulations, 2003; Diving Regulations, 2009; and Construction Regulations, 2014. === Syria === In Syria, health and safety is the responsibility of the Ministry of Social Affairs and Labor (Arabic: وزارة الشؤون الاجتماعية والعمل, romanized: Wizārat al-Shuʼūn al-ijtimāʻīyah wa-al-ʻamal). === Taiwan === In Taiwan, the Occupational Safety and Health Administration of the Ministry of Labor is in charge of occupational safety and health. The matter is governed under the Occupational Safety and Health Act. === United Arab Emirates === In the United Arab Emirates, national OSH legislation is based on the Federal Law on Labor (1980). Order No. 32 of 1982 on Protection from Hazards and Ministerial Decision No. 37/2 of 1982 are also of importance. The competent authority for safety and health at work at the federal level is the Ministry of Human Resources and Emiratisation (MoHRE). === United Kingdom === Health and safety legislation in the UK is drawn up and enforced by the Health and Safety Executive and local authorities under the Health and Safety at Work etc. Act 1974 (HASAWA or HSWA). Section 2 of HASAWA introduced a general duty on employers to ensure, so far as is reasonably practicable, the health, safety and welfare at work of all their employees. The intention was to provide a legal framework supported by codes of practice which, while not having legal force themselves, establish a strong presumption as to what is reasonably practicable (deviations from them can be justified by appropriate risk assessment). The previous reliance on detailed prescriptive rule-setting was seen as having failed to respond rapidly enough to technological change, leaving new technologies potentially unregulated or inappropriately regulated. HSE has continued to make some regulations giving absolute duties (where something must be done with no "reasonable practicability" test), but in the UK the regulatory trend is away from prescriptive rules and toward goal setting and risk assessment. Recent major changes to the laws governing asbestos and fire safety management embrace the concept of risk assessment.
The other key aspect of the UK legislation is a statutory mechanism for worker involvement through elected health and safety representatives and health and safety committees. This followed a similar approach in Scandinavia, and that approach has since been adopted in countries such as Australia, Canada, New Zealand and Malaysia. The Health and Safety Executive service dealing with occupational medicine has been the Employment Medical Advisory Service. In 2014 a new occupational health organization, the Health and Work Service, was created to provide advice and assistance to employers in order to help employees on long-term sick leave return to work. The service, funded by the government, offers medical assessments and treatment plans, on a voluntary basis, to people on long-term absence from their employer; in return, the government no longer covers the cost of statutory sick pay provided by the employer to the individual. === United States === In the United States, President Richard Nixon signed the Occupational Safety and Health Act into law on 29 December 1970. The act created the three agencies which administer OSH: the Occupational Safety and Health Administration (OSHA), the National Institute for Occupational Safety and Health (NIOSH), and the Occupational Safety and Health Review Commission (OSHRC). The act authorized OSHA to regulate private employers in the 50 states, the District of Columbia, and territories. It includes a general duty clause (29 U.S.C. §654, 5(a)) requiring an employer to comply with the Act and regulations derived from it, and to provide employees with "employment and a place of employment which are free from recognized hazards that are causing or are likely to cause [them] death or serious physical harm." OSHA was established in 1971 under the Department of Labor. It has headquarters in Washington, DC, and ten regional offices, further broken down into districts, each organized into three sections: compliance, training, and assistance. Its stated mission is "to ensure safe and healthful working conditions for workers by setting and enforcing standards and by providing training, outreach, education and assistance." The original plan was for OSHA to oversee 50 state plans, with OSHA funding 50% of each plan, but this did not come to pass: as of 2023 there are 26 approved state plans (four covering only public employees), and OSHA manages the plan in the states not participating. OSHA develops safety standards in the Code of Federal Regulations and enforces those safety standards through compliance inspections conducted by Compliance Officers; enforcement resources are focused on high-hazard industries. Worksites may apply to enter OSHA's Voluntary Protection Program (VPP). A successful application leads to an on-site inspection; if this is passed, the site gains VPP status, and OSHA no longer inspects it annually nor (normally) visits it until VPP revalidation (after three to five years), unless there is a fatal accident or an employee complaint. VPP sites generally have injury and illness rates less than half the average for their industry. OSHA has a number of specialists in local offices to provide information and training to employers and employees at little or no cost. Similarly, OSHA produces a range of publications and funds consultation services available for small businesses. OSHA has strategic partnership and alliance programs to develop guidelines, assist in compliance, share resources, and educate workers in OHS. OSHA manages Susan B.
Harwood grants to non-profit organizations to train workers and employers to recognize, avoid, and prevent safety and health hazards in the workplace. Grants focus on small business, hard-to-reach workers and high-hazard industries. The National Institute for Occupational Safety and Health (NIOSH), also created under the Occupational Safety and Health Act, is the federal agency responsible for conducting research and making recommendations for the prevention of work-related injury and illness. NIOSH is part of the Centers for Disease Control and Prevention (CDC) within the Department of Health and Human Services. == Professional roles and responsibilities == Those in the field of occupational safety and health come from a wide range of disciplines and professions, including medicine, occupational medicine, epidemiology, physiotherapy and rehabilitation, psychology, human factors and ergonomics, and many others. Professionals advise on a broad range of occupational safety and health matters. These include how to avoid particular pre-existing conditions causing a problem in the occupation, correct posture, frequency of rest breaks, preventive actions that can be undertaken, and so forth. The quality of occupational safety is characterized by (1) the indicators reflecting the level of industrial injuries, (2) the average number of days of incapacity for work per employee, (3) employees' satisfaction with their work conditions and (4) employees' motivation to work safely. The main tasks undertaken by the OSH practitioner include: inspecting, testing and evaluating workplace environments, programs, equipment, and practices to ensure that they follow government safety regulations; designing and implementing workplace programs and procedures that control or prevent chemical, physical, or other risks to workers; educating employers and workers about maintaining workplace safety; demonstrating use of safety equipment and ensuring proper use by workers; investigating incidents to determine the cause and possible prevention; and preparing written reports of their findings. OSH specialists examine worksites for environmental or physical factors that could harm employee health, safety, comfort or performance. They then find ways to mitigate potential risk factors. For example, they may notice potentially hazardous conditions inside a chemical plant and suggest changes to lighting, equipment, materials, or ventilation. OSH technicians assist specialists by collecting data on work environments and implementing the worksite improvements that specialists plan. Technicians also may check to make sure that workers are using required protective gear, such as masks and hardhats. OSH specialists and technicians may develop and conduct employee training programs. These programs cover a range of topics, such as how to use safety equipment correctly and how to respond in an emergency. In the event of a workplace safety incident, specialists and technicians investigate its cause. They then analyze data from the incident, such as the number of people impacted, and look for trends in occurrence. This evaluation helps them to recommend improvements to prevent future incidents. Given the high demand in society for health and safety provisions at work based on reliable information, OSH professionals should find their roots in evidence-based practice. A newer term is "evidence-informed decision making".
Evidence-based practice can be defined as the use of evidence from literature, and other evidence-based sources, for advice and decisions that favor the health, safety, well-being, and work ability of workers. Evidence-based information must therefore be integrated with professional expertise and the workers' values. Contextual factors related to legislation, culture, and financial and technical possibilities must be considered, and ethical considerations should be heeded. The roles and responsibilities of OSH professionals vary regionally but may include evaluating working environments; developing, endorsing and encouraging measures that might prevent injuries and illnesses; providing OSH information to employers, employees, and the public; providing medical examinations; and assessing the success of worker health programs. === The Netherlands === In the Netherlands, the required tasks for health and safety staff are only summarily defined and include providing voluntary medical examinations, providing a consulting room on the work environment to the workers, and providing health assessments (if needed for the job concerned). Dutch law influences the job of the safety professional mainly through the requirement on employers to use the services of a certified working-conditions service for advice. A certified service must employ sufficient numbers of four types of certified experts to cover the risks in the organizations which use the service: a safety professional, an occupational hygienist, an occupational physician, and a work and organization specialist. In 2004, 14% of health and safety practitioners in the Netherlands had an MSc and 63% had a BSc; 23% had training as an OSH technician. === Norway === In Norway, the main required tasks of an occupational health and safety practitioner include: systematic evaluations of the working environment; endorsing preventive measures which eliminate causes of illnesses in the workplace; providing information on the subject of employees' health; and providing information on occupational hygiene, ergonomics, and environmental and safety risks in the workplace. In 2004, 37% of health and safety practitioners in Norway had an MSc and 44% had a BSc; 19% had training as an OSH technician. == Education and training == === Formal education === There are multiple levels of training applicable to the field of occupational safety and health. Programs range from individual non-credit certificates and awareness courses focusing on specific areas of concern, to full doctoral programs. The University of Southern California was one of the first schools in the US to offer a PhD program focusing on the field. Further, multiple master's degree programs exist, such as those of Indiana State University, which offers MSc and MA programs. Other masters-level qualifications include the MSc and Master of Research (MRes) degrees offered by the University of Hull in collaboration with the National Examination Board in Occupational Safety and Health (NEBOSH). Graduate programs are designed to train educators, as well as high-level practitioners. Many OSH generalists focus on undergraduate studies; programs within schools, such as the University of North Carolina's online BSc in environmental health and safety, fill a large majority of hygienist needs. However, smaller companies often do not have full-time safety specialists on staff; instead, they assign the responsibility to a current employee.
Individuals in such positions, or those seeking to enhance their marketability for job searches and promotion, may seek out a credit certificate program. For example, the University of Connecticut's online OSH certificate provides students familiarity with overarching concepts through a 15-credit (5-course) program. Programs such as these are often adequate tools in building a strong educational platform for new safety managers with a minimal outlay of time and money. Further, most hygienists seek certification by organizations that train in specific areas of concentration, focusing on isolated workplace hazards. The American Society of Safety Professionals (ASSP), Board for Global EHS Credentialing (BGC), and American Industrial Hygiene Association (AIHA) offer individual certificates on many different subjects, from forklift operation to waste disposal, and are the chief facilitators of continuing education in the OSH sector. In the US, the training of safety professionals is supported by NIOSH through their NIOSH Education and Research Centers. In the UK, both NEBOSH and the Institution of Occupational Safety and Health (IOSH) develop health and safety qualifications and courses which cater to a mixture of industries and levels of study. Although both organizations are based in the UK, their qualifications are recognized and studied internationally, as they are delivered through their own global networks of approved providers. The Health and Safety Executive has also developed health and safety qualifications in collaboration with NEBOSH. In Australia, training in OSH is available at the vocational education and training level, and at university undergraduate and postgraduate level. Such university courses may be accredited by an accreditation board of the Safety Institute of Australia. The institute has produced a Body of Knowledge which it considers to be required by a generalist safety and health professional, and offers a professional qualification. The Australian Institute of Health and Safety has instituted the national Eric Wigglesworth OHS Education Medal to recognize achievement in OSH doctorate education. === Informal training === Informal or field training may be delivered in the workplace or during off-site training sessions. One form of training delivered in the workplace is known as a toolbox talk. According to the UK's Health and Safety Executive, a toolbox talk is a short presentation to the workforce on a single aspect of health and safety. Such talks are often used, especially in the construction industry, by site supervisors, frontline managers and owners of small construction firms to prepare and deliver advice on matters of health, safety and the environment and to obtain feedback from the workforce. === Use of virtual reality === Virtual reality is a novel tool to deliver safety training in many fields. Some applications have been developed and tested especially for fire and construction safety training. Preliminary findings suggest that virtual reality is more effective than traditional training for knowledge retention. == Contemporary developments == On an international scale, the World Health Organization (WHO) and the International Labour Organization (ILO) have begun focusing on labor environments in developing nations with projects such as Healthy Cities.
Many of these developing countries are stuck in a situation in which their relative lack of resources to invest in OSH leads to increased costs due to work-related illnesses and accidents. The ILO estimates that work-related illness and accidents cost up to 10% of GDP in Latin America, compared with just 2.6% to 3.8% in the EU. There is continued use of asbestos, a notorious hazard, in some developing countries, so asbestos-related disease is expected to remain a significant problem well into the future. === Artificial intelligence === There are several broad aspects of artificial intelligence (AI) that may give rise to specific hazards. Many hazards of AI are psychosocial in nature due to its potential to cause changes in work organization. For example, AI is expected to lead to changes in the skills required of workers, requiring retraining of existing workers, flexibility, and openness to change. Increased monitoring may lead to micromanagement or a perception of surveillance, and thus to workplace stress. There is also the risk of people being forced to work at a robot's pace, or to monitor robot performance at nonstandard hours. Additionally, algorithms trained on past decisions may exhibit algorithmic bias by mimicking undesirable human biases, for example past discriminatory hiring and firing practices. Some approaches to accident analysis may be biased to safeguard a technological system and its developers by assigning blame to the individual human operator instead. Physical hazards in the form of human–robot collisions may arise from robots using AI, especially collaborative robots (cobots). Cobots are intended to operate in close proximity to humans, which makes it impossible to apply the common hazard control, widely used for traditional industrial robots, of isolating the robot behind fences or other barriers. Automated guided vehicles are a type of cobot in common use, often as forklifts or pallet jacks in warehouses or factories. Both applications and hazards arising from AI can be considered as part of existing frameworks for occupational health and safety risk management. As with all hazards, risk identification is most effective and least costly when done in the design phase. AI, in common with other computational technologies, requires cybersecurity measures to stop software breaches and intrusions, as well as information privacy measures. Communication and transparency with workers about data usage is a control for psychosocial hazards arising from security and privacy issues. Workplace health surveillance, the collection and analysis of health data on workers, is challenging for AI because labor data are often reported in aggregate, do not provide breakdowns between different types of work, and focus on economic data such as wages and employment rates rather than the skill content of jobs. === Coronavirus === The National Institute of Occupational Safety and Health (NIOSH) National Occupational Research Agenda Manufacturing Council established an externally-led COVID-19 workgroup to provide exposure control information specific to working in manufacturing environments. The workgroup identified the dissemination of information most relevant to manufacturing workplaces, including providing content on Wikipedia, as a priority. This includes evidence-based practices for infection control plans, and communication tools. === Nanotechnology === Nanotechnology is an example of a new, relatively unstudied technology.
A Swiss survey of 138 companies using or producing nanoparticulate matter in 2006 resulted in forty completed questionnaires. Sixty-five per cent of respondent companies stated they did not have a formal risk assessment process for dealing with nanoparticulate matter. Nanotechnology already presents new issues for OSH professionals that will only become more difficult as nanostructures become more complex. The size of the particles renders most containment and personal protective equipment ineffective. The toxicology values for macro-sized industrial substances are rendered inaccurate by the unique nature of nanoparticulate matter. As nanoparticulate matter decreases in size, its relative surface area increases dramatically, increasing any catalytic effect or chemical reactivity substantially versus the known value for the macro substance. This presents a new set of challenges, requiring a rethink of contemporary measures to safeguard the health and welfare of employees against nanoparticulate substances that most conventional controls have not been designed to manage. === Occupational health inequalities === Occupational health inequalities refer to differences in occupational injuries and illnesses that are closely linked with demographic, social, cultural, economic, and/or political factors. Although many advances have been made to rectify gaps in occupational health within the past half century, many still persist due to the complex overlapping of occupational health and social factors. There are three main areas of research on occupational health inequities: identifying which social factors, either individually or in combination, contribute to the inequitable distribution of work-related benefits and risks; examining how the related structural disadvantages materialize in the lives of workers to put them at greater risk for occupational injury or illness; and translating these findings into intervention research to build an evidence base of effective ways for reducing occupational health inequities. === Transnational and immigrant worker populations === Immigrant worker populations often are at greater risk for workplace injuries and fatalities. For example, within the United States, immigrant Mexican workers have one of the highest rates of fatal workplace injuries out of all of the working population. Statistics like these are explained through a combination of social, structural, and physical aspects of the workplace. These workers struggle to access safety information and resources in their native languages because of a lack of social and political inclusion. In addition to linguistically tailored interventions, it is also critical for the interventions to be culturally appropriate. Those residing in a country to work without a visa or other formal authorization may also not have access to legal resources and recourse that are designed to protect most workers. Health and safety organizations that rely on whistleblowers instead of their own independent inspections may be especially at risk of having an incomplete picture of worker health. == See also == === Regulations === === Related fields === == Notes == == References == == Further reading == A Guide to Health and Safety Regulation in Great Britain (PDF) (Report). Health and Safety Executive (HSE). July 2013. Archived (PDF) from the original on 28 January 2024. Retrieved 15 March 2024. Koester, Frank (April 1912). "Our Stupendous Yearly Waste: The Death Toll of Industry". The World's Work. Vol. XXIII, no. 6. Doubleday, Page and Company. pp. 713–715. ISSN 2691-7254. Retrieved 15 March 2024. LaDou, Joseph (2006). Current Occupational & Environmental Medicine (4th ed.). New York, N.Y.: McGraw Hill. ISBN 978-0-07-144313-5. Leidel, Nelson A.; Busch, Kenneth A. (April 1975). Statistical Methods for the Determination of Noncompliance with Occupational Health Standards (Report: HEW Publication No. (NIOSH) 75-159). Washington, D.C.: National Institute for Occupational Safety and Health (NIOSH). doi:10.26616/NIOSHPUB75159. Archived from the original on 1 December 2024. Retrieved 25 December 2024. Roughton, James E.; Mercurio, James J. (15 March 2002). Developing an Effective Safety Culture: A Leadership Approach. Butterworth-Heinemann. ISBN 978-0-7506-7411-9. == External links == === International agencies === (EU) European Agency for Safety & Health at Work (EU-OSHA) (UN) International Labour Organization (ILO) === National bodies === (Canada) Canadian Centre for Occupational Health and Safety (Japan) Japan Industrial Safety and Health Association (Japan) Ministry of Health, Labor and Welfare (Japan) Japan National Institute of Occupational Safety and Health (UK) Health and Safety Executive (US) National Institute for Occupational Safety and Health (NIOSH) (US) Occupational Safety and Health Administration (OSHA) === Legislation === (Canada) EnviroOSH Legislation plus Standards === Publications === American Journal of Industrial Medicine === Education === National Examination Board in Occupational Safety and Health (NEBOSH)
Wikipedia/Industrial_safety
A telecommunications network is a group of nodes interconnected by telecommunications links that are used to exchange messages between the nodes. The links may use a variety of technologies based on the methodologies of circuit switching, message switching, or packet switching, to pass messages and signals. Multiple nodes may cooperate to pass the message from an originating node to the destination node, via multiple network hops. For this routing function, each node in the network is assigned a network address to identify and locate it on the network. The collection of addresses in the network is called the address space of the network. Examples of telecommunications networks include computer networks, the Internet, the public switched telephone network (PSTN), the global Telex network, the aeronautical ACARS network, and the wireless radio networks of cell phone telecommunication providers. == Network structure == In general, every telecommunications network conceptually consists of three parts, or planes (so called because they can be thought of as, and often are, separate overlay networks): The data plane (also user plane, bearer plane, or forwarding plane) carries the network's users' traffic, the actual payload. The control plane carries control information (also known as signaling). The management plane carries the operations, administration and management traffic required for network management. The management plane is sometimes considered a part of the control plane. == Data networks == Data networks are used extensively throughout the world for communication between individuals and organizations. Data networks can be connected to allow users seamless access to resources that are hosted outside of the particular provider they are connected to. The Internet is the best example of the internetworking of many data networks from different organizations. Terminals attached to IP networks like the Internet are addressed using IP addresses. Protocols of the Internet protocol suite (TCP/IP) provide the control and routing of messages across the IP data network. IP can be used across many different network structures to route messages efficiently, for example: wide area networks (WAN), metropolitan area networks (MAN) and local area networks (LAN). There are three features that differentiate MANs from LANs or WANs: The size of the network is between that of LANs and WANs; a MAN will have a physical area between 5 and 50 km in diameter. MANs do not generally belong to a single organization; the equipment that interconnects the network, the links, and the MAN itself are often owned by an association or a network provider that provides or leases the service to others. A MAN is a means for sharing resources at high speeds within the network; it often provides connections to WAN networks for access to resources outside the scope of the MAN. Data center networks also rely heavily on TCP/IP for communication across machines. They connect thousands of servers, are designed to be highly robust, and provide low latency and high bandwidth. Data center network topology plays a significant role in determining the level of failure resiliency, ease of incremental expansion, communication bandwidth and latency.
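The routing function described above, in which each node is identified by an address and messages are passed hop by hop toward their destination, can be illustrated with a small sketch. The example below is purely illustrative: the node addresses and link costs are invented, and a shortest-path search (Dijkstra's algorithm) stands in for whatever routing protocol a real network would use.

```python
import heapq

# Toy network: each node has an address (a string from the address space);
# links are undirected and carry a cost such as latency or hop count.
links = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def shortest_route(source, destination):
    """Dijkstra's algorithm: return (cost, path) for the cheapest multi-hop route."""
    frontier = [(0, source, [source])]  # (cost so far, current node, path taken)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == destination:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in links[node].items():
            heapq.heappush(frontier, (cost + weight, neighbour, path + [neighbour]))
    return None  # destination unreachable

print(shortest_route("A", "D"))  # -> (4, ['A', 'B', 'C', 'D'])
```

Real routing protocols distribute this computation across the nodes rather than running it centrally, but the same shortest-path idea underlies link-state protocols such as OSPF.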
== Capacity and speed == In analogy to the improvements in the speed and capacity of digital computers, provided by advances in semiconductor technology and expressed in the doubling of transistor density roughly every two years, which is described empirically by Moore's law, the capacity and speed of telecommunications networks have followed similar advances, for similar reasons. In telecommunication, this is expressed in Edholm's law, proposed by and named after Phil Edholm in 2004. This empirical law holds that the bandwidth of telecommunication networks doubles every 18 months, which has proven to be true since the 1970s; a doubling every 18 months corresponds to roughly a tenfold increase every five years. The trend is evident in the Internet, cellular (mobile), wireless and wired local area networks (LANs), and personal area networks. This development is a consequence of rapid advances in metal-oxide-semiconductor (MOS) technology. == See also == Transcoder free operation == References ==
Wikipedia/Communication_network
Control system security, or automation and control system (ACS) cybersecurity, is the prevention of (intentional or unintentional) interference with the proper operation of industrial automation and control systems. These control systems manage essential services including electricity, petroleum production, water, transportation, manufacturing, and communications. They rely on computers, networks, operating systems, applications, and programmable controllers, each of which could contain security vulnerabilities. The 2010 discovery of the Stuxnet worm demonstrated the vulnerability of these systems to cyber incidents. The United States and other governments have passed cyber-security regulations requiring enhanced protection for control systems operating critical infrastructure. Control system security is known by several other names, such as SCADA security, PCN security, industrial network security, industrial control system (ICS) cybersecurity, operational technology (OT) security, and industrial automation and control system cybersecurity. == Risks == Insecurity of, or vulnerabilities inherent in, automation and control systems (ACS) can lead to severe consequences in categories such as safety, loss of life, personal injury, environmental impact, lost production, equipment damage, information theft, and company image. Guidance to assess, evaluate and mitigate these potential risks is provided through the application of many governmental, regulatory and industry documents and global standards, addressed below. == Vulnerability of automation and control systems == Automation and control systems (ACS) have become far more vulnerable to security incidents due to the following trends: Increasing use of commercial off-the-shelf (COTS) technology and protocols; integration of technology such as MS Windows, SQL, and Ethernet means that these systems may now have the same or similar vulnerabilities as common IT systems. Enterprise integration (using plant, corporate and even public networks), which means that these (legacy) systems may now be subjected to stresses that they were not designed for. Demand for remote access: 24x7 access for engineering, operations or technical support increases the attack surface, possibly leading to more insecure or rogue connections. Increased awareness and understanding of industrial systems: as more and more people become aware of these systems, the strategy of security through obscurity is no longer viable. Although the cyber threats and attack strategies on automation systems are changing rapidly, regulation of industrial control systems for security is rare and slow-moving; the United States, for example, regulates only the nuclear power and chemical industries. == Government efforts == The U.S. Government Computer Emergency Readiness Team (US-CERT) originally instituted a control systems security program (CSSP), now the National Cybersecurity and Communications Integration Center (NCCIC) Industrial Control Systems program, which has made available a large set of free National Institute of Standards and Technology (NIST) standards documents regarding control system security. The U.S. Government Joint Capability Technology Demonstration (JCTD) known as MOSAICS (More Situational Awareness for Industrial Control Systems) is the initial demonstration of cybersecurity defensive capability for critical infrastructure control systems.
MOSAICS addresses the Department of Defense (DOD) operational need for cyber defense capabilities to defend critical infrastructure control systems from cyber attack, such as power, water and wastewater, and safety controls, which affect the physical environment. The MOSAICS JCTD prototype will be shared with commercial industry through Industry Days for further research and development, an approach intended to lead to innovative, game-changing capabilities for cybersecurity for critical infrastructure control systems. == Automation and Control System Cybersecurity Standards == The international standard for cybersecurity of automation and control systems is IEC 62443. In addition, multiple national organizations, such as NIST and NERC in the USA, have released guidelines and requirements for cybersecurity in control systems. === IEC 62443 === The IEC 62443 cybersecurity standards define processes, techniques and requirements for industrial automation and control systems (IACS). The IEC 62443 standards and technical reports are organized into six general categories: General, Policies and Procedures, System, Component, Profiles, and Evaluation. The first category includes foundational information such as concepts, models and terminology. The second category of work products targets the asset owner; these address various aspects of creating and maintaining an effective IACS security program. The third category includes work products that describe system design guidance and requirements for the secure integration of control systems. Core in this is the zone and conduit design model. The fourth category includes work products that describe the specific product development and technical requirements of control system products. The fifth category provides profiles for industry-specific cybersecurity requirements according to IEC 62443-1-5. The sixth category defines assessment methodologies that ensure that assessment results are consistent and reproducible. === NERC === The most widely recognized and latest NERC security standard is NERC 1300, which is a modification/update of NERC 1200. The latest version of NERC 1300 is called CIP-002-3 through CIP-009-3, with CIP referring to Critical Infrastructure Protection. These standards are used to secure bulk electric systems, although NERC has created standards within other areas. The bulk electric system standards also provide network security administration while still supporting best-practice industry processes. === NIST === Although it is not a standard, the NIST Cybersecurity Framework (NIST CSF) provides a high-level taxonomy of cybersecurity outcomes and a methodology to assess and manage those outcomes. It is intended to help private sector organizations that provide critical infrastructure with guidance on how to protect it. NIST Special Publication 800-82 Rev. 2, "Guide to Industrial Control System (ICS) Security", describes how to secure multiple types of industrial control systems against cyber attacks while considering the performance, reliability, and safety requirements specific to ICS. == Control system security certifications == Certifications for control system security have been established by several global certification bodies. Most of the schemes are based on IEC 62443 and describe test methods, surveillance audit policy, public documentation policies, and other specific aspects of their program.
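To make the zone and conduit design model mentioned above more concrete, the sketch below models zones with target security levels (IEC 62443 defines security levels SL 1 through 4) and flags conduits that bridge zones with widely differing targets. The zone names, the plant layout and the simple review rule are all hypothetical; they are not taken from the standard, which defines the model in far more detail.

```python
from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    sl_target: int  # target security level, 1 (lowest) to 4 (highest)

@dataclass
class Conduit:
    name: str
    zone_a: Zone
    zone_b: Zone

# Hypothetical plant partitioning, invented for illustration only.
enterprise = Zone("enterprise IT", sl_target=1)
dmz = Zone("plant DMZ", sl_target=2)
control = Zone("basic control", sl_target=3)

conduits = [
    Conduit("IT-to-DMZ firewall", enterprise, dmz),
    Conduit("historian replication", enterprise, control),  # deliberately suspicious
    Conduit("DMZ-to-control firewall", dmz, control),
]

# Simplified design-review rule: a conduit spanning a large gap in SL targets
# deserves scrutiny, since the weaker zone becomes an attack path into the stronger one.
for c in conduits:
    gap = abs(c.zone_a.sl_target - c.zone_b.sl_target)
    if gap >= 2:
        print(f"review conduit '{c.name}': SL-target gap of {gap}")
```

In practice, such a check would be one small part of the risk assessment that assigns a security-level target to each zone and decides what protection each conduit requires.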
== External links == IEC 62443 US NIST webpage US NERC Critical Infrastructure Protection (CIP) Standards Archived 2011-01-01 at the Wayback Machine UK NPSA Tools, Catalogues and Standards == References ==
Wikipedia/Control_system_security
SCADA (an acronym for supervisory control and data acquisition) is a control system architecture comprising computers, networked data communications and graphical user interfaces for high-level supervision of machines and processes. It also covers sensors and other devices, such as programmable logic controllers, which interface with process plant or machinery. The operator interfaces, which enable monitoring and the issuing of process commands, such as controller setpoint changes, are handled through the SCADA computer system. The subordinated operations, e.g. the real-time control logic or controller calculations, are performed by networked modules connected to the field sensors and actuators. The SCADA concept was developed to be a universal means of remote access to a variety of local control modules, which could be from different manufacturers, allowing access through standard automation protocols. In practice, large SCADA systems have grown to become similar to distributed control systems in function, while using multiple means of interfacing with the plant. They can control large-scale processes spanning multiple sites, and work over large distances. It is one of the most commonly used types of industrial control systems. == Control operations == The key attribute of a SCADA system is its ability to perform a supervisory operation over a variety of other proprietary devices. Level 0 contains the field devices, such as flow and temperature sensors, and final control elements, such as control valves. Level 1 contains the industrialized input/output (I/O) modules and their associated distributed electronic processors. Level 2 contains the supervisory computers, which collate information from processor nodes on the system, and provide the operator control screens. Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and targets. Level 4 is the production scheduling level. Data acquisition begins at level 1, with the programmable logic controllers (PLCs) or remote terminal units (RTUs), whose meter readings and equipment status reports are communicated to level 2 SCADA as required. Data is then compiled and formatted in such a way that a control room operator using the human-machine interface (HMI) can make supervisory decisions to adjust or override normal RTU (PLC) controls. Data may also be fed to a historian, often built on a commodity database management system, to allow trending and other analytical auditing. SCADA systems typically use a tag database, which contains data elements called tags or points, which relate to specific instrumentation or actuators within the process system. Data is accumulated against these unique process control equipment tag references. == Components == A SCADA system usually consists of the following main elements: Supervisory computers This is the core of the SCADA system, gathering data on the process and sending control commands to the field connected devices. It refers to the computer and software responsible for communicating with the field connection controllers, which are RTUs and PLCs, and includes the HMI software running on operator workstations. In smaller SCADA systems, the supervisory computer may be composed of a single PC, in which case the HMI is a part of this computer.
In larger SCADA systems, the master station may include several HMIs hosted on client computers, multiple servers for data acquisition, distributed software applications, and disaster recovery sites. To increase the integrity of the system, the multiple servers will often be configured in a dual-redundant or hot-standby formation providing continuous control and monitoring in the event of a server malfunction or breakdown. Remote terminal units RTUs connect to sensors and actuators in the process, and are networked to the supervisory computer system. RTUs have embedded control capabilities and often conform to the IEC 61131-3 standard for programming and support automation via ladder logic, a function block diagram or a variety of other languages. Remote locations often have little or no local infrastructure, so it is not uncommon to find RTUs running off a small solar power system, using radio, GSM or satellite for communications, and being ruggedised to survive from −20 °C to +70 °C or even −40 °C to +85 °C without external heating or cooling equipment. Programmable logic controllers PLCs are connected to sensors and actuators in the process, and are networked to the supervisory system. In factory automation, PLCs typically have a high-speed connection to the SCADA system. In remote applications, such as a large water treatment plant, PLCs may connect directly to SCADA over a wireless link, or more commonly, utilise an RTU for the communications management. PLCs are specifically designed for control and were the founding platform for the IEC 61131-3 programming languages. For economic reasons, PLCs are often used for remote sites where there is a large I/O count, rather than utilising an RTU alone. Communication infrastructure This connects the supervisory computer system to the RTUs and PLCs, and may use industry standard or manufacturer proprietary protocols. Both RTUs and PLCs operate autonomously on the near-real-time control of the process, using the last command given from the supervisory system. Failure of the communications network does not necessarily stop the plant process controls, and on resumption of communications, the operator can continue with monitoring and control. Some critical systems will have dual redundant data highways, often cabled via diverse routes. Human-machine interface The HMI is the operator window of the supervisory system. It presents plant information to the operating personnel graphically in the form of mimic diagrams, which are a schematic representation of the plant being controlled, and alarm and event logging pages. The HMI is linked to the SCADA supervisory computer to provide live data to drive the mimic diagrams, alarm displays and trending graphs. In many installations the HMI is the graphical user interface for the operator, collects all data from external devices, creates reports, performs alarming, sends notifications, etc. Mimic diagrams consist of line graphics and schematic symbols to represent process elements, or may consist of digital photographs of the process equipment overlain with animated symbols. Supervisory operation of the plant is by means of the HMI, with operators issuing commands using mouse pointers, keyboards and touch screens. For example, a symbol of a pump can show the operator that the pump is running, and a flow meter symbol can show how much fluid it is pumping through the pipe. The operator can switch the pump off from the mimic by a mouse click or screen touch.
The HMI will show the flow rate of the fluid in the pipe decrease in real time. The HMI package for a SCADA system typically includes a drawing program that the operators or system maintenance personnel use to change the way these points are represented in the interface. These representations can be as simple as an on-screen traffic light, which represents the state of an actual traffic light in the field, or as complex as a multi-projector display representing the position of all of the elevators in a skyscraper or all of the trains on a railway. A historian is a software service within the HMI which accumulates time-stamped data, events, and alarms in a database which can be queried or used to populate graphic trends in the HMI. The historian is a client that requests data from a data acquisition server. == Alarm handling == An important part of most SCADA implementations is alarm handling. The system monitors whether certain alarm conditions are satisfied, to determine when an alarm event has occurred. Once an alarm event has been detected, one or more actions are taken (such as the activation of one or more alarm indicators, and perhaps the generation of email or text messages so that management or remote SCADA operators are informed). In many cases, a SCADA operator may have to acknowledge the alarm event; this may deactivate some alarm indicators, whereas other indicators remain active until the alarm conditions are cleared. Alarm conditions can be explicit (for example, an alarm point is a digital status point that has either the value NORMAL or ALARM, calculated by a formula based on the values in other analogue and digital points) or implicit: the SCADA system might automatically monitor whether the value in an analogue point lies outside high- and low-limit values associated with that point. Examples of alarm indicators include a siren, a pop-up box on a screen, or a coloured or flashing area on a screen (that might act in a similar way to the "fuel tank empty" light in a car); in each case, the role of the alarm indicator is to draw the operator's attention to the part of the system 'in alarm' so that appropriate action can be taken. == PLC/RTU programming == "Smart" RTUs, or standard PLCs, are capable of autonomously executing simple logic processes without involving the supervisory computer. They employ standardized control programming languages (such as those under IEC 61131-3, a suite of five programming languages including function block, ladder, structured text, sequential function charts and instruction list), which are frequently used to create programs which run on these RTUs and PLCs. Unlike a procedural language like C or FORTRAN, IEC 61131-3 has minimal training requirements by virtue of resembling historic physical control arrays. This allows SCADA system engineers to perform both the design and implementation of a program to be executed on an RTU or PLC. A programmable automation controller (PAC) is a compact controller that combines the features and capabilities of a PC-based control system with those of a typical PLC. PACs are deployed in SCADA systems to provide RTU and PLC functions. In many electrical substation SCADA applications, "distributed RTUs" use information processors or station computers to communicate with digital protective relays, PACs, and other devices for I/O, and communicate with the SCADA master in lieu of a traditional RTU.
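Tying together the tag database described under Components and the implicit alarm conditions described under Alarm handling, a minimal scan cycle might look like the sketch below. The tag names and limit values are invented for illustration, and a real SCADA package would also handle time stamps, deadbands, acknowledgment and alarm priorities.

```python
# Minimal illustration of a tag database with implicit high/low alarm limits.
# Tag names and limit values are hypothetical.
tags = {
    "PUMP01.FLOW":  {"value": 0.0, "lo_limit": 10.0, "hi_limit": 80.0},
    "TANK01.LEVEL": {"value": 0.0, "lo_limit": 5.0,  "hi_limit": 95.0},
}

active_alarms = set()

def scan(new_values):
    """One acquisition cycle: update tag values, then evaluate alarm conditions."""
    for name, value in new_values.items():
        tags[name]["value"] = value
    for name, tag in tags.items():
        in_alarm = not (tag["lo_limit"] <= tag["value"] <= tag["hi_limit"])
        if in_alarm and name not in active_alarms:
            active_alarms.add(name)
            print(f"ALARM  {name} = {tag['value']}")   # would also notify the operator
        elif not in_alarm and name in active_alarms:
            active_alarms.discard(name)
            print(f"NORMAL {name} = {tag['value']}")

scan({"PUMP01.FLOW": 42.0, "TANK01.LEVEL": 97.5})  # TANK01.LEVEL goes into alarm
scan({"TANK01.LEVEL": 60.0})                        # TANK01.LEVEL returns to normal
```

A production alarm server performs essentially this comparison against configured limits on every scan, at far larger scale and with the results persisted to the historian.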
== PLC commercial integration == Since about 1998, virtually all major PLC manufacturers have offered integrated HMI/SCADA systems, many of them using open and non-proprietary communications protocols. Numerous specialized third-party HMI/SCADA packages, offering built-in compatibility with most major PLCs, have also entered the market, allowing mechanical engineers, electrical engineers and technicians to configure HMIs themselves, without the need for a custom-made program written by a software programmer. The remote terminal unit (RTU) connects to physical equipment. Typically, an RTU converts the electrical signals from the equipment to digital values. By converting and sending electrical signals out to equipment, the RTU can also control equipment. == Communication infrastructure and methods == SCADA systems have traditionally used combinations of radio and direct wired connections, although SONET/SDH is also frequently used for large systems such as railways and power stations. The remote management or monitoring function of a SCADA system is often referred to as telemetry. Some users want SCADA data to travel over their pre-established corporate networks or to share the network with other applications. The legacy of the early low-bandwidth protocols remains, though. SCADA protocols are designed to be very compact, and many are designed to send information only when the master station polls the RTU. Typical legacy SCADA protocols include Modbus RTU, RP-570, Profibus and Conitel. These communication protocols, with the exception of Modbus (which has been made open by Schneider Electric), are all SCADA-vendor specific but are widely adopted and used. Standard protocols are IEC 60870-5-101 or 104, IEC 61850 and DNP3. These communication protocols are standardized and recognized by all major SCADA vendors. Many of these protocols now contain extensions to operate over TCP/IP. Although the use of conventional networking specifications, such as TCP/IP, blurs the line between traditional and industrial networking, they each fulfill fundamentally differing requirements. Network simulation can be used in conjunction with SCADA simulators to perform various 'what-if' analyses. With increasing security demands (such as North American Electric Reliability Corporation (NERC) and critical infrastructure protection (CIP) in the US), there is increasing use of satellite-based communication. This has the key advantages that the infrastructure can be self-contained (not using circuits from the public telephone system), can have built-in encryption, and can be engineered to the availability and reliability required by the SCADA system operator. Earlier experiences using consumer-grade VSAT were poor. Modern carrier-class systems provide the quality of service required for SCADA. RTUs and other automatic controller devices were developed before the advent of industry-wide standards for interoperability. The result is that developers and their management created a multitude of control protocols. Among the larger vendors, there was also the incentive to create their own protocol to "lock in" their customer base. An example of efforts by vendor groups to standardize automation protocols is OPC UA (formerly "OLE for Process Control", now Open Platform Communications Unified Architecture). == Architecture development == SCADA systems have evolved through four generations, as follows: In the first generation ("monolithic"), SCADA computing was done by large minicomputers.
Common network services did not exist at the time SCADA was developed, so SCADA systems were independent systems with no connectivity to other systems. The communication protocols used were strictly proprietary at that time. First-generation SCADA redundancy was achieved using a back-up mainframe system connected to all the remote terminal unit sites and used in the event of failure of the primary mainframe system. Some first-generation SCADA systems were developed as "turn-key" operations that ran on minicomputers such as the PDP-11 series. In the second generation ("distributed"), SCADA information and command processing were distributed across multiple stations which were connected through a LAN. Information was shared in near real time. Each station was responsible for a particular task, which reduced the cost as compared to first-generation SCADA. The network protocols used were still not standardized. Since these protocols were proprietary, very few people beyond the developers knew enough to determine how secure a SCADA installation was. Security of the SCADA installation was usually overlooked. In the third generation ("networked"), similar to a distributed architecture, any complex SCADA can be reduced to the simplest components and connected through communication protocols. In the case of a networked design, the system may be spread across more than one LAN, called a process control network (PCN), and separated geographically. Several distributed-architecture SCADAs running in parallel, with a single supervisor and historian, could be considered a network architecture. This allows for a more cost-effective solution in very large scale systems. In the fourth generation ("web-based"), the growth of the internet has led SCADA systems to implement web technologies, allowing users to view data, exchange information and control processes from anywhere in the world through WebSocket connections. The early 2000s saw the proliferation of web SCADA systems. Web SCADA systems use web browsers such as Google Chrome and Mozilla Firefox as the graphical user interface (GUI) for the operators' HMI. This simplifies the client-side installation and enables users to access the system with web browsers from various platforms, such as servers, personal computers, laptops, tablets and mobile phones. == Security == SCADA systems that tie together decentralized facilities such as power, oil, gas pipelines, water distribution and wastewater collection systems were designed to be open, robust, and easily operated and repaired, but not necessarily secure. The move from proprietary technologies to more standardized and open solutions, together with the increased number of connections between SCADA systems, office networks and the Internet, has made them more vulnerable to types of network attacks that are relatively common in computer security. For example, the United States Computer Emergency Readiness Team (US-CERT) released a vulnerability advisory warning that unauthenticated users could download sensitive configuration information, including password hashes, from an Inductive Automation Ignition system utilizing a standard attack type leveraging access to the embedded Tomcat web server. Security researcher Jerry Brown submitted a similar advisory regarding a buffer overflow vulnerability in a Wonderware InBatchClient ActiveX control. Both vendors made updates available prior to public vulnerability release. Mitigation recommendations were standard patching practices and requiring VPN access for secure connectivity.
Consequently, the security of some SCADA-based systems has come into question, as they are seen as potentially vulnerable to cyber attacks. In particular, security researchers are concerned about: the lack of concern about security and authentication in the design, deployment and operation of some existing SCADA networks; the belief that SCADA systems have the benefit of security through obscurity through the use of specialized protocols and proprietary interfaces; the belief that SCADA networks are secure because they are physically secured; and the belief that SCADA networks are secure because they are disconnected from the Internet. SCADA systems are used to control and monitor physical processes, examples of which are transmission of electricity, transportation of gas and oil in pipelines, water distribution, traffic lights, and other systems used as the basis of modern society. The security of these SCADA systems is important because compromise or destruction of these systems would impact multiple areas of society far removed from the original compromise. For example, a blackout caused by a compromised electrical SCADA system would cause financial losses to all the customers that received electricity from that source. How security will affect legacy SCADA and new deployments remains to be seen. There are many threat vectors to a modern SCADA system. One is the threat of unauthorized access to the control software, whether it is human access or changes induced intentionally or accidentally by virus infections and other software threats residing on the control host machine. Another is the threat of packet access to the network segments hosting SCADA devices. In many cases, the control protocol lacks any form of cryptographic security, allowing an attacker to control a SCADA device by sending commands over a network. In many cases SCADA users have assumed that having a VPN offered sufficient protection, unaware that security can be trivially bypassed with physical access to SCADA-related network jacks and switches. Industrial control vendors suggest approaching SCADA security like information security, with a defense-in-depth strategy that leverages common IT practices. Apart from that, research has shown that the architecture of SCADA systems has several other vulnerabilities, including direct tampering with RTUs, communication links from RTUs to the control center, and IT software and databases in the control center. The RTUs could, for instance, be targets of deception attacks injecting false data or denial-of-service attacks. The reliable function of SCADA systems in our modern infrastructure may be crucial to public health and safety. As such, attacks on these systems may directly or indirectly threaten public health and safety. Such an attack has already occurred, carried out on Maroochy Shire Council's sewage control system in Queensland, Australia. Shortly after a contractor installed a SCADA system in January 2000, system components began to function erratically. Pumps did not run when needed and alarms were not reported. More critically, sewage flooded a nearby park, contaminated an open surface-water drainage ditch, and flowed 500 meters to a tidal canal. The SCADA system was directing sewage valves to open when the design protocol should have kept them closed. Initially this was believed to be a system bug. Monitoring of the system logs revealed the malfunctions were the result of cyber attacks.
Investigators reported 46 separate instances of malicious outside interference before the culprit was identified. The attacks were made by a disgruntled ex-employee of the company that had installed the SCADA system; he had hoped to be hired by the utility full-time to maintain the system.

In April 2008, the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack issued a Critical Infrastructures Report which discussed the extreme vulnerability of SCADA systems to an electromagnetic pulse (EMP) event. After testing and analysis, the Commission concluded: "SCADA systems are vulnerable to EMP insult. The large numbers and widespread reliance on such systems by all of the Nation’s critical infrastructures represent a systemic threat to their continued operation following an EMP event. Additionally, the necessity to reboot, repair, or replace large numbers of geographically widely dispersed systems will considerably impede the Nation’s recovery from such an assault."

Many vendors of SCADA and control products have begun to address the risks posed by unauthorized access by developing lines of specialized industrial firewall and VPN solutions for TCP/IP-based SCADA networks, as well as external SCADA monitoring and recording equipment. The International Society of Automation (ISA) started formalizing SCADA security requirements in 2007 with a working group, WG4, which "deals specifically with unique technical requirements, measurements, and other features required to evaluate and assure security resilience and performance of industrial automation and control systems devices". The increased interest in SCADA vulnerabilities has resulted in researchers discovering vulnerabilities in commercial SCADA software, and in more general offensive SCADA techniques being presented to the security community. In electric and gas utility SCADA systems, the vulnerability of the large installed base of wired and wireless serial communication links is addressed in some cases by applying bump-in-the-wire devices that employ authentication and Advanced Encryption Standard encryption, rather than by replacing all existing nodes.

In June 2010, the anti-virus security company VirusBlokAda reported the first detection of malware that attacks SCADA systems (Siemens' WinCC/PCS 7 systems) running on Windows operating systems. The malware, called Stuxnet, exploits four zero-day vulnerabilities to install a rootkit, which in turn logs into the SCADA's database and steals design and control files. The malware is also capable of changing the control system and hiding those changes. The malware was found on 14 systems, the majority of which were located in Iran. In October 2013, National Geographic released a docudrama titled American Blackout, which dealt with an imagined large-scale cyber attack on SCADA and the United States' electrical grid.

== Uses ==

Both large and small systems can be built using the SCADA concept. These systems can range from just tens to thousands of control loops, depending on the application. Example processes include industrial, infrastructure, and facility-based processes, as described below:

Industrial processes include manufacturing, process control, power generation, fabrication, and refining, and may run in continuous, batch, repetitive, or discrete modes.
Infrastructure processes may be public or private, and include water treatment and distribution, wastewater collection and treatment, oil and gas pipelines, electric power transmission and distribution, and wind farms.
Facility processes, in facilities such as buildings, airports, ships, and space stations, monitor and control heating, ventilation, and air conditioning (HVAC) systems, access, and energy consumption.

However, SCADA systems may have security vulnerabilities, so systems should be evaluated to identify risks, and solutions should be implemented to mitigate those risks.

== See also ==
DNP3 – Computer network protocol
IEC 60870
EPICS – Software infrastructure for building distributed control systems

== References ==

== External links ==
UK SCADA security guidelines
BBC NEWS | Technology | Spies 'infiltrate US power grid'
Wikipedia/Supervisory_control_and_data_acquisition
Within supply chain management and manufacturing, production control is the activity of monitoring and controlling any particular production or operation. Production control is often run from a specific control room or operations room. With inventory control and quality control, production control is one of the key functions of operations management.

== Overview ==

Production control is the activity of monitoring and controlling a large physical facility or physically dispersed service. It is a "set of actions and decision taken during production to regulate output and obtain reasonable assurance that the specification will be met." The American Production and Inventory Control Society, now known as APICS, defined production control in 1959 as:

Production control is the task of predicting, planning and scheduling work, taking into account manpower, materials availability and other capacity restrictions, and cost so as to achieve proper quality and quantity at the time it is needed and then following up the schedule to see that the plan is carried out, using whatever systems have proven satisfactory for the purpose.

Production planning and control in larger factories is often run from a production planning department staffed by production controllers and a production control manager. Production monitoring and control of larger operations is often run from a central space, called a control room, operations room, or operations control center (OCC). The emerging area of project production management (PPM), based on viewing project activities as a production system, adopts the same notion of production control to regulate the behavior of a production system, where in this case the production system is a capital project rather than a physical facility or a physically dispersed service. Production control is to be contrasted with project controls, which have developed into centralized functions that track project progress, identify deviations from plan, and forecast future progress, using metrics rooted in accounting principles.

== Types ==

One type of production control is the control of manufacturing operations. Other types include:
Production planning and control of the when and where of production
Production control and supply chain management
Management of real-time operations in specific fields
Production control in the television studio, in a production control room
Master control in a television studio
Production control in spaceflight, in a Mission Operations Control Room

Communist countries had a central production control institute, where the agricultural and industrial production for the whole nation was planned and controlled. In customer care environments, production control is known as workforce management (WFM); centralized workforce management teams are often called Command Center, Mission Control, or WFM Shared Production Centers.

== Related types of control in organizations ==

Production control is just one of multiple types of control in organizations. The most common other types are:
Management control, one of the managerial functions, like planning, organizing, staffing, and directing. It is an important function because it helps to check errors and to take corrective action, so that deviations from standards are minimized and the stated goals of the organization are achieved in a desired manner.
Inventory control, the supervision of supply, storage, and accessibility of items in order to ensure an adequate supply without excessive oversupply.
Quality control, the process by which entities review the quality of all factors involved in production. == See also == Control (management) Industrial engineering Manufacturing process management Materials management Operations management Production engineering Project production management Time book == References == == Further reading == Bedworth, David D., and James E. Bailey. Integrated production control systems: management, analysis, design. John Wiley & Sons, Inc., 1999. Eilon, Samuel. Elements of production planning and control. Macmillan, 1962. Groover, Mikell P. Automation, production systems, and computer-integrated manufacturing. Prentice Hall Press, 2007. Johnson, Lynwood A., and Douglas C. Montgomery. Operations research in production planning, scheduling, and inventory control. Vol. 6. New York: Wiley, 1974. C.E. Knoeppel. Graphic production control. New York, The Engineering magazine company, 1920 Koontz, Harold. "A preliminary statement of principles of planning and control." Academy of Management Journal 1.1 (1958): 45-61. Sipper, Daniel, and Robert L. Bulfin. Production: planning, control, and integration. McGraw-Hill Science, Engineering & Mathematics, 1997. == External links == Media related to Production control at Wikimedia Commons Manufacturing Cost Model and Controlling
Wikipedia/Production_control
A control valve is a valve used to control fluid flow by varying the size of the flow passage as directed by a signal from a controller. This enables the direct control of flow rate and the consequential control of process quantities such as pressure, temperature, and liquid level. In automatic control terminology, a control valve is termed a "final control element".

== Operation ==

The opening or closing of automatic control valves is usually done by electrical, hydraulic or pneumatic actuators. Normally with a modulating valve, which can be set to any position between fully open and fully closed, valve positioners are used to ensure the valve attains the desired degree of opening. Air-actuated valves are commonly used because of their simplicity, as they only require a compressed air supply, whereas electrically operated valves require additional cabling and switch gear, and hydraulically actuated valves require high-pressure supply and return lines for the hydraulic fluid. The pneumatic control signals are traditionally based on a pressure range of 3–15 psi (0.2–1.0 bar) or, more commonly now, an electrical signal of 4–20 mA for industry, or 0–10 V for HVAC systems. Electrical control now often includes a "smart" communication signal superimposed on the 4–20 mA control current, such that the health and verification of the valve position can be signalled back to the controller. HART, Foundation Fieldbus, and Profibus are the most common protocols.

An automatic control valve consists of three main parts, each of which exists in several types and designs:
Valve actuator – moves the valve's modulating element, such as a ball or butterfly.
Valve positioner – ensures the valve has reached the desired degree of opening. This overcomes the problems of friction and wear.
Valve body – contains the modulating element: a plug, globe, ball or butterfly.

== Control action ==

Taking the example of an air-operated valve, there are two control actions possible:
"Air or current to open" – the flow restriction decreases with increased control signal value.
"Air or current to close" – the flow restriction increases with increased control signal value.

There can also be failure-to-safety modes:
"Air or control signal failure to close" – on failure of compressed air to the actuator, the valve closes under spring pressure or by backup power.
"Air or control signal failure to open" – on failure of compressed air to the actuator, the valve opens under spring pressure or by backup power.

The failure modes are requirements of the plant's failure-to-safety process control specification. In the case of cooling water, the valve may be required to fail open; in the case of delivering a chemical, to fail closed.

== Valve positioners ==

The fundamental function of a positioner is to deliver pressurized air to the valve actuator, such that the position of the valve stem or shaft corresponds to the set point from the control system. Positioners are typically used when a valve requires throttling action. A positioner requires position feedback from the valve stem or shaft and delivers pneumatic pressure to the actuator to open and close the valve. The positioner must be mounted on or near the control valve assembly. There are three main categories of positioners, depending on the type of control signal, the diagnostic capability, and the communication protocol: pneumatic, analog, and digital.
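To make the standard signal ranges described under Operation concrete, here is a minimal sketch assuming an idealized, perfectly linear "air or current to open" valve; real valves exhibit friction and nonlinearity, which is precisely what a positioner corrects. The function names are illustrative, not from any particular library.

```python
def current_to_travel_percent(milliamps: float) -> float:
    """Map a 4-20 mA control signal to 0-100% valve travel (air/current to open)."""
    if not 4.0 <= milliamps <= 20.0:
        raise ValueError("signal outside the 4-20 mA live-zero range")
    return (milliamps - 4.0) / (20.0 - 4.0) * 100.0

def travel_percent_to_psi(percent: float) -> float:
    """Equivalent pneumatic signal on the traditional 3-15 psi scale."""
    return 3.0 + percent / 100.0 * (15.0 - 3.0)

signal = 12.0                                  # mA from the controller (mid-scale)
travel = current_to_travel_percent(signal)     # -> 50.0 (% open)
print(travel, travel_percent_to_psi(travel))   # 50.0 9.0 (psi)
```

A useful property of the live zero (4 mA rather than 0 mA) is that a dead signal, such as a broken wire, is distinguishable from a legitimate command of 0% travel; an "air or current to close" valve would simply invert the first mapping.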
=== Pneumatic positioners ===

Processing units may use pneumatic pressure signaling as the control set point to the control valves. Pressure is typically modulated between 20.7 and 103 kPa (3 to 15 psig) to move the valve from 0 to 100% position. In a common pneumatic positioner, the position of the valve stem or shaft is compared with the position of a bellows that receives the pneumatic control signal. When the input signal increases, the bellows expands and moves a beam. The beam pivots about an input axis, which moves a flapper closer to the nozzle. The nozzle pressure increases, which increases the output pressure to the actuator through a pneumatic amplifier relay. The increased output pressure to the actuator causes the valve stem to move. Stem movement is fed back to the beam by means of a cam. As the cam rotates, the beam pivots about the feedback axis to move the flapper slightly away from the nozzle. The nozzle pressure decreases and reduces the output pressure to the actuator. Stem movement continues, backing the flapper away from the nozzle until equilibrium is reached. When the input signal decreases, the bellows contracts (aided by an internal range spring) and the beam pivots about the input axis to move the flapper away from the nozzle. The nozzle pressure decreases and the relay permits the release of diaphragm casing pressure to the atmosphere, which allows the actuator stem to move upward. Through the cam, stem movement is fed back to the beam to reposition the flapper closer to the nozzle. When equilibrium conditions are obtained, stem movement stops and the flapper is positioned to prevent any further decrease in actuator pressure.

=== Analog positioners ===

The second type of positioner is an analog I/P positioner. Most modern processing units use a 4 to 20 mA DC signal to modulate the control valves. This introduces electronics into the positioner design and requires that the positioner convert the electronic current signal into a pneumatic pressure signal (current-to-pneumatic, or I/P). In a typical analog I/P positioner, the converter receives a DC input signal and provides a proportional pneumatic output signal through a nozzle/flapper arrangement. The pneumatic output signal provides the input signal to the pneumatic positioner; otherwise, the design is the same as the pneumatic positioner.

=== Digital positioners ===

While pneumatic positioners and analog I/P positioners provide basic valve position control, digital valve controllers add another dimension to positioner capabilities. This type of positioner is a microprocessor-based instrument. The microprocessor enables diagnostics and two-way communication to simplify setup and troubleshooting. In a typical digital valve controller, the control signal is read by the microprocessor, processed by a digital algorithm, and converted into a drive current signal to the I/P converter. The microprocessor performs the position control algorithm, rather than a mechanical beam, cam, and flapper assembly. As the control signal increases, the drive signal to the I/P converter increases, increasing the output pressure from the I/P converter. This pressure is routed to a pneumatic amplifier relay, which provides two output pressures to the actuator. With increasing control signal, one output pressure always increases and the other output pressure decreases. Double-acting actuators use both outputs, whereas single-acting actuators use only one output. The changing output pressure causes the actuator stem or shaft to move.
Valve position is fed back to the microprocessor. The stem continues to move until the correct position is attained, at which point the microprocessor stabilizes the drive signal to the I/P converter until equilibrium is obtained. In addition to controlling the position of the valve, a digital valve controller has two additional capabilities: diagnostics and two-way digital communication. Widely used communication protocols include HART, FOUNDATION fieldbus, and PROFIBUS.

Advantages of placing a smart positioner on a control valve:
Automatic calibration and configuration of the positioner.
Real-time diagnostics.
Reduced cost of loop commissioning, including installation and calibration.
Use of diagnostics to maintain loop performance levels.
Improved process control accuracy that reduces process variability.

== Types of control valve ==

Control valves are classified by attributes and features.

=== Based on the pressure drop profile ===
High recovery valve: these valves typically regain most of the static pressure drop from the inlet to the vena contracta at the outlet. They are characterised by a lower recovery coefficient. Examples: butterfly valve, ball valve, plug valve, gate valve.
Low recovery valve: these valves typically regain little of the static pressure drop from the inlet to the vena contracta at the outlet. They are characterised by a higher recovery coefficient. Examples: globe valve, angle valve.

=== Based on the movement profile of the controlling element ===
Sliding stem: the valve stem/plug moves in a linear, or straight-line, motion. Examples: globe valve, angle valve, wedge-type gate valve.
Rotary valve: the valve disc rotates. Examples: butterfly valve, ball valve.

=== Based on the functionality ===
Control valve: controls flow parameters proportionally to an input signal received from the central control system. Examples: globe valve, angle valve, ball valve.
Shut-off / on-off valve: these valves are either completely open or closed. Examples: gate valve, ball valve, globe valve, angle valve, pinch valve, diaphragm valve.
Check valve: allows flow only in a single direction.
Steam conditioning valve: regulates the pressure and temperature of the inlet media to the required parameters at the outlet. Examples: turbine bypass valve, process steam letdown station.
Spring-loaded safety valve: closed by the force of a spring, which retracts to open when the inlet pressure equals the spring force.

=== Based on the actuating medium ===
Manual valve: actuated by a handwheel.
Pneumatic valve: actuated using a compressible medium such as air, hydrocarbon, or nitrogen, with a spring-diaphragm, piston-cylinder, or piston-spring type actuator.
Hydraulic valve: actuated by a non-compressible medium such as water or oil.
Electric valve: actuated by an electric motor.

A wide variety of valve types and control operations exist. However, there are two main forms of action, the sliding stem and the rotary. The most common and versatile types of control valves are sliding-stem globe, V-notch ball, butterfly, and angle types. Their popularity derives from rugged construction and the many options available that make them suitable for a variety of process applications.
Control valve bodies may be categorized as below:

=== List of common types of control valve ===

Sliding stem:
Globe valve – Flow control device
Angle body valve
Angle seat piston valve
Axial flow valve

Rotary:
Butterfly valve – Flow control device
Ball valve – Flow control device

Other:
Pinch valve – Valve closed by squeezing a tube
Diaphragm valve – Flow control device

== See also ==

== References ==

== External links ==
Control Valve Handbook
Fluid Control Research Institute
Valve World Magazine
New era of valve design and engineering
Machine learning based Valve Design Application
Wikipedia/Control_valves
Systems engineering is an interdisciplinary field of engineering and engineering management that focuses on how to design, integrate, and manage complex systems over their life cycles. At its core, systems engineering utilizes systems thinking principles to organize this body of knowledge. The individual outcome of such efforts, an engineered system, can be defined as a combination of components that work in synergy to collectively perform a useful function. Issues such as requirements engineering, reliability, logistics, coordination of different teams, testing and evaluation, maintainability, and many other disciplines, aka "ilities", necessary for successful system design, development, implementation, and ultimate decommission become more difficult when dealing with large or complex projects. Systems engineering deals with work processes, optimization methods, and risk management tools in such projects. It overlaps technical and human-centered disciplines such as industrial engineering, production systems engineering, process systems engineering, mechanical engineering, manufacturing engineering, production engineering, control engineering, software engineering, electrical engineering, cybernetics, aerospace engineering, organizational studies, civil engineering and project management. Systems engineering ensures that all likely aspects of a project or system are considered and integrated into a whole. The systems engineering process is a discovery process that is quite unlike a manufacturing process. A manufacturing process is focused on repetitive activities that achieve high-quality outputs with minimum cost and time. The systems engineering process must begin by discovering the real problems that need to be resolved and identifying the most probable or highest-impact failures that can occur. Systems engineering involves finding solutions to these problems. == History == The term systems engineering can be traced back to Bell Telephone Laboratories in the 1940s. The need to identify and manipulate the properties of a system as a whole, which in complex engineering projects may greatly differ from the sum of the parts' properties, motivated various industries, especially those developing systems for the U.S. military, to apply the discipline. When it was no longer possible to rely on design evolution to improve upon a system and the existing tools were not sufficient to meet growing demands, new methods began to be developed that addressed the complexity directly. The continuing evolution of systems engineering comprises the development and identification of new methods and modeling techniques. These methods aid in a better comprehension of the design and developmental control of engineering systems as they grow more complex. Popular tools that are often used in the systems engineering context were developed during these times, including Universal Systems Language (USL), Unified Modeling Language (UML), Quality function deployment (QFD), and Integration Definition (IDEF). In 1990, a professional society for systems engineering, the National Council on Systems Engineering (NCOSE), was founded by representatives from a number of U.S. corporations and organizations. NCOSE was created to address the need for improvements in systems engineering practices and education. As a result of growing involvement from systems engineers outside of the U.S., the name of the organization was changed to the International Council on Systems Engineering (INCOSE) in 1995. 
Schools in several countries offer graduate programs in systems engineering, and continuing education options are also available for practicing engineers.

== Concept ==

Systems engineering signifies only an approach and, more recently, a discipline in engineering. The aim of education in systems engineering is to formalize various approaches simply and, in doing so, identify new methods and research opportunities, similar to what occurs in other fields of engineering. As an approach, systems engineering is holistic and interdisciplinary in flavor.

=== Origins and traditional scope ===

The traditional scope of engineering embraces the conception, design, development, production, and operation of physical systems. Systems engineering, as originally conceived, falls within this scope. "Systems engineering", in this sense of the term, refers to the building of engineering concepts.

=== Evolution to a broader scope ===

The use of the term "systems engineer" has evolved over time to embrace a wider, more holistic concept of "systems" and of engineering processes. This evolution of the definition has been a subject of ongoing controversy, and the term continues to apply to both the narrower and the broader scope. Traditional systems engineering was seen as a branch of engineering in the classical sense, that is, as applied only to physical systems, such as spacecraft and aircraft. More recently, systems engineering has evolved to take on a broader meaning, especially when humans are seen as an essential component of a system. Peter Checkland, for example, captures the broader meaning of systems engineering by stating that 'engineering' "can be read in its general sense; you can engineer a meeting or a political agreement."

Consistent with the broader scope of systems engineering, the Systems Engineering Body of Knowledge (SEBoK) has defined three types of systems engineering:
Product Systems Engineering (PSE) is the traditional systems engineering, focused on the design of physical systems consisting of hardware and software.
Enterprise Systems Engineering (ESE) pertains to the view of enterprises, that is, organizations or combinations of organizations, as systems.
Service Systems Engineering (SSE) has to do with the engineering of service systems. Checkland defines a service system as a system which is conceived as serving another system. Most civil infrastructure systems are service systems.

=== Holistic view ===

Systems engineering focuses on analyzing and eliciting customer needs and required functionality early in the development cycle, documenting requirements, then proceeding with design synthesis and system validation while considering the complete problem, the system lifecycle. This includes fully understanding all of the stakeholders involved. Oliver et al. claim that the systems engineering process can be decomposed into:
a Systems Engineering Technical Process, and
a Systems Engineering Management Process.
Within Oliver's model, the goal of the Management Process is to organize the technical effort in the lifecycle, while the Technical Process includes assessing available information, defining effectiveness measures, creating a behavior model, creating a structure model, performing trade-off analysis, and creating a sequential build and test plan. Although several models are used in industry, depending on the application, all of them aim to identify the relation between the various stages mentioned above and to incorporate feedback.
Examples of such models include the waterfall model and the VEE model (also called the V model).

=== Interdisciplinary field ===

System development often requires contribution from diverse technical disciplines. By providing a systems (holistic) view of the development effort, systems engineering helps mold all the technical contributors into a unified team effort, forming a structured development process that proceeds from concept to production to operation and, in some cases, to termination and disposal. In an acquisition, the holistic integrative discipline combines contributions and balances tradeoffs among cost, schedule, and performance while maintaining an acceptable level of risk covering the entire life cycle of the item. This perspective is often replicated in educational programs, in that systems engineering courses are taught by faculty from other engineering departments, which helps create an interdisciplinary environment.

=== Managing complexity ===

The need for systems engineering arose with the increase in complexity of systems and projects, which in turn exponentially increases the possibility of component friction, and therefore the unreliability of the design. When speaking in this context, complexity incorporates not only engineering systems but also the logical human organization of data. At the same time, a system can become more complex due to an increase in size as well as with an increase in the amount of data, variables, or the number of fields that are involved in the design. The International Space Station is an example of such a system. The development of smarter control algorithms, microprocessor design, and analysis of environmental systems also come within the purview of systems engineering. Systems engineering encourages the use of tools and methods to better comprehend and manage complexity in systems. Some examples of these tools are:
System architecture
System model, modeling, and simulation
Mathematical optimization
System dynamics
Systems analysis
Statistical analysis
Reliability engineering
Decision making

Taking an interdisciplinary approach to engineering systems is inherently complex, since the behavior of and interaction among system components are not always immediately well defined or understood. Defining and characterizing such systems and subsystems and the interactions among them is one of the goals of systems engineering. In doing so, the gap that exists between informal requirements from users, operators, and marketing organizations and technical specifications is successfully bridged.

=== Scope ===

The principles of systems engineering – holism, emergent behavior, boundary, et al. – can be applied to any system, complex or otherwise, provided systems thinking is employed at all levels. Besides defense and aerospace, many information- and technology-based companies, software development firms, and industries in the field of electronics and communications require systems engineers as part of their team. An analysis by the INCOSE Systems Engineering Center of Excellence (SECOE) indicates that optimal effort spent on systems engineering is about 15–20% of the total project effort. At the same time, studies have shown that systems engineering essentially leads to a reduction in costs, among other benefits. However, no quantitative survey at a larger scale, encompassing a wide variety of industries, had been conducted until recently. Such studies are underway to determine the effectiveness and quantify the benefits of systems engineering.
Systems engineering encourages the use of modeling and simulation to validate assumptions or theories about systems and the interactions within them. Methods that allow the early detection of possible failures, from safety engineering, are integrated into the design process. At the same time, decisions made at the beginning of a project whose consequences are not clearly understood can have enormous implications later in the life of a system, and it is the task of the modern systems engineer to explore these issues and make critical decisions. No method guarantees that today's decisions will still be valid when a system goes into service years or decades after it is first conceived. However, there are techniques that support the process of systems engineering. Examples include soft systems methodology, Jay Wright Forrester's system dynamics method, and the Unified Modeling Language (UML), all currently being explored, evaluated, and developed to support the engineering decision process.

== Education ==

Education in systems engineering is often seen as an extension to the regular engineering courses, reflecting the industry attitude that engineering students need a foundational background in one of the traditional engineering disciplines (e.g. aerospace engineering, civil engineering, electrical engineering, mechanical engineering, manufacturing engineering, industrial engineering, chemical engineering), plus practical, real-world experience, to be effective as systems engineers. Undergraduate university programs explicitly in systems engineering are growing in number but remain uncommon; degrees including such material are most often presented as a BS in industrial engineering. Typically, programs (either by themselves or in combination with interdisciplinary study) are offered beginning at the graduate level in both academic and professional tracks, resulting in the grant of either a MS/MEng or Ph.D./EngD degree. INCOSE, in collaboration with the Systems Engineering Research Center at Stevens Institute of Technology, maintains a regularly updated directory of worldwide academic programs at suitably accredited institutions. As of 2017, it lists over 140 universities in North America offering more than 400 undergraduate and graduate programs in systems engineering. Widespread institutional acknowledgment of the field as a distinct subdiscipline is quite recent; the 2009 edition of the same publication reported the number of such schools and programs at only 80 and 165, respectively. Education in systems engineering can be systems-centric or domain-centric:
Systems-centric programs treat systems engineering as a separate discipline, and most of the courses are taught focusing on systems engineering principles and practice.
Domain-centric programs offer systems engineering as an option that can be exercised with another major field in engineering.
Both of these patterns strive to educate the systems engineer who is able to oversee interdisciplinary projects with the depth required of a core engineer.

== Systems engineering topics ==

Systems engineering tools are strategies, procedures, and techniques that aid in performing systems engineering on a project or product. The purpose of these tools varies from database management, graphical browsing, simulation, and reasoning to document production, neutral import/export, and more.

=== System ===

There are many definitions of what a system is in the field of systems engineering.
Below are a few authoritative definitions:
ANSI/EIA-632-1999: "An aggregation of end products and enabling products to achieve a given purpose."
DAU Systems Engineering Fundamentals: "an integrated composite of people, products, and processes that provide a capability to satisfy a stated need or objective."
IEEE Std 1220-1998: "A set or arrangement of elements and processes that are related and whose behavior satisfies customer/operational needs and provides for life cycle sustainment of the products."
INCOSE Systems Engineering Handbook: "homogeneous entity that exhibits predefined behavior in the real world and is composed of heterogeneous parts that do not individually exhibit that behavior and an integrated configuration of components and/or subsystems."
INCOSE: "A system is a construct or collection of different elements that together produce results not obtainable by the elements alone. The elements, or parts, can include people, hardware, software, facilities, policies, and documents; that is, all things required to produce systems-level results. The results include system-level qualities, properties, characteristics, functions, behavior, and performance. The value added by the system as a whole, beyond that contributed independently by the parts, is primarily created by the relationship among the parts; that is, how they are interconnected."
ISO/IEC 15288:2008: "A combination of interacting elements organized to achieve one or more stated purposes."
NASA Systems Engineering Handbook: "(1) The combination of elements that function together to produce the capability to meet a need. The elements include all hardware, software, equipment, facilities, personnel, processes, and procedures needed for this purpose. (2) The end product (which performs operational functions) and enabling products (which provide life-cycle support services to the operational end products) that make up a system."

=== Systems engineering processes ===

Systems engineering processes encompass all creative, manual, and technical activities necessary to define the product and which need to be carried out to convert a system definition to a sufficiently detailed system design specification for product manufacture and deployment. Design and development of a system can be divided into four stages, each with a different definition:
Task definition (informative definition)
Conceptual stage (cardinal definition)
Design stage (formative definition)
Implementation stage (manufacturing definition)
Depending on their application, different tools are used for the various stages of the systems engineering process.

=== Using models ===

Models play important and diverse roles in systems engineering. A model can be defined in several ways, including:
An abstraction of reality designed to answer specific questions about the real world;
An imitation, analog, or representation of a real-world process or structure; or
A conceptual, mathematical, or physical tool to assist a decision-maker.
Together, these definitions are broad enough to encompass physical engineering models used in the verification of a system design, as well as schematic models like a functional flow block diagram and mathematical (i.e. quantitative) models used in the trade study process. This section focuses on the last. The main reason for using mathematical models and diagrams in trade studies is to provide estimates of system effectiveness, performance or technical attributes, and cost from a set of known or estimable quantities.
Typically, a collection of separate models is needed to provide all of these outcome variables. The heart of any mathematical model is a set of meaningful quantitative relationships among its inputs and outputs. These relationships can be as simple as adding up constituent quantities to obtain a total, or as complex as a set of differential equations describing the trajectory of a spacecraft in a gravitational field. Ideally, the relationships express causality, not just correlation. Furthermore, the methods by which these models are efficiently and effectively managed and used to simulate the systems are also key to successful systems engineering activities. However, diverse domains often present recurring problems of modeling and simulation for systems engineering, and new advancements aim to cross-fertilize methods among distinct scientific and engineering communities, under the title of 'Modeling & Simulation-based Systems Engineering'.

=== Modeling formalisms and graphical representations ===

Initially, when the primary purpose of a systems engineer is to comprehend a complex problem, graphic representations of a system are used to communicate a system's functional and data requirements. Common graphical representations include:
Functional flow block diagram (FFBD)
Model-based design
Data flow diagram (DFD)
N2 chart
IDEF0 diagram
Use case diagram
Sequence diagram
Block diagram
Signal-flow graph
USL function maps and type maps
Enterprise architecture frameworks

A graphical representation relates the various subsystems or parts of a system through functions, data, or interfaces. Any or each of the above methods is used in an industry based on its requirements. For instance, the N2 chart may be used where interfaces between systems are important. Part of the design phase is to create structural and behavioral models of the system. Once the requirements are understood, it is the responsibility of a systems engineer to refine them and to determine, along with other engineers, the best technology for the job. At this point, starting with a trade study, systems engineering encourages the use of weighted choices to determine the best option. A decision matrix, or Pugh method, is one way (QFD is another) to make this choice while considering all criteria that are important. The trade study in turn informs the design, which again affects the graphic representations of the system (without changing the requirements). In an SE process, this stage represents the iterative step that is carried out until a feasible solution is found. A decision matrix is often populated using techniques such as statistical analysis, reliability analysis, system dynamics (feedback control), and optimization methods.

=== Other tools ===

==== Systems Modeling Language ====

Systems Modeling Language (SysML), a modeling language used for systems engineering applications, supports the specification, analysis, design, verification, and validation of a broad range of complex systems.

==== Lifecycle Modeling Language ====

Lifecycle Modeling Language (LML) is an open-standard modeling language designed for systems engineering that supports the full lifecycle: conceptual, utilization, support, and retirement stages.

== Related fields and sub-fields ==

Many related fields may be considered tightly coupled to systems engineering.
The following areas have contributed to the development of systems engineering as a distinct entity:

=== Cognitive systems engineering ===

Cognitive systems engineering (CSE) is a specific approach to the description and analysis of human-machine systems or sociotechnical systems. The three main themes of CSE are how humans cope with complexity, how work is accomplished by the use of artifacts, and how human-machine systems and socio-technical systems can be described as joint cognitive systems. CSE has since its beginning become a recognized scientific discipline, sometimes also referred to as cognitive engineering. The concept of a joint cognitive system (JCS) has in particular become widely used as a way of understanding how complex socio-technical systems can be described with varying degrees of resolution. The more than 20 years of experience with CSE has been described extensively.

=== Configuration management ===

Like systems engineering, configuration management as practiced in the defense and aerospace industry is a broad systems-level practice. The field parallels the taskings of systems engineering: where systems engineering deals with requirements development, allocation to development items, and verification, configuration management deals with requirements capture, traceability to the development item, and audit of the development item to ensure that it has achieved the desired functionality and outcomes that systems engineering and/or test and verification engineering have obtained and proven through objective testing.

=== Control engineering ===

Control engineering, with its design and implementation of control systems, used extensively in nearly every industry, is a large sub-field of systems engineering. The cruise control on an automobile and the guidance system for a ballistic missile are two examples. Control systems theory is an active field of applied mathematics involving the investigation of solution spaces and the development of new methods for the analysis of the control process.

=== Industrial engineering ===

Industrial engineering is a branch of engineering that concerns the development, improvement, implementation, and evaluation of integrated systems of people, money, knowledge, information, equipment, energy, material, and process. Industrial engineering draws upon the principles and methods of engineering analysis and synthesis, as well as the mathematical, physical, and social sciences, together with the principles and methods of engineering analysis and design, to specify, predict, and evaluate the results obtained from such systems.

=== Production Systems Engineering ===

Production Systems Engineering (PSE) is an emerging branch of engineering intended to uncover fundamental principles of production systems and utilize them for analysis, continuous improvement, and design.

=== Interface design ===

Interface design and its specification are concerned with assuring that the pieces of a system connect and inter-operate with other parts of the system and with external systems as necessary. Interface design also includes assuring that system interfaces are able to accept new features, including mechanical, electrical, and logical interfaces, including reserved wires, plug-space, command codes, and bits in communication protocols. This is known as extensibility. Human-computer interaction (HCI) or the human-machine interface (HMI) is another aspect of interface design and is a critical aspect of modern systems engineering.
Systems engineering principles are applied in the design of communication protocols for local area networks and wide area networks.

=== Mechatronic engineering ===

Mechatronic engineering, like systems engineering, is a multidisciplinary field of engineering that uses dynamic systems modeling to express tangible constructs. In that regard, it is almost indistinguishable from systems engineering, but what sets it apart is the focus on smaller details rather than larger generalizations and relationships. As such, both fields are distinguished by the scope of their projects rather than the methodology of their practice.

=== Operations research ===

Operations research supports systems engineering. Operations research, briefly, is concerned with the optimization of a process under multiple constraints.

=== Performance engineering ===

Performance engineering is the discipline of ensuring a system meets customer expectations for performance throughout its life. Performance is usually defined as the speed with which a certain operation is executed, or the capability of executing a number of such operations in a unit of time. Performance may be degraded when operations queued to execute are throttled by limited system capacity. For example, the performance of a packet-switched network is characterized by the end-to-end packet transit delay or the number of packets switched in an hour. The design of high-performance systems uses analytical or simulation modeling, whereas the delivery of high-performance implementations involves thorough performance testing. Performance engineering relies heavily on statistics, queueing theory, and probability theory for its tools and processes.

=== Program management and project management ===

Program management (or project management) has many similarities with systems engineering, but has broader-based origins than the engineering ones of systems engineering. Project management is also closely related to both program management and systems engineering. Both include scheduling as an engineering support tool for assessing interdisciplinary concerns under the management process. In particular, the direct relationship of resources, performance features, and risk to the duration of a task, and the dependency links among tasks and their impacts across the system lifecycle, are systems engineering concerns.

=== Proposal engineering ===

Proposal engineering is the application of scientific and mathematical principles to design, construct, and operate a cost-effective proposal development system. Basically, proposal engineering uses the "systems engineering process" to create a cost-effective proposal and increase the odds of a successful proposal.

=== Reliability engineering ===

Reliability engineering is the discipline of ensuring a system meets customer expectations for reliability throughout its life (i.e. it does not fail more frequently than expected). Next to the prediction of failure, it is just as much about the prevention of failure. Reliability engineering applies to all aspects of the system. It is closely associated with maintainability, availability (dependability, or RAMS, preferred by some), and integrated logistics support. Reliability engineering is always a critical component of safety engineering, as in failure mode and effects analysis (FMEA) and hazard fault tree analysis, and of security engineering.

=== Risk management ===

Risk management, the practice of assessing and dealing with risk, is one of the interdisciplinary parts of systems engineering.
In development, acquisition, or operational activities, the inclusion of risk in tradeoffs with cost, schedule, and performance features involves iterative and complex configuration management, traceability, and evaluation across the scheduling and requirements management of multiple domains over the system lifecycle, which requires the interdisciplinary technical approach of systems engineering. In systems engineering, risk management defines, tailors, implements, and monitors a structured process for risk management, integrated into the overall effort.

=== Safety engineering ===

The techniques of safety engineering may be applied by non-specialist engineers in designing complex systems to minimize the probability of safety-critical failures. The "System Safety Engineering" function helps to identify "safety hazards" in emerging designs and may assist with techniques to "mitigate" the effects of (potentially) hazardous conditions that cannot be designed out of systems.

=== Security engineering ===

Security engineering can be viewed as an interdisciplinary field that integrates the community of practice for control systems design, reliability, safety, and systems engineering. It may involve such sub-specialties as authentication of system users, system targets, and others: people, objects, and processes.

=== Software engineering ===

From its beginnings, software engineering has helped shape modern systems engineering practice. The techniques used in the handling of the complexities of large software-intensive systems have had a major effect on the shaping and reshaping of the tools, methods, and processes of systems engineering.

== See also ==

== References ==

== Further reading ==

Madhavan, Guru (2024). Wicked Problems: How to Engineer a Better World. New York: W.W. Norton & Company. ISBN 978-0-393-65146-1.
Blockley, D., Godfrey, P. Doing it Differently: Systems for Rethinking Infrastructure, Second Edition. ICE Publications, London, 2017.
Buede, D.M., Miller, W.D. The Engineering Design of Systems: Models and Methods, Third Edition. John Wiley and Sons, 2016.
Chestnut, H. Systems Engineering Methods. Wiley, 1967.
Gianni, D., et al. (eds.). Modeling and Simulation-Based Systems Engineering Handbook. CRC Press, 2014.
Goode, H.H., Machol, Robert E. System Engineering: An Introduction to the Design of Large-scale Systems. McGraw-Hill, 1957.
Hitchins, D. (1997). World Class Systems Engineering, at hitchins.net.
Lienig, J., Bruemmer, H. Fundamentals of Electronic Systems Design. Springer, 2017. ISBN 978-3-319-55839-4.
Malakooti, B. (2013). Operations and Production Systems with Multiple Objectives. John Wiley & Sons. ISBN 978-1-118-58537-5.
MITRE. The MITRE Systems Engineering Guide (pdf).
NASA (2007). Systems Engineering Handbook, NASA/SP-2007-6105 Rev1, December 2007.
NASA (2013). NASA Systems Engineering Processes and Requirements, NPR 7123.1B, April 2013. Archived 27 December 2016 at the Wayback Machine.
Oliver, D.W., et al. Engineering Complex Systems with Models and Objects. McGraw-Hill, 1997.
Parnell, G.S., Driscoll, P.J., Henderson, D.L. (eds.). Decision Making in Systems Engineering and Management, 2nd ed. Hoboken, NJ: Wiley, 2011. This is a textbook for undergraduate students of engineering.
Ramo, S., St.Clair, R.K. The Systems Approach: Fresh Solutions to Complex Problems Through Combining Science and Practical Common Sense. Anaheim, CA: KNI, Inc., 1998.
Sage, A.P. Systems Engineering. Wiley IEEE, 1992. ISBN 0-471-53639-3.
Sage, A.P., Olson, S.R., Modeling and Simulation in Systems Engineering, 2001. SEBOK.org, Systems Engineering Body of Knowledge (SEBoK) Shermon, D. Systems Cost Engineering, Gower Publishing, 2009 Shishko, R., et al. (2005) NASA Systems Engineering Handbook. NASA Center for AeroSpace Information, 2005. Stevens, R., et al. Systems Engineering: Coping with Complexity. Prentice Hall, 1998. US Air Force, SMC Systems Engineering Primer & Handbook, 2004 US DoD Systems Management College (2001) Systems Engineering Fundamentals. Defense Acquisition University Press, 2001 US DoD Guide for Integrating Systems Engineering into DoD Acquisition Contracts Archived 29 August 2017 at the Wayback Machine, 2006 US DoD MIL-STD-499 System Engineering Management == External links == ICSEng homepage INCOSE homepage INCOSE UK homepage PPI SE Goldmine homepage Systems Engineering Body of Knowledge Systems Engineering Tools AcqNotes DoD Systems Engineering Overview NDIA Systems Engineering Division
Wikipedia/Systems_Engineering
The concept of multi-use simulation models relates to the notion of pre-designed templates that are developed for use in simulation projects that simulate repetitive activities. These models can be perceived as “building blocks” designed for a specific purpose. The chief objective of this concept is to facilitate the conceptualisation and understanding of the simulation model by non-specialists. In practice, the concept can be implemented in different contexts, mainly in the construction industry, as those who work on the development of a simulation project are interested in the efficiency and effectiveness of their simulation model rather than its underpinning mathematical complexity. The development of multi-use simulation models has been addressed by several studies during the last two decades. The first significant attempt to make a general-purpose simulation was presented by Hajjar and AbouRizk (1997), who developed a general-purpose simulation model, based on reusable templates, for the estimation and planning of earth-moving operations. Other research used reusable templates to prepare draft schedules for the construction of similar projects; the templates were made to encode much of the knowledge dealing with activity scoping and sequencing and the use of special activities in planning structures. Similarly, an interactive simulation system designed to be adopted by beginner-level users was developed in 1999. This system was designed to provide an easily accessed and manipulable environment for studying, analyzing, and simulating construction processes, and it used a simple and attractive graphical user interface to overcome the potential resistance of users to simulation as an analytical tool.

== Notes ==
Wikipedia/Multi-use_simulation_models
Orbit modeling is the process of creating mathematical models to simulate the motion of a massive body as it moves in orbit around another massive body due to gravity. Other forces, such as gravitational attraction from tertiary bodies, air resistance, solar pressure, or thrust from a propulsion system, are typically modeled as secondary effects. Directly modeling an orbit can push the limits of machine precision, due to the need to model small perturbations to very large orbits; because of this, perturbation methods are often used to model the orbit in order to achieve better accuracy.

== Background ==

The study of orbital motion and the mathematical modeling of orbits began with the first attempts to predict planetary motions in the sky, although in ancient times the causes remained a mystery. Newton, at the time he formulated his laws of motion and of gravitation, applied them to the first analysis of perturbations, recognizing the complex difficulties of their calculation. Many of the great mathematicians since then have given attention to the various problems involved; throughout the 18th and 19th centuries there was demand for accurate tables of the position of the Moon and planets for purposes of navigation at sea.

The complex motions of orbits can be broken down. The hypothetical motion that the body follows under the gravitational effect of one other body only is typically a conic section and can be readily modeled with the methods of geometry. This is called a two-body problem, or an unperturbed Keplerian orbit. The differences between the Keplerian orbit and the actual motion of the body are caused by perturbations, which arise from forces other than the gravitational attraction between the primary and secondary body and which must be modeled to create an accurate orbit simulation. Most orbit modeling approaches model the two-body problem and then add models of these perturbing forces and simulate them over time. Perturbing forces may include gravitational attraction from other bodies besides the primary, solar wind, drag, magnetic fields, and propulsive forces.

Analytical solutions (mathematical expressions to predict the positions and motions at any future time) exist for simple two-body and three-body problems; none have been found for the n-body problem, except for certain special cases. Even the two-body problem becomes insoluble if one of the bodies is irregular in shape. Due to the difficulty of finding analytic solutions to most problems of interest, computer modeling and simulation are typically used to analyze orbital motion. A wide variety of software is available to simulate orbits and trajectories of spacecraft.

== Keplerian orbit model ==

In its simplest form, an orbit model can be created by assuming that only two bodies are involved, that both behave as spherical point masses, and that no other forces act on the bodies. For this case the model is simplified to a Kepler orbit. Keplerian orbits follow conic sections.
The mathematical model of the orbit which gives the distance between a central body and an orbiting body can be expressed as:

{\displaystyle r(\nu )={\frac {a(1-e^{2})}{1+e\cos(\nu )}}}

where r is the distance, a is the semi-major axis, which defines the size of the orbit, e is the eccentricity, which defines the shape of the orbit, and ν is the true anomaly, which is the angle between the current position of the orbiting object and the location in the orbit at which it is closest to the central body (called the periapsis). Alternately, the equation can be expressed as:

{\displaystyle r(\nu )={\frac {p}{1+e\cos(\nu )}}}

where p is called the semi-latus rectum of the curve. This form of the equation is particularly useful when dealing with parabolic trajectories, for which the semi-major axis is infinite. An alternate approach uses Isaac Newton's law of universal gravitation as defined below:

{\displaystyle F=G{\frac {m_{1}m_{2}}{r^{2}}}}

where F is the magnitude of the gravitational force between the two point masses, G is the gravitational constant, m₁ is the mass of the first point mass, m₂ is the mass of the second point mass, and r is the distance between the two point masses. Making the additional assumption that the mass of the primary body is much greater than the mass of the secondary body, and substituting in Newton's second law of motion, results in the following differential equation:

{\displaystyle {\ddot {\mathbf {r} }}=-{\frac {Gm_{1}}{r^{2}}}\mathbf {\hat {r}} }

Solving this differential equation results in Keplerian motion for an orbit. In practice, Keplerian orbits are typically only useful for first-order approximations, special cases, or as the base model for a perturbed orbit. == Orbit simulation methods == Orbit models are typically propagated in time and space using special perturbation methods. This is performed by first modeling the orbit as a Keplerian orbit. Then perturbations are added to the model to account for the various perturbing forces that affect the orbit. Special perturbations can be applied to any problem in celestial mechanics, as the method is not limited to cases where the perturbing forces are small. Special perturbation methods are the basis of the most accurate machine-generated planetary ephemerides (see, for instance, the Jet Propulsion Laboratory Development Ephemeris).
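The conic-section equation above translates directly into code. Below is a minimal Python sketch (not part of the original article; the function and variable names are illustrative) that evaluates the Keplerian distance r(ν) for an elliptical orbit, the unperturbed baseline that the special perturbation methods described here start from.

import math

def kepler_radius(a: float, e: float, nu: float) -> float:
    """Distance from the central body at true anomaly nu (radians),
    using the conic-section equation r = a(1 - e^2) / (1 + e cos nu).
    Valid for elliptical orbits (0 <= e < 1), where a is finite."""
    p = a * (1.0 - e**2)          # semi-latus rectum
    return p / (1.0 + e * math.cos(nu))

# Example: an elliptical Earth orbit with a = 10,000 km, e = 0.2.
a, e = 10_000.0, 0.2
for nu_deg in (0, 90, 180):       # periapsis, quarter orbit, apoapsis
    r = kepler_radius(a, e, math.radians(nu_deg))
    print(f"nu = {nu_deg:3d} deg -> r = {r:8.1f} km")

For a parabolic trajectory one would work with the semi-latus rectum p directly, as noted above, since the semi-major axis is infinite.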
=== Cowell's method === Cowell's method is a special perturbation method; mathematically, for n mutually interacting bodies, the Newtonian forces on body i from the other bodies j are simply summed thus:

{\displaystyle \mathbf {\ddot {r}} _{i}=\sum _{\underset {j\neq i}{j=1}}^{n}{\frac {Gm_{j}(\mathbf {r} _{j}-\mathbf {r} _{i})}{r_{ij}^{3}}}}

where r̈_i is the acceleration vector of body i, G is the gravitational constant, m_j is the mass of body j, r_i and r_j are the position vectors of objects i and j, and r_ij is the distance from object i to object j, with all vectors referred to the barycenter of the system. This equation is resolved into components in x, y, z, and these are integrated numerically to form the new velocity and position vectors as the simulation moves forward in time. The advantage of Cowell's method is ease of application and programming. A disadvantage is that when perturbations become large in magnitude (as when an object makes a close approach to another) the errors of the method also become large. Another disadvantage is that in systems with a dominant central body, such as the Sun, it is necessary to carry many significant digits in the arithmetic because of the large difference between the forces of the central body and those of the perturbing bodies.
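Because Cowell's method is just a pairwise sum of accelerations followed by numerical integration, it is short to sketch. The following Python fragment is illustrative only (the function names, array layout, and the use of a semi-implicit Euler step are assumptions; production code would use a higher-order integrator):

import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accelerations(masses, positions):
    """Cowell's method: for each body i, sum the Newtonian accelerations
    G*m_j*(r_j - r_i)/|r_j - r_i|^3 over all other bodies j."""
    n = len(masses)
    acc = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dr = positions[j] - positions[i]
            acc[i] += G * masses[j] * dr / np.linalg.norm(dr) ** 3
    return acc

def step(masses, positions, velocities, dt):
    """One semi-implicit Euler step of the summed equations of motion."""
    velocities = velocities + accelerations(masses, positions) * dt
    positions = positions + velocities * dt
    return positions, velocities

The disadvantage noted above shows up directly here: with a dominant central body, the perturbing contributions to acc[i] are many orders of magnitude smaller than the central term, so much of the available precision is consumed representing the dominant term.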
=== Encke's method === Encke's method begins with the osculating orbit as a reference and integrates numerically to solve for the variation from the reference as a function of time. Its advantages are that perturbations are generally small in magnitude, so the integration can proceed in larger steps (with resulting lesser errors), and the method is much less affected by extreme perturbations than Cowell's method. Its disadvantage is complexity; it cannot be used indefinitely without occasionally updating the osculating orbit and continuing from there, a process known as rectification. Letting ρ be the radius vector of the osculating orbit, r the radius vector of the perturbed orbit, and δr = r − ρ the variation from the osculating orbit, the equation of motion of δr is simply:

{\displaystyle \delta {\ddot {\mathbf {r} }}={\ddot {\mathbf {r} }}-{\ddot {\boldsymbol {\rho }}}\qquad (2)}

where r̈ and ρ̈ are just the equations of motion of r and ρ:

{\displaystyle {\ddot {\mathbf {r} }}=\mathbf {a} _{\text{per}}-{\frac {\mu }{r^{3}}}\mathbf {r} \qquad (3)}

for the perturbed orbit and

{\displaystyle {\ddot {\boldsymbol {\rho }}}=-{\frac {\mu }{\rho ^{3}}}{\boldsymbol {\rho }}\qquad (4)}

for the unperturbed orbit, where μ = G(M + m) is the gravitational parameter with M and m the masses of the central body and the perturbed body, a_per is the perturbing acceleration, and r and ρ are the magnitudes of r and ρ. Substituting from equations (3) and (4) into equation (2) gives:

{\displaystyle \delta {\ddot {\mathbf {r} }}=\mathbf {a} _{\text{per}}+\mu \left({\frac {\boldsymbol {\rho }}{\rho ^{3}}}-{\frac {\mathbf {r} }{r^{3}}}\right)\qquad (5)}

which, in theory, could be integrated twice to find δr. Since the osculating orbit is easily calculated by two-body methods, ρ and δr are accounted for and r can be solved. In practice, the quantity in the brackets, ρ/ρ³ − r/r³, is the difference of two nearly equal vectors, and further manipulation is necessary to avoid the need for extra significant digits.
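The structure of Encke's method can likewise be sketched briefly. The Python fragment below is a simplified illustration (the two-body propagation supplying ρ is assumed to come from elsewhere, and equation (5) is evaluated naively, which suffers exactly the cancellation problem just described):

import numpy as np

def encke_step(delta_r, delta_v, rho, mu, a_per, dt):
    """Advance the deviation delta_r from the osculating orbit by one
    Euler step of equation (5):
        d2(delta_r)/dt2 = a_per + mu*(rho/|rho|^3 - r/|r|^3).
    rho: osculating-orbit position from a two-body propagator.
    Naive form; loses precision when |delta_r| << |r|."""
    r = rho + delta_r
    acc = a_per + mu * (rho / np.linalg.norm(rho) ** 3
                        - r / np.linalg.norm(r) ** 3)
    delta_v = delta_v + acc * dt
    delta_r = delta_r + delta_v * dt
    return delta_r, delta_v

Rectification then amounts to resetting the osculating orbit to the current perturbed state whenever delta_r grows too large, so that the deviation being integrated stays small.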
=== Sperling–Burdet method === In 1991 Victor R. Bond and Michael F. Fraietta created an efficient and highly accurate method for solving the two-body perturbed problem. This method uses the linearized and regularized differential equations of motion derived by Hans Sperling and a perturbation theory based on these equations developed by C. A. Burdet. In 1973, Bond and Hanssen improved Burdet's set of differential equations by using the total energy of the perturbed system as a parameter instead of the two-body energy and by reducing the number of elements to 13. In 1989, Bond and Gottlieb embedded the Jacobian integral, which is a constant when the potential function is explicitly dependent upon time as well as position, in the Newtonian equations. The Jacobian constant was used as an element to replace the total energy in a reformulation of the differential equations of motion. In this process, another element, which is proportional to a component of the angular momentum, was introduced. This brought the total number of elements back to 14. In 1991, Bond and Fraietta made further revisions by replacing the Laplace vector with another vector integral, as well as another scalar integral, which removed small secular terms that appeared in the differential equations for some of the elements.

The Sperling–Burdet method is executed in a five-step process, as follows.

Step 1: Initialization. Given an initial position r₀, an initial velocity v₀, and an initial time t₀, the following variables are initialized:

{\displaystyle s=0,\quad r_{0}=(\mathbf {r} _{0}\cdot \mathbf {r} _{0})^{1/2},\quad a=r_{0},\quad b=\mathbf {r} _{0}\cdot \mathbf {v} _{0},\quad \tau =t_{0},\quad {\boldsymbol {\alpha }}=\mathbf {r} _{0},\quad {\boldsymbol {\beta }}=a\mathbf {v} _{0}.}

Perturbations due to perturbing masses, defined as V₀ and [∂V/∂r]₀, are evaluated, as are perturbations due to other accelerations, defined as P₀. Then

{\displaystyle \alpha _{J}={\frac {2\mu }{r_{0}}}-\mathbf {v} _{0}\cdot \mathbf {v} _{0}-2V_{0},\quad \gamma =\mu -\alpha _{J}a,\quad {\boldsymbol {\delta }}=-(\mathbf {v} _{0}\cdot \mathbf {v} _{0})\mathbf {r} _{0}+(\mathbf {r} _{0}\cdot \mathbf {v} _{0})\mathbf {v} _{0}+{\frac {\mu }{r_{0}}}\mathbf {r} _{0}-\alpha _{J}\mathbf {r} _{0},\quad \sigma =0.}

Step 2: Transform elements to coordinates:

{\displaystyle \mathbf {r} ={\boldsymbol {\alpha }}+{\boldsymbol {\beta }}sc_{1}+{\boldsymbol {\delta }}s^{2}c_{2},\quad \mathbf {r} '={\boldsymbol {\beta }}c_{0}+{\boldsymbol {\delta }}sc_{1},\quad \mathbf {x} _{3}=\alpha _{J}({\boldsymbol {\alpha }}-\mathbf {r} )+{\boldsymbol {\delta }},\quad \gamma =\mu -\alpha _{J}a,}
{\displaystyle r=a+bsc_{1}+\gamma s^{2}c_{2},\quad \mathbf {v} =\mathbf {r} '/r,\quad r'=bc_{0}+\gamma sc_{1},\quad t=\tau +as+bs^{2}c_{2}+\gamma s^{3}c_{3},}

where c₀, c₁, c₂, c₃ are Stumpff functions.

Step 3: Evaluate the differential equations for the elements:

{\displaystyle \mathbf {F} =\mathbf {P} -{\frac {\partial V}{\partial \mathbf {r} }},\quad \mathbf {Q} =r^{2}\mathbf {F} +2\mathbf {r} (-V+\sigma ),\quad \alpha '_{J}=2(-\mathbf {r} '+r{\boldsymbol {\omega }}\times \mathbf {r} )\cdot \mathbf {P} ,}
{\displaystyle \mu {\boldsymbol {\epsilon }}'=2(\mathbf {r} '\cdot \mathbf {F} )\mathbf {r} -(\mathbf {r} \cdot \mathbf {F} )\mathbf {r} '-(\mathbf {r} \cdot \mathbf {r} ')\mathbf {F} ,}
{\displaystyle {\boldsymbol {\alpha }}'=-\mathbf {Q} sc_{1}-\mu {\boldsymbol {\epsilon }}'s^{2}c_{2}-\alpha '_{J}{\big [}{\boldsymbol {\alpha }}s^{2}c_{2}+2{\boldsymbol {\beta }}s^{3}{\bar {c}}_{3}+{\frac {1}{2}}{\boldsymbol {\delta }}s^{4}c_{2}^{2}{\big ]},}
{\displaystyle {\boldsymbol {\beta }}'=\mathbf {Q} c_{0}+\mu {\boldsymbol {\epsilon }}'sc_{1}+\alpha '_{J}{\big [}{\boldsymbol {\alpha }}sc_{1}+{\boldsymbol {\beta }}s^{2}{\bar {c}}_{2}-{\boldsymbol {\delta }}s^{3}(2{\bar {c}}_{3}-c_{1}c_{2}){\big ]},}
{\displaystyle {\boldsymbol {\delta }}'=\mathbf {Q} \alpha _{J}sc_{1}-\mu {\boldsymbol {\epsilon }}'c_{0}+\alpha '_{J}{\big [}-{\boldsymbol {\alpha }}c_{0}+2\alpha _{J}{\boldsymbol {\beta }}s^{3}{\bar {c}}_{3}+{\frac {1}{2}}{\boldsymbol {\delta }}\alpha _{J}s^{4}c_{2}^{2}{\big ]},}
{\displaystyle \sigma '=r{\boldsymbol {\omega }}\cdot \mathbf {r} \times \mathbf {F} ,}
{\displaystyle a'=-{\frac {1}{r}}\mathbf {r} \cdot \mathbf {Q} sc_{1}-\alpha _{J}'{\big [}as^{2}c_{2}+2bs^{3}{\bar {c}}_{3}+{\frac {1}{2}}\gamma s^{4}c_{2}^{2}{\big ]},}
{\displaystyle b'={\frac {1}{r}}\mathbf {r} \cdot \mathbf {Q} c_{0}+\alpha _{J}'{\big [}asc_{1}+bs^{2}{\bar {c}}_{2}-\gamma s^{3}(2{\bar {c}}_{3}-c_{1}c_{2}){\big ]},}
{\displaystyle \gamma '=-{\frac {1}{r}}\mathbf {r} \cdot \mathbf {Q} \alpha _{J}sc_{1}+\alpha _{J}'{\big [}-ac_{0}+2b\alpha _{J}s^{3}{\bar {c}}_{3}+{\frac {1}{2}}\gamma \alpha _{J}s^{4}c_{2}^{2}{\big ]},}
{\displaystyle \tau '={\frac {1}{r}}\mathbf {r} \cdot \mathbf {Q} s^{2}c_{2}+\alpha _{J}'{\big [}as^{3}c_{3}+{\frac {1}{2}}bs^{4}c_{2}^{2}-2\gamma s^{5}(c_{5}-4{\bar {c}}_{5}){\big ]}.}

Step 4: Integration. The differential equations are integrated over a step Δs to obtain the element values at s + Δs.

Step 5: Advance. Set s = s + Δs and return to Step 2 until the simulation stopping conditions are met.
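Steps 2 and 3 rely on the Stumpff functions c₀, ..., c₃ (plus barred and higher-index variants). As a point of reference, the unbarred functions can be computed as in the following Python sketch; this is illustrative and not part of the original method description, and the small-argument cutoff is an implementation assumption:

import math

def stumpff(z: float, kmax: int = 3):
    """Return the Stumpff functions c_0(z), ..., c_kmax(z), where
    c_k(z) = sum_{i>=0} (-z)^i / (k + 2i)!, using the closed forms
    for c_0 and c_1 and the identity c_{k+2}(z) = (1/k! - c_k(z)) / z."""
    c = [0.0] * (kmax + 1)
    if abs(z) < 1e-8:                     # series limit near z = 0
        for k in range(kmax + 1):
            c[k] = 1.0 / math.factorial(k)
        return c
    if z > 0:
        s = math.sqrt(z)
        c[0], c[1] = math.cos(s), math.sin(s) / s
    else:
        s = math.sqrt(-z)
        c[0], c[1] = math.cosh(s), math.sinh(s) / s
    for k in range(kmax - 1):             # c_{k+2} from c_k
        c[k + 2] = (1.0 / math.factorial(k) - c[k]) / z
    return c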
== Perturbations == Perturbing forces cause orbits to deviate from a perfect Keplerian orbit. Models for each of these forces are created and executed during the orbit simulation so their effects on the orbit can be determined. === Non-spherical gravity === The Earth is not a perfect sphere, nor is mass evenly distributed within the Earth. This results in the point-mass gravity model being inaccurate for orbits around the Earth, particularly low Earth orbits. To account for variations in gravitational potential around the surface of the Earth, the gravitational field of the Earth is modeled with spherical harmonics, which are expressed through the equation:

{\displaystyle \mathbf {f} =-{\frac {\mu }{R^{2}}}\mathbf {\hat {r}} +\sum _{n=2}^{\infty }\sum _{m=0}^{n}\mathbf {f} _{n,m}}

where μ is the gravitational parameter, defined as the product of G, the universal gravitational constant, and the mass of the primary body; r̂ is the unit vector defining the direction between the primary and secondary bodies, with R being the magnitude of the distance.
f_{n,m} represents the contribution to f of the spherical harmonic of degree n and order m, which is defined as:

{\displaystyle {\begin{aligned}\mathbf {f} _{n,m}&={\frac {\mu R_{O}^{n}}{R^{n+m+1}}}{\Bigg (}{\frac {C_{n,m}{\mathcal {C}}_{m}+S_{n,m}{\mathcal {S}}_{m}}{R}}{\big (}A_{n,m+1}\mathbf {\hat {e}} _{3}-(s_{\lambda }A_{n,m+1}+(n+m+1)A_{n,m})\mathbf {\hat {r}} {\big )}\\&\quad +mA_{n,m}{\big (}(C_{n,m}{\mathcal {C}}_{m-1}+S_{n,m}{\mathcal {S}}_{m-1})\mathbf {\hat {e}} _{1}+(S_{n,m}{\mathcal {C}}_{m-1}-C_{n,m}{\mathcal {S}}_{m-1})\mathbf {\hat {e}} _{2}{\big )}{\Bigg )}\end{aligned}}}

where: R_O is the mean equatorial radius of the primary body; R is the magnitude of the position vector from the center of the primary body to the center of the secondary body; C_{n,m} and S_{n,m} are gravitational coefficients of degree n and order m, typically found through gravimetry measurements; the unit vectors ê₁, ê₂, ê₃ define a coordinate system fixed on the primary body (for the Earth, ê₁ lies in the equatorial plane parallel to a line intersecting Earth's geometric center and the Greenwich meridian, ê₃ points in the direction of the North polar axis, and ê₂ = ê₃ × ê₁); A_{n,m} is a derived Legendre polynomial of degree n and order m, solved through the recurrence relation

{\displaystyle A_{n,m}(u)={\frac {1}{n-m}}((2n-1)uA_{n-1,m}(u)-(n+m-1)A_{n-2,m}(u))}

s_λ is the sine of the geographic latitude of the secondary body, which is r̂ · ê₃; and C_m, S_m are defined with the following recurrence relations and initial conditions:

{\displaystyle {\mathcal {C}}_{m}={\mathcal {C}}_{1}{\mathcal {C}}_{m-1}-{\mathcal {S}}_{1}{\mathcal {S}}_{m-1},\quad {\mathcal {S}}_{m}={\mathcal {S}}_{1}{\mathcal {C}}_{m-1}+{\mathcal {C}}_{1}{\mathcal {S}}_{m-1},\quad {\mathcal {S}}_{0}=0,\quad {\mathcal {S}}_{1}=\mathbf {R} \cdot \mathbf {\hat {e}} _{2},\quad {\mathcal {C}}_{0}=1,\quad {\mathcal {C}}_{1}=\mathbf {R} \cdot \mathbf {\hat {e}} _{1}}

When modeling perturbations of an orbit around a primary body, only the sum of the f_{n,m} terms needs to be included in the perturbation, since the point-mass gravity model is accounted for in the −(μ/R²) r̂ term.
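The recurrence for the derived Legendre polynomials A_{n,m} is straightforward to implement. The Python sketch below is illustrative; the seed values and the diagonal rule are assumptions (common in Pines-type formulations), since the article states only the recurrence itself:

def derived_legendre(nmax: int, u: float):
    """Evaluate derived Legendre polynomials A[n][m](u) for 0<=m<=n<=nmax
    using the article's recurrence
        A[n][m] = ((2n-1)*u*A[n-1][m] - (n+m-1)*A[n-2][m]) / (n - m).
    Seeds A[0][0]=1, A[1][0]=u, A[1][1]=1 and the diagonal rule
    A[m][m]=(2m-1)*A[m-1][m-1] are assumptions, not given in the article."""
    A = [[0.0] * (nmax + 1) for _ in range(nmax + 1)]
    A[0][0] = 1.0
    if nmax >= 1:
        A[1][0], A[1][1] = u, 1.0
    for m in range(2, nmax + 1):          # diagonal terms
        A[m][m] = (2 * m - 1) * A[m - 1][m - 1]
    for n in range(2, nmax + 1):          # off-diagonal terms via the recurrence
        for m in range(0, n):
            A[n][m] = ((2 * n - 1) * u * A[n - 1][m]
                       - (n + m - 1) * A[n - 2][m]) / (n - m)
    return A

The same recurrence is reused in the magnetic field model described later in this article.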
=== Third-body perturbations === Gravitational forces from third bodies can cause perturbations to an orbit. For example, the Sun and Moon cause perturbations to orbits around the Earth. These forces are modeled in the same way that gravity is modeled for the primary body, by means of direct gravitational N-body simulations. Typically, only a spherical point-mass gravity model is used for modeling effects from these third bodies. Some special cases of third-body perturbations have approximate analytic solutions. For example, the perturbations to the right ascension of the ascending node and the argument of perigee for a circular Earth orbit are:

{\displaystyle {\dot {\Omega }}_{\mathrm {MOON} }=-0.00338\cos(i)/n}
{\displaystyle {\dot {\omega }}_{\mathrm {MOON} }=-0.00169(4-5\sin ^{2}(i))/n}

where Ω̇ is the change to the right ascension of the ascending node in degrees per day, ω̇ is the change to the argument of perigee in degrees per day, i is the orbital inclination, and n is the number of orbital revolutions per day. === Solar radiation === Solar radiation pressure causes perturbations to orbits. The magnitude of the acceleration it imparts to a spacecraft in Earth orbit is modeled using the equation below:

{\displaystyle a_{R}\approx -4.5\times 10^{-6}(1+r)A/m}

where a_R is the magnitude of the acceleration in meters per second squared, A is the cross-sectional area exposed to the Sun in square meters, m is the spacecraft mass in kilograms, and r is the reflection factor, which depends on material properties (r = 0 for absorption, r = 1 for specular reflection, and r ≈ 0.4 for diffuse reflection). For orbits around the Earth, solar radiation pressure becomes a stronger force than drag above 800 km (500 mi) altitude. === Propulsion === There are many different types of spacecraft propulsion. Rocket engines are one of the most widely used. The force of a rocket engine is modeled by the equation:

{\displaystyle F_{n}={\dot {m}}\;v_{\text{e}}={\dot {m}}\;v_{\text{e-act}}+A_{\text{e}}(p_{\text{e}}-p_{\text{amb}})}

Another possible method is a solar sail. Solar sails use radiation pressure to achieve a desired propulsive force, so the solar radiation pressure perturbation model can be used as a model of the propulsive force from a solar sail. === Drag === The primary non-gravitational force acting on satellites in low Earth orbit is atmospheric drag. Drag acts in opposition to the direction of velocity and removes energy from an orbit. The force due to drag is modeled by the following equation:

{\displaystyle F_{D}={\tfrac {1}{2}}\rho v^{2}C_{d}A}

where F_D is the force of drag, ρ is the density of the fluid, v is the velocity of the object relative to the fluid, C_d is the drag coefficient (a dimensionless parameter, e.g. 2 to 4 for most satellites), and A is the reference area. Orbits with an altitude below 120 km (75 mi) generally have such high drag that they decay too rapidly to give a satellite a sufficient lifetime to accomplish any practical mission. On the other hand, orbits with an altitude above 600 km (370 mi) have relatively small drag, so that the orbit decays slowly enough to have no real impact on the satellite over its useful life. The density of air can vary significantly in the thermosphere, where most low-Earth-orbiting satellites reside. The variation is primarily due to solar activity, which can therefore greatly influence the force of drag on a spacecraft and complicate long-term orbit simulation.
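Both the solar-radiation and drag models above reduce to single formulas, so they translate directly into code. The following Python sketch is illustrative only (the function names and the sample atmospheric density are assumptions):

def srp_acceleration(area_m2: float, mass_kg: float, r_reflect: float) -> float:
    """Magnitude of solar-radiation-pressure acceleration (m/s^2) for an
    Earth-orbiting spacecraft: a_R ~= -4.5e-6 * (1 + r) * A / m."""
    return -4.5e-6 * (1.0 + r_reflect) * area_m2 / mass_kg

def drag_force(rho: float, v: float, cd: float, area_m2: float) -> float:
    """Atmospheric drag force (N): F_D = 0.5 * rho * v^2 * C_d * A,
    directed opposite the velocity relative to the atmosphere."""
    return 0.5 * rho * v**2 * cd * area_m2

# Example: a 500 kg spacecraft with 10 m^2 exposed area and diffuse reflection.
print(srp_acceleration(10.0, 500.0, 0.4))    # ~ -1.3e-7 m/s^2
# Drag near 400 km altitude (rho ~ 1e-12 kg/m^3 is a rough assumption), v ~ 7.7 km/s.
print(drag_force(1e-12, 7700.0, 2.2, 10.0))  # ~ 6.5e-4 N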
=== Magnetic fields === Magnetic fields can play a significant role as a source of orbit perturbation, as was seen in the Long Duration Exposure Facility. Like gravity, the magnetic field of the Earth can be expressed through spherical harmonics, as shown below:

{\displaystyle \mathbf {B} =\sum _{n=1}^{\infty }\sum _{m=0}^{n}\mathbf {B} _{n,m}}

where B is the magnetic field vector at a point above the Earth's surface, and B_{n,m} represents the contribution to B of the spherical harmonic of degree n and order m, defined as:

{\displaystyle {\begin{aligned}\mathbf {B} _{n,m}={}&{\frac {K_{n,m}a^{n+2}}{R^{n+m+1}}}{\Bigg [}{\frac {g_{n,m}{\mathcal {C}}_{m}+h_{n,m}{\mathcal {S}}_{m}}{R}}{\big (}(s_{\lambda }A_{n,m+1}+(n+m+1)A_{n,m})\mathbf {\hat {r}} -A_{n,m+1}\mathbf {\hat {e}} _{3}{\big )}\\&\quad -mA_{n,m}{\big (}(g_{n,m}{\mathcal {C}}_{m-1}+h_{n,m}{\mathcal {S}}_{m-1})\mathbf {\hat {e}} _{1}+(h_{n,m}{\mathcal {C}}_{m-1}-g_{n,m}{\mathcal {S}}_{m-1})\mathbf {\hat {e}} _{2}{\big )}{\Bigg ]}\end{aligned}}}

where: a is the mean equatorial radius of the primary body; R is the magnitude of the position vector from the center of the primary body to the center of the secondary body; r̂ is a unit vector in the direction of the secondary body with its origin at the center of the primary body; g_{n,m} and h_{n,m} are Gauss coefficients of degree n and order m, typically found through magnetic field measurements; the unit vectors ê₁, ê₂, ê₃ define a coordinate system fixed on the primary body (for the Earth, ê₁ lies in the equatorial plane parallel to a line intersecting Earth's geometric center and the Greenwich meridian, ê₃ points in the direction of the North polar axis, and ê₂ = ê₃ × ê₁); and A_{n,m} is a derived Legendre polynomial of degree n and order m.
They are solved through the recurrence relation:

{\displaystyle A_{n,m}(u)={\frac {1}{n-m}}((2n-1)uA_{n-1,m}(u)-(n+m-1)A_{n-2,m}(u))}

K_{n,m} is defined as: K_{n,m} = 1 if m = 0;

{\displaystyle K_{n,m}={\Big [}{\frac {n-m}{n+m}}{\Big ]}^{0.5}K_{n-1,m}}

for n ≥ m + 1 and m ≥ 1; and

{\displaystyle K_{n,m}=[(n+m)(n-m+1)]^{-0.5}K_{n,m-1}}

for n ≥ m and m ≥ 2. s_λ is the sine of the geographic latitude of the secondary body, which is r̂ · ê₃. C_m, S_m are defined with the following recurrence relations and initial conditions:

{\displaystyle {\mathcal {C}}_{m}={\mathcal {C}}_{1}{\mathcal {C}}_{m-1}-{\mathcal {S}}_{1}{\mathcal {S}}_{m-1},\quad {\mathcal {S}}_{m}={\mathcal {S}}_{1}{\mathcal {C}}_{m-1}+{\mathcal {C}}_{1}{\mathcal {S}}_{m-1},\quad {\mathcal {S}}_{0}=0,\quad {\mathcal {S}}_{1}=\mathbf {R} \cdot \mathbf {\hat {e}} _{2},\quad {\mathcal {C}}_{0}=1,\quad {\mathcal {C}}_{1}=\mathbf {R} \cdot \mathbf {\hat {e}} _{1}}

== See also == n-body problem Orbital resonance Osculating orbit Perturbation (astronomy) Sphere of influence (astrodynamics) Two-body problem == Notes == == References == == External links == Gravity maps of the Earth
Wikipedia/Orbit_modeling
Modelling frameworks are used in modelling and simulation and can consist of a software infrastructure to develop and run mathematical models. They have provided a substantial step forward in the area of biophysical modelling with respect to monolithic implementations. The separation of algorithms from data, the reusability of I/O procedures and integration services, and the isolation of modelling solutions in discrete units have brought a solid advantage to the development of simulation systems. Modelling frameworks for agriculture have evolved over time, with different approaches and targets. BioMA is a software framework developed with a focus on platform-independent, re-usable components, including multi-model implementations at fine granularity. == BioMA - Biophysical Model Applications == BioMA (Biophysical Model Applications) is a public-domain software framework designed and implemented for developing, parameterizing and running modelling solutions based on biophysical models in the domains of agriculture and environment. It is based on discrete conceptual units codified in freely extensible software components. The goal of the framework is to bridge rapidly from prototypes to operational applications, enabling different modelling solutions to be run and compared. A key aspect of the framework is its transparency, which allows for quality evaluation of outputs in the various steps of the modelling workflow. The framework is based on framework-independent components, both for the modelling solutions and for the graphical user interfaces. The goal is not only to provide a framework for model development and operational use but also, and of no lesser importance, to provide a loose collection of objects re-usable either standalone or in different frameworks. The software is developed using the Microsoft C# language in the .NET framework. The framework is a development of the work carried out under the APES task of the 6th EU Framework Programme SEAMLESS project. Deployments of the platform and its tools and components have been used: to create weather datasets for biophysical simulation; to assess the impact on crop production in Europe, and adaptation; to simulate the development of soil pathogens under climate change; to reproduce the growth and development of tree species; to estimate the survival of insects damaging maize under climate change; to estimate crop suitability to environment; to perform modelling-solution comparisons at sub-model level; to develop a library of reusable models for crop development and growth; to estimate the impact of climate change on crop production in Latin America; to simulate fungal infections and the dynamics of plant epidemics; to estimate agro-meteorological variables; to develop a library of functions to estimate soil hydraulic properties; to estimate the quality of agricultural products; to simulate the timing and the application of agricultural management practices; to develop a library to perform sensitivity analysis on agricultural models; to define a library to evaluate crop model performance in reproducing field experiments; to develop a new model of quantitative and qualitative aspects of winter rapeseed production; and to adapt the Canegro sugar cane model for giant reed. BioMA applications and modelling solutions are the simulation tools used by the MARS unit of the European Commission to simulate agricultural production under scenarios of climate change. BioMA is also used in the EU FP7 project MODEXTREME.
=== The architecture === The simulation system is discretized in layers, each with its own features and requirements. These layers are the Model Layer (ModL), where fine-granularity models are implemented as discrete units; the Composition Layer (CompL), where basic models are linked into more complex, aggregated models; and the Configuration Layer (ConfL), which provides context-specific parameterization (in the software sense) for operational use. Applications can span from simple console applications to user-interacting applications based on the model-view-controller pattern, in the simplest cases linking directly to either the ModL or the CompL, or accessing the ConfL. In all cases, the component-oriented architecture allows implementing a set of functionalities which determine the richness of the system's functionality and its transparency. The layers implement no top-down dependencies among them, hence facilitating the independent reuse of tools, utilities, and model components in different applications and frameworks. === Cloud Architecture === In the context of the AgriDigit project, carried out at CREA, the BioMA framework has been adapted for execution in the cloud via a SaaS architecture. Model calls are treated as HTTP invocations, so the model-view-controller architecture is no longer needed. Hence, the Configuration Layer has been eliminated for cloud services (it is not used), and the Composition Layer has been simplified. === Applications === Advanced applications can be grouped under two categories: BioMA-Spatial, where models are run iteratively against spatially explicit units, either grid cells or polygons (these applications can include a layer to model interaction among the spatial units); and BioMA-Site, where models are run against specific sites (these applications can be specialized for specific crops, and in general allow more detailed access to model constituent blocks and outputs). Applications can be built based on the libraries. The libraries can be extended by implementing new models, as shown in the software development kits, and new libraries can be added. === Availability === Model components and tools can be autonomously downloaded with the SDK at the components' portal. The same applies to modelling solutions (the portal is being renovated). Access to modelling solutions as SaaS needs to be requested. === The BioMA Intellectual Property Rights model === The code of core components is available under the MIT license; however, the reuse of binaries falls under a Creative Commons license implying the non-commercial and share-alike clauses. Applications and tools are available as binaries under the Creative Commons license; however, code can be shared under specific agreements between parties. Model component developers may make code available, but they must make binaries available for reuse. === References ===
Wikipedia/Biophysical_Models
Rule-based modeling is a modeling approach that uses a set of rules that indirectly specifies a mathematical model. The rule-set can either be translated into a model such as Markov chains or differential equations, or be treated using tools that work directly on the rule-set in place of a translated model, as the latter is typically much bigger. Rule-based modeling is especially effective in cases where the rule-set is significantly simpler than the model it implies, meaning that the model is a repeated manifestation of a limited number of patterns. An important domain where this is often the case is biochemical models of living organisms: groups of mutually corresponding substances are subject to mutually corresponding interactions. BioNetGen is a suite of software tools used to generate mathematical models consisting of ordinary differential equations without writing the equations directly. For example, below is a rule in the BioNetGen format:

{\displaystyle A(a,a)+B(b)->A(a!1).B(b!1)}

where: A(a,a) represents a model species A with two free binding sites a; B(b) represents a model species B with one free binding site; and A(a!1).B(b!1) represents a model species in which at least one binding site of A is bound to the binding site of B. With the above line of code, BioNetGen will automatically create an ODE for each model species with the correct mass balance. In addition, a further species will be created, because the rule above implies that two B molecules can bind to a single A molecule, since A has two binding sites. Therefore, a fourth species will be generated: A(a!1,a!2).B(b!1).B(b!2), a molecule A with both binding sites occupied by two different B molecules. == For biochemical systems == Early efforts to use rule-based modeling in simulation of biochemical systems include the stochastic simulation system StochSim. A widely used tool for rule-based modeling of biochemical networks is BioNetGen, released under the GNU GPL, version 3. BioNetGen includes a language to describe chemical substances, including the states they can assume and the bindings they can undergo. These rules can be used to create a reaction network model or to perform computer simulations directly on the rule set. The biochemical modeling framework Virtual Cell includes a BioNetGen interpreter. A close alternative is the Kappa language. Another alternative is the BioChemical Space language.
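To see how one rule implies several species and reactions, consider the following toy Python enumeration. It is only an illustration of the idea, not BioNetGen's actual algorithm or input format:

# Toy enumeration of the species implied by the rule A(a,a) + B(b) -> A(a!1).B(b!1):
# an A molecule with two sites can have 0, 1 or 2 sites bound to B molecules.

species = ["A(a,a)", "B(b)", "A(a!1,a).B(b!1)", "A(a!1,a!2).B(b!1).B(b!2)"]

# Each application of the rule binds one free a-site on A to one free B.
reactions = [
    ("A(a,a) + B(b)", "A(a!1,a).B(b!1)"),                   # first binding
    ("A(a!1,a).B(b!1) + B(b)", "A(a!1,a!2).B(b!1).B(b!2)"), # second binding
]

for reactants, product in reactions:
    print(f"{reactants} -> {product}")
# A rule-based tool would then emit one ODE per species, with mass-action
# terms generated from these reactions.

== References ==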
Wikipedia/Rule-based_modeling
The Modeling and Simulation Coordination Office (M&SCO) is an organization within the United States Department of Defense that provides modeling and simulation technology. The office was named the Defense Modeling and Simulation Office (DMSO) when it was created in 1991, and was renamed the Modeling and Simulation Coordination Office in late 2007. The M&SCO leads DoD modeling and simulation standardization efforts. It is the DoD point of contact for coordinating modeling and simulation activities with NATO and Partnership for Peace (PfP) organizations, and provides support to the DoD modeling and simulation management system. == External links == Official website
Wikipedia/Modeling_and_Simulation_Coordination_Office
Systems theory is the transdisciplinary study of systems, i.e. cohesive groups of interrelated, interdependent components that can be natural or artificial. Every system has causal boundaries, is influenced by its context, defined by its structure, function and role, and expressed through its relations with other systems. A system is "more than the sum of its parts" when it expresses synergy or emergent behavior. Changing one component of a system may affect other components or the whole system. It may be possible to predict these changes in patterns of behavior. For systems that learn and adapt, the growth and the degree of adaptation depend upon how well the system is engaged with its environment and other contexts influencing its organization. Some systems support other systems, maintaining the other system to prevent failure. The goals of systems theory are to model a system's dynamics, constraints, conditions, and relations; and to elucidate principles (such as purpose, measure, methods, tools) that can be discerned and applied to other systems at every level of nesting, and in a wide range of fields for achieving optimized equifinality. General systems theory is about developing broadly applicable concepts and principles, as opposed to concepts and principles specific to one domain of knowledge. It distinguishes dynamic or active systems from static or passive systems. Active systems are activity structures or components that interact in behaviours and processes or interrelate through formal contextual boundary conditions (attractors). Passive systems are structures and components that are being processed. For example, a computer program is passive when it is a file stored on the hard drive and active when it runs in memory. The field is related to systems thinking, machine logic, and systems engineering. == Overview == Systems theory is manifest in the work of practitioners in many disciplines, for example the works of physician Alexander Bogdanov, biologist Ludwig von Bertalanffy, linguist Béla H. Bánáthy, and sociologist Talcott Parsons; in the study of ecological systems by Howard T. Odum, Eugene Odum; in Fritjof Capra's study of organizational theory; in the study of management by Peter Senge; in interdisciplinary areas such as human resource development in the works of Richard A. Swanson; and in the works of educators Debora Hammond and Alfonso Montuori. As a transdisciplinary, interdisciplinary, and multiperspectival endeavor, systems theory brings together principles and concepts from ontology, the philosophy of science, physics, computer science, biology, and engineering, as well as geography, sociology, political science, psychotherapy (especially family systems therapy), and economics. Systems theory promotes dialogue between autonomous areas of study as well as within systems science itself. In this respect, with the possibility of misinterpretations, von Bertalanffy believed a general theory of systems "should be an important regulative device in science," to guard against superficial analogies that "are useless in science and harmful in their practical consequences." Others remain closer to the direct systems concepts developed by the original systems theorists. For example, Ilya Prigogine, of the Center for Complex Quantum Systems at the University of Texas, has studied emergent properties, suggesting that they offer analogues for living systems. 
The distinction of autopoiesis as made by Humberto Maturana and Francisco Varela represents a further development in this field. Important names in contemporary systems science include Russell Ackoff, Ruzena Bajcsy, Béla H. Bánáthy, Gregory Bateson, Anthony Stafford Beer, Peter Checkland, Barbara Grosz, Brian Wilson, Robert L. Flood, Allenna Leonard, Radhika Nagpal, Fritjof Capra, Warren McCulloch, Kathleen Carley, Michael C. Jackson, Katia Sycara, and Edgar Morin, among others. With the modern foundations for a general theory of systems following World War I, Ervin László, in the preface to Bertalanffy's book Perspectives on General System Theory, points out that the translation of "general system theory" from German into English has "wrought a certain amount of havoc": It (General System Theory) was criticized as pseudoscience and said to be nothing more than an admonishment to attend to things in a holistic way. Such criticisms would have lost their point had it been recognized that von Bertalanffy's general system theory is a perspective or paradigm, and that such basic conceptual frameworks play a key role in the development of exact scientific theory. ... Allgemeine Systemtheorie is not directly consistent with an interpretation often put on 'general system theory,' to wit, that it is a (scientific) "theory of general systems." To criticize it as such is to shoot at straw men. Von Bertalanffy opened up something much broader and of much greater significance than a single theory (which, as we now know, can always be falsified and has usually an ephemeral existence): he created a new paradigm for the development of theories. Theorie (or Lehre) "has a much broader meaning in German than the closest English words 'theory' and 'science'," just as Wissenschaft (or 'Science'). These ideas refer to an organized body of knowledge and "any systematically presented set of concepts, whether empirically, axiomatically, or philosophically" represented, while many associate Lehre with theory and science in the etymology of general systems, though it also does not translate from the German very well; its "closest equivalent" translates to 'teaching', but "sounds dogmatic and off the mark." An adequate overlap in meaning is found within the word "nomothetic", which can mean "having the capability to posit long-lasting sense." While the idea of a "general systems theory" might have lost many of its root meanings in the translation, by defining a new way of thinking about science and scientific paradigms, systems theory became a widespread term used for instance to describe the interdependence of relationships created in organizations. A system in this frame of reference can contain regularly interacting or interrelating groups of activities. For example, in noting the influence in the evolution of "an individually oriented industrial psychology [into] a systems and developmentally oriented organizational psychology," some theorists recognize that organizations have complex social systems; separating the parts from the whole reduces the overall effectiveness of organizations. This differs from conventional models, which center on individuals, structures, departments and units that are separate in part from the whole, instead of recognizing the interdependence between groups of individuals, structures and processes that enable an organization to function.
László explains that the new systems view of organized complexity went "one step beyond the Newtonian view of organized simplicity", which reduced the parts from the whole, or understood the whole without relation to the parts. The relationship between organisations and their environments can be seen as the foremost source of complexity and interdependence. In most cases, the whole has properties that cannot be known from analysis of the constituent elements in isolation. Béla H. Bánáthy, who argued—along with the founders of the systems society—that "the benefit of humankind" is the purpose of science, has made significant and far-reaching contributions to the area of systems theory. For the Primer Group at the International Society for the System Sciences, Bánáthy defines a perspective that iterates this view: The systems view is a world-view that is based on the discipline of SYSTEM INQUIRY. Central to systems inquiry is the concept of SYSTEM. In the most general sense, system means a configuration of parts connected and joined together by a web of relationships. The Primer Group defines system as a family of relationships among the members acting as a whole. Von Bertalanffy defined system as "elements in standing relationship." == Applications == === Art === === Biology === Systems biology is a movement that draws on several trends in bioscience research. Proponents describe systems biology as a biology-based interdisciplinary study field that focuses on complex interactions in biological systems, claiming that it uses a new perspective (holism instead of reductionism). Particularly from the year 2000 onwards, the biosciences have used the term widely and in a variety of contexts. An often stated ambition of systems biology is the modelling and discovery of emergent properties: properties of a system whose theoretical description is possible only with techniques that fall under the remit of systems biology. It is thought that Ludwig von Bertalanffy may have created the term systems biology in 1928. Subdisciplines of systems biology include: Systems neuroscience Systems pharmacology ==== Ecology ==== Systems ecology is an interdisciplinary field of ecology that takes a holistic approach to the study of ecological systems, especially ecosystems; it can be seen as an application of general systems theory to ecology. Central to the systems ecology approach is the idea that an ecosystem is a complex system exhibiting emergent properties. Systems ecology focuses on interactions and transactions within and between biological and ecological systems, and is especially concerned with the way the functioning of ecosystems can be influenced by human interventions. It uses and extends concepts from thermodynamics and develops other macroscopic descriptions of complex systems. === Chemistry === Systems chemistry is the science of studying networks of interacting molecules, to create new functions from a set (or library) of molecules with different hierarchical levels and emergent properties. Systems chemistry is also related to the origin of life (abiogenesis). === Engineering === Systems engineering is an interdisciplinary approach and means for enabling the realisation and deployment of successful systems. It can be viewed as the application of engineering techniques to the engineering of systems, as well as the application of a systems approach to engineering efforts.
Systems engineering integrates other disciplines and specialty groups into a team effort, forming a structured development process that proceeds from concept to production to operation and disposal. Systems engineering considers both the business and the technical needs of all customers, with the goal of providing a quality product that meets the user's needs. ==== User-centered design process ==== Systems thinking is a crucial part of user-centered design processes and is necessary to understand the whole impact of a new human computer interaction (HCI) information system. Overlooking this and developing software without input from the future users (mediated by user experience designers) is a serious design flaw that can lead to complete failure of information systems, as well as increased stress and mental illness for users of information systems, leading in turn to increased costs and a huge waste of resources. It is currently surprisingly uncommon for organizations and governments to investigate the project management decisions leading to serious design flaws and lack of usability. The Institute of Electrical and Electronics Engineers estimates that roughly 15% of the estimated $1 trillion used to develop information systems every year is completely wasted, and that the systems produced are discarded before implementation because of entirely preventable mistakes. According to the CHAOS report published in 2018 by the Standish Group, a vast majority of information systems fail or partly fail according to their survey: pure success is the combination of high customer satisfaction with high return on value to the organization. Related figures for the year 2017 are: successful: 14%, challenged: 67%, failed: 19%. === Mathematics === System dynamics is an approach to understanding the nonlinear behaviour of complex systems over time using stocks, flows, internal feedback loops, and time delays. === Social sciences and humanities === Systems theory in anthropology Systems theory in archaeology Systems theory in political science ==== Psychology ==== Systems psychology is a branch of psychology that studies human behaviour and experience in complex systems. It received inspiration from systems theory and systems thinking, as well as the basics of theoretical work from Roger Barker, Gregory Bateson, Humberto Maturana and others. It is an approach in psychology in which groups and individuals receive consideration as systems in homeostasis. Systems psychology "includes the domain of engineering psychology, but in addition seems more concerned with societal systems and with the study of motivational, affective, cognitive and group behavior that holds the name engineering psychology." In systems psychology, characteristics of organizational behaviour (such as individual needs, rewards, expectations, and attributes of the people interacting with the systems) "considers this process in order to create an effective system." === Informatics === System theory has been applied in the field of neuroinformatics and connectionist cognitive science. Attempts are being made in neurocognition to merge connectionist cognitive neuroarchitectures with the approach of system theory and dynamical systems theory. == History == === Precursors === Systems thinking dates back to antiquity, whether considering the first systems of written communication, from Sumerian cuneiform to Maya numerals, or the feats of engineering seen in the Egyptian pyramids. Differentiated from Western rationalist traditions of philosophy, C.
West Churchman often identified with the I Ching as a systems approach sharing a frame of reference similar to pre-Socratic philosophy and Heraclitus.: 12–13  Ludwig von Bertalanffy traced systems concepts to the philosophy of Gottfried Leibniz and Nicholas of Cusa's coincidentia oppositorum. While modern systems can seem considerably more complicated, they may embed themselves in history. Figures like James Joule and Sadi Carnot represent an important step to introduce the systems approach into the (rationalist) hard sciences of the 19th century, also known as the energy transformation. Then, the thermodynamics of this century, by Rudolf Clausius, Josiah Gibbs and others, established the system reference model as a formal scientific object. Similar ideas are found in learning theories that developed from the same fundamental concepts, emphasising how understanding results from knowing concepts both in part and as a whole. In fact, Bertalanffy's organismic psychology paralleled the learning theory of Jean Piaget. Some consider interdisciplinary perspectives critical in breaking away from industrial age models and thinking, wherein history represents history and math represents math, while the arts and sciences specialization remain separate and many treat teaching as behaviorist conditioning. The contemporary work of Peter Senge provides detailed discussion of the commonplace critique of educational systems grounded in conventional assumptions about learning, including the problems with fragmented knowledge and lack of holistic learning from the "machine-age thinking" that became a "model of school separated from daily life." In this way, some systems theorists attempt to provide alternatives to, and evolved ideation from orthodox theories which have grounds in classical assumptions, including individuals such as Max Weber and Émile Durkheim in sociology and Frederick Winslow Taylor in scientific management. The theorists sought holistic methods by developing systems concepts that could integrate with different areas. Some may view the contradiction of reductionism in conventional theory (which has as its subject a single part) as simply an example of changing assumptions. The emphasis with systems theory shifts from parts to the organization of parts, recognizing interactions of the parts as not static and constant but dynamic processes. Some questioned the conventional closed systems with the development of open systems perspectives. The shift originated from absolute and universal authoritative principles and knowledge to relative and general conceptual and perceptual knowledge and still remains in the tradition of theorists that sought to provide means to organize human life. In other words, theorists rethought the preceding history of ideas; they did not lose them. Mechanistic thinking was particularly critiqued, especially the industrial-age mechanistic metaphor for the mind from interpretations of Newtonian mechanics by Enlightenment philosophers and later psychologists that laid the foundations of modern organizational theory and management by the late 19th century. === Founding and early development === Where assumptions in Western science from Plato and Aristotle to Isaac Newton's Principia (1687) have historically influenced all areas from the hard to social sciences (see, David Easton's seminal development of the "political system" as an analytical construct), the original systems theorists explored the implications of 20th-century advances in terms of systems. 
Between 1929 and 1951, Robert Maynard Hutchins at the University of Chicago had undertaken efforts to encourage innovation and interdisciplinary research in the social sciences, aided by the Ford Foundation with the university's interdisciplinary Division of the Social Sciences established in 1931.: 5–9  Many early systems theorists aimed at finding a general systems theory that could explain all systems in all fields of science. "General systems theory" (GST; German: allgemeine Systemlehre) was coined in the 1940s by Ludwig von Bertalanffy, who sought a new approach to the study of living systems. Bertalanffy developed the theory via lectures beginning in 1937 and then via publications beginning in 1946. According to Mike C. Jackson (2000), Bertalanffy promoted an embryonic form of GST as early as the 1920s and 1930s, but it was not until the early 1950s that it became more widely known in scientific circles. Jackson also claimed that Bertalanffy's work was informed by Alexander Bogdanov's three-volume Tectology (1912–1917), providing the conceptual base for GST. A similar position is held by Richard Mattessich (1978) and Fritjof Capra (1996). Despite this, Bertalanffy never even mentioned Bogdanov in his works. The systems view was based on several fundamental ideas. First, all phenomena can be viewed as a web of relationships among elements, or a system. Second, all systems, whether electrical, biological, or social, have common patterns, behaviors, and properties that the observer can analyze and use to develop greater insight into the behavior of complex phenomena and to move closer toward a unity of the sciences. System philosophy, methodology and application are complementary to this science. Cognizant of advances in science that questioned classical assumptions in the organizational sciences, Bertalanffy's idea to develop a theory of systems began as early as the interwar period, publishing "An Outline for General Systems Theory" in the British Journal for the Philosophy of Science by 1950. In 1954, von Bertalanffy, along with Anatol Rapoport, Ralph W. Gerard, and Kenneth Boulding, came together at the Center for Advanced Study in the Behavioral Sciences in Palo Alto to discuss the creation of a "society for the advancement of General Systems Theory." In December that year, a meeting of around 70 people was held in Berkeley to form a society for the exploration and development of GST. The Society for General Systems Research (renamed the International Society for Systems Science in 1988) was established in 1956 thereafter as an affiliate of the American Association for the Advancement of Science (AAAS), specifically catalyzing systems theory as an area of study. The field developed from the work of Bertalanffy, Rapoport, Gerard, and Boulding, as well as other theorists in the 1950s like William Ross Ashby, Margaret Mead, Gregory Bateson, and C. West Churchman, among others. Bertalanffy's ideas were adopted by others, working in mathematics, psychology, biology, game theory, and social network analysis. Subjects that were studied included those of complexity, self-organization, connectionism and adaptive systems. In fields like cybernetics, researchers such as Ashby, Norbert Wiener, John von Neumann, and Heinz von Foerster examined complex systems mathematically; Von Neumann discovered cellular automata and self-reproducing systems, again with only pencil and paper. Aleksandr Lyapunov and Jules Henri Poincaré worked on the foundations of chaos theory without any computer at all. 
At the same time, Howard T. Odum, known as a radiation ecologist, recognized that the study of general systems required a language that could depict energetics, thermodynamics and kinetics at any system scale. To fulfill this role, Odum developed a general systems, or universal, language based on the circuit language of electronics, known as the Energy Systems Language. The Cold War affected the research project for systems theory in ways that sorely disappointed many of the seminal theorists. Some began to recognize that theories defined in association with systems theory had deviated from the initial general systems theory view. Economist Kenneth Boulding, an early researcher in systems theory, had concerns over the manipulation of systems concepts. Boulding concluded from the effects of the Cold War that abuses of power always prove consequential and that systems theory might address such issues.: 229–233  Since the end of the Cold War, a renewed interest in systems theory emerged, combined with efforts to strengthen an ethical view on the subject. In sociology, systems thinking also began in the 20th century, including Talcott Parsons' action theory and Niklas Luhmann's social systems theory. According to Rudolf Stichweh (2011):: 2  Since its beginnings the social sciences were an important part of the establishment of systems theory... [T]he two most influential suggestions were the comprehensive sociological versions of systems theory which were proposed by Talcott Parsons since the 1950s and by Niklas Luhmann since the 1970s. Elements of systems thinking can also be seen in the work of James Clerk Maxwell, particularly control theory. == General systems research and systems inquiry == Many early systems theorists aimed at finding a general systems theory that could explain all systems in all fields of science. Ludwig von Bertalanffy began developing his 'general systems theory' via lectures in 1937 and then via publications from 1946. The concept received extensive focus in his 1968 book, General System Theory: Foundations, Development, Applications. There are many definitions of a general system; some properties that definitions include are: an overall goal of the system, parts of the system and relationships between these parts, and emergent properties of the interaction between the parts of the system that are not performed by any part on its own.: 58  Derek Hitchins defines a system in terms of entropy, as a collection of parts and relationships between the parts where the parts or their interrelationships decrease entropy.: 58  Bertalanffy aimed to bring together under one heading the organismic science that he had observed in his work as a biologist. He wanted to use the word system for those principles that are common to systems in general. In General System Theory (1968), he wrote:: 32  [T]here exist models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind, the nature of their component elements, and the relationships or "forces" between them. It seems legitimate to ask for a theory, not of systems of a more or less special kind, but of universal principles applying to systems in general. In the preface to von Bertalanffy's Perspectives on General System Theory, Ervin László stated: Thus when von Bertalanffy spoke of Allgemeine Systemtheorie it was consistent with his view that he was proposing a new perspective, a new way of doing science.
It was not directly consistent with an interpretation often put on "general system theory", to wit, that it is a (scientific) "theory of general systems." To criticize it as such is to shoot at straw men. Von Bertalanffy opened up something much broader and of much greater significance than a single theory (which, as we now know, can always be falsified and usually has an ephemeral existence): he created a new paradigm for the development of theories. Bertalanffy divided systems inquiry into three major domains: philosophy, science, and technology. In his work with the Primer Group, Béla H. Bánáthy generalized the domains into four integratable domains of systemic inquiry: philosophy, the ontology, epistemology, and axiology of systems; theory, a set of interrelated concepts and principles applying to all systems; methodology, the set of models, strategies, methods and tools that instrumentalize systems theory and philosophy; and application, the application and interaction of the domains. These operate in a recursive relationship, he explained: integrating 'philosophy' and 'theory' as knowledge, and 'method' and 'application' as action, systems inquiry is thus knowledgeable action. === Properties of general systems === General systems may be split into a hierarchy of systems, where there are fewer interactions between the different subsystems than there are among the components within each subsystem. The alternative is heterarchy, where all components within the system interact with one another.: 65  Sometimes an entire system will be represented inside another system as a part, sometimes referred to as a holon. These hierarchies of systems are studied in hierarchy theory. The amount of interaction between parts of systems higher in the hierarchy and parts of the system lower in the hierarchy is reduced. If all the parts of a system are tightly coupled (interact with one another a lot), then the system cannot be decomposed into different systems. The amount of coupling between parts of a system may differ temporally, with some parts interacting more often than others, or for different processes in a system.: 293  Herbert A. Simon distinguished between decomposable, nearly decomposable and nondecomposable systems (a toy numerical illustration of near-decomposability follows the further-reading list below).: 72  Russell L. Ackoff distinguished general systems by how their goals and subgoals could change over time. He distinguished between goal-maintaining, goal-seeking, multi-goal and reflective (or goal-changing) systems.: 73  == System types and fields == === Theoretical fields === Chaos theory Complex system Control theory Dynamical systems theory Earth system science Ecological systems theory Industrial ecology Living systems theory Sociotechnical system Systemics Telecoupling Urban metabolism World-systems theory ==== Cybernetics ==== Cybernetics is the study of the communication and control of regulatory feedback both in living and lifeless systems (organisms, organizations, machines), and in combinations of those. Its focus is how anything (digital, mechanical or biological) controls its behavior, processes information, reacts to information, and changes or can be changed to better accomplish those three primary tasks. The terms systems theory and cybernetics have been widely used as synonyms. Some authors use the term cybernetic systems to denote a proper subset of the class of general systems, namely those systems that include feedback loops. However, Gordon Pask's differences of eternal interacting actor loops (that produce finite products) make general systems a proper subset of cybernetics.
In cybernetics, complex systems have been examined mathematically by such researchers as W. Ross Ashby, Norbert Wiener, John von Neumann, and Heinz von Foerster. Threads of cybernetics began in the late 1800s and led toward the publishing of seminal works (such as Wiener's Cybernetics in 1948 and Bertalanffy's General System Theory in 1968). Cybernetics arose more from engineering fields and GST from biology. If anything, it appears that although the two probably mutually influenced each other, cybernetics had the greater influence. Bertalanffy specifically made the point of distinguishing between the areas in noting the influence of cybernetics: Systems theory is frequently identified with cybernetics and control theory. This again is incorrect. Cybernetics as the theory of control mechanisms in technology and nature is founded on the concepts of information and feedback, but as part of a general theory of systems.... [T]he model is of wide application but should not be identified with 'systems theory' in general ... [and] warning is necessary against its incautious expansion to fields for which its concepts are not made.: 17–23  Cybernetics, catastrophe theory, chaos theory and complexity theory share the goal of explaining complex systems that consist of a large number of mutually interacting and interrelated parts in terms of those interactions. Cellular automata, neural networks, artificial intelligence, and artificial life are related fields, but do not try to describe general (universal) complex (singular) systems. The best context in which to compare the different "C"-theories about complex systems is historical, one that emphasizes different tools and methodologies, from pure mathematics in the beginning to pure computer science today. Since the beginning of chaos theory, when Edward Lorenz accidentally discovered a strange attractor with his computer, computers have become an indispensable source of information. One could not imagine the study of complex systems without the use of computers today. === System types === Biological Anatomical systems Nervous Sensory Ecological systems Living systems Complex Complex adaptive system Conceptual Coordinate Deterministic (philosophy) Digital ecosystem Experimental Writing Coupled human–environment Database Deterministic (science) Mathematical Dynamical system Formal system Energy Holarchical Information Measurement Imperial Metric Multi-agent Nonlinear Operating Planetary Social Cultural Economic Legal Political Star ==== Complex adaptive systems ==== Complex adaptive systems (CAS), coined by John H. Holland, Murray Gell-Mann, and others at the interdisciplinary Santa Fe Institute, are special cases of complex systems: they are complex in that they are diverse and composed of multiple, interconnected elements; they are adaptive in that they have the capacity to change and learn from experience. In contrast to control systems, in which negative feedback dampens and reverses disequilibria, CAS are often subject to positive feedback, which magnifies and perpetuates changes, converting local irregularities into global features. == See also == === Organizations === List of systems sciences organizations == References == == Further reading == Ashby, W. Ross. 1956. An Introduction to Cybernetics. Chapman & Hall. —— 1960. Design for a Brain: The Origin of Adaptive Behavior (2nd ed.). Chapman & Hall. Bateson, Gregory. 1972. Steps to an Ecology of Mind: Collected essays in Anthropology, Psychiatry, Evolution, and Epistemology. University of Chicago Press.
von Bertalanffy, Ludwig. 1968. General System Theory: Foundations, Development, Applications. New York: George Braziller. Burks, Arthur. 1970. Essays on Cellular Automata. University of Illinois Press. Cherry, Colin. 1957. On Human Communication: A Review, a Survey, and a Criticism. Cambridge: The MIT Press. Churchman, C. West. 1971. The Design of Inquiring Systems: Basic Concepts of Systems and Organizations. New York: Basic Books. Checkland, Peter. 1999. Systems Thinking, Systems Practice: Includes a 30-Year Retrospective. Wiley. Gleick, James. 1997. Chaos: Making a New Science. Random House. Haken, Hermann. 1983. Synergetics: An Introduction (3rd ed.). Springer. Holland, John H. 1992. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Cambridge: The MIT Press. Luhmann, Niklas. 2013. Introduction to Systems Theory. Polity. Macy, Joanna. 1991. Mutual Causality in Buddhism and General Systems Theory: The Dharma of Natural Systems. SUNY Press. Maturana, Humberto, and Francisco Varela. 1980. Autopoiesis and Cognition: The Realization of the Living. Springer Science & Business Media. Miller, James Grier. 1978. Living Systems. McGraw-Hill. von Neumann, John. 1951. "The General and Logical Theory of Automata." pp. 1–41 in Cerebral Mechanisms in Behavior. —— 1956. "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components." Automata Studies 34: 43–98. von Neumann, John, and Arthur Burks, eds. 1966. Theory of Self-Reproducing Automata. University of Illinois Press. Parsons, Talcott. 1951. The Social System. The Free Press. Prigogine, Ilya. 1980. From Being to Becoming: Time and Complexity in the Physical Sciences. W. H. Freeman & Co. Simon, Herbert A. 1962. "The Architecture of Complexity." Proceedings of the American Philosophical Society, 106. —— 1996. The Sciences of the Artificial (3rd ed.), vol. 136. The MIT Press. Shannon, Claude, and Warren Weaver. 1949. The Mathematical Theory of Communication. ISBN 0-252-72546-8. Adapted from Shannon, Claude. 1948. "A Mathematical Theory of Communication." Bell System Technical Journal 27(3): 379–423. doi:10.1002/j.1538-7305.1948.tb01338.x. Thom, René. 1972. Structural Stability and Morphogenesis: An Outline of a General Theory of Models. Reading, Massachusetts. Volk, Tyler. 1995. Metapatterns: Across Space, Time, and Mind. New York: Columbia University Press. Weaver, Warren. 1948. "Science and Complexity." The American Scientist, pp. 536–544. Wiener, Norbert. 1965. Cybernetics: Or the Control and Communication in the Animal and the Machine (2nd ed.). Cambridge: The MIT Press. Wolfram, Stephen. 2002. A New Kind of Science. Wolfram Media. Zadeh, Lotfi. 1962. "From Circuit Theory to System Theory." Proceedings of the IRE 50(5): 856–865. == External links == Systems Thinking at Wikiversity Systems theory at Principia Cybernetica Web Introduction to systems thinking – 55 slides Organizations International Society for the System Sciences New England Complex Systems Institute System Dynamics Society
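As a concrete illustration of the near-decomposability idea discussed in the properties section above, here is a minimal Python sketch; the interaction matrix and the candidate split are invented for illustration and are not taken from Simon or from the works listed above.

```python
# Toy illustration of Herbert Simon's near-decomposability (assumed example):
# a hierarchy split is valid when coupling within candidate subsystems
# dominates coupling between them.
interaction = [  # symmetric interaction strengths between parts 0..5
    [0, 9, 8, 1, 0, 0],
    [9, 0, 7, 0, 1, 0],
    [8, 7, 0, 0, 0, 1],
    [1, 0, 0, 0, 9, 8],
    [0, 1, 0, 9, 0, 7],
    [0, 0, 1, 8, 7, 0],
]
blocks = [{0, 1, 2}, {3, 4, 5}]  # hypothesized subsystems

# Sum each unordered pair once (i < j) inside each block.
within = sum(interaction[i][j] for b in blocks for i in b for j in b if i < j)
# All cross-block pairs.
between = sum(interaction[i][j] for i in blocks[0] for j in blocks[1])

print(f"within-block coupling: {within}, between-block coupling: {between}")
# within = 48, between = 3: interactions inside each subsystem dominate,
# so the system is nearly decomposable into {0,1,2} and {3,4,5}.
```

If the between-block total approached the within-block total, the parts would be tightly coupled in Simon's sense and the system could not be decomposed this way.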
Wikipedia/Systems_Theory
The Hurricane Weather Research and Forecasting (HWRF) model is a specialized version of the Weather Research and Forecasting model and is used to forecast the track and intensity of tropical cyclones. The model was developed by the National Oceanic and Atmospheric Administration (NOAA), the U.S. Naval Research Laboratory, the University of Rhode Island, and Florida State University. It became operational in 2007. The HWRF computer model is the operational backbone for hurricane track and intensity forecasts by the National Hurricane Center (NHC). The model uses data from satellite observations, buoys, and reconnaissance aircraft, giving it access to more meteorological data than any hurricane model before it. The model is intended eventually to run at an even higher resolution, which will allow smaller-scale features to become more discernible. Mary Glackin, acting director of NOAA's National Weather Service, said that "It is vital that we understand all the factors of hurricane forecasting throughout the life of a storm and HWRF will provide an unprecedented level of detail. Over the next several years, this model promises to improve forecasts for tropical cyclone intensity, wave and storm surge, and hurricane-related inland flooding." She also said that the HWRF "will be one of the most dynamic tools available" for forecasters. Development of the HWRF model began in 2002, and the model became operational in 2007. Although the HWRF model was expected eventually to replace the GFDL model, the GFDL model continued to be run alongside it in 2007 and remained in operational use through 2012. == See also == Tropical cyclone Tropical cyclone forecasting Tropical cyclone forecast model Tropical cyclone rainfall forecasting Weather forecasting == References == == Websites with the HWRF model == Community Code from DTC Model Analyses and Forecasts Archived December 23, 2007, at the Wayback Machine from NCEP Experimental forecast Tropical Cyclone Genesis Potential Fields from Florida State University Cyclone phase evolution: Analyses & Forecasts from Florida State University Tropical Cyclone Tracking Page – Model track performance from Kinetic Analysis Corporation and University of Central Florida == Other external links == HWRF Project HWRF PowerPoint Tutorials from NCEP's EMC
Wikipedia/Hurricane_Weather_Research_and_Forecasting_model
Distillation Design is a book that provides complete coverage of the design of industrial distillation columns for the petroleum refining, chemical and petrochemical plants, natural gas processing, pharmaceutical, food and alcohol distilling industries. It has been a classic chemical engineering textbook since it was first published in February 1992. The subjects covered in the book include: Vapor–liquid equilibrium (VLE): Vapor–liquid K values, relative volatilities, ideal and non-ideal systems, phase diagrams, calculating bubble points and dew points Key fractional distillation concepts: theoretical stages, x-y diagrams, multicomponent distillation, column composition and temperature profiles Process design and optimization: minimum reflux and minimum stages, optimum reflux, short-cut methods, feed entry location Rigorous calculation methods: Bubble point method, sum rates method, numerical methods (Newton–Raphson technique), inside-out method, relaxation method, other methods (a worked bubble-point sketch follows the article text below) Batch distillation: Simple distillation, constant reflux, varying reflux, time and boilup requirements Tray design and tray efficiency: tray types, tray capacities, tray hydraulic parameters, tray sizing and determination of column diameter, point and tray efficiencies, tray efficiency prediction and scaleup Packing design and packing efficiency: packing types, packing hydraulics and capacities, determination of packing efficiency by transfer unit method and by HETP method, packed column sizing == See also == Chemical engineer – Professional in the field of chemical engineering Continuous distillation – Form of distillation Fenske equation – Equation used in chemical engineering McCabe–Thiele method – Chemical engineering technique Perry's Chemical Engineers' Handbook – 1934 reference book for chemical engineering Transport Phenomena – the first textbook about transport phenomena Unit Operations of Chemical Engineering – 1956 textbook in chemical engineering Batch distillation == External links == McGraw Hill website page
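As promised above, here is a worked numerical sketch of the bubble-point calculation using the Newton–Raphson technique. It is not an excerpt from the book: it is a minimal illustration assuming ideal behavior (Raoult's law) for a benzene–toluene mixture, with standard literature Antoine constants treated as assumed inputs.

```python
import math

# Antoine equation: log10(Psat [mmHg]) = A - B / (T [degC] + C)
# Standard literature constants for benzene and toluene (assumed values).
ANTOINE = {
    "benzene": (6.90565, 1211.033, 220.790),
    "toluene": (6.95464, 1344.800, 219.480),
}

def psat(component, T):
    """Saturation pressure in mmHg at temperature T in degC."""
    A, B, C = ANTOINE[component]
    return 10.0 ** (A - B / (T + C))

def bubble_point(x, P=760.0, T0=80.0, tol=1e-8, max_iter=50):
    """Bubble-point temperature (degC) of a liquid with mole fractions x
    at total pressure P (mmHg), via Newton-Raphson on
    f(T) = sum_i x_i * Psat_i(T) - P, using a numerical derivative."""
    T = T0
    for _ in range(max_iter):
        f = sum(xi * psat(c, T) for c, xi in x.items()) - P
        h = 1e-4  # finite-difference step for df/dT
        fp = (sum(xi * psat(c, T + h) for c, xi in x.items()) - P - f) / h
        step = f / fp
        T -= step
        if abs(step) < tol:
            break
    # Vapor composition from Raoult's law: y_i = x_i * Psat_i / P
    y = {c: xi * psat(c, T) / P for c, xi in x.items()}
    return T, y

T, y = bubble_point({"benzene": 0.5, "toluene": 0.5})
print(f"Bubble point: {T:.1f} degC, vapor composition: {y}")
# Converges to ~92 degC for an equimolar mixture at 1 atm, with the
# vapor enriched in benzene, the more volatile component.
```

The rigorous column methods named in the list above (sum rates, inside-out) apply the same kind of iterative root-finding, stage by stage, to full multicomponent columns.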
Wikipedia/Distillation_Design
The applications of nanotechnology commonly incorporate industrial, medicinal, and energy uses. These include more durable construction materials, therapeutic drug delivery, and higher-density hydrogen fuel cells that are environmentally friendly. Because nanoparticles and nanodevices are highly versatile through modification of their physicochemical properties, they have found uses in nanoscale electronics, cancer treatments, vaccines, hydrogen fuel cells, and nanographene batteries. Nanotechnology's use of smaller-sized materials allows for adjustment of molecules and substances at the nanoscale level, which can further enhance the mechanical properties of materials or grant access to less physically accessible areas of the body. == Health applications == === Nanobiotechnology === The terms nanobiotechnology and bionanotechnology refer to the combination of ideas, techniques, and sciences of biology and nanotechnology. More specifically, nanobiotechnology refers to the application of nanoscale objects for biotechnology, while bionanotechnology refers to the use of biological components in nanotechnology. The most prominent intersection of nanotechnology and biology is in the field of nanomedicine, where the use of nanoparticles and nanodevices has many clinical applications in delivering therapeutic drugs, monitoring health conditions, and diagnosing diseases. Because many of the biological processes in the human body occur at the cellular level, the small size of nanomaterials allows them to be used as tools that can easily circulate within the body and directly interact with intercellular and even intracellular environments. In addition, nanomaterials can have physicochemical properties that differ from their bulk form due to their size, allowing for varying chemical reactivities and diffusion effects that can be studied and changed for diversified applications. A common application of nanomedicine is in therapeutic drug delivery, where nanoparticles containing drugs for therapeutic treatment of disease are introduced into the body and act as vessels that deliver the drugs to the targeted area. The nanoparticle vessels, which can be made of organic or synthetic components, can further be functionalized by adjusting their size, shape, surface charge, and surface attachments (proteins, coatings, polymers, etc.). The opportunity for functionalizing nanoparticles in such ways is especially beneficial when targeting areas of the body that have certain physicochemical properties that prevent the intended drug from reaching the targeted area alone; for example, some nanoparticles are able to bypass the blood–brain barrier to deliver therapeutic drugs to the brain. Nanoparticles have recently been used in cancer therapy treatments and vaccines. Magnetic nanorobots have demonstrated capabilities to prevent and treat antimicrobial-resistant bacteria. The application of nanomotor implants has been proposed to achieve thorough disinfection of the dentine. In vivo imaging is also a key part of nanomedicine, as nanoparticles can be used as contrast agents for common imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). The ability of nanoparticles to localize and circulate in specific cells, tissues, or organs through their design can provide high contrast that results in higher sensitivity imaging, and thus can be applicable in studying pharmacokinetics or visual disease diagnosis.
== Industrial applications == === Potential applications of carbon nanotubes === Nanotubes can help with cancer treatment. They have been shown to be effective tumor killers in those with kidney or breast cancer. Multi-walled nanotubes are injected into a tumor and treated with a special type of laser that generates near-infrared radiation for around half a minute. These nanotubes vibrate in response to the laser, and heat is generated. When the tumor has been heated enough, the tumor cells begin to die. Processes like this one have been able to shrink kidney tumors by up to four-fifths. Ultrablack materials, made up of "forests" of carbon nanotubes, are useful in space, where stray light can overwhelm sensitive instruments. Ultrablack material can be applied to camera and telescope systems to suppress stray light and allow more detailed images to be captured. Nanotubes show promise in treating cardiovascular disease. They could play an important role in blood vessel cleanup. Theoretically, nanotubes with SHP1i molecules attached to them would signal macrophages to clean up plaque in blood vessels without destroying any healthy tissue. Researchers have tested this type of modified nanotube in mice with high amounts of plaque buildup; the mice that received the nanotube treatment showed statistically significant reductions in plaque buildup compared to the mice in the placebo group. Further research is needed for this treatment to be given to humans. Nanotubes may be used in body armor for future soldiers. This type of armor would be very strong and highly effective at shielding soldiers' bodies from projectiles and electromagnetic radiation. It is also possible that the nanotubes in the armor could play a role in monitoring soldiers' physical condition. === Construction === Nanotechnology's ability to observe and control the material world at a nanoscopic level can offer great potential for construction development. Nanotechnology can help improve the strength and durability of construction materials, including cement, steel, wood, and glass. By applying nanotechnology, materials can gain a range of new properties. The discovery of a highly ordered crystal nanostructure of amorphous C-S-H gel and the application of photocatalyst and coating technology result in a new generation of materials with properties such as water resistance, self-cleaning, wear resistance, and corrosion protection. Among the new nanoengineered polymers, there are highly efficient superplasticizers for concrete and high-strength fibers with exceptional energy-absorbing capacity. Experts believe that nanotechnology remains in its exploration stage and has potential in improving conventional materials such as steel. Understanding the composite nanostructures of such materials and exploring nanomaterials' different applications may lead to the development of new materials with expanded properties, such as electrical conductivity as well as temperature-, moisture- and stress-sensing abilities. Due to the complexity of the equipment, nanomaterials have a high cost compared to conventional materials, meaning they are not likely to feature in high-volume building materials. In special cases, nanotechnology can help reduce costs for complicated problems. But in most cases, traditional construction methods remain more cost-efficient.
With the improvement of manufacturing technologies, the costs of applying nanotechnology to construction have been decreasing over time and are expected to decrease further. === Nanoelectronics === Nanoelectronics refers to the application of nanotechnology to electronic components. Nanoelectronics aims to improve the display performance and power consumption of electronic devices while shrinking them. Therefore, nanoelectronics can help reach the goal set out in Moore's law, which predicts the continued trend of scaling down in the size of integrated circuits. Nanoelectronics is a multidisciplinary area composed of quantum physics, device analysis, system integration, and circuit analysis. Since the de Broglie wavelength of charge carriers in semiconductors may be on the order of 100 nm, quantum effects at this length scale become essential (a rough numerical estimate follows the article text below). The different device physics and novel quantum effects of electrons can lead to exciting applications. == Energy applications == The energy applications of nanotechnology relate to using the small size of nanoparticles to store energy more efficiently. This promotes the use of renewable energy through green nanotechnology by generating, storing, and using energy without emitting harmful greenhouse gases such as carbon dioxide. === Solar Cells === Nanoparticles used in solar cells increase the amount of energy absorbed from sunlight. === Hydrogen Fuel Cells === Nanotechnology is enabling the use of hydrogen energy at a much higher capacity. Hydrogen fuel cells, while they are not an energy source themselves, allow for storing energy from sunlight and other renewable sources in an environmentally friendly fashion without any CO2 emissions. Some of the main drawbacks of traditional hydrogen fuel cells are that they are expensive and not durable enough for commercial uses. However, by using nanoparticles, both the durability and the price over time improve significantly. Furthermore, conventional fuel cells are too large to be stored in volume, but researchers have discovered that nanoblades can store greater volumes of hydrogen that can then be saved inside carbon nanotubes for long-term storage. === Nanographene Batteries === Nanotechnology is giving rise to nanographene batteries that can store energy more efficiently and weigh less. Lithium-ion batteries have been the primary battery technology in electronics for the last decade, but the current limits in the technology make it difficult to densify batteries due to the potential dangers of heat and explosion. Graphene batteries being tested in experimental electric cars have promised capacities 4 times greater than current batteries at 77% lower cost. Additionally, graphene batteries provide stable life cycles of up to 250,000 cycles, which would give electric vehicles and other long-lived products a reliable energy source for decades. == References ==
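The following sketch backs the nanoelectronics passage above with a rough order-of-magnitude estimate of a carrier's thermal de Broglie wavelength. The GaAs effective mass of 0.067 electron masses is a standard literature value used here as an assumption.

```python
import math

h  = 6.62607015e-34    # Planck constant, J*s
kB = 1.380649e-23      # Boltzmann constant, J/K
me = 9.1093837015e-31  # electron rest mass, kg

def thermal_de_broglie(m_eff, T=300.0):
    """lambda = h / p, taking p ~ sqrt(3 * m * kB * T): the momentum of
    a carrier with the mean thermal kinetic energy (3/2) kB T."""
    p = math.sqrt(3.0 * m_eff * kB * T)
    return h / p

# Conduction-electron effective mass in GaAs ~ 0.067 m_e (assumed value).
lam = thermal_de_broglie(0.067 * me)
print(f"Thermal de Broglie wavelength in GaAs at 300 K: {lam * 1e9:.0f} nm")
# ~24 nm: comparable to modern device dimensions, so quantum effects
# cannot be neglected at this length scale, as the passage above notes.
```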
Wikipedia/Applications_of_nanotechnology
Nanolithography (NL) is a growing field of techniques within nanotechnology dealing with the engineering (patterning, e.g. etching, depositing, writing, or printing) of nanometer-scale structures on various materials. The modern term refers to the design of structures in the range of 10−9 to 10−6 meters, i.e. the nanometer scale. Essentially, the field is a derivative of lithography, only covering very small structures. All NL methods can be categorized into four groups: photo lithography, scanning lithography, soft lithography and other miscellaneous techniques. == History == Nanolithography has evolved from the need to increase the number of sub-micrometer features (e.g. transistors and capacitors) in an integrated circuit in order to keep up with Moore's Law. While lithographic techniques have been around since the late 18th century, none were applied to nanoscale structures until the mid-1950s. With the evolution of the semiconductor industry, demand for techniques capable of producing micro- and nano-scale structures skyrocketed. Photolithography was applied to these structures for the first time in 1958, beginning the age of nanolithography. Since then, photolithography has become the most commercially successful technique, capable of producing sub-100 nm patterns. There are several techniques associated with the field, each designed to serve its many uses in the medical and semiconductor industries. Breakthroughs in this field contribute significantly to the advancement of nanotechnology, and are increasingly important today as demand for smaller and smaller computer chips increases. Further areas of research deal with physical limitations of the field, energy harvesting, and photonics. == Etymology == From Greek, the word nanolithography can be broken up into three parts: "nano" meaning dwarf, "lith" meaning stone, and "graphy" meaning to write, or "tiny writing onto stone." == Photolithography == As of 2021, photolithography is the most heavily used technique in mass production of microelectronics and semiconductor devices. It is characterized by both high production throughput and small-sized features of the patterns. === Optical lithography === Optical lithography (or photolithography) is one of the most important and prevalent sets of techniques in the nanolithography field. Optical lithography contains several important derivative techniques, all of which use very short light wavelengths to change the solubility of certain molecules, causing them to wash away in solution and leave behind the desired structure. Several optical lithography techniques require the use of liquid immersion and a host of resolution enhancement technologies like phase-shift masks (PSM) and optical proximity correction (OPC). Techniques in this set include multiphoton lithography, X-ray lithography, light coupling nanolithography (LCM), and extreme ultraviolet lithography (EUVL). This last technique is considered to be the most important next generation lithography (NGL) technique due to its ability to produce structures accurately below 30 nanometers at high throughput, which makes it a viable option for commercial purposes (a Rayleigh-criterion estimate follows the article text below). === Quantum optical lithography === Quantum optical lithography (QOL) is a diffraction-unlimited method able to write at 1 nm resolution by optical means, using a red laser diode (λ = 650 nm). Complex patterns like geometrical figures and letters were obtained at 3 nm resolution on a resist substrate.
The method was applied to nanopattern graphene at 20 nm resolution. == Scanning lithography == === Electron-beam lithography === Electron beam lithography (EBL) or electron-beam direct-write lithography (EBDW) scans a focused beam of electrons on a surface covered with an electron-sensitive film or resist (e.g. PMMA or HSQ) to draw custom shapes. By changing the solubility of the resist and subsequent selective removal of material by immersion in a solvent, sub-10 nm resolutions have been achieved. This form of direct-write, maskless lithography has high resolution and low throughput, limiting single-column e-beams to photomask fabrication, low-volume production of semiconductor devices, and research and development. Multiple-electron-beam approaches aim to increase throughput for semiconductor mass production. EBL can be utilized for selective protein nanopatterning on a solid substrate, aimed at ultrasensitive sensing. Resists for EBL can be hardened using sequential infiltration synthesis (SIS). === Scanning probe lithography === Scanning probe lithography (SPL) is another set of techniques for patterning at the nanometer scale, down to individual atoms, using scanning probes, either by etching away unwanted material or by directly writing new material onto a substrate. Some of the important techniques in this category include dip-pen nanolithography, thermochemical nanolithography, thermal scanning probe lithography, and local oxidation nanolithography. Dip-pen nanolithography is the most widely used of these techniques. === Proton beam writing === This technique uses a focused beam of high-energy (MeV) protons to pattern resist material at nanodimensions and has been shown to be capable of producing high-resolution patterning well below the 100 nm mark. === Charged-particle lithography === This set of techniques includes ion- and electron-projection lithographies. Ion beam lithography uses a focused or broad beam of energetic lightweight ions (like He+) for transferring a pattern to a surface. Using ion beam proximity lithography (IBL), nanoscale features can be transferred onto non-planar surfaces. == Soft lithography == Soft lithography uses elastomer materials made from different chemical compounds such as polydimethylsiloxane. Elastomers are used to make a stamp, mold, or mask (akin to a photomask), which in turn is used to generate micro patterns and microstructures. The techniques described below are limited to a single patterning stage; subsequent patterning of the same surface is difficult due to misalignment problems. Soft lithography is not suitable for production of semiconductor-based devices, as it is not compatible with metal deposition and etching. The methods are commonly used for chemical patterning. === PDMS lithography === === Microcontact printing === === Multilayer soft lithography === == Miscellaneous techniques == === Nanoimprint lithography === Nanoimprint lithography (NIL) and its variants, such as step-and-flash imprint lithography and laser-assisted directed imprint (LADI), are promising nanopattern replication technologies where patterns are created by mechanical deformation of imprint resists, typically monomer or polymer formulations that are cured by heat or UV light during imprinting. This technique can be combined with contact printing and cold welding. Nanoimprint lithography is capable of producing patterns at sub-10 nm levels.
=== Magnetolithography === Magnetolithography (ML) is based on applying a magnetic field to the substrate using paramagnetic metal masks called "magnetic masks". The magnetic mask, which is analogous to a photomask, defines the spatial distribution and shape of the applied magnetic field. The second component is ferromagnetic nanoparticles (analogous to the photoresist) that are assembled onto the substrate according to the field induced by the magnetic mask. === Nanofountain drawing === A nanofountain probe is a microfluidic device, similar in concept to a fountain pen, that deposits a narrow track of chemical from a reservoir onto the substrate according to a programmed movement pattern. === Nanosphere lithography === Nanosphere lithography uses self-assembled monolayers of spheres (typically made of polystyrene) as evaporation masks. This method has been used to fabricate arrays of gold nanodots with precisely controlled spacings. === Neutral particle lithography === Neutral particle lithography (NPL) uses a broad beam of energetic neutral particles for pattern transfer onto a surface. === Plasmonic lithography === Plasmonic lithography uses surface plasmon excitations to generate beyond-diffraction-limit patterns, benefiting from the subwavelength field confinement of surface plasmon polaritons. === Stencil lithography === Stencil lithography is a resistless and parallel method of fabricating nanometer-scale patterns using nanometer-size apertures as shadow masks. == References ==
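As referenced in the optical lithography section above, the resolution of a projection lithography system is commonly estimated with the Rayleigh criterion, CD = k1 λ / NA. A minimal sketch follows; the k1 and numerical-aperture values are representative assumptions, not figures from this article.

```python
def min_feature(wavelength_nm, NA, k1=0.4):
    """Rayleigh criterion: smallest printable half-pitch (critical
    dimension, nm) for exposure wavelength (nm), numerical aperture NA,
    and process factor k1 (practical values roughly 0.3-0.8)."""
    return k1 * wavelength_nm / NA

# Representative tool parameters (assumptions, not from the article):
tools = {
    "ArF immersion (193 nm, NA 1.35)": (193.0, 1.35),
    "EUV (13.5 nm, NA 0.33)": (13.5, 0.33),
}
for name, (lam, na) in tools.items():
    print(f"{name}: ~{min_feature(lam, na):.0f} nm half-pitch")
# ArF immersion: ~57 nm; EUV: ~16 nm -- consistent with EUV's ability
# to print features below 30 nm, as noted in the article above.
```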
Wikipedia/Nanolithography
The health and safety hazards of nanomaterials include the potential toxicity of various types of nanomaterials, as well as fire and dust explosion hazards. Because nanotechnology is a recent development, the health and safety effects of exposures to nanomaterials, and what levels of exposure may be acceptable, are subjects of ongoing research. Of the possible hazards, inhalation exposure appears to present the most concern, with animal studies showing pulmonary effects such as inflammation, fibrosis, and carcinogenicity for some nanomaterials. Skin contact and ingestion exposure, and dust explosion hazards, are also a concern. Guidance has been developed for hazard controls that are effective in reducing exposures to safe levels, including substitution with safer forms of a nanomaterial, engineering controls such as proper ventilation, and personal protective equipment as a last resort. For some materials, occupational exposure limits have been developed to determine a maximum safe airborne concentration of nanomaterials, and exposure assessment is possible using standard industrial hygiene sampling methods. An ongoing occupational health surveillance program can also help to protect workers. Microplastics and nanoparticles from plastic containers are an increasing concern. == Background == Nanotechnology is the manipulation of matter at the atomic scale to create materials, devices, or systems with new properties or functions, with potential applications in energy, healthcare, industry, communications, agriculture, consumer products, and other sectors. Nanomaterials have at least one primary dimension of less than 100 nanometers, and often have properties different from those of their bulk components that are technologically useful. The classes of materials of which nanoparticles are typically composed include elemental carbon, metals or metal oxides, and ceramics. According to the Woodrow Wilson Center, the number of consumer products or product lines that incorporate nanomaterials increased from 212 to 1,317 between 2006 and 2011. Worldwide investment in nanotechnology increased from $432 million in 1997 to about $4.1 billion in 2005.: 1–3  Because nanotechnology is a recent development, the health and safety effects of exposures to nanomaterials, and what levels of exposure may be acceptable, are not yet fully understood. Research concerning the handling of nanomaterials is underway, and guidance for some nanomaterials has been developed.: 1–3  As with any new technology, the earliest exposures are expected to occur among workers conducting research in laboratories and pilot plants, making it important that they work in a manner that is protective of their safety and health.: 1  A risk management system is composed of three parts. Hazard identification involves determining what health and safety concerns are present for both the nanomaterial and its corresponding bulk material, based on a review of safety data sheets, peer-reviewed literature, and guidance documents on the material. For nanomaterials, toxicity hazards are the most important, but dust explosion hazards may also be relevant. Exposure assessment involves determining actual routes of exposure in a specific workplace, including a review of which areas and tasks are most likely to cause exposure.
Exposure control involves putting procedures in place to minimize or eliminate exposures according to the hierarchy of hazard controls.: 2–6 : 3–5  Ongoing verification of hazard controls can occur through monitoring of airborne nanomaterial concentrations using standard industrial hygiene sampling methods, and an occupational health surveillance program may be instituted.: 14–16  A recently adopted risk management method is the Safe by Design (SbD) approach, which aims to eliminate or reduce the risks of new technologies, including nanotechnology, at the design stage of a product or production process. Anticipating risks is challenging because some risks may emerge only after a technology is implemented, at later stages in the innovation process; in such cases, other risk management strategies based on non-design principles need to be applied. The approach considers the purposes of, and constraints on, implementing SbD in the industrial innovation process and, on that basis, establishes workflows, called Safe by Design strategies, that identify risks and propose solutions to reduce or mitigate them as early as possible in the innovation process. == Hazards == === Toxicity === ==== Respiratory ==== Inhalation exposure is the most common route of exposure to airborne particles in the workplace. The deposition of nanoparticles in the respiratory tract is determined by the shape and size of particles or their agglomerates, and they are deposited in the alveolar compartment to a greater extent than larger respirable particles. Based on animal studies, nanoparticles may enter the bloodstream from the lungs and translocate to other organs, including the brain.: 11–12  The inhalation risk is affected by the dustiness of the material, the tendency of particles to become airborne in response to a stimulus. Dust generation is affected by the particle shape, size, bulk density, and inherent electrostatic forces, and whether the nanomaterial is a dry powder or incorporated into a slurry or liquid suspension.: 5–6  Animal studies indicate that carbon nanotubes and carbon nanofibers can cause pulmonary effects including inflammation, granulomas, and pulmonary fibrosis, which were of similar or greater potency when compared with other known fibrogenic materials such as silica, asbestos, and ultrafine carbon black. Some studies in cells or animals have shown genotoxic or carcinogenic effects, or systemic cardiovascular effects from pulmonary exposure. Although the extent to which animal data may predict clinically significant lung effects in workers is not known, the toxicity seen in the short-term animal studies indicates a need for protective action for workers exposed to these nanomaterials. As of 2013, further research was needed in long-term animal studies and epidemiologic studies in workers. No reports of actual adverse health effects in workers using or producing these nanomaterials were known as of 2013.: v–ix, 33–35  Titanium dioxide (TiO2) dust is considered a lung tumor risk, with ultrafine (nanoscale) particles having an increased mass-based potency relative to fine TiO2, through a secondary genotoxicity mechanism that is not specific to TiO2 but primarily related to particle size and surface area.: v–vii, 73–78  ==== Dermal ==== Some studies suggest that nanomaterials could potentially enter the body through intact skin during occupational exposure.
Studies have shown that particles smaller than 1 μm in diameter may penetrate into mechanically flexed skin samples, and that nanoparticles with varying physicochemical properties were able to penetrate the intact skin of pigs. Factors such as size, shape, water solubility, and surface coating directly affect a nanoparticle's potential to penetrate the skin. At this time, it is not fully known whether skin penetration of nanoparticles would result in adverse effects in animal models, although topical application of raw SWCNT to nude mice has been shown to cause dermal irritation, and in vitro studies using primary or cultured human skin cells have shown that carbon nanotubes can enter cells and cause release of pro-inflammatory cytokines, oxidative stress, and decreased viability. It remains unclear, however, how these findings may be extrapolated to a potential occupational risk.: 12 : 63–64  In addition, nanoparticles may enter the body through wounds, with particles migrating into the blood and lymph nodes. ==== Gastrointestinal ==== Ingestion can occur from unintentional hand-to-mouth transfer of materials; this has been found to happen with traditional materials, and it is scientifically reasonable to assume that it also could happen during handling of nanomaterials. Ingestion may also accompany inhalation exposure because particles that are cleared from the respiratory tract via the mucociliary escalator may be swallowed.: 12  === Fire and explosion === There is concern that engineered carbon nanoparticles, when manufactured on an industrial scale, could pose a dust explosion hazard, especially for processes such as mixing, grinding, drilling, sanding, and cleaning. Knowledge remains limited about the potential explosivity of materials when subdivided down to the nanoscale. The explosion characteristics of nanoparticles are highly dependent on the manufacturer and the humidity.: 17–18  For microscale particles, as particle size decreases and the specific surface area increases, the explosion severity increases. However, for dusts of organic materials such as coal, flour, methylcellulose, and polyethylene, severity ceases to increase as the particle size is reduced below ~50 μm. This is because decreasing particle size primarily increases the volatilization rate, which becomes rapid enough that gas-phase combustion becomes the rate-limiting step, and further decrease in particle size will not increase the overall combustion rate. While the minimum explosion concentration does not vary significantly with nanoparticle size, the minimum ignition energy and temperature have been found to decrease with particle size. Metal-based nanoparticles exhibit more severe explosions than do carbon nanomaterials, and their chemical reaction pathway is qualitatively different. Studies on aluminum nanoparticles and titanium nanoparticles indicate that they are explosion hazards.: 17–18  One study found that the likelihood of an explosion, but not its severity, increases significantly for nanoscale metal particles, and they can spontaneously ignite under certain conditions during laboratory testing and handling. High-resistivity powders can accumulate electric charge, causing a spark hazard, and low-resistivity powders can build up in electronics, causing a short circuit hazard, both of which can provide an ignition source. In general, powders of nanomaterials have higher resistivity than the equivalent micron-scale powders, and humidity decreases their resistivity.
One study found powders of metal-based nanoparticles to be mid- to high-resistivity depending on humidity, while carbon-based nanoparticles were found to be low-resistivity regardless of humidity. Powders of nanomaterials are unlikely to present an unusual fire hazard as compared to their cardboard or plastic packaging, as they are usually produced in small quantities, with the exception of carbon black. However, the catalytic properties of nanoparticles and nanostructured porous materials may cause unintended catalytic reactions that, based on their chemical composition, would not otherwise be anticipated.: 21  === Radioactivity === Engineered radioactive nanoparticles have applications in medical diagnostics, medical imaging, toxicokinetics, and environmental health, and are being investigated for applications in nuclear medicine. Radioactive nanoparticles present special challenges in operational health physics and internal dosimetry that are not present for vapors or larger particles, as the nanoparticles' toxicokinetics depend on their physical and chemical properties including size, shape, and surface chemistry. In some cases, the inherent physicochemical toxicity of the nanoparticle itself may lead to lower exposure limits than those associated with the radioactivity alone, which is not the case with most radioactive materials. In general, however, most elements of a standard radiation protection program are applicable to radioactive nanomaterials, and many hazard controls for nanomaterials will be effective with the radioactive versions. == Hazard controls == Controlling exposures to hazards is the fundamental method of protecting workers. The hierarchy of hazard control is a framework that encompasses a succession of control methods to reduce the risk of illness or injury. In decreasing order of effectiveness, these are elimination of the hazard, substitution with another material or process that is a lesser hazard, engineering controls that isolate workers from the hazard, administrative controls that change workers' behavior to limit the quantity or duration of exposure, and personal protective equipment worn on the workers' body.: 9  Prevention through design is the concept of applying control methods to minimize hazards early in the design process, with an emphasis on optimizing employee health and safety throughout the life cycle of materials and processes. It increases the cost-effectiveness of occupational safety and health because hazard control methods are integrated early into the process, rather than needing to disrupt existing procedures to include them later. In this context, adopting hazard controls earlier in the design process and higher on the hierarchy of controls leads to faster time to market, improved operational efficiency, and higher product quality.: 6–8  === Elimination and substitution === Elimination and substitution are the most desirable approaches to hazard control, and are most effective early in the design process. Nanomaterials themselves often cannot be eliminated or substituted with conventional materials because their unique properties are necessary to the desired product or process.: 9–10  However, it may be possible to choose properties of the nanoparticle such as size, shape, functionalization, surface charge, solubility, agglomeration, and aggregation state to improve their toxicological properties while retaining the desired functionality.
Other materials used incidentally in the process, such as solvents, are also amenable to substitution.: 8  In addition to the materials themselves, procedures used to handle them can be improved. For example, using a nanomaterial slurry or suspension in a liquid solvent instead of a dry powder will reduce dust exposure. Reducing or eliminating steps that involve transfer of powder or opening packages containing nanomaterials also reduces aerosolization and thus the potential hazard to the worker.: 9–10  Reducing agitation procedures such as sonication, and reducing the temperature of reactors to minimize release of nanomaterials in exhaust, also reduce hazards to workers.: 10–12  === Engineering controls === Engineering controls are physical changes to the workplace that isolate workers from hazards by containing them in an enclosure, or removing contaminated air from the workplace through ventilation and filtering. They are used when hazardous substances and processes cannot be eliminated or replaced with less hazardous substitutes. Well-designed engineering controls are typically passive, in the sense of being independent of worker interactions, which reduces the potential for worker behavior to impact exposure levels. The initial cost of engineering controls can be higher than administrative controls or personal protective equipment, but the long-term operating costs are frequently lower and can sometimes provide cost savings in other areas of the process.: 10–11  The type of engineering control optimal for each situation is influenced by the quantity and dustiness of the material as well as the duration of the task.: 9–11  Ventilation systems can be local or general. General exhaust ventilation operates on an entire room through a building's HVAC system. It is inefficient and costly as compared to local exhaust ventilation, and is not suitable by itself for controlling exposure, although it can provide negative room pressure to prevent contaminants from exiting the room. Local exhaust ventilation operates at or near the source of contamination, often in conjunction with an enclosure.: 11–12  Examples of local exhaust systems include fume hoods, gloveboxes, biosafety cabinets, and vented balance enclosures. Exhaust hoods lacking an enclosure are less preferable, and laminar flow hoods are not recommended because they direct air outwards towards the worker.: 18–28  Several control verification techniques can be used with ventilation systems, including pitot tubes, hot-wire anemometers, smoke generators, tracer-gas leak testing, and standardized testing and certification procedures.: 50–52, 59–60 : 14–15  Examples of non-ventilation engineering controls include placing equipment that may release nanomaterials in a separate room, and placing walk-off sticky mats at room exits.: 9–11  Antistatic devices can be used when handling nanomaterials to reduce their electrostatic charge, making them less likely to disperse or adhere to clothing.: 28  Standard dust control methods such as enclosures for conveyor systems, using a sealed system for bag filling, and water spray application are effective at reducing respirable dust concentrations.: 16–17  === Administrative controls === Administrative controls are changes to workers' behavior to mitigate a hazard. They include training on best practices for safe handling, storage, and disposal of nanomaterials, proper awareness of hazards through labeling and warning signage, and encouraging a general safety culture. 
Administrative controls can complement engineering controls should they fail, or when they are not feasible or do not reduce exposures to an acceptable level. Some examples of good work practices include cleaning work spaces with wet-wiping methods or a HEPA-filtered vacuum cleaner instead of dry sweeping with a broom, avoiding handling nanomaterials in a free particle state, and storing nanomaterials in containers with tightly closed lids. Normal safety procedures such as hand washing, not storing or consuming food in the laboratory, and proper disposal of hazardous waste are also administrative controls.: 17–18  Other examples are limiting the time workers are handling a material or in a hazardous area, and exposure monitoring for the presence of nanomaterials.: 14–15  === Personal protective equipment === Personal protective equipment (PPE) must be worn on the worker's body and is the least desirable option for controlling hazards. It is used when other controls are not effective or have not been evaluated, during maintenance, or in emergency situations such as spill response. PPE normally used for typical chemicals is also appropriate for nanomaterials, including wearing long pants, long-sleeve shirts, and closed-toed shoes, and the use of safety gloves, goggles, and impervious laboratory coats. Nitrile gloves are preferred because latex gloves do not provide protection from most chemical solvents and may present an allergy hazard. Face shields are not an acceptable replacement for goggles because they do not protect against unbound dry materials. Woven cotton lab coats are not recommended for nanomaterials, as they can become contaminated with nanomaterials and release them later. Donning and removing PPE in a changing room prevents contamination of outside areas.: 12–14  Respirators are another form of PPE. Respirator filters with a NIOSH air filtration rating of N95 or P100 have been shown to be effective at capturing nanoparticles, although leakage between the respirator seal and the skin may be more significant, especially with half-mask respirators. Surgical masks are not effective against nanomaterials.: 12–14  Smaller nanoparticles of size 4–20 nm are captured more efficiently by filters than larger ones of size 30–100 nm, because Brownian motion results in the smaller particles being more likely to contact a filter fiber. In the United States, the Occupational Safety and Health Administration requires fit testing and medical clearance for use of respirators, and the Environmental Protection Agency requires the use of full face respirators with N100 filters for multi-walled carbon nanotubes not embedded in a solid matrix, if exposure is not otherwise controlled. == Industrial hygiene == === Occupational exposure limits === An occupational exposure limit (OEL) is an upper limit on the acceptable concentration of a hazardous substance in workplace air. As of 2016, quantitative OELs have not been determined for most nanomaterials. Agencies and organizations from several countries, including the British Standards Institute and the Institute for Occupational Safety and Health in Germany, have established OELs for some nanomaterials, and some companies have supplied OELs for their products.: 7  As of 2021, the U.S.
National Institute for Occupational Safety and Health has determined non-regulatory recommended exposure limits (RELs) for three classes of nanomaterials: 1.0 μg/m3 for carbon nanotubes and carbon nanofibers, as background-corrected elemental carbon, as an 8-hour time-weighted average (TWA) respirable mass concentration;: x, 43  300 μg/m3 for ultrafine titanium dioxide, as TWA concentrations for up to 10 hr/day during a 40-hour work week;: vii, 77–78  and 0.9 μg/m3 for silver nanoparticles, as an airborne respirable 8-hour TWA concentration (a worked TWA calculation follows the article text below). A properly tested half-face particulate respirator will provide protection at exposure concentrations 10 times the REL, while an elastomeric full facepiece respirator with P100 filters will provide protection at 50 times the REL.: 18  In the absence of OELs, a control banding scheme may be used. Control banding is a qualitative strategy that uses a rubric to place hazards into one of four categories, or "bands", each of which has a recommended level of hazard controls. Organizations including GoodNanoGuide, Lawrence Livermore National Laboratory, and Safe Work Australia have developed control banding tools that are specific for nanomaterials.: 31–33  The GoodNanoGuide control banding scheme is based only on exposure duration, whether the material is bound, and the extent of knowledge of the hazards. The LANL scheme assigns points for 15 different hazard parameters and 5 exposure potential factors. Alternatively, the "As Low As Reasonably Achievable" concept may be used.: 7–8  === Exposure assessment === Exposure assessment is a set of methods used to monitor contaminant release and exposures to workers. These methods include personal sampling, where samplers are located in the personal breathing zone of the worker, often attached to a shirt collar to be as close to the nose and mouth as possible; and area/background sampling, where they are placed at static locations. Assessments generally use both particle counters, which monitor the real-time quantity of nanomaterials and other background particles, and filter-based samples, which can be used to identify the nanomaterial, usually using electron microscopy and elemental analysis.: 14–15  Not all instruments used to detect aerosols are suitable for monitoring occupational nanomaterial emissions because they may not be able to detect smaller particles, or may be too large or difficult to ship to a workplace.: 57 : 23–33  Suitable particle counters can detect a wide range of particle sizes, as nanomaterials may aggregate in the air. It is recommended to simultaneously test adjacent work areas to establish a background concentration, as direct-reading instruments cannot distinguish the target nanomaterial from incidental background nanoparticles from motor or pump exhaust or heating vessels.: 47–49  While mass-based metrics are traditionally used to characterize toxicological effects of exposure to air contaminants, as of 2013 it was unclear which metrics are most important with regard to engineered nanomaterials. Animal and cell-culture studies have shown that size and shape are the two major factors in their toxicological effects.: 57–58  Surface area and surface chemistry also appeared to be more important than mass concentration.: 23  The NIOSH Nanomaterial Exposure Assessment Technique (NEAT 2.0) is a sampling strategy to determine exposure potential for engineered nanomaterials.
NEAT 2.0 includes filter-based and area samples, as well as a comprehensive assessment of emissions at processes and job tasks, to better understand peak emission periods. Evaluating worker practices, ventilation efficacy, and other engineering exposure control systems and risk management strategies allows for a comprehensive exposure assessment. The NIOSH Manual of Analytical Methods includes guidance on electron microscopy of filter samples of carbon nanotubes and nanofibers; additionally, some NIOSH methods developed for other chemicals can be used for off-line analysis of nanomaterials, including their morphology and geometry, elemental carbon content (relevant for carbon-based nanomaterials), and elemental makeup. Efforts to create reference materials are ongoing. === Occupational health surveillance === Occupational health surveillance involves the ongoing systematic collection, analysis, and dissemination of exposure and health data on groups of workers, for the purpose of preventing disease and evaluating the effectiveness of intervention programs. It encompasses both medical surveillance and hazard surveillance. A basic medical surveillance program contains a baseline medical evaluation and periodic follow-up examinations, post-incident evaluations, worker training, and identification of trends or patterns from medical screening data. The related topic of medical screening focuses on the early detection of adverse health effects in individual workers, to provide an opportunity for intervention before disease processes occur. Screening may involve obtaining and reviewing an occupational history, medical examination, and medical testing. As of 2016, there were no specific screening tests or health evaluations to identify health effects in people that are caused solely by exposure to engineered nanomaterials. However, any medical screening recommendations for the bulk material that a nanoparticle is made of still apply, and in 2013 NIOSH concluded that the toxicologic evidence on carbon nanotubes and carbon nanofibers had advanced enough to make specific recommendations for the medical surveillance and screening of exposed workers. Medical screening and resulting interventions represent secondary prevention and do not replace primary prevention efforts based on direct hazard controls to minimize employee exposures to nanomaterials. === Emergency preparedness === It is recommended that a nanomaterial spill kit be assembled before an emergency occurs. It should include barricade tape; nitrile or other chemically impervious gloves; an elastomeric full-facepiece respirator with P100 or N100 filters (fitted appropriately to the responder); adsorbent materials such as spill mats; disposable wipes; sealable plastic bags; walk-off sticky mats; a spray bottle with deionized water or another appropriate liquid for wetting dry powders; and a HEPA-filtered vacuum. It is considered unsafe to use compressed air, dry sweeping, or vacuums without a HEPA filter to clear dust. == Regulation == === United States === The Food and Drug Administration regulates nanomaterials under the Federal Food, Drug, and Cosmetic Act when used as food additives, drugs, or cosmetics.
The Consumer Product Safety Commission requires testing and certification of many consumer products for compliance with consumer product safety requirements, and cautionary labeling of hazardous substances under the Federal Hazardous Substances Act. The General Duty Clause of the Occupational Safety and Health Act requires all employers to keep their workplaces free of serious recognized hazards. The Occupational Safety and Health Administration also has recording and reporting requirements for occupational injuries and illnesses under 29 CFR 1904 for businesses with more than 10 employees, and protection and communication regulations under 29 CFR 1910. Companies producing new products containing nanomaterials must use the Hazard Communication Standard to create safety data sheets containing 16 sections for downstream users such as customers, workers, disposal services, and others. This may require toxicological or other testing, and all data or information provided must be vetted by properly controlled testing. The ISO/TR 13329 standard provides guidance specifically on the preparation of safety data sheets for nanomaterials. The National Institute for Occupational Safety and Health does not issue regulations, but conducts research and makes recommendations to prevent worker injury and illness. State and local governments may have additional regulations. The Environmental Protection Agency (EPA) regulates nanomaterials under the Toxic Substances Control Act, and has permitted limited manufacture of new chemical nanomaterials through the use of consent orders or Significant New Use Rules (SNURs). In 2011, the EPA issued a SNUR on multi-walled carbon nanotubes, codified as 40 CFR 721.10155. Other statutes within the EPA's jurisdiction may apply, such as the Federal Insecticide, Fungicide, and Rodenticide Act (if antimicrobial claims are made), the Clean Air Act, or the Clean Water Act. The EPA regulates nanomaterials under the same provisions as other hazardous chemical substances. === Other countries === In the European Union, nanomaterials classified by the European Commission as hazardous chemical substances are regulated under the European Chemicals Agency's Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) regulation, as well as the Classification, Labelling and Packaging (CLP) regulation. Under the REACH regulation, companies have the responsibility of collecting information on the properties and uses of substances that they manufacture or import at or above quantities of 1 tonne per year, including nanomaterials. There are special provisions for cosmetics that contain nanomaterials, and for biocidal materials under the Biocidal Products Regulation (BPR) when at least 50% of their primary particles are nanoparticles. In the United Kingdom, powders of nanomaterials may fall under the Chemicals (Hazard Information and Packaging for Supply) Regulations 2002, as well as the Dangerous Substances and Explosive Atmospheres Regulations 2002 if they are capable of fueling a dust explosion. == See also ==
Construction
Construction waste
Diesel particulate matter
Laboratory safety
Power tool
Renovation
Toxicology of carbon nanomaterials
Ultrafine particles
Open burning of waste
== References ==
Wikipedia/Health_and_safety_hazards_of_nanomaterials
Nanomechanics is a branch of nanoscience studying the fundamental mechanical (elastic, thermal and kinetic) properties of physical systems at the nanometer scale. Nanomechanics has emerged at the crossroads of biophysics, classical mechanics, solid-state physics, statistical mechanics, materials science, and quantum chemistry. As an area of nanoscience, nanomechanics provides a scientific foundation for nanotechnology. Often, nanomechanics is viewed as a branch of nanotechnology, i.e., an applied area with a focus on the mechanical properties of engineered nanostructures and nanosystems (systems with nanoscale components of importance). Examples of the latter include nanomachines, nanoparticles, nanopowders, nanowires, nanorods, nanoribbons, nanotubes, including carbon nanotubes (CNT) and boron nitride nanotubes (BNNTs); nanoshells, nanomembranes, nanocoatings, nanocomposite/nanostructured materials, nanofluids (fluids with dispersed nanoparticles), nanomotors, etc. Some of the well-established fields of nanomechanics are: nanomaterials, nanotribology (friction, wear and contact mechanics at the nanoscale), nanoelectromechanical systems (NEMS), and nanofluidics. As a fundamental science, nanomechanics is based on some empirical principles (basic observations): general mechanics principles, and specific principles arising from the smallness of the physical sizes of the object of study. General mechanics principles include:
Energy and momentum conservation principles
Hamilton's variational principle
Symmetry principles
Due to the smallness of the studied object, nanomechanics also accounts for:
Discreteness of the object, whose size is comparable with the interatomic distances
Plurality, but finiteness, of degrees of freedom in the object
Importance of thermal fluctuations
Importance of entropic effects (see configuration entropy)
Importance of quantum effects (see quantum machine)
These principles serve to provide a basic insight into the novel mechanical properties of nanometer objects. Novelty is understood in the sense that these properties are not present in similar macroscale objects, or differ greatly from the properties of those (e.g., nanorods vs. usual macroscopic beam structures). In particular, the smallness of the object itself gives rise to various surface effects determined by the higher surface-to-volume ratio of nanostructures, which affects the mechanoenergetic and thermal properties (melting point, heat capacity, etc.) of nanostructures. Discreteness is a fundamental reason, for instance, for the dispersion of mechanical waves in solids, and for some special behavior of basic elastomechanics solutions at small scales. Plurality of degrees of freedom and the rise of thermal fluctuations are the reasons for thermal tunneling of nanoparticles through potential barriers, as well as for the cross-diffusion of liquids and solids. Smallness and thermal fluctuations provide the basic reasons for the Brownian motion of nanoparticles. The increased importance of thermal fluctuations and configuration entropy at the nanoscale gives rise to superelasticity, entropic elasticity (entropic forces), and other exotic types of elasticity of nanostructures. Aspects of configuration entropy are also of great interest in the context of self-organization and cooperative behavior of open nanosystems.
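The surface-to-volume scaling invoked above can be made explicit with a one-line estimate; the following is a sketch for an idealized spherical particle of radius r (the 5 nm figure is an illustrative choice, not a value from the text):

$$ \frac{S}{V} = \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} = \frac{3}{r}, \qquad r = 5\ \text{nm} \ \Rightarrow\ \frac{S}{V} = 6\times 10^{8}\ \text{m}^{-1}, $$

roughly five orders of magnitude larger than for a millimeter-sized grain (3×10³ m⁻¹), which is why surface terms that are negligible at the macroscale can dominate the energetics of nanostructures.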
Quantum effects determine the forces of interaction between individual atoms in physical objects, which are introduced in nanomechanics by means of averaged mathematical models called interatomic potentials. Subsequent utilization of the interatomic potentials within classical multibody dynamics provides deterministic mechanical models of nanostructures and systems at the atomic scale/resolution. Numerical methods of solution of these models are called molecular dynamics (MD), and sometimes molecular mechanics (especially in relation to statically equilibrated (still) models); a minimal illustrative sketch of the MD idea appears at the end of this article. Non-deterministic numerical approaches include Monte Carlo, kinetic Monte Carlo (KMC), and other methods. Contemporary numerical tools also include hybrid multiscale approaches allowing concurrent or sequential utilization of atomistic-scale methods (usually MD) with continuum (macro) scale methods (usually the finite element method) within a single mathematical model. The development of these complex methods is a separate subject of applied mechanics research. Quantum effects also determine novel electrical, optical and chemical properties of nanostructures, and therefore they receive even greater attention in adjacent areas of nanoscience and nanotechnology, such as nanoelectronics, advanced energy systems, and nanobiotechnology. == See also ==
Molecular machine
Geometric phase (section Stochastic Pump Effect)
Nanoelectromechanical relay
== References ==
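As noted above, a minimal molecular-dynamics sketch follows: a Lennard-Jones pair potential integrated with velocity Verlet, in reduced units. It illustrates the general idea only; the potential choice, particle count, and time step are illustrative assumptions, not parameters from any specific nanomechanics study.

```python
import numpy as np

# Minimal MD sketch in reduced Lennard-Jones units (epsilon = sigma = mass = 1).
# Illustrative only: 2 particles, no thermostat, no periodic boundaries.

def lj_forces(pos):
    """Pairwise Lennard-Jones forces and total potential energy."""
    n = len(pos)
    forces = np.zeros_like(pos)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = pos[i] - pos[j]
            r2 = np.dot(r_vec, r_vec)
            inv_r6 = 1.0 / r2**3
            energy += 4.0 * (inv_r6**2 - inv_r6)          # U = 4(r^-12 - r^-6)
            f = 24.0 * (2.0 * inv_r6**2 - inv_r6) / r2 * r_vec  # F = -dU/dr
            forces[i] += f
            forces[j] -= f
    return forces, energy

pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])  # initial positions (sigma)
vel = np.zeros_like(pos)
dt = 0.005  # time step in reduced units

forces, _ = lj_forces(pos)
for step in range(1000):            # velocity Verlet integration
    vel += 0.5 * dt * forces        # half-kick (mass = 1)
    pos += dt * vel                 # drift
    forces, energy = lj_forces(pos)
    vel += 0.5 * dt * forces        # half-kick

print(f"final separation: {np.linalg.norm(pos[0] - pos[1]):.3f} sigma")
```

Starting from rest at 1.5 sigma, the pair oscillates about the potential minimum at 2^(1/6) sigma, which is the deterministic atomic-scale behavior that MD codes resolve.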
Wikipedia/Nanomechanics
Surface modification is the act of modifying the surface of a material to give it physical, chemical or biological characteristics different from those originally found on the surface. The modification is usually made to solid materials, but examples exist of the modification of the surface of specific liquids. The modification can be done by different methods with a view to altering a wide range of characteristics of the surface, such as roughness, hydrophilicity, surface charge, surface energy, biocompatibility and reactivity. == Surface engineering == Surface engineering is the sub-discipline of materials science which deals with the surface of solid matter. It has applications to chemistry, mechanical engineering, and electrical engineering (particularly in relation to semiconductor manufacturing). Solids are composed of a bulk material covered by a surface. The surface which bounds the bulk material is called the surface phase; it acts as an interface to the surrounding environment. The bulk material in a solid is called the bulk phase. The surface phase of a solid interacts with the surrounding environment, and this interaction can degrade the surface phase over time. Environmental degradation of the surface phase can be caused by wear, corrosion, fatigue and creep. Surface engineering involves altering the properties of the surface phase in order to reduce this degradation over time, by making the surface robust to the environment in which it will be used. === Applications and Future of Surface Engineering === Surface engineering techniques are used in the automotive, aerospace, missile, power, electronic, biomedical, textile, petroleum, petrochemical, chemical, steel, cement, machine tool, and construction industries. Surface engineering techniques can be used to develop a wide range of functional properties, including physical, chemical, electrical, electronic, magnetic, mechanical, wear-resistant and corrosion-resistant properties at the required substrate surfaces. Almost all types of materials, including metals, ceramics, polymers, and composites, can be coated on similar or dissimilar materials. It is also possible to form coatings of newer materials (e.g., metallic glass, beta-C3N4), graded deposits, multi-component deposits, etc. In 1995, surface engineering was a £10 billion market in the United Kingdom. Coatings to protect surfaces against wear and corrosion accounted for approximately half the market. Functionalization of antimicrobial surfaces can be used for sterilization in the health industry, for self-cleaning surfaces, and for protection from biofilms. In recent years, there has been a paradigm shift in surface engineering from age-old electroplating to processes such as vapor phase deposition, diffusion, thermal spray and welding, using advanced heat sources like plasma, laser, ion, electron, microwave, solar beams, synchrotron radiation, pulsed arc, pulsed combustion, spark, friction and induction. Losses due to wear and corrosion in the US are estimated at approximately $500 billion. In the US, there are around 9,524 establishments (including automotive, aircraft, power and construction industries) that depend on engineered surfaces, with support from 23,466 industries. == Surface functionalization == Surface functionalization introduces chemical functional groups to a surface.
This way, materials with functional groups on their surfaces can be designed from substrates with standard bulk material properties. Prominent examples can be found in the semiconductor industry and in biomaterial research. === Polymer Surface Functionalization === Plasma processing technologies are successfully employed for polymer surface functionalization. == See also ==
Surface finishing
Surface science
Tribology
Surface metrology
Surface modification of biomaterials with proteins
Flame treatment
== References ==
Wikipedia/Surface_functionalisation
Pyrolytic carbon is a material similar to graphite, but with some covalent bonding between its graphene sheets as a result of imperfections in its production. Pyrolytic carbon is man-made and is thought not to be found in nature. Generally it is produced by heating a hydrocarbon nearly to its decomposition temperature and permitting the graphite to crystallize (pyrolysis). One method is to heat synthetic fibers in a vacuum, producing carbon fibers. It is used in high-temperature applications such as missile nose cones, rocket motors, heat shields, laboratory furnaces, in graphite-reinforced plastic, for coating nuclear fuel particles, and in biomedical prostheses. It was developed in the late 1950s as an extension of the work on refractory vapor deposition of metals. == Physical properties == Pyrolytic graphite samples usually have a single cleavage plane, similar to mica, because the graphene sheets crystallize in a planar order, as opposed to pyrolytic carbon, which forms microscopic randomly oriented zones. Because of this, pyrolytic graphite exhibits several unusual anisotropic properties. It is more thermally conductive along the cleavage plane than pyrolytic carbon, making it one of the best planar thermal conductors available. Pyrolytic graphite forms mosaic crystals with controlled mosaicities up to a few degrees. Pyrolytic graphite is also more diamagnetic (χ = −4×10−4) against the cleavage plane, exhibiting the greatest diamagnetism (by weight) of any room-temperature diamagnet. In comparison, pyrolytic graphite has a relative permeability of 0.9996, whereas bismuth has a relative permeability of 0.9998. == Magnetic levitation == Few materials can be made to magnetically levitate stably above the magnetic field of a permanent magnet. Although magnetic repulsion is easily achieved between any two magnets, the shape of the field causes the upper magnet to push off sideways rather than remaining supported, rendering stable levitation impossible for magnetic objects (see Earnshaw's theorem). Strongly diamagnetic materials, however, can levitate above powerful magnets. With the easy availability of rare-earth permanent magnets developed in the 1970s and 1980s, the strong diamagnetism of pyrolytic graphite makes it a convenient demonstration material for this effect. In 2012, a research group in Japan demonstrated that pyrolytic graphite can respond to laser light or sufficiently powerful natural sunlight by spinning or moving in the direction of the field gradient: the carbon's magnetic susceptibility weakens upon sufficient illumination, leading to an unbalanced magnetization of the material and movement when a specific geometry is used. It has more recently been suggested that pyrolytic carbon may be the explanation for the mysterious 'spokes' in Saturn's rings: through chemical vapor deposition, methane gas at high temperatures (~1400 K) may have been converted to pyrolytic carbon, with the abundant silicates in Saturn's B ring acting as a substrate for it to be deposited on. Since pyrolytic carbon is highly diamagnetic, silicate grains coated in pyrolytic carbon can levitate above and below the ring plane due to Saturn's equatorial magnetic field. When sunlight hits these pyrolytic carbon-coated grains, they lose electrons due to the photoelectric effect, become paramagnetic, and are pulled back to the main ring structure as they are now attracted to Saturn's equatorial magnetic field.
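The balance behind such diamagnetic levitation can be sketched quantitatively. For a diamagnetic body of volume susceptibility χ and density ρ in a vertical field B(z), levitation requires the magnetic force per unit volume to offset gravity; using the susceptibility quoted above and an assumed typical density of about 2200 kg/m³ for pyrolytic graphite (an illustrative figure, not from the text):

$$ \frac{|\chi|}{\mu_0}\, B\,\frac{dB}{dz} \geq \rho g \quad\Rightarrow\quad B\,\frac{dB}{dz} \geq \frac{\mu_0 \rho g}{|\chi|} \approx \frac{(4\pi\times 10^{-7})(2200)(9.81)}{4\times 10^{-4}} \approx 68\ \mathrm{T^2/m}, $$

a field-gradient product readily achieved within a few millimeters of the surface of modern NdFeB magnets, which is why thin graphite sheets float stably over an array of such magnets.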
The visibility of the 'spokes' depends on the angle of the sunlight hitting the rings and the angle from which the observer views them (reference: https://arxiv.org/abs/2303.07197). == Applications ==
It is used non-reinforced for missile nose cones and ablative (boiloff-cooled) rocket motors.
In fiber form, it is used to reinforce plastics and metals (see Carbon fiber and Graphite-reinforced plastic).
Pebble-bed nuclear reactors use a coating of pyrolytic carbon as a neutron moderator for the individual pebbles.
It is used to coat graphite cuvettes (tubes) in graphite furnace atomic absorption spectrometry to decrease heat stress, thus increasing cuvette lifetimes.
Pyrolytic carbon is used for several applications in electronic thermal management: thermal-interface materials, heat spreaders (sheets) and heat sinks (fins).
It is occasionally used to make tobacco pipes.
It is used to fabricate grid structures in some high-power vacuum tubes.
It is used as a monochromator for neutron and X-ray scattering studies.
Prosthetic heart valves.
Radial head prostheses.
It is also used in the automotive industry where a desired amount of friction is required between two components.
Highly oriented pyrolytic graphite (HOPG) is used as the dispersive element in HOPG spectrometers, which are used for X-ray spectrometry.
It is used in personal protective gear.
=== Biomedical applications === Because blood clots do not easily form on it, it is often advisable to line a blood-contacting prosthesis with this material in order to reduce the risk of thrombosis. For example, it finds use in artificial hearts and artificial heart valves. Blood vessel stents, by contrast, are often lined with a polymer that has heparin as a pendant group, relying on drug action to prevent clotting. This is at least partly because of pyrolytic carbon's brittleness and the large amount of permanent deformation which a stent undergoes during expansion. Pyrolytic carbon is also in medical use to coat anatomically correct orthopedic implants, a.k.a. replacement joints. In this application it is currently marketed under the name "PyroCarbon". These implants have been approved by the U.S. Food and Drug Administration for use in the hand for metacarpophalangeal (knuckle) replacements. They are produced by two companies: Tornier (BioProfile) and Ascension Orthopedics. On September 23, 2011, Integra LifeSciences acquired Ascension Orthopedics. The company's pyrolytic carbon implants have been used to treat patients with different forms of osteoarthritis. In January 2021, Integra LifeSciences sold its orthopedics company to Smith+Nephew for $240 million. The FDA has also approved PyroCarbon interphalangeal joint replacements under the Humanitarian Device Exemption. == Footnotes ==
Wikipedia/Pyrolytic_graphite
Nanoporous materials consist of a regular organic or inorganic bulk phase in which a porous structure is present. Nanoporous materials exhibit pore diameters that are most appropriately quantified using units of nanometers; the diameter of pores in nanoporous materials is thus typically 100 nanometers or smaller. Nanoporous materials include the subsets of mesoporous materials (with typical pore sizes between 2 and 50 nanometers) and microporous materials (with typical pore diameters <2 nm). Pores may be open or closed, and pore connectivity and void fraction vary considerably, as with other porous materials. Open pores are pores that connect to the surface of the material, whereas closed pores are pockets of void space within a bulk material. Open pores are useful for molecular separation techniques, adsorption, and catalysis studies. Closed pores are mainly used in thermal insulators and for structural applications. Most nanoporous materials can be classified as bulk materials or membranes. Activated carbon and zeolites are two examples of bulk nanoporous materials, while cell membranes can be thought of as nanoporous membranes. A porous medium or porous material is a material containing pores (voids); the skeletal portion of the material is often called the "matrix" or "frame", and the pores are typically filled with a fluid (liquid or gas). There are many natural nanoporous materials, but artificial materials can also be manufactured. One method of doing so is to combine polymers with different melting points, so that upon heating one polymer degrades. A nanoporous material with consistently sized pores has the property of letting only certain substances pass through, while blocking others. == Classifications == === Classification By Size === The term nanomaterials covers diverse forms of materials with various applications. According to IUPAC, porous materials are subdivided into 3 categories:
Microporous materials: 0.2–2 nm
Mesoporous materials: 2–50 nm
Macroporous materials: 50–1000 nm
These categories conflict with the classical definition of nanoporous materials as having pore diameters between 1 and 100 nm, a range that spans all the classifications listed above. However, for the sake of simplicity, scientists often use the term nanoporous material and state the associated pore diameter instead. Microporous and mesoporous materials are distinguished as separate material classes owing to the distinct applications afforded by the pore sizes in these materials. Confusingly, the term microporous is used to describe materials with smaller pore sizes than materials commonly referred to simply as nanoporous. More correctly, microporous materials are better understood as a subset of nanoporous materials, namely materials that exhibit pore diameters smaller than 2 nm. Having pore diameters on the length scale of molecules, such materials enable applications that require molecular selectivity, such as filtration and separation membranes. Mesoporous materials, referring generally to materials with average pore diameters in the range 2–50 nm, are interesting as catalyst supports and adsorbents owing to their high surface-area-to-volume ratios. Sometimes classifying by size becomes difficult, as a porous material may contain pores of various diameters; for example, microporous materials may have a few pores of 2 to 50 nm diameter due to random grain packing. These specifics must be taken into consideration when categorizing by pore size.
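As a small illustration of the IUPAC size classification above, the sketch below assigns a category from a measured pore diameter. The function name and the handling of boundary values (upper bound inclusive) are illustrative choices, not part of the IUPAC recommendation.

```python
def classify_pore(diameter_nm: float) -> str:
    """Classify a pore by diameter (nm) per the IUPAC ranges quoted above.

    Boundary handling (upper bound inclusive) is an illustrative choice.
    """
    if diameter_nm < 0.2:
        return "below the IUPAC microporous range"
    elif diameter_nm <= 2:
        return "microporous"
    elif diameter_nm <= 50:
        return "mesoporous"
    elif diameter_nm <= 1000:
        return "macroporous"
    return "beyond the macroporous range"

for d in (0.5, 1.8, 10, 120):
    print(f"{d} nm -> {classify_pore(d)}")
# 0.5 nm -> microporous, 1.8 nm -> microporous,
# 10 nm -> mesoporous, 120 nm -> macroporous
```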
=== Classification By Network Materials === In addition to classification by size, nanoporous materials can be further classified into organic and inorganic network materials. A network material is the structure that 'hosts' the pores and is where the medium (gas or liquid) interacts with the substrate. While there are plenty of inorganic nanoporous membranes, there are few organic ones due to issues with stability. ==== Organic ==== Organic nanoporous materials are polymers made from elements such as boron, carbon, nitrogen, and oxygen. These materials are usually microporous, although mesoporous/microporous structures do exist. They include covalent organic frameworks (COFs), covalent triazine frameworks, polymers of intrinsic microporosity (PIMs), hyper-cross-linked polymers (HCPs), and conjugated microporous polymers (CMPs). Each of these has a different structure and manufacturing route. In general, to create an organic nanoporous material, a monomer with more than 2 branches (i.e. covalent bonds) is dissolved in a solvent. After additional monomers are added and polymerization occurs, the solvent is removed and the remaining structure is a nanoporous material. Organic nanoporous materials can be further classified into crystalline and amorphous networks. Crystalline networks are materials that have well-defined pore sizes; the pore sizes are so well defined that simply by changing the monomer, one can obtain different pore sizes. COFs are an example of such a crystalline structure. In contrast, amorphous nanoporous materials have a distribution of pore sizes and are usually disordered; an example is PIMs. Both categories have various uses in gas sorption and catalysis reactions. ==== Inorganic ==== Inorganic nanoporous materials are porous materials based on oxide-type, carbon, binary, and pure metal materials. Examples include zeolites, nanoporous alumina, and titania nanotubes. Zeolites are crystalline hydrated tectoaluminosilicates: a combination of alkali/alkaline earth metals, alumina, and silica hydrates. These are used for ion-exchange beds and for water purification. Nanoporous alumina is a biocompatible material widely used in various dental and orthopedic implants. Titania nanotubes are also used in orthopedics, but are special in that they form a titanium oxide layer upon exposure to oxygen. Because the surface of the material is oxide-protected, it has excellent biocompatibility combined with high mechanical strength. == Applications == === Gas Storage/Sensing === Gas storage is crucial for energy, medical, and environmental applications. Nanoporous materials enable a unique method of gas storage through adsorption. When the substrate and gas interact with each other, the gas molecules can physisorb or covalently bond with the nanoporous material, known as physical storage and chemical storage, respectively. While one may store gases in the bulk phase, such as in a bottle, nanoporous materials enable higher storage density, which is attractive for energy applications. One example of this application is hydrogen storage. With the onset of climate change, there is increased interest in zero-emission vehicles, especially fuel cell electric vehicles. By storing hydrogen at high densities using porous materials, one can increase an electric car's driving range. Another use case for nanoporous materials is as a substrate for gas sensors.
For example, measuring the electrical resistivity of a porous metal can yield the exact concentration of an analyte species in gaseous form. Since the resistivity of the substrate is proportional to the surface area of the porous medium, using nanoporous materials yields higher sensitivity in detecting trace gaseous species than their bulk counterparts. This is especially useful as nanoporous materials have a higher effective surface area normalized to the top-view surface area. === Biological applications === Nanoporous materials are used in biological applications as well. Enzyme-catalyzed reactions are widely exploited in biology for metabolism and the processing of large molecules. Nanoporous materials offer the opportunity to embed enzymes in the porous substrate, which extends the working lifetime of the reactions for long-term implants. Another application is found in DNA sequencing. By coating an inorganic nanoporous membrane on an insulating material, nanopores can be utilized for single-molecule analysis: by threading DNA through these nanopores, one can read out the ionic current through the pore, which can be correlated to one of four nucleotides. == References ==
Wikipedia/Nanoporous_materials
Toxicology of carbon nanomaterials is the study of toxicity in carbon nanomaterials like fullerenes and carbon nanotubes. == Fullerenes == A review of works on fullerene toxicity by Lalwani et al. found little evidence that C60 is toxic. The toxicity of these carbon nanoparticles varies with dose, duration, type (e.g., C60, C70, M@C60, M@C82), functional groups used to water-solubilize these nanoparticles (e.g., OH, COOH), and method of administration (e.g., intravenous, intraperitoneal). The authors recommended that the pharmacology of each fullerene- or metallofullerene-based complex be assessed as a different compound. Moussa et al. (1996–97) studied the in vivo toxicity of C60 after intraperitoneal administration of large doses. No evidence of toxicity was found, and the mice tolerated a dose of 5 g/kg of body weight. Mori et al. (2006) could not find toxicity in rodents for C60 and C70 mixtures after oral administration of a dose of 2 g/kg body weight and did not observe evidence of genotoxic or mutagenic potential in vitro. Other studies could not establish the toxicity of fullerenes; on the contrary, the work of Gharbi et al. (2005) suggested that aqueous C60 suspensions failing to produce acute or subacute toxicity in rodents could also protect their livers in a dose-dependent manner against free-radical damage. In a 2012 primary study of an olive oil / C60 suspension administered to rats by intraperitoneal administration or oral gavage, a lifespan prolonged to almost double the normal lifespan of the rats was seen, and significant toxicity was not observed. An investigator for this study, Professor Moussa, generalized from its findings in a video interview and stated that pure C60 is not toxic. When considering toxicological data, care must be taken to distinguish as necessary between what are normally referred to as fullerenes (C60, C70, ...); fullerene derivatives, i.e., C60 or other fullerenes with covalently bonded chemical groups; fullerene complexes (e.g., water-solubilized with surfactants, such as C60-PVP, or host–guest complexes, such as with cyclodextrin), where the fullerene is supramolecularly bound to another molecule; C60 nanoparticles, which are extended solid-phase aggregates of C60 crystallites; and nanotubes, which are generally much larger (in terms of molecular weight and size) molecules, different in shape from the spheroidal fullerenes C60 and C70, and with different chemical and physical properties. The molecules above are all fullerenes (close-caged all-carbon molecules), but it is unreliable to extrapolate results from C60 to nanotubes or vice versa, as they range from insoluble materials in either hydrophilic or lipophilic media, to hydrophilic, lipophilic, or even amphiphilic molecules, with other varying physical and chemical properties. A quantitative structure–activity relationship (QSAR) study can analyze how close the molecules under consideration are in physical and chemical properties, which can help with such extrapolations. == Carbon nanotubes == As of 2013, the United States National Institute for Occupational Safety and Health was not aware of any reports of adverse health effects in workers using or producing carbon nanotubes or carbon nanofibers.
However, a systematic review of 54 laboratory animal studies indicated that they could cause adverse pulmonary effects, including inflammation, granulomas, and pulmonary fibrosis, of similar or greater potency when compared with other known fibrogenic materials such as silica, asbestos, and ultrafine carbon black. A 2008 study in which carbon nanotubes were introduced into the abdominal cavity of mice led the authors to suggest comparisons to "asbestos-like pathogenicity". This was not an inhalation study, though several inhalation studies have been performed; it is therefore premature to conclude that nanotubes should be considered to have a toxicological profile similar to asbestos. Conversely, and perhaps illustrative of how the various classes of molecules falling under the general term fullerene cover a wide range of properties, Sayes et al. found that in vivo inhalation of C60(OH)24 and nano-C60 in rats produced no effect, whereas quartz particles produced an inflammatory response under the same conditions. As stated above, nanotubes are quite different in chemical and physical properties from C60: molecular weight, shape, size, and physical properties (such as solubility) all differ greatly, so from a toxicological standpoint, different results for C60 and nanotubes are not suggestive of any discrepancy in the findings. A 2016 study of workers in a large-scale MWCNT manufacturing facility in Russia with relatively high occupational exposure levels found that exposure to MWCNTs caused a significant increase in several inflammatory cytokines and other biomarkers for interstitial lung disease. === Toxicity === The toxicity of carbon nanotubes has been an important question in nanotechnology. As of 2007, such research had just begun; the data are still fragmentary and subject to criticism. Preliminary results highlight the difficulties in evaluating the toxicity of this heterogeneous material. Parameters such as structure, size distribution, surface area, surface chemistry, surface charge, and agglomeration state, as well as purity of the samples, have considerable impact on the reactivity of carbon nanotubes. However, available data clearly show that, under some conditions, nanotubes can cross membrane barriers, which suggests that, if raw materials reach the organs, they can induce harmful effects such as inflammatory and fibrotic reactions. ==== Effects Characterization ==== In 2014, experts from the International Agency for Research on Cancer (IARC) assessed the carcinogenicity of CNTs, including SWCNTs and MWCNTs. No human epidemiologic or cancer data were available to the IARC Working Group at the time, so the evaluation focused on the results of in vivo animal studies assessing the carcinogenicity of SWCNTs and MWCNTs in rodents. The Working Group concluded that there was sufficient evidence for the specific MWCNT type "MWCNT-7", limited evidence for two other types of MWCNTs with dimensions similar to MWCNT-7, and inadequate evidence for SWCNTs. Therefore, it was agreed to specifically classify MWCNT-7 as possibly carcinogenic to humans (Group 2B), while the other forms of CNT, namely SWCNTs and types of MWCNTs other than MWCNT-7, were considered not classifiable as to their carcinogenicity to humans (Group 3) due to a lack of coherent evidence.
Results of rodent studies collectively show that, regardless of the process by which CNTs were synthesized and the types and amounts of metals they contained, CNTs were capable of producing inflammation, epithelioid granulomas (microscopic nodules), fibrosis, and biochemical/toxicological changes in the lungs. Comparative toxicity studies in which mice were given equal weights of test materials showed that SWCNTs were more toxic than quartz, which is considered a serious occupational health hazard when chronically inhaled. As a control, ultrafine carbon black was shown to produce minimal lung responses. Carbon nanotubes deposit in the alveolar ducts by aligning lengthwise with the airways; the nanotubes are often combined with metals. The needle-like fiber shape of CNTs is similar to that of asbestos fibers. This raises the concern that widespread use of carbon nanotubes may lead to pleural mesothelioma, a cancer of the lining of the lungs, or peritoneal mesothelioma, a cancer of the lining of the abdomen (both caused by exposure to asbestos). A recently published pilot study supports this prediction: scientists exposed the mesothelial lining of the body cavity of mice to long multiwalled carbon nanotubes and observed asbestos-like, length-dependent, pathogenic behavior that included inflammation and formation of lesions known as granulomas. The authors of the study conclude: This is of considerable importance, because research and business communities continue to invest heavily in carbon nanotubes for a wide range of products under the assumption that they are no more hazardous than graphite. Our results suggest the need for further research and great caution before introducing such products into the market if long-term harm is to be avoided. Although further research is required, the available data suggest that under certain conditions, especially those involving chronic exposure, carbon nanotubes can pose a serious risk to human health. ==== Exposure Characterization ==== Exposure scenarios are important to consider when trying to determine toxicity and the risks associated with these diverse and difficult-to-study materials. Exposure studies have been conducted over the past several years in an effort to determine where and how likely exposures will be. Since CNTs are incorporated into composite materials for their ability to strengthen materials without adding significant weight, the manufacture of CNTs and of composites or hybrids including CNTs, the subsequent processing of articles and equipment made from the composites, and end-of-life processes such as recycling or incineration all represent potential sources of exposure. The potential for exposure of the end user is less likely; however, as CNTs are incorporated into new products, more research may be needed. One study performed personal and area sampling at seven different plants, mostly involving the manufacture of MWCNTs. This study found that the work processes that prompt nanoparticle release (not necessarily just CNT release) include "spraying, CNT preparation, ultrasonic dispersion, wafer heating, and opening the water bath cover." The exposure concentrations for both personal and area sampling indicated most workers' exposure was well below the limit set by the ACGIH for carbon black. Processing composite materials presents potential for exposure during cutting, drilling, or abrasion. Two different composite types were laboratory-tested during processing under differing conditions to determine potential releases.
Samples were machined using one dry cutting process and one wet cutting process, with measurements taken at the source and in the breathing zone. The composites tested varied by method of manufacture and components: one was graphite and epoxy layered with CNTs aligned within, and the other was a woven alumina with aligned CNTs on the surface. Dry cutting of both proved to be of concern with regard to concentrations measured at the breathing zone, while wet cutting, the preferred method, controlled potential exposures during this type of processing much more effectively. Another study provided breathing zone and area sampling results from fourteen sites working with CNTs in a variety of ways, for potential exposure assessment. These sites included manufacturers of CNTs, hybrid producers/users, and secondary manufacturers in either the electronics industry or the composites industry. The highest mean exposures in breathing zone samples were found at the secondary manufacturers of electronics, then at composites and hybrid sites, while the lowest mean exposures were found at the primary manufacturers' sites. Relatively few of the samples returned results higher than the recommended exposure limit published by NIOSH. While strategies for the use of CNTs in a variety of products are developing, potentials for exposure thus far appear to be low in most occupational settings. This may change as new products and manufacturing methods or secondary processing advance; therefore, risk assessments should be integral to any planning for new applications. === Epidemiology and Risk Management === ==== Summary of Epidemiology Studies ==== Currently, there is a lack of epidemiological evidence linking exposure to CNT to human health effects. To date, there have been only a handful of published epidemiological studies that have solely examined the health effects related to exposure to CNT, while several other studies are currently underway and yet to be published. With the limited amount of human data, scientists rely more on the results of current animal toxicity studies to predict adverse health effects, as well as applying what is already known about exposures to other fibrous materials such as asbestos or fine and ultrafine particulates. This limitation of human data has led to the use of the precautionary principle, which urges workplaces to keep exposure levels to CNT as low as possibly achievable in the absence of known health-effects data. Epidemiology studies of nanomaterials thus far have considered a variety of nanomaterials; few have been specific to CNTs, and each has considered a small sample size. These studies have found some relationships between biological markers and MWCNT exposure. One cross-sectional study was conducted to determine associations of biomarkers with measured CNT exposure. While no effect on lung function due to exposure was found, the study did observe some early signs of effects on biomarkers associated with exposure to MWCNTs. Additionally, some results contradicted earlier in vitro studies, making further studies necessary to better define effects. ==== NIOSH Risk Assessment Summary ==== NIOSH has undertaken a risk assessment based on available studies to determine appropriate recommendations for exposure levels.
Their review found that, while human health effects had not been directly observed, there were animal studies showing potential for health effects that could reasonably be expected in humans upon sufficient exposure. In addition to animal studies, human cell studies were reviewed, and it was determined that harmful effects were expressed. Ultimately, the risk assessment found the most relevant data upon which to calculate the REL (recommended exposure limit) were animal studies. Corrections for inter-species differences and updates to reflect advancing technologies in sampling methods and detection capabilities were considered as part of the risk assessment. The resultant REL is several orders of magnitude smaller than those of other carbonaceous particulate matter of concern, graphite and carbon black. ==== Risk Management ==== To date, several international government agencies, as well as individual authors, have developed occupational exposure limits (OELs) to reduce the risk of any possible human health effects associated with workplace exposures to CNT. The National Institute for Occupational Safety and Health (NIOSH) conducted a risk assessment using animal and other toxicological data relevant to assessing the potential non-malignant adverse respiratory effects of CNT and proposed an OEL of 1 μg/m3 elemental carbon as a respirable mass 8-hour time-weighted average (TWA) concentration. Several individual authors have also performed similar risk assessments using animal toxicity data and have established inhalation exposure limits ranging from 2.5 to 50 μg/m3. One such risk assessment used data from two different types of exposures to work toward an OEL as part of an adaptive management approach, with the expectation that recommendations will be reevaluated as more data become available. === Safety and Exposure Prevention === Occupational exposures that could potentially allow the inhalation of CNT are of the greatest concern, especially in situations where CNT is handled in powder form, which can easily be aerosolized and inhaled. Also of concern are any high-energy processes applied to various CNT preparations, such as the mixing or sonication of CNT in liquids, as well as processes that cut or drill into CNT-based composites in downstream products. These types of high-energy processes will aerosolize CNT, which can then be inhaled. Guidance for minimizing exposure and risk related to CNT has been published by several international agencies, including documents from the British Health and Safety Executive titled "Using nanomaterials at work: Including carbon nanotubes and other bio-persistent high aspect ratio nanomaterials" and "Risk Management of Carbon Nanotubes". Safe Work Australia has also published guidance titled "Safe Handling and use of Carbon Nanotubes", which describes two approaches to managing the risks: risk management with detailed hazard analysis and exposure assessment, and risk management using control banding. The National Institute for Occupational Safety and Health has also published a document titled "Current Intelligence Bulletin 65: Occupational Exposure to Carbon Nanotubes and Nanofibers", which describes strategies for controlling workplace exposures and implementing a medical surveillance program. The Occupational Safety and Health Administration has published an "OSHA Fact Sheet: Working Safely with Nanomaterials" for use as guidance, in addition to a webpage hosting a variety of resources.
These guidance documents generally advocate instituting the principles of the hierarchy of hazard control, a system used in industry to minimize or eliminate exposure to hazards. The hazard controls in the hierarchy are, in order of decreasing effectiveness:
Elimination of a potential exposure
Substitution with a less hazardous chemical or process
Engineering controls such as ventilation systems, shielding, or enclosures
Administrative controls including training, policies, written procedures, and work schedules
Personal protective equipment
== References == == Further reading == NIOSH Current Intelligence Bulletin 65: Occupational Exposure to Carbon Nanotubes and Nanofibers
Wikipedia/Toxicology_of_carbon_nanomaterials
In materials science, the term single-layer materials or 2D materials refers to crystalline solids consisting of a single layer of atoms. These materials are promising for some applications but remain the focus of research. Single-layer materials derived from single elements generally carry the -ene suffix in their names, e.g. graphene. Single-layer materials that are compounds of two or more elements have -ane or -ide suffixes. 2D materials can generally be categorized as either 2D allotropes of various elements or as compounds (consisting of two or more covalently bonded elements). It is predicted that there are hundreds of stable single-layer materials. The atomic structure and calculated basic properties of these and many other potentially synthesizable single-layer materials can be found in computational databases. 2D materials can be produced using two main approaches: top-down exfoliation and bottom-up synthesis. The exfoliation methods include sonication, mechanical, hydrothermal, electrochemical, laser-assisted, and microwave-assisted exfoliation. == Single element materials == === C: graphene and graphyne === Graphene Graphene is a crystalline allotrope of carbon in the form of a nearly transparent (to visible light) one-atom-thick sheet. It is hundreds of times stronger than most steels by weight. It has the highest known thermal and electrical conductivity, displaying current densities 1,000,000 times that of copper. It was first produced in 2004. Andre Geim and Konstantin Novoselov won the 2010 Nobel Prize in Physics "for groundbreaking experiments regarding the two-dimensional material graphene". They first produced it by lifting graphene flakes from bulk graphite with adhesive tape and then transferring them onto a silicon wafer. Graphyne Graphyne is another 2-dimensional carbon allotrope whose structure is similar to graphene's. It can be seen as a lattice of benzene rings connected by acetylene bonds. Depending on the content of the acetylene groups, graphyne can be considered a mixed hybridization, spn, where 1 < n < 2, compared to graphene (pure sp2) and diamond (pure sp3). First-principles calculations using phonon dispersion curves and ab initio finite-temperature, quantum mechanical molecular dynamics simulations showed graphyne and its boron nitride analogues to be stable. The existence of graphyne was conjectured before 1960. In 2010, graphdiyne (graphyne with diacetylene groups) was synthesized on copper substrates. In 2022, a team claimed to have successfully used alkyne metathesis to synthesize graphyne, though this claim was disputed; after an investigation, the team's paper was retracted by the publication, citing fabricated data. Later in 2022, synthesis of multi-layered γ-graphyne was successfully performed through the polymerization of 1,3,5-tribromo-2,4,6-triethynylbenzene under Sonogashira coupling conditions. Graphyne has recently been claimed to be a competitor to graphene due to its potential for direction-dependent Dirac cones. === B: borophene === Borophene is a crystalline atomic monolayer of boron, also known as boron sheet. First predicted by theory in the mid-1990s in a freestanding state, and then demonstrated as distinct monoatomic layers on substrates by Zhang et al., different borophene structures were experimentally confirmed in 2015. === Ge: germanene === Germanene is a two-dimensional allotrope of germanium with a buckled honeycomb structure. Experimentally synthesized germanene exhibits a honeycomb structure.
This honeycomb structure consists of two hexagonal sub-lattices that are vertically displaced by 0.2 Å from each other. === Si: silicene === Silicene is a two-dimensional allotrope of silicon, with a hexagonal honeycomb structure similar to that of graphene. Its growth is scaffolded by a pervasive Si/Ag(111) surface alloy beneath the two-dimensional layer. === Sn: stanene === Stanene is a predicted topological insulator that may display dissipationless currents at its edges near room temperature. It is composed of tin atoms arranged in a single layer, in a manner similar to graphene. Its buckled structure leads to high reactivity toward common air pollutants such as NOx and COx, and it is able to trap and dissociate them at low temperature. A structure determination of stanene using low-energy electron diffraction has shown ultra-flat stanene on a Cu(111) surface. === Pb: plumbene === Plumbene is a two-dimensional allotrope of lead, with a hexagonal honeycomb structure similar to that of graphene. === P: phosphorene === Phosphorene is a 2-dimensional, crystalline allotrope of phosphorus. Its mono-atomic hexagonal structure makes it conceptually similar to graphene. However, phosphorene has substantially different electronic properties; in particular it possesses a nonzero band gap while displaying high electron mobility. This property potentially makes it a better semiconductor than graphene. The synthesis of phosphorene mainly relies on micromechanical cleavage or liquid-phase exfoliation. The former has a low yield, while the latter produces free-standing nanosheets in solvent rather than on a solid support. Bottom-up approaches like chemical vapor deposition (CVD) remain unexplored because of phosphorene's high reactivity. Therefore, at present, the most effective method for large-area fabrication of thin films of phosphorene consists of wet assembly techniques like Langmuir-Blodgett, involving assembly followed by deposition of nanosheets on solid supports. === Sb: antimonene === Antimonene is a two-dimensional allotrope of antimony, with its atoms arranged in a buckled honeycomb lattice. Theoretical calculations predicted that antimonene would be a stable semiconductor in ambient conditions with suitable performance for (opto)electronics. Antimonene was first isolated in 2016 by micromechanical exfoliation and was found to be very stable under ambient conditions. Its properties also make it a good candidate for biomedical and energy applications. In a study made in 2018, antimonene-modified screen-printed electrodes (SPEs) were subjected to a galvanostatic charge/discharge test using a two-electrode approach to characterize their supercapacitive properties. The best configuration observed, which contained 36 nanograms of antimonene in the SPE, showed a specific capacitance of 1578 F g−1 at a current of 14 A g−1. Over 10,000 of these galvanostatic cycles, the capacitance retention dropped initially to 65% after the first 800 cycles, but then remained between 65% and 63% for the remaining 9,200 cycles. The 36 ng antimonene/SPE system also showed an energy density of 20 mW h kg−1 and a power density of 4.8 kW kg−1. These supercapacitive properties indicate that antimonene is a promising electrode material for supercapacitor systems.
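Figures like those above follow from the standard galvanostatic charge/discharge relations, sketched here for reference, where I is the discharge current, Δt the discharge time, m the mass of active material, and ΔV the potential window. The relations themselves are standard, but the exact evaluation protocol of the cited study is not given in the text:

$$ C_{sp} = \frac{I\,\Delta t}{m\,\Delta V}, \qquad E = \frac{1}{2}\,C_{sp}\,(\Delta V)^{2}, \qquad P = \frac{E}{\Delta t}. $$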
A more recent study concerning antimonene-modified SPEs shows the inherent ability of antimonene layers to form electrochemically passivated layers that facilitate electroanalytical measurements in oxygenated environments, in which the presence of dissolved oxygen normally hinders the analytical procedure. The same study also describes the in-situ production of antimonene oxide/PEDOT:PSS nanocomposites as electrocatalytic platforms for the determination of nitroaromatic compounds. === Bi: bismuthene === Bismuthene, the two-dimensional (2D) allotrope of bismuth, was predicted to be a topological insulator. In 2015 it was predicted that bismuthene retains its topological phase when grown on silicon carbide. The prediction was successfully realized and synthesized in 2016. At first glance the system is similar to graphene, as the Bi atoms arrange in a honeycomb lattice. However, the bandgap is as large as 800 meV due to the large spin–orbit interaction (coupling) of the Bi atoms and their interaction with the substrate. Thus, room-temperature applications of the quantum spin Hall effect come into reach. Bismuthene has been reported to be the 2D topological insulator with the largest nontrivial bandgap in its natural state. Top-down exfoliation of bismuthene has been reported in various instances, with recent works promoting the implementation of bismuthene in the field of electrochemical sensing. Emdadul et al. predicted the mechanical strength and phonon thermal conductivity of monolayer β-bismuthene through atomic-scale analysis. The obtained room-temperature (300 K) fracture strength is ~4.21 N/m along the armchair direction and ~4.22 N/m along the zigzag direction. At 300 K, its Young's moduli are reported to be ~26.1 N/m and ~25.5 N/m along the armchair and zigzag directions, respectively. In addition, the predicted phonon thermal conductivity of ~1.3 W/m∙K at 300 K is considerably lower than that of other analogous 2D honeycombs, making it a promising material for thermoelectric operations. === Au: goldene === On 16 April 2024, scientists from Linköping University in Sweden reported that they had produced goldene, a single layer of gold atoms 100 nm wide. Lars Hultman, a materials scientist on the team behind the new research, is quoted as saying "we submit that goldene is the first free-standing 2D metal, to the best of our knowledge", meaning that it is not attached to any other material, unlike plumbene and stanene. Researchers from New York University Abu Dhabi (NYUAD) had previously reported synthesizing goldene in 2022; however, various other scientists have contended that the NYUAD team failed to prove they made a single-layer sheet of gold, as opposed to a multi-layer sheet. Goldene is expected to be used primarily for its optical properties, with applications such as sensing or catalysis. === Metals === Single and double atom layers of platinum in a two-dimensional film geometry have been demonstrated. These atomically thin platinum films are epitaxially grown on graphene, which imposes a compressive strain that modifies the surface chemistry of the platinum, while also allowing charge transfer through the graphene. Single atom layers of palladium with thickness down to 2.6 Å, and of rhodium with thickness less than 4 Å, have been synthesized and characterized with atomic force microscopy and transmission electron microscopy. A 2D titanium structure formed by additive manufacturing (laser powder bed fusion) achieved greater strength than any known material of comparable type (50% greater than magnesium alloy WE54).
This titanium structure was arranged in a tubular lattice with a thin band running inside, merging two complementary lattice structures; this halved the stress at the weakest points in the structure. === 2D supracrystals === The supracrystals of 2D materials have been proposed and theoretically simulated. These monolayer crystals are built of supra-atomic periodic structures in which the atoms at the nodes of the lattice are replaced by symmetric complexes. For example, in the hexagonal structure of graphene, patterns of four or six carbon atoms would replace single atoms as the hexagonally arranged repeating node of the unit cell. == 2D alloys == Two-dimensional alloys (or surface alloys) are a single atomic layer of alloy that is incommensurate with the underlying substrate. One example is the 2D ordered alloys of Pb with Sn and with Bi. Surface alloys have been found to scaffold two-dimensional layers, as in the case of silicene. == Compounds == Examples include boron nitride nanosheets, titanate nanosheets, borocarbonitrides, MXenes, 2D silica, and niobium bromide and niobium chloride (Nb3X8). === Transition metal dichalcogenide monolayers === The most commonly studied two-dimensional transition metal dichalcogenide (TMD) is monolayer molybdenum disulfide (MoS2). Several phases are known, notably the 1T and 2H phases. The naming convention reflects the structure: the 1T phase has one "sheet" (an S-Mo-S layer) per unit cell in a trigonal crystal system, while the 2H phase has two sheets per unit cell in a hexagonal crystal system. The 2H phase is more common, as the 1T phase is metastable and spontaneously reverts to 2H unless stabilized by additional electron donors (typically surface S vacancies). The 2H phase of MoS2 (Pearson symbol hP6; Strukturbericht designation C7) has space group P63/mmc. Each layer contains Mo surrounded by S in trigonal prismatic coordination. Conversely, the 1T phase (Pearson symbol hP3) has space group P-3m1 and octahedrally coordinated Mo; with the 1T unit cell containing only one layer, its c parameter is slightly less than half the length of that of the 2H unit cell (5.95 Å and 12.30 Å, respectively). The different crystal structures of the two phases also result in differences in their electronic band structure. The d-orbitals of 2H-MoS2 are split into three bands: dz2, dx2−y2,xy, and dxz,yz. Of these, only the dz2 is filled; this, combined with the splitting, results in a semiconducting material with a bandgap of 1.9 eV. 1T-MoS2, on the other hand, has partially filled d-orbitals, which give it a metallic character. Because the structure consists of in-plane covalent bonds and inter-layer van der Waals interactions, the electronic properties of monolayer TMDs are highly anisotropic. For example, the conductivity of MoS2 in the direction parallel to the planar layer (0.1–1 ohm−1 cm−1) is ~2200 times larger than the conductivity perpendicular to the layers. There are also differences between the properties of a monolayer compared to the bulk material: the Hall mobility at room temperature is drastically lower for monolayer 2H MoS2 (0.1–10 cm2 V−1 s−1) than for bulk MoS2 (100–500 cm2 V−1 s−1). This difference arises primarily from charge traps between the monolayer and the substrate it is deposited on. MoS2 has important applications in (electro)catalysis. As with other two-dimensional materials, properties can be highly geometry-dependent; the surface of MoS2 is catalytically inactive, but the edges can act as active sites for catalyzing reactions.
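As a quick consistency check on the structural and transport figures above: the 1T c parameter should come out just under half the 2H value (one layer per cell versus two), and the quoted ~2200× anisotropy fixes the out-of-plane conductivity once an in-plane value is chosen. A trivial sketch using only numbers from the text:

```python
# Sanity checks on the MoS2 figures quoted above.
c_1t, c_2h = 5.95, 12.30              # unit-cell c parameters, in angstroms
print(f"c(1T) / (c(2H)/2) = {c_1t / (c_2h / 2):.3f}")  # ~0.967, just under half

sigma_in_plane = 0.1                  # ohm^-1 cm^-1, low end of quoted range
print(f"out-of-plane ~ {sigma_in_plane / 2200:.1e} ohm^-1 cm^-1")
```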
Because only the edges are catalytically active, device engineering and fabrication may involve considerations for maximizing catalytic surface area, for example by using small nanoparticles rather than large sheets, or by depositing the sheets vertically rather than horizontally. Catalytic efficiency also depends strongly on the phase: the aforementioned electronic properties of 2H MoS2 make it a poor candidate for catalysis applications, but these issues can be circumvented through a transition to the metallic (1T) phase. The 1T phase has more suitable properties, reaching a current density of 10 mA/cm2 at an overpotential of −187 mV relative to RHE, with a Tafel slope of 43 mV/decade (compared to 94 mV/decade for the 2H phase). === Graphane === While graphene has a hexagonal honeycomb lattice structure with alternating double bonds emerging from its sp2-bonded carbons, graphane, which maintains the hexagonal structure, is the fully hydrogenated version of graphene, with every sp3-hybridized carbon bonded to a hydrogen (chemical formula (CH)n). Furthermore, while graphene is planar due to its double-bonded nature, graphane is puckered, with the hexagons adopting different out-of-plane structural conformers such as the chair or boat, allowing the ideal 109.5° angles that reduce ring strain, in direct analogy to the conformers of cyclohexane. Graphane was first theorized in 2003, was shown to be stable using first-principles energy calculations in 2007, and was first experimentally synthesized in 2009. There are various experimental routes available for making graphane, including the top-down approaches of reduction of graphite in solution or hydrogenation of graphite using plasma/hydrogen gas, as well as the bottom-up approach of chemical vapor deposition. Graphane is an insulator, with a predicted band gap of 3.5 eV; however, partially hydrogenated graphene is a semiconductor, with the band gap controlled by the degree of hydrogenation. === Germanane === Germanane is a single-layer crystal composed of germanium with one hydrogen bonded in the z-direction for each atom. Germanane's structure is similar to that of graphane; bulk germanium does not adopt this structure. Germanane is produced in a two-step route starting with calcium germanide. From this material, the calcium (Ca) is removed by de-intercalation with HCl to give a layered solid with the empirical formula GeH. The Ca sites in Zintl-phase CaGe2 interchange with the hydrogen atoms in the HCl solution, producing GeH and CaCl2. === SLSiN === SLSiN (an acronym for single-layer silicon nitride), a novel 2D material introduced as the first two-dimensional member of the Si3N4 family, was first discovered computationally in 2020 via density-functional-theory simulations. This material is inherently 2D, an insulator with a band gap of about 4 eV, and stable both thermodynamically and in terms of lattice dynamics. == Combined surface alloying == Often single-layer materials, specifically elemental allotropes, are connected to the supporting substrate via surface alloys. This phenomenon has now been demonstrated for silicene by a combination of different measurement techniques; the alloy is difficult to detect with any single technique, which is why it went unrecognized for a long time. Such scaffolding surface alloys beneath two-dimensional materials can therefore also be expected below other two-dimensional materials, significantly influencing the properties of the two-dimensional layer.
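The Tafel slopes quoted above for the two MoS2 phases can be read as the extra overpotential required per decade of current density, η = a + b·log10(j). A minimal sketch of that comparison; the slopes are taken from the text, everything else is generic:

```python
# Extra overpotential needed to multiply the current density by a factor,
# given a Tafel slope b in mV per decade: delta_eta = b * log10(factor).
import math

def extra_overpotential_mv(tafel_slope_mv_per_dec, current_ratio):
    return tafel_slope_mv_per_dec * math.log10(current_ratio)

for phase, slope in [("1T", 43.0), ("2H", 94.0)]:
    print(f"{phase}-MoS2: +{extra_overpotential_mv(slope, 10):.0f} mV per 10x "
          f"current, +{extra_overpotential_mv(slope, 100):.0f} mV per 100x")
```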
During growth, such a surface alloy acts as both foundation and scaffold for the two-dimensional layer, paving the way for its formation. == Organic == Ni3(HITP)2 is an organic, crystalline, structurally tunable electrical conductor with a high surface area. HITP is an organic chemical (2,3,6,7,10,11-hexaaminotriphenylene). It shares graphene's hexagonal honeycomb structure. Multiple layers naturally form perfectly aligned stacks, with identical 2-nm openings at the centers of the hexagons. Room-temperature electrical conductivity is ~40 S cm−1, comparable to that of bulk graphite and among the highest for any conducting metal-organic framework (MOF). The temperature dependence of its conductivity is linear at temperatures between 100 K and 500 K, suggesting an unusual charge transport mechanism that had not previously been observed in organic semiconductors. The material was claimed to be the first of a group that could be formed by switching the metals and/or organic compounds. The material can be isolated as a powder or a film with conductivity values of 2 and 40 S cm−1, respectively. == Polymer == Using melamine (a carbon and nitrogen ring structure) as a monomer, researchers created 2DPA-1, a two-dimensional polymer sheet held together by hydrogen bonds. The sheet forms spontaneously in solution, allowing thin films to be spin-coated. The polymer has a yield strength twice that of steel, and it resists six times more deformation force than bulletproof glass. It is impermeable to gases and liquids. == Combinations == Single layers of 2D materials can be combined into layered assemblies. For example, bilayer graphene is a material consisting of two layers of graphene. One of the first reports of bilayer graphene was in the seminal 2004 Science paper by Geim and colleagues, in which they described devices "which contained just one, two, or three atomic layers". Layered combinations of different 2D materials are generally called van der Waals heterostructures. Twistronics is the study of how the angle (the twist) between layers of two-dimensional materials can change their electrical properties. == Characterization == Microscopy techniques such as transmission electron microscopy, 3D electron diffraction, scanning probe microscopy, scanning tunneling microscopy, and atomic force microscopy are used to characterize the thickness and size of 2D materials. Electrical properties and structural properties such as composition and defects are characterized by Raman spectroscopy, X-ray diffraction, and X-ray photoelectron spectroscopy. === Mechanical characterization === The mechanical characterization of 2D materials is difficult due to the ambient reactivity and substrate constraints present in many 2D materials. To this end, many mechanical properties are calculated using molecular dynamics simulations or molecular mechanics simulations. Experimental mechanical characterization is possible for 2D materials which can survive the conditions of the experimental setup and which can be deposited on suitable substrates or exist in a free-standing form. Many 2D materials also exhibit out-of-plane deformation, which further complicates measurements. Nanoindentation testing is commonly used to experimentally measure the elastic modulus, hardness, and fracture strength of 2D materials. From these directly measured values, models exist which allow the estimation of fracture toughness, work-hardening exponent, residual stress, and yield strength.
These experiments are run using dedicated nanoindentation equipment or an atomic force microscope (AFM). Nanoindentation experiments are generally run either with the 2D material as a linear strip clamped on both ends and indented by a wedge, or with the 2D material as a circular membrane clamped around its circumference and indented by a curved tip in the center. The strip geometry is difficult to prepare but allows for easier analysis due to the resulting linear stress fields. The circular drum-like geometry is more commonly used and can be easily prepared by exfoliating samples onto a patterned substrate. The stress applied to the film in the clamping process is referred to as the residual stress. In the case of very thin layers of 2D materials, bending stress is generally ignored in indentation measurements, becoming relevant only in multilayer samples. Elastic modulus and residual stress values can be extracted by determining the linear and cubic portions of the experimental force-displacement curve. The fracture stress of the 2D sheet is extracted from the applied stress at failure of the sample. AFM tip size was found to have little effect on elastic property measurement, but the breaking force was found to have a strong tip-size dependence due to stress concentration at the apex of the tip. Using these techniques, the elastic modulus and yield strength of graphene were found to be 342 N/m and 55 N/m, respectively. Poisson's ratio measurements in 2D materials are generally straightforward. To get a value, a 2D sheet is placed under stress and the displacement responses are measured, or an MD calculation is run. The unique structures found in 2D materials have been found to result in auxetic behavior in phosphorene and graphene and a Poisson's ratio of zero in triangular lattice borophene. The shear modulus of graphene has been extracted by measuring a resonance frequency shift in a double-paddle oscillator experiment as well as with MD simulations. The fracture toughness of 2D materials in Mode I (KIC) has been measured directly by stretching pre-cracked layers and monitoring crack propagation in real time. MD simulations as well as molecular mechanics simulations have also been used to calculate fracture toughness in Mode I. In anisotropic materials, such as phosphorene, crack propagation was found to happen preferentially along certain directions. Most 2D materials were found to undergo brittle fracture. == Applications == The major expectation held amongst researchers is that, given their exceptional properties, 2D materials will replace conventional semiconductors to deliver a new generation of electronics. === Biological applications === Research on 2D nanomaterials is still in its infancy, with the majority of research focusing on elucidating their unique material characteristics and few reports focusing on biomedical applications of 2D nanomaterials. Nevertheless, recent rapid advances in 2D nanomaterials have raised important yet exciting questions about their interactions with biological moieties. 2D nanoparticles such as carbon-based 2D materials, silicate clays, transition metal dichalcogenides (TMDs), and transition metal oxides (TMOs) provide enhanced physical, chemical, and biological functionality owing to their uniform shapes, high surface-to-volume ratios, and surface charge. Two-dimensional (2D) nanomaterials are ultrathin nanomaterials with a high degree of anisotropy and chemical functionality.
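Returning to the membrane nanoindentation analysis described above: for the clamped circular geometry, the load-deflection curve is commonly modeled as F = σ0πδ + E2D q³δ³/a², so fitting the linear and cubic portions yields the residual (pre-)tension and the 2D elastic modulus. A minimal sketch on synthetic data; the radius, noise level, and "true" values are all illustrative assumptions:

```python
# Illustrative linear-plus-cubic fit of a synthetic membrane nanoindentation
# curve, F = A*d + B*d**3: A encodes the pretension, B the 2D modulus.
import numpy as np

a = 0.5e-6                          # membrane radius (m), assumed
q = 1.0                             # geometry factor, ~1 for small Poisson ratio
sigma0_true, e2d_true = 0.1, 340.0  # N/m: graphene-like test values

d = np.linspace(1e-9, 80e-9, 40)    # indentation depths (m)
F = sigma0_true * np.pi * d + e2d_true * q**3 * d**3 / a**2
F += np.random.default_rng(0).normal(0.0, 1e-10, d.size)  # synthetic noise

X = np.column_stack([d, d**3])      # least squares in the basis {d, d^3}
(A, B), *_ = np.linalg.lstsq(X, F, rcond=None)
print(f"pretension ~ {A / np.pi:.3f} N/m, 2D modulus ~ {B * a**2 / q**3:.0f} N/m")
```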
2D nanomaterials are highly diverse in terms of their mechanical, chemical, and optical properties, as well as in size, shape, biocompatibility, and degradability. These diverse properties make 2D nanomaterials suitable for a wide range of applications, including drug delivery, imaging, tissue engineering, biosensors, and gas sensors, among others. However, their low-dimensional nanostructure gives them some common characteristics. For example, 2D nanomaterials are the thinnest materials known, which means that they also possess the highest specific surface areas of all known materials. This characteristic makes these materials invaluable for applications requiring high levels of surface interaction on a small scale. As a result, 2D nanomaterials are being explored for use in drug delivery systems, where they can adsorb large numbers of drug molecules and enable superior control over release kinetics. Additionally, their exceptional surface-area-to-volume ratios and typically high modulus values make them useful for improving the mechanical properties of biomedical nanocomposites and nanocomposite hydrogels, even at low concentrations. Their extreme thinness has been instrumental for breakthroughs in biosensing and gene sequencing. Moreover, the thinness of these materials allows them to respond rapidly to external signals such as light, which has led to utility in optical therapies of all kinds, including imaging applications, photothermal therapy (PTT), and photodynamic therapy (PDT). Despite the rapid pace of development in the field of 2D nanomaterials, these materials must be carefully evaluated for biocompatibility in order to be relevant for biomedical applications. The newness of this class of materials means that even relatively well-established 2D materials like graphene are poorly understood in terms of their physiological interactions with living tissues. Additionally, the complexities of variable particle size and shape, impurities from manufacturing, and protein and immune interactions have resulted in a patchwork of knowledge on the biocompatibility of these materials. == See also == Monolayer Two-dimensional semiconductor Transition metal dichalcogenide monolayers == References == == External links == "What Are 2D Materials, and Why Do They Interest Scientists?" in Columbia News (March 6, 2024) "Twenty years of 2D materials" in Nature Physics (January 16, 2024) == Additional reading == Xu, Yang; Cheng, Cheng; Du, Sichao; Yang, Jianyi; Yu, Bin; Luo, Jack; Yin, Wenyan; Li, Erping; Dong, Shurong; Ye, Peide; Duan, Xiangfeng (2016). "Contacts between Two- and Three-Dimensional Materials: Ohmic, Schottky, and p–n Heterojunctions". ACS Nano. 10 (5): 4895–4919. doi:10.1021/acsnano.6b01842. PMID 27132492. Briggs, Natalie; Subramanian, Shruti; Lin, Zhong; Li, Xufan; Zhang, Xiaotian; Zhang, Kehao; Xiao, Kai; Geohegan, David; Wallace, Robert; Chen, Long-Qing; Terrones, Mauricio; Ebrahimi, Aida; Das, Saptarshi; Redwing, Joan; Hinkle, Christopher; Momeni, Kasra; van Duin, Adri; Crespi, Vin; Kar, Swastik; Robinson, Joshua A. (2019). "A roadmap for electronic grade 2D materials". 2D Materials. 6 (2): 022001. Bibcode:2019TDM.....6b2001B. doi:10.1088/2053-1583/aaf836. OSTI 1503991. S2CID 188118830. Shahzad, F.; Alhabeb, M.; Hatter, C. B.; Anasori, B.; Man Hong, S.; Koo, C. M.; Gogotsi, Y. (2016). "Electromagnetic interference shielding with 2D transition metal carbides (MXenes)". Science. 353 (6304): 1137–1140. Bibcode:2016Sci...353.1137S. doi:10.1126/science.aag2421. PMID 27609888.
"Graphene Uses & Applications". Graphenea. Retrieved 2014-04-13. cao, yameng; Robson, Alexander J.; Alharbi, Abdullah; Roberts, Jonathan; Woodhead, Christopher Stephen; Noori, Yasir Jamal; Gavito, Ramon Bernardo; Shahrjerdi, Davood; Roedig, Utz (2017). "Optical identification using imperfections in 2D materials". 2D Materials. 4 (4): 045021. arXiv:1706.07949. Bibcode:2017TDM.....4d5021C. doi:10.1088/2053-1583/aa8b4d. S2CID 35147364. Kolesnichenko, Pavel; Zhang, Qianhui; Zheng, Changxi; Fuhrer, Michael; Davis, Jeffrey (2021). "Multidimensional analysis of excitonic spectra of monolayers of tungsten disulphide: toward computer-aided identification of structural and environmental perturbations of 2D materials". Machine Learning: Science and Technology. 2 (2): 025021. arXiv:2003.01904. doi:10.1088/2632-2153/abd87c.
Wikipedia/2D_materials
A ceramic is any of the various hard, brittle, heat-resistant, and corrosion-resistant materials made by shaping and then firing an inorganic, nonmetallic material, such as clay, at a high temperature. Common examples are earthenware, porcelain, and brick. The earliest ceramics made by humans were fired clay bricks used for building house walls and other structures. Other pottery objects such as pots, vessels, vases and figurines were made from clay, either by itself or mixed with other materials like silica, hardened by sintering in fire. Later, ceramics were glazed and fired to create smooth, colored surfaces, decreasing porosity through the use of glassy, amorphous ceramic coatings on top of the crystalline ceramic substrates. Ceramics now include domestic, industrial, and building products, as well as a wide range of materials developed for use in advanced ceramic engineering, such as semiconductors. The word ceramic comes from the Ancient Greek word κεραμικός (keramikós), meaning "of or for pottery" (from κέραμος (kéramos) 'potter's clay, tile, pottery'). The earliest known mention of the root ceram- is the Mycenaean Greek ke-ra-me-we, workers of ceramic, written in Linear B syllabic script. The word ceramic can be used as an adjective to describe a material, product, or process, or it may be used as a noun, either singular or, more commonly, as the plural noun ceramics. == Materials == A ceramic material is an inorganic, non-metallic material, typically a metal oxide, nitride, or carbide. Some elements, such as carbon or silicon, may be considered ceramics. Ceramic materials are brittle, hard, strong in compression, and weak in shearing and tension. They withstand the chemical erosion that occurs in other materials subjected to acidic or caustic environments. Ceramics generally can withstand very high temperatures, ranging from 1,000 °C to 1,600 °C (1,800 °F to 3,000 °F). The crystallinity of ceramic materials varies widely. Most often, fired ceramics are either vitrified or semi-vitrified, as is the case with earthenware, stoneware, and porcelain. Varying crystallinity and electron composition in the ionic and covalent bonds cause most ceramic materials to be good thermal and electrical insulators (researched in ceramic engineering). With such a large range of possible options for the composition/structure of a ceramic (nearly all of the elements, nearly all types of bonding, and all levels of crystallinity), the breadth of the subject is vast, and identifiable attributes (hardness, toughness, electrical conductivity) are difficult to specify for the group as a whole. General properties such as high melting temperature, high hardness, poor conductivity, high moduli of elasticity, chemical resistance, and low ductility are the norm, with known exceptions to each of these rules (piezoelectric ceramics, low-glass-transition-temperature ceramics, superconductive ceramics). Composites such as fiberglass and carbon fiber, while containing ceramic materials, are not considered to be part of the ceramic family. Highly oriented crystalline ceramic materials are not amenable to a great range of processing. Methods for dealing with them tend to fall into one of two categories: either making the ceramic in the desired shape by reaction in situ, or "forming" powders into the desired shape and then sintering to form a solid body.
Ceramic forming techniques include shaping by hand (sometimes including a rotation process called "throwing"), slip casting, tape casting (used for making very thin ceramic capacitors), injection molding, dry pressing, and other variations. Many ceramics experts do not consider materials with an amorphous (noncrystalline) character (i.e., glass) to be ceramics, even though glassmaking involves several steps of the ceramic process and its mechanical properties are similar to those of ceramic materials. However, heat treatments can convert glass into a semi-crystalline material known as glass-ceramic. Traditional ceramic raw materials include clay minerals such as kaolinite, whereas more recent materials include aluminium oxide, more commonly known as alumina. Modern ceramic materials, which are classified as advanced ceramics, include silicon carbide and tungsten carbide. Both are valued for their abrasion resistance and are therefore used in applications such as the wear plates of crushing equipment in mining operations. Advanced ceramics are also used in the medical, electrical, electronics, and armor industries. == History == Human beings appear to have been making their own ceramics for at least 26,000 years, subjecting clay and silica to intense heat to fuse and form ceramic materials. The earliest found so far were in southern central Europe and were sculpted figures, not dishes. The earliest known pottery was made by mixing animal products with clay and firing it at up to 800 °C (1,500 °F). While pottery fragments have been found up to 19,000 years old, it was not until about 10,000 years later that regular pottery became common. An early people that spread across much of Europe is named after its use of pottery: the Corded Ware culture. These early Indo-European peoples decorated their pottery by wrapping it with rope while it was still wet. When the ceramics were fired, the rope burned off but left a decorative pattern of complex grooves on the surface. The invention of the wheel eventually led to the production of smoother, more even pottery using the wheel-forming (throwing) technique on the potter's wheel. Early ceramics were porous, absorbing water easily. Pottery became useful for more items with the discovery of glazing techniques, which involved coating pottery with silica, bone ash, or other materials that could melt and reform into a glassy surface, making a vessel less pervious to water. === Archaeology === Ceramic artifacts have an important role in archaeology for understanding the culture, technology, and behavior of peoples of the past. They are among the most common artifacts to be found at an archaeological site, generally in the form of small fragments of broken pottery called sherds. The processing of collected sherds generally follows two main types of analysis: technical and traditional. The traditional analysis involves sorting ceramic artifacts, sherds, and larger fragments into specific types based on style, composition, manufacturing, and morphology. By creating these typologies, it is possible to distinguish between different cultural styles, the purpose of the ceramic, and the technological state of the people, among other conclusions. In addition, by looking at stylistic changes in ceramics over time, it is possible to separate (seriate) the ceramics into distinct diagnostic groups (assemblages). A comparison of ceramic artifacts with known dated assemblages allows for a chronological assignment of these pieces.
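The assemblage comparison just described can be caricatured computationally with the Brainerd-Robinson coefficient, a standard archaeological similarity measure (200 minus the summed percentage differences across pottery types). In the sketch below, a hypothetical undated assemblage is assigned to its closest dated reference; all data are invented for illustration.

```python
# Toy frequency-seriation step: match an undated sherd assemblage to the
# most similar dated reference assemblage. All data here are invented.

def brainerd_robinson(p, q):
    """Similarity of two type-frequency profiles given as percentages."""
    return 200 - sum(abs(a - b) for a, b in zip(p, q))

reference = {                        # % of three hypothetical pottery types
    "phase I (900-800 BC)":   [70, 25, 5],
    "phase II (800-700 BC)":  [40, 45, 15],
    "phase III (700-600 BC)": [10, 50, 40],
}
undated = [35, 48, 17]

best = max(reference, key=lambda k: brainerd_robinson(reference[k], undated))
print(f"closest dated assemblage: {best}")   # -> phase II (800-700 BC)
```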
The technical approach to ceramic analysis involves a finer examination of the composition of ceramic artifacts and sherds to determine the source of the material and, through this, the possible manufacturing site. Key criteria are the composition of the clay and the temper used in the manufacture of the article under study: the temper is a material added to the clay during the initial production stage and is used to aid the subsequent drying process. Types of temper include shell pieces, granite fragments, and ground sherd pieces called 'grog'. Temper is usually identified by microscopic examination of the tempered material. Clay identification is determined by a process of refiring the ceramic and assigning a color to it using Munsell Soil Color notation. By estimating both the clay and temper compositions and locating a region where both are known to occur, an assignment of the material source can be made. Based on the source assignment of the artifact, further investigations can be made into the site of manufacture. == Properties == The physical properties of any ceramic substance are a direct result of its crystalline structure and chemical composition. Solid-state chemistry reveals the fundamental connection between microstructure and properties: localized density variations, grain size distribution, type of porosity, and second-phase content can all be correlated with ceramic properties such as mechanical strength σ (via the Hall-Petch equation; see below), hardness, toughness, dielectric constant, and the optical properties exhibited by transparent materials. Ceramography is the art and science of preparation, examination, and evaluation of ceramic microstructures. Evaluation and characterization of ceramic microstructures are often implemented on spatial scales similar to those commonly used in the emerging field of nanotechnology: from nanometers to tens of micrometers (µm). This is typically somewhere between the minimum wavelength of visible light and the resolution limit of the naked eye. The microstructure includes most grains, secondary phases, grain boundaries, pores, micro-cracks, structural defects, and hardness microindentations. Most bulk mechanical, optical, thermal, electrical, and magnetic properties are significantly affected by the observed microstructure. The fabrication method and process conditions are generally indicated by the microstructure. The root cause of many ceramic failures is evident in the cleaved and polished microstructure. Physical properties which constitute the field of materials science and engineering include the following: === Mechanical properties === Mechanical properties are important in structural and building materials as well as textile fabrics. In modern materials science, fracture mechanics is an important tool in improving the mechanical performance of materials and components. It applies the physics of stress and strain, in particular the theories of elasticity and plasticity, to the microscopic crystallographic defects found in real materials in order to predict the macroscopic mechanical failure of bodies. Fractography is widely used with fracture mechanics to understand the causes of failures and also to verify theoretical failure predictions against real-life failures. Ceramic materials are usually ionically or covalently bonded. A material held together by either type of bond will tend to fracture before any plastic deformation takes place, which results in poor toughness and brittle behavior in these materials.
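The Hall-Petch relation invoked above links yield strength to the average grain diameter d:

```latex
\sigma_y = \sigma_0 + \frac{k_y}{\sqrt{d}}
```

where σ0 (the lattice friction stress) and ky (the strengthening coefficient) are material constants, so a finer grain structure raises the strength.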
Additionally, because ceramic materials tend to be porous, the pores and other microscopic imperfections act as stress concentrators, decreasing the toughness further and reducing the tensile strength. These effects combine to give catastrophic failures, as opposed to the more ductile failure modes of metals. These materials do show plastic deformation; however, because of the rigid structure of the crystalline material, there are very few available slip systems for dislocations to move, and so they deform very slowly. To overcome the brittle behavior, ceramic material development has introduced the class of ceramic matrix composite materials, in which ceramic fibers with specific coatings are embedded, forming fiber bridges across any crack. This mechanism substantially increases the fracture toughness of such ceramics. Ceramic disc brakes are an example of using a ceramic matrix composite material manufactured with a specific process. Scientists are working on developing ceramic materials that can withstand significant deformation without breaking; the first such material, able to deform at room temperature, was reported in 2024. ==== Ice-templating for enhanced mechanical properties ==== When a ceramic product must withstand substantial mechanical loading, it can be fabricated by a process called ice-templating, which allows some control of the microstructure of the ceramic product and therefore some control of the mechanical properties. Ceramic engineers use this technique to tune the mechanical properties to their desired application. Specifically, the strength is increased when this technique is employed. Ice templating allows the creation of macroscopic pores in a unidirectional arrangement. The applications of this oxide-strengthening technique are important for solid oxide fuel cells and water filtration devices. To process a sample through ice templating, an aqueous colloidal suspension is prepared containing the ceramic powder, for example yttria-stabilized zirconia (YSZ), evenly dispersed throughout the colloid. The solution is then cooled from the bottom to the top on a platform that allows for unidirectional cooling. This forces ice crystals to grow in compliance with the unidirectional cooling, and these ice crystals force the suspended YSZ particles to the solidification front of the solid-liquid interface, resulting in pure ice crystals lined up unidirectionally alongside concentrated pockets of colloidal particles. The sample is then heated while the pressure is reduced enough to force the ice crystals to sublime, and the YSZ pockets begin to anneal together to form macroscopically aligned ceramic microstructures. The sample is then further sintered to complete the evaporation of the residual water and the final consolidation of the ceramic microstructure. During ice-templating, a few variables can be controlled to influence the pore size and morphology of the microstructure. These important variables are the initial solids loading of the colloid, the cooling rate, the sintering temperature and duration, and the use of certain additives which can influence the microstructural morphology during the process. A good understanding of these parameters is essential to understanding the relationships between processing, microstructure, and mechanical properties of anisotropically porous materials. === Electrical properties === ==== Semiconductors ==== Some ceramics are semiconductors. Most of these are transition metal oxides that are II-VI semiconductors, such as zinc oxide.
While there are prospects of mass-producing blue light-emitting diodes (LEDs) from zinc oxide, ceramicists are most interested in the electrical properties that show grain boundary effects. One of the most widely used of these is the varistor. These are devices whose resistance drops sharply at a certain threshold voltage. Once the voltage across the device reaches the threshold, there is a breakdown of the electrical structure in the vicinity of the grain boundaries, which results in its electrical resistance dropping from several megohms down to a few hundred ohms. The major advantage of these devices is that they can dissipate a lot of energy and they self-reset: after the voltage across the device drops below the threshold, its resistance returns to being high. This makes them ideal for surge-protection applications; as there is control over the threshold voltage and energy tolerance, they find use in all sorts of applications. The best demonstration of their ability can be found in electrical substations, where they are employed to protect the infrastructure from lightning strikes. They have rapid response, are low maintenance, and do not appreciably degrade from use, making them virtually ideal devices for this application. Semiconducting ceramics are also employed as gas sensors. When various gases are passed over a polycrystalline ceramic, its electrical resistance changes. With tuning to the possible gas mixtures, very inexpensive devices can be produced. ==== Superconductivity ==== Under some conditions, such as extremely low temperatures, some ceramics exhibit high-temperature superconductivity (in superconductivity, "high temperature" means above 30 K). The reason for this is not understood, but there are two major families of superconducting ceramics. ==== Ferroelectricity and supersets ==== Piezoelectricity, a link between electrical and mechanical response, is exhibited by a large number of ceramic materials, including the quartz used to measure time in watches and other electronics. Such devices use both properties of piezoelectrics, using electricity to produce a mechanical motion (powering the device) and then using this mechanical motion to produce electricity (generating a signal). The unit of time measured is the natural interval required for electricity to be converted into mechanical energy and back again. The piezoelectric effect is generally stronger in materials that also exhibit pyroelectricity, and all pyroelectric materials are also piezoelectric. These materials can be used to inter-convert between thermal, mechanical, or electrical energy; for instance, after synthesis in a furnace, a pyroelectric crystal allowed to cool under no applied stress generally builds up a static charge of thousands of volts. Such materials are used in motion sensors, where the tiny rise in temperature from a warm body entering the room is enough to produce a measurable voltage in the crystal. In turn, pyroelectricity is seen most strongly in materials that also display the ferroelectric effect, in which a stable electric dipole can be oriented or reversed by applying an electrostatic field. Pyroelectricity is also a necessary consequence of ferroelectricity. This can be used to store information in ferroelectric capacitors, elements of ferroelectric RAM. The most common such materials are lead zirconate titanate and barium titanate.
Aside from the uses mentioned above, the strong piezoelectric response of these materials is exploited in the design of high-frequency loudspeakers, transducers for sonar, and actuators for atomic force and scanning tunneling microscopes. ==== Positive thermal coefficient ==== Temperature increases can cause grain boundaries to suddenly become insulating in some semiconducting ceramic materials, mostly mixtures of heavy metal titanates. The critical transition temperature can be adjusted over a wide range by variations in chemistry. In such materials, current will pass through the material until joule heating brings it to the transition temperature, at which point the circuit is broken and current flow ceases. Such ceramics are used as self-controlled heating elements in, for example, the rear-window defrost circuits of automobiles. At the transition temperature, the material's dielectric response becomes theoretically infinite. While a lack of temperature control would rule out any practical use of the material near its critical temperature, the dielectric effect remains exceptionally strong even at much higher temperatures. Titanates with critical temperatures far below room temperature have become synonymous with "ceramic" in the context of ceramic capacitors for just this reason. === Optical properties === Work on optically transparent materials focuses on the response of a material to incoming light waves of a range of wavelengths. Frequency-selective optical filters can be utilized to alter or enhance the brightness and contrast of a digital image. Guided lightwave transmission via frequency-selective waveguides involves the emerging field of fiber optics and the ability of certain glassy compositions to act as a transmission medium for a range of frequencies simultaneously (multi-mode optical fiber) with little or no interference between competing wavelengths or frequencies. This resonant mode of energy and data transmission via electromagnetic (light) wave propagation, though low powered, is virtually lossless. Optical waveguides are used as components in integrated optical circuits (e.g. light-emitting diodes, LEDs) or as the transmission medium in local and long-haul optical communication systems. Also of value to the emerging materials scientist is the sensitivity of materials to radiation in the thermal infrared (IR) portion of the electromagnetic spectrum. This heat-seeking ability is responsible for such diverse optical phenomena as night vision and IR luminescence. Thus, there is an increasing need in the military sector for high-strength, robust materials which have the capability to transmit light (electromagnetic waves) in the visible (0.4–0.7 micrometers) and mid-infrared (1–5 micrometers) regions of the spectrum. These materials are needed for applications requiring transparent armor, including next-generation high-speed missiles and pods, as well as protection against improvised explosive devices (IEDs). In the 1960s, scientists at General Electric (GE) discovered that under the right manufacturing conditions, some ceramics, especially aluminium oxide (alumina), could be made translucent. These translucent materials were transparent enough to be used for containing the electrical plasma generated in high-pressure sodium street lamps. During the past two decades, additional types of transparent ceramics have been developed for applications such as nose cones for heat-seeking missiles, windows for fighter aircraft, and scintillation counters for computed tomography scanners.
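The self-regulating heating behavior described under positive thermal coefficient above can be caricatured with a lumped thermal model: once joule heating drives the element past its transition temperature, the resistance jump throttles the input power and the temperature hovers near the transition point. All parameter values below are hypothetical.

```python
# Toy lumped model of a PTC heating element; parameters are illustrative.
V = 12.0                        # applied voltage (V)
R_LOW, R_HIGH = 2.0, 2000.0     # resistance below/above the transition (ohm)
T_C = 120.0                     # transition temperature (deg C)
T_AMB, H = 20.0, 0.05           # ambient temperature, heat-loss coeff. (W/K)
C_TH, DT = 5.0, 0.5             # thermal mass (J/K), time step (s)

T = T_AMB
for _ in range(2000):
    R = R_LOW if T < T_C else R_HIGH          # resistance jumps at T_C
    power_in = V ** 2 / R                     # joule heating
    power_out = H * (T - T_AMB)               # losses to ambient
    T += (power_in - power_out) * DT / C_TH   # explicit Euler update
print(f"temperature settles near ~{T:.0f} deg C (T_C = {T_C:.0f})")
```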
Other ceramic materials, generally requiring greater purity in their make-up than those above, include forms of several chemical compounds, including: Barium titanate (often mixed with strontium titanate), which displays ferroelectricity, meaning that its mechanical, electrical, and thermal responses are coupled to one another and also history-dependent; it is widely used in electromechanical transducers, ceramic capacitors, and data storage elements, and grain boundary conditions can create PTC effects in heating elements. Sialon (silicon aluminium oxynitride), which has high strength, resistance to thermal shock, chemical and wear resistance, and low density; these ceramics are used in non-ferrous molten metal handling, weld pins, and the chemical industry. Silicon carbide (SiC), used as a susceptor in microwave furnaces, as a commonly used abrasive, and as a refractory material. Silicon nitride (Si3N4), used as an abrasive powder. Steatite (magnesium silicates), used as an electrical insulator. Titanium carbide, used in space shuttle re-entry shields and scratchproof watches. Uranium oxide (UO2), used as fuel in nuclear reactors. Yttrium barium copper oxide (YBa2Cu3O7−x), a high-temperature superconductor. Zinc oxide (ZnO), a semiconductor used in the construction of varistors. Zirconium dioxide (zirconia), which in pure form undergoes many phase changes between room temperature and practical sintering temperatures and can be chemically "stabilized" in several different forms; its high oxygen ion conductivity recommends it for use in fuel cells and automotive oxygen sensors, while in another variant, metastable structures can impart transformation toughening for mechanical applications; most ceramic knife blades are made of this material. Partially stabilised zirconia (PSZ) is much less brittle than other ceramics and is used for metal-forming tools, valves and liners, abrasive slurries, kitchen knives, and bearings subject to severe abrasion. == Products == === By usage === For convenience, ceramic products are usually divided into four main types, shown below with some examples: structural ceramics, including bricks, pipes, floor and roof tiles, and vitrified tile; refractories, such as kiln linings, gas fire radiants, and steel- and glass-making crucibles; whitewares, including tableware, cookware, wall tiles, pottery products, and sanitary ware; and technical ceramics, also known as engineering, advanced, special, or fine ceramics. Technical ceramic items include gas burner nozzles; ballistic protection and vehicle armor; nuclear fuel (uranium oxide pellets); biomedical implants; coatings of jet engine turbine blades; ceramic matrix composite gas turbine parts; reinforced carbon–carbon; ceramic disc brakes; missile nose cones; bearings; and the thermal insulation tiles used on the Space Shuttle orbiter. === Ceramics made with clay === Frequently, the raw materials of modern ceramics do not include clays. Those that do have been classified as: earthenware, fired at lower temperatures than other types; stoneware, which is vitreous or semi-vitreous; porcelain, which contains a high content of kaolin; and bone china. === Classification === Ceramics can also be classified into three distinct material categories: oxides (alumina, beryllia, ceria, zirconia); non-oxides (carbides, borides, nitrides, silicides); and composite materials (particulate-reinforced, fiber-reinforced, and combinations of oxides and non-oxides). Each one of these classes can be developed into unique material properties.
== Applications == Knife blades: the blade of a ceramic knife will stay sharp for much longer than that of a steel knife, although it is more brittle and susceptible to breakage. Carbon-ceramic brake disks for vehicles: highly resistant to brake fade at high temperatures. Advanced composite ceramic and metal matrices have been designed for most modern armoured fighting vehicles because they offer superior penetrating resistance against shaped charges (HEAT rounds) and kinetic energy penetrators. Ceramics such as alumina and boron carbide have been used as plates in ballistic armored vests to repel high-velocity rifle fire. Such plates are known commonly as small arms protective inserts, or SAPIs. Similar low-weight material is used to protect the cockpits of some military aircraft. Ceramic ball bearings can be used in place of steel. Their greater hardness results in lower susceptibility to wear; ceramic bearings typically last triple the lifetime of steel bearings. They deform less than steel under load, resulting in less contact with the bearing retainer walls and lower friction. In very high-speed applications, heat from friction causes more problems for metal bearings than for ceramic bearings. Ceramics are chemically resistant to corrosion and are preferred for environments where steel bearings would rust. In some applications their electrically insulating properties are advantageous. Drawbacks to ceramic bearings include significantly higher cost, susceptibility to damage under shock loads, and the potential to wear steel parts due to ceramics' greater hardness. In the early 1980s, Toyota researched production of an adiabatic engine using ceramic components in the hot gas area. The use of ceramics would have allowed operating temperatures exceeding 1650 °C. Advantages would include lighter materials and a smaller cooling system (or no cooling system at all), leading to major weight reduction. The expected increase in fuel efficiency, which Carnot's theorem predicts from the higher operating temperatures, could not be verified experimentally. (As an illustrative Carnot bound, η = 1 − Tcold/Thot: assuming a 400 K exhaust, raising the hot side from 1200 K to about 1923 K, i.e. 1650 °C, would lift the theoretical efficiency ceiling from roughly 67% to 79%.) It was found that heat transfer on the hot ceramic cylinder wall was greater than the heat transfer to a cooler metal wall, because the cooler gas film on a metal surface acts as a thermal insulator. Thus, despite the desirable properties of ceramics, prohibitive production costs and limited advantages have prevented widespread adoption of ceramic engine components. In addition, small imperfections in the ceramic material, along with low fracture toughness, can lead to cracking and potentially dangerous equipment failure. Such engines are possible experimentally, but mass production is not feasible with current technology. Experiments with ceramic parts for gas turbine engines are being conducted. Currently, even blades made of advanced metal alloys used in the engines' hot section require cooling and careful monitoring of operating temperatures. Turbine engines made with ceramics could operate more efficiently, providing for greater range and payload. Recent advances have been made in ceramics, including bioceramics such as dental implants and synthetic bones. Hydroxyapatite, the major mineral component of bone, has been made synthetically from several biological and chemical components and can be formed into ceramic materials. Orthopedic implants coated with these materials bond readily to bone and other tissues in the body without rejection or inflammatory reaction. They are of great interest for gene delivery and tissue engineering scaffolding.
Most hydroxyapatite ceramics are quite porous and lack mechanical strength, and they are therefore used solely to coat metal orthopedic devices to aid in forming a bond to bone, or as bone fillers. They are also used as fillers for orthopedic plastic screws to aid in reducing inflammation and to increase the absorption of these plastic materials. Work is being done to make strong, fully dense nanocrystalline hydroxyapatite ceramic materials for orthopedic weight-bearing devices, replacing foreign metal and plastic orthopedic materials with a synthetic but naturally occurring bone mineral. Ultimately, these ceramic materials may be used as bone replacements or, with the incorporation of collagen proteins, in the manufacture of synthetic bones. Applications for actinide-containing ceramic materials include nuclear fuels for burning excess plutonium (Pu), or chemically inert sources of alpha radiation for power supplies of uncrewed space vehicles or microelectronic devices. The use and disposal of radioactive actinides require immobilization in a durable host material. Long-half-life radionuclides such as the actinides are immobilized using chemically durable crystalline materials based on polycrystalline ceramics and large single crystals. High-tech ceramics are used for producing watch cases. The material is valued by watchmakers for its light weight, scratch resistance, durability, and smooth touch. IWC is one of the brands that pioneered the use of ceramic in watchmaking. Ceramics are used in the design of mobile phone bodies due to their high hardness, resistance to scratches, and ability to dissipate heat. Ceramic's thermal-management properties help maintain optimal device temperatures during heavy use, enhancing performance. Additionally, ceramic materials can support wireless charging and offer better signal transmission compared to metals, which can interfere with antennas. Companies like Apple and Samsung have incorporated ceramic in their devices. Ceramics made of silicon carbide are used in pump and valve components because of their corrosion-resistance characteristics. Silicon carbide is also used in nuclear reactors as a fuel cladding material due to its ability to withstand radiation and thermal stress. Other uses of silicon carbide ceramics include paper manufacturing, ballistics, chemical production, and pipe system components. == See also == Ceramic chemistry – Science and technology of creating objects from inorganic, non-metallic materials Ceramic engineering – Science and technology of creating objects from inorganic, non-metallic materials Ceramic nanoparticle Ceramic matrix composite – Composite material consisting of ceramic fibers in a ceramic matrix Ceramic art – Decorative objects made from clay and other raw materials by the process of pottery Pottery fracture – Result of thermal treatment on ceramic == References == == Further reading == Guy, John, ed. (1986). Oriental trade ceramics in South-East Asia, ninth to sixteenth centuries: with a catalogue of Chinese, Vietnamese and Thai wares in Australian collections (illustrated, revised ed.). Oxford University Press. ISBN 978-0-19-582593-0. == External links == Riedel, Ralf; Chen, I-Wei, eds. (2013). Ceramics Science and Technology. doi:10.1002/9783527631940. ISBN 978-3-527-31149-1.
Wikipedia/Ceramic_materials
Biomedical engineering (BME) or medical engineering is the application of engineering principles and design concepts to medicine and biology for healthcare applications (e.g., diagnostic or therapeutic purposes). BME seeks to close the gap between engineering and medicine, combining the design and problem-solving skills of engineering with medical and biological sciences to advance health care treatment, including diagnosis, monitoring, and therapy. Also included under the scope of a biomedical engineer is the management of current medical equipment in hospitals while adhering to relevant industry standards. This involves procurement, routine testing, preventive maintenance, and making equipment recommendations, a role also known as a Biomedical Equipment Technician (BMET) or as a clinical engineer. Biomedical engineering has recently emerged as its own field of study, as compared to many other engineering fields. Such an evolution is common as a new field transitions from being an interdisciplinary specialization among already-established fields to being considered a field in itself. Much of the work in biomedical engineering consists of research and development, spanning a broad array of subfields (see below). Prominent biomedical engineering applications include the development of biocompatible prostheses, various diagnostic and therapeutic medical devices ranging from clinical equipment to micro-implants, imaging technologies such as MRI and EKG/ECG, regenerative tissue growth, and the development of pharmaceutical drugs including biopharmaceuticals. == Subfields and related fields == === Bioinformatics === Bioinformatics is an interdisciplinary field that develops methods and software tools for understanding biological data. As an interdisciplinary field of science, bioinformatics combines computer science, statistics, mathematics, and engineering to analyze and interpret biological data. Bioinformatics is considered both an umbrella term for the body of biological studies that use computer programming as part of their methodology, as well as a reference to specific analysis "pipelines" that are repeatedly used, particularly in the field of genomics. Common uses of bioinformatics include the identification of candidate genes and single-nucleotide polymorphisms (SNPs). Often, such identification is made with the aim of better understanding the genetic basis of disease, unique adaptations, desirable properties (esp. in agricultural species), or differences between populations. In a less formal way, bioinformatics also tries to understand the organizational principles within nucleic acid and protein sequences. === Biomechanics === Biomechanics is the study of the structure and function of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells, and cell organelles, using the methods of mechanics. === Biomaterials === A biomaterial is any matter, surface, or construct that interacts with living systems. As a science, biomaterials is about fifty years old. The study of biomaterials is called biomaterials science or biomaterials engineering. It has experienced steady and strong growth over its history, with many companies investing large amounts of money into the development of new products. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering, and materials science. === Biomedical optics === Biomedical optics combines the principles of physics, engineering, and biology to study the interaction of biological tissue and light, and how this can be exploited for sensing, imaging, and treatment.
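As a toy illustration of the bioinformatics "pipeline" steps mentioned above, the sketch below flags positions where two already-aligned sequences differ, the simplest caricature of SNP identification; real pipelines additionally handle alignment, sequencing error, ploidy, and quality scores. The sequences are invented.

```python
# Minimal caricature of SNP identification on two pre-aligned sequences.
def naive_snps(ref: str, sample: str):
    assert len(ref) == len(sample), "sequences must be pre-aligned"
    return [(i, r, s) for i, (r, s) in enumerate(zip(ref, sample)) if r != s]

ref    = "ATGCGTACGTTAGC"   # invented reference fragment
sample = "ATGCGTCCGTTAGT"   # invented sample with two substitutions
for pos, r, s in naive_snps(ref, sample):
    print(f"position {pos}: {r} -> {s}")
```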
Biomedical optics has a wide range of applications, including optical imaging, microscopy, ophthalmoscopy, spectroscopy, and therapy. Examples of biomedical optics techniques and technologies include optical coherence tomography (OCT), fluorescence microscopy, confocal microscopy, and photodynamic therapy (PDT). OCT, for example, uses light to create high-resolution, three-dimensional images of internal structures, such as the retina in the eye or the coronary arteries in the heart. Fluorescence microscopy involves labeling specific molecules with fluorescent dyes and visualizing them using light, providing insights into biological processes and disease mechanisms. More recently, adaptive optics has been helping imaging by correcting aberrations in biological tissue, enabling higher-resolution imaging and improved accuracy in procedures such as laser surgery and retinal imaging. === Tissue engineering === Tissue engineering, like genetic engineering (see below), is a major segment of biotechnology, one which overlaps significantly with BME. One of the goals of tissue engineering is to create artificial organs (via biological material), such as kidneys and livers, for patients that need organ transplants. Biomedical engineers are currently researching methods of creating such organs. Researchers have grown solid jawbones and tracheas from human stem cells towards this end. Several artificial urinary bladders have been grown in laboratories and transplanted successfully into human patients. Bioartificial organs, which use both synthetic and biological components, are also a focus area in research, such as hepatic assist devices that use liver cells within an artificial bioreactor construct. === Genetic engineering === Genetic engineering, recombinant DNA technology, genetic modification/manipulation (GM), and gene splicing are terms that apply to the direct manipulation of an organism's genes. Unlike traditional breeding, an indirect method of genetic manipulation, genetic engineering utilizes modern tools such as molecular cloning and transformation to directly alter the structure and characteristics of target genes. Genetic engineering techniques have found success in numerous applications. Some examples include the improvement of crop technology (not a medical application, but see biological systems engineering), the manufacture of synthetic human insulin through the use of modified bacteria, the manufacture of erythropoietin in hamster ovary cells, and the production of new types of experimental mice such as the oncomouse (cancer mouse) for research. === Neural engineering === Neural engineering (also known as neuroengineering) is a discipline that uses engineering techniques to understand, repair, replace, or enhance neural systems. Neural engineers are uniquely qualified to solve design problems at the interface of living neural tissue and non-living constructs. Neural engineering supports numerous applications, including the future development of prosthetics. For example, cognitive neural prosthetics (CNPs) are being heavily researched and would allow a chip implant to assist people who use prosthetics by providing the signals needed to operate assistive devices. === Pharmaceutical engineering === Pharmaceutical engineering is an interdisciplinary science that includes drug engineering, novel drug delivery and targeting, pharmaceutical technology, unit operations of chemical engineering, and pharmaceutical analysis.
It may be considered a part of pharmacy due to its focus on applying technology to chemical agents to provide better medicinal treatment. == Hospital and medical devices == This is an extremely broad category, essentially covering all health care products that do not achieve their intended results through predominantly chemical (e.g., pharmaceuticals) or biological (e.g., vaccines) means and do not involve metabolism. A medical device is intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease. Some examples include pacemakers, infusion pumps, the heart-lung machine, dialysis machines, artificial organs, implants, artificial limbs, corrective lenses, cochlear implants, ocular prosthetics, facial prosthetics, somato prosthetics, and dental implants. Stereolithography is a practical example of medical modeling being used to create physical objects. Beyond modeling organs and the human body, emerging engineering techniques are also currently used in the research and development of new devices for innovative therapies, treatments, and patient monitoring of complex diseases. Medical devices are regulated and classified (in the US) as follows (see also Regulation): Class I devices present minimal potential for harm to the user and are often simpler in design than Class II or Class III devices. Devices in this category include tongue depressors, bedpans, elastic bandages, examination gloves, hand-held surgical instruments, and other similar types of common equipment. Class II devices are subject to special controls in addition to the general controls of Class I devices. Special controls may include special labeling requirements, mandatory performance standards, and postmarket surveillance. Devices in this class are typically non-invasive and include X-ray machines, PACS, powered wheelchairs, infusion pumps, and surgical drapes. Class III devices generally require premarket approval (PMA) or premarket notification (510(k)), a scientific review to ensure the device's safety and effectiveness, in addition to the general controls of Class I. Examples include replacement heart valves, hip and knee joint implants, silicone gel-filled breast implants, implanted cerebellar stimulators, implantable pacemaker pulse generators, and endosseous (intra-bone) implants. === Medical imaging === Medical/biomedical imaging is a major segment of medical devices. This area deals with enabling clinicians to directly or indirectly "view" things not visible in plain sight (for example, due to their size or location). This can involve utilizing ultrasound, magnetism, UV, radiology, and other means. Alternatively, navigation-guided equipment utilizes electromagnetic tracking technology, for example for catheter placement into the brain or in feeding tube placement systems. One example is ENvizion Medical's ENvue, an electromagnetic navigation system for enteral feeding tube placement. The system uses an external field generator and several passive EM sensors, enabling scaling of the display to the patient's body contour and a real-time view of the feeding tube tip's location and direction, which helps the medical staff ensure correct placement in the GI tract.
Imaging technologies are often essential to medical diagnosis and are typically the most complex equipment found in a hospital, including: fluoroscopy, magnetic resonance imaging (MRI), nuclear medicine, positron emission tomography (PET), PET-CT scans, projection radiography such as X-rays and CT scans, tomography, ultrasound, optical microscopy, and electron microscopy.

=== Medical implants ===
An implant is a kind of medical device made to replace and act as a missing biological structure (as compared with a transplant, which indicates transplanted biomedical tissue). The surface of implants that contact the body might be made of a biomedical material such as titanium, silicone, or apatite, depending on which is the most functional. In some cases, implants contain electronics, e.g. artificial pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug delivery devices in the form of implantable pills or drug-eluting stents.

=== Bionics ===
Artificial body part replacements are one of the many applications of bionics. Concerned with the intricate and thorough study of the properties and function of human body systems, bionics may be applied to solve some engineering problems. Careful study of the different functions and processes of the eyes, ears, and other organs paved the way for improved cameras, television, radio transmitters and receivers, and many other tools.

=== Biomedical sensors ===
In recent years, biomedical sensors based on microwave technology have gained more attention. Different sensors can be manufactured for specific uses in both diagnosing and monitoring disease conditions; for example, microwave sensors can be used as a complementary technique to X-ray imaging to monitor lower-extremity trauma. Such a sensor monitors the dielectric properties of the tissue and can thus detect changes in the tissue (bone, muscle, fat, etc.) under the skin, so when measurements are taken at different times during the healing process, the sensor response changes as the trauma heals.
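The physics behind such a sensor can be illustrated with a simple model: at the boundary between two media, the fraction of a microwave signal that is reflected depends on the contrast in relative permittivity, so as the water content of healing tissue changes, so does the measured reflection. The sketch below uses the standard normal-incidence reflection coefficient for lossless media; the permittivity values are illustrative assumptions, not measured tissue data.

```python
import numpy as np

def reflection_coefficient(eps_r):
    """Normal-incidence reflection at an air-tissue interface.

    Lossless approximation: the intrinsic impedance of a medium scales as
    1/sqrt(eps_r), giving Gamma = (1 - sqrt(eps_r)) / (1 + sqrt(eps_r))
    for a wave travelling from air into the tissue.
    """
    n = np.sqrt(eps_r)
    return (1 - n) / (1 + n)

# Hypothetical relative permittivities over the course of healing:
# swollen, water-rich tissue has a higher eps_r than recovered tissue.
for label, eps in [("acute swelling", 55.0),
                   ("partially healed", 45.0),
                   ("healed", 38.0)]:
    print(f"{label}: |Gamma| = {abs(reflection_coefficient(eps)):.3f}")
```

A real device measures a frequency-dependent, lossy response rather than a single number, but the trend is the same: a drift in dielectric properties between measurement sessions indicates a change in the underlying tissue.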
== Clinical engineering ==
Clinical engineering is the branch of biomedical engineering dealing with the actual implementation of medical equipment and technologies in hospitals or other clinical settings. Major roles of clinical engineers include training and supervising biomedical equipment technicians (BMETs), selecting technological products/services and logistically managing their implementation, working with governmental regulators on inspections/audits, and serving as technological consultants for other hospital staff (e.g. physicians, administrators, I.T., etc.). Clinical engineers also advise and collaborate with medical device producers regarding prospective design improvements based on clinical experience, and monitor the progression of the state of the art so as to redirect procurement patterns accordingly. Their inherent focus on practical implementation of technology has tended to keep them oriented more towards incremental redesigns and reconfigurations, as opposed to revolutionary research and development or ideas that would be many years from clinical adoption; however, there is a growing effort to expand this time horizon over which clinical engineers can influence the trajectory of biomedical innovation. In their various roles, they form a "bridge" between the primary designers and the end users, combining the perspective of being close to the point of use with training in product and process engineering. Clinical engineering departments will sometimes hire not just biomedical engineers, but also industrial/systems engineers to help address operations research/optimization, human factors, cost analysis, etc. (see also safety engineering for a discussion of the procedures used to design safe systems). A clinical engineering department is typically staffed with a manager, supervisors, engineers, and technicians, with a common staffing ratio of one engineer per eighty hospital beds. Clinical engineers are also authorized to audit pharmaceutical and associated stores to monitor FDA recalls of invasive items.

== Rehabilitation engineering ==
Rehabilitation engineering is the systematic application of engineering sciences to design, develop, adapt, test, evaluate, apply, and distribute technological solutions to problems confronted by individuals with disabilities. Functional areas addressed through rehabilitation engineering may include mobility, communications, hearing, vision, and cognition, as well as activities associated with employment, independent living, education, and integration into the community. While some rehabilitation engineers have master's degrees in rehabilitation engineering, usually a subspecialty of biomedical engineering, most have an undergraduate or graduate degree in biomedical, mechanical, or electrical engineering. A Portuguese university provides an undergraduate degree and a master's degree in Rehabilitation Engineering and Accessibility. In the UK, qualification as a rehabilitation engineer is possible via a university BSc Honours degree course, such as that offered by the Health Design & Technology Institute at Coventry University. The rehabilitation process for people with disabilities often entails the design of assistive devices, such as walking aids, intended to promote the inclusion of their users into the mainstream of society, commerce, and recreation.

== Regulatory issues ==
Regulatory requirements have steadily increased in recent decades in response to the many incidents in which devices have harmed patients. For example, from 2008 to 2011, there were 119 FDA recalls of medical devices in the US classified as Class I. According to the U.S. Food and Drug Administration (FDA), a Class I recall is associated with "a situation in which there is a reasonable probability that the use of, or exposure to, a product will cause serious adverse health consequences or death". Regardless of country-specific legislation, the main regulatory objectives coincide worldwide. For example, in medical device regulations, a product must be 1) safe and 2) effective, and 3) these properties must hold for all of the manufactured devices. A product is safe if patients, users, and third parties do not run unacceptable risks of physical harm, such as injury or death, in its intended use. Protective measures must be introduced on hazardous devices to reduce residual risks to a level that is acceptable when compared with the benefit derived from use of the device. A product is effective if it performs as specified by the manufacturer in the intended use. Proof of effectiveness is achieved through clinical evaluation, compliance with performance standards, or demonstration of substantial equivalence with an already marketed device. The previous features have to be ensured for all manufactured items of the medical device. This requires that a quality system be in place for all the relevant entities and processes that may impact safety and effectiveness over the whole medical device lifecycle.
The medical device engineering area is among the most heavily regulated fields of engineering, and practicing biomedical engineers must routinely consult and cooperate with regulatory law attorneys and other experts. The Food and Drug Administration (FDA) is the principal healthcare regulatory authority in the United States, having jurisdiction over medical devices, drugs, biologics, and combination products. The paramount objectives driving policy decisions by the FDA are the safety and effectiveness of healthcare products, which have to be assured through a quality system in place as specified under the 21 CFR 820 regulation. In addition, because biomedical engineers often develop devices and technologies for "consumer" use, such as physical therapy devices (which are also "medical" devices), these may also be governed in some respects by the Consumer Product Safety Commission. The greatest hurdles tend to be 510(k) "clearance" (typically for Class II devices) or premarket "approval" (typically for drugs and Class III devices). In the European context, safety, effectiveness, and quality are ensured through the "Conformity Assessment", which is defined as "the method by which a manufacturer demonstrates that its device complies with the requirements of the European Medical Device Directive". The directive specifies different procedures according to the class of the device, ranging from the simple Declaration of Conformity (Annex VII) for Class I devices to EC verification (Annex IV), production quality assurance (Annex V), product quality assurance (Annex VI), and full quality assurance (Annex II). The Medical Device Directive specifies detailed procedures for certification. In general terms, these procedures include tests and verifications that are to be contained in specific deliverables such as the risk management file, the technical file, and the quality system deliverables. The risk management file is the first deliverable and conditions the subsequent design and manufacturing steps. The risk management stage shall drive the product so that product risks are reduced to an acceptable level with respect to the benefits expected for the patients from the use of the device. The technical file contains all the documentation data and records supporting medical device certification. The FDA's technical file has similar content, although it is organized in a different structure. The quality system deliverables usually include procedures that ensure quality throughout the entire product life cycle. The same standard (ISO EN 13485) is usually applied for quality management systems in the US and worldwide. In the European Union, there are certifying entities named "Notified Bodies", accredited by the European member states. The Notified Bodies must ensure the effectiveness of the certification process for all medical devices apart from Class I devices, for which a declaration of conformity produced by the manufacturer is sufficient for marketing. Once a product has passed all the steps required by the Medical Device Directive, the device is entitled to bear a CE marking, indicating that the device is believed to be safe and effective when used as intended, and can therefore be marketed within the European Union area. The different regulatory arrangements sometimes result in particular technologies being developed first for either the U.S. or Europe, depending on which has the more favorable form of regulation.
While nations often strive for substantive harmony to facilitate cross-national distribution, philosophical differences about the optimal extent of regulation can be a hindrance; more restrictive regulations seem appealing on an intuitive level, but critics decry the tradeoff cost in terms of slowing access to life-saving developments.

=== RoHS II ===
Directive 2011/65/EU, better known as RoHS 2, is a recast of legislation originally introduced in 2002. The original EU legislation, "Restrictions of Certain Hazardous Substances in Electrical and Electronics Devices" (RoHS Directive 2002/95/EC), was replaced and superseded by 2011/65/EU, published in July 2011 and commonly known as RoHS 2. RoHS seeks to limit the dangerous substances in circulation in electronics products, in particular toxins and heavy metals, which are subsequently released into the environment when such devices are recycled. The scope of RoHS 2 is widened to include products previously excluded, such as medical devices and industrial equipment. In addition, manufacturers are now obliged to provide conformity risk assessments and test reports – or explain why they are lacking. For the first time, not only manufacturers but also importers and distributors share a responsibility to ensure that electrical and electronic equipment within the scope of RoHS complies with the hazardous substances limits and bears a CE mark.

=== IEC 60601 ===
The international standard IEC 60601 for home healthcare electro-medical devices defines the requirements for devices used in the home healthcare environment. IEC 60601-1-11 (2010) must now be incorporated into the design and verification of a wide range of home use and point-of-care medical devices, along with other applicable standards in the IEC 60601 3rd edition series. The mandatory date for implementation of the EN European version of the standard is June 1, 2013. The US FDA requires the use of the standard by June 30, 2013, while Health Canada recently extended the required date from June 2012 to April 2013. The North American agencies will only require these standards for new device submissions, while the EU will take the stricter approach of requiring all applicable devices placed on the market to comply with the home healthcare standard.

=== AS/NZS 3551:2012 ===
AS/NZS 3551:2012 is the Australian and New Zealand standard for the management of medical devices. The standard specifies the procedures required to maintain a wide range of medical assets in a clinical setting (e.g. a hospital). The standard is based on the IEC 60601 standards. It covers a wide range of medical equipment management elements, including procurement, acceptance testing, maintenance (electrical safety and preventive maintenance testing), and decommissioning.

== Training and certification ==

=== Education ===
Biomedical engineers require considerable knowledge of both engineering and biology, and typically have a bachelor's (B.Sc., B.S., B.Eng. or B.S.E.), master's (M.S., M.Sc., M.S.E., or M.Eng.), or doctoral (Ph.D. or MD-PhD) degree in BME (biomedical engineering) or another branch of engineering with considerable potential for BME overlap. As interest in BME increases, many engineering colleges now have a biomedical engineering department or program, with offerings ranging from the undergraduate (B.Sc., B.S., B.Eng. or B.S.E.) to doctoral levels.
Biomedical engineering has only recently emerged as its own discipline rather than a cross-disciplinary hybrid specialization of other disciplines, and BME programs at all levels are becoming more widespread, including the Bachelor of Science in Biomedical Engineering, which includes enough biological science content that many students use it as a "pre-med" major in preparation for medical school. The number of biomedical engineers is expected to rise as both a cause and an effect of improvements in medical technology. In the U.S., an increasing number of undergraduate programs are also becoming recognized by ABET as accredited bioengineering/biomedical engineering programs. As of 2023, 155 programs are accredited by ABET. In Canada and Australia, accredited graduate programs in biomedical engineering are common. For example, McMaster University offers an M.A.Sc., an MD/PhD, and a PhD in biomedical engineering. The first Canadian undergraduate BME program was offered at the University of Guelph as a four-year B.Eng. program. Polytechnique Montréal also offers a bachelor's degree in biomedical engineering, as does Flinders University. As with many degrees, the reputation and ranking of a program may factor into the desirability of a degree holder for either employment or graduate admission. The reputation of many undergraduate degrees is also linked to the institution's graduate or research programs, which have some tangible factors for rating, such as research funding and volume, publications, and citations. With BME specifically, the ranking of a university's hospital and medical school can also be a significant factor in the perceived prestige of its BME department/program. Graduate education is a particularly important aspect of BME. While many engineering fields (such as mechanical or electrical engineering) do not need graduate-level training to obtain an entry-level job in the field, the majority of BME positions prefer or even require one. Since most BME-related professions involve scientific research, such as in pharmaceutical and medical device development, graduate education is almost a requirement, as undergraduate degrees typically do not involve sufficient research training and experience. This can be either a master's or a doctoral degree; while in certain specialties a Ph.D. is notably more common than in others, it is hardly ever the majority (except in academia). In fact, the perceived need for some kind of graduate credential is so strong that some undergraduate BME programs will actively discourage students from majoring in BME without an expressed intention to also obtain a master's degree or apply to medical school afterwards. Graduate programs in BME, as in other scientific fields, are highly varied, and particular programs may emphasize certain aspects within the field. They may also feature extensive collaborative efforts with programs in other fields (such as the university's medical school or other engineering divisions), owing again to the interdisciplinary nature of BME. M.S. and Ph.D. programs will typically require applicants to have an undergraduate degree in BME, in another engineering discipline (plus certain life science coursework), or in life science (plus certain engineering coursework). Education in BME also varies greatly around the world. By virtue of its extensive biotechnology sector, its numerous major universities, and relatively few internal barriers, the U.S.
has progressed a great deal in its development of BME education and training opportunities. Europe, which also has a large biotechnology sector and an impressive education system, has encountered trouble in creating uniform standards as the European community attempts to supplant some of the national jurisdictional barriers that still exist. Recently, initiatives such as BIOMEDEA have sprung up to develop BME-related education and professional standards. Other countries, such as Australia, are recognizing and moving to correct deficiencies in their BME education. Also, as high-technology endeavors are usually marks of developed nations, some areas of the world are prone to slower development in education, including in BME.

=== Licensure/certification ===
As with other learned professions, each state has certain (fairly similar) requirements for becoming licensed as a registered Professional Engineer (PE), but, in the US, such a license is not required in industry to be employed as an engineer in the majority of situations (due to an exception known as the industrial exemption, which effectively applies to the vast majority of American engineers). The US model has generally been to require licensure only of practicing engineers who offer engineering services that impact the public welfare, safety, safeguarding of life, health, or property; engineers working in private industry without a direct offering of engineering services to the public or other businesses, education, and government need not be licensed. This is notably not the case in many other countries, where a license is as legally necessary to practice engineering as it is to practice law or medicine. Biomedical engineering is regulated in some countries, such as Australia, but registration is typically only recommended and not required. In the UK, mechanical engineers working in the areas of medical engineering, bioengineering, or biomedical engineering can gain Chartered Engineer status through the Institution of Mechanical Engineers. The Institution also runs the Engineering in Medicine and Health Division. The Institute of Physics and Engineering in Medicine (IPEM) has a panel for the accreditation of MSc courses in biomedical engineering, and Chartered Engineer status can also be sought through IPEM. The Fundamentals of Engineering exam – the first (and more general) of two licensure examinations for most U.S. jurisdictions – does now cover biology (although technically not BME). For the second exam, called the Principles and Practices, Part 2, or the Professional Engineering exam, candidates may select a particular engineering discipline's content to be tested on; there is currently no option for BME, meaning that any biomedical engineer seeking a license must prepare to take this examination in another category (which does not affect the actual license, since most jurisdictions do not recognize discipline specialties anyway). However, the Biomedical Engineering Society (BMES) is, as of 2009, exploring the possibility of seeking to implement a BME-specific version of this exam to facilitate biomedical engineers pursuing licensure. Beyond governmental registration, certain private-sector professional/industrial organizations also offer certifications with varying degrees of prominence. One such example is the Certified Clinical Engineer (CCE) certification for clinical engineers.
== Career prospects ==
In 2012 there were about 19,400 biomedical engineers employed in the US, and the field was predicted to grow by 5% (faster than average) from 2012 to 2022. Biomedical engineering has the highest percentage of female engineers compared to other common engineering professions. As of 2023, there were about 19,700 jobs in the field, with average pay of around $100,730 per year (roughly $48.43 an hour), and employment is expected to grow by 7% from 2023 to 2033 (again faster than average).

== Notable figures ==
Julia Tutelman Apter (deceased) – one of the first specialists in neurophysiological research and a founding member of the Biomedical Engineering Society
Earl Bakken (deceased) – invented the first transistorised pacemaker; co-founder of Medtronic
Forrest Bird (deceased) – aviator and pioneer in the invention of mechanical ventilators
Y.C. Fung (deceased) – professor emeritus at the University of California, San Diego; considered by many to be the founder of modern biomechanics
Leslie Geddes (deceased) – professor emeritus at Purdue University, electrical engineer, inventor, and educator of over 2,000 biomedical engineers; received a National Medal of Technology in 2006 from President George Bush for more than 50 years of contributions that spawned innovations ranging from burn treatments to miniature defibrillators, and from ligament repair to tiny blood pressure monitors for premature infants, as well as a new method for performing cardiopulmonary resuscitation (CPR)
Willem Johan Kolff (deceased) – pioneer of hemodialysis as well as the field of artificial organs
Robert Langer – Institute Professor at MIT; runs the largest BME laboratory in the world; pioneer in drug delivery and tissue engineering
John Macleod (deceased) – one of the co-discoverers of insulin at Case Western Reserve University
Alfred E. Mann – physicist, entrepreneur, and philanthropist; a pioneer in the field of biomedical engineering
J. Thomas Mortimer – emeritus professor of biomedical engineering at Case Western Reserve University; pioneer in functional electrical stimulation (FES)
Robert M. Nerem – professor emeritus at Georgia Institute of Technology; pioneer in regenerative tissue and biomechanics, and author of over 300 published works, which have been cited more than 20,000 times cumulatively
P. Hunter Peckham – Donnell Professor of Biomedical Engineering and Orthopaedics at Case Western Reserve University; pioneer in functional electrical stimulation (FES)
Nicholas A. Peppas – chaired professor in engineering at the University of Texas at Austin; pioneer in drug delivery, biomaterials, hydrogels, and nanobiotechnology
Robert Plonsey – professor emeritus at Duke University; pioneer of electrophysiology
Otto Schmitt (deceased) – biophysicist with significant contributions to BME, working with biomimetics
Ascher Shapiro (deceased) – Institute Professor at MIT; contributed to the development of the BME field and medical devices (e.g. intra-aortic balloons)
Gordana Vunjak-Novakovic – University Professor at Columbia University; pioneer in tissue engineering and bioreactor design
John G. Webster – professor emeritus at the University of Wisconsin–Madison; a pioneer in the field of instrumentation amplifiers for the recording of electrophysiological signals
Fred Weibell – coauthor of Biomedical Instrumentation and Measurements
U.A. Whitaker (deceased) – provider of the Whitaker Foundation, which supported research and education in BME by providing over $700 million to various universities, helping to create 30 BME programs and helping finance the construction of 13 buildings

== See also ==
Biomedicine – branch of medical science that applies biological and physiological principles to clinical practice
Cardiophysics – interdisciplinary science at the junction of cardiology and medical physics
Computational anatomy – interdisciplinary field of biology
Medical physics – application of physics in medicine or healthcare
Physiome – holistic physiological dynamics of an organism
Biomedical Engineering and Instrumentation Program (BEIP)

== References ==
Bureau of Labor Statistics, U.S. Department of Labor, Occupational Outlook Handbook, "Bioengineers and Biomedical Engineers", retrieved October 27, 2024.

== Further reading ==
Bronzino, Joseph D. (April 2006). The Biomedical Engineering Handbook (Third ed.). CRC Press. ISBN 978-0-8493-2124-5.
Villafane, Carlos (June 2009). Biomed: From the Student's Perspective (First ed.). Techniciansfriend.com. ISBN 978-1-61539-663-4.

== External links ==
Media related to Biomedical engineering at Wikimedia Commons
A biomaterial is a substance that has been engineered to interact with biological systems for a medical purpose – either a therapeutic one (to treat, augment, repair, or replace a tissue function of the body) or a diagnostic one. The corresponding field of study, called biomaterials science or biomaterials engineering, is about fifty years old. It has experienced steady growth over its history, with many companies investing large amounts of money into the development of new products. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering, and materials science. A biomaterial is different from a biological material, such as bone, that is produced by a biological system. However, "biomaterial" and "biological material" are often used interchangeably. Further, the word "bioterial" has been proposed as a potential alternative word for biologically produced materials such as bone or fungal biocomposites. Additionally, care should be exercised in defining a biomaterial as biocompatible, since biocompatibility is application-specific: a biomaterial that is biocompatible or suitable for one application may not be biocompatible in another.

== Introduction ==
Biomaterials can be derived either from nature or synthesized in the laboratory using a variety of chemical approaches utilizing metallic components, polymers, ceramics, or composite materials. They are often used and/or adapted for a medical application, and thus comprise the whole or part of a living structure or biomedical device which performs, augments, or replaces a natural function. Such functions may be relatively passive, as in a heart valve, or may be bioactive with more interactive functionality, such as hydroxyapatite-coated hip implants. Biomaterials are also commonly used in dental applications, surgery, and drug delivery. For example, a construct with impregnated pharmaceutical products can be placed into the body, which permits the prolonged release of a drug over an extended period of time. A biomaterial may also be an autograft, allograft, or xenograft used as a transplant material.

== Bioactivity ==
The ability of an engineered biomaterial to induce a physiological response that is supportive of the biomaterial's function and performance is known as bioactivity. Most commonly, in bioactive glasses and bioactive ceramics, this term refers to the ability of implanted materials to bond well with surrounding tissue in either osteoconductive or osseoproductive roles. Bone implant materials are often designed to promote bone growth while dissolving into the surrounding body fluid. Thus, for many biomaterials, good biocompatibility along with good strength and dissolution rates is desirable. Commonly, the bioactivity of biomaterials is gauged by surface biomineralization, in which a native layer of hydroxyapatite forms at the surface. These days, the development of clinically useful biomaterials is greatly enhanced by the advent of computational routines that can predict the molecular effects of biomaterials in a therapeutic setting based on limited in vitro experimentation.

== Self-assembly ==
Self-assembly is the most common term in use in the modern scientific community to describe the spontaneous aggregation of particles (atoms, molecules, colloids, micelles, etc.) without the influence of any external forces.
Large groups of such particles are known to assemble themselves into thermodynamically stable, structurally well-defined arrays, quite reminiscent of one of the seven crystal systems found in metallurgy and mineralogy (e.g., face-centered cubic, body-centered cubic, etc.). The fundamental difference in equilibrium structure is in the spatial scale of the unit cell (lattice parameter) in each particular case. Molecular self-assembly is found widely in biological systems and provides the basis of a wide variety of complex biological structures. This includes an emerging class of mechanically superior biomaterials based on microstructural features and designs found in nature. Thus, self-assembly is also emerging as a new strategy in chemical synthesis and nanotechnology. Molecular crystals, liquid crystals, colloids, micelles, emulsions, phase-separated polymers, thin films, and self-assembled monolayers all represent examples of the types of highly ordered structures obtained using these techniques. The distinguishing feature of these methods is self-organization.

== Structural hierarchy ==
Nearly all materials can be seen as hierarchically structured, since changes in spatial scale bring about different mechanisms of deformation and damage. However, in biological materials, this hierarchical organization is inherent to the microstructure. One of the first examples of this, in the history of structural biology, is the early X-ray scattering work on the hierarchical structure of hair and wool by Astbury and Woods. In bone, for example, collagen is the building block of the organic matrix – a triple helix with a diameter of 1.5 nm. These tropocollagen molecules are intercalated with the mineral phase (hydroxyapatite, a calcium phosphate), forming fibrils that curl into helicoids of alternating directions. These "osteons" are the basic building blocks of bones, with the volume fraction distribution between the organic and mineral phases being about 60/40. In another level of complexity, the hydroxyapatite crystals are mineral platelets with a diameter of approximately 70 to 100 nm and a thickness of 1 nm. They originally nucleate at the gaps between collagen fibrils. Similarly, the hierarchy of the abalone shell begins at the nanolevel, with an organic layer having a thickness of 20 to 30 nm. This layer proceeds with single crystals of aragonite (a polymorph of CaCO3) consisting of "bricks" with dimensions of 0.5 μm, finishing with layers of approximately 0.3 mm (the mesostructure). Crabs are arthropods whose carapace is made of a mineralized hard component (which exhibits brittle fracture) and a softer organic component composed primarily of chitin. The brittle component is arranged in a helical pattern. Each of these mineral "rods" (1 μm diameter) contains chitin–protein fibrils of approximately 60 nm diameter. These fibrils contain canals approximately 3 nm in diameter that link the interior and exterior of the shell.
== Applications ==
Biomaterials are used in:
Joint replacements
Bone plates
Intraocular lenses (IOLs) for eye surgery
Bone cement
Artificial ligaments and tendons
Dental implants for tooth fixation
Blood vessel prostheses
Heart valves
Skin repair devices (artificial tissue)
Cochlear replacements
Contact lenses
Breast implants
Drug delivery mechanisms
Sustainable materials
Vascular grafts
Stents
Nerve conduits
Surgical sutures, clips, and staples for wound closure
Pins and screws for fracture stabilisation
Surgical mesh

Biomaterials must be compatible with the body, and there are often issues of biocompatibility that must be resolved before a product can be placed on the market and used in a clinical setting. Because of this, biomaterials are usually subjected to the same requirements as new drug therapies. All manufacturing companies are also required to ensure traceability of all of their products, so that if a defective product is discovered, others in the same batch may be traced.

=== Bone grafts ===
Calcium sulfate (in its α- and β-hemihydrate forms) is a well-known biocompatible material that is widely used as a bone graft substitute in dentistry, or as a binder for such grafts.

=== Heart valves ===
In the United States, 49% of the 250,000 valve replacement procedures performed annually involve a mechanical valve implant. The most widely used valve is a bileaflet disc heart valve, or St. Jude valve. The mechanics involve two semicircular discs moving back and forth, both allowing the flow of blood as well as the ability to form a seal against backflow. The valve is coated with pyrolytic carbon and secured to the surrounding tissue with a mesh of woven fabric called Dacron (du Pont's trade name for polyethylene terephthalate). The mesh allows the body's tissue to grow into it, incorporating the valve.

=== Skin repair ===
Most of the time, artificial tissue is grown from the patient's own cells. However, when the damage is so extreme that it is impossible to use the patient's own cells, artificial tissue cells are grown. The difficulty is in finding a scaffold that the cells can grow and organize on. The scaffold must be biocompatible, mechanically strong, and biodegradable, and cells must be able to adhere to it. One successful scaffold is a copolymer of lactic acid and glycolic acid.

== Properties ==
As discussed previously, biomaterials are used in medical devices to treat, assist, or replace a function within the human body. The application of a specific biomaterial must combine the necessary composition, material properties, structure, and desired in vivo reaction in order to perform the desired function. Categorizations of different desired properties are defined in order to maximize functional results.

=== Host response ===
Host response is defined as the "response of the host organism (local and systemic) to the implanted material or device". Most materials will provoke a reaction when in contact with the human body. The success of a biomaterial relies on the host tissue's reaction with the foreign material. Specific reactions between the host tissue and the biomaterial can be generated through the biocompatibility of the material.

==== Biomaterial and tissue interactions ====
The in vivo functionality and longevity of any implantable medical device are affected by the body's response to the foreign material. The body undergoes a cascade of processes, defined under the foreign body response (FBR), in order to protect the host from the foreign material.
The effects of the device on the host tissue and blood, as well as the effects of the host tissue and blood on the device, must be understood in order to prevent complications and device failure. Tissue injury caused by device implantation triggers inflammatory and healing responses during the FBR. The inflammatory response occurs over two time periods: the acute phase and the chronic phase. The acute phase occurs during the initial hours to days after implantation and is identified by fluid and protein exudation along with a neutrophilic reaction. During the acute phase, the body attempts to clean and heal the wound: excess blood and proteins are delivered to the wound site, and monocytes are recruited. Continued inflammation leads to the chronic phase, which can be characterized by the presence of monocytes, macrophages, and lymphocytes. In addition, blood vessels and connective tissue form in order to heal the wounded area.

=== Compatibility ===
Biocompatibility is related to the behavior of biomaterials in various environments under various chemical and physical conditions. The term may refer to specific properties of a material without specifying where or how the material is to be used. For example, a material may elicit little or no immune response in a given organism, and may or may not be able to integrate with a particular cell type or tissue. Immuno-informed biomaterials that direct the immune response, rather than attempting to circumvent it, are one approach that shows promise. The ambiguity of the term reflects the ongoing development of insights into "how biomaterials interact with the human body" and eventually "how those interactions determine the clinical success of a medical device (such as a pacemaker or hip replacement)". Modern medical devices and prostheses are often made of more than one material, so it might not always be sufficient to talk about the biocompatibility of a specific material. Surgical implantation of a biomaterial into the body triggers an inflammatory reaction in the organism, with the associated healing of the damaged tissue. Depending upon the composition of the implanted material, the surface of the implant, the mechanism of fatigue, and chemical decomposition, several other reactions are possible. These can be local as well as systemic. They include immune response, foreign body reaction with isolation of the implant by vascular connective tissue, possible infection, and impact on the lifespan of the implant. Graft-versus-host disease is an auto- and alloimmune disorder exhibiting a variable clinical course. It can manifest in either acute or chronic form, affecting multiple organs and tissues and causing serious complications in clinical practice, both during transplantation and during the implementation of biocompatible materials.

==== Toxicity ====
A biomaterial should perform its intended function within the living body without negatively affecting other bodily tissues and organs. In order to prevent unwanted organ and tissue interactions, biomaterials should be non-toxic. The toxicity of a biomaterial refers to the substances that are emitted from the biomaterial while in vivo. A biomaterial should not give off anything to its environment unless it is intended to do so. Nontoxicity means that a biomaterial is noncarcinogenic, nonpyrogenic, nonallergenic, blood-compatible, and noninflammatory. However, a biomaterial can be designed to include toxicity for an intended purpose.
For example, the application of toxic biomaterials is studied during in vivo and in vitro cancer immunotherapy testing. Toxic biomaterials offer an opportunity to manipulate and control cancer cells. One recent study states: "Advanced nanobiomaterials, including liposomes, polymers, and silica, play a vital role in the codelivery of drugs and immunomodulators. These nanobiomaterial-based delivery systems could effectively promote antitumor immune responses and simultaneously reduce toxic adverse effects." This is a prime example of how the biocompatibility of a biomaterial can be altered to produce a desired function.

==== Biodegradable biomaterials ====
Biodegradable biomaterials are materials that are degradable through natural enzymatic reactions. The application of biodegradable synthetic polymers began in the late 1960s. Biodegradable materials have an advantage over other materials in that they carry a lower risk of harmful long-term effects. In addition to the ethical advantages of using biodegradable materials, they also improve the biocompatibility of materials used for implantation. Several properties, including biocompatibility, are important when considering different biodegradable biomaterials. Biodegradable biomaterials can be synthetic or natural, depending on their source and the type of extracellular matrix (ECM).

==== Biocompatible plastics ====
Some of the most commonly used biocompatible materials (or biomaterials) are polymers, due to their inherent flexibility and tunable mechanical properties. Medical devices made of plastics are often made of a select few, including: cyclic olefin polymer (COP), cyclic olefin copolymer (COC), polycarbonate (PC), polyetherimide (PEI), medical-grade polyvinyl chloride (PVC), polyethersulfone (PES), polyethylene (PE), polyetheretherketone (PEEK), and even polypropylene (PP). To ensure biocompatibility, there is a series of regulated tests that a material must pass to be certified for use. These include the United States Pharmacopeia Class VI (USP Class VI) Biological Reactivity Test and the International Organization for Standardization 10993 (ISO 10993) Biological Evaluation of Medical Devices. The main objective of biocompatibility tests is to quantify the acute and chronic toxicity of a material and determine any potential adverse effects under use conditions; thus, the tests required for a given material depend on its end use (i.e., blood, central nervous system, etc.).

=== Surface and bulk properties ===
Two properties that have a large effect on the functionality of a biomaterial are its surface and bulk properties. Bulk properties are the physical and chemical properties that characterize the biomaterial for its entire lifetime. They can be specifically tailored to mimic the physiochemical properties of the tissue that the material is replacing. They are mechanical properties that arise from a material's atomic and molecular construction.

Important bulk properties:
Chemical composition
Microstructure
Elasticity
Tensile strength
Density
Hardness
Electrical conductivity
Thermal conductivity

Surface properties are the chemical and topographical features on the surface of the biomaterial that will have direct interaction with the host blood/tissue. Surface engineering and modification allow clinicians to better control the interactions of a biomaterial with the host living system.
Important surface properties:
Wettability (surface energy)
Surface chemistry
Surface texture (smooth/rough) – topographical factors, including size, shape, alignment, and structure, determine the roughness of a material
Surface tension
Surface charge

=== Mechanical properties ===
In addition to a material being certified as biocompatible, biomaterials must be engineered specifically for their target application within a medical device. This is especially important in terms of the mechanical properties, which govern the way a given biomaterial behaves. One of the most relevant material parameters is the Young's modulus, E, which describes a material's elastic response to stress. The Young's moduli of the tissue and of the device coupled to it must closely match for optimal compatibility between device and body, whether the device is implanted or mounted externally. Matching the elastic moduli makes it possible to limit movement and delamination at the biointerface between implant and tissue, as well as to avoid stress concentrations that can lead to mechanical failure. Other important properties are the tensile and compressive strengths, which quantify the maximum stresses a material can withstand before breaking and may be used to set the stress limits that a device may be subjected to within or external to the body. Depending on the application, it may be desirable for a biomaterial to have high strength, so that it resists failure when subjected to a load; in other applications, however, it may be beneficial for the material to have low strength. There is a careful balance between strength and stiffness that determines how robust to failure the biomaterial device is. Typically, as the elasticity of a biomaterial increases, the ultimate tensile strength decreases, and vice versa. One application where a high-strength material is undesired is in neural probes: if a high-strength material is used, the tissue will always fail before the device does (under applied load), because the Young's modulus of the dura mater and cerebral tissue is on the order of 500 Pa. When this happens, irreversible damage to the brain can occur; thus, the biomaterial must have an elastic modulus less than or equal to that of brain tissue and a low tensile strength if an applied load is expected. For implanted biomaterials that may experience temperature fluctuations, e.g., dental implants, ductility is important. The material must be ductile for a reason similar to why the tensile strength cannot be too high: ductility allows the material to bend without fracturing and prevents the concentration of stresses in the tissue when the temperature changes. The material property of toughness is also important for dental implants, as well as any other rigid, load-bearing implant such as a replacement hip joint. Toughness describes the material's ability to deform under applied stress without fracturing; high toughness allows biomaterial implants to last longer within the body, especially when subjected to large or cyclic stresses, like the stresses applied to a hip joint during running. For medical devices that are implanted or attached to the skin, another important property requiring consideration is the flexural rigidity, D. Flexural rigidity determines how well the device surface can maintain conformal contact with the tissue surface, which is especially important for devices that measure tissue motion (strain) or electrical signals (impedance), or that are designed to stick to the skin without delaminating, as in epidermal electronics. Since flexural rigidity depends on the thickness of the material, h, to the third power (h³), it is very important that a biomaterial can be formed into thin layers in the applications mentioned above, where conformality is paramount.
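The cubic dependence on thickness can be made concrete with the standard thin-plate bending formula from classical plate theory, D = Eh³ / (12(1 − ν²)), where ν is Poisson's ratio. The sketch below evaluates it for a hypothetical polymer film; the modulus, Poisson's ratio, and thicknesses are illustrative assumptions, not properties of any particular device.

```python
def flexural_rigidity(E, h, nu=0.45):
    """Flexural rigidity of a thin plate: D = E*h^3 / (12*(1 - nu^2)).

    E  -- Young's modulus in Pa
    h  -- plate thickness in m
    nu -- Poisson's ratio (soft polymers are nearly incompressible)
    """
    return E * h**3 / (12.0 * (1.0 - nu**2))

E = 1e9  # a hypothetical 1 GPa polymer substrate
for h in (100e-6, 10e-6, 1e-6):  # 100 um, 10 um, 1 um
    print(f"h = {h*1e6:6.1f} um -> D = {flexural_rigidity(E, h):.3e} N*m")

# Because D scales with h^3, thinning the film by 10x softens its bending
# response by 1000x, which is why conformal (epidermal) devices are built
# from micrometre-scale layers.
```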
== Structure ==
The molecular composition of a biomaterial determines its physical and chemical properties. These compositions create the complex structures that allow the biomaterial to function, and it is therefore necessary to define and understand them in order to develop a biomaterial. Biomaterials can be designed to replicate natural organisms, a practice known as biomimetics. The structure of a biomaterial can be observed at different levels to better understand the material's properties and function.

=== Atomic structure ===
The arrangement of atoms and ions within a material is one of the most important structural properties of a biomaterial. The atomic structure of a material can be viewed at different levels: the subatomic level, the atomic or molecular level, and the ultrastructure created by the atoms and molecules. Intermolecular forces between the atoms and molecules that compose the material determine its material and chemical properties. At the subatomic level, one observes the electronic structure of an individual atom, which defines its interactions with other atoms and molecules. The molecular structure describes the arrangement of atoms within the material. Finally, the ultrastructure is the 3-D structure created from the atomic and molecular structures of the material. The solid state of a material is characterized by the intramolecular bonds between the atoms and molecules that comprise it. Types of intramolecular bonds include ionic bonds, covalent bonds, and metallic bonds. These bonds dictate the physical and chemical properties of the material, and they also determine the type of material (ceramic, metal, or polymer).

=== Microstructure ===
The microstructure of a material refers to the structure of an object, organism, or material as viewed at magnifications exceeding 25 times. It is composed of the different phases of form, size, and distribution of grains, pores, precipitates, etc. The majority of solid microstructures are crystalline; however, some materials, such as certain polymers, do not crystallize in the solid state.

==== Crystalline structure ====
A crystalline structure is a composition of ions, atoms, and molecules that are held together and ordered in a 3D shape. The main difference between a crystalline structure and an amorphous structure is the degree of order of the components: a crystalline structure has the highest level of order possible in the material, whereas an amorphous structure contains irregularities in the ordering pattern. One way to describe crystalline structures is through the crystal lattice, a three-dimensional representation of the location of a repeating unit (the unit cell) in the structure, denoted with lattice points. There are 14 distinct configurations of atom arrangement in a crystalline structure, all of which are represented by the Bravais lattices.
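The 14 Bravais lattices arise from combining the 7 lattice systems with the centerings each system admits. A minimal sketch that tabulates this standard crystallographic classification (P = primitive, C = base-centred, I = body-centred, F = face-centred, R = rhombohedral):

```python
# The 7 lattice systems and the centerings each admits.
BRAVAIS = {
    "triclinic":    ["P"],
    "monoclinic":   ["P", "C"],
    "orthorhombic": ["P", "C", "I", "F"],
    "tetragonal":   ["P", "I"],
    "rhombohedral": ["R"],
    "hexagonal":    ["P"],
    "cubic":        ["P", "I", "F"],
}

# Counting the centerings recovers the 14 Bravais lattices.
assert sum(len(c) for c in BRAVAIS.values()) == 14
```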
==== Defects of crystalline structure ====
During the formation of a crystalline structure, various impurities, irregularities, and other defects can form. These imperfections can arise through deformation of the solid, rapid cooling, or high-energy radiation. Types of defects include point defects and line defects such as edge dislocations.

=== Macrostructure ===
Macrostructure refers to the overall geometric properties that influence the force at failure, stiffness, bending, stress distribution, and weight of the material. Little to no magnification is required to reveal the macrostructure of a material. Observing the macrostructure reveals properties such as cavities, porosity, gas bubbles, stratification, and fissures. The material's strength and elastic modulus are both independent of the macrostructure.

== Natural biomaterials ==
Biomaterials can be constructed using only materials sourced from plants and animals in order to alter, replace, or repair human tissues and organs. Natural biomaterials were used as early as ancient Egypt, where animal skin was used for sutures. A more modern example is a hip replacement made of ivory, first recorded in Germany in 1891.

Valuable criteria for viable natural biomaterials:
Biodegradable
Biocompatible
Able to promote cell attachment and growth
Non-toxic

Examples of natural biomaterials:
Alginate
Matrigel
Fibrin
Collagen
Myocardial tissue engineering

=== Biopolymers ===
Biopolymers are polymers produced by living organisms. Cellulose and starch, proteins and peptides, and DNA and RNA are all examples of biopolymers, in which the monomeric units are, respectively, sugars, amino acids, and nucleotides. Cellulose is both the most common biopolymer and the most common organic compound on Earth. About 33% of all plant matter is cellulose. In a similar manner, silk (a proteinaceous biopolymer) has garnered tremendous research interest in a myriad of domains, including tissue engineering and regenerative medicine, microfluidics, and drug delivery.

== See also ==
Bionics
Hydrogel
Polymeric surface
Surface modification of biomaterials with proteins
Synthetic biodegradable polymer
List of biomaterials

== Footnotes ==

== References ==

== External links ==
Journal of Biomaterials Applications
CREB – Biomedical Engineering Research Centre
Department of Biomaterials at the Max Planck Institute of Colloids and Interfaces in Potsdam-Golm, Germany
Open Innovation Campus for Biomaterials
Stereolithography (SLA or SL; also known as vat photopolymerisation, optical fabrication, photo-solidification, or resin printing) is a form of 3D printing technology used for creating models, prototypes, patterns, and production parts in a layer-by-layer fashion using photochemical processes by which light causes chemical monomers and oligomers to cross-link together to form polymers. Those polymers then make up the body of a three-dimensional solid. Research in the area had been conducted during the 1970s, but the term was coined by Chuck Hull in 1984 when he applied for a patent on the process, which was granted in 1986. Stereolithography can be used to create prototypes for products in development, medical models, and computer hardware, as well as in many other applications. While stereolithography is fast and can produce almost any design, it can be expensive.

== History ==
Stereolithography, or "SLA" printing, is an early and widely used 3D printing technology. In the early 1980s, the Japanese researcher Hideo Kodama first invented the modern layered approach to stereolithography by using ultraviolet light to cure photosensitive polymers. In 1984, just before Chuck Hull filed his own patent, Alain Le Mehaute, Olivier de Witte, and Jean Claude André filed a patent for the stereolithography process. The French inventors' patent application was abandoned by the French General Electric Company (now Alcatel-Alsthom) and CILAS (The Laser Consortium). Le Mehaute believes that the abandonment reflects a problem with innovation in France. The term "stereolithography" (from the Greek stereo, "solid", and lithography) was coined in 1984 by Chuck Hull when he filed his patent for the process. Hull patented stereolithography as a method of creating 3D objects by successively "printing" thin layers of an object using a medium curable by ultraviolet light, starting from the bottom layer and working up to the top layer. Hull's patent described a concentrated beam of ultraviolet light focused onto the surface of a vat filled with a liquid photopolymer. The beam is focused onto the surface of the liquid photopolymer, creating each layer of the desired 3D object by means of crosslinking (the generation of intermolecular bonds in polymers). It was invented with the intent of allowing engineers to create prototypes of their designs in a more time-effective manner. After the patent was granted in 1986, Hull co-founded the world's first 3D printing company, 3D Systems, to commercialize it. Stereolithography's success in the automotive industry allowed 3D printing to achieve industry status, and the technology continues to find innovative uses in many fields of study. Attempts have been made to construct mathematical models of stereolithography processes and to design algorithms to determine whether a proposed object can be constructed using 3D printing.

== Technology ==
Stereolithography is an additive manufacturing process that, in its most common form, works by focusing an ultraviolet (UV) laser onto a vat of photopolymer resin. With the help of computer-aided manufacturing or computer-aided design (CAM/CAD) software, the UV laser is used to draw a pre-programmed design or shape onto the surface of the photopolymer vat. Photopolymers are sensitive to ultraviolet light, so the resin is photochemically solidified and forms a single layer of the desired 3D object. Then, the build platform is lowered by one layer and a blade recoats the top of the vat with resin. This process is repeated for each layer of the design until the 3D object is complete.
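Because the part is built one layer at a time, the build time is governed mainly by the part's height rather than its footprint: every layer costs one exposure plus one recoat or peel move, regardless of how much area it covers. The sketch below makes that estimate explicit; the layer thickness and per-layer timings are illustrative placeholders, not specifications of any particular printer.

```python
import math

def layer_count(part_height_mm, layer_thickness_mm=0.05):
    """Number of slices needed to build a part of the given height."""
    return math.ceil(part_height_mm / layer_thickness_mm)

def build_time_hours(n_layers, exposure_s=6.0, recoat_s=8.0):
    """Rough estimate: each layer costs one exposure (laser scan or
    LCD/DLP flash) plus one recoat/peel move."""
    return n_layers * (exposure_s + recoat_s) / 3600.0

n = layer_count(60.0)  # a hypothetical 60 mm tall part
print(f"{n} layers, about {build_time_hours(n):.1f} hours")
```

This height-dominated scaling is one reason masked-LCD and DLP variants (discussed below) can be faster for densely packed build plates: they expose an entire layer at once, so the per-layer cost does not grow with the number of parts.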
Completed parts must be washed with a solvent to clean wet resin from their surfaces. It is also possible to print objects "bottom up" by using a vat with a transparent bottom and focusing the UV or deep-blue polymerization laser upward through the bottom of the vat. An inverted stereolithography machine starts a print by lowering the build platform to touch the bottom of the resin-filled vat, then moving upward by the height of one layer. The UV laser then writes the bottom-most layer of the desired part through the transparent vat bottom. Then the vat is "rocked", flexing and peeling the bottom of the vat away from the hardened photopolymer; the hardened material detaches from the bottom of the vat and stays attached to the rising build platform, and new liquid photopolymer flows in from the edges of the partially built part. The UV laser then writes the second-from-bottom layer and repeats the process. An advantage of this bottom-up mode is that the build volume can be much bigger than the vat itself, and only enough photopolymer is needed to keep the bottom of the build vat continuously full. This approach is typical of desktop SLA printers, while the right-side-up approach is more common in industrial systems. Stereolithography requires the use of supporting structures which attach to the elevator platform to prevent deflection due to gravity, resist lateral pressure from the resin-filled blade, or retain newly created sections during the "vat rocking" of bottom-up printing. Supports are typically created automatically during the preparation of CAD models, and they can also be made manually. In either situation, the supports must be removed manually after printing. Other forms of stereolithography build each layer by LCD masking or by using a DLP projector.

== Materials ==
The liquid materials used for SLA printing are commonly referred to as "resins" and are thermoset polymers. A wide variety of resins are commercially available, and it is also possible to use homemade resins, for example to test different compositions. Material properties vary according to formulation configurations: "materials can be soft or hard, heavily filled with secondary materials like glass and ceramic, or imbued with mechanical properties like high heat deflection temperature or impact resistance". Recently, some studies have explored the possibility of using green or reusable materials to produce "sustainable" resins. It is possible to classify the resins into the following categories:
Standard resins, for general prototyping
Engineering resins, for specific mechanical and thermal properties
Dental and medical resins, for biocompatibility certifications
Castable resins, for zero ash content after burnout
Biomaterial resins, formulated as aqueous solutions of synthetic polymers like polyethylene glycol, or of biological polymers such as gelatin, dextran, or hyaluronic acid

== Uses ==

=== Medical modeling ===
Stereolithographic models have been used in medicine since the 1990s for creating accurate 3D models of various anatomical regions of a patient, based on data from computer scans. Medical modelling involves first acquiring a CT, MRI, or other scan. This data consists of a series of cross-sectional images of the human anatomy. In these images, different tissues show up as different levels of grey. Selecting a range of grey values enables specific tissues to be isolated. A region of interest is then selected, and all the pixels connected to the target point within that grey value range are selected. This enables a specific organ to be selected. This process is referred to as segmentation. The segmented data may then be translated into a format suitable for stereolithography.
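The connected-pixel selection just described is a thresholded region-growing (flood-fill) algorithm. A minimal sketch of it is shown below, operating on a 3D scan volume held in a NumPy array; the seed point and the grey-value window for "bone" are hypothetical values chosen for illustration.

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, lo, hi):
    """Return a boolean mask of all voxels connected to `seed`
    (6-connectivity) whose grey values lie in the window [lo, hi]."""
    mask = np.zeros(volume.shape, dtype=bool)
    if not (lo <= volume[seed] <= hi):
        return mask  # seed itself lies outside the tissue's grey-value range
    mask[seed] = True
    queue = deque([seed])
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in neighbours:
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                    and not mask[n] and lo <= volume[n] <= hi):
                mask[n] = True
                queue.append(n)
    return mask

# Hypothetical usage on a CT volume `ct` (values in Hounsfield units):
# bone = region_grow(ct, seed=(40, 128, 128), lo=300, hi=3000)
```

The resulting mask can then be converted to a surface mesh (e.g., by marching cubes) and exported as an STL file for printing.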
This process, referred to as segmentation, enables a specific organ to be selected. The segmented data may then be translated into a format suitable for stereolithography. While stereolithography is normally accurate, the accuracy of a medical model depends on many factors, especially the operator performing the segmentation correctly. Errors are possible when making medical models using stereolithography, but they can be avoided with practice and well-trained operators. Stereolithographic models are used as an aid to diagnosis, preoperative planning, and implant design and manufacture. This might involve planning and rehearsing osteotomies, for example. Surgeons use models to help plan surgeries; prosthetists and technologists also use models as an aid to the design and manufacture of custom-fitting implants. For instance, medical models created through stereolithography can be used to help in the construction of cranioplasty plates. In 2019, scientists at Rice University published an article in the journal Science presenting soft hydrogel materials for stereolithography used in biological research applications. === Prototyping === Stereolithography is often used for prototyping parts. For a relatively low price, stereolithography can produce accurate prototypes, even of irregular shapes. Businesses can use those prototypes to assess the design of their product or as publicity for the final product. == Advantages and disadvantages == === Advantages === One of the advantages of stereolithography is its speed; functional parts can be manufactured within a day. The length of time needed to produce a single part depends on the complexity and size of the design, and can range from a few hours to more than a day. SLA-printed parts, unlike those obtained from FFF/FDM, do not exhibit significant anisotropy (direction-dependent properties) or a visible layering pattern, and the surface quality is in general superior. Prototypes and designs made with stereolithography are strong enough to be machined and can also be used to make master patterns for injection molding or various metal casting processes. === Disadvantages === Although stereolithography can be used to produce virtually any synthetic design, it is often costly: machines and resin are more expensive than their FFF counterparts, and post-processing steps such as washing and curing add further cost, though prices are coming down. Since 2012, however, public interest in 3D printing has inspired the design of several consumer SLA machines which can cost considerably less. Beginning in 2016, substituting a high-resolution, high-contrast LCD panel for the laser or projector of the SLA and DLP methods has brought prices down to below US$200. Each layer is created in its entirety at once, since the whole cross section is displayed on the LCD screen and exposed by UV LEDs that lie below it. Resolutions of 0.01 mm are attainable; the XY resolution of such a machine is simply the pixel pitch of its masking panel, as the sketch at the end of this section illustrates. Another disadvantage is that the photopolymers are sticky, messy, and need to be handled with care. Newly made parts need to be washed, further cured, and dried. The environmental impact of all these processes requires more study to be understood, but in general SLA technologies have not produced any biodegradable or compostable forms of resin, while other 3D printing methods offer some compostable PLA options. The choice of materials is also limited compared to FFF, which can process virtually any thermoplastic.
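As a worked example of the resolution claim above: for an LCD-masking printer, the smallest addressable XY feature is one panel pixel, so the resolution follows directly from the panel's active width and pixel count. The figures below are hypothetical, chosen only to show the arithmetic.
```python
# XY resolution of an LCD-masking resin printer = pixel pitch of the panel.
# Hypothetical panel specs, chosen for illustration only.
panel_width_mm = 76.8    # active-area width of an imagined monochrome LCD
pixels_across = 7680     # horizontal pixel count of that panel

pixel_pitch_mm = panel_width_mm / pixels_across
print(f"XY pixel pitch: {pixel_pitch_mm:.3f} mm")  # 0.010 mm per pixel
```
Z resolution is set independently by the lift mechanism's layer height rather than by the panel.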
== See also ==
Fused filament fabrication (FFF or FDM)
Selective laser sintering (SLS)
Thermoforming
Laminated object manufacturing (LOM)
.stl file format
== External links ==
Rapid Prototyping and Stereolithography animation – an animation demonstrating stereolithography and the actions of an SL machine
Wikipedia/Stereolithography
The Federal Center of Neurosurgery in Tyumen (Russian: Федеральный центр нейрохирургии в Тюмени; full official name: Federal State Budgetary Institution "Federal Center of Neurosurgery" of the Ministry of Health of the Russian Federation (Tyumen)) is a medical institution built to provide high-tech neurosurgical care. The target group of the hospital is the inhabitants of the Ural Federal District. The distinguishing feature of the institution is minimally invasive surgery. The center was opened in 2011 under the auspices of the National Priority Project "Public Health". In 2012 it ranked second in Russia for the number of neurosurgical operations performed, after the Burdenko Neurosurgery Institute in Moscow. == History == Under the National Priority Project "Public Health", launched in 2006, it was planned to build 7 federal centers of high medical technology in Russian regions, as well as the Federal Research and Surgery Center of Children's Hematology, Oncology and Immunology in Moscow (by order of the Government of the Russian Federation of March 20, 2006, No. 139). The number of such institutions was later increased to 14, two of which were planned for neurosurgery (in Tyumen and Novosibirsk). The Tyumen center was the first of the neurosurgical centers and the seventh of the federal centers overall. The so-called "medical campus" near the village of Patrusheva was chosen as the building site; it already contained the surgical campus of the Tyumen regional hospital and the medical unit "Neftyanik". In addition, the Radiological center of the Tyumen regional oncological clinic was built there in 2012, and there are plans to construct a new academic building of the Tyumen State Medical Academy, an anatomical center, and the main campus of the regional oncological clinic. Construction of the Tyumen center started in 2008, and the first surgery was performed on April 25, 2011. In 2011 the clinic was allocated 330 surgery quotas under the government procurement program, and in 2012, 3,000 quotas. In December 2012 the first surgery was performed at the Federal Center of Neurosurgery in Novosibirsk. == Structure == The hospital consists of five specialized units: four for adults (neurovascular, vertebrological, neuro-oncology, and functional neurosurgery), with 20 beds each, and one for children, with 15 beds. There is also an admission office with a capacity of 80 visits per shift. The subsidiary units are the medical ultrasonography department, the operating suite, and the department of perioperative medicine and life support. == Provided health care == The hospital provides medical care in accordance with Article 34 of Federal Law No. 323 of 21 November 2011, "On the Basics of Protecting the Health of Citizens of the Russian Federation". Medical care for citizens of the Russian Federation is provided within the government procurement program, free of charge, regardless of the region of residence. Patients may receive care via referrals from the regional medical departments or through the center's own admission office. The clinic provides treatment for a range of neurosurgical diseases. Statistically, 80% of the patients make a complete recovery, and 15% continue treatment with medication. Because of the minimally invasive surgical methods, tissue intrusion is minimal, which allows the majority of patients to be discharged within 3–5 days after surgery. == Applied technologies == The center uses a number of advanced technologies.
For example, the vacuum venous blood sampling used in the children's department reduces discomfort and infection risk. The vertebrological department performs endoscopic removal of intervertebral disc hernias; the technology, provided by the German company JOIMAX, makes it possible to perform the surgery without general anaesthesia. The oncology department uses SonoWand, a Norwegian dynamic 3D neuronavigation system. The ISIS IOM intraoperative neuromonitor from the German company Inomed Medizintechnik, used in the functional neurosurgery department, assists in implanting chip stimulators for vagus nerve stimulation; patients can switch off painful sensations at any time using a remote control. Minimally invasive surgery is supported by the O-arm Surgical Imaging System, a mobile intraoperative imaging system from the American company Medtronic. The operating suite is also equipped with a telemedicine system that allows the surgeons to consult in real time with specialists anywhere in the world. == External links == Federal Center of Neurosurgery (Tyumen) on Facebook Federal Center of Neurosurgery (Tyumen) at LiveJournal
Wikipedia/Federal_Center_of_Neurosurgery_(Tyumen)
A Bachelor of Science in Biomedical Engineering is a bachelor's degree typically conferred after a four-year undergraduate course of study in biomedical engineering (BME). The degree itself is largely equivalent to a Bachelor of Science (B.S.), and many institutions conferring degrees in the fields of biomedical engineering and bioengineering do not append the field to the degree itself. Courses of study in BME are also extremely diverse, as the field itself is relatively new and developing. In general, an undergraduate course of study in BME can be likened to a cross between engineering and biological science, with varying proportions of the two. == Professional status == Engineers typically require a form of professional certification, such as satisfying certain education requirements and passing an examination, to become a professional engineer. These certifications are usually nationally regulated and registered, but there are also cases where registration is overseen by a self-governing body, such as the Canadian Association of Professional Engineers. In many cases, carrying the title of "Professional Engineer" is legally protected. As BME is an emerging field, professional certifications are not as standard and uniform as they are for other engineering fields. For example, the Fundamentals of Engineering exam in the U.S. does not include a biomedical engineering section, though it does cover biology. Biomedical engineers often simply possess a university degree as their qualification. Some countries, such as Australia, do regulate biomedical engineers, though registration is typically recommended rather than required. As with many engineering fields, a bachelor's degree is usually the minimum and most common degree for a profession in BME, though it is not uncommon for the bachelor's degree to serve as a launching pad into graduate studies. ABET does accredit undergraduate programs in the field; however, even accreditation is not a strict requirement, since the field is still emerging and many programs are young. == Curriculum == The curriculum for BME programs varies significantly from institution to institution, and often within a single program. In general, a basic engineering curriculum, including mathematics through differential equations, statistics, and a basic understanding of biology and other basic sciences, are hallmarks of a BME program. Many BME programs have a series of tracks that focus on a particular area of study within BME; often, the tracks also coincide with a particular engineering or science field. Examples of tracks include:
Biomechanics: focus includes medical devices, modeling of biological systems, and mechanics of organisms. This track interfaces with mechanical engineering and often physiology.
Bioinstrumentation/Bioelectrical Systems: focus includes medical devices, modeling of biological systems (in particular circuit analogies to the nervous system), bioelectric phenomena, and signal processing. This track interfaces with electrical engineering.
Cell, Tissue and Biomolecular Engineering: this track is often quite diverse, with focus ranging from artificial tissues, modeling of biological systems, and drug delivery to genetic engineering, biochemical engineering, and protein production. It can interface with chemical engineering, mechanical engineering, molecular biology, physiology, genetics, materials science, and other fields.
Medical Optics: focus on medical diagnostics and medical optical technology.
This track interfaces with optics, physics, and electrical engineering. Many other tracks may exist within specific programs, as well as combinations of multiple tracks. Another common feature of many BME programs is a capstone design project in which students research and develop technology in the field. At some schools, this culminates in the creation of medical devices and prototypes. Capstone design projects also often include exposure to issues like funding, regulatory approval, and other topics related to careers in the field. == Research and Industry Experience == An important feature of many programs is the inclusion of research and/or industry experience in either the curricular or extracurricular work done by students. Since BME careers often focus on research or industrial applications of the field, many programs have seen fit to either encourage or sometimes require experience outside of the standard curricular requirements. Many research universities offer chances for students to participate in faculty research at the undergraduate level. Other schools have an industry practicum or co-ops to give students relevant work experience before graduation. Students who participate in either research or industry during the course of study often see advantages when they enter the job market, as many employers prefer experienced candidates or offer higher pay to those with prior experience. Research or industry experience is also often a factor in graduate school admission. == Value of the degree == Recently, many universities, such as Case Western Reserve University, have been implementing new initiatives to either create or expand undergraduate programs in BME. This is in part due to rising demand in the biotechnology sector and increasing interest in biological research. A degree in BME immediately identifies a candidate as having training in both traditional engineering and biological science, which has become an increasingly desirable qualification as aspects of biology permeate other industries. Since BME is a diverse field, many programs have a broad curriculum, with students usually choosing to specialize in a particular aspect of BME. Due to this diversity, however, some degree holders may find their education lacking in deep emphasis, which may prompt further study in graduate school or learning through experience. Numerous rankings of undergraduate BME programs exist, with highly varying bases for each ranking. As with many degrees, the reputation of a program may factor into the desirability of a degree holder for either employment or graduate admission. The reputation of many undergraduate degrees is also linked to the institution's graduate or research programs, which have more tangible rating factors, such as research funding and volume, publications, and citations.
Wikipedia/Bachelor_of_Science_in_Biomedical_Engineering
Medicine is the science and practice of caring for patients: managing the diagnosis, prognosis, prevention, treatment, and palliation of their injury or disease, and promoting their health. Medicine encompasses a variety of health care practices evolved to maintain and restore health by the prevention and treatment of illness. Contemporary medicine applies biomedical sciences, biomedical research, genetics, and medical technology to diagnose, treat, and prevent injury and disease, typically through pharmaceuticals or surgery, but also through therapies as diverse as psychotherapy, external splints and traction, medical devices, biologics, and ionizing radiation, amongst others. Medicine has been practiced since prehistoric times, and for most of this time it was an art (an area of creativity and skill), frequently having connections to the religious and philosophical beliefs of local culture. For example, a medicine man would apply herbs and say prayers for healing, or an ancient philosopher and physician would apply bloodletting according to the theories of humorism. In recent centuries, since the advent of modern science, most medicine has become a combination of art and science (both basic and applied, under the umbrella of medical science). For example, while stitching technique for sutures is an art learned through practice, knowledge of what happens at the cellular and molecular level in the tissues being stitched arises through science. Prescientific forms of medicine, now known as traditional medicine or folk medicine, remain commonly used in the absence of scientific medicine and are thus called alternative medicine. Alternative treatments outside of scientific medicine with ethical, safety, and efficacy concerns are termed quackery. == Etymology == Medicine is the science and practice of the diagnosis, prognosis, treatment, and prevention of disease. The word "medicine" is derived from Latin medicus, meaning "a physician". The word "physic", from which "physician" derives, was the old word for what is now called a medicine, and also for the field of medicine itself. == Clinical practice == Medical availability and clinical practice vary across the world due to regional differences in culture and technology. Modern scientific medicine is highly developed in the Western world, while in developing regions such as parts of Africa or Asia, the population may rely more heavily on traditional medicine, which has limited evidence and efficacy and no required formal training for practitioners. Even in the developed world, evidence-based medicine is not universally used in clinical practice; for example, a 2007 survey of literature reviews found that about 49% of the interventions lacked sufficient evidence to support either benefit or harm. In modern clinical practice, physicians and physician assistants personally assess patients to diagnose, prognose, treat, and prevent disease using clinical judgment. The doctor-patient relationship typically begins with an interaction involving an examination of the patient's medical history and medical record, followed by a medical interview and a physical examination. Basic diagnostic medical devices (e.g., stethoscope, tongue depressor) are typically used. After examining for signs and interviewing for symptoms, the doctor may order medical tests (e.g., blood tests), take a biopsy, or prescribe pharmaceutical drugs or other therapies. Differential diagnosis methods help to rule out conditions based on the information provided.
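At its core, this ruling-out step is an elimination process: each new finding removes the candidate conditions that are inconsistent with it. The toy sketch below illustrates that logic with invented condition names and findings; it is a caricature of clinical reasoning, not a diagnostic tool.
```python
# Toy illustration of differential diagnosis as set filtering (invented data).
candidates = {
    "condition_a": {"fever", "cough"},   # expected findings per condition
    "condition_b": {"fever", "rash"},
    "condition_c": {"cough"},
}
observed = {"fever", "cough"}            # findings gathered from the patient

# Rule out any condition whose expected findings are not all present.
remaining = [name for name, findings in candidates.items()
             if findings <= observed]
print(remaining)  # ['condition_a', 'condition_c'] -- condition_b is ruled out
```
In practice, clinicians weigh likelihoods and order further tests rather than applying strict set inclusion, but the eliminative structure is the same.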
During the encounter, properly informing the patient of all relevant facts is an important part of the relationship and the development of trust. The medical encounter is then documented in the medical record, which is a legal document in many jurisdictions. Follow-ups may be shorter but follow the same general procedure, and specialists follow a similar process. The diagnosis and treatment may take only a few minutes or a few weeks, depending on the complexity of the issue. The components of the medical interview and encounter are:
Chief complaint (CC): the reason for the current medical visit. These are the symptoms, recorded in the patient's own words along with the duration of each one. Also called chief concern or presenting complaint.
Current activity: occupation, hobbies, what the patient actually does.
Family history (FH): a listing of diseases in the family that may impact the patient. A family tree is sometimes used.
History of present illness (HPI): the chronological order of events of symptoms and further clarification of each symptom. It is distinguishable from the history of previous illness, often called past medical history (PMH); the medical history comprises HPI and PMH.
Medications (Rx): what drugs the patient takes, including prescribed, over-the-counter, and home remedies, as well as alternative and herbal medicines or remedies. Allergies are also recorded.
Past medical history (PMH/PMHx): concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, and history of known allergies.
Review of systems (ROS) or systems inquiry: a set of additional questions that may have been missed in the HPI: a general enquiry (have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc.), followed by questions on the body's main organ systems (heart, lungs, digestive tract, urinary tract, etc.).
Social history (SH): birthplace, residences, marital history, social and economic status, and habits (including diet, medications, tobacco, alcohol).
The physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms, which are volunteered by the patient and not necessarily objectively observable. The healthcare provider uses sight, hearing, touch, and sometimes smell (e.g., in infection, uremia, diabetic ketoacidosis). Four actions are the basis of physical examination: inspection, palpation (feel), percussion (tap to determine resonance characteristics), and auscultation (listen), generally in that order, although auscultation occurs prior to percussion and palpation for abdominal assessments. The clinical examination involves the study of:
Abdomen and rectum
Cardiovascular (heart and blood vessels)
General appearance of the patient and specific indicators of disease (nutritional status, presence of jaundice, pallor, or clubbing)
Genitalia (and pregnancy if the patient is or could be pregnant)
Head, eye, ear, nose, and throat (HEENT)
Musculoskeletal (including spine and extremities)
Neurological (consciousness, awareness, brain, vision, cranial nerves, spinal cord, and peripheral nerves)
Psychiatric (orientation, mental state, mood, evidence of abnormal perception or thought)
Respiratory (large airways and lungs)
Skin
Vital signs, including height, weight, body temperature, blood pressure, pulse, respiration rate, and hemoglobin oxygen saturation
The examination is likely to focus on areas of interest highlighted in the medical history and may not include everything listed above. The treatment plan may include ordering additional medical laboratory tests and medical imaging studies, starting therapy, referral to a specialist, or watchful observation. A follow-up may be advised. Depending upon the health insurance plan and the managed care system, various forms of "utilization review", such as prior authorization of tests, may place barriers on accessing expensive services. The medical decision-making (MDM) process involves the analysis and synthesis of all the above data to come up with a list of possible diagnoses (the differential diagnoses), along with an idea of what needs to be done to obtain a definitive diagnosis that would explain the patient's problem. On subsequent visits, the process may be repeated in an abbreviated manner to obtain any new history, symptoms, physical findings, lab or imaging results, or specialist consultations. == Institutions == Contemporary medicine is, in general, conducted within health care systems. Legal, credentialing, and financing frameworks are established by individual governments, augmented on occasion by international organizations, such as churches. The characteristics of any given health care system have a significant impact on the way medical care is provided. From ancient times, Christian emphasis on practical charity gave rise to the development of systematic nursing and hospitals, and the Catholic Church today remains the largest non-government provider of medical services in the world. Advanced industrial countries (with the exception of the United States) and many developing countries provide medical services through a system of universal health care that aims to guarantee care for all through a single-payer health care system or compulsory private or cooperative health insurance. This is intended to ensure that the entire population has access to medical care on the basis of need rather than ability to pay. Delivery may be via private medical practices, state-owned hospitals and clinics, or charities, most commonly a combination of all three. Most tribal societies provide no guarantee of healthcare for the population as a whole. In such societies, healthcare is available to those who can afford to pay for it, have self-insured it (either directly or as part of an employment contract), or may be covered by care financed directly by the government or tribe. Transparency of information is another factor defining a delivery system. Access to information on conditions, treatments, quality, and pricing greatly affects the choice of patients/consumers and, therefore, the incentives of medical professionals. While the US healthcare system has come under fire for its lack of openness, new legislation may encourage greater openness. There is a perceived tension between the need for transparency on the one hand and such issues as patient confidentiality and the possible exploitation of information for commercial gain on the other. The health professionals who provide care in medicine comprise multiple professions, such as medics, nurses, physiotherapists, and psychologists. These professions have their own ethical standards, professional education, and professional bodies.
The medical profession has been conceptualized from a sociological perspective. === Delivery === Provision of medical care is classified into primary, secondary, and tertiary care categories. Primary care medical services are provided by physicians, physician assistants, nurse practitioners, or other health professionals who have first contact with a patient seeking medical treatment or care. These encounters occur in physician offices, clinics, nursing homes, schools, home visits, and other places close to patients. About 90% of medical visits can be treated by the primary care provider. These include treatment of acute and chronic illnesses, preventive care, and health education for all ages and both sexes. Secondary care medical services are provided by medical specialists in their offices or clinics, or at local community hospitals, for a patient referred by a primary care provider who first diagnosed or treated the patient. Referrals are made for those patients who require the expertise or procedures performed by specialists. These include both ambulatory care and inpatient services, emergency departments, intensive care medicine, surgery services, physical therapy, labor and delivery, endoscopy units, diagnostic laboratory and medical imaging services, hospice centers, etc. Some primary care providers may also take care of hospitalized patients and deliver babies in a secondary care setting. Tertiary care medical services are provided by specialist hospitals or regional centers equipped with diagnostic and treatment facilities not generally available at local hospitals. These include trauma centers, burn treatment centers, advanced neonatology unit services, organ transplants, high-risk pregnancy, radiation oncology, etc. Modern medical care also depends on information – still delivered in many health care settings on paper records, but increasingly nowadays by electronic means. In low-income countries, modern healthcare is often too expensive for the average person. International healthcare policy researchers have advocated that "user fees" be removed in these areas to ensure access, although even after removal, significant costs and barriers remain. Separation of prescribing and dispensing is a practice in medicine and pharmacy in which the physician who provides a medical prescription is independent from the pharmacist who provides the prescription drug. In the Western world there are centuries of tradition for separating pharmacists from physicians, whereas in Asian countries it is traditional for physicians to also provide drugs. == Branches == Working together as an interdisciplinary team, many highly trained health professionals besides medical practitioners are involved in the delivery of modern health care. Examples include nurses, emergency medical technicians and paramedics, laboratory scientists, pharmacists, podiatrists, physiotherapists, respiratory therapists, speech therapists, occupational therapists, radiographers, dietitians, bioengineers, medical physicists, surgeons, surgeon's assistants, and surgical technologists. The scope and sciences underpinning human medicine overlap many other fields. A patient admitted to the hospital is usually under the care of a specific team based on their main presenting problem, e.g., the cardiology team, who may then interact with other specialties, e.g., surgical or radiology, to help diagnose or treat the main problem or any subsequent complications or developments.
Physicians have many specializations and subspecializations in certain branches of medicine, which are listed below. There are variations from country to country regarding which specialties certain subspecialties are in. The main branches of medicine are:
Basic sciences of medicine; this is what every physician is educated in, and some return to in biomedical research
Interdisciplinary fields, where different medical specialties are mixed to function in certain occasions
Medical specialties
=== Basic sciences ===
Anatomy is the study of the physical structure of organisms. In contrast to macroscopic or gross anatomy, cytology and histology are concerned with microscopic structures.
Biochemistry is the study of the chemistry taking place in living organisms, especially the structure and function of their chemical components.
Biomechanics is the study of the structure and function of biological systems by means of the methods of mechanics.
Biophysics is an interdisciplinary science that uses the methods of physics and physical chemistry to study biological systems.
Biostatistics is the application of statistics to biological fields in the broadest sense. A knowledge of biostatistics is essential in the planning, evaluation, and interpretation of medical research. It is also fundamental to epidemiology and evidence-based medicine.
Cytology is the microscopic study of individual cells.
Embryology is the study of the early development of organisms.
Endocrinology is the study of hormones and their effect throughout the body of animals.
Epidemiology is the study of the demographics of disease processes, and includes, but is not limited to, the study of epidemics.
Genetics is the study of genes and their role in biological inheritance.
Gynecology is the study of the female reproductive system.
Histology is the study of the structures of biological tissues by light microscopy, electron microscopy, and immunohistochemistry.
Immunology is the study of the immune system, which includes the innate and adaptive immune system in humans, for example.
Lifestyle medicine is the study of chronic conditions and how to prevent, treat, and reverse them.
Medical physics is the study of the applications of physics principles in medicine.
Microbiology is the study of microorganisms, including protozoa, bacteria, fungi, and viruses.
Molecular biology is the study of the molecular underpinnings of the processes of replication, transcription, and translation of the genetic material.
Neuroscience includes those disciplines of science that are related to the study of the nervous system. A main focus of neuroscience is the biology and physiology of the human brain and spinal cord. Some related clinical specialties include neurology, neurosurgery, and psychiatry.
Nutrition science (theoretical focus) and dietetics (practical focus) is the study of the relationship of food and drink to health and disease, especially in determining an optimal diet. Medical nutrition therapy is done by dietitians and is prescribed for diabetes, cardiovascular diseases, weight and eating disorders, allergies, malnutrition, and neoplastic diseases.
Pathology as a science is the study of disease – the causes, course, progression, and resolution thereof.
Pharmacology is the study of drugs and their actions.
Photobiology is the study of the interactions between non-ionizing radiation and living organisms.
Physiology is the study of the normal functioning of the body and the underlying regulatory mechanisms.
Radiobiology is the study of the interactions between ionizing radiation and living organisms.
Toxicology is the study of the hazardous effects of drugs and poisons.
=== Specialties === In the broadest meaning of "medicine", there are many different specialties. In the UK, most specialities have their own body or college, which has its own entrance examination. These are collectively known as the Royal Colleges, although not all currently use the term "Royal". The development of a speciality is often driven by new technology (such as the development of effective anaesthetics) or ways of working (such as emergency departments); the new specialty leads to the formation of a unifying body of doctors and the prestige of administering their own examination. Within medical circles, specialities usually fit into one of two broad categories: "Medicine" and "Surgery". "Medicine" refers to the practice of non-operative medicine, and most of its subspecialties require preliminary training in internal medicine. In the UK, this was traditionally evidenced by passing the examination for Membership of the Royal College of Physicians (MRCP) or the equivalent college in Scotland or Ireland. "Surgery" refers to the practice of operative medicine, and most subspecialties in this area require preliminary training in general surgery, which in the UK leads to Membership of the Royal College of Surgeons of England (MRCS). At present, some specialties of medicine do not fit easily into either of these categories, such as radiology, pathology, or anesthesia. Most of these have branched from one or other of the two camps above; for example, anaesthesia developed first as a faculty of the Royal College of Surgeons (for which MRCS/FRCS would have been required) before becoming the Royal College of Anaesthetists, and membership of the college is attained by sitting the examination for the Fellowship of the Royal College of Anaesthetists (FRCA). ==== Surgical specialty ==== Surgery is an ancient medical specialty that uses operative manual and instrumental techniques on a patient to investigate or treat a pathological condition such as a disease or injury, to help improve bodily function or appearance, or to repair unwanted ruptured areas (for example, a perforated eardrum). Surgeons must also manage patients pre-operatively and post-operatively, as well as potential surgical candidates, on the hospital wards. In some centers, anesthesiology is part of the division of surgery (for historical and logistical reasons), although it is not a surgical discipline. Other medical specialties may employ surgical procedures, such as ophthalmology and dermatology, but are not considered surgical sub-specialties per se. Surgical training in the U.S. requires a minimum of five years of residency after medical school. Sub-specialties of surgery often require seven or more years. In addition, fellowships can last an additional one to three years. Because post-residency fellowships can be competitive, many trainees devote two additional years to research. Thus in some cases surgical training will not finish until more than a decade after medical school. Furthermore, surgical training can be very difficult and time-consuming. Surgical subspecialties include those a physician may specialize in after undergoing general surgery residency training, as well as several surgical fields with separate residency training.
Surgical subspecialties that one may pursue following general surgery residency training:
Bariatric surgery
Cardiovascular surgery – may also be pursued through a separate cardiovascular surgery residency track
Colorectal surgery
Endocrine surgery
General surgery
Hand surgery
Hepatico-pancreatico-biliary surgery
Minimally invasive surgery
Pediatric surgery
Plastic surgery – may also be pursued through a separate plastic surgery residency track
Surgical critical care
Surgical oncology
Transplant surgery
Trauma surgery
Vascular surgery – may also be pursued through a separate vascular surgery residency track
Other surgical specialties within medicine with their own individual residency training:
Dermatology
Neurosurgery
Ophthalmology
Oral and maxillofacial surgery
Orthopedic surgery
Otorhinolaryngology
Podiatric surgery – practitioners do not undergo medical school training, but rather separate training in podiatry school
Urology
==== Internal medicine specialty ==== Internal medicine is the medical specialty dealing with the prevention, diagnosis, and treatment of adult diseases. According to some sources, an emphasis on internal structures is implied. In North America, specialists in internal medicine are commonly called "internists". Elsewhere, especially in Commonwealth nations, such specialists are often called physicians. These terms, internist or physician (in the narrow sense, common outside North America), generally exclude practitioners of gynecology and obstetrics, pathology, psychiatry, and especially surgery and its subspecialities. Because their patients are often seriously ill or require complex investigations, internists do much of their work in hospitals. Formerly, many internists were not subspecialized; such general physicians would see any complex nonsurgical problem, but this style of practice has become much less common. In modern urban practice, most internists are subspecialists: that is, they generally limit their medical practice to problems of one organ system or to one particular area of medical knowledge. For example, gastroenterologists and nephrologists specialize respectively in diseases of the gut and the kidneys. In the Commonwealth of Nations and some other countries, specialist pediatricians and geriatricians are also described as specialist physicians (or internists) who have subspecialized by age of patient rather than by organ system. Elsewhere, especially in North America, general pediatrics is often a form of primary care. There are many subspecialities (or subdisciplines) of internal medicine. Training in internal medicine (as opposed to surgical training) varies considerably across the world: see the articles on medical education for more details. In North America, it requires at least three years of residency training after medical school, which can then be followed by a one- to three-year fellowship in the subspecialties listed above. In general, resident work hours in medicine are fewer than those in surgery, averaging about 60 hours per week in the US. This difference does not apply in the UK, where all doctors are now required by law to work fewer than 48 hours per week on average. ==== Diagnostic specialties ==== Clinical laboratory sciences are the clinical diagnostic services that apply laboratory techniques to the diagnosis and management of patients. In the United States, these services are supervised by a pathologist.
The personnel who work in these medical laboratory departments are technically trained staff who do not hold medical degrees but usually hold an undergraduate medical technology degree; they actually perform the tests, assays, and procedures needed to provide the specific services. Subspecialties include transfusion medicine, cellular pathology, clinical chemistry, hematology, clinical microbiology, and clinical immunology. Clinical neurophysiology is concerned with testing the physiology or function of the central and peripheral aspects of the nervous system. These kinds of tests can be divided into recordings of (1) spontaneous or continuously running electrical activity, or (2) stimulus-evoked responses. Subspecialties include electroencephalography, electromyography, evoked potentials, nerve conduction studies, and polysomnography. Sometimes these tests are performed by technicians without a medical degree, but the interpretation of the tests is done by a medical professional. Diagnostic radiology is concerned with imaging of the body, e.g. by x-rays, x-ray computed tomography, ultrasonography, and nuclear magnetic resonance tomography. Interventional radiologists can access areas in the body under imaging for an intervention or diagnostic sampling. Nuclear medicine is concerned with studying human organ systems by administering radiolabelled substances (radiopharmaceuticals) to the body, which can then be imaged outside the body by a gamma camera or a PET scanner. Each radiopharmaceutical consists of two parts: a tracer that is specific for the function under study (e.g., a neurotransmitter pathway, metabolic pathway, blood flow, or other function) and a radionuclide (usually either a gamma emitter or a positron emitter). There is a degree of overlap between nuclear medicine and radiology, as evidenced by the emergence of combined devices such as the PET/CT scanner. Pathology as a medical specialty is the branch of medicine that deals with the study of diseases and the morphologic and physiologic changes produced by them. As a diagnostic specialty, pathology can be considered the basis of modern scientific medical knowledge and plays a large role in evidence-based medicine. Many modern molecular tests, such as flow cytometry, polymerase chain reaction (PCR), immunohistochemistry, cytogenetics, gene rearrangement studies, and fluorescence in situ hybridization (FISH), fall within the territory of pathology. ==== Other major specialties ==== The following are some major medical specialties that do not directly fit into any of the above-mentioned groups:
Anesthesiology (also known as anaesthetics): concerned with the perioperative management of the surgical patient. The anesthesiologist's role during surgery is to prevent derangement of the vital organs' functions (i.e., brain, heart, kidneys) and to manage postoperative pain. Outside of the operating room, the anesthesiology physician also serves the same function in the labor and delivery ward, and some are specialized in critical medicine.
Emergency medicine is concerned with the diagnosis and treatment of acute or life-threatening conditions, including trauma, surgical, medical, pediatric, and psychiatric emergencies.
Family medicine, family practice, general practice, or primary care is, in many countries, the first port of call for patients with non-emergency medical problems. Family physicians often provide services across a broad range of settings, including office-based practices, emergency department coverage, inpatient care, and nursing home care.
Medical genetics is concerned with the diagnosis and management of hereditary disorders.
Neurology is concerned with diseases of the nervous system. In the UK, neurology is a subspecialty of general medicine.
Obstetrics and gynecology (often abbreviated as OB/GYN in American English or Obs & Gynae in British English) are concerned respectively with childbirth and the female reproductive and associated organs. Reproductive medicine and fertility medicine are generally practiced by gynecological specialists.
Pediatrics (AE) or paediatrics (BE) is devoted to the care of infants, children, and adolescents. Like internal medicine, there are many pediatric subspecialties for specific age ranges, organ systems, disease classes, and sites of care delivery.
Pharmaceutical medicine is the medical scientific discipline concerned with the discovery, development, evaluation, registration, monitoring, and medical aspects of marketing of medicines for the benefit of patients and public health.
Physical medicine and rehabilitation (or physiatry) is concerned with functional improvement after injury, illness, or congenital disorders.
Podiatric medicine is the study of, diagnosis of, and medical and surgical treatment of disorders of the foot, ankle, lower limb, hip, and lower back.
Preventive medicine is the branch of medicine concerned with preventing disease.
Community health or public health is an aspect of health services concerned with threats to the overall health of a community, based on population health analysis.
Psychiatry is the branch of medicine concerned with the bio-psycho-social study of the etiology, diagnosis, treatment, and prevention of cognitive, perceptual, emotional, and behavioral disorders. Related fields include psychotherapy and clinical psychology.
=== Interdisciplinary fields === Some interdisciplinary sub-specialties of medicine include:
Addiction medicine deals with the treatment of addiction.
Aerospace medicine deals with medical problems related to flying and space travel.
Biomedical engineering is a field dealing with the application of engineering principles to medical practice.
Clinical pharmacology is concerned with how systems of therapeutics interact with patients.
Conservation medicine studies the relationship between human and non-human animal health and environmental conditions. It is also known as ecological medicine, environmental medicine, or medical geology.
Disaster medicine deals with medical aspects of emergency preparedness, disaster mitigation, and management.
Diving medicine (or hyperbaric medicine) is the prevention and treatment of diving-related problems.
Evolutionary medicine is a perspective on medicine derived through applying evolutionary theory.
Forensic medicine deals with medical questions in a legal context, such as determination of the time and cause of death, the type of weapon used to inflict trauma, and reconstruction of facial features using the remains of the deceased (skull), thus aiding identification.
Gender-based medicine studies the biological and physiological differences between the human sexes and how they affect differences in disease.
Health informatics is a relatively recent field that deals with the application of computers and information technology to medicine.
Hospice and palliative medicine is a relatively modern branch of clinical medicine that deals with pain and symptom relief and emotional support in patients with terminal illnesses, including cancer and heart failure.
Hospital medicine is the general medical care of hospitalized patients.
Physicians whose primary professional focus is hospital medicine are called hospitalists in the United States and Canada. The terms Most Responsible Physician (MRP) and attending physician are also used interchangeably to describe this role.
Laser medicine involves the use of lasers in the diagnostics or treatment of various conditions.
Many other health science fields, e.g. dietetics.
Medical ethics deals with ethical and moral principles that apply values and judgments to the practice of medicine.
Medical humanities includes the humanities (literature, philosophy, ethics, history, and religion), social science (anthropology, cultural studies, psychology, sociology), and the arts (literature, theater, film, and visual arts) and their application to medical education and practice.
Nosokinetics is the science/subject of measuring and modelling the process of care in health and social care systems.
Nosology is the classification of diseases for various purposes.
Occupational medicine is the provision of health advice to organizations and individuals to ensure that the highest standards of health and safety at work can be achieved and maintained.
Pain management (also called pain medicine, or algiatry) is the medical discipline concerned with the relief of pain.
Pharmacogenomics is a form of individualized medicine.
Podiatric medicine is the study of, diagnosis of, and medical treatment of disorders of the foot, ankle, lower limb, hip, and lower back.
Sexual medicine is concerned with diagnosing, assessing, and treating all disorders related to sexuality.
Sports medicine deals with the treatment, prevention, and rehabilitation of sports and exercise injuries, such as muscle spasms, muscle tears, and injuries to ligaments (ligament tears or ruptures), and their repair in athletes, both amateur and professional.
Therapeutics is the field, more commonly referenced in earlier periods of history, of the various remedies that can be used to treat disease and promote health.
Travel medicine or emporiatrics deals with health problems of international travelers or travelers across highly different environments.
Tropical medicine deals with the prevention and treatment of tropical diseases. It is studied separately in temperate climates, where those diseases are quite unfamiliar to medical practitioners and their local clinical needs.
Urgent care focuses on the delivery of unscheduled, walk-in care outside of the hospital emergency department for injuries and illnesses that are not severe enough to require care in an emergency department. In some jurisdictions this function is combined with the emergency department.
Veterinary medicine; veterinarians apply similar techniques as physicians to the care of non-human animals.
Wilderness medicine entails the practice of medicine in the wild, where conventional medical facilities may not be available.
== Education and legal controls == Medical education and training vary around the world. Education typically involves entry-level education at a university medical school, followed by a period of supervised practice (internship or residency), which can be followed by postgraduate vocational training. A variety of teaching methods have been employed in medical education, which is still itself a focus of active research. In Canada and the United States of America, a Doctor of Medicine degree, often abbreviated M.D., or a Doctor of Osteopathic Medicine degree, often abbreviated as D.O. and unique to the United States, must be completed in and delivered from a recognized university.
Since knowledge, techniques, and medical technology continue to evolve at a rapid rate, many regulatory authorities require continuing medical education. Medical practitioners upgrade their knowledge in various ways, including medical journals, seminars, conferences, and online programs. A database of objectives covering medical knowledge, as suggested by national societies across the United States, can be searched at http://data.medobjectives.marian.edu/ (archived 4 October 2018 at the Wayback Machine). In most countries, it is a legal requirement for a medical doctor to be licensed or registered. In general, this entails a medical degree from a university and accreditation by a medical board or an equivalent national organization, which may ask the applicant to pass exams. This restricts the considerable legal authority of the medical profession to physicians who are trained and qualified by national standards. It is also intended as an assurance to patients and as a safeguard against charlatans who practice inadequate medicine for personal gain. While the laws generally require medical doctors to be trained in "evidence-based", Western, or Hippocratic medicine, they are not intended to discourage different paradigms of health. In the European Union, the profession of doctor of medicine is regulated. A profession is said to be regulated when access and exercise are subject to the possession of a specific professional qualification. The regulated professions database contains a list of regulated professions for doctor of medicine in the EU member states, EEA countries, and Switzerland. This list is covered by Directive 2005/36/EC. Doctors who are negligent or intentionally harmful in their care of patients can face charges of medical malpractice and be subject to civil, criminal, or professional sanctions. == Medical ethics == Medical ethics is a system of moral principles that apply values and judgments to the practice of medicine. As a scholarly discipline, medical ethics encompasses its practical application in clinical settings as well as work on its history, philosophy, theology, and sociology. Six of the values that commonly apply to medical ethics discussions are:
autonomy – the patient has the right to refuse or choose their treatment (Latin: Voluntas aegroti suprema lex)
beneficence – a practitioner should act in the best interest of the patient (Latin: Salus aegroti suprema lex)
justice – concerns the distribution of scarce health resources and the decision of who gets what treatment (fairness and equality)
non-maleficence – "first, do no harm" (Latin: primum non nocere)
respect for persons – the patient (and the person treating the patient) have the right to be treated with dignity
truthfulness and honesty – the concept of informed consent has increased in importance since the historical events of the Doctors' Trial of the Nuremberg trials, the Tuskegee syphilis experiment, and others
Values such as these do not give answers as to how to handle a particular situation, but provide a useful framework for understanding conflicts. When moral values are in conflict, the result may be an ethical dilemma or crisis. Sometimes no good solution to a dilemma in medical ethics exists, and occasionally the values of the medical community (i.e., the hospital and its staff) conflict with the values of the individual patient, family, or larger non-medical community. Conflicts can also arise between health care providers, or among family members.
For example, some argue that the principles of autonomy and beneficence clash when patients refuse blood transfusions that physicians consider life-saving; and truth-telling was not emphasized to a large extent before the HIV era. == History == === Ancient world === Prehistoric medicine incorporated plants (herbalism), animal parts, and minerals. In many cases these materials were used ritually as magical substances by priests, shamans, or medicine men. Well-known spiritual systems include animism (the notion of inanimate objects having spirits), spiritualism (an appeal to gods or communion with ancestor spirits), shamanism (the vesting of an individual with mystic powers), and divination (magically obtaining the truth). The field of medical anthropology examines the ways in which culture and society are organized around or impacted by issues of health, health care, and related issues. The earliest known medical texts in the world were found in the ancient Syrian city of Ebla and date back to 2500 BCE. Other early records on medicine have been discovered from ancient Egyptian medicine, Babylonian medicine, Ayurvedic medicine (in the Indian subcontinent), classical Chinese medicine (the predecessor of modern traditional Chinese medicine), and ancient Greek and Roman medicine. In Egypt, Imhotep (3rd millennium BCE) is the first physician in history known by name. The oldest Egyptian medical text is the Kahun Gynaecological Papyrus from around 2000 BCE, which describes gynaecological diseases. The Edwin Smith Papyrus, dating back to 1600 BCE, is an early work on surgery, while the Ebers Papyrus, dating back to 1500 BCE, is akin to a textbook on medicine. In China, archaeological evidence of medicine dates back to the Bronze Age Shang dynasty, based on seeds for herbalism and tools presumed to have been used for surgery. The Huangdi Neijing, the progenitor of Chinese medicine, is a medical text written beginning in the 2nd century BCE and compiled in the 3rd century. In India, the surgeon Sushruta described numerous surgical operations, including the earliest forms of plastic surgery. The earliest records of dedicated hospitals come from Mihintale in Sri Lanka, where evidence of dedicated medicinal treatment facilities for patients is found. In Greece, the ancient Greek physician Hippocrates, the "father of modern medicine", laid the foundation for a rational approach to medicine. Hippocrates introduced the Hippocratic Oath for physicians, which is still relevant and in use today, and was the first to categorize illnesses as acute, chronic, endemic, and epidemic, and to use terms such as "exacerbation, relapse, resolution, crisis, paroxysm, peak, and convalescence". The Greek physician Galen was also one of the greatest surgeons of the ancient world and performed many audacious operations, including brain and eye surgeries. After the fall of the Western Roman Empire and the onset of the Early Middle Ages, the Greek tradition of medicine went into decline in Western Europe, although it continued uninterrupted in the Eastern Roman (Byzantine) Empire. Most of our knowledge of ancient Hebrew medicine during the 1st millennium BC comes from the Torah, i.e. the Five Books of Moses, which contain various health-related laws and rituals. The Hebrew contribution to the development of modern medicine started in the Byzantine era, with the physician Asaph the Jew.
=== Middle Ages === The concept of the hospital as an institution offering medical care and the possibility of a cure for patients, due to the ideals of Christian charity rather than being merely a place to die, appeared in the Byzantine Empire. Although the concept of uroscopy was known to Galen, he did not see the importance of using it to localize disease. It was under the Byzantines, with physicians such as Theophilus Protospatharius, that the potential of uroscopy to determine disease was realized, in a time when no microscope or stethoscope existed. That practice eventually spread to the rest of Europe. After 750 CE, the Muslim world had the works of Hippocrates, Galen, and Sushruta translated into Arabic, and Islamic physicians engaged in some significant medical research. Notable Islamic medical pioneers include the Persian polymath Avicenna, who, along with Imhotep and Hippocrates, has also been called the "father of medicine". He wrote The Canon of Medicine, which became a standard medical text at many medieval European universities and is considered one of the most famous books in the history of medicine. Others include Abulcasis, Avenzoar, Ibn al-Nafis, and Averroes. The Persian physician Rhazes was one of the first to question the Greek theory of humorism, which nevertheless remained influential in both medieval Western and medieval Islamic medicine. Some volumes of Rhazes's work Al-Mansuri, namely "On Surgery" and "A General Book on Therapy", became part of the medical curriculum in European universities. Additionally, he has been described as a doctor's doctor, the father of pediatrics, and a pioneer of ophthalmology; for example, he was the first to recognize the reaction of the eye's pupil to light. The Persian Bimaristan hospitals were an early example of public hospitals. In Europe, Charlemagne decreed that a hospital should be attached to each cathedral and monastery, and the historian Geoffrey Blainey likened the activities of the Catholic Church in health care during the Middle Ages to an early version of a welfare state: "It conducted hospitals for the old and orphanages for the young; hospices for the sick of all ages; places for the lepers; and hostels or inns where pilgrims could buy a cheap bed and meal". It supplied food to the population during famine and distributed food to the poor. The church funded this welfare system by collecting taxes on a large scale and by possessing large farmlands and estates. The Benedictine order was noted for setting up hospitals and infirmaries in its monasteries, growing medical herbs, and becoming the chief medical care giver of its districts, as at the great Abbey of Cluny. The Church also established a network of cathedral schools and universities where medicine was studied. The Schola Medica Salernitana in Salerno, looking to the learning of Greek and Arab physicians, grew to be the finest medical school in medieval Europe. However, the fourteenth- and fifteenth-century Black Death devastated both the Middle East and Europe, and it has even been argued that Western Europe was generally more effective in recovering from the pandemic than the Middle East. In the early modern period, important early figures in medicine and anatomy emerged in Europe, including Gabriele Falloppio and William Harvey. The major shift in medical thinking was the gradual rejection, especially during the Black Death in the 14th and 15th centuries, of what may be called the "traditional authority" approach to science and medicine.
This was the notion that because some prominent person in the past said something must be so, then that was the way it was, and anything one observed to the contrary was an anomaly (which was paralleled by a similar shift in European society in general – see Copernicus's rejection of Ptolemy's theories on astronomy). Physicians like Vesalius improved upon or disproved some of the theories from the past. The main tomes used both by medical students and expert physicians were Materia Medica and Pharmacopoeia. Andreas Vesalius was the author of De humani corporis fabrica, an important book on human anatomy. Bacteria and microorganisms were first observed with a microscope by Antonie van Leeuwenhoek in 1676, initiating the scientific field of microbiology. Independently from Ibn al-Nafis, Michael Servetus rediscovered the pulmonary circulation, but this discovery did not reach the public because it was written down for the first time in the "Manuscript of Paris" in 1546, and later published in the theological work for which he paid with his life in 1553. The pulmonary circulation was later described by Renaldus Columbus and Andrea Cesalpino. Herman Boerhaave is sometimes referred to as a "father of physiology" due to his exemplary teaching in Leiden and his textbook Institutiones medicae (1708). Pierre Fauchard has been called "the father of modern dentistry". === Modern === Veterinary medicine was, for the first time, truly separated from human medicine in 1761, when the French veterinarian Claude Bourgelat founded the world's first veterinary school in Lyon, France. Before this, medical doctors treated both humans and other animals. Modern scientific biomedical research (where results are testable and reproducible) began to replace early Western traditions based on herbalism, the Greek "four humours" and other such pre-modern notions. The modern era really began with Edward Jenner's discovery of the smallpox vaccine at the end of the 18th century (inspired by the method of variolation originated in ancient China), Robert Koch's discoveries around 1880 of the transmission of disease by bacteria, and then the discovery of antibiotics around 1900. The post-18th century modernity period brought more groundbreaking researchers from Europe. From Germany and Austria, doctors Rudolf Virchow, Wilhelm Conrad Röntgen, Karl Landsteiner and Otto Loewi made notable contributions. In the United Kingdom, Alexander Fleming, Joseph Lister, Francis Crick and Florence Nightingale are considered important. The Spanish doctor Santiago Ramón y Cajal is considered the father of modern neuroscience. From New Zealand and Australia came Maurice Wilkins, Howard Florey, and Frank Macfarlane Burnet. Others who did significant work include William Williams Keen, William Coley, James D. Watson (United States); Salvador Luria (Italy); Alexandre Yersin (Switzerland); Kitasato Shibasaburō (Japan); Jean-Martin Charcot, Claude Bernard, Paul Broca (France); Adolfo Lutz (Brazil); Nikolai Korotkov (Russia); Sir William Osler (Canada); and Harvey Cushing (United States). As science and technology developed, medicine became more reliant upon medications. Throughout history and in Europe right until the late 18th century, not only were plant products used as medicine, but also animal (including human) body parts and fluids. Pharmacology developed in part from herbalism, and some drugs are still derived from plants (atropine, ephedrine, warfarin, aspirin, digoxin, vinca alkaloids, taxol, hyoscine, etc.).
Vaccines were discovered by Edward Jenner and Louis Pasteur. The first antibiotic was arsphenamine (Salvarsan), discovered by Paul Ehrlich in 1908 after he observed that bacteria took up toxic dyes that human cells did not. The first major class of antibiotics was the sulfa drugs, derived by German chemists originally from azo dyes. Pharmacology has become increasingly sophisticated; modern biotechnology allows drugs targeted towards specific physiological processes to be developed, sometimes designed for compatibility with the body to reduce side-effects. Genomics and knowledge of human genetics and human evolution are having an increasingly significant influence on medicine, as the causative genes of most monogenic genetic disorders have now been identified, and the development of techniques in molecular biology, evolution, and genetics is influencing medical technology, practice and decision-making. Evidence-based medicine is a contemporary movement to establish the most effective algorithms of practice (ways of doing things) through the use of systematic reviews and meta-analysis. The movement is facilitated by modern global information science, which allows as much of the available evidence as possible to be collected and analyzed according to standard protocols that are then disseminated to healthcare providers. The Cochrane Collaboration leads this movement. A 2001 review of 160 Cochrane systematic reviews revealed that, according to two readers, 21.3% of the reviews concluded insufficient evidence, 20% concluded evidence of no effect, and 22.5% concluded positive effect. == Quality, efficiency, and access == Evidence-based medicine, prevention of medical error (and other "iatrogenesis"), and avoidance of unnecessary health care are a priority in modern medical systems. These topics generate significant political and public policy attention, particularly in the United States, where healthcare is regarded as excessively costly but where population health metrics lag behind those of similar nations. Globally, many developing countries lack access to care and to medicines. As of 2015, most wealthy developed countries provide health care to all citizens, with a few exceptions such as the United States, where lack of health insurance coverage may limit access. == See also == == Notes == == References ==
Wikipedia/Medical_science
Applied science is the application of the scientific method and scientific knowledge to attain practical goals. It includes a broad range of disciplines, such as engineering and medicine. Applied science is often contrasted with basic science, which is focused on advancing scientific theories and laws that explain and predict natural or other phenomena. There are applied natural sciences, as well as applied formal and social sciences. Applied science examples include genetic epidemiology, which applies statistics and probability theory, and applied psychology, including criminology. == Applied research == Applied research is the use of empirical methods to collect data for practical purposes. It accesses and uses accumulated theories, knowledge, methods, and techniques for a specific state, business, or client-driven purpose. In contrast to engineering, applied research does not include analyses or optimization of business, economics, and costs. Applied research can be better understood in any area when contrasting it with basic or pure research. Basic geographical research strives to create new theories and methods that aid in explaining the processes that shape the spatial structure of physical or human environments. By contrast, applied research utilizes existing geographical theories and methods to comprehend and address particular empirical issues. Applied research usually has specific commercial objectives related to products, procedures, or services. The comparison of pure research and applied research provides a basic framework and direction for businesses to follow. Applied research deals with solving practical problems and generally employs empirical methodologies. Because applied research resides in the messy real world, strict research protocols may need to be relaxed. For example, it may be impossible to use a random sample. Thus, transparency in the methodology is crucial. Implications for the interpretation of results brought about by relaxing an otherwise strict canon of methodology should also be considered. Moreover, this type of research method applies natural sciences to human conditions: Action research: aids firms in identifying workable solutions to issues influencing them. Evaluation research: researchers examine available data to assist clients in making wise judgments. Industrial research: creates new goods/services that will satisfy the demands of a target market. (Industrial development would be scaling up production of the new goods/services for mass consumption to satisfy the economic demand of the customers while maximizing the ratio of the good/service output rate to resource input rate, the ratio of good/service revenue to material & energy costs, and the good/service quality. Industrial development would be considered engineering. Industrial development would fall outside the scope of applied research.) Gauging research: a type of evaluation research that uses a logic of rating to assess a process or program. It is a type of normative assessment and is used in accreditation, hiring decisions and process evaluation. It uses standards or the practical ideal type and is associated with deductive qualitative research. Since applied research has a provisional close-to-the-problem and close-to-the-data orientation, it may also use a more provisional conceptual framework, such as working hypotheses or pillar questions. The OECD's Frascati Manual describes applied research as one of the three forms of research, along with basic research and experimental development.
Due to its practical focus, applied research information will be found in the literature associated with individual disciplines. == Branches == Applied research is a method of problem-solving and is also practical in areas of science, such as its presence in applied psychology. Applied psychology draws on the study of human behavior to gather information and identify a main focus in an area that can contribute to finding a resolution. More specifically, this study is applied in the area of criminal psychology. With the knowledge obtained from applied research, studies of criminals and their behavior are conducted in order to apprehend them. Moreover, the research extends to criminal investigations. Under this category, research methods demonstrate an understanding of the scientific method and social research designs used in criminological research. These extend into further branches along the procedure of the investigations, alongside law, policy, and criminological theory. Engineering is the practice of using natural science, mathematics, and the engineering design process to solve technical problems, increase efficiency and productivity, and improve systems. The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application. Engineering is often characterized as having four main branches: chemical engineering, civil engineering, electrical engineering, and mechanical engineering. Some scientific subfields used by engineers include thermodynamics, heat transfer, fluid mechanics, statics, dynamics, mechanics of materials, kinematics, electromagnetism, materials science, earth sciences, and engineering physics. Medical sciences, such as medical microbiology, pharmaceutical research, and clinical virology, are applied sciences that apply biology and chemistry to medicine. == In education == In Canada, the Netherlands, and other places, the Bachelor of Applied Science (BASc) is sometimes equivalent to the Bachelor of Engineering and is classified as a professional degree. This is based on the age of the school, where applied science used to include boiler making, surveying, and engineering. There are also Bachelor of Applied Science degrees in Child Studies. The BASc tends to focus more on the application of the engineering sciences. In Australia and New Zealand, this degree is awarded in various fields of study and is considered a highly specialized professional degree. In the United Kingdom's educational system, Applied Science refers to a suite of "vocational" science qualifications that run alongside "traditional" General Certificate of Secondary Education or A-Level Sciences. Applied Science courses generally contain more coursework (also known as portfolio or internally assessed work) compared to their traditional counterparts. These are an evolution of the GNVQ qualifications offered up to 2005. These courses regularly come under scrutiny and are due for review following the Wolf Report 2011; however, their merits are argued elsewhere. In the United States, The College of William & Mary offers an undergraduate minor as well as Master of Science and Doctor of Philosophy degrees in "applied science". Courses and research cover varied fields, including neuroscience, optics, materials science and engineering, nondestructive testing, and nuclear magnetic resonance.
University of Nebraska–Lincoln offers a Bachelor of Science in applied science, an online completion Bachelor of Science in applied science, and a Master of Applied Science. Coursework is centered on science, agriculture, and natural resources with a wide range of options, including ecology, food genetics, entrepreneurship, economics, policy, animal science, and plant science. In New York City, the Bloomberg administration awarded the Cornell–Technion consortium $100 million in City capital to construct the universities' proposed Applied Sciences campus on Roosevelt Island. == See also == Applied mathematics Basic research Exact sciences Hard and soft science Invention Secondary research == References == == External links == Media related to Applied sciences at Wikimedia Commons
Wikipedia/Practical_science
Proportional myoelectric control can be used to (among other purposes) activate robotic lower limb exoskeletons. A proportional myoelectric control system utilizes a microcontroller or computer that inputs electromyography (EMG) signals from sensors on the leg muscle(s) and then activates the corresponding joint actuator(s) proportionally to the EMG signal. == Background == A robotic exoskeleton is a type of orthosis that uses actuators to either assist or resist the movement of a joint of an intact limb; this is not to be confused with a powered prosthesis, which replaces a missing limb. There are four purposes that robotic lower limb exoskeletons can accomplish: Enhancement of human performance, which typically deals with increasing strength or endurance (see Powered exoskeletons) Long-term assistance, which aims to provide impaired individuals with the ability to walk by themselves while wearing an exoskeleton Study of human locomotion, which utilizes robotic exoskeletons to better understand human neuromuscular control, energetics, and/or kinematics of locomotion Post-injury rehabilitation, which is intended to help an individual recover from an injury (such as a stroke, spinal cord injury, or other neurological disabilities) by wearing an exoskeleton for a short time during training in order to perform better later without the use of the exoskeleton Robotic lower-limb exoskeletons can be controlled by several methods, including a footswitch (a pressure sensor attached to the bottom of the foot), gait-phase estimation (using joint angles to determine the current phase of walking), and myoelectric control (using electromyography). This article focuses on myoelectric control. == Control methods == Sensors on the skin detect electromyography (EMG) signals from the muscles of the wearer's leg(s). EMG signals can be measured from just one muscle or many, depending on the type of exoskeleton and how many joints are actuated. Each signal measured is then sent to a controller, which is either an onboard microcontroller (mounted to the exoskeleton) or a nearby computer. Onboard microcontrollers are used for long-term assistive devices since the wearer must be able to walk in different locations while wearing the exoskeleton, whereas computers not carried by the exoskeleton can be used for therapeutic or research purposes since the wearer does not have to walk very far in a clinical or lab environment. The controller filters out noise from the EMG signals and then normalizes them so as to better analyze the muscle activation pattern. The normalized EMG value of a muscle represents its activation percentage, since the EMG signal is normalized by dividing it by the maximum possible EMG reading for the muscle it came from. The maximum EMG reading is generated when a muscle is fully contracted. An alternative method to normalization is to proportionally match the actuator power to the EMG signal between a minimum activation threshold and an upper saturation level. === Direct proportional myoelectric control === With a proportional myoelectric controller, the power sent to an actuator is proportional to the amplitude of the normalized EMG signal from a muscle. When the muscle is inactive, the actuator receives no power from the controller, and when the muscle is fully contracted, the actuator produces maximum torque about the joint it controls.
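The pipeline just described — rectify and smooth the raw EMG, normalize it against a maximum reading (or map it between a minimum threshold and a saturation level), then command the actuator proportionally — can be sketched in a few lines of code. The following Python snippet is a minimal illustration only, not the controller of any particular exoskeleton; the sampling rate, smoothing window, threshold, and saturation values are hypothetical placeholders that would in practice be tuned per muscle and per wearer.

```python
import numpy as np

def emg_envelope(raw_emg, fs=1000.0, window_ms=100.0):
    """Rectify the raw EMG and smooth it with a moving average to
    approximate the muscle activation envelope (illustrative constants)."""
    rectified = np.abs(raw_emg)
    n = max(1, int(fs * window_ms / 1000.0))
    return np.convolve(rectified, np.ones(n) / n, mode="same")

def proportional_command(envelope, mvc, threshold=0.05, saturation=0.90):
    """Map normalized muscle activation to an actuator command in [0, 1].

    mvc: envelope amplitude at maximum voluntary contraction, used to
         normalize the signal into an activation percentage.
    threshold/saturation: the alternative mapping described above, where
         the command ramps linearly between a minimum activation
         threshold and an upper saturation level.
    """
    activation = envelope / mvc
    command = (activation - threshold) / (saturation - threshold)
    # Below the threshold the actuator receives no power; at or above
    # the saturation level it produces its maximum torque.
    return np.clip(command, 0.0, 1.0)

# Hypothetical usage: soleus EMG driving a plantar-flexion actuator.
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
raw = 0.2 * np.random.randn(t.size) * (t > 0.5)   # fake burst of activity
command = proportional_command(emg_envelope(raw, fs), mvc=0.15)
# 'command' would scale the power sent to the joint actuator sample by sample.
```

A flexor-inhibition variant (described below) would add a single rule on top of this mapping: zero the dorsiflexor command whenever the antagonist muscle's activation exceeds a set threshold.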
For example, a powered ankle-foot orthosis (AFO) could employ a pneumatic artificial muscle to provide plantar flexion torque proportional to the activation level of the soleus (one of the calf muscles). This control method enables the exoskeleton to be controlled by the same neural pathways as the wearer's biological muscles and has been shown to allow individuals to walk with a more normal gait than other control methods, such as using a footswitch. Proportional myoelectric control of robotic lower limb exoskeletons has advantages over other control methods, such as: Its physiological nature allows for an effective way to scale the magnitude of mechanical assistance from the exoskeleton It results in reduced biological muscle recruitment versus kinematic-based control methods It allows easy adaptation of the exoskeleton control for new motor tasks However, proportional myoelectric control also has disadvantages compared to other control methods, including: The surface electrode interface can often cause difficulties in obtaining a reliable EMG signal The system requires tuning to determine the appropriate thresholds and gains The musculoskeletal system has many synergistic muscles that are not easily accessible via surface EMG electrodes Since neurological disorders result in decreased neuromuscular control, some individuals may not have sufficient neural control to allow them to use an exoskeleton with myoelectric control === Proportional myoelectric control with flexor inhibition === Direct proportional control works well when each joint of the exoskeleton is actuated in one direction (uni-directional actuation), such as a pneumatic piston only bending the knee, but is less effective when two joint actuators work in opposition (bi-directional actuation). An example of this would be an ankle exoskeleton using one pneumatic artificial muscle for dorsiflexion based on tibialis anterior (shin muscle) EMG and another pneumatic artificial muscle for plantar flexion based on soleus (calf muscle) EMG. This could result in a large degree of co-activation of the two actuators and make walking more difficult. To correct for this unwanted co-activation, a rule can be added to the control scheme so that artificial dorsiflexor activation is inhibited when soleus EMG is above a set threshold. Proportional control with flexor inhibition allows for a more natural gait than with direct proportional control; flexor inhibition also allows subjects to walk much more easily with combined knee and ankle exoskeletons with bi-directional actuators at each joint. == Applications == === Performance enhancement === Performance enhancement deals with increasing typical human capabilities, such as strength or endurance. Many full-body robotic exoskeletons currently in development use controllers based on joint torques and angles instead of electromyography. See Powered exoskeletons. === Long-term assistance === One application of a robotic lower limb exoskeleton is to assist the movement of a disabled individual so that they can walk. Individuals with spinal cord injury, weakened leg muscles, poor neuromuscular control, or who have suffered a stroke could benefit from wearing such a device. The exoskeleton provides torque about a joint in the same direction that EMG data indicate the joint is rotating.
For example, high EMG signals in the vastus medialis (a quadriceps muscle) and low EMG signals in the biceps femoris (a hamstring muscle) would indicate that the user is extending his/her leg, therefore the exoskeleton would provide torque about the knee to help straighten the leg. === Study of human locomotion === Proportional myoelectric control and robotic exoskeletons have been used in upper limb devices for decades, but engineers have only recently begun using them for lower-limb devices to better understand human biomechanics and neural control of locomotion. By using an exoskeleton with a proportional myoelectric controller, scientists can use a non-invasive means of studying the neural plasticity associated with modifying a muscle's force (biological +/- artificial force), as well as how motor memories for locomotor control are formed. === Rehabilitation === Robotic lower limb exoskeletons have the potential to help an individual recover from an injury such as a stroke, spinal cord injury, or other neurological disabilities. Neurological motor disorders often result in reduced volitional muscle activation amplitude, impaired proprioception, and disordered muscle coordination; a robotic exoskeleton with proportional myoelectric control can improve all three of these by amplifying the relationship between muscle activation and proprioceptive feedback. By increasing the consequences of muscle activation, an exoskeleton can improve sensory feedback in a physiological way, which in turn can improve motor control. Individuals with spinal cord injury or who have had a stroke can improve their motor capabilities through intense gait rehabilitation, which can require up to three physical therapists to help partially support the body weight of the individual. Robotic lower limb exoskeletons could help in both of these areas. == Physiological response == The neuromuscular system has target joint torques that it tries to generate while walking. Assistive exoskeletons produce some of the torque needed to move one or more leg joints while walking, which allows a healthy individual to generate less muscle torque in those joints and use less metabolic energy. The muscle torque is reduced enough to keep the net torque about each joint approximately the same as when walking without an exoskeleton. The net torque about each joint is the muscular torque plus the actuator torque. Disabled individuals do not see much of a decrease, if any, in muscular torque while walking with an exoskeleton because their muscles are not strong enough to walk with a normal gait, or at all; the exoskeleton provides the remaining torque needed for them to walk. == Examples == Vanderbilt exoskeleton ReWalk HAL 5 Ekso Bionics == See also == Orthotics Neural control of limb stiffness Powered exoskeleton Pneumatic Artificial Muscles == References ==
Wikipedia/Proportional_myoelectric_control
Cardiophysics is an interdisciplinary science that stands at the junction of cardiology and medical physics, with researchers using the methods of, and theories from, physics to study the cardiovascular system at different levels of its organisation, from the molecular scale to whole organisms. Having formed historically as part of systems biology, cardiophysics is designed to reveal connections between the physical mechanisms underlying the organization of the cardiovascular system and the biological features of its functioning. Zbigniew R. Struzik appears to have been the first author to use the term in a scientific publication, in 2004. The term cardiovascular physics is also used interchangeably. == See also == Medical physics Important publications in medical physics Biomedicine Biomedical engineering Physiome Nanomedicine == References == Books Kohl, Peter; Sachs, Frederick; Franz, Michael R. (2011). Cardiac Mechano-Electric Coupling and Arrhythmias. ISBN 978-0-19-957016-4. Zhuchkova, E.; Radnayev, B.; Vysotsky, S.; Loskutov, A. (2009). "Suppression of turbulent dynamics in models of cardiac tissue by weak local excitations". In S.K. Dana; P.K. Roy; J. Kurths (eds.). Understanding Complex Systems. Berlin: Springer. pp. 89–105. Zbigniew R. Struzik (2004). "Econophysics vs Cardiophysics: the Dual Face of Multifractality". In Hideki Takayasu (ed.). The Application of Econophysics. Japan: Springer. pp. 210–215. doi:10.1007/978-4-431-53947-6_29. ISBN 978-4-431-67961-5. Papers Crampin E. J.; Halstead M.; Hunter P.; Nielsen P.; Noble D.; Smith N.; Tawhai M. (2003). "Computational physiology and the physiome project". Exp. Physiol. 89 (1): 1–26. doi:10.1113/expphysiol.2003.026740. PMID 15109205. S2CID 18151860. Hunter, P. J.; Kohl, P.; Noble, D. (2001). "Integrative models of heart: achievements and limitations". Phil. Trans. R. Soc. Lond. A. 359 (1783): 1049–1054. Bibcode:2001RSPTA.359.1049H. doi:10.1098/rsta.2001.0816. S2CID 84652829. Noble D. (2002). "Modelling the heart: from genes to cells to whole organ". Science. 295 (5560): 1678–1682. doi:10.1126/science.1069881. PMID 11872832. S2CID 6756983. Moskalenko A.V. (2009). "Nonlinear effects of lidocaine on polymorphism of ventricular arrhythmias". Biophysics. 54 (1): 47–50. doi:10.1134/s0006350909010084. S2CID 96749014. Moskalenko A.V.; Elkin Yu. E. (2009). "The lacet: a new type of the spiral wave behavior". Chaos, Solitons and Fractals. 40 (1): 426–431. Bibcode:2009CSF....40..426M. doi:10.1016/j.chaos.2007.07.081. Wessel, N.; Malberg, H.; Bauernschmitt, R.; Kurths, J. (2007). "Nonlinear methods of cardiovascular physics and their clinical applicability". International Journal of Bifurcation and Chaos. 17 (10): 3325–3371. Bibcode:2007IJBC...17.3325W. CiteSeerX 10.1.1.385.1704. doi:10.1142/s0218127407019093. Wiener N.; Rosenblueth A. (1946). "The mathematical formulation of the problem of conduction of impulses in a network of connected excitable elements, specifically in cardiac muscle". Arch. Inst. Cardiologia de Mexico. 16 (3–4): 205–265. PMID 20245817. == External links == Bioelectric Information Processing Laboratory of the Institute for Information Transmission Problems RAS.
(in Russian) The Group of Experimental and Clinical Cardiology in the Laboratory of Physiology of Emotion, Anokhin Research Institute of Normal Physiology, RAMS Oxford Cardiac Electrophysiology Group, led for many years by Prof. Denis Noble Cardiac Biophysics and Systems Biology group of the National Heart & Lung Institute of Imperial College London (in German) Group of Nonlinear Dynamics & Cardiovascular Physics Archived 2013-11-10 at the Wayback Machine of the 1st Faculty of Mathematics and Natural Sciences in the Institute of Physics of Humboldt University of Berlin
Wikipedia/Cardiophysics
Clinical engineering is a specialty within biomedical engineering responsible for using medical technology to optimize healthcare delivery. Clinical engineers train and supervise biomedical equipment technicians (BMETs), work with governmental regulators on hospital inspections and audits, and serve as technological consultants for other hospital staff (i.e., physicians, administrators, IT). Clinical engineers also assist manufacturers in improving the design of medical equipment and help maintain state-of-the-art hospital supply chains. With training in both product design and point-of-use experience, clinical engineers bridge the gap between product developers and end-users. The focus on practical implementations tends to keep clinical engineers oriented towards incremental redesigns, as opposed to revolutionary or cutting-edge ideas far from implementation in clinical use. However, there is an effort to expand the time horizon over which clinical engineers can influence the trajectory of biomedical innovation. Clinical engineering departments at large hospitals will sometimes hire not only biomedical engineers, but also industrial and systems engineers to address topics such as operations research, human factors, cost analysis, and safety. == History == The term clinical engineering was first used in a 1969 paper by Landoll and Caceres. Caceres, a cardiologist, is generally credited with coining the term. The broader field of biomedical engineering also has a relatively recent history, with the first inter-society engineering meeting focused on engineering in medicine probably being held in 1948. However, the general notion of applying engineering to medicine can be traced back centuries. For example, Stephen Hales' work in the early 18th century, which led to the invention of the ventilator and the discovery of blood pressure, involved applying engineering techniques to medicine. In the early 1970s, clinical engineering was thought to require many new professionals. Estimates at the time for the US ranged as high as 5,000 to 8,000 clinical engineers, or 1 per 250 hospital beds. === Credentialization === The International Certification Commission for Clinical Engineers (ICC) was formed under the sponsorship of the Association for the Advancement of Medical Instrumentation (AAMI) in the early 1970s to provide a formal certification process for clinical engineers. A similar certification program, the American Board of Clinical Engineering (ABCE), was formed by academic institutions offering graduate degrees in clinical engineering. In 1979, the ABCE dissolved, and those certified under its program were accepted into the ICC certification program. By 1985, only 350 clinical engineers had become certified. After a 1998 survey demonstrating no viable market for its certification program, the AAMI ceased accepting new applicants in July 1999. The new, current clinical engineering certification (CCE) started in 2002 under the sponsorship of the American College of Clinical Engineering (ACCE) and is administered by the ACCE Healthcare Technology Foundation. In 2004, the first year the certification process was underway, 112 individuals were granted certification based upon their previous ICC certification, and three individuals were awarded the new certification. By the time of the 2006-2007 AHTF Annual Report (c. June 30, 2007), 147 individuals had become HTF certified clinical engineers.
== Definition and terminology == A clinical engineer was defined by the ACCE in 1991 as "a professional who supports and advances patient care by applying engineering and managerial skills to healthcare technology." Clinical engineering is also recognized by the Biomedical Engineering Society, the major professional organization for biomedical engineering, as being a branch within the field of biomedical engineering. There are at least two issues with the ACCE definition that often cause confusion. First, it is unclear how "clinical engineer" is a subset of "biomedical engineer". The terms are often used interchangeably: some hospitals refer to their relevant departments as "Clinical Engineering" departments, while others call them "Biomedical Engineering" departments. The technicians are almost universally referred to as "biomedical equipment technicians," regardless of the department they work under. However, the term biomedical engineer is generally thought to be more all-encompassing, as it includes engineers who design medical devices for manufacturers, or in academia. In contrast, clinical engineers generally work in hospitals solving problems close to where the equipment is actually used. Clinical engineers in some countries, such as India, are trained to innovate and find technological solutions for clinical needs. The other issue, not evident from the ACCE definition, is the appropriate educational background for a clinical engineer. Generally, certification programs expect applicants to hold an accredited bachelor's degree in engineering (or at least engineering technology). === Potential new name === In 2011, AAMI arranged a meeting to discuss a new name for clinical engineering. After careful debate, the vast majority decided on "Healthcare Technology Management". Due to confusion about the dividing line between clinical engineers (engineers) and BMETs (technicians), the word engineering was deemed limiting from the administrator's perspective and unworkable from the educator's perspective. An ABET-accredited college could not name an associate degree program "engineering". Also, the adjective "clinical" limited the scope of the field to hospitals. It remains unresolved how widely accepted this change will be, or how it will affect the Clinical Engineering Certification or the formal recognition of clinical engineering as a subset of biomedical engineering. For regulatory and licensure reasons, true engineering specialties must be defined in a way that distinguishes them from the technicians they work alongside. == Certification == Certification in clinical engineering is governed by the Board of Examiners for Clinical Engineering Certification. To be eligible, a candidate must hold appropriate credentials (such as an accredited engineering or engineering-technology degree), have specific and relevant experience, and pass an examination. The certification process involves a three-hour written examination of up to 150 multiple-choice questions and a separate oral exam. Weight is given to applicants who are already licensed and registered Professional Engineers, a credential that itself has extensive requirements. In Canada, the term 'engineer' is protected by law. As a result, a candidate must be registered as a Professional Engineer (P.Eng.) before they can become a Certified Clinical Engineer. == In the UK == Clinical engineers in the UK typically work within the NHS. Clinical engineering is a modality of the clinical scientist profession, registered by the HCPC.
The responsibilities of clinical engineers are varied and often include providing specialist clinical services, inventing and developing medical devices, and medical device management. The roles typically involve both patient contact and academic research. Clinical engineering units within an NHS organization are often part of a larger medical physics department. Clinical engineers are supported and represented by the Institute of Physics and Engineering in Medicine, within which the clinical engineering special interest group oversees the engineering activities. The three primary aims of Clinical Engineering within the NHS are: To ensure medical equipment in the clinical environment is available and appropriate to the needs of the clinical service. To ensure medical equipment functions effectively and safely. To ensure medical equipment and its management represents value for patient benefit. === Registration === Clinical engineers are registered with the HCPC, or the RCT (Register of Clinical Technologists). Assessments prior to registration are provided by the National School of Healthcare Science, the Association of Clinical Scientists or the AHCS. There are two HCPC programs for becoming a clinical scientist. The first is a Certificate of Attainment, awarded for completing the NHS Scientist Training Programme (STP). The second is the Certificate of Equivalence, awarded on successful demonstration of equivalence to the STP. This route is normally chosen by individuals who have significant scientific experience prior to seeking registration. Both are provided by the AHCS. === Electronics and Biomedical Engineering === EBME technicians and engineers in the UK work in the NHS and the private sector. They are part of the clinical engineering family in the UK. Their role is to manage and maintain medical equipment assets in NHS and private healthcare organizations. They are professionally registered with the Engineering Council as Chartered Engineers, Incorporated Engineers, or engineering technicians. The EBME community share their knowledge on the EBME Forums. There is also an annual 2-day National Exhibition and Conference, wherein engineers meet to learn about the latest medical products and to attend the 500-seat conference where academic and business leaders share their expertise. The conference was founded in 2009 as a way of improving healthcare through sharing knowledge from experienced professionals involved in medical equipment management. == In India == Healthcare has increasingly become technology-driven and requires trained manpower to keep pace with the growing demand for professionals in the field. An M-Tech Clinical Engineering course was initiated by the Indian Institute of Technology Madras, Sree Chitra Thirunal Institute of Medical Sciences and Technology, Trivandrum, and Christian Medical College, Vellore, to address the country's need for human resource development. This was aimed at indigenous biomedical device development as well as technology management in order to contribute to the overall development of healthcare delivery in the country. During the course, students of engineering are given an insight into biology, medicine, relevant electronic background, clinical practices, device development, and even management aspects. Students are paired with clinical doctors from CMC and SCTIMST to get hands-on experience during internships.
An important aspect of this training is simultaneous, long-term, and detailed exposure to the clinical environment as well as to medical device development activity. This will help students understand how to recognize unmet clinical needs and contribute to the creation of future medical devices. As part of the program, engineers will be trained to handle and oversee the safe and effective use of technology in healthcare delivery sites. The minimum qualification for joining this course is a bachelor's degree in any discipline of engineering, technology, or architecture, a valid GATE score, and an interview in that field. == See also == Biomedical engineering == References == == Further reading == Villafane, Carlos, CBET (June 2009). Biomed: From the Student's Perspective, First Edition. [Techniciansfriend.com]. ISBN 978-1-61539-663-4. Medical engineering stories in the news School of Engineering and Materials Science, Queen Mary University of London == External links == EBME website EBME website for Medical, Biomedical, and Clinical engineering professionals.
Wikipedia/Clinical_engineer
Microneedles (MNs) are medical tools used for microneedling, primarily in drug delivery, disease diagnosis, and collagen induction therapy. Known for their minimally invasive and precise nature, MNs consist of arrays of micro-sized needles ranging from 25μm to 2000μm. Although the concept of microneedling was first introduced in the 1970s, its popularity has surged due to its effectiveness in drug delivery and its cosmetic benefits. Since the 2000s, there have been discoveries of new fabrication materials for MNs, such as silicon, metal, and polymer. Alongside materials, a variety of MN types (solid, hollow, coated, hydrogel) has also been developed to serve different functions. The research on MNs has led to improvements in different aspects, including instruments and techniques, yet adverse events are possible in MN users. Microneedle patches, or microarray patches, are micron-scaled medical devices used to administer vaccines, drugs, and other therapeutic agents. While microneedles were initially explored for transdermal drug delivery applications, their use has been extended to the intraocular, vaginal, transungual, cardiac, vascular, gastrointestinal, and intracochlear delivery of drugs. Microneedles are constructed through various methods, usually involving photolithographic processes or micromolding. These methods involve etching microscopic structures into resin or silicon in order to cast microneedles. Microneedles are made from a variety of materials, including silicon, titanium, stainless steel, and polymers. Some microneedles are made of a drug to be delivered to the body but are shaped into a needle so they will penetrate the skin. The microneedles range in size, shape, and function but are all used as an alternative to other delivery methods like the conventional hypodermic needle or other injection apparatus. Stimuli-responsive microneedles are advanced devices that respond to environmental triggers such as temperature, pH, or light to release therapeutic agents. Microneedles are usually applied as either a single needle or small arrays. The arrays used are a collection of microneedles, ranging from only a few microneedles to several hundred, attached to an applicator, sometimes a patch or other solid stamping device. The arrays are applied to the skin of patients and are given time to allow for the effective administration of drugs. Microneedles are an easier method for physicians as they require less training to apply and are not as hazardous as other needles, making the administration of drugs to patients safer and less painful while also avoiding some of the drawbacks of other forms of drug delivery, such as risk of infection, production of hazardous waste, or cost. == History == The concept of microneedles was first derived from the use of large hypodermic needles in the 1970s, but it only became prominent in the 1990s as microfabrication manufacturing technology developed. The concept of MNs came into experimentation in 1994, when Orentreich discovered that the insertion of tri-beveled needles into the skin could stimulate the release of fibrous strands. The investigation of MNs' potential to improve transdermal drug delivery gradually raised public awareness of MNs. Since then, massive research has been conducted on MNs, contributing to the development of different materials, types, and fabrication methods of MNs, and their applications and adverse events have been explored. In the 2000s, clinical trials on MNs' use in drug delivery began.
Microneedles were first mentioned in a 1998 paper by the research group headed by Mark Prausnitz at the Georgia Institute of Technology, which demonstrated that microneedles could penetrate the uppermost layer (stratum corneum) of the human skin and were therefore suitable for the transdermal delivery of therapeutic agents. Subsequent research into microneedle drug delivery has explored the medical and cosmetic applications of this technology through its design. This early paper sought to explore the possibility of using microneedles in the future for vaccination. Since then, researchers have studied microneedle delivery of insulin, vaccines, anti-inflammatories, and other pharmaceuticals. In dermatology, microneedles are used for scarring treatment with skin rollers. As mentioned before, microneedles have also been explored for local targeted drug delivery at other drug delivery sites, such as the gastrointestinal, ocular, and vascular systems, of which the ocular, vaginal, and gastrointestinal routes have shown increasingly convincing outcomes, serving as more efficient, localised drug delivery systems without the drawbacks of systemic exposure/toxicity. The major goal of any microneedle design is to penetrate the skin's outermost layer, the stratum corneum (10-15μm). Microneedles are long enough to cross the stratum corneum but not so long that they stimulate nerves, which are located deeper in the tissues, and therefore cause little to no pain. Research has shown that there is a limit on the type of drugs that can be delivered through intact skin. Only compounds with a relatively low molecular weight, like the common allergen nickel (130 Da), can penetrate the skin. Compounds that weigh more than 500 Da cannot penetrate the skin. == Materials of microneedles == Microneedles (MNs) consist of micro-sized needle arrays made of various materials that exhibit different characteristics and are suitable for the synthesis of different types of MNs. The selection of materials for the formation of MNs greatly depends on the required strength of skin penetration, the manufacturing method, and the rate of drug release. Silicon was the first material used for the production of MNs. While the flexible nature of silicon allows easy manufacture of different sizes and types of MNs, silicon MNs can easily fracture during insertion into the skin. On the contrary, MNs made of metals like stainless steel, titanium, and aluminum are non-toxic and possess mechanical properties strong enough to penetrate the skin without breakage. Nevertheless, metal MNs may cause allergic effects in some patients, and they create non-biodegradable waste. Polymer is also regarded as a promising material for MNs due to its good biocompatibility and low toxicity. Water-soluble polymers are more commonly used within the broader polymer group, and MN tip breaking is more likely than with MNs made of silicon and metal. Therefore, polymer is a more suitable material for dissolving MNs or hydrogel-forming MNs. == Types of microneedles == Since their conceptualization in 1998, several advances have been made in terms of the variety of types of microneedles that can be fabricated. The 5 main types of microneedles are solid, hollow, coated, dissolvable/dissolving, and hydrogel-forming. The distinct characteristics of each type of MN allow a variety of clinical applications, including diagnosis and treatment. Micro-sized needles in a microneedle (MN) device can range from as short as 25μm to as long as 2000μm in length, depending on their type.
=== Solid microneedles === Solid MNs were the first type of MN fabricated and are the most commonly used. Hard solid MNs have sharp tips that pierce through and form pores on the stratum corneum. A drug patch is then applied to the skin for the drug to be absorbed slowly and passively through the numerous micropores. This type of array is designed as a two-part system; the microneedle array is first applied to the skin to create microscopic wells just deep enough to penetrate the outermost layer of skin, and then the drug is applied via transdermal patch. Solid microneedles are already used by dermatologists in collagen induction therapy, a method which uses repeated puncturing of the skin with microneedles to induce the expression and deposition of the proteins collagen and elastin in the skin. Solid MNs help increase the permeability and absorption of drugs. === Hollow microneedles === Hollow MNs are designed with a hole at the tip and a hollow reservoir that stores drugs. Upon insertion, the stored drug is injected directly into the dermis, which effectively facilitates the absorption of large-molecule or large-dose drugs. Yet a portion of the drug can leak or clog, which may hinder the overall drug administration. Since the delivery of the drug depends on the flow rate of the microneedle, this type of array could become clogged by excessive swelling or flawed design. This design also increases the likelihood of buckling under pressure and therefore failing to deliver any drugs. === Coated microneedles === Coated MNs are fabricated by coating a drug solution over solid MNs, and the thickness of the drug layer can be adjusted depending on the amount of drug to be administered. A benefit of coated MNs is that a smaller amount of drug is needed compared to other drug administration routes. This is because the layer of drug quickly dissolves and is delivered into the systemic circulation directly across the skin. The solid MNs, which are removed afterwards, may be contaminated by left-over drugs, and the reuse of those MNs raises the concern of cross-infection between patients. Coated microneedles are often covered in other surfactants or thickening agents to assure that the drug is delivered properly. Some of the chemicals used on coated microneedles are known irritants. While there is risk of local inflammation to the area where the array was, the array can be removed immediately with no harm to the patient. === Dissolving microneedles === Dissolving MNs are mostly composed of water-soluble drugs that enable the dissolution of MN tips when inserted into the skin. This is a one-step approach which does not require the removal of MNs and is convenient for long-term therapy. However, incomplete insertion and delayed dissolution are observed with the use of dissolving MNs. The dissolving polymer allows the drug to be delivered into the skin and can be broken down once inside the body. Pharmaceutical companies and researchers have begun to study and implement polymers such as fibroin, a silk-based protein that can be molded into structures like microneedles and dissolved once in the body. === Hydrogel-forming microneedles === The primary material for the fabrication of hydrogel-forming microneedles (HFMs) is a hydrophilic polymer that encloses drugs. This material draws water from interstitial fluid in the stratum corneum, resulting in polymer swelling and release of the drug.
Besides, the hydrophilic features of HFMs allow ready uptake of interstitial fluid that could be used for disease diagnosis. == Application and principle == === Transdermal drug delivery === The most common transdermal drug administration routes currently are hypodermic needles, transdermal patches, and topical creams. However, these routes have limited therapeutic effects because the stratum corneum serves as a barrier that reduces the entry of drug molecules into the systemic circulation and target tissues. The invention of MNs has retained the benefits of both hypodermic needles and transdermal patches while minimizing their drawbacks. Compared to hypodermic needles, MNs provide a pain-free administration. MNs are able to penetrate through the epidermis, but not deep enough to compress nerve endings and produce pain responses. The superficial penetration also lessens the infection risk. Compared to transdermal patches, MNs are proven to be effective in producing micropores on the epidermis. The micropores facilitate the absorption of large molecules, like calcein and insulin, by 4 times in in-vitro skin models. In addition, MNs' direct drug delivery to the systemic circulation avoids the first-pass effect in the liver, significantly increasing drug bioavailability, and the fast absorption into the systemic circulation also allows a fast onset of action. Therefore, MNs could benefit diabetes treatment, as common oral delivery would lead to a significant loss of insulin from degradation in the liver (first-pass effect), and insulin molecules are too large to be absorbed using common transdermal patches. Furthermore, the high precision of MNs also allows drugs to reach localized tissues precisely, for instance, the intradermal layers for cancer or the eye for ophthalmic disorders. ==== Vaccination ==== MNs are suitable for vaccination with their capability to deliver macromolecules and maintain a slow and sustained release of vaccine agents by using both coated and dissolving MNs. In addition, MNs' biodegradability minimizes biohazardous waste, unlike hypodermic needles. The application of MNs in vaccination would benefit people who avoid vaccination due to trypanophobia (fear of needles in medical settings). As of 2024, microneedle vaccination has been found to generate an immune response similar to injection of the measles and rubella vaccine. === Disease diagnosis and monitoring === Disease diagnosis and monitoring of therapeutic efficacy is possible by detecting several biomarkers in body fluid. However, current tissue fluid extraction methods are pain-inducing, and it may take up to hours or days for samples to be analyzed in medical laboratories. MNs could collect body fluid in an almost painless manner, and they could provide immediate diagnosis when combined with a sensor. MNs allow penetration through the epidermis but not deep enough to compress nerves in deeper layers; thus, they are minimally invasive and almost painless. MNs' precision also allows the extraction of fluid surrounding diseased tissues, which may contain higher concentrations of different biomarkers and specific biomarkers that are not present in the systemic circulation. These fluids provide more clinically significant and accurate values than those extracted from the systemic circulation, subsequently lowering the chance of underestimating disease severity, especially for localized diseases. Furthermore, MNs are capable of providing (near) real-time diagnosis, and they are easily administered with simple procedures.
Thus, MNs are potential candidates for point-of-care (PoC) testing, which could be conducted at the bedside. Hollow MNs and hydrogel MNs could be used to diagnose and monitor several diseases, including cataracts, diabetes, cancer, and Alzheimer's disease. For instance, hollow glass MNs and hydrogel MNs could extract skin interstitial fluid for the detection of glucose levels. === Collagen induction therapy === In the field of dermatology, the use of MNs is more commonly known as collagen induction therapy. The therapy induces dermis regeneration via repetitive perforation of the skin using sterilized MNs. The repetitive penetration through the stratum corneum forms micropores, and these physical traumas to the skin sequentially stimulate the wound-healing cascade and the expression of collagen and elastin in the dermis. By making use of the human body's natural regeneration properties, microneedling could be used alone to treat scars and wrinkles and to promote skin rejuvenation, or in combination therapy with topical tretinoin and vitamin C for enhanced effect. Recent research has expanded the possibilities of MNs to treat pigmentation disorders and actinic keratosis, and to promote hair growth in patients with androgenetic alopecia and alopecia areata. MNs have diverged into different forms, including the Dermapen and dermarollers. Dermarollers are hand-held rollers equipped with a total of 192 solid steel micro-sized needles arranged into 24 arrays, with lengths ranging from 0.5-1.5mm. With the growing popularity of microneedling, MNs have also been commodified into home care dermarollers, which are similar to medical dermarollers, except that the needles are shorter (0.15mm). This is a more budget-friendly device that allows individuals to perform microneedling at home. == Advantages == There are many advantages to the use of microneedles, the most prominent being the improved comfort of patients. Needle phobia can affect both adults and children, and sometimes can lead to fainting. The benefit of microneedle arrays is that they reduce the anxiety that patients have when confronted with a hypodermic needle. In addition to improving psychological and emotional comfort, microneedles have been shown to be substantially less painful than conventional injections. Some studies recorded children's views on blood sampling with microneedles and found patients were more willing when prompted with a less painful procedure than traditional sampling with needles. Microneedles are beneficial to physicians as well, since they produce less hazardous waste than needles and are generally easier to use. Microneedles are also less expensive than needles, as they require less material and the material used is cheaper than the materials in hypodermic needles. Microneedles present a new opportunity for home and community-based healthcare. One of the biggest drawbacks of traditional needles is the hazardous waste that they produce, making disposal a serious concern for doctors and hospitals. For patients who require regular administration of medication at home, disposal can become an environmental concern if needles are placed in the trash. Dissolvable or swelling microneedles would provide those who are limited in their ability to seek hospital care with the ability to safely administer drugs in the comfort of their homes, although disposal of solid or hollow microneedles could still pose a needle-stick or blood-borne pathogen infection risk. Another benefit of microneedles is their lower rates of microbial invasion into delivery sites.
Traditional injection methods can leave puncture wounds for up to 48 hours post-treatment. This leaves a large window of opportunity for harmful bacteria to enter the skin. Microneedles only damage the skin to a depth of 10-15μm, making it difficult for bacteria to enter the bloodstream and giving the body a smaller wound to repair. Further research is required to determine the types of bacteria able to breach the shallow puncture site of microneedles. == Disadvantages == There are some concerns about how physicians can be sure that all of the drug or vaccine has entered the skin when microneedles are applied. Hollow and coated microneedles both possess the risk that the drug will not properly enter the skin and will not be effective. Both of these types of microneedles can leak onto a person's skin, either through damage to the microneedle or incorrect application by the physician. This is why it is essential that physicians are trained in how to properly apply the arrays. Another concern is that incorrectly applied arrays could leave foreign material in the body. Although there is a lower risk of infection associated with microneedles, the arrays are more fragile than a typical hypodermic needle due to their small size and thus have a chance of breaking off and remaining in the skin. Some of the materials used to construct the microneedles, such as titanium, cannot be absorbed by the body, and any fragments of the needles would cause irritation. There is a limited amount of literature available on the subject of microneedle drug delivery, as current research is still exploring how to make effective needles. In terms of design and manufacture, low drug loading is a key barrier towards reaching the clinic. == Safety profile == Apart from procedural pain, some common post-treatment adverse events (AEs) of MNs include temporary discomfort, erythema (skin redness), and edema. Pinpoint bleeding, itching, irritation, and bruising are also possible in some cases. However, most of the adverse side effects are not long-lasting and resolve spontaneously within 24 hours after the treatment, making MNs a rather safe tool. Photoprotection and minimal exposure to chemical irritants are often advised for effective recovery and a lowered chance of skin inflammation. Severe risks may be possible if there are technical errors during the procedure. For example, the usage of non-sterile tools might result in post-inflammatory hyperpigmentation, systemic hypersensitivity, local infections, etc. Moreover, if excess pressure is used over a bony prominence, it could lead to "tram-track scarring". This can be avoided by using smaller needles and by not over-pressurizing these areas. In addition, if the patient is allergic to either the drug used or the material of the MNs, contact dermatitis is possible. Therefore, clinicians should be cautious with patients at high risk of allergy. == References == == Further reading == Ita K (2022). "Introduction". In Ita K (ed.). Microneedles. London: Academic Press. pp. 1–19. ISBN 978-0-323-97234-5. == External links == Microneedles: a new way to deliver vaccines, Dawn Connelly, The Pharmaceutical Journal, 2021
Wikipedia/Microneedle_drug_delivery
In pharmacology and toxicology, a route of administration is the way by which a drug, fluid, poison, or other substance is taken into the body. Routes of administration are generally classified by the location at which the substance is applied. Common examples include oral and intravenous administration. Routes can also be classified based on where the target of action is. Action may be topical (local), enteral (system-wide effect, but delivered through the gastrointestinal tract), or parenteral (systemic action, but delivered by routes other than the GI tract). Route of administration and dosage form are aspects of drug delivery.

== Classification ==
Routes of administration are usually classified by application location (or exposition). The route or course the active substance takes from application location to the location where it has its target effect is usually rather a matter of pharmacokinetics (concerning the processes of uptake, distribution, and elimination of drugs). Exceptions include the transdermal and transmucosal routes, which are still commonly referred to as routes of administration. The location of the target effect of active substances is usually rather a matter of pharmacodynamics (concerning, for example, the physiological effects of drugs). An exception is topical administration, which generally means that both the application location and the effect thereof are local. Topical administration is sometimes defined as both a local application location and a local pharmacodynamic effect, and sometimes merely as a local application location regardless of the location of the effects.

=== By application location ===
==== Enteral/gastrointestinal route ====
Administration through the gastrointestinal tract is sometimes termed enteral or enteric administration (literally meaning 'through the intestines'). Enteral/enteric administration usually includes oral (through the mouth) and rectal (into the rectum) administration, in the sense that these are taken up by the intestines. However, uptake of drugs administered orally may also occur in the stomach, and as such gastrointestinal (along the gastrointestinal tract) may be a more fitting term for this route of administration. Furthermore, some application locations often classified as enteral, such as sublingual (under the tongue) and sublabial or buccal (between the cheek and gums/gingiva), are taken up in the proximal part of the gastrointestinal tract without reaching the intestines. Strictly enteral administration (directly into the intestines) can be used for systemic administration as well as local (sometimes termed topical) administration, such as in a contrast enema, whereby contrast media are infused into the intestines for imaging. However, for the purposes of classification based on location of effects, the term enteral is reserved for substances with systemic effects.

Many drugs are taken orally as tablets, capsules, or drops. Administration methods directly into the stomach include those by gastric feeding tube or gastrostomy. Substances may also be placed into the small intestines, as with a duodenal feeding tube and enteral nutrition. Enteric-coated tablets are designed to dissolve in the intestine, not the stomach, because the drug present in the tablet causes irritation in the stomach.

The rectal route is an effective route of administration for many medications, especially those used at the end of life. The walls of the rectum absorb many medications quickly and effectively.
Medications delivered to the distal one-third of the rectum at least partially avoid the "first pass effect" through the liver, which allows for greater bioavailability of many medications than that of the oral route. Rectal mucosa is highly vascularized tissue that allows for rapid and effective absorption of medications. A suppository is a solid dosage form suited to rectal administration. In hospice care, a specialized rectal catheter, designed to provide comfortable and discreet administration of ongoing medications, provides a practical way to deliver and retain liquid formulations in the distal rectum, giving health practitioners a way to leverage the established benefits of rectal administration. The Murphy drip is an example of rectal infusion.

==== Parenteral route ====
The parenteral route is any route that is not enteral (par- + enteral). Parenteral administration can be performed by injection, that is, using a needle (usually a hypodermic needle) and a syringe, or by the insertion of an indwelling catheter. Locations of application of parenteral administration include:

Central nervous system:
Epidural (synonym: peridural) (injection or infusion into the epidural space), e.g. epidural anesthesia.
Intracerebral (into the cerebrum) administration by direct injection into the brain. Used in experimental research of chemicals and as a treatment for malignancies of the brain. The intracerebral route can also compromise the blood–brain barrier, so that it no longer holds up against substances delivered by subsequent routes.
Intracerebroventricular (into the cerebral ventricles) administration into the ventricular system of the brain. One use is as a last line of opioid treatment for terminal cancer patients with intractable cancer pain.
Epicutaneous (application onto the skin). It can be used both for local effect, as in allergy testing and topical local anesthesia, and for systemic effect when the active substance diffuses through the skin in a transdermal route.
Sublingual and buccal medication administration is a way of giving someone medicine orally (by mouth). Sublingual administration is when medication is placed under the tongue to be absorbed by the body. The word "sublingual" means "under the tongue". Buccal administration involves placement of the drug between the gums and the cheek. These medications can come in the form of tablets, films, or sprays. Many drugs are designed for sublingual administration, including cardiovascular drugs, steroids, barbiturates, opioid analgesics with poor gastrointestinal bioavailability, enzymes and, increasingly, vitamins and minerals.
Extra-amniotic administration, between the endometrium and fetal membranes.
Nasal administration (through the nose) can be used for topically acting substances, as well as for insufflation of e.g. decongestant nasal sprays to be taken up along the respiratory tract. Such substances are also called inhalational, e.g. inhalational anesthetics.
Intra-arterial (into an artery), e.g. vasodilator drugs in the treatment of vasospasm and thrombolytic drugs for treatment of embolism.
Intra-articular, into a joint space. It is generally performed by joint injection. It is mainly used for symptomatic relief in osteoarthritis.
Intracardiac (into the heart), e.g. adrenaline during cardiopulmonary resuscitation (no longer commonly performed).
Intracavernous injection, an injection into the base of the penis.
Intradermal (into the skin itself), used for skin testing of some allergens and for the Mantoux test for tuberculosis.
Intralesional (into a skin lesion), used for local skin lesions, e.g. acne medication.
Intramuscular (into a muscle), e.g. many vaccines, antibiotics, and long-term psychoactive agents. Recreationally, the colloquial term 'muscling' is used.
Intraocular (into the eye), e.g. some medications for glaucoma or eye neoplasms.
Intraosseous infusion (into the bone marrow) is, in effect, an indirect intravenous access, because the bone marrow drains directly into the venous system. This route is occasionally used for drugs and fluids in emergency medicine and pediatrics when intravenous access is difficult.
Intraperitoneal (infusion or injection into the peritoneum), e.g. peritoneal dialysis.
Intrathecal (into the spinal canal), most commonly used for spinal anesthesia and chemotherapy.
Intrauterine.
Intravaginal administration, in the vagina.
Intravenous (into a vein), e.g. many drugs, total parenteral nutrition.
Intravesical infusion, into the urinary bladder.
Intravitreal (into the vitreous humour of the eye).
Subcutaneous (under the skin). This generally takes the form of subcutaneous injection, e.g. with insulin. Skin popping is a slang term that includes subcutaneous injection, and is usually used in association with recreational drugs. In addition to injection, it is also possible to slowly infuse fluids subcutaneously in the form of hypodermoclysis.
Transdermal (diffusion through the intact skin for systemic rather than topical distribution), e.g. transdermal patches such as fentanyl in pain therapy, nicotine patches for treatment of addiction and nitroglycerine for treatment of angina pectoris.
Perivascular administration (perivascular medical devices and perivascular drug delivery systems are conceived for local application around a blood vessel during open vascular surgery).
Transmucosal (diffusion through a mucous membrane), e.g. insufflation (snorting) of cocaine, sublingual, i.e. under the tongue, sublabial, i.e. between the lips and gingiva, and oral spray or vaginal suppository for nitroglycerine.

=== Topical route ===
The definition of the topical route of administration sometimes states that both the application location and the pharmacodynamic effect thereof are local. In other cases, topical is defined as applied to a localized area of the body or to the surface of a body part, regardless of the location of the effect. By this definition, topical administration also includes transdermal application, where the substance is administered onto the skin but is absorbed into the body to attain systemic distribution. If defined strictly as having local effect, the topical route of administration can also include enteral administration of medications that are poorly absorbable by the gastrointestinal tract. One such medication is the antibiotic vancomycin, which cannot be absorbed in the gastrointestinal tract and is used orally only as a treatment for Clostridioides difficile colitis.

== Choice of routes ==
The choice of route of drug administration is governed by various factors:

Physical and chemical properties of the drug. The physical properties are solid, liquid and gas. The chemical properties are solubility, stability, pH, irritancy etc.
Site of desired action: the action may be localised and approachable or generalised and not approachable.
Rate and extent of absorption of the drug from different routes.
Effect of digestive juices and first-pass metabolism of drugs.
Condition of the patient.
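These factors lend themselves to a simple tabular comparison. The following Python sketch is purely illustrative: the routes, property values, and helper function are invented for demonstration (the intravenous and intramuscular onset figures echo those given in the parenteral section below), and none of it should be read as clinical guidance.

```python
# Illustrative sketch of route-selection factors as data.
# Property values are rough assumptions, not clinical reference values.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    onset_minutes: float    # rough time to onset of action
    needs_sterile: bool     # requires puncture/aseptic technique
    protein_friendly: bool  # usable for biopharmaceuticals

ROUTES = [
    Route("oral",          30.0, needs_sterile=False, protein_friendly=False),
    Route("intravenous",    0.5, needs_sterile=True,  protein_friendly=True),
    Route("intramuscular", 15.0, needs_sterile=True,  protein_friendly=True),
    Route("transdermal",  120.0, needs_sterile=False, protein_friendly=False),
]

def candidate_routes(max_onset, protein_drug, avoid_puncture):
    """Filter routes by the kinds of factors listed above."""
    return [r.name for r in ROUTES
            if r.onset_minutes <= max_onset
            and (not protein_drug or r.protein_friendly)
            and (not avoid_puncture or not r.needs_sterile)]

# An acutely ill patient needing rapid onset of a protein drug:
print(candidate_routes(max_onset=5, protein_drug=True, avoid_puncture=False))
# -> ['intravenous']
```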
In acute situations, in emergency medicine and intensive care medicine, drugs are most often given intravenously. This is the most reliable route, as in acutely ill patients the absorption of substances from the tissues and from the digestive tract can often be unpredictable due to altered blood flow or bowel motility.

=== Convenience ===
Enteral routes are generally the most convenient for the patient, as no punctures or sterile procedures are necessary. Enteral medications are therefore often preferred in the treatment of chronic disease. However, some drugs cannot be used enterally because their absorption in the digestive tract is low or unpredictable. Transdermal administration is a comfortable alternative; there are, however, only a few drug preparations that are suitable for transdermal administration.

=== Desired target effect ===
Identical drugs can produce different results depending on the route of administration. For example, some drugs are not significantly absorbed into the bloodstream from the gastrointestinal tract, and their action after enteral administration is therefore different from that after parenteral administration. This can be illustrated by the action of naloxone (Narcan), an antagonist of opiates such as morphine. Naloxone counteracts opiate action in the central nervous system when given intravenously and is therefore used in the treatment of opiate overdose. The same drug, when swallowed, acts exclusively on the bowels; there it is used to treat constipation under opiate pain therapy and does not affect the pain-reducing effect of the opiate.

=== Oral ===
The oral route is generally the most convenient and costs the least. However, some drugs can cause gastrointestinal tract irritation. For drugs that come in delayed-release or time-release formulations, breaking the tablets or capsules can lead to more rapid delivery of the drug than intended. The oral route is limited to formulations containing small molecules, while biopharmaceuticals (usually proteins) would be digested in the stomach and thereby become ineffective. Biopharmaceuticals have to be given by injection or infusion. However, recent research has found various ways to improve the oral bioavailability of these drugs; in particular, permeation enhancers, ionic liquids, lipid-based nanocarriers, enzyme inhibitors and microneedles have shown potential. Oral administration is often denoted "PO" from "per os", the Latin for "by mouth". The bioavailability of oral administration is affected by the amount of drug that is absorbed across the intestinal epithelium and by first-pass metabolism.

=== Oral mucosal ===
The oral mucosa is the mucous membrane lining the inside of the mouth.

==== Buccal ====
Buccal administration is achieved by placing the drug between the gums and the inner lining of the cheek. In comparison with sublingual tissue, buccal tissue is less permeable, resulting in slower absorption.

==== Sublabial ====

==== Sublingual ====
Sublingual administration is accomplished by placing the drug between the tongue and the lower surface of the mouth. The sublingual mucosa is highly permeable and thereby provides access to the underlying expansive network of capillaries, leading to rapid drug absorption.
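The relationship between absorption and first-pass metabolism noted in the oral section above can be illustrated with back-of-the-envelope arithmetic. This is a sketch of the standard pharmacokinetic relation F = f_abs × (1 − E_H); the function and all numbers are hypothetical, not values from this article.

```python
# Hedged sketch: oral bioavailability as the fraction absorbed across the
# intestinal epithelium times the fraction escaping hepatic first-pass
# metabolism. Illustrative numbers only.

def oral_bioavailability(f_absorbed: float, hepatic_extraction: float) -> float:
    """F_oral = f_abs * (1 - E_H)."""
    return f_absorbed * (1.0 - hepatic_extraction)

# A hypothetical drug that is 80% absorbed but 60% extracted on first pass:
print(round(oral_bioavailability(0.80, 0.60), 2))  # 0.32 -> ~32% reaches circulation

# Sublingual or buccal delivery largely bypasses first-pass metabolism,
# which is one reason mucosal routes can outperform oral dosing for
# high-extraction drugs.
```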
=== Intranasal ===
Drug administration via the nasal cavity yields rapid drug absorption and therapeutic effects. This is because drugs absorbed through the nasal passages enter the capillaries and then the systemic circulation without first passing through the gut; this route also allows transport of drugs into the central nervous system via the olfactory and trigeminal nerve pathways. Intranasal absorption can be limited by low drug lipophilicity, enzymatic degradation within the nasal cavity, large molecular size, and rapid mucociliary clearance from the nasal passages, which explains the low systemic exposure of some drugs administered intranasally.

=== Local ===
By delivering drugs almost directly to the site of action, the risk of systemic side effects is reduced. Skin absorption (dermal absorption), for example, delivers the drug directly to the skin and, potentially, to the systemic circulation. However, skin irritation may result, and for some forms such as creams or lotions the dosage is difficult to control. Upon contact with the skin, the drug penetrates into the dead stratum corneum and can afterwards reach the viable epidermis, the dermis, and the blood vessels.

=== Parenteral ===
The term parenteral is from para- 'beside' + Greek enteron 'intestine' + -al; the name reflects the fact that these routes are not intestinal. However, in common English the term has mostly been used to describe the four most well-known routes of injection. The term injection encompasses intravenous (IV), intramuscular (IM), subcutaneous (SC) and intradermal (ID) administration. Parenteral administration generally acts more rapidly than topical or enteral administration, with onset of action often occurring in 15–30 seconds for IV, 10–20 minutes for IM and 15–30 minutes for SC. Parenteral routes also have essentially 100% bioavailability and can be used for drugs that are poorly absorbed or ineffective when given orally. Some medications, such as certain antipsychotics, can be administered as long-acting intramuscular injections. Ongoing IV infusions can be used to deliver continuous medication or fluids. Disadvantages of injections include potential pain or discomfort for the patient and the requirement of trained staff using aseptic techniques for administration. However, in some cases patients are taught to self-inject, such as SC injection of insulin in patients with insulin-dependent diabetes mellitus. As the drug is delivered to the site of action extremely rapidly with IV injection, there is a risk of overdose if the dose has been calculated incorrectly, and there is an increased risk of side effects if the drug is administered too rapidly.

=== Respiratory tract ===
==== Mouth inhalation ====
Inhaled medications can be absorbed quickly and act both locally and systemically. Proper technique with inhaler devices is necessary to achieve the correct dose. Some medications can have an unpleasant taste or irritate the mouth. In general, only 20–50% of a pulmonary-delivered dose rendered in powdery particles will be deposited in the lung upon mouth inhalation; the remaining undeposited aerosolized particles (roughly 50–70%) are cleared from the lung upon exhalation. An inhaled powdery particle that is >8 μm in diameter is structurally predisposed to depositing in the central and conducting airways (conducting zone) by inertial impaction. An inhaled powdery particle that is between 3 and 8 μm in diameter tends largely to deposit in the transitional zones of the lung by sedimentation. An inhaled powdery particle that is <3 μm in diameter is structurally predisposed to depositing primarily in the respiratory regions of the peripheral lung via diffusion.
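The size cutoffs above translate directly into a toy classifier. The cutoffs come from the text; the function itself is an illustrative simplification of real aerosol deposition physics.

```python
# Toy classifier for the aerodynamic-diameter deposition rules described
# above (>8 um: conducting airways by inertial impaction; 3-8 um:
# transitional zones by sedimentation; <3 um: peripheral lung by diffusion).

def deposition_region(diameter_um: float) -> str:
    if diameter_um > 8:
        return "central/conducting airways (inertial impaction)"
    if diameter_um >= 3:
        return "transitional zones (sedimentation)"
    return "peripheral respiratory regions (diffusion)"

for d in (10.0, 5.0, 1.0):
    print(f"{d} um -> {deposition_region(d)}")
```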
Particles that deposit in the upper and central airways are generally absorbed systemically to a great extent because they are only partially removed by mucociliary clearance; when the transported mucus is swallowed, the remainder undergoes orally mediated absorption, where first-pass metabolism or incomplete absorption (loss via the fecal route) can reduce bioavailability. This should not suggest to clinicians or researchers that inhaled particles are no greater threat than swallowed particles; it merely signifies that a combination of both absorption routes may occur with some particles, no matter the size or lipo-/hydrophilicity of the different particle surfaces.

==== Nasal inhalation ====
Inhalation of a substance through the nose is almost identical to oral inhalation, except that some of the drug is absorbed intranasally instead of in the oral cavity before entering the airways. Both methods can result in varying levels of the substance being deposited in their respective initial cavities, and the level of mucus in either of these cavities will reflect the amount of substance swallowed. The rate of inhalation will usually determine the amount of the substance that enters the lungs: faster inhalation results in more rapid absorption because more substance reaches the lungs. Substances in a form that resists absorption in the lung will likely also resist absorption in the nasal passage and the oral cavity, and are often even more resistant to absorption after they fail to be absorbed in those cavities and are swallowed.

== Research ==
Neural drug delivery is the next step beyond the basic addition of growth factors to nerve guidance conduits. Drug delivery systems allow the rate of growth factor release to be regulated over time, which is critical for creating an environment more closely representative of in vivo development environments.

== See also ==
ADME
Catheter
Dosage form
Drug injection
Ear instillation
Hypodermic needle
Intravenous marijuana syndrome
List of medical inhalants
Nanomedicine
Absorption (pharmacology)

== References ==

== External links ==
The 10th US-Japan Symposium on Drug Delivery Systems
FDA Center for Drug Evaluation and Research Data Standards Manual: Route of Administration.
FDA Center for Drug Evaluation and Research Data Standards Manual: Dosage Form.
A.S.P.E.N. American Society for Parenteral and Enteral Nutrition
Drug Administration Routes at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
Wikipedia/Oral_drug_administration
Drug delivery to the brain is the process of passing therapeutically active molecules across the blood–brain barrier into the brain. This is a complex process that must take into account the complex anatomy of the brain as well as the restrictions imposed by the special junctions of the blood–brain barrier.

== Anatomy ==
The blood–brain barrier is formed by special tight junctions between the endothelial cells lining brain blood vessels. Blood vessels of all tissues contain this monolayer of endothelial cells; however, only brain endothelial cells have tight junctions preventing passive diffusion of most substances into the brain tissue. The structure of these tight junctions was first determined in the 1960s by Tom Reese, Morris Karnovsky, and Milton Brightman. Furthermore, astrocytic "end feet", the terminal regions of the astrocytic processes, surround the outside of brain capillary endothelial cells. The astrocytes are glial cells restricted to the brain and spinal cord and help maintain blood–brain barrier properties in brain endothelial cells.

== Physiology ==
The primary function of the blood–brain barrier is to protect the brain and keep it isolated from harmful toxins that are potentially in the blood stream. It accomplishes this because of its structure; as is usual in the body, structure defines function. The tight junctions between the endothelial cells prevent large molecules and many ions from passing through the junction spaces. This forces molecules to go through the endothelial cells to enter the brain tissue, meaning that they must pass through the endothelial cell membranes. Because of this, the only molecules that can easily traverse the blood–brain barrier are very lipid-soluble ones. These are not the only molecules that can cross, however: glucose, for example, is not lipid-soluble but is carried across the barrier by specific transport proteins, and gases such as oxygen and carbon dioxide cross readily, supporting the normal cellular function of the brain. The fact that molecules have to fully traverse the endothelial cells makes the barrier an effective barricade against unspecified particles entering the brain. Also, because most molecules must be transported across the barrier, it does a very effective job of maintaining homeostasis for the most vital organ of the human body.

== Drug delivery to the blood–brain barrier ==
Because of the difficulty drugs have in passing through the blood–brain barrier, a study was conducted to determine the factors that influence a compound's ability to traverse it. The study examined several factors affecting diffusion across the blood–brain barrier: lipophilicity, the Gibbs adsorption isotherm, a C0/CMC plot, and the surface area of the drug at the water and air interfaces. The researchers began by looking at compounds whose blood–brain permeability was known and labeled them either CNS+ or CNS−, for compounds that easily traverse the barrier and those that do not. They then set out to analyze the above factors to determine what is necessary to traverse the blood–brain barrier. What they found was a little surprising: lipophilicity is not the leading characteristic determining whether a drug passes through the barrier. This is surprising because one would think that the most effective way to make a drug move through a lipophilic barrier is to increase its lipophilicity; it turns out, instead, that a complex function of all of these characteristics determines whether a drug can pass through the blood–brain barrier. The study found that barrier permittivity is "based on the measurement of the surface activity and as such takes into account the molecular properties of both hydrophobic and charged residues of the molecule of interest." They found that there is not a simple answer to which compounds traverse the blood–brain barrier and which do not; rather, it is based on a complex analysis of the surface activity of the molecule as well as its relative size.
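As a loose illustration of the CNS+/CNS− idea, the toy score below combines a surface-activity descriptor with molecular size instead of lipophilicity alone. The descriptors, weights, and threshold are entirely invented; the actual study relied on measured surface-activity parameters.

```python
# Toy illustration of CNS+/CNS- style classification: score a compound
# from surface activity and molecular size rather than lipophilicity
# alone. All numbers here are invented for demonstration.

def crosses_bbb(surface_activity: float, mol_weight_da: float) -> str:
    """Crude yes/no sketch: favorable surface activity and modest size."""
    score = 2.0 * surface_activity - 0.004 * mol_weight_da
    return "CNS+" if score > 0.5 else "CNS-"

print(crosses_bbb(surface_activity=0.9, mol_weight_da=250))  # CNS+
print(crosses_bbb(surface_activity=0.3, mol_weight_da=600))  # CNS-
```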
== Problems faced in drug delivery ==
Other problems persist besides simply getting through the blood–brain barrier. The first of these is that, often, even if a compound traverses the barrier, it does not reach a therapeutically relevant concentration. This can have many causes, the simplest being that the way the drug was formulated only allows a small amount to pass through the barrier. Another cause is binding to other proteins in the body, rendering the drug unable to act therapeutically or to pass through the barrier with the adhered protein. Another problem that must be accounted for is the presence of enzymes in the brain tissue that could render the drug inactive. The drug may be able to pass through the membrane, only to be broken down once it is inside the brain tissue, rendering it useless. All of these are problems that must be addressed and accounted for in trying to deliver effective drug solutions to the brain tissue.

== Possible solutions ==
=== Exosomes to deliver treatments across the blood–brain barrier ===
A group from the University of Oxford led by Prof. Matthew Wood claims that exosomes can cross the blood–brain barrier and deliver siRNAs, antisense oligonucleotides, chemotherapeutic agents and proteins specifically to neurons after being injected systemically (into the blood). Because these exosomes are able to cross the blood–brain barrier, this protocol could solve the issue of poor delivery of medications to the central nervous system and help treat Alzheimer's disease, Parkinson's disease and brain cancer, among other diseases. The laboratory has recently been awarded a major new €30 million project, leading experts from 14 academic institutions, two biotechnology companies and seven pharmaceutical companies, to translate the concept to the clinic.

=== Pro-drugs ===
This is the process of disguising medically active molecules with lipophilic molecules that allow them to pass more easily through the blood–brain barrier. Drugs can be disguised using more lipophilic elements or structures. This form of the drug will be inactive because of the lipophilic molecules, but would then be activated by enzymatic degradation or some other mechanism that removes the lipophilic disguise and releases the drug into its active form. There are still some major drawbacks to these pro-drugs. The first is that the pro-drug may be able to pass through the barrier and then re-pass through it without ever releasing the drug in its active form. The second is that the sheer size of these molecules can still make it difficult to pass through the blood–brain barrier.

=== Peptide masking ===
Similar to the idea of pro-drugs, another way of masking a drug's chemical composition is by masking a peptide's characteristics by combining it with other molecular groups that are more likely to pass through the blood–brain barrier. An example of this is using a cholesteryl molecule instead of cholesterol, which serves to conceal the water-soluble characteristics of the drug.
This type of masking aids in traversing the blood–brain barrier, and it can also hide the drug peptide from peptide-degrading enzymes in the brain. A "targetor" molecule could also be attached to the drug to help it pass through the barrier; once inside the brain, it is degraded in such a way that the drug cannot pass back out. With the drug unable to pass back through the barrier, it can be concentrated and made effective for therapeutic use. However, drawbacks exist here as well. Once the drug is in the brain, there is a point where it needs to be degraded to prevent overdosing the brain tissue. Also, if the drug cannot pass back through the blood–brain barrier, it compounds the issues of dosage, and intense monitoring would be required. For this to be effective, there must be a mechanism for the removal of the active form of the drug from the brain tissue.

=== Receptor-mediated permeabilizers ===
These are drug compounds that increase the permeability of the blood–brain barrier. By decreasing the restrictiveness of the barrier, it is much easier to get a molecule to pass through it. These drugs increase the permeability of the blood–brain barrier temporarily by increasing the osmotic pressure in the blood, which loosens the tight junctions between the endothelial cells. With the tight junctions loosened, normal injection of drugs through an IV can take place and effectively enter the brain. This must be done in a very controlled environment because of the risks associated with these drugs. Firstly, the brain can be flooded with molecules floating through the blood stream that are usually blocked by the barrier. Secondly, when the tight junctions loosen, the homeostasis of the brain can also be thrown off, which can result in seizures and compromised brain function.

=== Nanoparticles ===
The most promising drug delivery systems are nanoparticle delivery systems, in which the drug is bound to a nanoparticle capable of traversing the blood–brain barrier. The most promising compound for the nanoparticles is human serum albumin (HSA). The main benefits are that particles made of HSA are well tolerated without serious side effects, and that the albumin functional groups can be utilized for surface modification that allows for specific cell uptake. These nanoparticles have been shown to traverse the blood–brain barrier carrying host drugs. To enhance the effectiveness of nanoparticles, scientists are attempting to coat the nanoparticles to make them more effective at crossing the blood–brain barrier. Studies have shown that "the overcoating of the [nanoparticles] with polysorbate 80 yielded doxorubicin concentrations in the brain of up to 6 μg/g after i.v. injection of 5 mg/kg", as compared to no detectable increase after an injection of the drug alone or of the uncoated nanoparticle. This is very new science and technology, so the real effectiveness of this process is not yet fully understood. However young the research is, the results are promising, pointing to nanotechnology as a way forward in treating a variety of brain diseases.

=== Loaded microbubble-enhanced focused ultrasound ===
Microbubbles are small "bubbles" of monolipids that are able to pass through the blood–brain barrier. They form a lipophilic bubble that can easily move through the barrier. One barrier to this approach, however, is that these microbubbles are rather large, which prevents their diffusion into the brain.
This is counteracted by focused ultrasound. The ultrasound increases the permeability of the blood–brain barrier by causing interference in the tight junctions in localized areas. Combined with the microbubbles, this allows a very specific area of diffusion, because the microbubbles can only diffuse where the ultrasound is disrupting the barrier. The hypothesized usefulness of these lies in the possibility of loading a microbubble with an active drug to diffuse through the barrier and target a specific area. There are several important factors in making this a viable solution for drug delivery. The first is that the loaded microbubble must not be substantially larger than the unloaded bubble; this ensures that the diffusion will be similar and the ultrasound disruption will be enough to induce diffusion. A second factor is the stability of the loaded microbubble, that is, whether the drug is fully retained in the bubble or leaks out. Lastly, it must be determined how the drug is to be released from the microbubble once it passes through the blood–brain barrier. Studies have shown the effectiveness of this method for getting drugs to specific sites in the brain in animal models.

== See also ==
Retrometabolic drug design

== References ==
Wikipedia/Drug_delivery_to_the_brain
Targeted drug delivery is one of many ways researchers seek to improve drug delivery systems' overall efficacy, safety, and precision. Within this medical field is a specialized form of drug delivery called chemotactic drug targeting. By using chemical agents to help guide a drug carrier to a specific location within the body, this innovative approach seeks to improve precision and control during the drug delivery process, decrease the risk of toxicity, and potentially lower the required medical dosage. The general components of the conjugates are designed as follows: (i) a carrier, which often also promotes internalization into the cell; (ii) chemotactically active ligands acting on the target cells; (iii) the drug to be delivered in a selective way; and (iv) a spacer sequence which joins the drug molecule to the carrier and whose enzyme-labile moiety makes possible intracellular, compartment-specific release of the drug. With careful selection of the chemotactic component of the ligand, not only can chemoattractant character be exploited; chemorepellent ligands are also valuable, as they can keep away cell populations that would degrade the drug-containing conjugate. In a larger sense, chemotactic drug targeting has the potential to improve cancer, inflammation, and arthritis treatment by taking advantage of the difference in environment between the target site and its surroundings. Therefore, this article aims to provide a brief overview of chemotactic drug targeting, the principles behind the approach, possible limitations and advantages, and its application to cancer and inflammation.

== Importance of chemotaxis in chemotactic drug targeting ==
In general terms, chemotaxis is a biological process whereby living entities, such as cells or organisms, detect, maneuver, and react in response to a chemical signal in their environment. Such a phenomenon is critical for many biological processes, including but not limited to wound healing, detection of food, and avoidance of many toxins. Chemotaxis also plays an essential role in several diseases, such as tumor metastasis, the recruitment of T-lymphocytes during inflammation, and HIV-1 entry into T cells. At the core of chemotaxis are specialized sensors called chemoreceptors, which allow an organism to detect chemical molecules within its environment and respond accordingly. Such chemical molecules are known as chemoattractants or chemorepellents, which attract or repel the organism towards or away from the source of the chemical signal, respectively. Thus, with this natural process of chemotaxis in mind, researchers have sought to apply the same phenomenon to targeted drug delivery, a medical technique aimed at delivering drugs to a specific cell, tissue, or organ within the body while minimizing disruptive effects on healthy tissue. By using chemotaxis to help guide the drug delivery process, researchers aim to reduce toxicity by avoiding healthy tissues, improve drug efficacy by focusing only on the intended site, and decrease drug dosage by delivering the drug directly to its target rather than throughout the whole body.

== Chemotactic drug targeting systems ==
Chemotactic drug delivery systems are an emerging field of drug delivery that aims to apply the natural phenomenon of chemotaxis in guiding and delivering a drug to a specific tissue or cell within the body. Thus, similar to how organisms use chemotaxis, researchers have designed drug delivery systems to detect, maneuver, and react to chemical molecules released by a desired cell or its surrounding area.
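A minimal numerical sketch of that detect-and-maneuver loop is shown below: a simulated carrier repeatedly samples a chemoattractant concentration on either side of its position and steps toward the higher reading. The Gaussian concentration field, step size, and sensing offset are arbitrary illustrative choices, not a model of any system described in this article.

```python
# Minimal 1-D sketch of chemotaxis-style gradient following.
import math

def concentration(x, source=10.0, width=3.0):
    """Chemoattractant level peaking at the (hypothetical) target site."""
    return math.exp(-((x - source) / width) ** 2)

def follow_gradient(x=0.0, step=0.25, sense=0.1, n_steps=100):
    for _ in range(n_steps):
        left, right = concentration(x - sense), concentration(x + sense)
        x += step if right > left else -step  # move up the gradient
    return x

print(round(follow_gradient(), 2))  # ends near the source at x = 10
```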
=== Microdroplets ===
Recent progress in the field of microfluidics has led to the development of microdroplets, a new drug-delivery system that uses uniform droplets to deliver drugs to specific locations within the body. These microdroplets allow researchers to load drugs during the polymerization step of their formation and provide variations in porosity, which can control the time it takes to release a therapeutic payload. Thus, by using the natural process of chemotaxis, researchers aim to guide these tiny droplets using chemical gradients released by a specific cell, tissue, or organ within the body. A few examples of microdroplet systems that use chemotaxis are self-propelling, ionic liquid-based, and synthetic-based droplets. These microdroplet-based drug delivery systems offer several advantages over traditional drug delivery methods, which are discussed later in the advantages and limitations section of this article. Overall, the development of microdroplet-based drug delivery systems using the phenomenon of chemotaxis is just one of many avenues that could potentially revolutionize the field of medicine and targeted drug delivery.

=== Protocells ===
Another drug delivery system that has shown potential for chemotactic applicability is protocells. In general, protocells are artificial cells that mimic living cells but cannot reproduce or undergo genetic mutation as living cells do. Moreover, protocells combine the advantages of liposomes with those of mesoporous silica nanoparticles. These advantages include but are not limited to stability, a large capacity for various cargos, low toxicity and immunogenicity, and the ability to circulate in the blood for long periods. Thus, researchers aim to create a tunable chemotactic protocell that can move towards or away from a chemical signal. Indeed, researchers have devised a way to use the enzymes catalase, urease, and ATPase to move a protocell closer to or further away from its reactant, giving them control over the direction and movement of these protocells. Overall, the development of chemotactically controlled protocells holds great promise for the targeted delivery of drugs to specific areas of the body, potentially increasing treatment efficacy while minimizing side effects. However, more research is needed to fully understand the capabilities and limitations of protocells as drug delivery systems and to optimize their design and functionality for specific applications.

=== Biological and bio-hybrid drug carriers ===
Finally, biological and bio-hybrid drug carriers have shown potential for chemotactic applications. In general, these systems are inspired by microorganisms or cells, whose surface, shape, texture, and movement the drug delivery systems are designed to mimic. One phenomenon that has become increasingly popular in improving the movement and release of bio-hybrid drug carriers is chemotaxis. Indeed, thanks to their natural chemotactic sensing properties, bacteria can be used to locate a tumor, carry a therapeutic payload to the site, and release that drug in a controlled manner. Researchers can also genetically modify these bacteria to produce a specific protein, such as anti-tumor cytotoxins for cancer treatment. Yet this is not to say that they don't come with their own set of challenges and limitations.
For one, the genetic modifications of the bacteria used can be undermined by unforeseen mutations, leading to a decrease in the efficacy of the drug and the drug carrier. Moreover, the therapeutic proteins produced may fold incompletely, decreasing the drug's effectiveness or causing unforeseen side effects. Generally speaking, using bacteria may provide some advantages, but further research and development are still needed to address their limitations. Another example of bio-hybrid drug carriers is human cells, such as macrophages, which offer compatibility with the human immune system and a simple way to load drugs. Leukocytes demonstrate great promise because tumor cells secrete large amounts of chemoattractants when undergoing inflammation. This secretion of chemoattractants naturally attracts leukocytes, such as macrophages, to the tumor cell location. Thus, with their well-known chemotactic homing behavior toward sites of inflammation or pathogens in mind, researchers can manipulate leukocytes to carry and deliver a therapeutic payload to the tumor site. However, this is not to say that biological and bio-hybrid drug carriers do not have challenges and limitations of their own. For example, leukocytes cannot penetrate deeply into tumors, have a low capacity for carrying drugs, and slow down as the tumor shrinks. Thus, similar to bacterial drug carriers, further research and development are still needed to address their limitations and improve the overall drug delivery system.

== Applications ==
The applications of chemotactic drug delivery systems include but are not limited to cancer therapy, wound healing, and inflammation. The ability to target specific cells and locations within the body through chemical cues has opened up new avenues for the field of drug delivery, allowing for increased drug efficacy and reduced harmful side effects.

=== Cancer ===
Cancer is not just one disease but a group of diseases involving abnormal cell growth and the metastasis of such cells to other body parts. There are also several types of cancer, each with its own distinctive characteristics and stages that may require different treatment or targeted drug delivery approaches, and even these treatments have their own advantages and disadvantages. Thus, researchers have constantly been developing new and innovative cancer treatments, including chemotactic drug delivery. For example, as mentioned earlier in this article, researchers have sought to use microdroplets, protocells, and biological and bio-hybrid drug carriers to deliver drugs to cancer cells more effectively while reducing unwanted side effects. The justification for using such systems, guided by chemotaxis, is that the environment inside a tumor has a higher resting temperature, a higher peroxide concentration, a lower pH, and a lower oxygen concentration than its surrounding tissue. With these unique conditions, researchers can exploit chemotactic drug delivery to target tumor cells directly, avoiding healthy tissues, reducing toxicity, improving drug efficacy, and decreasing drug dosage.

=== Inflammation ===
Inflammation is the body's response to foreign objects, irritants, germs, and pathogens. Although such a response is normal in some cases, if left untreated, chronic inflammation can lead to muscle degeneration, gastrointestinal disorders, and some types of cancer.
While most treatments, such as anti-inflammatory drugs and steroid injections, can help relieve symptoms, they often fail to address the condition's underlying cause. Therefore, researchers have sought to explore new and innovative approaches to inflammation treatment, such as chemotactic drug delivery. One promising drug delivery system was based on engineered neutrophils that targeted inflammation sites through the unique properties of chemotaxis. This approach took advantage of the difference in iNOS and ROS concentrations between inflammatory disease sites and normal tissues. By doing so, this drug delivery system makes it possible to target areas of inflammation, increase drug efficacy, and minimize damage to the surrounding tissue. Moreover, because this concentration gradient is ubiquitous in the microenvironment of inflammatory diseases, common drug-targeting limitations such as individual differences can be avoided. Another example of an innovative drug delivery system that uses the property of chemotaxis is leukocytes. During inflammation, the adhesion molecules on a cell are overproduced. Under these unique conditions, researchers can modify leukocytes to quickly detect the cell, attach to its surface, and deliver a therapeutic payload. Overall, many promising therapies and drug delivery systems are being developed to target inflammation more effectively. Chemotactic drug delivery systems are just one of many promising avenues that seek to increase target-site specificity, decrease the needed drug dosage, reduce toxicity, and increase drug efficacy.

== Advantages and limitations ==
While this emerging field of drug delivery shows excellent promise in targeting specific cells and locations within the body, understanding current challenges and drawbacks can allow researchers to optimize design, development, and delivery to improve the overall outcome of their medical treatment.

=== Microdroplets ===
Advantages
By using uniform droplets to deliver therapeutic payloads to specific locations in the body, researchers can achieve greater precision and control over drug delivery while also minimizing toxicity and harmful side effects. For example, these droplets can be quickly loaded during the polymerization process and can be varied in porosity to control the time it takes to release a drug. Microdroplet-based drug delivery also has a significant advantage over traditional systems in that it can minimize side effects, reduce the need for invasive procedures, and even improve a drug's efficacy. Overall, microdroplet-based drug delivery systems show great promise for revolutionizing medicine, with significant potential for targeted drug delivery.

Limitations
Nevertheless, it is essential to note some common challenges associated with microdroplet-based drug delivery systems, including their biocompatibility, toxicity, and scalability. The biocompatibility and toxicity of microdroplets are essential to consider because they can affect a drug's safety and overall efficacy, causing unwanted side effects and possibly death. Scalability is another crucial challenge because it can lead to increased manufacturing costs, problems with quality control, and limitations in the equipment used. All in all, even with great promise to revolutionize targeted drug delivery, researchers must keep in mind the biocompatibility, toxicity, and scalability of microdroplet-based drug delivery systems when using them.
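The porosity/release-time trade-off noted in the microdroplet advantages above can be caricatured with a one-line kinetic model. This assumes simple first-order release whose rate constant scales with porosity; all values are invented for illustration, and real microdroplet kinetics are more complex.

```python
# Toy model: higher droplet porosity -> faster first-order release.
import math

def fraction_released(t_hours, porosity, k_base=0.05):
    """First-order release with an effective rate constant k = k_base * porosity."""
    k = k_base * porosity  # effective rate constant (1/h)
    return 1.0 - math.exp(-k * t_hours)

for porosity in (0.2, 0.5, 0.9):
    print(porosity, round(fraction_released(24, porosity), 2))
# Lower-porosity droplets release their payload more slowly.
```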
=== Protocells ===
Advantages
By and large, protocells are advantageous because they can store more drug, be loaded faster than other nanomedicine delivery systems, and are more stable than liposomes. By storing more drug, researchers can reduce the quantity of medication that needs to be administered, potentially reducing side effects and toxicity. In like manner, controlling the direction and movement of a drug also reduces the amount of medication needed, increases the speed of delivery, and allows for the controlled release of high-concentration multicomponent cargo within cancer cells. Finally, the stability of protocells is vital because it ensures that the drugs remain effective and do not degrade before reaching their target. Overall, the development of protocells as a drug delivery system, coupled with chemotactic properties, holds great promise for targeted drug delivery.

Limitations
One fundamental limitation of protocells concerns their modularity and versatility, which must be accounted for when assessing clinical applications. Modularity and versatility are essential considerations for targeted drug delivery because they enable the customization and adaptation of drug delivery systems to meet specific clinical needs; without them, it is hard to tailor protocells to different therapeutic applications and particular populations. Another critical challenge, especially when using enzymes to maneuver the protocell, is that motility decreases when the enzymes become oversaturated with the chemical stimulus. Reduced motility is a problem because motility is essential for targeted drug delivery efficiency; losing it limits the system's effectiveness and increases the risk of off-target effects. Therefore, further research is still needed to improve our understanding of protocells and their potential clinical applications.

=== Biological and bio-hybrid drug carriers ===
Advantages
Some advantages of biological and bio-hybrid drug carriers include but are not limited to compatibility with the human immune system, the potential to be genetically modified, and the capacity to hold drugs. An especially important advantage is their natural homing to inflammation and tumor sites, which can enhance the targeted delivery of drugs, minimizing the risk of off-target effects and reducing the required dosage. Additionally, these systems have the potential to increase drug stability and prolong circulation time in the body, improving drug efficacy and reducing the frequency of dosing. Overall, these advantages make biological and bio-hybrid drug carriers promising for developing more effective and targeted drug delivery systems.

Limitations
One limitation of biological and bio-hybrid drug carriers, especially leukocytes, is that they have a low drug-carrying capacity. A limited carrying capacity means that researchers have to use more medication to achieve the desired therapeutic effect, increasing the risk of adverse side effects and the cost of the treatment. Moreover, their short lifespan can limit their potential use for long-term drug delivery applications [35]. Coupling these aspects with an inability to penetrate deep into tumors and the potential for genetic mutations can pose significant challenges for future drug delivery systems.
Therefore, despite their advantages, further research and development are needed to address current limitations and improve their clinical feasibility.

== Conclusion ==
Generally speaking, chemotactic drug targeting is a drug delivery strategy with promising avenues for treating diseases such as cancer and inflammation. This approach mimics the biological process of chemotaxis, which organisms use to detect, maneuver, and react to chemical signals in their environment. By applying this technique to targeted drug delivery, researchers aim to create drugs that can precisely reach their intended targets, minimizing the potential for side effects, improving drug efficacy, and decreasing drug dosage. Some examples include but are not limited to microdroplets, protocells, biological and bio-hybrid drug carriers, leukocytes, and neutrophils. While chemotactic drug targeting holds great promise for drug delivery, there are key advantages and limitations that must be considered. One main advantage is that these systems can precisely target specific cells, tissues, or organs within the body while minimizing disruptive effects on healthy tissue. Moreover, by delivering the drug directly to the desired target, researchers can effectively reduce the required drug dosage. However, some limitations of chemotactic drug targeting include issues with biocompatibility, drug-carrying capacity, and the lifespan of specific carriers. Another major challenge with this approach is loss of motility when either the chemical stimuli diminish or the attached enzymes become oversaturated; this can limit the effectiveness of the drug delivery system and may require additional modifications to improve its performance. Thus, although these approaches have shown great promise, more research is still needed to fully understand chemotaxis mechanisms and optimize this property for targeted drug delivery strategies.

== References ==

== External links ==
Chemotaxis Archived 30 July 2014 at the Wayback Machine
Wikipedia/Chemotactic_drug-targeting
Gated drug delivery systems are a method of controlled drug release built around physical molecules that cover the pores of drug carriers until an external stimulus triggers their removal. Gated drug delivery systems are a recent innovation in the field of drug delivery and are a promising candidate for future drug delivery systems that can target certain sites without leakage or off-target effects in normal tissues. This new technology has the potential to be used in a variety of tissues over a wide range of disease states and has the added benefit of protecting healthy tissues and reducing systemic side effects.

== Uses ==
Gated drug delivery systems are an emerging concept that has drawn a lot of attention for its wide variety of potential applications in the medical field. The abnormal physiological conditions found within the tumor environment provide a breadth of options that could be used for externally stimulating these systems to release their cargo. Gated systems in cancer therapy also have the added effect of reducing off-target effects and decreasing leakage and delivery of drug to normal tissues. Another use for this technology could be antibacterial regulation. These systems could be used to limit bacterial resistance as well as accumulation of antibiotics within the body. Antibacterial regulation potentially opens the door to using gated systems in theranostics, in which the system is able to detect an issue and then provide a therapeutic response. There is also potential for inhalable pulmonary drug delivery. With an increase in respiratory disease cases, the need for a drug delivery system that can be targeted to the lungs and provide sustained release is becoming more pressing. This type of system would be applicable to patients experiencing asthma, pneumonia, obstructive pulmonary disease, and a number of other lung-related diseases.

== History ==
The history of gated drug delivery systems starts in the mid-1960s, when the concept of zero-order controlled drug delivery was first conceived. Researchers raced to find a drug delivery platform that would provide perfectly sustained drug release. These efforts were initially on the macroscopic level, with some of the first controlled drug delivery (CDD) devices being an ophthalmic insert, an intrauterine device, and a skin patch. In the 1970s the drug delivery field shifted from macroscopic systems and started to delve into microscopic systems. Ideas such as steroid-loaded poly(lactic-co-glycolic acid) (PLGA) microparticles came into existence. The next major jump came in the 1980s in the form of nanotherapeutics. Several major technological advances allowed this next generation of drug delivery systems to come along: PEGylation, active targeting, and the enhanced permeation and retention (EPR) effect. Some of the issues seen with earlier renditions of nanoparticle drug delivery were off-target effects from drug being delivered to normal tissue, delivery systems that were not highly controllable, and suboptimal accumulation of drug in the targeted area. This is when the development of "smart drug delivery" originated. Encapsulated within the idea of smart drug delivery is the use of gated delivery systems. Researchers discovered that certain materials could be loaded and capped to prevent premature drug release, and that the caps could subsequently be removed using different external stimuli. This created a class of drug delivery systems able to solve a number of problems exhibited by conventional nanoparticle drug delivery systems. These smart drug delivery systems are able to deliver the drug with minimal leakage, can be actively or passively targeted to different areas within the body, and will only release drug in the presence of certain triggers, creating a sustained local response and accumulation of drug at the disease area.
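The cap-until-triggered behavior just described can be summarized schematically. The class below is an illustrative invention (the stimulus names and cargo are placeholders), not a model of any specific laboratory system.

```python
# Schematic model of a gated carrier: cargo stays capped until a matching
# external stimulus removes the gate.

class GatedCarrier:
    def __init__(self, cargo: str, trigger: str):
        self.cargo = cargo
        self.trigger = trigger  # e.g. "acidic_pH", "GSH", "enzyme"
        self.capped = True

    def expose(self, stimulus: str):
        """Uncap only when the specific trigger is encountered."""
        if stimulus == self.trigger:
            self.capped = False

    def release(self):
        return None if self.capped else self.cargo

carrier = GatedCarrier("doxorubicin", trigger="acidic_pH")
carrier.expose("neutral_pH")  # healthy tissue: gate stays closed
print(carrier.release())      # None -> no leakage
carrier.expose("acidic_pH")   # tumor microenvironment
print(carrier.release())      # 'doxorubicin'
```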
== Scaffold fabrication ==
There are many different materials and fabrication methods that can be used to produce gated drug delivery scaffolding. In general, porous materials, such as mesoporous silica nanoparticles, are used because of their expansive surface area, large loading capacity, and porous structures. These characteristics make it possible to load a variety of molecules that vary greatly in size.

=== Mesoporous silica nanoparticles ===
Mesoporous silica nanoparticles (MSNs) are considered to be one of the most widely used systems for drug delivery. MSNs have some of the characteristic features of gated systems, such as being porous and having a high loading capacity, but they also exhibit some special features such as increased biocompatibility and chemical inertness. These delivery systems are composed of two parts: the inorganic scaffold and the molecular gates. In a study conducted by the Kong lab at Deakin University in Australia, the researchers generated MSNs by adding tetraethyl orthosilicate to aqueous cetyltrimethylammonium bromide. The MSNs they created had a surface area of 363 m^2/g, an average pore size of 2.59 nm, and a pore volume of 0.33 cm^3/g.

=== Mesoporous carbon nanoparticles ===
Mesoporous carbon nanoparticles (MCNs) are similar to MSNs. They have a similar structure and share key physical properties and characteristics. However, it has been found that MCNs can exhibit lower toxicity than MSNs. To date, not much research has been done on MCNs. The Du lab, based in Nanjing, China, made MSN templates using the common method of combining CTAB and TEOS. The researchers then dispersed the MSN templates in a glucose solution and autoclaved the mixture to produce a reaction. The product was then subjected to carbonization at 900 degrees Celsius, generating the MCNs. The researchers found that the MCNs had a surface area of 1575 m^2/g, a pore size of 2.2 nm, and an average diameter of 115 nm.

== External stimuli ==
There are a number of external triggers that can be used to release cargo from gated delivery systems. Examples include pH-, redox-, enzyme-, light-, temperature-, magnetic-, ultrasound-, and small-molecule-responsive gated systems.

=== pH ===
One of the most common triggers for drug delivery systems is pH. This stimulus is abundantly used in cancer therapies because the tumor microenvironment is acidic. The development of pH-triggered systems meant that a drug could be introduced to the body but not deployed until it encountered the tumor microenvironment, which is likely why pH-triggered systems are so common. There are a few approaches to making these systems. One method is using linkages that are cleaved at certain pH levels: as the system enters an acidic environment, the linkages that hold the gates onto the porous scaffold are hydrolyzed and the cargo can be released. Examples of pH-labile linkages are imines, amides, esters, and acetals.
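Both the cleavable-linkage approach above and the protonation approach described next hinge on ionization that shifts with pH. As a rough quantitative sketch, the Henderson–Hasselbalch relation shows how the protonated fraction of an ionizable group changes between blood pH and an acidic tumor microenvironment; the pKa of 6.8 is an invented illustrative value, not one taken from a gating study.

```python
# Hedged sketch: protonated fraction of an ionizable gate linkage via the
# Henderson-Hasselbalch relation. pKa = 6.8 is an invented value.

def protonated_fraction(pH: float, pKa: float = 6.8) -> float:
    return 1.0 / (1.0 + 10 ** (pH - pKa))

for pH in (7.4, 6.5, 5.0):
    print(pH, round(protonated_fraction(pH), 2))
# 7.4 -> ~0.20 (blood), 6.5 -> ~0.67 (mildly acidic tumor), 5.0 -> ~0.98
```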
=== Redox === Redox reactions are also used to trigger gated delivery systems. Within cells and the bloodstream there are several reducing agents that can be used to trigger drug release in gated systems. The most common reducing agent used in gated delivery systems is glutathione (GSH), which has been determined to be the most abundant reducing agent in the body. GSH concentrations also differ significantly between the intracellular and extracellular environments, making it easier to target one environment without triggering release in the other. Furthermore, GSH is found in higher concentration within tumor cells, providing another way to achieve sustained, local release of drug at tumor sites. There are generally two different mechanisms for this type of gated system. One is cleavage of disulfide bonds. The other is cleavage of bonds by reactive oxygen species (ROS); bonds that can be cleaved by ROS are generally thioketals, ketals, and diselenides. === Enzyme === Enzyme-responsive gated materials are another class of gated delivery systems. In these scenarios, enzymes trigger release of the gates from the scaffolds of drug delivery systems. The mechanism for this type of gate is that linkages are used that can be hydrolyzed by select enzymes. The two most popular choices are proteases and hyaluronidase. An advantage of using enzyme-responsive triggers is their high substrate specificity: the enzymes act on their targets with high selectivity, even under mild conditions. Another advantage of this system is that enzymes are found throughout the entire body and act in almost all biological processes, so the delivery system could potentially be activated in any part of the body and at many points within a single process. One study by the Martinez-Manez lab in Valencia, Spain aimed to generate MSNs linked to poly-L-glutamic acid (PGA) gates through peptide bonds. The trigger for this system was the presence of a lysosomal proteolytic enzyme (a protease), in this case pronase. The researchers found that in the absence of pronase the system released less than 20% of its cargo in 24 hours, whereas in the presence of pronase 90% of the cargo was released within 5 hours. === Magnetic and temperature === Within the topic of gated drug delivery systems, utilizing magnetic forces generally goes hand in hand with temperature stimulus. In the phenomenon of magnetic hyperthermia, superparamagnetic nanoparticles generate heat as they repeatedly reorient in an alternating magnetic field (AMF). This concept has been utilized within the drug delivery field: gatekeepers are magnetically linked to the scaffolding and, upon the application of heat, reorient and allow for the release of drug. This particular method has not been researched as heavily given the drawback that high energy is needed to produce the AMF and uncap the system. However, the Vallet-Regi lab, based in Madrid, Spain, investigated the possibility of using magnetic gates bound to the scaffold using DNA.
The lab generated oligonucleotide-modified superparamagnetic mesoporous silica nanoparticles. They capped the scaffolding using iron oxide nanoparticles carrying DNA complementary to the scaffold's oligonucleotide sequence. The lab found that the system was capped by the two DNA strands hybridizing into a double strand. Upon heating the system with an AMF, the DNA duplexes melted apart, the system became uncapped, and the drug could be released. Furthermore, the lab found that this linkage was reversible: as the temperature was reduced, the DNA re-hybridized with its complementary half. This study illustrated the possibility of a drug delivery system that can be remotely triggered and exhibit an on–off switch. === Electrostatic === Researchers started investigating electrostatic gating because some triggered drug delivery systems on the market are not entirely feasible, the main complaint being that continual external stimulation is required for the therapy to function. To address this complaint, the Grattoni lab in Houston, Texas worked on a drug delivery system that utilized electrostatic gating. The researchers generated a silicon carbide-coated nanofluidic membrane that provided controlled release of a drug when a buried electrode was exposed to low-intensity voltage. The researchers found that their device successfully released drug, and did so in such a way that drug release was proportional to the applied voltage. They also found that the device was chemically inert, making it feasible for long-term implantation. == References ==
Wikipedia/Gated_drug_delivery_systems
Stretch-triggered drug delivery is a method of controlled drug delivery stimulated by mechanical forces. The most commonly used materials for stretch-triggered autonomous drug release systems are hydrogels and elastomers. This method of drug delivery falls in the category of stimuli-responsive drug delivery systems, which include pH-, temperature-, and redox-responsive systems. Mechanical forces occur naturally throughout the human body; therefore, stretch-triggered drug delivery systems may be used to autonomously deliver medications to the body when needed. The use of autonomous drug release systems reduces outcomes such as delays in receiving treatment and inaccurate dosages. Autonomous drug release systems induced by stretch apply to drugs such as antimicrobial agents, cardiovascular medication, and anticancer drugs. Theranostic agents are also applicable to this drug delivery system, allowing for simultaneous treatment and diagnosis of diseases. == Types of Mechanical Stimuli == Compression, tension, and shear are the three main types of mechanical stimuli. Compression occurs when an object experiences forces from two sides directed toward each other, causing it to become compacted. Tension occurs when an object experiences forces from two sides directed away from each other, causing it to stretch. Shear occurs when an object experiences parallel forces acting in opposite directions. Ultrasound and magnetic fields are also examples of mechanical forces. Depending on the mechanical stimulus, a different material may improve the desired results. The human body is exposed to mechanical forces on or within bones, organs, joints, blood vessels, and cartilage. == Naturally Occurring Mechanical Stimuli == There are naturally occurring mechanical forces in the human body, such as increased stress within blood vessels due to atherosclerotic plaque. The naturally occurring mechanical forces in the body enable the self-administration of medications. Motion-triggered drug delivery of anticancer therapy is achievable through the natural forces generated by organ movements. Research has been conducted on contact lenses that are pre-loaded with glaucoma medication that is released by the stretch of the contact lens during natural eye movements. The movement of joints has been used to trigger the release of antibacterial drugs into the body. == Applications == Stretch-triggered drug delivery has a variety of applications. Intracellular transfection can be achieved through drug-delivery systems that are responsive to mechanical stimuli. Drug release can be controlled by triggers due to forces experienced by the body from daily motions. Mechanical triggers have been applied to polymers to release 2-furylcarbinol derivatives, which then trigger the release of molecular cargo. An application of stretch-triggered drug delivery systems is the delivery of chemotherapy triggered by esophageal stent expansion. The incorporation of several drugs into stretch-triggered autonomous drug release systems is also a possibility, allowing drugs to be released by the same or different signals. Stretch-triggered drug delivery is also applied to nanoparticle-loaded stretchable elastomers that release drugs due to their expanded surface area. Stretch-triggered drug delivery has been applied to the cardiovascular system through the use of drug-loaded hydrogels that lead to increased vascularization.
A research study demonstrated that quinine-loaded hydrogels restricted the growth of bacteria as a result of exposure to stretching. == Limitations == Due to the limited research on mechanical force-responsive drug delivery systems, the effects of mechanical forces on cells remain unclear. Current research on stretch-triggered drug delivery systems mostly involves in vitro studies; therefore, extensive in vivo studies are required to further improve knowledge in this subject. Limitations of current technology include the release of drugs in the absence of tensile triggers and restrictions on the agents that can be loaded. Transdermal drug delivery systems may include stretch-triggered technology, but these devices are typically used for long-term administration, making drug reloading a topic of concern. Environmental impact is also a concern when it comes to transdermal drug delivery, due to the materials' inability to biodegrade and the associated electronic waste. An area of interest regarding drug delivery devices that use naturally occurring triggers is the variability of physiological parameters between people, which makes it difficult to set a standard for what will trigger this technology. == References ==
Wikipedia/Stretch-triggered_drug_delivery
A self-microemulsifying drug delivery system (SMEDDS) is a drug delivery system that uses a microemulsion achieved by chemical rather than mechanical means, that is, by an intrinsic property of the drug formulation rather than by special mixing and handling. It employs the familiar ouzo effect displayed by anethole in many anise-flavored liquors. Microemulsions have significant potential for use in drug delivery, and SMEDDS (including so-called "U-type" microemulsions) are the best of these systems identified to date. SMEDDS are of particular value in increasing the absorption of lipophilic drugs taken by mouth. SMEDDS in research or development include formulations of the drugs anethole trithione, oridonin, curcumin, vinpocetine, tacrolimus, mitotane, berberine hydrochloride, nobiletin, piroxicam, the anti-malaria drugs beta-artemether and halofantrine, the anti-HIV drug UC 781, nimodipine, exemestane, the anti-cancer drugs 9-nitrocamptothecin (9-NC), paclitaxel, and seocalcitol, alprostadil (intraurethral use), probucol, itraconazole, fenofibrate, acyclovir, simvastatin, xibornol, silymarin, alpha-asarone, enilconazole, puerarin (an isoflavone found in Pueraria lobata), atorvastatin, heparin, carvedilol, ketoconazole, gentamicin, labrasol, flurbiprofen, celecoxib, danazol, cyclosporine, and idebenone. Actual applications of self-microemulsifying drug delivery systems (SMEDDS) remain rare. The first drug marketed as a SMEDDS was cyclosporin, which had significantly improved bioavailability compared with the conventional solution. In the last decade, several SMEDDS loaded with antiviral drugs (ritonavir, saquinavir) were tested for treatment of HIV infection, but the relative improvement in clinical benefit was not significant. The SMEDDS formulation of ritonavir (soft capsules) has been withdrawn in some countries. In recent years SMEDDS have also been utilized for the oral administration of biologics. Due to ion pairing with appropriate surfactants, these mainly hydrophilic macromolecular drugs can be incorporated in the lipophilic phase of SMEDDS. Provided that the oily droplets formed in the gut are sufficiently stable towards lipases, can permeate the mucus gel layer in sufficient quantities, and exhibit permeation-enhancing properties, the oral bioavailability of various biologics can be strongly improved. SMEDDS offer numerous advantages: spontaneous formation, ease of manufacture, thermodynamic stability, and improved solubilization of bioactive materials. Improved solubility contributes to faster release rates and greater bioavailability. For many drugs taken by mouth, faster release rates improve the drug's acceptance by consumers. Greater bioavailability means that less drug need be used; this may lower cost, and does lower the stomach irritation and toxicity of drugs taken by mouth. For oral use, SMEDDS may be formulated as liquids or solids, the solids packaged in capsules or tablets. Limited studies comparing these report that in terms of bioavailability liquid SMEDDS are superior to solid SMEDDS, which are superior to conventional tablets. Liquid SMEDDS have also shown value in injectable (IV and urethral) formulations and in a topical (oral) spray. == See also == Excipient == References == == Further reading == Singh, A.; Singh, V.; Juyal, D.; Rawat, G. (2015). "Self emulsifying systems: A review". Asian Journal of Pharmaceutics. 9 (1): 13. doi:10.4103/0973-8398.150031. Cherniakov, I.; Domb, A. J.; Hoffman, A. (2015).
"Self-nano-emulsifying drug delivery systems: an update of the biopharmaceutical aspects". Expert Opinion on Drug Delivery. 12 (7): 1121–1133. doi:10.1517/17425247.2015.999038. PMID 25556987. S2CID 207490348. Weerapol, Y.; Limmatvapirat, S.; Takeuchi, H.; Sriamornsak, P. (2015). "Fabrication of spontaneous emulsifying powders for improved dissolution of poorly water-soluble drugs". Powder Technology. 271: 100–108. doi:10.1016/j.powtec.2014.10.037.
Wikipedia/Self-microemulsifying_drug_delivery_system
In pharmacology and toxicology, a route of administration is the way by which a drug, fluid, poison, or other substance is taken into the body. Routes of administration are generally classified by the location at which the substance is applied. Common examples include oral and intravenous administration. Routes can also be classified based on where the target of action is. Action may be topical (local), enteral (system-wide effect, but delivered through the gastrointestinal tract), or parenteral (systemic action, but delivered by routes other than the GI tract). Route of administration and dosage form are aspects of drug delivery. == Classification == Routes of administration are usually classified by application location (or exposition). The route or course the active substance takes from the application location to the location where it has its target effect is usually a matter of pharmacokinetics (concerning the processes of uptake, distribution, and elimination of drugs). Exceptions include the transdermal and transmucosal routes, which are still commonly referred to as routes of administration. The location of the target effect of active substances is usually a matter of pharmacodynamics (concerning, for example, the physiological effects of drugs). An exception is topical administration, which generally means that both the application location and the effect thereof are local. Topical administration is sometimes defined as both a local application location and a local pharmacodynamic effect, and sometimes merely as a local application location regardless of the location of the effects. === By application location === ==== Enteral/gastrointestinal route ==== Administration through the gastrointestinal tract is sometimes termed enteral or enteric administration (literally meaning 'through the intestines'). Enteral/enteric administration usually includes oral (through the mouth) and rectal (into the rectum) administration, in the sense that these are taken up by the intestines. However, uptake of drugs administered orally may also occur in the stomach, and as such gastrointestinal (along the gastrointestinal tract) may be a more fitting term for this route of administration. Furthermore, some application locations often classified as enteral, such as sublingual (under the tongue) and sublabial or buccal (between the cheek and gums/gingiva), are taken up in the proximal part of the gastrointestinal tract without reaching the intestines. Strictly enteral administration (directly into the intestines) can be used for systemic administration as well as local (sometimes termed topical) administration, such as in a contrast enema, whereby contrast media are infused into the intestines for imaging. However, for the purposes of classification based on location of effects, the term enteral is reserved for substances with systemic effects. Many drugs are taken orally as tablets, capsules, or drops. Administration methods directly into the stomach include those by gastric feeding tube or gastrostomy. Substances may also be placed into the small intestines, as with a duodenal feeding tube and enteral nutrition. Enteric-coated tablets are designed to dissolve in the intestine, not the stomach, because the drug present in the tablet causes irritation in the stomach. The rectal route is an effective route of administration for many medications, especially those used at the end of life. The walls of the rectum absorb many medications quickly and effectively.
Medications delivered to the distal one-third of the rectum at least partially avoid the "first pass effect" through the liver, which allows for greater bioavailability of many medications than that of the oral route. Rectal mucosa is highly vascularized tissue that allows for rapid and effective absorption of medications. A suppository is a solid dosage form suited for rectal administration. In hospice care, a specialized rectal catheter, designed to provide comfortable and discreet administration of ongoing medications, provides a practical way to deliver and retain liquid formulations in the distal rectum, giving health practitioners a way to leverage the established benefits of rectal administration. The Murphy drip is an example of rectal infusion. ==== Parenteral route ==== The parenteral route is any route that is not enteral (par- + enteral). Parenteral administration can be performed by injection, that is, using a needle (usually a hypodermic needle) and a syringe, or by the insertion of an indwelling catheter. Locations of application of parenteral administration include: Central nervous system: Epidural (synonym: peridural) (injection or infusion into the epidural space), e.g. epidural anesthesia. Intracerebral (into the cerebrum) administration by direct injection into the brain, used in experimental research of chemicals and as a treatment for malignancies of the brain; the intracerebral route can also disrupt the blood–brain barrier, reducing its ability to block substances delivered by subsequent routes. Intracerebroventricular (into the cerebral ventricles) administration into the ventricular system of the brain; one use is as a last line of opioid treatment for terminal cancer patients with intractable cancer pain. Epicutaneous (application onto the skin), which can be used both for local effect, as in allergy testing and topical local anesthesia, and for systemic effects when the active substance diffuses through the skin in a transdermal route. Sublingual and buccal medication administration is a way of giving someone medicine orally (by mouth). Sublingual administration is when medication is placed under the tongue to be absorbed by the body; the word "sublingual" means "under the tongue". Buccal administration involves placement of the drug between the gums and the cheek. These medications can come in the form of tablets, films, or sprays. Many drugs are designed for sublingual administration, including cardiovascular drugs, steroids, barbiturates, opioid analgesics with poor gastrointestinal bioavailability, enzymes and, increasingly, vitamins and minerals. Extra-amniotic administration, between the endometrium and fetal membranes. Nasal administration (through the nose) can be used for topically acting substances, as well as for insufflation of e.g. decongestant nasal sprays to be taken up along the respiratory tract. Such substances are also called inhalational, e.g. inhalational anesthetics. Intra-arterial (into an artery), e.g. vasodilator drugs in the treatment of vasospasm and thrombolytic drugs for treatment of embolism. Intra-articular, into a joint space, generally performed by joint injection; mainly used for symptomatic relief in osteoarthritis. Intracardiac (into the heart), e.g. adrenaline during cardiopulmonary resuscitation (no longer commonly performed). Intracavernous injection, an injection into the base of the penis. Intradermal (into the skin itself), used for skin testing of some allergens and for the Mantoux test for tuberculosis.
Intralesional (into a skin lesion), used for local skin lesions, e.g. acne medication. Intramuscular (into a muscle), e.g. many vaccines, antibiotics, and long-term psychoactive agents; recreationally, the colloquial term 'muscling' is used. Intraocular, into the eye, e.g. some medications for glaucoma or eye neoplasms. Intraosseous infusion (into the bone marrow) is, in effect, an indirect intravenous access, because the bone marrow drains directly into the venous system; this route is occasionally used for drugs and fluids in emergency medicine and pediatrics when intravenous access is difficult. Intraperitoneal (infusion or injection into the peritoneum), e.g. peritoneal dialysis. Intrathecal (into the spinal canal) is most commonly used for spinal anesthesia and chemotherapy. Intrauterine. Intravaginal administration, in the vagina. Intravenous (into a vein), e.g. many drugs, total parenteral nutrition. Intravesical infusion is into the urinary bladder. Intravitreal, into the vitreous humour of the eye. Subcutaneous (under the skin), generally taking the form of subcutaneous injection, e.g. with insulin. Skin popping is a slang term that includes subcutaneous injection, and is usually used in association with recreational drugs. In addition to injection, it is also possible to slowly infuse fluids subcutaneously in the form of hypodermoclysis. Transdermal (diffusion through the intact skin for systemic rather than topical distribution), e.g. transdermal patches such as fentanyl in pain therapy, nicotine patches for treatment of addiction, and nitroglycerine for treatment of angina pectoris. Perivascular administration (perivascular medical devices and perivascular drug delivery systems are conceived for local application around a blood vessel during open vascular surgery). Transmucosal (diffusion through a mucous membrane), e.g. insufflation (snorting) of cocaine, sublingual, i.e. under the tongue, sublabial, i.e. between the lips and gingiva, and oral spray or vaginal suppository for nitroglycerine. === Topical route === The definition of the topical route of administration sometimes states that both the application location and the pharmacodynamic effect thereof are local. In other cases, topical is defined as applied to a localized area of the body or to the surface of a body part, regardless of the location of the effect. By this definition, topical administration also includes transdermal application, where the substance is administered onto the skin but is absorbed into the body to attain systemic distribution. If defined strictly as having local effect, the topical route of administration can also include enteral administration of medications that are poorly absorbable by the gastrointestinal tract. One such medication is the antibiotic vancomycin, which cannot be absorbed in the gastrointestinal tract and is used orally only as a treatment for Clostridioides difficile colitis. == Choice of routes == The choice of route of drug administration is governed by various factors: Physical and chemical properties of the drug. The physical state may be solid, liquid, or gas; the chemical properties include solubility, stability, pH, and irritancy. Site of desired action: the action may be localised and approachable or generalised and not approachable. Rate and extent of absorption of the drug from different routes. Effect of digestive juices and first-pass metabolism of drugs. Condition of the patient. A toy sketch of this decision logic follows.
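As a rough illustration of how these factors interact, the sketch below encodes a few of them, including the acute-situation and biologic considerations discussed in the following sections. All names, inputs, and thresholds are illustrative assumptions, not clinical rules.

```python
def suggest_route(acute_situation: bool, is_biologic: bool,
                  oral_bioavailability: float) -> str:
    """Toy decision sketch over some of the factors listed above.
    Purely illustrative; real route selection weighs many more
    clinical and pharmacokinetic considerations."""
    if acute_situation:
        # IV is the most reliable route in acutely ill patients,
        # where gut absorption can be unpredictable.
        return "intravenous"
    if is_biologic:
        # Proteins are digested in the stomach, so biologics are
        # usually given by injection or infusion.
        return "injection/infusion"
    if oral_bioavailability < 0.1:  # illustrative cutoff
        return "consider parenteral or transdermal alternatives"
    # Enteral routes are generally the most convenient for the patient.
    return "oral"

print(suggest_route(acute_situation=False, is_biologic=False,
                    oral_bioavailability=0.6))  # -> oral
```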
In acute situations, in emergency medicine and intensive care medicine, drugs are most often given intravenously. This is the most reliable route, as in acutely ill patients the absorption of substances from the tissues and from the digestive tract can often be unpredictable due to altered blood flow or bowel motility. === Convenience === Enteral routes are generally the most convenient for the patient, as no punctures or sterile procedures are necessary. Enteral medications are therefore often preferred in the treatment of chronic disease. However, some drugs cannot be used enterally because their absorption in the digestive tract is low or unpredictable. Transdermal administration is a comfortable alternative; there are, however, only a few drug preparations that are suitable for transdermal administration. === Desired target effect === Identical drugs can produce different results depending on the route of administration. For example, some drugs are not significantly absorbed into the bloodstream from the gastrointestinal tract, and their action after enteral administration is therefore different from that after parenteral administration. This can be illustrated by the action of naloxone (Narcan), an antagonist of opiates such as morphine. Naloxone counteracts opiate action in the central nervous system when given intravenously and is therefore used in the treatment of opiate overdose. The same drug, when swallowed, acts exclusively on the bowels; it is here used to treat constipation under opiate pain therapy and does not affect the pain-reducing effect of the opiate. === Oral === The oral route is generally the most convenient and costs the least. However, some drugs can cause gastrointestinal tract irritation. For drugs that come in delayed-release or time-release formulations, breaking the tablets or capsules can lead to more rapid delivery of the drug than intended. The oral route is limited to formulations containing small molecules, while biopharmaceuticals (usually proteins) would be digested in the stomach and thereby become ineffective. Biopharmaceuticals have to be given by injection or infusion. However, recent research has found various ways to improve the oral bioavailability of these drugs; in particular, permeation enhancers, ionic liquids, lipid-based nanocarriers, enzyme inhibitors, and microneedles have shown potential. Oral administration is often denoted "PO" from "per os", the Latin for "by mouth". The bioavailability of oral administration is affected by the amount of drug that is absorbed across the intestinal epithelium and by first-pass metabolism. === Oral mucosal === The oral mucosa is the mucous membrane lining the inside of the mouth. ==== Buccal ==== Buccally administered medication is achieved by placing the drug between the gums and the inner lining of the cheek. In comparison with sublingual tissue, buccal tissue is less permeable, resulting in slower absorption. ==== Sublabial ==== ==== Sublingual ==== Sublingual administration is fulfilled by placing the drug between the tongue and the lower surface of the mouth. The sublingual mucosa is highly permeable and thereby provides access to the underlying expansive network of capillaries, leading to rapid drug absorption. === Intranasal === Drug administration via the nasal cavity yields rapid drug absorption and therapeutic effects.
This is because drug absorbed through the nasal passages does not pass through the gut before entering the capillaries and then the systemic circulation, and this absorption route also allows transport of drugs into the central nervous system via the olfactory and trigeminal nerve pathways. Intranasal absorption is limited by low drug lipophilicity, enzymatic degradation within the nasal cavity, large molecular size, and rapid mucociliary clearance from the nasal passages, which explains the generally low systemic exposure of drugs absorbed intranasally. === Local === By delivering drugs almost directly to the site of action, the risk of systemic side effects is reduced. Skin absorption (dermal absorption), for example, delivers drug directly to the skin and, ideally, to the systemic circulation. However, skin irritation may result, and for some forms such as creams or lotions the dosage is difficult to control. Upon contact with the skin, the drug penetrates into the dead stratum corneum and can afterwards reach the viable epidermis, the dermis, and the blood vessels. === Parenteral === The term parenteral is from para- 'beside' + Greek enteron 'intestine' + -al, because it encompasses routes of administration that are not intestinal. However, in common English the term has mostly been used to describe the four most well-known routes of injection. The term injection encompasses intravenous (IV), intramuscular (IM), subcutaneous (SC) and intradermal (ID) administration. Parenteral administration generally acts more rapidly than topical or enteral administration, with onset of action often occurring in 15–30 seconds for IV, 10–20 minutes for IM and 15–30 minutes for SC. Parenteral routes also have essentially 100% bioavailability and can be used for drugs that are poorly absorbed or ineffective when given orally. Some medications, such as certain antipsychotics, can be administered as long-acting intramuscular injections. Ongoing IV infusions can be used to deliver continuous medication or fluids. Disadvantages of injections include potential pain or discomfort for the patient and the requirement of trained staff using aseptic techniques for administration. However, in some cases, patients are taught to self-inject, such as SC injection of insulin in patients with insulin-dependent diabetes mellitus. As the drug is delivered to the site of action extremely rapidly with IV injection, there is a risk of overdose if the dose has been calculated incorrectly, and there is an increased risk of side effects if the drug is administered too rapidly. === Respiratory tract === ==== Mouth inhalation ==== Inhaled medications can be absorbed quickly and act both locally and systemically. Proper technique with inhaler devices is necessary to achieve the correct dose. Some medications can have an unpleasant taste or irritate the mouth. In general, only 20–50% of a pulmonary-delivered dose rendered in powdery particles will be deposited in the lung upon mouth inhalation; the remaining 50–80% of the aerosolized particles are cleared from the lung during exhalation. An inhaled powdery particle that is >8 μm in diameter is structurally predisposed to depositing in the central and conducting airways (conducting zone) by inertial impaction. An inhaled powdery particle that is between 3 and 8 μm in diameter tends to deposit largely in the transitional zones of the lung by sedimentation. An inhaled powdery particle that is <3 μm in diameter is structurally predisposed to depositing primarily in the respiratory regions of the peripheral lung via diffusion. A minimal sketch of these size thresholds is given below.
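These size-based deposition heuristics can be summarized in a short function. The sketch below only encodes the diameter thresholds quoted above; the function name is hypothetical, and real deposition also depends on breathing pattern, airway geometry, and particle density.

```python
def deposition_region(diameter_um: float) -> str:
    """Map an inhaled particle's diameter (micrometres) to the lung
    region where it predominantly deposits, per the thresholds quoted
    above. Illustrative only."""
    if diameter_um > 8:
        return "central/conducting airways (inertial impaction)"
    if diameter_um >= 3:
        return "transitional zones (sedimentation)"
    return "peripheral respiratory regions (diffusion)"

print(deposition_region(10))  # central/conducting airways (inertial impaction)
print(deposition_region(5))   # transitional zones (sedimentation)
print(deposition_region(1))   # peripheral respiratory regions (diffusion)
```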
Particles that deposit in the upper and central airways are generally absorbed systemically to a great extent. They are only partially removed by mucociliary clearance, and when the transported mucus is swallowed the remainder undergoes orally mediated absorption, where first-pass metabolism or incomplete absorption (loss via the fecal route) can reduce bioavailability. This in no way suggests that inhaled particles are a lesser concern than swallowed particles; it merely signifies that a combination of both absorption pathways may occur for some particles, regardless of the size or lipophilicity/hydrophilicity of the particle surfaces. ==== Nasal inhalation ==== Inhalation by nose of a substance is almost identical to oral inhalation, except that some of the drug is absorbed intranasally instead of in the oral cavity before entering the airways. Both methods can result in varying levels of the substance being deposited in their respective initial cavities, and the level of mucus in either of these cavities will reflect the amount of substance swallowed. The rate of inhalation will usually determine the amount of the substance which enters the lungs: faster inhalation results in more rapid absorption because more substance reaches the lungs. Substances in a form that resists absorption in the lung will likely also resist absorption in the nasal passage and the oral cavity, and are often even more resistant to absorption after they fail absorption in the former cavities and are swallowed. == Research == Neural drug delivery is the next step beyond the basic addition of growth factors to nerve guidance conduits. Drug delivery systems allow the rate of growth factor release to be regulated over time, which is critical for creating an environment more closely representative of in vivo development environments. == See also == ADME Catheter Dosage form Drug injection Ear instillation Hypodermic needle Intravenous marijuana syndrome List of medical inhalants Nanomedicine Absorption (pharmacology) == References == == External links == The 10th US-Japan Symposium on Drug Delivery Systems FDA Center for Drug Evaluation and Research Data Standards Manual: Route of Administration. FDA Center for Drug Evaluation and Research Data Standards Manual: Dosage Form. A.S.P.E.N. American Society for Parenteral and Enteral Nutrition Drug Administration Routes at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
Wikipedia/Neural_drug_delivery_systems
Sonodynamic therapy (SDT) is a noninvasive treatment, often used for tumor irradiation, that utilizes a sonosensitizer and the deep penetration of ultrasound to treat lesions of varying depths by reducing target cell number and preventing future tumor growth. Many existing cancer treatment strategies cause systemic toxicity or cannot penetrate tissue deep enough to reach the entire tumor; however, emerging ultrasound-stimulated therapies could offer an alternative to these treatments with their increased efficiency, greater penetration depth, and reduced side effects. Sonodynamic therapy could be used to treat cancers and other diseases, such as atherosclerosis, and diminish the risk associated with other treatment strategies, since it induces cytotoxic effects only when externally stimulated by ultrasound and only at the cancerous region, as opposed to the systemic administration of chemotherapy drugs. Reactive oxygen species (ROS) are an essential component of SDT as they provide the cytotoxicity of sonodynamic therapy; they are produced when ultrasound is coupled with a sensitizing drug and molecular oxygen. Without ultrasound, the drug is not toxic; once the drug is exposed to ultrasound and molecular oxygen, it becomes toxic. Photodynamic therapy, from which sonodynamic therapy was derived, uses a similar mechanism, except that light rather than ultrasound is used to activate the drug. SDT allows the stimulus to reach deeper into the tissue (to about 30 centimeters) than photodynamic therapy (PDT), since ultrasound can be highly focused. This increased penetration depth ultimately means that SDT can be utilized to treat deeper, less accessible tumors, and it is more cost-effective than PDT. Photodynamic therapy can be used in combination with sonodynamic therapy, as expanded upon in the Applications section of this article. Sonodynamic therapy can be used synergistically with other therapeutic methods such as drug-loaded microbubbles, nanoparticles, exosomes, liposomes, and genes for improved efficacy. Currently, SDT does not have any clinical products and acts as an adjuvant for the aforementioned therapeutic methods, but it has been explored for use in atherosclerosis and cancer treatment to reduce tumor size in breast, pancreatic, and liver cancers and spinal sarcomas. == Mechanism of Action == Sonodynamic therapy acts by applying low-intensity, focused ultrasound (mechanical waves) to create a cytotoxic effect. SDT itself is non-thermal and non-toxic, and it is able to penetrate deep into tissue noninvasively compared with other delivery methods such as photodynamic therapy. SDT is often performed alongside a sonosensitizer such as porphyrins, phthalocyanines, xanthenes, or certain antitumor drugs. Ultrasound waves are acoustic waves, and the effect they have on the tissue of application can be described by a process called cavitation. Cavitation arises from a specific interaction between ultrasound and the aqueous surroundings: gas bubbles oscillate and collapse under particular ultrasonic parameters, promoting penetration of the therapeutic into biological tissues by generating cavities near the edge of the membrane. Cavitation can be broken down into stable and inertial cavitation. In stable cavitation, the oscillation of gas bubbles causes the surrounding media to intermix. In inertial cavitation, gas bubbles increase in volume toward their resonance volume, swelling before aggressively collapsing.
The implosion of these bubbles results in a drastic temperature and pressure change, thereby increasing the cell membrane's permeability to various drugs. Microbubbles created by the acoustic waves expand and collapse, releasing energy, bringing the sonosensitizer into an excited state, and generating ROS. The cavitation of the gas bubble can form ROS through different mechanisms, such as sonoluminescence and pyrolysis. Apoptosis results from the ROS and the mechanical forces of SDT through membrane disruption in a process called lipid peroxidation; necrosis is also a potential result of SDT. The influence of sonoluminescence on SDT and ROS has not been fully elaborated in the literature. Currently, it is understood that sonoluminescence emits light upon bubble collapse, which can activate sensitizers. A study by Hachimine et al. highlights the use of SDT to activate a weakly photosensitive sonosensitizer, DCPH-P-Na(I), for cancers too deep within the tissue to treat with PDT without skin irritation. Pyrolysis raises the surrounding temperature, enhances the cavitation process, and breaks down the sensitizer, generating free radicals that interact with their environment to generate ROS. For both mechanisms, the importance of singlet oxygen compared to the hydroxyl radical in inducing cytotoxicity has been highlighted, although other studies have found singlet oxygen not to have a substantial effect. Overall, both of these mechanisms lack significant breadth of coverage in the literature to fully explain their role in ROS formation; however, the literature has shown success in their analysis and application. === Sonoluminescence === Two primary mechanisms of ROS generation exist in sonodynamic therapy: sonoluminescence and pyrolysis. Sonoluminescence occurs when ultrasound produces light after irradiating an aqueous solution. The exact mechanism by which light is produced remains unclear; however, it is suggested that inertial cavitation is a key element of this process, and other studies also indicate a potential role for stable cavitation. === Pyrolysis === Pyrolysis is believed to occur when inertial cavitation induces an extreme temperature increase that degrades the sonosensitizers, producing free radicals that can react and ultimately produce the ROS necessary for SDT. The localized temperature increase assists in the inertial cavitation and breakdown of the sonosensitizer in order to create ROS. Pyrolysis within the cavitation bubbles also produces H• and •OH radicals via cleavage of weak bonds within solute molecules. === Lipid Peroxidation === In addition to chemical mechanisms, the mechanical properties of the acoustic wave generated by the ultrasound can assist in initiating cytotoxic effects. This occurs through disruption of the membrane with a hydrophobic sonosensitizer. The mechanical disruption of the membrane causes a process called lipid peroxidation, and the resulting adjustments to the cell membrane can change the cell's drug permeability. Both sonochemical and sonomechanical mechanisms are used to generate ROS and release cargo from vesicles for applications such as tumor targeting. === Apoptosis === Low-intensity ultrasound has been shown in past literature to induce apoptotic effects in surrounding cells. It has been found that it is not the initial ROS that causes apoptosis within the cells, but the free radicals within the mitochondria.
In a study by Honda et al., it was determined that the mitochondria–caspase pathway is responsible for apoptosis through the increase of intracellular calcium. Outside of ROS-induced apoptosis, cavitation is another factor involved in apoptosis of surrounding cells: both cavitation types are able to induce apoptosis through damage to the membrane. Conditions such as frequency, duty cycle, pulse, and intensity can be manipulated to favor particular modes of cell death such as necrosis, lysis, or apoptosis. === Autophagy === This method of cell death can occur when cell organelles become entrapped in autophagosomes that fuse with lysosomes. Continuation of this process leads to cell death, and autophagy inhibitors or promoters can be used to encourage or discourage cell death and uptake of chemotherapeutics. == Sonosensitizers == Sonosensitizers, or sonosensitizing therapeutics, are the primary element of SDT and can be tailored to treat various cancers and generate different effects. These therapeutics, often based on porphyrins or xanthenes, initiate a toxic effect via ROS upon exposure to ultrasound. === Porphyrin-based sensitizers === Porphyrin-based sensitizers, initially used as photosensitizers in PDT, are fairly hydrophobic molecules derived from hematoporphyrin. Singlet oxygen or hydroxyl radicals are produced by porphyrin-based sensitizers upon exposure to ultrasound or light, providing the cytotoxic effects desired in sonodynamic and photodynamic therapies. However, the effect of porphyrin-based sensitizers is not as localized as desired for sonodynamic therapy, since they also accumulate in non-targeted tissue between the tumor and the ultrasound emitter. === Xanthene-based sensitizers === Xanthene-based sensitizers, on the other hand, have shown successful cytotoxicity in vitro by producing reactive oxygen species after being triggered by ultrasound. More research is necessary to improve their potential in vivo performance, since they are quickly processed by the liver and cleared from the body. Rose Bengal is a commonly used xanthene-based sonosensitizer. === Additional sensitizers === Other sensitizers that have been investigated for their potential in sonodynamic therapy (and have also been used previously in PDT) include acridine orange, methylene blue, curcumin, and indocyanine green. A study by Suzuki et al. used acridine orange, a fluorescent cationic dye that can insert itself into nucleic acids, for treating sarcoma 180 cells with ultrasound and demonstrated that reactive oxygen species are a critical element of SDT, considering that their absence decreased the efficacy of SDT. Similarly, a recent study by Komori et al. utilized ultrasound coupled with methylene blue (a phenothiazine dye commonly used in PDT that exhibits low toxicity) to irradiate sarcoma 180 cells and found that methylene blue was an effective sonosensitizer in decreasing cell viability. Curcumin, a compound found in the spice turmeric, can also act as a sensitizer for PDT and SDT. In a study by Waksman et al., curcumin was able to impact macrophages, which are important for the development of the plaques found in atherosclerosis patients, thus reducing the amount of plaque in an animal model. These findings, along with other research, indicate that curcumin sensitizers could be used in SDT cancer treatments.
Indocyanine green is a dye that absorbs near-infrared wavelengths and is another sensitizer that has been shown to reduce cell viability when coupled with ultrasound and/or light. An in vivo study demonstrated that treating a mouse tumor model with indocyanine green coupled with ultrasound and light resulted in a 98% reduction in tumor volume by 27 days after treatment. == Carriers == As mentioned above, sonosensitizers are often used in conjunction with different drug carriers such as microbubbles, nanobubbles, liposomes, and exosomes to improve therapeutic agent concentration and penetration. === Liposomes === Liposomes are a common vehicle in drug delivery, particularly for the treatment of cancer. Liposomes consist of a phospholipid bilayer. They are prevalent due to their ability to penetrate the leaky vasculature and exploit the poor lymphatic drainage of tumors for enhanced permeability and retention. These drug carriers can encapsulate hydrophobic (lipophilic) molecules within their lipid bilayer and can be made naturally or synthetically. In addition, liposomes can entrap hydrophilic molecules in their hydrophilic core. Compared to conventional chemotherapy, drugs loaded into liposomes allow for decreased systemic toxicity and a potential increase in the efficacy of targeted delivery. Success with liposomes as drug delivery systems has been shown both in vivo and in vitro. A study by Liu et al. showed that liposomes can be used alongside SDT to trigger the release of drugs via oxidation of the lipid components. Another study, by Ninomiya et al., utilized nanoemulsion droplets exposed to ultrasonic waves to form larger gas bubbles that disrupt the liposome membrane for drug release. Many properties and elements of liposomes can be altered for their specific purpose and to increase effectiveness, particularly their ability to travel in the blood and interact with cells and tissues in the body. These elements include their diameter, charge, and arrangement, as well as the makeup of their membranes. Dai et al. proposed the incorporation of sonosensitizers into liposomes to enhance target specificity. Since SDT stimulates cancerous tissues to absorb and retain sonosensitizers, followed by activation with extracorporeal ultrasound, Dai et al. investigated the effect of liposome-encapsulated drugs on the efficacy of targeted delivery in SDT. They found that, in addition to its convenience and practicality, SDT is a safe and effective option for treating cancer. === Exosomes === Exosomes are nanocarriers that can provide targeted delivery of therapeutics to enhance local cytotoxic effects while minimizing any systemic impact. They are derived from cells, which use these membrane-bound vesicles for transport. Advantages of exosomes for drug delivery include their ability to be manipulated and engineered, in addition to their low toxicity and immunogenicity. They have also inspired research into non-cell-based treatment methods for various cancers and diseases. Other desirable aspects of exosomes include their overall biocompatibility and stability. A study by Nguyen Cao et al. investigated the use of exosomes for the delivery of indocyanine green (ICG), a sonosensitizer, for breast cancer treatment. Significantly increased reactive oxygen species generation was observed in breast cancer cells treated with folic acid-conjugated exosomes. This is one example of a sonosensitizer used to treat a specific cancer using sonodynamic therapy.
Another example of exosome-based sonodynamic therapy was illustrated by Liu et al. In this study, exosomes were decorated with porphyrin sensitizers, and this system was used with an external ultrasound device to control and target drug delivery through SDT. Liu et al. thereby provided a non-invasive method for treating cancer through extracorporeal activation of exosomes by ultrasound. === Microbubbles === Due to their ability to oscillate under exposure to low-frequency ultrasound, microbubbles have been used as contrast agents to visualize tissues into which the microbubbles have permeated. However, when these microspheres are exposed to higher-pressure ultrasound, they can rupture, which can be exploited for drug delivery purposes. Through SDT, these microbubbles can be selectively burst at the tumor microenvironment in order to decrease systemic levels of the encapsulated drug and increase therapeutic efficacy. When applying SDT, the increase in acoustic pressure leads to inertial cavitation, that is, collapse of the microbubble and local release of the cargo within. The inertial cavitation of microbubbles exposed to SDT is also referred to as ultrasound-mediated microbubble destruction (UMMD). The shell of microbubbles can be decorated with different components, including polymers, lipids, or proteins, depending on their intended purpose. Microbubbles have also been used for the localized release of attached cargo, typically chemotherapeutics, antibiotics, or genes. Drugs can be loaded into or onto the microbubble by methods such as direct conjugation or via attached nanoparticles and liposomes; genes can be loaded as well, and the combination of gene delivery and SDT is referred to as sonotransfection. Examples of outer-shell modifications can be seen in a study by McEwan et al., which found that lipid microbubbles showed reduced stability when sonosensitizers were added to their shells; however, attaching the polymer poly(lactic-co-glycolic acid) (PLGA) to the shell resulted in increased stability compared to the lipid microbubbles without losing other desirable properties such as targeted delivery and selective cytotoxicity. In another study, McEwan et al. investigated the ability of microbubbles carrying oxygen to increase the production of reactive oxygen species, which are a necessary component of SDT, in the hypoxic environment of many solid tumors. These microbubbles were stabilized with lipids, and a Rose Bengal sonosensitizer was attached to the surface to treat pancreatic cancer. Their work showed that coupling oxygen-loaded, ultrasound-sensitive microbubbles with sonosensitizing drugs could allow for increased drug activation at the desired target even in the presence of hypoxia. Examples of therapeutics that have been loaded into microbubbles are gemcitabine, paclitaxel nanoparticles, plasmid DNA, and 2,2′-azobis[2-(2-imidazolin-2-yl)propane] dihydrochloride-loaded liposomes. The targeting nature of the ligands attached to the microbubble allows for the controlled and specific targeting of the desired tissue for treatment. Another study, performed by Nesbitt et al., showed improved tumor reduction when gemcitabine was loaded into the microbubble and applied to a human pancreatic cancer xenograft model with SDT. === Nanobubbles === Similar to microbubbles, nanobubbles have shown efficacy in SDT. However, due to their smaller size, nanobubbles are able to reach targets that microbubbles cannot: they can reach deeper tissue and travel past the vasculature.
Previous research has demonstrated that nanobubbles are more capable of reaching the tumor since they can permeate the endothelium and migrate away from the vasculature. One study by Nittayacharn et al. developed doxorubicin-loaded nanobubbles and paired them with porphyrin sensitizers to be used in SDT for the treatment of breast and ovarian cancer cells in vitro. They found an almost 70% increase in cytotoxicity when using SDT compared to perfluoropropane nanobubbles filled with iridium(III) alone. Additionally, compared to empty nanobubbles and/or free iridium(III), they observed the greatest reactive oxygen species generation in the iridium(III) nanobubbles exposed to ultrasound. These results demonstrate that nanobubbles loaded with a sonosensitizer and exposed to ultrasound could be an effective treatment for cancer using SDT. As with microbubbles, nanobubbles have also shown promise as oxygen-delivering vesicles to enhance the effectiveness of SDT. In order to mitigate hypoxia of the target tissue, Owen et al. used a pancreatic cancer rodent model to deliver phospholipid-stabilized nanobubbles filled with oxygen. The mice were divided into two groups, one that received oxygen-filled nanobubbles prior to injection of a sonosensitizer and one that did not. A statistically significant difference between the oxygen levels in the tumors of the two groups was observed, indicating that nanobubbles could be an effective addition to SDT for treating cancers in a hypoxic environment. == Applications == === Combination with other therapies === Sonodynamic therapy can be combined with other therapeutic techniques to enhance treatment efficacy for various types of cancers and diseases. SDT can be combined with photodynamic therapy, chemotherapy, radiation, MRI guidance, and immunotherapy. PDT has often been used in combination with SDT, as sonosensitizers are also photosensitive. During the initial development of SDT, Umemura et al. determined that hematoporphyrins were able to initiate cell death similarly to PDT; this is attributed to SDT's ability to initiate sonoluminescence. The advantage of SDT over PDT, however, is that it can penetrate deeply and precisely into the targeted tissue. In a study by Liu et al., it was shown that combining these two delivery methods results in increased cytotoxicity with sinoporphyrin sodium in a metastatic xenograft model. In another example of combining SDT with PDT, Borah et al. investigated the advantage of 2-(1-hexyloxyethyl)-2-devinyl pyropheophorbide-a (HHPH), a photodynamic therapy drug, as both a sonosensitizer and a photosensitizer for treating glioblastoma. Combining these therapies showed increased cell kill/tumor response, possibly caused by synergistic effects. The goal of a study by Browning et al. was to investigate the potential enhancement of chemoradiation efficacy by combining it with sonodynamic therapy in pancreatic cancer. In one model, survival increased with the combination compared to chemoradiation alone. Differences in the results for the two different models could be attributed to variations in tumor organization: the tumors that showed the greatest reduction in size were less vascularized, perhaps making them more vulnerable to SDT. Another study, by Huang et al., used mesoporous organosilica-based nanosystems to fabricate a sonosensitizer to be used with MRI-guided SDT. The sonosensitizers induced increased cell death and inhibited tumor growth, indicating high SDT efficiency.
This shows how SDT can assist with both removal and inhibition of tumor growth. SDT has also been combined with immunotherapy. A study by Lin et al. aimed to use cascade immuno-sonodynamic therapy to enhance tumor treatment using antibodies. The nanosonosensitizers resulted in high drug-loading efficiency and a tumor-specific adaptive immune response. This serves as an example of how SDT can be coupled with checkpoint-blockade immunotherapy to enhance efficiency in cancer treatments. Another study, by Yue et al., sought to combine checkpoint-blockade immunotherapy with nanosonosensitizer-augmented noninvasive sonodynamic therapy. Along with inhibiting lung metastasis, this combination promoted an anti-tumor response that prohibited tumor growth, providing a proof of concept for combining SDT with another therapy to enhance treatment effects in both the short and long term. === Types of cancers SDT has been shown to treat === ==== Cancer Treatment ==== The treatment of many different types of cancers has been investigated using sonodynamic therapy in vitro and/or in vivo, including glioblastoma, pancreatic, breast, ovarian, lung, prostate, liver, stomach, and colon cancers. A study by Gao et al. showed that SDT is capable of inhibiting angiogenesis through the production of ROS: it hindered the proliferation, migration, and invasion of endothelial cells and reduced tumor growth, intratumoral vascularity, and vascular endothelial growth factor expression within tumors in xenograft rat models. Hachimine et al. performed a large in vitro study testing SDT on seventeen different cancer cell lines. The types of cancers included were pancreatic, breast, lung, prostate, liver, stomach, and colon cancers. The most successful treatment was that of lung cancer, with 23.4% cell viability post-therapy. Qu et al. aimed to develop an "all-in-one" nanosensitizer platform triggered by SDT that combines various diagnostic and therapeutic effects to treat glioblastoma; apoptosis was successfully induced and mitophagy was inhibited in glioma cells. This is an example of how SDT can be used with a different platform to treat glioblastoma. Borah et al., as mentioned above, also investigated the ability of SDT (combined with PDT) to treat glioblastoma and found that the combination was able to increase the number of tumor cells killed. McEwan et al. and Owen et al. both demonstrated the use of micro/nanobubbles to enhance the oxygen concentration near hypoxic pancreatic tumors, thereby increasing the efficacy of SDT. ==== Breast Cancer ==== About 12% of women in the US will be diagnosed with breast cancer. Metastasis and recurrence are major challenges for deep-seated solid tumors. SDT is currently being explored as a treatment method for breast cancer that avoids the side effects associated with current therapeutic methods. Success has been shown in utilizing SDT in animal studies and human clinical trials to reduce tumor size through mitochondrial targeting, which initiates apoptosis of tumor cells, and through regulation of autophagy and the immune response. However, there are still complications in achieving proper therapeutic efficacy when SDT is used alone. ==== Glioma ==== Malignant glioma is an extremely difficult-to-treat brain tumor and a leading cause of cancer-related death worldwide. Complications associated with treating glioma include the blood–brain barrier (BBB).
This protective mechanism for the brain also raises challenges for drug delivery, as the tight junctions between endothelial cells allow only small lipid-soluble drugs (<400 Da) to permeate. Current treatment methods are surgery and chemotherapy. SDT has been implemented as a method to open the BBB and has shown success in opening tight junctions for delivery. Examples of sonosensitizers that have shown success in glioma treatment are hematoporphyrin monomethyl ether (HMME), porfimer sodium (Photofrin), di-sulfo-di-phthalimidomethyl phthalocyanine zinc (ZnPcS2P2), Photolon, 5-aminolevulinic acid (5-ALA), and rose bengal (RB). These have been shown to induce effects such as opening of the BBB, improved vascular permeability, and apoptosis of glioma cells. ==== Prostate Cancer ==== Prostate cancer is the second most common cancer in men worldwide and one of the malignancies most commonly associated with death in men. Current treatment methods are invasive resection therapy, radiation therapy, and prostatectomy, which can cause complications such as incontinence, impotence, and damage to surrounding organs and tissues. Current studies have shown success in using SDT as a stand-alone treatment. SDT uses mitochondria-related apoptosis to reduce cell viability. SDT for prostate cancer treatment has also been used alongside chemotherapeutics such as docetaxel-loaded microbubbles. This has been shown to enhance the effects of docetaxel through a reduction in tumor perfusion and enhanced necrosis and apoptosis. The SDT and docetaxel group showed a reduction in tumor growth. Overall, the use of SDT has shown promising results in prostate cancer treatment. ==== Arterial Diseases ==== Sonodynamic therapy could be used to treat more than just cancers. Atherosclerosis (AS), a chronic arterial disease, is another target that has been observed in the literature. This disease occurs when fatty plaques aggregate on the inner surface of the artery and may be caused by malfunctions in lipid metabolism. More specifically, atherosclerosis is driven by an increase in endothelial permeability that allows low-density lipoprotein particles to become oxidized and undergo sedimentation. These lipoproteins cause an increase in macrophages and lead to intensified plaque build-up. As a result, the high influx of macrophages is the target for AS treatment in order to slow plaque build-up. Alongside the relationship between plaque build-up and macrophages, the differentiation of monocytes into macrophages exacerbates the aforementioned process in addition to causing inflammation. A study by Wang et al. aimed to understand the underlying mechanisms of the potential effect of non-lethal SDT on atherosclerotic plaques. It was determined that non-lethal SDT prevents plaque development. A study performed by Jiang et al. showed success in SDT through the reduction of macrophage inflammatory factors such as TNF-α, IL-12, and IL-1β. They also showed that SDT could inhibit plaque inflammation in patients with peripheral artery disease and continue to promote positive results for longer than six months. Popular sonosensitizers for AS treatment are protoporphyrin IX (PpIX) and 5-aminolevulinic acid (5-ALA). PpIX, often used in PDT, is generated from 5-ALA; 5-ALA is not itself activated by ultrasound but increases the PpIX concentration within a cell. A study by Cheng et al. determined that THP-1 macrophage apoptosis is induced by an increase in PpIX concentration, which leads to the production of large amounts of ROS.
The use of SDT for AS treatment has also shown success in promoting the repopulation of vascular smooth muscle cells (VSMCs) by inducing further contractile-marker expression and autophagy to prevent VSMCs from evolving into plaque-associated macrophage-like cells. A study performed by Dan et al. showed increases in smooth muscle α-actin, smooth muscle 22α, and p38 mitogen-activated protein kinase phosphorylation, while a study by Geng et al. showed improved VSMC autophagy. Each of these factors contributed to the improved differentiation and development of VSMCs. === In Vitro and In Vivo Work === ==== In vitro ==== In vitro experimentation provides insight for characterizing how sonosensitizers may behave in vivo. In addition, SDT at low intensity has shown success in increasing plasma membrane permeability without causing cell death. Sonosensitizers have also been used in vitro in applications with different cell lines and to further understand the mechanism of action for cell death. It is currently understood that PDT and SDT have similar mechanisms of free radical generation for inducing apoptosis and necrosis. However, each cell line is unique and can undergo cell death with different efficacy. Early in vitro studies were performed by Yumita et al. (1989), who applied haematoporphyrin and SDT to mouse sarcoma 180 and rat ascites hepatoma (AH) cells; this work showed a relationship between drug dosage and ultrasound response, and showed that microbubbles causing cavitation can lead to cell damage even without the use of drugs. This study also emphasized the difference in efficacy between cell lines, with sarcoma 180 cells showing less lysis than AH-130 cells. Another study, by Hachimine et al., emphasized differences in efficacy between cell lines by examining seven different cancers, with 17 cell lines in total, using DCPH-P-Na(I). This study revealed that the stomach and lung cancer lines MKN-28 and LU65A, respectively, had the highest survival rates, while the lung and stomach cancer lines RERF-LC-KJ and MKN-45, respectively, had the lowest survival rates. Another study, by Honda et al., using U937 and K562 cells, showed that sonication increases intracellular calcium ion levels and decreases GSH concentration. An increased concentration of calcium plays a significant role in cell death through DNA fragmentation and mitochondrial membrane disruption, while a decreased concentration of GSH allows the formation of more free radicals. A study by Umemura et al. found that ATX-70 has greater cytotoxic activity than hematoporphyrin. Current research typically focuses on using tumor xenograft models to determine the effect of SDT on target cells and delivery efficacy. ==== In vivo ==== Building upon the study by Umemura et al. and ATX-70, it was found that applying ultrasound 24 h after administration of the sonosensitizer improved efficacy compared with applying it immediately after administration. It was also determined that most protocols use ultrasound frequencies of 1–3 MHz and intensities of 0.5–4 W/cm^2. Higher intensities, at values such as 20 W/cm^2 and 25 W/cm^2, resulted in large necrotic lesions. This established a relationship between sonosensitizer formulation, ultrasound intensity, and necrosis. Other studies have continued to innovate upon this by controlling the drug–ultrasound interval (DUI) for different sonosensitizers in order to determine the optimal time period to apply the ultrasound for improved efficacy.
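As an aside on the parameter ranges above, the reported pressures and intensities are related through standard ultrasound dosimetry. The sketch below is a minimal illustration assuming plane-wave conditions and textbook soft-tissue values; the formulas (plane-wave intensity and mechanical index) are general acoustics conventions, not quantities taken from the SDT studies cited here.

```python
import math

# Approximate textbook values for soft tissue (assumptions, not study data).
RHO = 1040.0  # density, kg/m^3
C = 1540.0    # speed of sound, m/s

def intensity_w_per_cm2(peak_pressure_pa):
    """Time-averaged plane-wave intensity I = p^2 / (2 * rho * c), in W/cm^2."""
    i_w_per_m2 = peak_pressure_pa ** 2 / (2.0 * RHO * C)
    return i_w_per_m2 / 1e4  # 1 m^2 = 1e4 cm^2

def mechanical_index(peak_neg_pressure_pa, freq_hz):
    """MI = peak rarefactional pressure (MPa) / sqrt(frequency (MHz))."""
    return (peak_neg_pressure_pa / 1e6) / math.sqrt(freq_hz / 1e6)

# A 1 MHz beam at 0.35 MPa lands near the top of the 0.5-4 W/cm^2 window
# reported above; doubling the pressure would quadruple the intensity.
p, f = 0.35e6, 1.0e6
print(f"I  = {intensity_w_per_cm2(p):.1f} W/cm^2")  # ~3.8 W/cm^2
print(f"MI = {mechanical_index(p, f):.2f}")         # 0.35
```

Continuous-wave exposure is assumed here; for pulsed regimes the time-averaged intensity scales with the duty cycle.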
In addition, it has been shown that SDT can disturb the vasculature surrounding tumors. This has been shown in studies by Gao et al. using 5-ALA in mice and in human umbilical vein endothelial cell lines, through reduction of microvessel density and inhibition of cell proliferation, migration, and invasion. == Challenges and development == One of the many advantages of SDT compared to PDT is its ability to reach deeply seated solid tumors, allowing a wider treatment range. Despite this, SDT has limitations that must be overcome, or components that must be optimized, in order to expand its effect and application. SDT allows for precise activation of the therapeutic, but is limited by the delivery and accumulation of the delivery modality, which must penetrate deeply into the desired tumor site. This is often addressed through delivery vehicles such as nanoparticles or liposomes. However, nanomedicine is limited by the enhanced permeability and retention effect and, depending on the delivery vehicle, struggles to deliver in targeted abundance; this can be seen in nanoparticles struggling with non-specific delivery. Current research is therefore focused on developing highly targeted, deeply penetrating nanoparticles for improved delivery and pharmacokinetics. Due to the complex nature of tumors and their microenvironments, they are difficult to treat with only one therapy. In order to enhance the oftentimes low production of reactive oxygen species and address the hypoxic tumor environment, SDT can be combined with other therapies, such as PDT, chemotherapy, and immunotherapy, to improve patient outcomes. SDT alone does not perform well in hypoxic environments. However, bioreductive therapy could be used to reduce the impact of SDT's limitations regarding hypoxia in the tumor while leaving healthy tissue alone. Sonosensitizers also require continuously high levels of oxygen to create ROS, and such oxygen is not readily available within a hypoxic tumor microenvironment. However, strategies such as oxygen supplementation and in situ oxygen production (to supply the required oxygen and enhance cavitation) and glutathione depletion (to prevent the free radicals produced from being neutralized) have been implemented alongside sonosensitizers. In addition to its relatively low generation of reactive oxygen species, SDT can also cause permanent destruction of normal tissues. This lack of selectivity is caused by ultrasound divergence, resulting in heat and shear that impact off-target tissues. Although organic sonosensitizers have advantages, such as high reproducibility, biocompatibility, and production of reactive oxygen species, they also have limitations. Factors that limit the translation of organic sensitizers to clinical applications include low water solubility, low sonotoxicity, and poor targetability, as well as high phototoxicity. Other properties can promote rapid clearance of the drug, which is why various nano- and microparticles are used to transport the drug to the desired location. In addition, sonosensitizers in SDT often require increased dosage, and the relationship between therapeutic dosage and toxicity of sonosensitizers has not been properly characterized alongside other variables such as tissue type and acoustic pressure. Inorganic sensitizers produce reactive oxygen species, but in lower concentrations than desirable for SDT, limiting their ability to be used in a clinical setting. Another challenge is the gap between in vitro and in vivo results.
An example of this can be seen in a study using rose bengal, a xanthene dye. It was found to be successful in vitro, but in vivo it showed significantly less efficacy due to hepatic sequestration and clearance. Lastly, there are no current standardized computer simulations to predict the characteristics of different sonosensitizers within tissue; such simulations would provide further insight into how sonosensitizers may behave. == Current clinical use == SDT has been researched most commonly to combat atherosclerosis and cancers such as breast cancer, pancreatic cancer, liver cancer, and spinal sarcomas. Currently, there are no FDA-approved clinical applications of SDT. For PDT, however, the hematoporphyrin derivative Photofrin (porfimer sodium) is FDA-approved. SDT has been used in a clinical trial in combination with PDT to assess reduction in tumor size in patients with breast cancer, although it was difficult to determine whether SDT, PDT, or the drug dosage was the primary mechanism of treatment. Another case study expanded on this by using SDT as a standalone treatment alongside a Gc protein-based hormone therapy, with 5-ALA or chlorin e6 as the sonosensitizer. It was shown that tumor markers significantly decreased during treatment. == Future directions == The effectiveness of sonodynamic therapy as a cancer treatment is supported by many in vitro and in vivo studies. However, large-scale clinical trials are necessary for translation into the clinical setting. To mitigate the aforementioned limitations, new sonosensitizers are being developed and SDT is being combined with other therapies in novel ways. In particular, organic sonosensitizers with high solubility in water, high sonotoxicity, an increased ability to target tumors, and low phototoxicity need to be developed in order to improve the therapeutic efficacy of SDT and allow it to be used for treating cancers. In addition, the mechanisms by which sonosensitizers produce ROS upon exposure to ultrasound are yet to be fully determined, reducing the ability to control their function and outcomes. Ultimately, the synergistic effects of combining SDT with other therapies would allow each to compensate for the limitations of the other, improving their therapeutic efficacy and increasing their ability to destroy tumors. == References ==
Wikipedia/Acoustic_targeted_drug_delivery
Nanoparticles for drug delivery to the brain is a method for transporting drug molecules across the blood–brain barrier (BBB) using nanoparticles. These drugs cross the BBB and deliver pharmaceuticals to the brain for therapeutic treatment of neurological disorders. These disorders include Parkinson's disease, Alzheimer's disease, schizophrenia, depression, and brain tumors. Part of the difficulty in finding cures for these central nervous system (CNS) disorders is that there is yet no truly efficient delivery method for drugs to cross the BBB. Antibiotics, antineoplastic agents, and a variety of CNS-active drugs, especially neuropeptides, are a few examples of molecules that cannot pass the BBB alone. With the aid of nanoparticle delivery systems, however, studies have shown that some drugs can now cross the BBB, and even exhibit lower toxicity and decreased adverse effects throughout the body. Toxicity is an important concept for pharmacology because high toxicity levels in the body could be detrimental to the patient by affecting other organs and disrupting their function. Further, the BBB is not the only physiological barrier for drug delivery to the brain. Other biological factors influence how drugs are transported throughout the body and how they target specific locations for action. Some of these pathophysiological factors include blood flow alterations, edema and increased intracranial pressure, metabolic perturbations, and altered gene expression and protein synthesis. Though there exist many obstacles that make developing a robust delivery system difficult, nanoparticles provide a promising mechanism for drug transport to the CNS. == Background == The first successful delivery of a drug across the BBB occurred in 1995. The drug used was the hexapeptide dalargin, an anti-nociceptive peptide that cannot cross the BBB alone. It was encapsulated in polysorbate 80-coated nanoparticles and intravenously injected. This was a major breakthrough in the nanoparticle drug delivery field, and it helped advance research and development toward clinical trials of nanoparticle delivery systems. Nanoparticles range in size from 10 to 1000 nm (1 μm), and they can be made from natural or artificial polymers, lipids, dendrimers, and micelles. Most polymers used for nanoparticle drug delivery systems are natural, biocompatible, and biodegradable, which helps prevent contamination in the CNS. Several current methods for drug delivery to the brain include the use of liposomes, prodrugs, and carrier-mediated transporters. Many different delivery methods exist to transport these drugs into the body, such as peroral, intranasal, intravenous, and intracranial. For nanoparticles, most studies have shown the greatest progress with intravenous delivery. Along with delivery and transport methods, there are several means of functionalizing, or activating, the nanoparticle carriers. These means include dissolving or absorbing a drug throughout the nanoparticle, encapsulating a drug inside the particle, or attaching a drug on the surface of the particle. == Types of nanoparticles for CNS drug delivery == === Lipid-based === One type of nanoparticle involves the use of liposomes as drug molecule carriers. A standard liposome has a phospholipid bilayer separating its interior from its exterior. Liposomes are composed of vesicular bilayers, lamellae, made of biocompatible and biodegradable lipids such as sphingomyelin, phosphatidylcholine, and glycerophospholipids.
Cholesterol, a type of lipid, is also often incorporated in the lipid-nanoparticle formulation. Cholesterol can increase the stability of a liposome and prevent leakage from the bilayer because its hydroxyl group can interact with the polar heads of the bilayer phospholipids. Liposomes have the potential to protect the drug from degradation, target sites for action, and reduce toxicity and adverse effects. Lipid nanoparticles can be manufactured by high-pressure homogenization, a current method used to produce parenteral emulsions. This process can ultimately form a uniform dispersion of small droplets in a fluid substance by subdividing particles until the desired consistency is acquired. This manufacturing process is already scaled and in use in the food industry, which makes it more appealing for researchers and for the drug delivery industry. Liposomes can also be functionalized by attaching various ligands on the surface to enhance brain-targeted delivery. === Cationic liposomes === Another type of lipid nanoparticle that can be used for drug delivery to the brain is the cationic liposome, which is composed of positively charged lipid molecules. One example of cationic liposomes uses bolaamphiphiles, which contain hydrophilic groups surrounding a hydrophobic chain to strengthen the boundary of the nano-vesicle containing the drug. Bolaamphiphile nano-vesicles can cross the BBB, and they allow controlled release of the drug to target sites. Lipoplexes can also be formed from cationic liposomes and DNA solutions, to yield transfection agents. Cationic liposomes cross the BBB through adsorption-mediated endocytosis followed by internalization in the endosomes of the endothelial cells. By transfection of endothelial cells through the use of lipoplexes, physical alterations in the cells could be made. These physical changes could potentially improve how some nanoparticle drug carriers cross the BBB. === Metallic === Metal nanoparticles are promising as carriers for drug delivery to the brain. Common metals used for nanoparticle drug delivery are gold, silver, and platinum, owing to their biocompatibility. These metallic nanoparticles are used due to their large surface-area-to-volume ratio, geometric and chemical tunability, and endogenous antimicrobial properties. Silver cations released from silver nanoparticles can bind to the negatively charged cellular membrane of bacteria and increase membrane permeability, allowing foreign chemicals to enter the intracellular fluid. Metal nanoparticles are chemically synthesized using reduction reactions. For example, drug-conjugated silver nanoparticles are created by reducing silver nitrate with sodium borohydride in the presence of an ionic drug compound. The drug binds to the surface of the silver, stabilizing the nanoparticles and preventing them from aggregating. Metallic nanoparticles typically cross the BBB via transcytosis. Nanoparticle delivery through the BBB can be increased by introducing peptide conjugates to improve permeability to the central nervous system. For instance, recent studies have shown an improvement in gold nanoparticle delivery efficiency by conjugating a peptide that binds to the transferrin receptors expressed in brain endothelial cells. === Solid lipid === Solid lipid nanoparticles (SLNs) are lipid nanoparticles with a solid interior. SLNs can be made by replacing the liquid lipid oil used in the emulsion process with a solid lipid.
In solid lipid nanoparticles, the drug molecules are dissolved in the particle's solid hydrophobic lipid core (the drug payload), which is surrounded by an aqueous solution. Many SLNs are developed from triglycerides, fatty acids, and waxes. High-pressure homogenization or micro-emulsification can be used for manufacturing. Further, functionalizing the surface of solid lipid nanoparticles with polyethylene glycol (PEG) can result in increased BBB permeability. Other colloidal carriers, such as liposomes, polymeric nanoparticles, and emulsions, have reduced stability, shelf life, and encapsulation efficiency; solid lipid nanoparticles are designed to overcome these shortcomings and offer excellent drug release and physical stability in addition to targeted delivery of drugs. === Nanoemulsions === Another form of nanoparticle delivery system is the oil-in-water emulsion prepared at the nanoscale. This process uses common biocompatible oils such as triglycerides and fatty acids, and combines them with water and surface-coating surfactants. Oils rich in omega-3 fatty acids especially contain important factors that aid in penetrating the tight junctions of the BBB. === Polymer-based === Other nanoparticles are polymer-based, meaning they are made from polymers such as polylactic acid (PLA), poly D,L-glycolide (PLG), polylactide-co-glycolide (PLGA), and polycyanoacrylate (PCA). Some studies have found that polymeric nanoparticles may provide better results for drug delivery relative to lipid-based nanoparticles because they may increase the stability of the drugs or proteins being transported. Polymeric nanoparticles may also contain beneficial controlled-release mechanisms. Nanoparticles made from biodegradable polymers have the ability to target specific organs and tissues in the body, to carry DNA for gene therapy, and to deliver larger molecules such as proteins, peptides, and even genes. To manufacture these polymeric nanoparticles, the drug molecules are first dissolved and then encapsulated or attached to a polymer nanoparticle matrix. Three different structures can then be obtained from this process: nanoparticles, nanocapsules (in which the drug is encapsulated and surrounded by the polymer matrix), and nanospheres (in which the drug is dispersed throughout the polymeric matrix in a spherical form). One of the most important traits for nanoparticle delivery systems is that they must be biodegradable on the scale of a few days. A few common polymer materials used for drug delivery studies are polybutyl cyanoacrylate (PBCA), poly(isohexyl cyanoacrylate) (PIHCA), polylactic acid (PLA), and polylactide-co-glycolide (PLGA). PBCA undergoes degradation through enzymatic cleavage of its ester bond on the alkyl side chain to produce water-soluble byproducts. PBCA also proves to be the fastest-biodegrading of these materials, with studies showing an 80% reduction 24 hours after intravenous injection. PIHCA, however, was recently found to display an even lower degradation rate, which in turn further decreases toxicity. Due to this slight advantage, PIHCA is currently undergoing phase III clinical trials for transporting the drug doxorubicin as a treatment for hepatocellular carcinomas. Human serum albumin (HSA) and chitosan are also materials of interest for the generation of nanoparticle delivery systems. Using albumin nanoparticles for stroke therapy can overcome numerous limitations.
For instance, albumin nanoparticles can enhance BBB permeability, increase solubility, and increase half-life in circulation. Patients who have brain cancer overexpress albumin-binding proteins, such as SPARC and gp60, in their BBB and tumor cells, naturally increasing the uptake of albumin into the brain. Using this relationship, researchers have formed albumin nanoparticles that co-encapsulate two anticancer drugs, paclitaxel and fenretinide, modified with low-molecular-weight protamine (LMWP), a type of cell-penetrating peptide, for anti-glioma therapy. Once injected into the patient's body, the albumin nanoparticles can cross the BBB more easily, bind to the proteins and penetrate glioma cells, and then release the contained drugs. This nanoparticle formulation enhances tumor-targeting delivery efficiency and mitigates the solubility issues of hydrophobic drugs. Specifically, cationic bovine serum albumin-conjugated tanshinone IIA PEGylated nanoparticles injected into an MCAO rat model decreased the volume of infarction and neuronal apoptosis. Chitosan, a naturally abundant polysaccharide, is particularly useful due to its biocompatibility and lack of toxicity. With its adsorptive and mucoadhesive properties, chitosan can overcome limitations of intranasal administration to the brain. It has been shown that cationic chitosan nanoparticles interact with the negatively charged brain endothelium. Coating these polymeric nanoparticle devices with different surfactants can also aid BBB crossing and uptake in the brain. Surfactants such as polysorbates 80, 20, 40, and 60 and poloxamer 188 demonstrated effective drug delivery through the blood–brain barrier, whereas other surfactants did not yield the same results. It has also been shown that functionalizing the surface of nanoparticles with polyethylene glycol (PEG) can induce the "stealth effect", allowing the drug-loaded nanoparticle to circulate throughout the body for prolonged periods of time. Further, the stealth effect, caused in part by the hydrophilic and flexible properties of the PEG chains, facilitates increased localization of the drug at target sites in tissues and organs. == Mechanisms for delivery == === Liposomes === A mechanism for liposome transport across the BBB is lipid-mediated free diffusion, a type of facilitated diffusion, or lipid-mediated endocytosis. There exist many lipoprotein receptors which bind lipoproteins to form complexes that in turn transport the liposome nano-delivery system across the BBB. Apolipoprotein E (apoE) is a protein that facilitates transport of lipids and cholesterol. ApoE constituents bind to nanoparticles, and then this complex binds to a low-density lipoprotein receptor (LDLR) in the BBB and allows transport to occur. === Polymeric nanoparticles === The mechanism for the transport of polymer-based nanoparticles across the BBB has been characterized as receptor-mediated endocytosis by the brain capillary endothelial cells. Transcytosis then occurs to transport the nanoparticles across the tight junction of endothelial cells and into the brain. Coating nanoparticle surfaces with surfactants such as polysorbate 80 or poloxamer 188 was also shown to increase uptake of the drug into the brain. This mechanism also relies on certain receptors located on the luminal surface of endothelial cells of the BBB. Ligands coated on the nanoparticle's surface bind to specific receptors to cause a conformational change.
Once bound to these receptors, transcytosis can commence; this involves the formation of vesicles from the plasma membrane pinching off around the nanoparticle system after internalization. Additional receptors identified for receptor-mediated endocytosis of nanoparticle delivery systems are the scavenger receptor class B type I (SR-BI), low-density lipoprotein receptor-related protein 1 (LRP1), the transferrin receptor, and the insulin receptor. As long as a receptor exists on the endothelial surface of the BBB, any ligand can be attached to the nanoparticle's surface to functionalize it so that it can bind and undergo endocytosis. Another mechanism is adsorption-mediated transcytosis, where electrostatic interactions are involved in mediating nanoparticle crossing of the BBB. Cationic nanoparticles (including cationic liposomes) are of interest for this mechanism, because their positive charges assist binding on the brain's endothelial cells. Using TAT peptides, a class of cell-penetrating peptides, to functionalize the surface of cationic nanoparticles can further improve drug transport into the brain. === Magnetic and Magnetoelectric nanoparticles === In contrast to the above mechanisms, delivery with magnetic fields does not strongly depend on the biochemistry of the brain. In this case, nanoparticles are literally pulled across the BBB via application of a magnetic field gradient. The nanoparticles can be pulled in as well as removed from the brain merely by controlling the direction of the gradient. For the approach to work, the nanoparticles must have a non-zero magnetic moment and a diameter of less than 50 nm. Both magnetic and magnetoelectric nanoparticles (MENs) satisfy these requirements. However, only MENs display a non-zero magnetoelectric (ME) effect. Due to the ME effect, MENs can provide direct access to local intrinsic electric fields at the nanoscale to enable two-way communication with the neural network at the single-neuron level. MENs, proposed by the research group of Professor Sakhrat Khizroev at Florida International University (FIU), have been used for targeted drug delivery and externally controlled release across the BBB to treat HIV and brain tumors, as well as to wirelessly stimulate neurons deep in the brain for treatment of neurodegenerative diseases such as Parkinson's disease. === Focused ultrasound === Studies have shown that focused ultrasound bursts can be used noninvasively to disrupt tight junctions in desired locations of the BBB, allowing for the increased passage of particles at those locations. This disruption can last up to four hours after burst administration. Focused ultrasound works by driving oscillating microbubbles, which physically interact with the cells of the BBB by oscillating at a frequency that can be tuned by the ultrasound burst. This physical interaction is believed to cause cavitation and ultimately the disintegration of the tight junction complexes, which may explain why the effect lasts for several hours. However, the energy applied from ultrasound can result in tissue damage. Fortunately, studies have demonstrated that this risk can be reduced if preformed microbubbles are injected before focused ultrasound is applied, reducing the energy required from the ultrasound. This technique has applications in the treatment of various diseases. For example, one study has shown that using focused ultrasound with oscillating bubbles loaded with a chemotherapeutic drug, carmustine, facilitates the safe treatment of glioblastoma in an animal model.
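The frequency tuning mentioned above can be illustrated with the classical Minnaert resonance of a free gas bubble in liquid. The sketch below is a rough, assumption-laden illustration: the formula is standard bubble acoustics rather than a result from the studies cited here, and it neglects the stabilizing shell and surface tension of real contrast-agent microbubbles.

```python
import math

def minnaert_frequency_hz(radius_m, p0=101325.0, rho=1000.0, gamma=1.4):
    """Resonance frequency of a free gas bubble in water (Minnaert, 1933):
    f0 = (1 / (2*pi*R)) * sqrt(3 * gamma * p0 / rho).
    Shell stiffness and surface tension are ignored, so this only
    approximates coated microbubbles."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

# Micron-scale bubbles resonate in the low-MHz range, which is why
# clinical focused-ultrasound frequencies can drive them efficiently.
for diameter_um in (1.0, 3.0, 10.0):
    f0 = minnaert_frequency_hz(diameter_um * 1e-6 / 2.0)
    print(f"{diameter_um:4.1f} um bubble -> f0 ~ {f0 / 1e6:4.2f} MHz")
```

For a 3 μm bubble this gives roughly 2 MHz, consistent with the frequency range typically used for BBB opening.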
This drug (carmustine), like many others, normally requires large dosages to reach the target brain tissue by diffusion from the blood, leading to systemic toxicity and the possibility of multiple harmful side effects manifesting throughout the body. However, focused ultrasound has the potential to increase the safety and efficacy of drug delivery to the brain. == Toxicity == A study was performed to assess the toxic effects of doxorubicin-loaded polymeric nanoparticle systems. It was found that doses up to 400 mg/kg of PBCA nanoparticles alone did not cause any toxic effects on the organism. This low toxicity can most likely be attributed to the controlled release and modified biodistribution of the drug due to the traits of the nanoparticle delivery system. Toxicity is a highly important factor and limitation in drug delivery studies, and a major area of interest in research on nanoparticle delivery to the brain. Metal nanoparticles are associated with risks of neurotoxicity and cytotoxicity. These metals generate reactive oxygen species, which cause oxidative stress and damage the cells' mitochondria and endoplasmic reticulum. This leads to further issues in cellular toxicity, such as damage to DNA and disruption of cellular pathways. Silver nanoparticles in particular have a higher degree of toxicity than other metal nanoparticles such as gold or iron. Silver nanoparticles can circulate through the body and accumulate easily in multiple organs, as discovered in a study on silver nanoparticle distribution in rats. Traces of silver accumulated in the rats' lungs, spleen, kidney, liver, and brain after the nanoparticles were injected subcutaneously. In addition, silver nanoparticles generate more reactive oxygen species than other metals, which leads to an overall larger issue of toxicity. == Research == In the early 21st century, extensive research is occurring in the field of nanoparticle drug delivery systems to the brain. One of the common diseases being studied in neuroscience is Alzheimer's disease. Many studies have been done to show how nanoparticles can be used as a platform to deliver therapeutic drugs to patients with the disease. Alzheimer's drugs that have been studied in particular include rivastigmine, tacrine, quinoline, piperine, and curcumin. PBCA, chitosan, and PLGA nanoparticles were used as delivery systems for these drugs. Overall, the results from each drug injection with these nanoparticles showed remarkable improvements in the effects of the drug relative to non-nanoparticle delivery systems, suggesting that nanoparticles could provide a promising solution to how these drugs could cross the BBB. One factor that still must be considered and accounted for is nanoparticle accumulation in the body. With the long-term and frequent injections that are often required to treat chronic diseases such as Alzheimer's disease, polymeric nanoparticles could potentially build up in the body, causing undesirable effects. This area of concern would have to be assessed further to analyze these possible effects and to mitigate them. == References == == External links == Shityakov, Sergey; Salvador, Ellaine; Pastorin, Giorgia; Förster, Carola (2015). "Blood-brain barrier transport studies, aggregation, and molecular dynamics simulation of multiwalled carbon nanotube functionalized with fluorescein isothiocyanate". International Journal of Nanomedicine. 10: 1703–1713. doi:10.2147/IJN.S68429. PMC 4356663. PMID 25784800.
Wikipedia/Nanoparticles_for_drug_delivery_to_the_brain
Computed axial lithography is a method for 3D printing based on the principles of computed tomography, used to create objects from photo-curable resin. The process was developed by a collaboration between the University of California, Berkeley and the Lawrence Livermore National Laboratory. Unlike other methods of 3D printing, computed axial lithography does not build models by depositing layers of material, as fused deposition modelling and stereolithography do; instead, it creates objects by projecting 2D images of the spinning 3D model onto a cylinder of resin spinning at the same rate. It is notable for its ability to build an object much more quickly than other resin-based methods and for its ability to embed objects within the printed part. == References ==
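The projection scheme described above is essentially tomographic: each projector frame corresponds to a set of line integrals through a slice of the model, i.e., one view of its Radon transform. The sketch below illustrates only this projection step for a single 2D slice, assuming NumPy/SciPy are available; the published method additionally optimizes the projections against the resin's curing threshold, which is omitted here.

```python
import numpy as np
from scipy.ndimage import rotate

def projections(slice_2d, n_angles=180):
    """One 1-D line-integral profile of the slice per rotation angle
    (a discrete Radon transform). In CAL-style printing, filtered,
    non-negative versions of these profiles drive the projector as
    the resin vat spins through the same angles."""
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sino = np.stack([
        rotate(slice_2d, ang, reshape=False, order=1).sum(axis=0)
        for ang in angles
    ])
    return angles, sino

# Toy target slice: an annulus (the cross-section of a tube).
n = 128
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = x**2 + y**2
target = ((r2 < 0.8**2) & (r2 > 0.4**2)).astype(float)

angles, sino = projections(target)
print(sino.shape)  # (180, 128): one dose profile per projection angle
```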
Wikipedia/Computed_axial_lithography
Digital modeling and fabrication is a design and production process that combines 3D modeling or computer-aided design (CAD) with additive and subtractive manufacturing. Additive manufacturing is also known as 3D printing, while subtractive manufacturing may also be referred to as machining, and many other technologies can be used to physically produce the designed objects. == Modeling == Digitally fabricated objects are created with a variety of CAD software packages, using both 2D vector drawing and 3D modeling. Types of 3D models include wireframe, solid, surface and mesh. A design has one or more of these model types. == Machines for fabrication == Three machines are popular for fabrication: 1. CNC router 2. Laser cutter 3. 3D Printer === CNC milling machine === CNC stands for "computer numerical control". CNC mills or routers include proprietary software which interprets 2D vector drawings or 3D models and converts this information to G-code, which represents specific CNC functions in an alphanumeric format that the CNC mill can interpret. The G-codes drive a machine tool, a powered mechanical device typically used to fabricate components. CNC machines are classified according to the number of axes that they possess, with 3-, 4- and 5-axis machines all being common, and industrial robots being described as having as many as 9 axes. CNC machines are particularly successful in milling materials such as plywood, plastics, foam board, and metal at high speed. CNC machine beds are typically large enough to allow 4' × 8' (122 cm × 244 cm) sheets of material, including foam several inches thick, to be cut. === Laser cutter === The laser cutter is a machine that uses a laser to cut materials such as chip board, matte board, felt, wood, and acrylic up to 3/8 inch (1 cm) in thickness. The laser cutter is often bundled with driver software which interprets vector drawings produced by any number of CAD software platforms. The laser cutter is able to modulate the speed of the laser head, as well as the intensity and resolution of the laser beam, and as such is able both to cut and to score material, as well as to approximate raster graphics. Objects cut out of materials can be used in the fabrication of physical models, which will only require the assembly of the flat parts. === 3D printers === 3D printers use a variety of methods and technologies to assemble physical versions of digital objects. Typical desktop 3D printers can make small plastic 3D objects. They use a roll of thin plastic filament, melting the plastic and then depositing it precisely to cool and harden. They normally build 3D objects from bottom to top in a series of many very thin horizontal plastic layers. This process often happens over the course of several hours. ==== Fused deposition modeling ==== Fused deposition modeling, also known as fused filament fabrication, uses a 3-axis robotic system that extrudes material, typically a thermoplastic, one thin layer at a time and progressively builds up a shape. Examples of machines that use this method are the Dimension 768 and the Ultimaker. ==== Stereolithography ==== Stereolithography uses a high-intensity light projector, usually based on DLP technology, with a photosensitive polymer resin. It projects the profile of an object to build a single layer, curing the resin into a solid shape. Then the printer moves the object out of the way by a small amount and projects the profile of the next layer.
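A minimal sketch of this project-cure-advance loop is shown below. The Projector and Stage objects are hypothetical hardware interfaces invented for illustration (not a real printer API), and the layer images are assumed to come from slicing the 3D model beforehand.

```python
import time

LAYER_HEIGHT_MM = 0.05  # assumed layer thickness; real values are resin/printer dependent
CURE_TIME_S = 2.0       # assumed per-layer exposure time

def print_object(layer_images, projector, stage):
    """Cure one layer at a time: project the layer's cross-section,
    wait for the resin to solidify, then advance the build platform
    to make room for the next layer."""
    for i, image in enumerate(layer_images):
        projector.show(image)           # expose this layer's profile
        time.sleep(CURE_TIME_S)         # photopolymer cures under the light
        projector.blank()               # stop the exposure
        stage.move_by(LAYER_HEIGHT_MM)  # step the platform one layer height
        print(f"layer {i + 1}/{len(layer_images)} cured")
```

The same loop structure, with the projector swapped for an extruder or a laser, describes the other layer-by-layer processes covered in this article.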
Examples of devices that use this method are the Form-One printer and Os-RC Illios. ==== Selective laser sintering ==== Selective laser sintering uses a laser to trace out the shape of an object in a bed of finely powdered material that can be fused together by the application of heat from the laser. After one layer has been traced by the laser, the bed and partially finished part are moved out of the way, a thin layer of the powdered material is spread, and the process is repeated. Typical materials used are alumide, steel, glass, thermoplastics (especially nylon), and certain ceramics. Example devices include the Formiga P 110 and the EOS EOSINT P730. ==== Powder printer ==== Powder printers work in a similar manner to SLS machines, and typically use powders that can be cured, hardened, or otherwise made solid by the application of a liquid binder that is delivered via an inkjet printhead. Common materials are plaster of Paris, clay, powdered sugar, wood-filler bonding putty, and flour, which are typically cured with water, alcohol, vinegar, or some combination thereof. The major advantage of powder and SLS machines is their ability to continuously support all parts of their objects throughout the printing process with unprinted powder, which permits the production of geometries not easily created otherwise. However, these printers are often more complex and expensive. Examples of printers using this method are the ZCorp Zprint 400 and 450. == See also == Direct digital manufacturing Industry 4.0 Rapid Prototyping Responsive computer-aided design Technology education == References ==
Wikipedia/Digital_modeling_and_fabrication
3D Systems Corporation is an American company based in Rock Hill, South Carolina, that engineers, manufactures, and sells 3D printers, 3D printing materials, 3D printed parts, and application engineering services. The company creates product concept models, precision and functional prototypes, master patterns for tooling, as well as production parts for direct digital manufacturing. It uses proprietary processes to fabricate physical objects using input from computer-aided design and manufacturing software, or 3D scanning and 3D sculpting devices. 3D Systems' technologies and services are used in the design, development, and production stages of many industries, including aerospace, automotive, healthcare, dental, entertainment, and durable goods. The company offers a range of professional- and production-grade 3D printers, as well as software, materials, and an online on-demand rapid part printing service. It is notable within the 3D printing industry for developing stereolithography and the STL file format. Chuck Hull, CTO and former president, pioneered stereolithography and obtained a patent for the technology in 1986. As of 2020, 3D Systems employed over 2,400 people in 25 offices worldwide. == History == 3D Systems was founded in Valencia, California, by Chuck Hull, the inventor and patent-holder of the first stereolithography (SLA) rapid prototyping system. Prior to Hull's introduction of SLA rapid prototyping, concept models required extensive time and money to produce. The innovation of SLA reduced these resource expenditures while increasing the quality and accuracy of the resulting model. Early SLA systems were complex and costly, and required extensive redesigns before achieving commercial viability. Primary issues concerned hydrodynamic and chemical complications. In 1996, the introduction of solid-state lasers permitted Hull and his team to reformulate their materials. Engineers in transportation, healthcare, and consumer products helped fuel early phases of 3D Systems' rapid prototyping research and development. These industries remain key adopters of 3D Systems' technology. In late 2001, 3D Systems began an acquisitions program that expanded the company's technology through ownership of software, materials, printers, and printable content, as well as access to the skills of engineers and designers. The rate of 3D Systems' acquisitions (16 in 2011) raised questions with regard to the task facing the company's management team. Other onlookers pointed to the encompassing scope of the acquisitions as indicating calculated steps by 3D Systems to consolidate the 3D printing industry under one roof and logo, and to become capable of servicing each link in the scan/create-to-print chain. In 2003, Hull was succeeded by Avi Reichental. Both Reichental and Hull are listed among the top twenty most influential people in rapid technologies by TCT Magazine. Hull remains an active member of 3D Systems' board and serves as the company's Chief Technology Officer and Executive Vice President. In 2005, 3D Systems relocated its headquarters to Rock Hill, South Carolina, citing a favorable business climate, a sustained lower cost of doing business, and significant investment and tax benefits as reasons for the move. In May 2011, 3D Systems transferred from Nasdaq (TDSC) to the New York Stock Exchange (DDD). In January 2012, 3D Systems acquired Z Corporation for US$137 million.
That same year, a Gray Wolf Report predicted 3D Systems' rate of growth to be unsustainable, pointing to inflated impressions from acquisitions as a corporate misstatement of organic growth. 3D Systems responded to this article on November 19, 2012, claiming it to "contain materially false statements and erroneous conclusions that we believe defamed the company and its reputation and resulted in losses to our shareholders". In January 2014, it was announced that 3D Systems had acquired the Burbank, CA-based collectibles company Gentle Giant Studios, which designs, develops, and manufactures three-dimensional representations of characters from a variety of globally recognized franchises, including Marvel, Disney, AMC's The Walking Dead, Avatar, Harry Potter and Star Wars. In July 2014, 3D Systems announced the acquisition of the Israeli medical imaging company Simbionix for US$120 million. In September 2014, 3D Systems acquired the Leuven, Belgium-based LayerWise, a principal provider of direct metal 3D printing and manufacturing services spun off from KU Leuven. The terms of the acquisition were not disclosed by either company. In January 2015, 3D Systems acquired the 3D printer manufacturer botObjects, the first company to commercialize a full-color printer using the fused filament fabrication technique. botObjects was founded by Martin Warner (CEO) and Mike Duma (CTO). botObjects' proprietary 5-color CMYKW cartridge system was claimed to be able to generate color combinations and gradients by mixing primary printing colors. There was some skepticism about botObjects' claims. In April 2015, 3D Systems announced its acquisition of the Chinese Easyway Group, creating 3D Systems China. Easyway is a Chinese 3D printing sales and service provider, with key operations in Shanghai, Wuxi, Beijing, Guangdong, and Chongqing. In October 2015, Reichental stepped down as the president and CEO of 3D Systems, Inc. and was replaced on an interim basis by the company's chief legal officer, Andrew Johnson. Vyomesh Joshi (VJ) was appointed as president and CEO on April 4, 2016. On May 14, 2020, the 3D Systems board named Jeff Graves as president and CEO, effective May 26. He remains the CEO as of February 17, 2023. == Technology == 3D Systems manufactures stereolithography (SLA), fused deposition modeling (FDM), selective laser sintering (SLS), color-jet printing (CJP), multi-jet printing (MJP), and direct metal printing (DMP, a version of SLS that uses metal powder) systems. Each technology uses digital 3D data to create parts through an additive, layer-by-layer process. The systems vary in their materials, print capacities, and applications. Color-jet printing uses inkjet technology to deposit a liquid binder across a bed of powder. Powder is released and spread with a roller to form each new layer. This technology was originally developed by Z Corporation. Multi-jet printing refers to the process of depositing liquid photopolymers onto a build surface using inkjet technology. A high resolution is attainable, with a support material that can be easily removed in post-processing. == Products and patents == As part of 3D Systems' effort to consolidate 3D printing under one company, its products span a range of 3D printers and print products to target users of its technologies across industries. 3D Systems offers both professional and production printers. In addition to printers, 3D Systems offers content creation software, including reverse engineering software and organic 3D modeling software.
Following a razor and blades model, 3D Systems offers more than one hundred materials to be used with its printers, including waxes, rubber-like materials, metals, composites, plastics and nylons. 3D Systems is a closed-source company, using in-house technologies for product development and patents to protect its technologies from competitors. Critics of the closed-source model have blamed seemingly slow development and innovation in 3D printing not on a lack of technology, but on a lack of open information sharing within the industry, while supporters argue that the right to patents inspires and motivates higher-quality innovations, leading to a better and more impressive final product. In November 2012, 3D Systems filed a lawsuit against the prosumer 3D printer company Formlabs and the Kickstarter crowdfunding website over Formlabs' attempt to fund a printer which it claimed infringed its patent on "Simultaneous multiple layer curing in stereolithography." The legal procedure lasted more than two years and was significant enough to be covered in a Netflix documentary about 3D printing, called "Print the Legend". 3D Systems has applied for patents for the following innovations and technologies: the rapid prototyping and manufacturing system and method; radiation-curable compositions useful in image projection systems; compensation of actinic radiation intensity profiles for 3D modelers; apparatus and methods for cooling laser-sintered parts; radiation-curable compositions useful in solid freeform fabrication systems; apparatus for 3D printing using imaged layers; compositions and methods for selective deposition modeling; edge smoothness with low-resolution projected images for use in solid imaging; an elevator and method for tilting a solid image build platform for reducing air entrapment and for build release; selective deposition modeling methods for improved support-object interface; region-based supports for parts produced by solid freeform fabrication; additive manufacturing methods for improved curl control and sidewall quality; and support and build materials and applications. == Applications and industries == 3D Systems' products and services are used across industries to assist, either in part or in full, the design, manufacture and/or marketing processes. 3D Systems' technologies and materials are used for prototyping and the production of functional end-use parts, in addition to fast, precise design communication. Current 3D Systems-reliant industries include automotive, aerospace and defense, architecture, dental and healthcare, consumer goods, and manufacturing. Examples of industry-specific applications include: Aerospace, for the manufacture and tooling of complex, durable and lighter-weight flight parts Architecture, for structure verification, design review, client concept communication, reverse structure engineering, and expedited scaled modeling Automotive, for design verification, difficult visualizations, and new engine development Defense, for lightweight flight and surveillance parts and the reduction of inventory with on-demand printing Dentistry, for restorations, molds and treatments. Invisalign orthodontics devices use 3D Systems' technologies. Education, for equation and geometry visualizations, art education, and design initiatives Entertainment, for the manufacture and prototyping of action figures, toys, games and game components; printing of sustainable guitars and basses, multifunction synthesizers, etc.
Healthcare, for customized hearing aids and prosthetics, improved medicine delivery methods, respiratory devices, therapeutics, and flexible endoscopy and laparoscopy devices for improved procedures and recovery times Manufacturing, for faster product development cycles, mold production, prototypes, and design troubleshooting For industries such as aerospace and automotive, 3D Systems' technologies have reduced the time needed to incorporate design drafts and enabled the production of more efficient parts of lighter weight. Because 3D printing builds layer by layer according to the design, it does not need to accommodate the traditional manufacturing tools of subtractive methods, often resulting in lighter parts and more efficient geometries. == Operations == In 2007, the company consolidated its offices, operations, and research and development functions into a new global headquarters in Rock Hill, South Carolina, US. About half of the headquarters' 80,000 square feet (7,400 m2) consist of research and development laboratories, with an 18,000-square-foot (1,700 m2) Rapid Manufacturing Center (RMC) with 3D Systems' rapid prototyping, rapid manufacturing and 3D printing systems at work. With customers in 80 countries, 3D Systems has over 2,100 employees in 25 worldwide locations, including San Francisco, Leuven, France, Germany, Italy, Switzerland, South Korea, Brazil, the United Kingdom, China and Japan. The company has more than 359 U.S. and foreign patents. In 2019, the company consolidated resources within its On Demand domestic rapid printing service locations into Littleton, Seattle, Lawrenceburg, and Wilsonville. Restructuring and additions were made to the Lawrenceburg facility for future expansions and growth, which nearly doubled its size. === Community involvement and partnerships === 3D Systems is involved in a multi-year agreement with the Smithsonian Institution as part of an effort to strengthen collections' stewardship and increase collection accessibility through 3D representations. In 2012, 3D Systems began partnering with the Scholastic Art & Writing Awards in the Future New category, where three winners are awarded a $1000 scholarship in addition to the prizes and recognition granted to winners by the Scholastic Awards, and contributed two production-grade 3D printers to the National Network for Manufacturing Innovation (NNMI), which aims to re-localize manufacturing and increase US manufacturing competitiveness. 3D Systems is also a corporate underwriter of the National Children's Oral Health Foundation (NCOHF), which delivers educational, preventative and treatment oral health services to children in at-risk populations. On February 18, 2014, Ekso Bionics debuted the first 3D-printed hybrid exoskeleton in collaboration with 3D Systems. == See also == List of 3D printer manufacturers == References == == External links == Official website Business data for 3D Systems:
Wikipedia/3D_Systems
Neuronavigation is the set of computer-assisted technologies used by neurosurgeons to guide or "navigate" within the confines of the skull or vertebral column during surgery, and used by psychiatrists to accurately target rTMS (repetitive transcranial magnetic stimulation). The set of hardware for these purposes is referred to as a neuronavigator. == Stereotactic surgery == Neuronavigation is recognized as the next evolutionary step of stereotactic surgery, a set of techniques that dates back to the early 1900s and that gained popularity during the 1940s, particularly in Germany, France and the U.S., with the development of surgery for the treatment of movement disorders such as Parkinson's disease and dystonias. In its infancy the purpose of this technology was to create a mathematical model describing a proposed coordinate system for the space within a closed structure, e.g., the skull. This "fiducial spatial coordinate system" uses fiducial markers as a reference to describe with high accuracy the position of specific structures within this arbitrarily defined space. The surgeon then refers to that data to target particular structures within the brain. This technology was boosted by the collection of data on human anatomy in "stereotactic atlases", expanding the quantitatively defined "targets" that could be readily used in surgery. Finally, the advent of modern neuro-imaging technologies such as computed tomography (CT) and magnetic resonance imaging (MRI)—along with the ever-increasing capabilities of digitalization, computer-graphic modelling and accelerated manipulation of data through complex mathematical algorithms via robust computer technologies—made possible the real-time quantitative spatial fusion of images of the patient's brain with the created "fiducial coordinate system" for the purpose of guiding the surgeon's instrument or probe to a selected target. In this way the observations made via highly sophisticated neuro-imaging technologies (CT, MRI, angiography) are related to the actual patient during surgery. == Neuro imaging == The ability to relate the position of a real surgical instrument in the surgeon's hand or the microscope's focal point to the location of the imaged pathology, updated in "real time" in an "integrated operating room", highlights the modern version of this set of technologies. In its current form, neuronavigation began in the 1990s and has adapted to new neuro-imaging technologies, real-time imaging capabilities, new technologies to transfer the information in the operating room for 3-D localization, real-time neuro-monitoring, robotics, and new and better algorithms to handle data via more sophisticated computer technology. == Surgical virtualization == In its later conceptualization, the term neuronavigation has started to overlap with surgical virtualization, in which a neurosurgeon is able to visualize the scenario for surgery in a 3-D model of manipulable computer data. In this way the physician can "practice and check" the surgery, try alternative approaches, assess possible difficulties, etc., before the real surgery takes place. == Neuronavigation for transcranial magnetic stimulation == The standard TMS protocol, which was FDA-approved in 2008, estimates the location of the dorsolateral prefrontal cortex (DLPFC) by finding the left motor cortex and marking a spot 5 cm anterior to it. Later, two more methods were introduced that use head measurements to estimate the location of the DLPFC: 1) the F3 position of the EEG 10/20 system and 2) the Beam method.
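Whether used for surgery or for navigated TMS, the mathematical core of relating the image-space "fiducial coordinate system" described above to the physical patient is rigid point-set registration: given the coordinates of the same fiducial markers in both spaces, one solves for the rotation and translation that best map one onto the other. Below is a minimal sketch of the standard least-squares (SVD/Kabsch) solution, assuming NumPy and paired fiducial coordinates; it is illustrative only, not any vendor's implementation.

```python
import numpy as np

def register_fiducials(image_pts, patient_pts):
    """Best-fit rigid transform (R, t) mapping image-space fiducials
    onto patient-space fiducials. Inputs are (N, 3) arrays of
    corresponding points, N >= 3 and not collinear."""
    ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (image_pts - ci).T @ (patient_pts - cp)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ ci
    return R, t

# Toy check: recover a known 30-degree rotation plus translation.
rng = np.random.default_rng(0)
pts = rng.normal(size=(4, 3))
th = np.radians(30.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
moved = pts @ R_true.T + np.array([10.0, -5.0, 2.0])
R, t = register_fiducials(pts, moved)
print(np.allclose(R, R_true), np.allclose(t, [10.0, -5.0, 2.0]))  # True True
```

The residual distance between mapped and measured fiducials (the fiducial registration error) is what clinical navigation systems report as a quality check.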
Both were estimations with some limitations. With the introduction of neuronavigation, direct visualization of structures can be achieved either with an individual's (specially ordered) MRI or with an average brain (MNI template) stretched to the dimensions of the individual. This increased accuracy has gained significance with recent evidence that stimulation of the gyral crown is less effective than stimulation of the sulcal bank. The introduction of robot-controlled TMS may also make neuronavigation more important; several manufacturers, including Ant Neuro and Axilum Robotics, offer complete systems. == Neuronavigation for spine surgery == Assistive technologies are used during spinal fusion surgery to increase accuracy, especially for the placement of pedicle screws. A review of navigation techniques for spine surgery published in 2019 listed four currently available options: the Medtronic Stealth system, BrainLab, Stryker navigation, and the 7D Surgical system. == External links == American Association of Neurological Surgeons (AANS.org). Neggers SF, Langerak TR, Schutter DJ, et al. (April 2004). "A stereotactic method for image-guided transcranial magnetic stimulation validated with fMRI and motor-evoked potentials". NeuroImage. 21 (4): 1805–17. doi:10.1016/j.neuroimage.2003.12.006. PMID 15050601. S2CID 25409984. == References ==
Wikipedia/Neuronavigation
Biology is the scientific study of life and living organisms. It is a broad natural science that encompasses a wide range of fields and unifying principles that explain the structure, function, growth, origin, evolution, and distribution of life. Central to biology are five fundamental themes: the cell as the basic unit of life, genes and heredity as the basis of inheritance, evolution as the driver of biological diversity, energy transformation for sustaining life processes, and the maintenance of internal stability (homeostasis). Biology examines life across multiple levels of organization, from molecules and cells to organisms, populations, and ecosystems. Subdisciplines include molecular biology, physiology, ecology, evolutionary biology, developmental biology, and systematics, among others. Each of these fields applies a range of methods to investigate biological phenomena, including observation, experimentation, and mathematical modeling. Modern biology is grounded in the theory of evolution by natural selection, first articulated by Charles Darwin, and in the molecular understanding of genes encoded in DNA. The discovery of the structure of DNA and advances in molecular genetics have transformed many areas of biology, leading to applications in medicine, agriculture, biotechnology, and environmental science. Life on Earth is believed to have originated over 3.7 billion years ago. Today, it includes a vast diversity of organisms—from single-celled archaea and bacteria to complex multicellular plants, fungi, and animals. Biologists classify organisms based on shared characteristics and evolutionary relationships, using taxonomic and phylogenetic frameworks. These organisms interact with each other and with their environments in ecosystems, where they play roles in energy flow and nutrient cycling. As a constantly evolving field, biology incorporates new discoveries and technologies that enhance the understanding of life and its processes, while contributing to solutions for challenges such as disease, climate change, and biodiversity loss. == Etymology == From Greek bios, "life" (from the Proto-Indo-European root *gwei-, "to live"), and -logy, "study of". The compound was coined in 1800 by Karl Friedrich Burdach and used in 1802 by both the German naturalist Gottfried Reinhold Treviranus and Jean-Baptiste Lamarck. == History == The earliest roots of science, which included medicine, can be traced to ancient Egypt and Mesopotamia around 3000 to 1200 BCE. Their contributions shaped ancient Greek natural philosophy. Ancient Greek philosophers such as Aristotle (384–322 BCE) contributed extensively to the development of biological knowledge. He explored biological causation and the diversity of life. His successor, Theophrastus, began the scientific study of plants. Scholars of the medieval Islamic world who wrote on biology included al-Jahiz (781–869), Al-Dīnawarī (828–896), who wrote on botany, and Rhazes (865–925), who wrote on anatomy and physiology. Medicine was especially well studied by Islamic scholars working in the Greek philosophical tradition, while natural history drew heavily on Aristotelian thought. Biology began to develop quickly with Anton van Leeuwenhoek's dramatic improvement of the microscope. It was then that scholars discovered spermatozoa, bacteria, infusoria, and the diversity of microscopic life. Investigations by Jan Swammerdam led to new interest in entomology and helped to develop techniques of microscopic dissection and staining.
Advances in microscopy had a profound impact on biological thinking. In the early 19th century, biologists pointed to the central importance of the cell. In 1838, Schleiden and Schwann began promoting the now universal ideas that (1) the basic unit of organisms is the cell and (2) that individual cells have all the characteristics of life, although they opposed the idea that (3) all cells come from the division of other cells, continuing to support spontaneous generation. However, Robert Remak and Rudolf Virchow were able to establish the third tenet, and by the 1860s most biologists accepted all three tenets, which consolidated into cell theory. Meanwhile, taxonomy and classification became the focus of natural historians. Carl Linnaeus published a basic taxonomy for the natural world in 1735, and in the 1750s introduced scientific names for all his species. Georges-Louis Leclerc, Comte de Buffon, treated species as artificial categories and living forms as malleable—even suggesting the possibility of common descent. Serious evolutionary thinking originated with the works of Jean-Baptiste Lamarck, who presented a coherent theory of evolution. The British naturalist Charles Darwin, combining the biogeographical approach of Humboldt, the uniformitarian geology of Lyell, Malthus's writings on population growth, and his own morphological expertise and extensive natural observations, forged a more successful evolutionary theory based on natural selection; similar reasoning and evidence led Alfred Russel Wallace to independently reach the same conclusions. The basis for modern genetics began with the work of Gregor Mendel in 1865, which outlined the principles of biological inheritance. However, the significance of his work was not realized until the early 20th century, when evolution became a unified theory as the modern synthesis reconciled Darwinian evolution with classical genetics. In the 1940s and early 1950s, a series of experiments by Alfred Hershey and Martha Chase pointed to DNA as the component of chromosomes that held the trait-carrying units that had become known as genes. A focus on new kinds of model organisms such as viruses and bacteria, along with the discovery of the double-helical structure of DNA by James Watson and Francis Crick in 1953, marked the transition to the era of molecular genetics. From the 1950s onwards, biology has been vastly extended in the molecular domain. The genetic code was cracked by Har Gobind Khorana, Robert W. Holley, and Marshall Warren Nirenberg after DNA was understood to contain codons. The Human Genome Project was launched in 1990 to map the human genome. == Chemical basis == === Atoms and molecules === All organisms are made up of chemical elements; oxygen, carbon, hydrogen, and nitrogen account for most (96%) of the mass of all organisms, with calcium, phosphorus, sulfur, sodium, chlorine, and magnesium constituting essentially all the remainder. Different elements can combine to form compounds such as water, which is fundamental to life. Biochemistry is the study of chemical processes within and relating to living organisms. Molecular biology is the branch of biology that seeks to understand the molecular basis of biological activity in and between cells, including molecular synthesis, modification, mechanisms, and interactions. === Water === Life arose from the Earth's first ocean, which formed some 3.8 billion years ago. Since then, water continues to be the most abundant molecule in every organism.
Water is important to life because it is an effective solvent, capable of dissolving solutes such as sodium and chloride ions or other small molecules to form an aqueous solution. Once dissolved in water, these solutes are more likely to come in contact with one another and therefore take part in chemical reactions that sustain life. In terms of its molecular structure, water is a small polar molecule with a bent shape formed by the polar covalent bonds of two hydrogen (H) atoms to one oxygen (O) atom (H2O). Because the O–H bonds are polar, the oxygen atom has a slight negative charge and the two hydrogen atoms have a slight positive charge. This polar property of water allows it to attract other water molecules via hydrogen bonds, which makes water cohesive. Surface tension results from the cohesive force due to the attraction between molecules at the surface of the liquid. Water is also adhesive, as it is able to adhere to the surface of any polar or charged non-water molecules. Water is denser as a liquid than it is as a solid (or ice). This unique property of water allows ice to float on liquid water in ponds, lakes, and oceans, thereby insulating the liquid below from the cold air above. Water has the capacity to absorb energy, giving it a higher specific heat capacity than other solvents such as ethanol. Thus, a large amount of energy is needed to break the hydrogen bonds between water molecules to convert liquid water into water vapor. As a molecule, water is not completely stable, as each water molecule continuously dissociates into hydrogen and hydroxyl ions before reforming into a water molecule again. In pure water, the number of hydrogen ions balances (or equals) the number of hydroxyl ions, resulting in a pH that is neutral. === Organic compounds === Organic compounds are molecules that contain carbon bonded to another element such as hydrogen. With the exception of water, nearly all the molecules that make up each organism contain carbon. Carbon can form covalent bonds with up to four other atoms, enabling it to form diverse, large, and complex molecules. For example, a single carbon atom can form four single covalent bonds, such as in methane, two double covalent bonds, such as in carbon dioxide (CO2), or a triple covalent bond, such as in carbon monoxide (CO). Moreover, carbon can form very long chains of interconnecting carbon–carbon bonds, such as octane, or ring-like structures, such as glucose. The simplest form of an organic molecule is the hydrocarbon, which is a large family of organic compounds that are composed of hydrogen atoms bonded to a chain of carbon atoms. A hydrocarbon backbone can be substituted by other elements such as oxygen (O), hydrogen (H), phosphorus (P), and sulfur (S), which can change the chemical behavior of that compound. Groups of atoms that contain these elements (O-, H-, P-, and S-) and are bonded to a central carbon atom or skeleton are called functional groups. There are six prominent functional groups that can be found in organisms: the amino group, carboxyl group, carbonyl group, hydroxyl group, phosphate group, and sulfhydryl group. In 1953, the Miller–Urey experiment showed that organic compounds could be synthesized abiotically within a closed system mimicking the conditions of early Earth, thus suggesting that complex organic molecules could have arisen spontaneously on early Earth (see abiogenesis). === Macromolecules === Macromolecules are large molecules made up of smaller subunits or monomers.
Monomers include sugars, amino acids, and nucleotides. Carbohydrates include monomers and polymers of sugars. Lipids are the only class of macromolecules that are not made up of polymers. They include steroids, phospholipids, and fats, which are largely nonpolar and hydrophobic (water-repelling) substances. Proteins are the most diverse of the macromolecules. They include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. The basic unit (or monomer) of a protein is an amino acid. Twenty amino acids are used in proteins. Nucleic acids are polymers of nucleotides. Their function is to store, transmit, and express hereditary information. == Cells == Cell theory states that cells are the fundamental units of life, that all living things are composed of one or more cells, and that all cells arise from preexisting cells through cell division. Most cells are very small, with diameters ranging from 1 to 100 micrometers, and are therefore only visible under a light or electron microscope. There are generally two types of cells: eukaryotic cells, which contain a nucleus, and prokaryotic cells, which do not. Prokaryotes are single-celled organisms such as bacteria, whereas eukaryotes can be single-celled or multicellular. In multicellular organisms, every cell in the organism's body is derived ultimately from a single cell in a fertilized egg. === Cell structure === Every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space. A cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. Cell membranes are semipermeable, allowing small molecules such as oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions. Cell membranes also contain membrane proteins, including integral membrane proteins that go across the membrane, serving as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes shaping the cell. Cell membranes are involved in various cellular processes, such as cell adhesion, storing electrical energy, and cell signalling, and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton. Within the cytoplasm of a cell, there are many biomolecules such as proteins and nucleic acids. In addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially distinct units. These organelles include the cell nucleus, which contains most of the cell's DNA, and mitochondria, which generate adenosine triphosphate (ATP) to power cellular processes. Other organelles, such as the endoplasmic reticulum and the Golgi apparatus, play a role in the synthesis and packaging of proteins, respectively. Biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. Plant cells have additional organelles that distinguish them from animal cells, such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support as well as being involved in reproduction and the breakdown of plant seeds.
Eukaryotic cells also have a cytoskeleton that is made up of microtubules, intermediate filaments, and microfilaments, all of which provide support for the cell and are involved in the movement of the cell and its organelles. In terms of their structural composition, the microtubules are made up of tubulin (e.g., α-tubulin and β-tubulin), whereas intermediate filaments are made up of fibrous proteins. Microfilaments are made up of actin molecules that interact with other strands of proteins. === Metabolism === All cells require energy to sustain cellular processes. Metabolism is the set of chemical reactions in an organism. The three main purposes of metabolism are the conversion of food to energy to run cellular processes; the conversion of food/fuel to monomer building blocks; and the elimination of metabolic wastes. These enzyme-catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. Metabolic reactions may be categorized as catabolic—the breaking down of compounds (for example, the breaking down of glucose to pyruvate by cellular respiration)—or anabolic—the building up (synthesis) of compounds (such as proteins, carbohydrates, lipids, and nucleic acids). Usually, catabolism releases energy, and anabolism consumes energy. The chemical reactions of metabolism are organized into metabolic pathways, in which one chemical is transformed through a series of steps into another chemical, each step being facilitated by a specific enzyme. Enzymes are crucial to metabolism because they allow organisms to drive desirable reactions that require energy and will not occur by themselves, by coupling them to spontaneous reactions that release energy. Enzymes act as catalysts—they allow a reaction to proceed more rapidly without being consumed by it—by reducing the amount of activation energy needed to convert reactants into products. Enzymes also allow the regulation of the rate of a metabolic reaction, for example in response to changes in the cell's environment or to signals from other cells. === Cellular respiration === Cellular respiration is a set of metabolic reactions and processes that take place in cells to convert chemical energy from nutrients into adenosine triphosphate (ATP), and then release waste products. The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, releasing energy. Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it clearly does not resemble one when it occurs in a cell because of the slow, controlled release of energy from the series of reactions. Sugar in the form of glucose is the main nutrient used by animal and plant cells in respiration. Cellular respiration involving oxygen is called aerobic respiration, which has four stages: glycolysis, the citric acid cycle (or Krebs cycle), the electron transport chain, and oxidative phosphorylation. Glycolysis is a metabolic process that occurs in the cytoplasm whereby glucose is converted into two pyruvates, with two net molecules of ATP being produced at the same time. Each pyruvate is then oxidized into acetyl-CoA by the pyruvate dehydrogenase complex, which also generates NADH and carbon dioxide. Acetyl-CoA enters the citric acid cycle, which takes place inside the mitochondrial matrix.
At the end of the cycle, the total yield from 1 glucose (or 2 pyruvates) is 6 NADH, 2 FADH2, and 2 ATP molecules. The final stage is oxidative phosphorylation, which, in eukaryotes, occurs in the mitochondrial cristae. Oxidative phosphorylation comprises the electron transport chain, a series of four protein complexes that transfer electrons from one complex to another, thereby releasing energy from NADH and FADH2 that is coupled to the pumping of protons (hydrogen ions) across the inner mitochondrial membrane (chemiosmosis), which generates a proton motive force. Energy from the proton motive force drives the enzyme ATP synthase to synthesize more ATP by phosphorylating ADP. The transfer of electrons terminates with molecular oxygen being the final electron acceptor. If oxygen is not present, pyruvate is not metabolized by cellular respiration but undergoes a process of fermentation. The pyruvate is not transported into the mitochondrion but remains in the cytoplasm, where it is converted to waste products that may be removed from the cell. This serves the purpose of oxidizing the electron carriers so that they can perform glycolysis again, and of removing the excess pyruvate. Fermentation oxidizes NADH to NAD+ so it can be re-used in glycolysis. In the absence of oxygen, fermentation prevents the buildup of NADH in the cytoplasm and provides NAD+ for glycolysis. The waste product varies depending on the organism. In skeletal muscles, the waste product is lactic acid. This type of fermentation is called lactic acid fermentation. In strenuous exercise, when energy demands exceed energy supply, the respiratory chain cannot process all of the hydrogen atoms carried by NADH. During anaerobic glycolysis, NAD+ regenerates when pairs of hydrogen combine with pyruvate to form lactate. Lactate formation is catalyzed by lactate dehydrogenase in a reversible reaction. Lactate can also be used as an indirect precursor for liver glycogen. During recovery, when oxygen becomes available, NAD+ accepts hydrogen from lactate to form NADH, which can then be oxidized to generate ATP. In yeast, the waste products are ethanol and carbon dioxide. This type of fermentation is known as alcoholic or ethanol fermentation. The ATP generated in this process is made by substrate-level phosphorylation, which does not require oxygen. === Photosynthesis === Photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organism's metabolic activities via cellular respiration. This chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water. In most cases, oxygen is released as a waste product. Most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the Earth's atmosphere and supplies most of the energy necessary for life on Earth. Photosynthesis has four stages: light absorption, electron transport, ATP synthesis, and carbon fixation. Light absorption is the initial step of photosynthesis, whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. The absorbed light energy is used to remove electrons from a donor (water) to a primary electron acceptor, a quinone designated as Q.
In the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, usually NADP+, which is reduced to NADPH; this process takes place in a protein complex called photosystem I (PSI). The transport of electrons is coupled to the movement of protons (or hydrogen ions) from the stroma into the thylakoid lumen, which forms a pH gradient across the membrane as hydrogen ions become more concentrated in the lumen than in the stroma. This is analogous to the proton-motive force generated across the inner mitochondrial membrane in aerobic respiration. During the third stage of photosynthesis, the movement of protons down their concentration gradient from the thylakoid lumen to the stroma through ATP synthase is coupled to the synthesis of ATP by that same ATP synthase. The NADPH and ATP generated by the light-dependent reactions in the second and third stages, respectively, provide the energy and electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate (RuBP), in a sequence of light-independent (or dark) reactions called the Calvin cycle. === Cell signaling === Cell signaling (or communication) is the ability of a cell to receive, process, and transmit signals with its environment and with itself. Signals can be non-chemical, such as light, electrical impulses, and heat, or chemical signals (or ligands) that interact with receptors, which can be found embedded in the cell membrane of another cell or located deep inside a cell. There are generally four types of chemical signals: autocrine, paracrine, juxtacrine, and hormones. In autocrine signaling, the ligand affects the same cell that releases it. Tumor cells, for example, can reproduce uncontrollably because they release signals that initiate their own self-division. In paracrine signaling, the ligand diffuses to nearby cells and affects them. For example, brain cells called neurons release ligands called neurotransmitters that diffuse across a synaptic cleft to bind with a receptor on an adjacent cell such as another neuron or a muscle cell. In juxtacrine signaling, there is direct contact between the signaling and responding cells. Finally, hormones are ligands that travel through the circulatory systems of animals or the vascular systems of plants to reach their target cells. Once a ligand binds with a receptor, it can influence the behavior of the receiving cell, depending on the type of receptor. For instance, neurotransmitters that bind with an ionotropic receptor can alter the excitability of a target cell. Other types of receptors include protein kinase receptors (e.g., the receptor for the hormone insulin) and G protein-coupled receptors. Activation of G protein-coupled receptors can initiate second messenger cascades. The process by which a chemical or physical signal is transmitted through a cell as a series of molecular events is called signal transduction. === Cell cycle === The cell cycle is a series of events that take place in a cell that cause it to divide into two daughter cells. These events include the duplication of its DNA and some of its organelles, and the subsequent partitioning of its cytoplasm into two daughter cells in a process called cell division. In eukaryotes (i.e., animal, plant, fungal, and protist cells), there are two distinct types of cell division: mitosis and meiosis.
Mitosis is part of the cell cycle, in which replicated chromosomes are separated into two new nuclei. Cell division gives rise to genetically identical cells in which the total number of chromosomes is maintained. In general, mitosis (division of the nucleus) is preceded by the S stage of interphase (during which the DNA is replicated) and is often followed by telophase and cytokinesis, which divides the cytoplasm, organelles, and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. The different stages of mitosis all together define the mitotic phase of an animal cell cycle—the division of the mother cell into two genetically identical daughter cells. The cell cycle is a vital process by which a single-celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. After cell division, each of the daughter cells begins the interphase of a new cycle. In contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of DNA replication followed by two divisions. Homologous chromosomes are separated in the first division (meiosis I), and sister chromatids are separated in the second division (meiosis II). Both of these cell division cycles are used in the process of sexual reproduction at some point in the life cycle of the organisms that use them, and both are believed to have been present in the last eukaryotic common ancestor. Prokaryotes (i.e., archaea and bacteria) can also undergo cell division (or binary fission). Unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus in the cell. Before binary fission, DNA in the bacterium is tightly coiled. After it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as the cell increases in size to prepare for splitting. Growth of a new cell wall begins to separate the bacterium (triggered by FtsZ polymerization and "Z-ring" formation). The new cell wall (septum) fully develops, resulting in the complete split of the bacterium. The new daughter cells have tightly coiled DNA rods, ribosomes, and plasmids. === Sexual reproduction and meiosis === Meiosis is a central feature of sexual reproduction in eukaryotes, and the most fundamental function of meiosis appears to be conservation of the integrity of the genome that is passed on to progeny by parents. Two aspects of sexual reproduction, meiotic recombination and outcrossing, are likely maintained respectively by the adaptive advantages of recombinational repair of genomic DNA damage and by genetic complementation, which masks the expression of deleterious recessive mutations. The beneficial effect of genetic complementation, derived from outcrossing (cross-fertilization), is also referred to as hybrid vigor or heterosis. Charles Darwin, in his 1878 book The Effects of Cross and Self-Fertilization in the Vegetable Kingdom, noted at the start of chapter XII: "The first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross-fertilisation is beneficial and self-fertilisation often injurious, at least with the plants on which I experimented." Genetic variation, often produced as a byproduct of sexual reproduction, may provide long-term advantages to those sexual lineages that engage in outcrossing. == Genetics == === Inheritance === Genetics is the scientific study of inheritance.
Mendelian inheritance, specifically, is the process by which genes and traits are passed on from parents to offspring. It has several principles. The first is that genetic characteristics, alleles, are discrete and have alternate forms (e.g., purple vs. white or tall vs. dwarf), each inherited from one of two parents. According to the law of dominance and uniformity, some alleles are dominant while others are recessive; an organism with at least one dominant allele will display the phenotype of that dominant allele. During gamete formation, the alleles for each gene segregate, so that each gamete carries only one allele for each gene. Heterozygous individuals produce gametes with an equal frequency of the two alleles. Finally, the law of independent assortment states that genes of different traits can segregate independently during the formation of gametes, i.e., the genes are unlinked. An exception to this rule would include traits that are sex-linked. Test crosses can be performed to experimentally determine the underlying genotype of an organism with a dominant phenotype. A Punnett square can be used to predict the results of a test cross. The chromosome theory of inheritance, which states that genes are found on chromosomes, was supported by Thomas Morgan's experiments with fruit flies, which established that eye color is sex-linked in these insects. === Genes and DNA === A gene is a unit of heredity that corresponds to a region of deoxyribonucleic acid (DNA) that carries genetic information that controls the form or function of an organism. DNA is composed of two polynucleotide chains that coil around each other to form a double helix. It is found as linear chromosomes in eukaryotes and circular chromosomes in prokaryotes. The set of chromosomes in a cell is collectively known as its genome. In eukaryotes, DNA is mainly in the cell nucleus. In prokaryotes, the DNA is held within the nucleoid. The genetic information is held within genes, and the complete assemblage in an organism is called its genotype. DNA replication is a semiconservative process whereby each strand serves as a template for a new strand of DNA. Mutations are heritable changes in DNA. They can arise spontaneously as a result of replication errors that were not corrected by proofreading, or can be induced by an environmental mutagen such as a chemical (e.g., nitrous acid, benzopyrene) or radiation (e.g., x-rays, gamma rays, ultraviolet radiation, particles emitted by unstable isotopes). Mutations can lead to phenotypic effects such as loss-of-function, gain-of-function, and conditional mutations. Some mutations are beneficial, as they are a source of genetic variation for evolution. Others are harmful if they result in a loss of function of genes needed for survival. === Gene expression === Gene expression is the molecular process by which a genotype encoded in DNA gives rise to an observable phenotype in the proteins of an organism's body. This process is summarized by the central dogma of molecular biology, which was formulated by Francis Crick in 1958. According to the central dogma, genetic information flows from DNA to RNA to protein. There are two gene expression processes: transcription (DNA to RNA) and translation (RNA to protein).
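The flow from DNA to RNA to protein can be made concrete with a short sketch. The following Python fragment is purely illustrative: it uses the textbook shortcut of reading the coding strand directly, a deliberately tiny four-entry subset of the real 64-codon genetic code, and an invented sample sequence.

```python
# Minimal sketch of the central dogma: DNA -> RNA -> protein.
# The codon table is a tiny subset of the real 64-codon genetic code,
# and the sample sequence is invented purely for illustration.

CODON_TABLE = {
    "AUG": "Met",   # start codon, methionine
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "UAA": "Stop",  # one of the three stop codons
}

def transcribe(coding_strand):
    # Textbook shortcut: the mRNA matches the coding strand of DNA,
    # with thymine (T) replaced by uracil (U).
    return coding_strand.replace("T", "U")

def translate(mrna):
    # Read the message three bases (one codon) at a time until a stop codon.
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "Stop":
            break
        protein.append(amino_acid)
    return protein

gene = "ATGTTTGGCTAA"            # invented example sequence
mrna = transcribe(gene)          # "AUGUUUGGCUAA"
print(translate(mrna))           # ['Met', 'Phe', 'Gly']
```

Real gene expression involves far more machinery (promoters, RNA processing, ribosomes, post-translational modification), but the codon-by-codon reading frame is exactly the bookkeeping that the central dogma summarizes.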
=== Gene regulation === The regulation of gene expression by environmental factors and during different stages of development can occur at each step of the process, such as transcription, RNA splicing, translation, and post-translational modification of a protein. Gene expression can be influenced by positive or negative regulation, depending on which of two types of regulatory proteins, called transcription factors, binds to the DNA sequence close to or at a promoter. A cluster of genes that share the same promoter is called an operon, found mainly in prokaryotes and some lower eukaryotes (e.g., Caenorhabditis elegans). In positive regulation of gene expression, the activator is the transcription factor that stimulates transcription when it binds to the sequence near or at the promoter. Negative regulation occurs when another transcription factor, called a repressor, binds to a DNA sequence called an operator, which is part of an operon, to prevent transcription. Repressors can be inhibited by compounds called inducers (e.g., allolactose), thereby allowing transcription to occur. Specific genes that can be activated by inducers are called inducible genes, in contrast to constitutive genes that are almost constantly active. In contrast to both, structural genes encode proteins that are not involved in gene regulation. In addition to regulatory events involving the promoter, gene expression can also be regulated by epigenetic changes to chromatin, which is a complex of DNA and protein found in eukaryotic cells. === Genes, development, and evolution === Development is the process by which a multicellular organism (plant or animal) goes through a series of changes, starting from a single cell, and taking on various forms that are characteristic of its life cycle. There are four key processes that underlie development: determination, differentiation, morphogenesis, and growth. Determination sets the developmental fate of a cell, which becomes more restrictive during development. Differentiation is the process by which specialized cells arise from less specialized cells such as stem cells. Stem cells are undifferentiated or partially differentiated cells that can differentiate into various types of cells and proliferate indefinitely to produce more of the same stem cell. Cellular differentiation dramatically changes a cell's size, shape, membrane potential, metabolic activity, and responsiveness to signals, changes that are largely due to highly controlled modifications in gene expression and epigenetics. With a few exceptions, cellular differentiation almost never involves a change in the DNA sequence itself. Thus, different cells can have very different physical characteristics despite having the same genome. Morphogenesis, or the development of body form, is the result of spatial differences in gene expression. A small fraction of the genes in an organism's genome, called the developmental-genetic toolkit, control the development of that organism. These toolkit genes are highly conserved among phyla, meaning that they are ancient and very similar in widely separated groups of animals. Differences in deployment of toolkit genes affect the body plan and the number, identity, and pattern of body parts. Among the most important toolkit genes are the Hox genes. Hox genes determine where repeating parts, such as the many vertebrae of snakes, will grow in a developing embryo or larva. == Evolution == === Evolutionary processes === Evolution is a central organizing concept in biology.
It is the change in heritable characteristics of populations over successive generations. In artificial selection, animals were selectively bred for specific traits. Given that traits are inherited, that populations contain a varied mix of traits, and that reproduction is able to increase any population, Darwin argued that in the natural world it was nature that played the role of humans in selecting for specific traits. Darwin inferred that individuals who possessed heritable traits better adapted to their environments are more likely to survive and produce more offspring than other individuals. He further inferred that this would lead to the accumulation of favorable traits over successive generations, thereby increasing the match between the organisms and their environment. === Speciation === A species is a group of organisms that mate with one another, and speciation is the process by which one lineage splits into two lineages as a result of having evolved independently from each other. For speciation to occur, there has to be reproductive isolation. Reproductive isolation can result from incompatibilities between genes, as described by the Bateson–Dobzhansky–Muller model. Reproductive isolation also tends to increase with genetic divergence. Speciation can occur when there are physical barriers that divide an ancestral species, a process known as allopatric speciation. === Phylogeny === A phylogeny is an evolutionary history of a specific group of organisms or their genes. It can be represented using a phylogenetic tree, a diagram showing lines of descent among organisms or their genes. Each line drawn on the time axis of a tree represents a lineage of descendants of a particular species or population. When a lineage divides into two, it is represented as a fork or split on the phylogenetic tree. Phylogenetic trees are the basis for comparing and grouping different species. Different species that share a feature inherited from a common ancestor are described as having homologous features (or synapomorphies). Phylogeny provides the basis of biological classification. This classification system is rank-based, with the highest rank being the domain, followed by kingdom, phylum, class, order, family, genus, and species. All organisms can be classified as belonging to one of three domains: Archaea (originally Archaebacteria), Bacteria (originally Eubacteria), or Eukarya (which includes the fungi, plant, and animal kingdoms). === History of life === The history of life on Earth traces how organisms have evolved from the earliest emergence of life to the present day. Earth formed about 4.5 billion years ago, and all life on Earth, both living and extinct, descended from a last universal common ancestor that lived about 3.5 billion years ago. Geologists have developed a geologic time scale that divides the history of the Earth into major divisions, starting with four eons (Hadean, Archean, Proterozoic, and Phanerozoic), the first three of which are collectively known as the Precambrian, which lasted approximately 4 billion years. Each eon can be divided into eras, with the Phanerozoic eon, which began 539 million years ago, being subdivided into the Paleozoic, Mesozoic, and Cenozoic eras. These three eras together comprise eleven periods (Cambrian, Ordovician, Silurian, Devonian, Carboniferous, Permian, Triassic, Jurassic, Cretaceous, Tertiary, and Quaternary). The similarities among all known present-day species indicate that they have diverged through the process of evolution from their common ancestor.
Biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. Microbial mats of coexisting bacteria and archaea were the dominant form of life in the early Archean eon, and many of the major steps in early evolution are thought to have taken place in this environment. The earliest evidence of eukaryotes dates from 1.85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. Later, around 1.7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. Algae-like multicellular land plants are dated to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2.7 billion years ago. Microorganisms are thought to have paved the way for the inception of land plants in the Ordovician period. Land plants were so successful that they are thought to have contributed to the Late Devonian extinction event. The Ediacara biota appeared during the Ediacaran period, while vertebrates, along with most other modern phyla, originated about 525 million years ago during the Cambrian explosion. During the Permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the Permian–Triassic extinction event 252 million years ago. During the recovery from this catastrophe, archosaurs became the most abundant land vertebrates; one archosaur group, the dinosaurs, dominated the Jurassic and Cretaceous periods. After the Cretaceous–Paleogene extinction event 66 million years ago killed off the non-avian dinosaurs, mammals increased rapidly in size and diversity. Such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. == Diversity == === Bacteria and Archaea === Bacteria constitute a large domain of prokaryotic microorganisms. Typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. Bacteria were among the first life forms to appear on Earth, and are present in most of its habitats. Bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the Earth's crust. Bacteria also live in symbiotic and parasitic relationships with plants and animals. Most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. Archaea constitute the other domain of prokaryotic cells and were initially classified as bacteria, receiving the name archaebacteria (in the Archaebacteria kingdom), a term that has fallen out of use. Archaeal cells have unique properties separating them from the other two domains, Bacteria and Eukaryota. Archaea are further divided into multiple recognized phyla. Archaea and bacteria are generally similar in size and shape, although a few archaea have very different shapes, such as the flat and square cells of Haloquadratum walsbyi. Despite this morphological similarity to bacteria, archaea possess genes and several metabolic pathways that are more closely related to those of eukaryotes, notably for the enzymes involved in transcription and translation. Other aspects of archaeal biochemistry are unique, such as their reliance on ether lipids in their cell membranes, including archaeols.
Archaea use more energy sources than eukaryotes: these range from organic compounds, such as sugars, to ammonia, metal ions, or even hydrogen gas. Salt-tolerant archaea (the Haloarchaea) use sunlight as an energy source, and other species of archaea fix carbon, but unlike plants and cyanobacteria, no known species of archaea does both. Archaea reproduce asexually by binary fission, fragmentation, or budding; unlike bacteria, no known species of archaea forms endospores. The first observed archaea were extremophiles, living in extreme environments, such as hot springs and salt lakes with no other organisms. Improved molecular detection tools led to the discovery of archaea in almost every habitat, including soil, oceans, and marshlands. Archaea are particularly numerous in the oceans, and the archaea in plankton may be one of the most abundant groups of organisms on the planet. Archaea are a major part of Earth's life. They are part of the microbiota of all organisms. In the human microbiome, they are important in the gut, mouth, and on the skin. Their morphological, metabolic, and geographical diversity permits them to play multiple ecological roles: carbon fixation, nitrogen cycling, organic compound turnover, and maintaining microbial symbiotic and syntrophic communities, for example. === Eukaryotes === Eukaryotes are hypothesized to have split from archaea, which was followed by their endosymbioses with bacteria (or symbiogenesis) that gave rise to mitochondria and chloroplasts, both of which are now part of modern-day eukaryotic cells. The major lineages of eukaryotes diversified in the Precambrian about 1.5 billion years ago and can be classified into eight major clades: alveolates, excavates, stramenopiles, plants, rhizarians, amoebozoans, fungi, and animals. Five of these clades are collectively known as protists, which are mostly microscopic eukaryotic organisms that are not plants, fungi, or animals. While it is likely that protists share a common ancestor (the last eukaryotic common ancestor), protists by themselves do not constitute a separate clade, as some protists may be more closely related to plants, fungi, or animals than they are to other protists. Like groupings such as algae, invertebrates, or protozoans, the protist grouping is not a formal taxonomic group but is used for convenience. Most protists are unicellular; these are called microbial eukaryotes. Plants are mainly multicellular organisms, predominantly photosynthetic eukaryotes of the kingdom Plantae, which would exclude fungi and some algae. Plant cells were derived by endosymbiosis of a cyanobacterium into an early eukaryote about one billion years ago, which gave rise to chloroplasts. The first several clades that emerged following primary endosymbiosis were aquatic, and most of the aquatic photosynthetic eukaryotic organisms are collectively described as algae, which is a term of convenience, as not all algae are closely related. Algae comprise several distinct clades, such as the glaucophytes, which are microscopic freshwater algae that may have resembled the early unicellular ancestor of Plantae in form. Unlike glaucophytes, the other algal clades, such as red and green algae, are multicellular. Green algae comprise three major clades: chlorophytes, coleochaetophytes, and stoneworts. Fungi are eukaryotes that digest foods outside their bodies, secreting digestive enzymes that break down large food molecules before absorbing them through their cell membranes.
Many fungi are also saprobes, feeding on dead organic matter, making them important decomposers in ecological systems. Animals are multicellular eukaryotes. With few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million animal species in total. They have complex interactions with each other and their environments, forming intricate food webs. === Viruses === Viruses are submicroscopic infectious agents that replicate inside the cells of organisms. Viruses infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea. More than 6,000 virus species have been described in detail. Viruses are found in almost every ecosystem on Earth and are the most numerous type of biological entity. The origins of viruses in the evolutionary history of life are unclear: some may have evolved from plasmids—pieces of DNA that can move between cells—while others may have evolved from bacteria. In evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction. Because viruses possess some but not all characteristics of life, they have been described as "organisms at the edge of life", and as self-replicators. == Ecology == Ecology is the study of the distribution and abundance of life and of the interactions between organisms and their environment. === Ecosystems === The community of living (biotic) organisms in conjunction with the nonliving (abiotic) components (e.g., water, light, radiation, temperature, humidity, atmosphere, acidity, and soil) of their environment is called an ecosystem. These biotic and abiotic components are linked together through nutrient cycles and energy flows. Energy from the sun enters the system through photosynthesis and is incorporated into plant tissue. By feeding on plants and on one another, animals move matter and energy through the system. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and other microbes. === Populations === A population is a group of organisms of the same species that occupies an area and reproduces from generation to generation. Population size can be estimated by multiplying population density by the area or volume. The carrying capacity of an environment is the maximum population size of a species that can be sustained by that specific environment, given the food, habitat, water, and other resources that are available. The carrying capacity of a population can be affected by changing environmental conditions, such as changes in the availability of resources and the cost of maintaining them. In human populations, new technologies such as the Green Revolution have helped increase the Earth's carrying capacity for humans over time, which has confounded attempted predictions of impending population decline, the most famous of which was made by Thomas Malthus in the 18th century.
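The relationship between population growth and carrying capacity described above is commonly captured by the logistic model, in which per-capita growth slows as the population approaches the carrying capacity. The following minimal Python sketch uses invented parameter values purely for illustration.

```python
# Discrete-time logistic growth: growth slows as the population N
# approaches the carrying capacity K. All parameter values below
# are invented for illustration only.

def logistic_step(n, r, k):
    # One step of dN/dt = r * N * (1 - N/K)
    return n + r * n * (1 - n / k)

population = 10.0   # initial population size
r = 0.3             # intrinsic per-capita growth rate
k = 1000.0          # carrying capacity of the environment

for generation in range(60):
    population = logistic_step(population, r, k)

print(round(population))  # approaches but does not exceed K (~1000)
```

In this picture, innovations such as the Green Revolution act by raising the effective carrying capacity K rather than by changing the growth law itself.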
=== Communities === A community is a group of populations of species occupying the same geographical area at the same time. A biological interaction is the effect that a pair of organisms living together in a community have on each other. They can be either of the same species (intraspecific interactions) or of different species (interspecific interactions). These effects may be short-term, like pollination and predation, or long-term; both often strongly influence the evolution of the species involved. A long-term interaction is called a symbiosis. Symbioses range from mutualism, beneficial to both partners, to competition, harmful to both partners. Every species participates as a consumer, resource, or both in consumer–resource interactions, which form the core of food chains or food webs. There are different trophic levels within any food web, with the lowest level being the primary producers (or autotrophs), such as plants and algae, that convert energy and inorganic material into organic compounds, which can then be used by the rest of the community. At the next level are the heterotrophs, which are the species that obtain energy by breaking apart organic compounds from other organisms. Heterotrophs that consume plants are primary consumers (or herbivores), whereas heterotrophs that consume herbivores are secondary consumers (or carnivores). Those that eat secondary consumers are tertiary consumers, and so on. Omnivorous heterotrophs are able to consume at multiple levels. Finally, there are decomposers that feed on the waste products or dead bodies of organisms. On average, the total amount of energy incorporated into the biomass of a trophic level per unit of time is about one-tenth of the energy of the trophic level that it consumes. Waste and dead material used by decomposers, as well as heat lost from metabolism, make up the other ninety percent of energy that is not consumed by the next trophic level.
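The roughly ten-percent transfer efficiency compounds multiplicatively up a food chain, which a short calculation makes vivid; the starting energy value below is an arbitrary example, not a measured quantity.

```python
# The ~10% rule: each trophic level retains roughly one tenth of the
# energy of the level below it. The starting value is arbitrary.

producer_energy = 10_000.0  # e.g., joules fixed by plants (example figure)
efficiency = 0.10           # approximate transfer efficiency per level

levels = ["producers", "herbivores", "carnivores", "top carnivores"]
energy = producer_energy
for level in levels:
    print(f"{level:>14}: {energy:>8.1f}")  # 10000 -> 1000 -> 100 -> 10
    energy *= efficiency
```

After only three transfers, roughly a thousandth of the original energy remains, which is one reason food chains rarely extend beyond four or five links.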
=== Biosphere === In the global ecosystem or biosphere, matter exists as different interacting compartments, which can be biotic or abiotic as well as accessible or inaccessible, depending on their forms and locations. For example, matter from terrestrial autotrophs is both biotic and accessible to other organisms, whereas the matter in rocks and minerals is abiotic and inaccessible. A biogeochemical cycle is a pathway by which specific elements of matter are turned over or moved through the biotic (biosphere) and the abiotic (lithosphere, atmosphere, and hydrosphere) compartments of Earth. There are biogeochemical cycles for nitrogen, carbon, and water. === Conservation === Conservation biology is the study of the conservation of Earth's biodiversity with the aim of protecting species, their habitats, and ecosystems from excessive rates of extinction and the erosion of biotic interactions. It is concerned with factors that influence the maintenance, loss, and restoration of biodiversity, and with the science of sustaining evolutionary processes that engender genetic, population, species, and ecosystem diversity. The concern stems from estimates suggesting that up to 50% of all species on the planet will disappear within the next 50 years, which would contribute to poverty and starvation and would reset the course of evolution on this planet. Biodiversity affects the functioning of ecosystems, which provide a variety of services upon which people depend. Conservation biologists research and educate on the trends of biodiversity loss, species extinctions, and the negative effect these are having on our capabilities to sustain the well-being of human society. Organizations and citizens are responding to the current biodiversity crisis through conservation action plans that direct research, monitoring, and education programs that engage concerns at local through global scales. == See also == == References == == Further reading == == External links == OSU's Phylocode; Biology Online – Wiki Dictionary; MIT video lecture series on biology; OneZoom Tree of Life; Journal of the History of Biology (springer.com). Journal links: PLOS ONE; PLOS Biology – a peer-reviewed, open-access journal published by the Public Library of Science; Current Biology – a general journal publishing original research from all areas of biology; Biology Letters – a high-impact Royal Society journal publishing peer-reviewed biology papers of general interest; Science – internationally renowned AAAS science journal (see the life sciences sections); International Journal of Biological Sciences – a biological journal publishing significant peer-reviewed scientific papers; Perspectives in Biology and Medicine – an interdisciplinary scholarly journal publishing essays of broad relevance.
Wikipedia/Biological_science
Endoscopic optical coherence tomography, also known as intravascular optical coherence tomography, is a catheter-based imaging application of optical coherence tomography (OCT). It is capable of acquiring high-resolution images from inside a blood vessel using optical fibers and laser technology. One of its main applications is for coronary arteries, which are often treated by endoscopic, minimally invasive surgical procedures. Other applications for peripheral arteries and for neurovascular procedures have been proposed and are being investigated. Neurovascular applications required significant technological developments, due to the highly tortuous anatomy of the cerebrovasculature. Intravascular OCT rapidly creates three-dimensional images at a resolution of approximately 15 micrometers, an improved resolution with respect to intravascular ultrasound and coronary angiography, the other available imaging techniques. This offers additional information that can be used to optimize the treatment and management of vascular disease. == Theory == OCT is analogous to medical ultrasound, measuring the backreflection of infrared light rather than sound. The time for light to be reflected back from the tissue under inspection is used to measure distances. However, due to the high speed of light, the backreflection time cannot be measured directly, but is instead measured using interferometry. OCT is measured using either time-domain (TD-OCT) or frequency-domain (FD-OCT) techniques. Commercially available coronary OCT technology is based on frequency-domain techniques, resulting in rapid acquisition procedures (1 to 2 seconds). Intracoronary OCT uses near-infrared light at 1300 nm and can visualize the microstructure of the arterial wall, its size, and therapeutic devices with high accuracy. == History == Intravascular OCT was developed for the imaging of arterial disease at a resolution higher than that of the other available techniques, such as x-ray angiography and intravascular ultrasound. OCT allows assessment of atherosclerotic plaque characteristics at a resolution of approximately 15 μm (or better) and found applications in the guidance of catheter-based coronary interventions (i.e., percutaneous coronary interventions). The first report of endoscopic OCT appeared in 1997 in the journal Science, exploring various applications including gastroenterology and airways. The first intravascular in vivo use in a preclinical model was reported in 1994, and the first in-human clinical imaging in 2003. The first OCT imaging catheter and system was commercialized by LightLab Imaging, Inc., a company based in Massachusetts formed following a technology transfer in 1997 from Fujimoto's lab (MIT). Early on, time-domain OCT technology required slow acquisitions (>10 seconds long), requiring the use of balloon-occlusion techniques to displace the blood, which is opaque to near-infrared light, from the arterial lumen. This prevented broader adoption for several years. Around 2008–2009, the advent of rapid swept-source lasers allowed for the development of intravascular Fourier-domain OCT (FD-OCT). This enabled, for the first time, rapid acquisition of a long coronary segment in a couple of seconds, allowing brief, non-occlusive contrast injections to clear the arterial lumen of blood. Initial demonstrations of FD-OCT for coronary imaging were achieved in 2008–2009, which significantly accelerated clinical adoption starting in 2009.
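As a rough numerical illustration of the frequency-domain principle described in the Theory section, the sketch below simulates a single FD-OCT depth scan (A-scan): a reflector at depth z imprints a cosine modulation cos(2kz) on the detected spectrum as a function of wavenumber k, and a Fourier transform over k recovers the depth. All parameter values are invented for illustration and do not model any real catheter or console.

```python
# Toy frequency-domain OCT (FD-OCT) A-scan: a single reflector at depth z0
# modulates the source spectrum as cos(2*k*z0), where k is the angular
# wavenumber; an FFT over k recovers the depth. Illustrative values only.
import numpy as np

n_samples = 2048
k = np.linspace(4.6e6, 5.0e6, n_samples)   # angular wavenumbers (1/m), ~1300 nm band
z0 = 1.0e-3                                # reflector depth: 1 mm

# Interference term of the detected spectrum (reference arm + one reflector)
spectrum = np.cos(2 * k * z0)

# FFT over k: the peak's cycle frequency f (cycles per unit k) equals z0/pi,
# so depth = pi * f.
a_scan = np.abs(np.fft.rfft(spectrum))
dk = k[1] - k[0]
depth_axis = np.fft.rfftfreq(n_samples, d=dk) * np.pi

peak_depth = depth_axis[np.argmax(a_scan[1:]) + 1]   # skip the DC bin
print(f"recovered depth: {peak_depth * 1e3:.2f} mm")  # ~1.0 mm
```

The same transform applied to a spectrum containing many reflectors yields the full depth profile at once, which is why frequency-domain acquisition can be dramatically faster than the time-domain approach.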
== Cardiovascular applications == Following regulatory clearances of fast-acquisition Fourier-domain OCT in the major geographies between 2009 and 2012, the use of intracoronary OCT rapidly increased. It is used to help diagnose coronary disease, plan the intervention, assess procedural results, and prevent complications. In the last decade, the clinical benefits of coronary OCT have been systematically investigated. Several studies have linked the use of intravascular imaging such as IVUS and OCT to better stent expansion, a metric strongly correlated with better clinical outcomes in patients suffering from coronary artery disease and myocardial infarction. Larger randomized clinical trials have been undertaken. In 2023, a double-blind prospective trial demonstrated improvement in morbidity and mortality in coronary bifurcation interventions: "Among patients with complex coronary-artery bifurcation lesions, OCT-guided PCI was associated with a lower incidence of MACE at 2 years than angiography-guided PCI." Although not every study showed significant results, to date several studies have demonstrated the benefits in patient outcomes of using intravascular imaging during coronary artery interventions. The use of intravascular imaging for coronary intervention is reported in the current cardiology guidelines. Data published in late 2016 showed that over 150,000 intracoronary optical coherence tomography procedures are performed every year, and its adoption is rapidly growing at a rate of ~10-20% every year. Assessment of artery lumen morphology is the cornerstone of intravascular imaging criteria to evaluate disease severity and guide intervention. The high resolution of OCT imaging allows highly accurate assessment of vessel lumen area, wall microstructure, and intracoronary stent apposition and expansion. OCT has an improved ability with respect to intravascular ultrasound to penetrate and delineate calcium in the vessel wall, which makes it well suited to guide complex interventional strategies in vessels with superficial calcification. OCT is also capable of visualizing coronary plaque erosion and the fibrous caps overlying lipid plaques. == Neurovascular applications == In the last decade, significant advances have been made in the endovascular treatment of stroke, including brain aneurysms, intracranial atherosclerosis and ischemic stroke. Intravascular OCT has been proposed as a key technology that can improve current procedures and treatments. However, current intracoronary OCT catheters are not designed for navigation and reliable imaging of tortuous cerebrovascular arteries. Recently, different (wire-like) OCT catheters specifically designed for the human cerebrovasculature have been proposed, named neuro optical coherence tomography (nOCT). A first clinical study to investigate safety, feasibility, and clinical potential has been conducted. Initial applications for the treatment of brain aneurysms and intracranial atherosclerosis have been demonstrated, showing future potential. == Technology == The most critical technological advances were the catheter and the development of fast wavelength-sweeping near-infrared lasers. The fiber optic catheter/endoscope required rapid alignment of two optical fibers with 8 μm cores (one rotating) across free space. The distal end has a focusing component (typically a GRIN or ball lens).
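Since the rotating fiber is combined with a motorized pullback, the acquisition is a helical scan whose frame rate and frame spacing follow directly from the A-line rate. A rough sketch with assumed, illustrative parameters (not the specification of any particular catheter), consistent with the acquisition speeds quoted below:

```python
def helical_scan_parameters(a_line_rate_hz: float,
                            a_lines_per_frame: int,
                            pullback_speed_mm_s: float,
                            segment_length_mm: float):
    """Derive frame rate, frame spacing (helical pitch), and total
    acquisition time for a rotational pullback OCT scan."""
    frame_rate = a_line_rate_hz / a_lines_per_frame      # rotations per second
    pitch_mm = pullback_speed_mm_s / frame_rate          # spacing between frames
    scan_time_s = segment_length_mm / pullback_speed_mm_s
    return frame_rate, pitch_mm, scan_time_s

# Assumed values: 80 kHz A-line rate, 500 A-lines per frame,
# 36 mm/s pullback over a 75 mm coronary segment.
fps, pitch, t = helical_scan_parameters(80_000, 500, 36.0, 75.0)
print(f"{fps:.0f} frames/s, {pitch * 1000:.0f} µm frame spacing, {t:.1f} s scan")
# ≈ 160 frames/s, ≈ 225 µm spacing, ≈ 2.1 s — consistent with the
# "few seconds" acquisitions described in the next paragraph.
```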
State-of-the-art intracoronary optical coherence tomography uses a swept-source laser to make OCT images at high speed (approximately 80,000 A-scan lines per second, i.e., an 80 kHz A-line rate) and complete the acquisition of a 3D OCT volume of a coronary segment in a few seconds. The first intravascular FD-OCT was introduced to the market in 2009 (EU and Asia) and in 2012 (US). As of 2018, two intracoronary OCT catheters were clinically available for use in the coronary arteries, with diameters between 2.4 F and 2.7 F. The axial resolution of state-of-the-art commercial systems is less than 20 micrometers, which is decoupled from the catheter's lateral resolution. The high resolution of OCT allows for the in vivo imaging of vessel microstructural features at an unprecedented level, enabling visualization of vessel wall atherosclerosis, pathology, and interaction with therapeutic devices at a microscopic level. Recent developments include the combination of OCT with spectroscopy and fluorescence in a single imaging catheter and the miniaturization of the imaging catheter. == Safety == The safety of intravascular imaging, including intracoronary OCT and intravascular ultrasound, has been investigated by several studies. Recent clinical trials reported a very low rate of self-limiting, minor complications in over 3,000 patients, where in all cases no harm or prolongation of hospital stay was observed. Intracoronary optical coherence tomography was demonstrated to be safe among heterogeneous groups of patients presenting in varying clinical settings. == See also == Fractional flow reserve Intravascular fluorescence == References ==
Wikipedia/Intravascular_optical_coherence_tomography
A photonic integrated circuit (PIC) or integrated optical circuit is a microchip containing two or more photonic components that form a functioning circuit. This technology detects, generates, transports, and processes light. Photonic integrated circuits use photons (or particles of light) as opposed to the electrons used by electronic integrated circuits. The major difference between the two is that a photonic integrated circuit provides functions for information signals imposed on optical wavelengths, typically in the visible spectrum or near-infrared (850–1650 nm). One of the most commercially utilized material platforms for photonic integrated circuits is indium phosphide (InP), which allows for the integration of various optically active and passive functions on the same chip. Initial examples of photonic integrated circuits were simple 2-section distributed Bragg reflector (DBR) lasers, consisting of two independently controlled device sections—a gain section and a DBR mirror section. Consequently, all modern monolithic tunable lasers, widely tunable lasers, externally modulated lasers and transmitters, integrated receivers, etc. are examples of photonic integrated circuits. As of 2012, devices integrate hundreds of functions onto a single chip. Pioneering work in this arena was performed at Bell Laboratories. The most notable academic centers of excellence for photonic integrated circuits in InP are the University of California at Santa Barbara in the USA, the Eindhoven University of Technology, and the University of Twente in the Netherlands. A 2005 development showed that silicon, even though it is an indirect bandgap material, can still be used to generate laser light via the Raman nonlinearity. Such lasers are not electrically driven but optically driven, and therefore still require a separate optical pump laser source. == History == Photonics is the science behind the detection, generation, and manipulation of photons. According to quantum mechanics and the concept of wave–particle duality first proposed by Albert Einstein in 1905, light acts as both an electromagnetic wave and a particle. For example, total internal reflection in an optical fibre allows it to act as a waveguide. Integrated circuits using electrical components were first developed in the late 1940s and early 1950s, but it took until 1958 for them to become commercially available. When the laser and laser diode were invented in the 1960s, the term "photonics" fell into more common usage to describe the application of light to replace applications previously achieved through the use of electronics. By the 1980s, photonics gained traction through its role in fibre-optic communication. At the start of the decade, an assistant in a new research group at Delft University of Technology, Meint Smit, started pioneering work in the field of integrated photonics. He is credited with inventing the arrayed waveguide grating (AWG), a core component of modern digital connections for the Internet and phones. Smit has received several awards, including an ERC Advanced Grant, a Rank Prize for Optoelectronics and a LEOS Technical Achievement Award. In October 2022, during an experiment held at the Technical University of Denmark in Copenhagen, a photonic chip transmitted 1.84 petabits per second of data over a fibre-optic cable more than 7.9 kilometres long. First, the data stream was split into 37 sections, each of which was sent down a separate core of the fibre-optic cable.
Next, each of these channels was split into 223 parts corresponding to equidistant spikes of light across the spectrum. == Comparison to electronic integration == Unlike electronic integration, where silicon is the dominant material, photonic integrated circuits have been fabricated from a variety of material systems, including electro-optic crystals such as lithium niobate, silica on silicon, silicon on insulator, various polymers, and semiconductor materials which are used to make semiconductor lasers, such as GaAs and InP. The different material systems are used because they each provide different advantages and limitations depending on the function to be integrated. For instance, silica (silicon dioxide) based PICs have very desirable properties for passive photonic circuits such as AWGs (see below) due to their comparatively low losses and low thermal sensitivity; GaAs- or InP-based PICs allow the direct integration of light sources; and silicon PICs enable co-integration of the photonics with transistor-based electronics. The fabrication techniques are similar to those used in electronic integrated circuits, in which photolithography is used to pattern wafers for etching and material deposition. Unlike electronics, where the primary device is the transistor, there is no single dominant photonic device. The range of devices required on a chip includes low-loss interconnect waveguides, power splitters, optical amplifiers, optical modulators, filters, lasers and detectors. These devices require a variety of different materials and fabrication techniques, making it difficult to realize all of them on a single chip. Newer techniques using resonant photonic interferometry are making way for UV LEDs to be used for optical computing requirements at much lower cost, leading the way to petahertz consumer electronics. == Examples of photonic integrated circuits == The primary application for photonic integrated circuits is in the area of fiber-optic communication, though applications in other fields such as biomedical applications and photonic computing are also possible. Arrayed waveguide gratings (AWGs), which are commonly used as optical (de)multiplexers in wavelength division multiplexed (WDM) fiber-optic communication systems, are an example of a photonic integrated circuit that has replaced previous multiplexing schemes utilizing multiple discrete filter elements. Since separating optical modes is a requirement for quantum computing, this technology may be helpful to miniaturize quantum computers (see linear optical quantum computing). Another example of a photonic integrated chip in wide use today in fiber-optic communication systems is the externally modulated laser (EML), which combines a distributed feedback laser diode with an electro-absorption modulator on a single InP-based chip. == Applications == As global data consumption rises and demand for faster networks continues to grow, the world needs to find more sustainable solutions to the energy crisis and climate change. At the same time, ever more innovative applications for sensor technology, such as Lidar in autonomous driving vehicles, appear on the market. There is a need to keep pace with technological challenges. The expansion of 5G data networks and data centres, safer autonomous driving vehicles, and more efficient food production cannot be sustainably met by electronic microchip technology alone.
However, combining electrical devices with integrated photonics provides a more energy-efficient way to increase the speed and capacity of data networks, reduce costs and meet an increasingly diverse range of needs across various industries. === Data and telecommunications === The primary application for PICs is in the area of fibre-optic communication. Arrayed waveguide gratings (AWGs), which are commonly used as optical (de)multiplexers in wavelength division multiplexed (WDM) fibre-optic communication systems, are an example of a photonic integrated circuit. Another example in fibre-optic communication systems is the externally modulated laser (EML), which combines a distributed feedback laser diode with an electro-absorption modulator. PICs can also increase bandwidth and data transfer speeds by deploying few-mode optical planar waveguides, especially if modes can be easily converted from conventional single-mode planar waveguides into few-mode waveguides and the desired modes selectively excited. For example, a bidirectional spatial mode slicer and combiner can be used to achieve the desired higher- or lower-order modes. Its principle of operation depends on cascading stages of V-shape and/or M-shape graded-index planar waveguides. Not only can PICs increase bandwidth and data transfer speeds, but they can also reduce energy consumption in data centres, which spend a large proportion of energy on cooling servers. === Healthcare and medicine === Using advanced biosensors and creating more affordable diagnostic biomedical instruments, integrated photonics opens the door to lab-on-a-chip (LOC) technology, cutting waiting times and taking diagnosis out of laboratories and into the hands of doctors and patients. Based on an ultrasensitive photonic biosensor, SurfiX Diagnostics' platform provides a variety of point-of-care tests. Similarly, Amazec Photonics has developed a fibre-optic sensing technology with photonic chips which enables high-resolution temperature sensing (fractions of 0.1 millikelvin) without having to insert the temperature sensor into the body. This way, medical specialists are able to measure both cardiac output and circulating blood volume from outside the body. Another example of optical sensor technology is EFI's "OptiGrip" device, which offers greater tactile control over tissue for minimally invasive surgery. === Automotive and engineering applications === PICs can be applied in sensor systems, like Lidar (which stands for light detection and ranging), to monitor the surroundings of vehicles. They can also be deployed for in-car connectivity through Li-Fi, which is similar to WiFi but uses light. This technology facilitates communication between vehicles and urban infrastructure to improve driver safety. For example, some modern vehicles pick up traffic signs and remind the driver of the speed limit. In terms of engineering, fibre-optic sensors can be used to detect different quantities, such as pressure, temperature, vibrations, accelerations, and mechanical strain. Sensing technology from PhotonFirst uses integrated photonics to measure things like shape changes in aeroplanes, electric vehicle battery temperature, and infrastructure strain. === Agriculture and food === Sensors play a role in innovations in agriculture and the food industry in order to reduce wastage and detect diseases.
Light sensing technology powered by PICs can measure variables beyond the range of the human eye, allowing the food supply chain to detect disease, ripeness and nutrients in fruit and plants. It can also help food producers to determine soil quality and plant growth, as well as measure CO2 emissions. A new, miniaturised, near-infrared sensor, developed by MantiSpectra, is small enough to fit into a smartphone and can be used to analyse the chemical composition of products like milk and plastics. === AI applications === In 2025, researchers at Columbia Engineering developed a 3D photonic-electronic chip that could significantly improve AI hardware. By combining light-based data movement with CMOS electronics, this chip addressed AI's energy and data transfer bottlenecks, improving both efficiency and bandwidth. The breakthrough allowed for high-speed, energy-efficient data communication, enabling AI systems to process vast amounts of data with minimal power. With a bandwidth of 800 Gb/s and a density of 5.3 Tb/s/mm², this technology offered major advances for AI, autonomous vehicles, and high-performance computing. == Types of fabrication and materials == The fabrication techniques are similar to those used in electronic integrated circuits, in which photolithography is used to pattern wafers for etching and material deposition. The platforms considered most versatile are indium phosphide (InP) and silicon photonics (SiPh): Indium phosphide (InP) PICs offer active laser generation, amplification, control, and detection. This makes them an ideal component for communication and sensing applications. Silicon nitride (SiN) PICs have a vast spectral range and ultra-low-loss waveguides. This makes them highly suited to detectors, spectrometers, biosensors, and quantum computers. The lowest propagation losses reported in SiN (0.1 dB/cm down to 0.1 dB/m) have been achieved by LioniX International's TriPleX waveguides. Silicon photonics (SiPh) PICs provide low losses for passive components like waveguides and can be used in minuscule photonic circuits. They are compatible with existing electronic fabrication. The term "silicon photonics" actually refers to the technology rather than the material. It combines high-density photonic integrated circuits (PICs) with complementary metal oxide semiconductor (CMOS) electronics fabrication. The most technologically mature and commercially used platform is silicon on insulator (SOI). Other platforms include: Lithium niobate (LiNbO3), an ideal material for low-loss modulators. It is highly effective at matching fibre input–output due to its low index and broad transparency window. For more complex PICs, lithium niobate can be formed into large crystals. As part of project ELENA, there is a European initiative to stimulate production of LiNbO3 PICs. Attempts are also being made to develop lithium niobate on insulator (LNOI). Silica has a low weight and small form factor. It is a common component of optical communication networks, such as planar lightwave circuits (PLCs). Gallium arsenide (GaAs) has high electron mobility. This means GaAs transistors operate at high speeds, making them ideal analogue integrated circuit drivers for high-speed lasers and modulators. By combining and configuring different chip types (including existing electronic chips) in a hybrid or heterogeneous integration, it is possible to leverage the strengths of each. Taking this complementary approach to integration addresses the demand for increasingly sophisticated, energy-efficient solutions.
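Propagation losses such as the SiN figures above are quoted in dB per unit length; converting them to a transmitted power fraction over a concrete path length makes the platform differences tangible. A minimal sketch (the 10 cm path is an arbitrary example, e.g., a long on-chip delay line):

```python
def transmitted_fraction(loss_db_per_cm: float, length_cm: float) -> float:
    """Fraction of optical power remaining after propagating `length_cm`
    through a waveguide with the given loss: P/P0 = 10**(-loss * L / 10)."""
    return 10 ** (-loss_db_per_cm * length_cm / 10)

# Compare a 0.1 dB/cm SiN waveguide with a 0.1 dB/m (0.001 dB/cm) one
# over a 10 cm on-chip path.
for loss in (0.1, 0.001):
    frac = transmitted_fraction(loss, 10)
    print(f"{loss} dB/cm over 10 cm -> {frac:.1%} transmitted")
# 0.1 dB/cm   -> ~79.4% transmitted
# 0.001 dB/cm -> ~99.8% transmitted
```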
== Current status == As of 2010, photonic integration was an active topic in U.S. Defense contracts. It was considered by the Optical Internetworking Forum for inclusion in 100 gigabit optical networking standards. A recent study presents a novel two-dimensional photonic crystal design for electro-reflective modulators, offering reduced size and enhanced efficiency compared to traditional bulky structures. This design achieves high optical transmission ratios with precise angle control, addressing critical challenges in miniaturizing optoelectronic devices for improved performance in PICs. In this structure, both lateral and vertical fabrication technologies are combined, introducing a novel approach that merges two-dimensional designs with three-dimensional structures. This hybrid technique offers new possibilities for enhancing the functionality and integration of photonic components within photonic integrated circuits. == See also == Integrated quantum photonics Optical computing Optical transistor Silicon photonics == Notes == == References ==
Wikipedia/Photonic_integrated_circuits
Dual-axis optical coherence tomography (DA-OCT) is an imaging modality that is based on the principles of optical coherence tomography (OCT). These techniques are largely used for medical imaging. OCT is non-invasive and non-contact. It allows for real-time, in situ imaging and provides high image resolution. OCT is analogous to ultrasound but relies on light waves (typically near-infrared), which makes it faster than ultrasound. In general, OCT has proven to be compact and portable. It is compatible with arterial catheters and endoscopes, which helps diagnose diseases within long internal cavities, including the esophagus (Barrett's disease) and coronary arteries (cardiovascular disease). The biggest limitation of traditional OCT is that it relies on detecting ballistic (non-scattered) photons, which can have a mean free path of only 100 microns, or singly backscattered photons. This strongly restricts depth penetration in highly scattering biological tissue and causes an unsatisfactory signal-to-noise ratio (SNR) in deep regions. To overcome this issue, DA-OCT uses angled source and detection components and a tunable lens to create an enhanced depth of focus and improve depth penetration in biological tissue. == Design == === Dual-axis architecture === DA-OCT applies a dual-axis architecture to a spectral-domain OCT system. The objective is to improve the depth of view within biological tissue. Dual-axis architecture with coherence imaging was introduced in the early 2010s. Prior to the development of DA-OCT, the dual-axis design was commonly used with multiple-scattering multispectral low coherence interferometry (ms2/LCI), a technique that also analyzes multiply scattered light to take depth-resolved images from optical scattering media. For this architecture, the light source and detector are tilted at equal and opposite angles to create a dual axis. The slight scattering angle increases the chance of collecting photons that have been scattered within the tissue. The greater the angle of the source and the detector, the deeper the focal zone. However, there is a trade-off: the greater the angle, the smaller the focal zone. Even though the chance of detecting a diffuse photon increases, the size of the imaged region decreases. === Tunable lens === To compensate for the shrinking focal zone, a tunable lens is used. The tunable lens allows dynamic focusing, where the focal zone can be scanned at various tissue depths. The data from different scans are stitched into a single image using an algorithm similar to one used in Gabor-domain optical coherence microscopy (see the sketch below). This forms an enhanced depth of focus, allowing for greater penetration depth within turbid media. === Instrument setup === Light from a broadband supercontinuum laser is filtered to a range of 1240 to 1390 nm and directed into a fiber coupler. The fiber coupler implements an interferometer, the hallmark of OCT, which splits the input light into sample and reference arms. The dual-axis architecture was added to the sample arm, angling both the light coming from the laser source and the light directed at the detector. Changing the angle increases the chance of gathering light scattered at random angles deep in the medium. DA-OCT also uses a micro-electromechanical system (MEMS) mirror for faster beam scanning. This helps decrease the integration time, since DA-OCT has to gather scans at multiple depths to form a single image.
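A minimal sketch of one plausible Gabor-style fusion, in the spirit of (but not necessarily identical to) the stitching algorithm referenced above from Gabor-domain optical coherence microscopy: each acquisition is weighted by a Gaussian centered on its focal depth, so every pixel is dominated by the scan that was in focus there. The window width and focal depths are assumed, illustrative values.

```python
import numpy as np

def gabor_fuse(scans: np.ndarray, focal_depths_px: np.ndarray,
               window_px: float) -> np.ndarray:
    """Fuse co-registered B-scans taken at different focal depths.

    scans:           (n_scans, depth, lateral) array of B-scans
    focal_depths_px: focal-plane depth (in pixels) of each scan
    window_px:       1/e half-width of the Gaussian depth window
    """
    depth_axis = np.arange(scans.shape[1])
    # One Gaussian weight profile per scan, peaked at its focal depth.
    weights = np.exp(-((depth_axis[None, :] - focal_depths_px[:, None])
                       / window_px) ** 2)            # (n_scans, depth)
    weights = weights[:, :, None]                    # broadcast over lateral axis
    return (scans * weights).sum(axis=0) / weights.sum(axis=0)

# Example: three 512x256 B-scans focused at depths 100, 250, and 400 px.
rng = np.random.default_rng(0)
scans = rng.random((3, 512, 256))
fused = gabor_fuse(scans, np.array([100.0, 250.0, 400.0]), window_px=80.0)
print(fused.shape)  # (512, 256)
```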
== Experimental applications == For both DA-OCT and on-axis OCT, Wax's research group imaged samples both with and without the tunable lens. In their results, they referred to DA-OCT with the tunable lens as DA-DOF+ and DA-OCT without the tunable lens as just DA-OCT. (DOF+ indicates "enhanced depth of focus".) The group referred to on-axis OCT with the tunable lens as On-Axis OCT DOF+ and to on-axis OCT without the tunable lens as OCT or On-Axis OCT. For quantitative experiments, contrast-to-noise ratio (CNR) was used as the main metric to determine image quality. They typically imaged a needle inside the scattering media, so CNR was expressed by $CNR = \frac{|\mu_s - \mu_m|}{\sqrt{\sigma_s^2 + \sigma_m^2}}$, where $\mu_s$ is the mean pixel count of the needle profile, $\mu_m$ is the mean pixel count of the surrounding media, and $\sigma_s$ and $\sigma_m$ are the corresponding standard deviations. === Imaging of scattering media === Wax's research group developed Intralipid-based hydrogel phantoms, which were imaged with DA-OCT, On-Axis OCT, and DA-DOF+. To mimic highly forward-scattering biological tissue, one hydrogel phantom had a reduced scattering coefficient of 1.6 mm−1 and an anisotropy of 0.9. The other hydrogel phantom had a near-zero anisotropy value to act as the control. A needle was placed in both hydrogel phantoms to be imaged. In the high-anisotropy case, there was no improvement in the CNR of DA-OCT compared to On-Axis OCT. Comparing DA-DOF+ to On-Axis OCT, there was a 17% increase in CNR. In the low-anisotropy case, there was no significant increase in CNR of DA-OCT over On-Axis OCT, but there was a 31% increase for DA-DOF+ over On-Axis OCT. === In-vivo imaging === Wax's research group also observed a needle's CNR profile at different depths (~0 mm, 1.3 mm, 2.5 mm) within mouse skin. They imaged with On-Axis OCT, DA-OCT, On-Axis OCT DOF+, and DA-DOF+. For larger depths (>1 mm), DA-OCT and DA-DOF+ produced a better CNR than On-Axis OCT and On-Axis OCT DOF+. For example, the group found a 195% increase with DA-OCT versus On-Axis OCT, and a 169% increase with DA-DOF+ versus On-Axis OCT DOF+. DA-OCT and DA-DOF+ did not show strong CNR at shallower depths compared to On-Axis OCT and On-Axis OCT DOF+ because the needle surface was located too far from the system's focal zone. In all cases, the modes with enhanced depth of focus (DOF+) had a significantly better CNR than the corresponding modes without the tunable lens. Overall, the trends match the group's conclusions: DA-DOF+ provides the best CNR at greater depths. === Ex-vivo imaging === The research group led by Wax conducted a couple of qualitative studies. First, they examined ex-vivo porcine ear skin using DA-OCT and traditional OCT. The epidermis appears brighter in the DA-OCT image, whereas it blends into the dermis layer in the traditional OCT image; DA-OCT detected a stronger signal than traditional OCT. Also, the epidermis layer appears thicker in the DA-OCT image, meaning that more multiply scattered photons were detected with DA-OCT than with traditional OCT. The group compared DA-OCT images of injured rat skin to histopathology slides of the same samples. According to the histopathology slides, the base of the rat skin is healthy (the control), while the middle and tip indicate injury and structural damage. The DA-OCT images match these conclusions.
For the healthy base, the DA-OCT image shows homogeneous backscattering intensity. For the middle and tip, the DA-OCT images show regions of inhomogeneous backscattering, which are indicative of tissue necrosis. == See also == Angle-resolved low-coherence interferometry Ballistic photon Interferometry Medical imaging Optical coherence tomography Multiple-scattering low-coherence interferometry Tomography == References ==
Wikipedia/Dual-axis_optical_coherence_tomography
Leica Microsystems GmbH is a German microscope manufacturing company. It is a manufacturer of optical microscopes, equipment for the preparation of microscopic specimens, and related products. There are ten plants in eight countries, with distribution partners in over 100 countries. Leica Microsystems emerged in 1997 out of a 1990 merger between Wild-Leitz, headquartered in Heerbrugg, Switzerland, and Cambridge Instruments of Cambridge, England. The merger of those two umbrella companies created an alliance of the following eight individual manufacturers of scientific instruments: American Optical Scientific Products, Carl Reichert Optische Werke AG, R. Jung, the Bausch and Lomb Optical Scientific Products Division, Cambridge Instruments, E. Leitz Wetzlar, Kern & Co., and Wild Heerbrugg AG, bringing much-needed modernization and a broader degree of expertise to the newly created entity, called the Leica Holding B.V. group. In 1997 the name was changed to Leica Microsystems, and the company has been a wholly owned subsidiary of Danaher Corporation, an American global conglomerate, since July 2005. == Details == The company employed over 4,000 workers and had a $1 billion turnover in 2008. It is headquartered in Wetzlar, Germany, and represented in over 100 other countries. The company manufactures products for applications requiring microscopic imaging, measurement and analysis. It also offers system solutions in the areas of life science, including biotechnology and medicine, as well as the science of raw materials and industrial quality assurance. Product categories include virtual microscopes, light microscopes, products for confocal microscopy, surgical microscopes, stereo microscopes and macroscopes, digital microscopes, microscope software, microscope cameras, and electron microscope sample preparation equipment. In the field of high-resolution optical microscopy, they produce commercial versions of the STED microscope, offering sub-diffraction resolution. In 2007 they launched the TCS STED, which operates at a resolution <100 nm. In 2009 they launched the TCS STED CW, which uses a CW laser light source and achieves a resolution <80 nm. On 29 September 2011, Leica Microsystems and TrueVision 3D Surgical announced their intention to jointly produce products that will improve microsurgery outcomes in ophthalmology and neurosurgery under the Leica brand. == See also == Heinrich Wild Wild Heerbrugg == References ==
Wikipedia/Leica_Microsystems
Terahertz tomography is a class of tomography in which sectional imaging is done by terahertz radiation. Terahertz radiation is electromagnetic radiation with a frequency between 0.1 and 10 THz; it falls between radio waves and light waves on the spectrum and encompasses portions of the millimeter-wave and infrared wavelengths. Because of their high frequency and short wavelength, terahertz waves have a high signal-to-noise ratio in the time-domain spectrum. Tomography using terahertz radiation can image samples that are opaque in the visible and near-infrared regions of the spectrum. Terahertz three-dimensional (3D) imaging technology has developed rapidly since its first successful application in 1997, and a series of new 3D imaging technologies have been proposed successively. == Terahertz imaging == Terahertz imaging has advantages over the more expensive and shorter-range X-ray scanners. A variety of materials are transparent to terahertz radiation, which allows it to measure the thickness, density, and structural properties of materials that are difficult to detect. Since terahertz radiation is not ionizing, its use does not damage living tissue, making terahertz a safe, non-invasive biomedical imaging technique. Moreover, because many materials have a unique spectral signature in the terahertz range, terahertz radiation can be used to identify materials. Terahertz imaging is widely used in the study of semiconductor material properties, biomedical cell imaging, and chemical and biological examination. Terahertz time-domain systems (THz-TDS) have made significant advances in 2D imaging. THz-TDS is able to determine the sample's complex dielectric constant, usually over 0.1–4 THz, and provides information about the static characteristics of the sample over dozens of frequencies. However, this technology has some limitations. For example, due to the lower power of the beam, the sensor must be more sensitive. Low image acquisition speeds may force a tradeoff between time and resolution. === Applications === Terahertz imaging can be useful for luggage and postal mail screening because it can identify substances, such as explosives and illicit drugs, on the basis of their characteristic spectra in this frequency band; for example, several liquid explosives can be distinguished by the change in dielectric response in the terahertz range as a function of alcohol percentage. Although dangerous metal objects, such as knives, can be recognized by their shapes through certain pattern recognition algorithms, it is impossible to see through metallic packages with terahertz waves. Thus, terahertz spectrometers cannot replace X-ray scanners, even though they provide more information than X-ray scanners for low-density materials and chemical separation. Terahertz systems are used for production control in the paper and polymer industries. They can detect thickness and moisture content in paper, and conductive properties, moisture level, fiber orientation and glass-transition temperature in polymers. Terahertz systems facilitate the detection of metallic and nonmetallic contamination in food. For example, terahertz waves made it possible to detect metallic and nonmetallic foreign matter in chocolate bars, since foods with low water content, such as chocolate, are almost transparent in the terahertz band. Terahertz tomography is also useful in the wine and spirits industries for quantifying moisture and analysing cork non-destructively.
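Thickness gauging of the kind described for paper and polymers follows directly from the time-domain signal: in reflection, the delay between the front- and back-surface echoes gives the thickness once the refractive index is known. A minimal sketch (the index and delay values are illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def thickness_from_echo_delay(delay_s: float, refractive_index: float) -> float:
    """Sample thickness from the round-trip delay between the echoes
    reflected off the front and back surfaces: d = c * dt / (2 * n)."""
    return C * delay_s / (2 * refractive_index)

# Illustrative: a 4 ps echo separation in a polymer with n ≈ 1.5 at THz
# frequencies corresponds to a thickness of about 400 µm.
d = thickness_from_echo_delay(4e-12, 1.5)
print(f"thickness ≈ {d * 1e6:.0f} µm")
```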
Terahertz imaging can detect different isomers, which have different spectral fingerprints in the terahertz range; this enables terahertz spectroscopy to distinguish between stereoisomers—a crucial distinction in pharmacy, where one isomer may be the active compound and its enantiomer may be inactive or even dangerous. Terahertz systems are also used for gauging tablet coating quality. Terahertz imaging enables non-destructive analysis of valuable artworks and can be conducted onsite. It can reveal hidden layers via the transmittance of various pigments. It is also being investigated as a tool for 3D visualization. ==== Skin Cancer Imaging ==== Terahertz tomography typically relies on pulsed THz time-domain systems, where short bursts of terahertz radiation are emitted and detected to capture both the amplitude and phase of the transmitted or reflected signal. As the sample rotates (in transmission geometry) or the beam is scanned across the surface (in reflection geometry), a sinogram is built, encoding spatial and spectral information essential for tomographic reconstruction. In the case of skin cancer imaging, reflection geometry allows skin cancer lesions to be encoded across the image area, obtaining multiple data points. Experimental acquisition of skin cancer data often employs quantum cascade lasers (QCLs) and laser feedback interferometry, where the laser functions as both the source and the detector. This approach offers high sensitivity, coherent detection, self-alignment, and high frame rates, while also mitigating the limitations associated with traditional detectors. Once the sinogram is acquired, the data is reconstructed into volumetric images. For weakly absorbing samples, conventional filtered back-projection (FBP) suffices. However, for strongly scattering tissues or limited-view problems, more advanced methods are employed: algebraic reconstruction techniques (ART) or iterative solvers with regularization; compressed sensing approaches exploiting signal sparsity; and deep learning-based inverters, which are increasingly used to accelerate and stabilize reconstructions, especially under noisy or undersampled conditions. A key advantage of this instrumentation is that each pixel is sampled multiple times by varying the position of the sample and/or laser, enabling more accurate data acquisition while requiring less computational power. Data collection becomes considerably simpler once the system is built. There are two ways to collect data. The first is single images, where every pixel is scanned once and assembled into an image; this allows for faster scan times but cannot account for motion blur or other variations. The second is multiple scans of the region, which are then averaged; this accounts for motion blur but takes longer to acquire and process. In both methods the data become four-dimensional, with two dimensions being spatial and the other two being the amplitude and phase of the reflected signal. These methodological and technological advancements are critical for intraoperative imaging in oncology. The high contrast between cancerous and healthy tissue in the terahertz range enables more accurate detection of tumor margins—essential for clean excision, particularly in delicate anatomical regions such as the face, brain, or breast.
By enabling real-time margin assessment, terahertz tomography reduces the likelihood of incomplete resections and reoperations, ultimately improving patient outcomes and conserving healthcare resources. Furthermore, terahertz tomography allows for consistent treatment analysis without the risk implied by constant imaging: because it is an imaging technique utilizing non-ionizing radiation, oncologists can order repeated imaging sessions without risk of radiation damage to patients. THz tomography also reduces the need for repeated surgeries, therefore saving money, reducing recovery time, and significantly improving the patient experience. Terahertz tomography is particularly successful at cancer detection because tissue water content strongly affects the modality's reconstructed image contrast: water molecules have strong absorption and distinctive refractive indices at varying frequencies within the THz region. Because diseased tissues contain more water than healthy tissue, the THz response is much stronger, and the generated image can clearly resolve cancerous tissue from healthy tissue. Other factors that impact the image contrast of terahertz tomography include tissue structure, protein composition, and blood flow. == Methods == Terahertz tomography can be divided into transmission and reflection modes. It acts as an extension of X-ray computed tomography (CT) to a different waveband. It mainly studies the establishment of process models for refraction, reflection and diffraction when terahertz waves pass through samples, which places certain requirements on reconstruction algorithms. Because the terahertz signal reflected from different depths inside the sample experiences different transmission delays, depth information can be obtained by processing the reflected signal, realizing tomography. Terahertz time-of-flight tomography (THz-TOF) and THz optical coherence tomography (THz-OCT) are mainly used in implementation. === THz diffraction tomography === In diffraction tomography, the detection beam interacts with the target, and the resulting scattered waves are used to build a 3D image of the sample. Based on the diffraction effect and the diffraction slice theorem, light is shone on the surface of the scattering object and the reflected signal is recorded to obtain the diffraction field distribution behind the sample, in order to explore the surface shape of the target object. For fine samples with more complex surface structures, diffraction tomography is effective because it can provide the sample's refractive index distribution. However, there are also drawbacks: although the imaging speed of terahertz diffraction tomography is faster, its imaging quality is poor due to the lack of an effective reconstruction algorithm. In 2004, S. Wang et al. first used diffraction tomography based on the THz-TDS system to image polyethylene samples. === THz tomosynthesis === Tomosynthesis is a technique used to create high-quality tomographic images. The reconstruction can be done from several projection angles, which creates the image faster. This technique has low resolution but faster imaging speed. It also has an advantage over terahertz CT. Terahertz CT is significantly affected by reflection and refraction, especially for wide, flat plate samples, which have a large incidence angle at the edges and severe signal attenuation. Therefore, it is difficult to obtain complete, low-noise projection data.
However, terahertz tomosynthesis is not affected by refraction and reflection because of the small incidence angle during projection. It is an effective method for local imaging, rapid imaging, or imaging with incomplete sample rotation. In 2009, N. Sunaguchi et al. in Japan used a continuous-wave terahertz solid-state frequency multiplier with a frequency of 540 GHz to conduct tomosynthesis imaging of the three letters "T", "H" and "Z" placed at different depths in a stack of Post-it notes. The back-projection method and a Wiener filter were used to reconstruct the spatial distribution of the three letters. === THz time of flight tomography === Terahertz time-of-flight tomography can reconstruct the 3D distribution of the refractive index from terahertz pulses reflected at different depths in the sample. The depth distribution of the refractive index can be obtained by analyzing the time delay of the peak of the reflected pulse. The longitudinal resolution of time-of-flight tomography depends on the pulse width of the terahertz waves (usually corresponding to tens of microns); therefore, the vertical resolution of time-of-flight tomography is very high. In 2009, J. Takayanagi et al. designed an experimental system that successfully performed tomography on samples consisting of three sheets of superimposed paper and a thin, two-micron-thick layer of GaAs. === 3D holography === The THz beam can be incorporated into 3D holography if terahertz waves of different scattering orders can be differentiated. With both the intensity and phase distribution recorded, the interference pattern generated by the object light and reference light encodes more information than a focused image. The holograms can provide a 3D visualization of the object of interest when reconstructed via Fourier optics. However, it remains a challenge to obtain high-quality images with this technique due to the scattering and diffraction effects required for measurement. High-order scattering measurements usually result in a poor signal-to-noise ratio (SNR). === Fresnel lenses === Fresnel lenses serve as a replacement for traditional refractive lenses, with the advantages of being small and lightweight. As their focal lengths depend on frequency, samples can be imaged at various locations along the propagation path to the imaging plane, which can be applied to tomographic imaging. === Synthetic aperture processing (SA) === Synthetic aperture processing (SA) differs from traditional imaging systems in how it collects data. In contrast to the point-to-point measurement scheme, SA uses a diverging or unfocused beam. The phase information collected by SA can be adopted for 3D reconstruction. === Terahertz computed tomography (CT) === Unlike X-ray imaging, terahertz computed tomography records both amplitude and spectral phase information. Terahertz CT can identify and compare different substances while non-destructively locating them. === Laser Feedback Interferometry === Laser feedback interferometry (LFI) is a technique in which a portion of the laser's emitted light is reflected back into the laser cavity after interacting with a target. This re-injected light interferes with the intracavity field, causing measurable changes in the laser's output intensity or frequency. By analyzing these variations, information about the target's displacement, surface profile, or optical properties can be extracted (a simple model is sketched below).
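The intensity modulation LFI measures can be illustrated with the simplest weak-feedback self-mixing model, in which the re-injected field adds a cosine term whose phase tracks the round-trip path to the target. This is a schematic sketch, not a model of any specific QCL system; the modulation depth and target motion are assumed values.

```python
import numpy as np

def self_mixing_signal(distance_m: np.ndarray, wavelength_m: float,
                       p0: float = 1.0, m: float = 0.05) -> np.ndarray:
    """Weak-feedback self-mixing model: the emitted power is modulated by
    the round-trip phase to the target, P = P0 * (1 + m*cos(4*pi*L/lambda))."""
    return p0 * (1.0 + m * np.cos(4.0 * np.pi * distance_m / wavelength_m))

# A target vibrating by ±1 µm around a 10 cm standoff, probed at an
# assumed 100 µm (3 THz) QCL wavelength. A displacement of lambda/2
# (50 µm) would give one full interference fringe; the ±1 µm motion here
# produces a small sub-fringe modulation of the laser output.
t = np.linspace(0.0, 1e-3, 2000)                       # 1 ms of samples
target = 0.10 + 1e-6 * np.sin(2.0 * np.pi * 1e3 * t)   # 1 kHz vibration
signal = self_mixing_signal(target, 100e-6)
print(f"modulation range: {signal.min():.4f} .. {signal.max():.4f}")
```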
In skin cancer imaging, LFI paired with quantum cascade lasers allows for precise, real-time detection due to its high sensitivity, inherent self-alignment, and ability to operate without the need for external detectors. == See also == Terahertz metamaterial Terahertz nondestructive evaluation Terahertz radiation Terahertz time-domain spectroscopy Tomography == References ==
Wikipedia/Terahertz_tomography
Spectroscopic optical coherence tomography (SOCT) is an optical imaging and sensing technique which provides localized spectroscopic information about a sample based on the principles of optical coherence tomography (OCT) and low coherence interferometry. The general principles behind SOCT arise from the large optical bandwidths involved in OCT, where information on the spectral content of backscattered light can be obtained by detection and processing of the interferometric OCT signal. The SOCT signal can be used to quantify depth-resolved spectra to retrieve the concentration of tissue chromophores (e.g., hemoglobin and bilirubin), characterize tissue light scattering, and/or be used as a functional contrast enhancement for conventional OCT imaging. == Theory == The following discussion of techniques for quantitatively obtaining localized optical properties using SOCT is a summary of the concepts discussed in Bosschaart et al. === Localized spectroscopic information === The general form of the detected OCT interferogram is written as $i_d = |E_s|^2 + |E_r|^2 + 2\,E_s E_r \cos(2kd)$, where $E_s$ and $E_r$ are the fields returning from the sample and reference arm, respectively, with wavenumber $k = 2\pi/\lambda$ and $\lambda$ the wavelength. Further, $2d$ is the optical path length difference, so that $d$ is the assigned depth location in the tissue. The spatial-domain and spectral-domain descriptions of the collected OCT signal can be related by Fourier transformation: $i_d(2d) = |\mathcal{F}\{i_d(k)\}|$, where $\mathcal{F}$ is the Fourier transform. However, due to the wavelength dependence with depth of both scattering and absorption in tissue, a direct Fourier transform cannot be applied to obtain localized spectroscopic information from the OCT signal. For this reason, a time-frequency analysis method must be applied. ==== Time-frequency analysis methods ==== Time-frequency analysis allows for the extraction of information on both the time and frequency components of a signal. In most SOCT applications a continuous short-time Fourier transform (STFT) method is used: $\mathrm{STFT}(k,d;w) = \int_{-\infty}^{\infty} i_d(d')\, w(d-d';\Delta d)\, e^{-ikd'}\, \mathrm{d}d'$, where $w$ is a spatially confined windowing function that extracts spatially localized frequency information by suppressing information from outside of the window, commonly a Gaussian distribution centered around $d$ with width $\Delta d$. As a result, there exists an inherent trade-off between spatial and frequency resolution in the STFT method. A wavelet transform (WT) approach may also be considered, using a series of functions localized in both real and Fourier space, generated from the complex window function $w$ by translations and dilations: $\mathrm{WT}(k,d) = \int_{-\infty}^{\infty} i_d(d')\, w\!\left(\frac{d-d'}{\kappa}\right) \mathrm{d}d'$, where $\kappa$ is the scaling factor, which dilates or compresses the wavelet $w$.
A wavelet transform (WT) approach may also be considered. The WT uses a series of functions localized in both real and Fourier space, generated from a complex window function w {\textstyle w} by translations and dilations: WT ( k , d ) = ∫ − ∞ ∞ i d ( d ′ ) w ( d − d ′ κ ) d d ′ {\displaystyle {\text{WT}}(k,d)=\int _{-\infty }^{\infty }i_{d}(d')w{\bigg (}{\frac {d-d'}{\kappa }}{\bigg )}\,dd'} where κ {\textstyle \kappa } is the scaling factor, which dilates or compresses the wavelet w {\textstyle w} . In this case, the physical process can be considered as an array of band-pass filters whose bandwidth is a constant fraction of the center frequency, using short windows at high frequencies and long windows at low frequencies. Unlike the STFT, the WT is therefore not constrained to a fixed bandwidth and can adapt the window size to the frequency of interest, so the trade-off between time and frequency resolution varies with scale. Bilinear transforms may also be applied, which under the right conditions incur a reduced resolution penalty. For SOCT purposes the Wigner distribution: WD ( k , d ) = ∫ − ∞ ∞ i d ( d + d ′ ) i d ∗ ( d − d ′ ) e − i k d ′ d d ′ {\displaystyle {\text{WD}}(k,d)=\int _{-\infty }^{\infty }i_{d}(d+d')i_{d}^{*}(d-d')e^{-ikd'}\,dd'} can be used to extract structural knowledge of samples from time-localized information. The Wigner distribution applies a Fourier transform to the autocorrelation of the OCT interferogram. The drawback of this method lies in its quadratic nature: overlapping signal components give rise to cross-terms (interference terms), and separating the true signal terms from this interference is challenging. In practice the WD is smoothed to suppress the interference terms, which compromises the joint time-frequency resolution in proportion to the level of suppression. === Quantitative determination of optical properties === The time-frequency analysis methods described above result in a wavelength-resolved power spectrum S {\textstyle S} as a function of depth d {\textstyle d} . Assuming the first Born approximation, S ( d ) {\textstyle S(d)} can be described using Beer's law: S ( d ) = ξ ⋅ μ b , N A e − 2 μ O C T d {\displaystyle S(d)=\xi \cdot \mu _{b,NA}e^{-2\mu _{OCT}d}} where μ O C T {\textstyle \mu _{OCT}} is the OCT signal attenuation coefficient and the factor 2 accounts for the double-pass attenuation from depth d {\textstyle d} . The parameters ξ {\textstyle \xi } and μ b , N A {\textstyle \mu _{b,NA}} determine the amplitude of S ( d ) {\textstyle S(d)} at d = 0: ξ {\textstyle \xi } is a system-dependent factor, defined in terms of the source power spectrum S 0 {\textstyle S_{0}} incident on the sample and the axial point spread function T, while the backscattering coefficient μ b , N A {\textstyle \mu _{b,NA}} is sample dependent and is discussed in further detail below. The experimentally determined OCT attenuation coefficient can be further expressed as: μ O C T = μ t = μ s + μ a {\displaystyle \mu _{OCT}=\mu _{t}=\mu _{s}+\mu _{a}} with the total attenuation coefficient μ t {\textstyle \mu _{t}} being the sum of the scattering coefficient μ s {\textstyle \mu _{s}} and the absorption coefficient μ a {\textstyle \mu _{a}} . The backscattering coefficient is both sample and source dependent and is defined as: μ b , N A = μ s ⋅ 2 π ∫ π − N A π p ( θ ) sin ⁡ θ d θ {\displaystyle \mu _{b,NA}=\mu _{s}\cdot 2\pi \textstyle \int _{\pi -NA}^{\pi }p(\theta )\sin \theta \,d\theta } where p ( θ ) {\textstyle p(\theta )} is the scattering phase function, integrated over the numerical aperture N A {\textstyle NA} . The backscattering coefficient may be experimentally determined provided ξ {\textstyle \xi } is fully characterized; commonly, ξ {\textstyle \xi } is measured by a separate calibration with a sample whose backscattering coefficient is known from Mie theory.
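As a concrete illustration of the model above, the attenuation coefficient can be estimated from a measured depth profile by linear least squares on the logarithm of S(d). This is a minimal sketch under the single-scattering Beer's-law assumption; the function name and the fitting choices are illustrative rather than a published SOCT pipeline:

```python
import numpy as np

def fit_attenuation(S, d):
    """Estimate mu_OCT and the d = 0 amplitude from a depth profile S(d).

    Model: S(d) = A * exp(-2 * mu_OCT * d), with A = xi * mu_b_NA.
    Taking logs gives log S(d) = log A - 2 * mu_OCT * d, a line in d.
    """
    slope, intercept = np.polyfit(d, np.log(S), 1)
    mu_oct = -0.5 * slope          # attenuation coefficient (1/m if d is in m)
    amplitude = np.exp(intercept)  # A = xi * mu_b_NA
    return mu_oct, amplitude
```

With ξ known from a separate Mie-calibrated measurement, μb,NA follows as amplitude/ξ, and μOCT can then be decomposed into its scattering and absorption parts as described in the next section.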
=== Separation of μs and μa === Several approaches have been used to isolate the individual contributions of absorption ( μ a {\textstyle \mu _{a}} ) and scattering ( μ s {\textstyle \mu _{s}} ) from the overall OCT signal attenuation ( μ O C T {\textstyle \mu _{OCT}} ). One method is least-squares fitting, in which the wavelength dependence of scattering is modeled with a power law and the absorption spectrum is regarded as the total absorption contribution over all known chromophores, with a least-squares fit to the measured attenuation values: μ O C T = a ⋅ λ − b + ∑ i ( c i μ a , i ) {\displaystyle \mu _{OCT}=a\cdot \lambda ^{-b}+\textstyle \sum _{i}\displaystyle (c_{i}\mu _{a,i})} The first term on the right represents the scattering component, with scaling factor a {\textstyle a} and scatter power b {\textstyle b} , and the second term models the total absorption over all chromophores i {\textstyle i} , each with individual contribution c i {\textstyle c_{i}} . A limitation of this method is that the localization of the chromophores present and their absorption properties must be known for it to be effective. Another common approach is calibration: if the absorption coefficient of a scattering sample can be obtained through a separate calibration measurement, the scattering coefficient follows directly by subtraction. A problem with this method is the assumption that scattering is uniform across different tissue regions; if distinct structures have different optical properties, the measurement is biased. Finally, for certain applications, the real and imaginary parts of the complex refractive index may be used to isolate the individual contributions of absorption and scattering using the Kramers–Kronig (KK) relations, since the imaginary part of the refractive index is tied to the absorption spectrum through those relations. Robles et al. showed it was possible to separate the necessary contributions of the real part of the refractive index from a nonlinear dispersion phase term in the OCT signal. === Accuracy === The overall accuracy of SOCT in isolating localized optical spectra is limited by several factors. First is the number of acquisitions: because of speckle noise, averaging over multiple acquisitions is critical for valid measurements, with the noise decreasing with the square root of the number of independent scans averaged. Sample inhomogeneity can also be a factor through losses in spectral resolution, and the system numerical aperture and spectrometer roll-off introduce sensitivity limitations that affect both accuracy and resolution. == References ==
Wikipedia/Spectroscopic_optical_coherence_tomography
Antimicrobial photodynamic therapy (aPDT), also referred to as photodynamic inactivation (PDI), photodisinfection (PD), or photodynamic antimicrobial chemotherapy (PACT), is a photochemical antimicrobial method that has been studied for over a century. Supported by in vitro, in vivo and clinical studies, aPDT offers a treatment option for broad-spectrum infections, particularly in the context of rising antimicrobial resistance. Its multi-target mode of action allows aPDT to be a viable therapeutic strategy against drug-resistant microorganisms. The procedure involves the application of photosensitizing compounds, also called photoantimicrobials, which, upon activation by light, generate reactive oxygen species (ROS). These ROS oxidize the cellular components of a wide array of microbes, including pathogenic bacteria, fungi, protozoa, algae, and viruses. == Historical perspective == In the early 20th century, decades before the first chemical antibiotics were developed, Dr. Niels Finsen discovered that blue light could be used to treat skin infections. In the following years, Finsen's phototherapy was used in many European medical institutions as a topical antimicrobial. In 1903, he was awarded the Nobel Prize in Physiology or Medicine "in recognition of his contribution to the treatment of diseases, especially lupus vulgaris, with concentrated light radiation, whereby he has opened a new avenue for medical science". Around the same time, Oscar Raab, a German medical student supervised by Professor Hermann von Tappeiner, made a chance scientific observation of the antimicrobial effects of light-activated dyes. While conducting experiments on the viability of motile protozoa, Raab noticed that fluorescent dyes, such as some acridine and xanthene dyes, could kill stained microbes when sunlight was directed onto the stained samples. These effects were particularly pronounced during the summer, when sunlight is brightest. This chance observation highlighted the ability of certain fluorescent compounds, now termed "photosensitizers" (PS), to artificially induce light sensitivity in microorganisms and enhance the known antimicrobial effects of sunlight. Shortly thereafter, von Tappeiner and Jodlbauer discovered that oxygen was crucial for light-mediated reactions, leading to the creation of the term "photodynamische Wirkung" (photodynamic effect). However, it was not until the 1970s that researchers began to systematically explore the potential of photodynamic therapy for medical applications. Since then, significant progress has been made in understanding the underlying mechanisms and optimizing the efficacy of photodynamic therapy (PDT) for the treatment of cancers and age-related macular degeneration. Today, the branch of PDT focused on killing microbial cells is considered an option to prevent and treat infectious diseases in a manner that avoids the emergence of antimicrobial drug resistance. == Mechanism of action == The photochemical principle underlying antimicrobial photodynamic therapy involves the activation of a photosensitizer, a light-sensitive compound that can locally generate reactive products, such as radicals and reactive oxygen species (ROS), upon exposure to specific wavelengths of light. An ideal photosensitizer selectively accumulates in the target microbial cells, where it remains inactive and non-toxic until it is activated by irradiation with light of a specific wavelength.
This activation promotes the photosensitizer molecule to a short-lived excited state that possesses different chemical reactivity relative to its ground-state counterpart. When the photosensitizer molecule is in an excited triplet state, it can induce local Type 1 photodynamic reactions by direct contact with molecular oxygen, inorganic ions or biological targets. These redox reactions (Type 1) involve charge transfer, by donation of an electron (e−) or hydrogen ion (H+), to form radicals and ROS such as the superoxide anion radical, hydrogen peroxide and hydroxyl radicals. The excited triplet-state photosensitizer can also transfer energy to triplet ground-state molecular oxygen, producing singlet oxygen via Type 2 photodynamic reactions. The photoinduced burst of reactive species affects cellular redox regulation and can cause oxidative damage to vital structures made of proteins, lipids, carbohydrates and nucleic acids, leading to localized cellular death. == Efficacy against drug-resistant pathogens == The efficacy of antimicrobial photodynamic therapy, using various distinct photosensitizers, has been studied since the 1990s. Most studies have yielded positive outcomes, often achieving disinfection levels, as defined by infection-control guidelines, exceeding 5 log10 (99.999%) microbial inactivation. Over the past decade, a collection of novel photoantimicrobials has been developed, exhibiting improved efficiencies in antimicrobial photodynamic action against various bacterial species. These studies have primarily focused on the inactivation of planktonic cultures, which are free-floating bacterial cells. This method serves as a convenient approach for high-throughput antimicrobial screening of multiple compounds, such as evaluating whether minor chemical modifications to a given photosensitizer can enhance antimicrobial efficacy. However, when present in biofilms, microbial populations can exhibit distinct characteristics compared to their planktonic counterparts, including significantly higher tolerance towards antimicrobials (up to 1,000-fold). Among the various factors contributing to this enhanced tolerance is the biofilm matrix, composed of extracellular polymeric substances (EPS). The EPS can shield constituent bacteria from antimicrobials through dual mechanisms: 1) by impeding the penetration of antimicrobial substances throughout the biofilm due to interactions between positively charged agents and negatively charged EPS residues, and 2) through redox processes and π-π interactions involving aromatic surfaces that generally act to neutralize the incoming active substance. EPS must be considered in the rational design of antimicrobial photosensitizers, because the densely cross-linked matrix may also obstruct diffusion of the photosensitizer into deeper biofilm layers. The multi-target mechanism of aPDT counteracts the development of antimicrobial resistance, which continues to be a major global health concern. The likelihood of pathogens developing resistance is higher for antimicrobial strategies that have a specific target structure, following the key-lock principle embodied in many antibiotics and antiseptics. In such cases, pathogens can evade the antimicrobial challenge through specific mutations, upregulation of efflux pumps, or production of enzymes that deactivate antimicrobials. In contrast, aPDT acts through a variety of non-specific oxidative mechanisms targeting multiple structures and pathways simultaneously, making the technique far less prone to resistance.
The possibility of bacteria developing tolerance to aPDT has therefore been deemed highly unlikely. Several studies have demonstrated the efficacy of aPDT against various drug-resistant pathogens, including the World Health Organization (WHO) priority pathogens, such as Staphylococcus aureus, Pseudomonas aeruginosa, Klebsiella pneumoniae, Acinetobacter baumannii, Enterococcus faecium, Candida auris, Escherichia coli and many others. == Light sources == Light is required to excite the photosensitizer, which leads to the photochemical production of ROS. To efficiently transfer photon energy to the electronic structure of the photosensitizer, the wavelength of the light source must be matched to the absorption spectrum of the photosensitizer. Different light sources have been used in aPDT, such as lamps (e.g. tungsten filament, xenon arc and fluorescent lamps), lasers and light-emitting diodes (LEDs). Lamps typically emit white light, but a filter can be used to select the appropriate wavelength to be absorbed by the photosensitizer and to avoid undesired thermal effects. In contrast, lasers are monochromatic light sources that can be easily coupled to optical fibers to access non-surface regions. LEDs are also monochromatic light sources, although their spectral emission bands are wider than those of lasers. However, the coupling of LEDs to optical fibers is not efficient, resulting in significant loss of light. More recently, organic LEDs (OLEDs) have been used in aPDT as wearable light sources because they can be made more flexible, thinner, and lighter than conventional LEDs. Sunlight can also serve as a source of light for aPDT; however, exact illumination parameters may be difficult to reproduce precisely. == Light dosimetry == aPDT results depend on the interplay of three physical quantities: irradiance, radiant exposure and exposure time. Irradiance is defined as the optical power of the light source in watts divided by the area of tissue illuminated, conventionally expressed per square meter or square centimeter (W/m2 or W/cm2). The irradiance, as a photodynamic parameter, is limited by the onset of adverse thermal effects in exposed tissue, or by degradative consequences to the sensitizer itself (commonly referred to as "photobleaching"). Radiant exposure, commonly termed the light dose, is given by the product of irradiance and exposure time in seconds (equivalently, the delivered energy divided by the illuminated area, in J/cm2). This parameter is limited in practice by the treatment times that are acceptable in a point-of-care setting. Fluence is a related but distinct physical quantity often used by aPDT practitioners, which additionally considers the backscattering of light by the tissue, causing re-entry of photons into the treated area. == Photosensitizers == Photodynamic action relies on absorption of electromagnetic radiation by the photosensitizing compound and conversion of this energy into redox chemical reactions or transfer to ground-state oxygen, producing the highly oxidizing species singlet oxygen. Consequently, the photosensitizer can be considered a photocatalyst; at the same time, the sensitizer interacts directly with target moieties such as microbes to establish, for example, molecular targeting. This explains why not all photosensitizers are useful as photoantimicrobials. The most effective photosensitizer molecules carry a positive charge (cationic). This promotes electrostatic attraction with negatively charged groups found on microbial cell surfaces (e.g.
phosphate, carboxylate, sulfate), thus ensuring that during illumination, production of reactive oxygen species occurs in close contact with the targeted cellular population. Consequently, negatively charged photosensitizers are less effective, particularly against gram-negative bacterial cells, which carry a strongly negative zeta potential. The most widely employed photosensitizer in clinical practice is the phenothiazine derivative methylene blue, which carries a +1 charge. Methylene blue is also favored due to its long record of safe use in patients, both in surgical staining and in the systemic treatment of methemoglobinemia. Many other photosensitizers from various chemical classes, such as porphyrins, phthalocyanines and xanthenes, have been suggested, but the requirement for cationic character and proven safety for human/animal use represents a high barrier to new chemical entity development. == aPDT Enhancement by inorganic salts and gold nanoparticles == It was discovered in 2015 that the addition of inorganic salts can potentiate aPDT by several orders of magnitude, and may even allow oxygen-independent photoinactivation to take place. Potassium iodide (KI) is the most relevant example. Other inorganic salts such as potassium thiocyanate (KSCN), potassium selenocyanate (KSeCN), potassium bromide (KBr), sodium nitrite (NaNO2) and even sodium azide (NaN3, toxic) have also been shown to increase the killing of a broad range of pathogens by up to one million times. The addition of KI at concentrations up to 100 mM allows gram-negative bacteria to be killed by photosensitizers that have no effect on their own, and this was shown to be effective in several animal models of localized infections. KI was shown to be effective in human AIDS patients with oral candidiasis who were treated with methylene blue aPDT. Oral consumption of saturated KI solution (4-6 g KI/day) is a standard treatment for some deep fungal infections of the skin. The photochemical mechanisms of action are complex. KI can react with singlet oxygen to form free molecular iodine plus hydrogen peroxide, which show synergistic and long-lived antimicrobial effects, as well as forming short-lived, reactive iodine radicals. Type 1 photosensitizers can carry out direct electron transfer to form iodine radicals, even in the absence of oxygen. KSCN reacts with singlet oxygen to form sulfur trioxide radicals, while KSeCN forms semi-stable selenocyanogen. KBr reacts under TiO2 photocatalysis to form hypobromite, while NaNO2 reacts with singlet oxygen to form unstable peroxynitrate. NaN3 quenches singlet oxygen, so it can only react by electron transfer to form azide radicals. Relatively high concentrations of salts are necessary to trap the short-lived reactive species produced during aPDT. The presence of gold nanoparticles can also enhance the antimicrobial effectiveness of photosensitizers such as toluidine blue. Covalently linking nanoparticles to a photosensitizer likewise results in enhanced antimicrobial activity. The gold nanoparticles have two roles: firstly, they enhance the light capture of the dye, and secondly, they help direct the decay pathway of the dye, encouraging a non-radiative process through the formation of excess bactericidal radical species. == Incorporation of photosensitizers into polymers == Photosensitizers can be incorporated into polymers, resulting in materials that can kill microbes on their surfaces when activated by visible light.
Such polymers have been shown to be effective in killing bacteria in a clinical environment. These self-disinfecting materials could, therefore, be used to coat surfaces in order to reduce the spread of disease-causing microbes in clinical environments as well as in food-processing and food-handling premises. Advances in medicine and surgery have led to increasing reliance on a variety of medical devices, of which the catheter is the most widely used. Unfortunately, the non-shedding surfaces of catheters can be colonized by microbes, resulting in biofilm formation and, consequently, infection. Such catheter-related infections are a major cause of morbidity and mortality. Photosensitizers such as methylene blue and toluidine blue have been incorporated into silicone, the main polymer used in the manufacture of catheters, and the resulting composites have been shown to exert an antimicrobial effect when exposed to light of a suitable wavelength. Suitable irradiation of such materials has been shown to significantly reduce biofilm accumulation on their surfaces. This approach has potential for reducing the morbidity and mortality associated with catheter-associated infections. == Microbial resistance to aPDT == The generation of reactive oxygen species (ROS) in neutrophils, macrophages, and eosinophils is one of the primary means by which the human immune system combats infecting microbes. Highly adaptable microbes have evolved some protective strategies against these reactive molecules, upregulating antioxidant enzymes when exposed to ROS, which suggests one method by which microbes could develop increased resistance to aPDT. However, these biochemical responses are limited when compared to the magnitude of oxidative stress placed on the microbe by aPDT. Numerous investigations involving the repeated exposure of microorganisms to sublethal doses of antimicrobial photodynamic therapy (aPDT), followed by analysis of the resilience of the surviving cultured cells, consistently reveal no significant indication of the development of resistance in these microorganisms. In fact, in a study using methylene blue as a photosensitizer (PS) against MRSA, a series of aPDT exposure and re-cultivation tests conducted over multiple years showed that the microorganism's sensitivity to aPDT remained unchanged. In contrast, significant resistance to oxacillin emerged in fewer than twelve cycles. == Virulence inhibition by aPDT == Pathogenic microbes cause harm to their hosts and evade host defense mechanisms through a range of virulence factors, which include elements like exotoxins, endotoxins, capsules, adhesins, invasins, and proteases. While antibiotics can inactivate microbes and thereby prevent further production of host-damaging virulence factors, few have any effect on pre-existing virulence factors or those released during the bactericidal process. These factors can continue to produce damaging effects even after the offending microbial cells have been inactivated. Unlike most antimicrobial drugs, antimicrobial photodynamic therapy (aPDT) is typically capable of neutralizing or diminishing the effectiveness of microbial virulence factors, or of reducing their expression. The ability to inhibit microbial virulence is of particular interest because it could be related to accelerated infection-site healing when compared to standard antimicrobial chemotherapy that relies only on bacteriostatic or bactericidal effects.
Secreted virulence factors normally contain peptides, and it is well known that some amino acids (e.g. histidine, cysteine, tyrosine, tryptophan and methionine) are highly vulnerable to oxidation. Photodynamic reactions have demonstrated significant effectiveness in diminishing the harmful activity of lipopolysaccharides (LPS), proteases, and various other microbial toxins. The capability not only to eliminate the microbes causing an infection but also to inhibit the expression of various molecules that lead to host tissue damage offers a significant benefit over traditional antimicrobial drugs. == Nasal decolonization == Nasal decolonization is recognized as a primary preventive intervention against the development of hospital-acquired infections (HAIs), especially surgical site infections (SSIs). HAIs represent a serious public health concern worldwide, with approximately 2.5 million HAIs annually in the United States leading to high morbidity and mortality (e.g. 30,000 deaths per year directly attributable to HAIs). HAIs affect one in every 31 hospitalized patients in the USA. Staphylococcus aureus, a gram-positive bacterium, is the most common cause of nosocomial pneumonia and surgical site infections and the second-most common cause of bloodstream, cardiovascular, and eye, ear, nose, and throat infections. S. aureus is by far the leading cause of skin and soft tissue HAIs, which can lead to potentially lethal bacteremia. SSIs are among the most common healthcare-associated infections, with substantial morbidity and mortality. An analysis of the 2005 Nationwide Inpatient Sample Database showed that S. aureus infections in inpatients tripled the duration of hospital stay, increasing length of stay by an average of 7.5 days for surgical site infections. The anterior nares have been classified as the most consistent site of S. aureus colonization. Asymptomatic S. aureus nasal carriage in healthy individuals has been reported at 20-55%, increasing the risk of surgical-site infection almost 4-fold. Critically, a growing proportion of these bacterial populations exhibit antibiotic resistance. Nasal decolonization of S. aureus to reduce the incidence of SSIs is becoming standard of care in both intensive care units (ICU) and presurgical settings. Various decolonization strategies have been used in hospitals in an effort to reduce transmission of bacteria and decrease the overall infection rate. When broadly administered within an acute-care setting, decolonization acts both directly and indirectly by reducing the overall bioburden, with the added benefit of effects that extend beyond the treated patients to healthcare workers and other patients. Several clinical studies performed using the current standard of care, intranasal mupirocin 2% antibiotic ointment, in surgical patients concluded that this treatment significantly decreased the rate of hospital-acquired infections. One study found a 44% reduction in bloodstream infection rates when universal decolonization (e.g. intranasal mupirocin ointment and chlorhexidine body wash) was used in a trial involving 73,256 hospital patients. In addition, researchers have demonstrated that eradicating S. aureus from the anterior nares, likewise using intranasal mupirocin ointment, reduced surgical site infection rates by up to 58% in hospitalized patients who were nasal carriers.
However, widespread use of mupirocin is associated with the development of mupirocin-resistant strains of MRSA, with one hospital in Canada experiencing an increase from 2.7% to 65% resistant strains in three years. A targeted, as opposed to universal, decolonization approach is sometimes recommended because of increasing levels of mupirocin resistance. To date, however, only universal decolonization with mupirocin has been demonstrated to be an effective control measure, and therefore selective administration of mupirocin is contraindicated. Nasal aPDT addresses the issues of antibiotic-induced resistance in multiple ways. As a site-specific therapy, it does not interfere with the overall microbiome because it is not systemically administered. Moreover, phenothiazinium photosensitizers can target negatively charged bacterial cells while leaving zwitterionic host tissues unharmed. Treatment of the nose specifically targets the respiratory outlet, which is a key source of microbial colonization and dissemination through touch or normal respiration. At the same time, the nonspecific mechanisms of action effectively prevent the development of resistance. The first large-scale study of aPDT for nasal decolonization, initially conducted exclusively on specific surgery types, demonstrated a significant 42% reduction in surgical site infections. The most significant reductions in SSI rates were in orthopedic and spinal surgeries. Since then, the use of nasal photodisinfection has been expanded to encompass a wide range of surgeries, resulting in an increased effect size with an approximate efficacy of 80%. The technique has been deployed in multiple Canadian hospitals, and is undergoing clinical trials in the US for the same purpose. Specialty-specific studies have also been carried out, especially in high-risk surgery of the spine. One large Canadian study found that the spine-surgery SSI rate decreased by 5.6 percentage points (from 7.2% to 1.6%) as a result of nasal aPDT combined with chlorhexidine bathing, saving on average $45–55 CAD per treated patient ($4.24 million CAD annually). This study concluded that "CSD/nPDT is both efficacious and cost-effective in preventing surgical site infections". No adverse events were reported. == Skin infections == There are three main types of skin infections in humans that have been treated with aPDT: 1) fungal infections, 2) mycobacterial infections and 3) cutaneous leishmaniasis. The most clinically used photosensitizers are methylene blue and curcumin, as well as the protoporphyrin IX precursors aminolevulinic acid (ALA) and methyl-ALA. Fungal infections treated with aPDT have included both dermatophytosis and sporotrichosis. Infections with filamentous fungi such as Trichophyton spp., which express keratinase enzymes, usually affect the toenails (onychomycosis) but can also affect the skin (tinea). In onychomycosis (tinea unguium), efforts are often made to increase the penetration of photosensitizers into the toenail matrix before the application of light. Cutaneous tinea infections affecting the foot, scalp or crotch have been treated with ALA-aPDT. Sporotrichosis is a zoonosis caused by the dimorphic fungus Sporothrix spp., often transmitted by animal bites or scratches. It has been treated with aPDT mediated by ALA or methylene blue. Skin infections can also be caused by non-tuberculous mycobacteria, including species such as Mycobacterium marinum (swimmers' granuloma) and the Mycobacterium avium complex.
Some of these infections have been treated with aPDT using ALA in combination with conventional antibiotics. Leishmaniasis is an intracellular parasitic infection caused by single-celled protozoa of the genus Leishmania. It is transmitted by the bites of infected sand flies found in both the Old World (Southern Europe and the Middle East) and the New World (Central and South America). Each year there are up to 2 million new cases and 70,000 deaths worldwide. Leishmaniasis infections can be cutaneous, mucosal, or visceral, with the latter type being the deadliest. Cutaneous leishmaniasis has been treated with aPDT mediated by either ALA or methylene blue, because the standard treatments, systemic amphotericin B or topical pentavalent antimonial preparations, have several drawbacks. == Chronic wounds == Chronic wounds are those that do not heal within months of treatment. They are classified into three main types, i.e. venous, diabetic, and pressure ulcers, and are frequently sites of microbial infection that becomes a major deterrent to patient recovery. aPDT offers a treatment option for chronic wounds because of its lethal action against drug-resistant microorganisms. Diabetic foot ulcers (DFU) affect 10 to 25% of diabetic patients during their lives, requiring long and intensive hospitalization. The economic impact of DFU on worldwide health care systems is significant. DFU are frequently infected with a combination of fungi and bacteria, including the genera Serratia, Morganella, Proteus, Haemophilus, Acinetobacter, Enterococcus, and Staphylococcus. In addition, there is an increased likelihood of contracting resistant strains of these and other microorganisms from hospital settings. DFU patients commonly respond poorly to antibiotic therapy. Consequently, amputation becomes indicated to prevent other complications, such as osteonecrosis, thrombosis and more disseminated types of bacteremia. aPDT has been successfully used to treat the diabetic foot, reducing the incidence of amputation in DFU patients. DFU patients treated with aPDT had only a 2.9% chance of amputation, compared to 100% in the control group (classical antibiotic therapy, without aPDT). Using an initial cohort study of 62 patients and subsequently of 218 patients, Tardivo and colleagues developed the Tardivo algorithm as a prognostic score to determine the risk of amputation and to predict the ideal therapeutic options for the treatment of DFU by aPDT. The score is based on three factors: Wagner's classification, signs of peripheral arterial disease (PAD), and the location of the foot ulcers. Values for the independent parameters are multiplied together and, for patients with scores below 16, treatment with aPDT is associated with an approximately 85% (95% CI) chance of recovery. == Oral infections == In the early 1990s, Emeritus Professor Michael Wilson of University College London (UCL) initiated scientific investigations into the potential of aPDT to combat bacteria of interest in dentistry. Since then, aPDT has been explored for various oral conditions, such as periodontal disease (gum disease), dental caries (cavities), endodontic treatment (root canal treatment), oral herpes and oral candidiasis. Research and clinical studies have shown promising results in reducing microbial load and treating infections. However, the efficacy of aPDT can vary based on factors like the type and concentration of photosensitizer used, light parameters, and the specific infection being treated.
While aPDT can be considered an adjunctive treatment to the standard of care, it is not currently intended to replace conventional therapies. This may change in the future, as drug-resistance patterns in the oral microbiome develop over time, making aPDT monotherapy increasingly necessary. Advantages of aPDT in oral infections include its broad-spectrum action: aPDT can target a wide range of microorganisms (e.g. bacteria, fungi, and viruses), including antibiotic-resistant strains, which is relevant because oral biofilm is composed of a wide variety of microorganisms. Another advantage is localized treatment, which can target specific infected areas, minimizing damage to healthy tissues and maintaining the normal microbiota without significant disruption. To date, no significant adverse events associated with intraoral aPDT have been reported. aPDT thus offers the dental practitioner an intraoral decontamination therapy that combines a minimally invasive nature, broad-spectrum action, rapid microbicidal effect, reduced antibiotic use, patient comfort, a high compliance rate, activity against resistant strains, and minimal selection for microbial resistance. == Disinfection of blood-products == During the 1980s, the realization that human immunodeficiency virus (HIV) was present in the global supply of donated blood led to the development of both thorough hemovigilance and methods for the safe disinfection of microbial species in donated blood and blood products. Blood is a mixture of cells and proteins and is routinely separated into its constituent parts for use in various therapies: platelets, red cells and plasma might each be used for specific replacement, while proteins (typically clotting factors) derived from the plasma fraction are provided for the treatment of hemophilia, for example. Viruses such as HIV might be associated with the cellular components or suspended extracellularly, thus representing a threat of recipient infection whichever of these fractions is used. However, treatments aimed at viral inactivation/destruction must preserve cell/protein function, and this represents a barrier, particularly to cellular disinfection. In terms of the use of photosensitizers, both methylene blue and riboflavin are employed for the photodisinfection of plasma, using visible or long-wave ultraviolet illumination respectively, while riboflavin is also used for disinfection of platelets. However, neither approach is employed for red blood cell concentrates (packed RBC). Among related approaches, the psoralen derivative amotosalen, activated by long-wavelength UV light, is used in Europe for disinfection of plasma and platelets. However, this represents a photochemical reaction between the psoralen nucleus and viral nucleic acids, rather than a purely photodynamic effect. Disinfection of packed RBC and whole blood is still being developed and might not necessarily involve light activation. A proposed use of blood disinfection is in the treatment of sepsis, in which the patient's blood would be disinfected and reinfused. == Veterinary applications == In small animal practice, aPDT has been investigated for the treatment of different dermatological diseases with positive results. Although there are limited scientific data in this field, successful applications include otitis externa caused by multidrug-resistant Pseudomonas aeruginosa, dermatophytosis caused by Microsporum canis, and, in association with itraconazole, sporotrichosis.
aPDT can also be used as a non-antibiotic platform for the treatment of infectious diseases in food-producing animals. Indeed, overuse of antimicrobials in these animals may lead to contamination of meat and milk by antibiotic-resistant bacteria or antibiotic residues. In this regard, aPDT has proven effective in the treatment of caseous lymphadenitis and streptococcal abscesses in sheep, and is demonstrably more effective than oxytetracycline (the gold-standard treatment) for bovine digital dermatitis. Other applications of aPDT include the treatment of mastitis in dairy cattle and sheep, and of sole ulcers and surgical wounds in cattle. Exotic, zoo, and wildlife medicine is challenging and stands out as another field of possibility for aPDT. In this regard, aPDT has been successfully used to treat penguins suffering from pododermatitis and snakes with infectious stomatitis caused by gram-negative bacteria. Additionally, aPDT has been deployed as an adjuvant endodontic treatment for a traumatic tusk fracture in an elephant. == Food decontamination == The ever-increasing demand for food decontamination technologies has resulted in several studies evaluating the antimicrobial efficacy of aPDT in food and its effect on the organoleptic properties of food products. aPDT has shown antimicrobial efficacy against microbes on fruits, vegetables, seafood, and meat. The efficacy of aPDT used in this way depends on several factors, including the wavelength of light, temperature, and food-related factors such as acidity, surface properties and water activity. Endogenous porphyrins, light-absorbing compounds located within certain bacteria, produce photosensitized reactions in the presence of light in the blue region of the spectrum (400-500 nm), showing better antimicrobial efficacy than other wavelengths in the visible spectrum (e.g. green and red, 500-700 nm) in the absence of an exogenous photosensitizer. The acidity of the food being disinfected plays an important role, as gram-positive bacteria have been found to be more sensitive to aPDT under acidic conditions, while gram-negative bacteria are more sensitive under alkaline conditions. Since aPDT is a surface decontamination technology, the surface characteristics of the treated material play an important role. The irregular surfaces of products like pet food pellets can lead to a shadowing effect, where microorganisms hide in food crevices and are shielded from the light treatment. Flat surfaces can therefore show better aPDT efficacy than spherical or irregular surfaces. Moreover, high water activity contributes to the success of aPDT compared to low water activity, due to the limited penetration of light into more desiccated foods. Other factors like irradiance, treatment time (or dose), microbial strain, and the distance of the product from the light source also play a major role in the microbicidal efficacy of food-based aPDT. A recent study demonstrated that appropriate concentrations of a photosensitizer potentially useful for food-based disinfection, combined with light at the appropriate peak absorption wavelength, resulted in upwards of a 99.999% (5 log10) reduction in MRSA counts and a complete kill of Salmonella. In addition to bacteria, aPDT has shown efficacy against fungal species. Optimization of the factors influencing antimicrobial efficacy, and the scalability of aPDT, are required for successful application in the food industry.
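The disinfection levels quoted throughout this article follow the standard log10-reduction convention, under which a 5 log10 reduction corresponds to 99.999% inactivation. A minimal sketch of this arithmetic, with illustrative function names:

```python
import math

def log_reduction(n_before, n_after):
    """Log10 reduction between viable counts (e.g. CFU/mL)."""
    return math.log10(n_before / n_after)

def percent_inactivation(log_red):
    """Percentage of organisms inactivated for a given log10 reduction."""
    return 100.0 * (1.0 - 10.0 ** (-log_red))

# Example: 10^7 CFU/mL reduced to 10^2 CFU/mL is a 5 log10 reduction,
# i.e. 99.999% inactivation.
assert abs(log_reduction(1e7, 1e2) - 5.0) < 1e-12
assert abs(percent_inactivation(5.0) - 99.999) < 1e-9
```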
== References == == External links == Academic journals focused on photodynamic science and technology Journal of Photochemistry and Photobiology A: Chemistry Journal of Photochemistry and Photobiology B: Biology Photodiagnosis and Photodynamic Therapy Photochemistry and Photobiology Photochemical and Photobiological Sciences Journal of Biophotonics Lasers in Surgery and Medicine Lasers in Medical Science Professional associations promoting research on photodynamic therapy International Photodynamic Association (IPA) European Society for Photobiology (ESP) American Society for Photobiology (ASP) International Society for Optics and Photonics (SPIE)
Wikipedia/Antimicrobial_photodynamic_therapy
Light therapy, also called phototherapy or bright light therapy, is the exposure to direct sunlight or artificial light at controlled wavelengths in order to treat a variety of medical disorders, including seasonal affective disorder (SAD), circadian rhythm sleep-wake disorders, cancers, neonatal jaundice, and skin wound infections. Treating skin conditions such as neurodermatitis, psoriasis, acne vulgaris, and eczema with ultraviolet light is called ultraviolet light therapy. == Medical uses == === Nutrient deficiency === ==== Vitamin D deficiency ==== Exposure to UV-B light at wavelengths of 290-300 nanometers enables the body to produce vitamin D3, treating vitamin D3 deficiency. === Skin conditions === Light therapy treatments for the skin usually involve exposure to ultraviolet light. The exposures can be to a small area of the skin or over the whole body surface, as in a tanning bed. The most common treatment is with narrowband UVB, which has a wavelength of approximately 311–313 nanometers. Full-body phototherapy can be delivered at a doctor's office or at home using a large high-power UVB booth. Tanning beds, however, generate mostly UVA light, and only 4% to 10% of tanning-bed light is in the UVB spectrum. ==== Acne vulgaris ==== As of 2012, evidence for light therapy and lasers in the treatment of acne vulgaris was not sufficient to recommend them. There is moderate evidence for the efficacy of blue and blue-red light therapies in treating mild acne, but most studies are of low quality. While light therapy appears to provide short-term benefit, there is a lack of long-term outcome data or data in those with severe acne. ==== Atopic dermatitis ==== Light therapy is considered one of the best monotherapy treatments for atopic dermatitis (AD) when applied to patients who have not responded to traditional topical treatments. The therapy offers a wide range of options: UVA1 for acute AD, NB-UVB for chronic AD, and balneophototherapy have all proven their efficacy. Patients tolerate the therapy safely but, as with any therapy, there are potential adverse effects, and care must be taken in its application, particularly with children. According to a study involving 21 adults with severe atopic dermatitis, narrowband UVB phototherapy administered three times per week for 12 weeks reduced atopic dermatitis severity scores by 68%. In this open study, 15 patients still experienced long-term benefits six months later. ==== Cancer ==== According to the American Cancer Society, there is some evidence that ultraviolet light therapy may be effective in helping treat certain kinds of skin cancer, and ultraviolet blood irradiation therapy is established for this application. However, alternative uses of light for cancer treatment, such as light box therapy and colored light therapy, are not supported by evidence. Photodynamic therapy (often with red light) is used to treat certain superficial non-melanoma skin cancers. ==== Psoriasis ==== For psoriasis, UVB phototherapy has been shown to be effective. A feature of psoriasis is localized inflammation mediated by the immune system. Ultraviolet radiation is known to suppress the immune system and reduce inflammatory responses. Light therapy for skin conditions like psoriasis usually uses 313-nanometer UVB, though it may use UVA (315–400 nm wavelength) or a broader-spectrum UVB (280–315 nm wavelength). UVA combined with psoralen, a drug taken orally, is known as PUVA treatment.
In UVB phototherapy the exposure time is very short, seconds to minutes, depending on the intensity of the lamps and the person's skin pigment and sensitivity. ==== Vitiligo ==== About 1% of the human population has vitiligo, which causes painless, distinct, light-colored patches of skin on the face, hands, and legs. Phototherapy is an effective treatment because it forces skin cells to manufacture melanin to protect the body from UV damage. Prescribed treatment is generally 3 times a week in a clinic or daily at home. About 1 month of treatment usually results in re-pigmentation of the face and neck, and 2–4 months for the hands and legs. Narrowband UVB is more suitable for the face and neck, and PUVA is more effective on the hands and legs. ==== Other skin conditions ==== Some types of phototherapy may be effective in the treatment of polymorphous light eruption, cutaneous T-cell lymphoma and lichen planus. Narrowband UVB between 311 and 313 nanometers is the most common treatment. === Retinal conditions === There is preliminary evidence that light therapy is an effective treatment for diabetic retinopathy and diabetic macular oedema. === Mood and sleep related === ==== Seasonal affective disorder ==== The effectiveness of light therapy for treating seasonal affective disorder (SAD) may be linked to reduced sunlight exposure in the winter months. Light resets the body's internal clock. Studies show that light therapy helps reduce the debilitating depressive symptoms of SAD, such as excessive sleepiness and fatigue, with results lasting for at least 1 month. Light therapy is preferred over antidepressants in the treatment of SAD because it is a relatively safe and easy therapy with minimal side effects. Two methods of light therapy, bright light and dawn simulation, have similar success rates in the treatment of SAD. It is possible that response to light therapy for SAD could be season dependent. Morning therapy has provided the best results, because light in the early morning aids in regulating the circadian rhythm. People affected by SAD often have low energy, tend to eat more carbohydrates and sleep longer, but symptoms can vary between people. A 2019 Cochrane review found that the evidence for light therapy's effectiveness in preventing seasonal affective disorder is limited, although the risk of adverse effects is minimal. Therefore, the decision to use light therapy should be based on a person's treatment preference. ==== Non-seasonal depression ==== Light therapy has also been suggested in the treatment of non-seasonal depression and other psychiatric mood disturbances, including major depressive disorder, bipolar disorder and postpartum depression. A meta-analysis by the Cochrane Collaboration concluded that "for patients suffering from non-seasonal depression, light therapy offers modest though promising antidepressive efficacy." A 2008 systematic review concluded that "overall, bright light therapy is an excellent candidate for inclusion into the therapeutic inventory available for the treatment of nonseasonal depression today, as adjuvant therapy to antidepressant medication, or eventually as stand-alone treatment for specific subgroups of depressed patients." A 2015 review found that supporting evidence for light therapy was limited due to serious methodological flaws. A 2016 meta-analysis showed that bright light therapy appeared to be efficacious, particularly when administered for 2–5 weeks' duration and as monotherapy.
==== Chronic circadian rhythm sleep disorders (CRSD) ==== In the management of circadian rhythm disorders such as delayed sleep phase disorder (DSPD), the timing of light exposure is critical. In accordance with the phase response curve, light administered to the eyes before the nadir of the core body temperature rhythm delays the circadian phase, while light administered after the nadir advances it. Use upon awakening may also be effective for non-24-hour sleep–wake disorder. Some users have reported success with lights that turn on shortly before awakening (dawn simulation). Evening use is recommended for people with advanced sleep phase disorder. Some, but not all, totally blind people whose retinae are intact may benefit from light therapy. ==== Circadian rhythm sleep disorders and jet lag ==== ===== Situational CRSD ===== Light therapy has been tested for individuals with shift work sleep disorder and for jet lag. ===== Sleep disorder in Parkinson's disease ===== Light therapy has been trialed in treating sleep disorders experienced by patients with Parkinson's disease. ===== Sleep disorder in Alzheimer's disease ===== Studies have shown that daytime and evening light therapy for nursing home patients with Alzheimer's disease, who often struggle with agitation and fragmented wake/rest cycles, led to more consolidated sleep and an increase in circadian rhythm stability. === Neonatal jaundice (Postnatal Jaundice) === Light therapy is used to treat cases of neonatal jaundice. Bilirubin, a yellow pigment formed during the breakdown of old red blood cells, cannot always be effectively cleared by a neonate's liver, causing neonatal jaundice. Accumulation of excess bilirubin can cause central nervous system damage, so this buildup of bilirubin must be treated. Phototherapy uses the energy from light to isomerize the bilirubin and consequently transform it into compounds that the newborn can excrete via urine and stools. Bilirubin absorbs light most strongly in the blue region of the visible spectrum, between 460 and 490 nm. Therefore, light therapy technologies that utilize these blue wavelengths are the most successful at isomerizing bilirubin. == Techniques == === Photodynamic therapy === Photodynamic therapy (PDT) is a form of phototherapy using nontoxic light-sensitive compounds (photosensitizers) that are exposed selectively to light at a controlled wavelength, laser intensity, and irradiation time, whereupon they generate toxic reactive oxygen species (ROS) that target malignant and other diseased cells. Oxygen is thus required for activity, lowering efficacy in highly developed tumors and other hypoxic environments. Selective apoptosis of diseased cells is difficult to achieve due to the radical nature of ROS, but may be controlled for through the effects of membrane potential and other cell-type-specific properties on permeability, or through photoimmunotherapy. In developing any phototherapeutic agent, the phototoxicity of the treatment wavelength should be considered. ==== Photodynamic cancer therapy ==== Various cancer treatments utilizing PDT have been approved by the FDA. Treatments are available for actinic keratosis (blue light with aminolevulinic acid), cutaneous T-cell lymphoma, Barrett esophagus, basal cell skin cancer, esophageal cancer, non-small cell lung cancer, and squamous cell skin cancer (Stage 0).
Photosensitizing agents clinically approved or undergoing clinical trials for the treatment of cancers include Photofrin, Temoporfin, Motexafin lutetium, Palladium bacteriopheophorbide, Purlytin, and Talaporfin. Verteporfin is approved to treat eye conditions such as macular degeneration, myopia, and ocular histoplasmosis. Third-generation photosensitizers are currently in development, but none are yet approved for clinical trials. ==== Antimicrobial photodynamic therapy ==== PDT may also be utilized to treat multidrug-resistant skin, wound, or other superficial infections. This is known as antimicrobial photodynamic therapy (aPDT) or photodynamic inactivation (PDI). aPDT has been observed to be effective against both gram-positive and gram-negative bacteria such as Escherichia coli, Staphylococcus aureus, Pseudomonas aeruginosa, and Mycobacterium. aPDT has shown lowered efficacy against some other bacterial species, such as Klebsiella pneumoniae and Acinetobacter baumannii, likely due to factors such as cell wall thickness and membrane potential. Many studies utilizing aPDT focus on applying the photosensitizer through leakage from a hydrogel, which has been found to increase the wound-healing speed of skin infections through the upregulation of vascular endothelial growth factor (VEGF) and hypoxia-inducible factor (HIF). This controlled leakage allows prolonged but limited generation of ROS, lowering the impact on human cell viability from ROS cytotoxicity. Drug resistance to photosensitizers is unlikely to form, due to the nontoxic nature of the photosensitizer itself as well as the ROS-generating mechanism of action, which cannot be prevented outside of hypoxic environments. Certain dental infections (peri-implantitis, periodontitis) are more difficult to treat with PDT as opposed to photothermal therapy due to the requirement for oxygen, though a significant response is still observed. Increased antimicrobial activity and wound-healing speeds are typically observed when PDT is combined with photothermal therapy in photodynamic/photothermal combination therapy. === Photothermal Therapy === Photothermal therapy (PTT) is a form of phototherapy that uses non-toxic compounds called photothermal agents (PTA) that, when irradiated at a certain wavelength of light, convert the light energy directly to heat. The photothermal conversion efficiency determines the amount of light converted to heat, which can dictate the necessary irradiation time and/or laser intensity for treatments. Typically, PTT treatments use wavelengths in the near-infrared (NIR) spectrum, which can be further divided into the NIR-I (760-900 nm), NIR-II (900-1880 nm), and NIR-III (2080-2340 nm) windows. Wavelengths in these regions are typically less phototoxic than UV or high-energy visible light. In addition, NIR-II wavelengths have been observed to penetrate deeper than NIR-I wavelengths, allowing for the treatment of deeper wounds, infections, and cancers. Important considerations for the development of a PTA include photothermal conversion efficiency, phototoxicity, laser intensity, irradiation time, and the temperature at which human cell viability is impaired (around 46-60 °C). Currently, the only FDA-approved photothermal agent is indocyanine green, which is active against both tumor and bacterial cells.
PTT is less selective than photodynamic therapy (PDT, see above) due to its heat-based mechanism of action, but it is also less likely to promote drug resistance than most, if not all, currently developed treatments. Because PTT remains active in hypoxic environments and uses longer wavelengths of light, it can be applied to deeper wounds, infections, and tumors, and to more developed tumors, than PDT. Low-temperature PTT (≤ 45 °C) for the treatment of infections is also a possibility when combined with an antibiotic compound, because membrane permeability rises with temperature: the warmer environment allows the drug into the cell. This would reduce or eliminate the impact on human cell viability, and aiding antibiotic accumulation within the target cell may help restore the activity of antibiotics to which pathogens have developed resistance. PTT typically shows improved antimicrobial and wound-healing activity when combined with an additional mechanism of action, through PDT or added antibiotic compounds. === Light boxes === The production of the hormone melatonin, a sleep regulator, is inhibited by light and permitted by darkness, as registered by photosensitive ganglion cells in the retina. To some degree, the reverse is true for serotonin, which has been linked to mood disorders. Hence, for the purpose of manipulating melatonin levels or timing, light boxes providing very specific types of artificial illumination to the retina of the eye are effective. Light therapy uses either a light box which emits up to 10,000 lux of light at a specified distance, much brighter than a customary lamp, or a lower intensity of specific wavelengths of light from the blue (460 nm) to the green (525 nm) areas of the visible spectrum. A 1995 study showed that green light therapy at doses of 350 lux produces melatonin suppression and phase shifts equivalent to 10,000 lux white light therapy, but another study published in May 2010 suggests that the blue light often used for SAD treatment should perhaps be replaced by green or white illumination, because of a possible involvement of the cones in melatonin suppression. == Risks and complications == === Ultraviolet === Ultraviolet light causes progressive damage to human skin and erythema even from small doses. This is mediated by genetic damage, collagen damage, destruction of vitamin A and vitamin C in the skin, and free-radical generation. Ultraviolet light is also known to be a factor in the formation of cataracts. Ultraviolet radiation exposure is strongly linked to the incidence of skin cancer. === Visible light === Optical radiation of any kind with enough intensity can cause damage to the eyes and skin, including photoconjunctivitis and photokeratitis. Researchers have questioned whether limiting blue light exposure could reduce the risk of age-related macular degeneration. According to the American Academy of Ophthalmology, there is no scientific evidence showing that exposure to blue-light-emitting devices results in eye damage. According to Harriet Hall, blue light exposure is reported to suppress the production of melatonin, which affects the body's circadian rhythm and can decrease sleep quality.
It is reported that, in reproductive-age females, bright light therapy may activate the production of reproductive hormones, such as luteinizing hormone, follicle-stimulating hormone, and estradiol. Modern phototherapy lamps used in the treatment of seasonal affective disorder and sleep disorders either filter out or do not emit ultraviolet light and are considered safe and effective for the intended purpose, as long as photosensitizing drugs are not being taken at the same time and in the absence of any existing eye conditions. Light therapy is a mood-altering treatment, and just as with drug treatments, there is a possibility of triggering a manic state from a depressive state, causing anxiety and other side effects. While these side effects are usually controllable, it is recommended that patients undertake light therapy under the supervision of an experienced clinician, rather than attempting to self-medicate. Contraindications to light therapy for seasonal affective disorder include conditions that might render the eyes more vulnerable to phototoxicity, tendency toward mania, photosensitive skin conditions, or use of a photosensitizing herb (such as St. John's wort) or medication. Patients with porphyria should avoid most forms of light therapy. Patients on certain drugs, such as methotrexate or chloroquine, should use caution with light therapy, as there is a chance that these drugs could cause porphyria. Side effects of light therapy for sleep phase disorders include jumpiness or jitteriness, headache, eye irritation, and nausea. Some non-depressive physical complaints, such as poor vision and skin rash or irritation, may improve with light therapy. == History == Many ancient cultures practiced various forms of heliotherapy, including the people of Ancient Greece, Ancient Egypt, and Ancient Rome. The Inca, Assyrian, and early Germanic peoples also worshipped the sun as a health-bringing deity. Indian medical literature dating to 1500 BCE describes a treatment combining herbs with natural sunlight to treat non-pigmented skin areas. Buddhist literature from about 200 CE and 10th-century Chinese documents make similar references. The Faroese physician Niels Finsen is believed to be the father of modern phototherapy. He developed the first artificial light source for this purpose. Finsen used short-wavelength light to treat lupus vulgaris, a skin infection caused by Mycobacterium tuberculosis. He thought that the beneficial effect was due to ultraviolet light killing the bacteria, but recent studies showed that his lens and filter system did not allow such short wavelengths to pass through, leading instead to the conclusion that light of approximately 400 nanometers generated reactive oxygen that would kill the bacteria. Finsen also used red light to treat smallpox lesions. He received the Nobel Prize in Physiology or Medicine in 1903. Scientific evidence for some of his treatments is lacking, and the later eradication of smallpox and development of antibiotics for tuberculosis rendered light therapy obsolete for these diseases. In the early 20th century, light therapy was promoted by Auguste Rollier and John Harvey Kellogg. In 1924, Caleb Saleeby founded The Sunlight League. From the late nineteenth century until the early 1930s, light therapy was considered an effective and mainstream medical therapy in the UK for conditions such as varicose ulcer, 'sickly children' and a wide range of other conditions.
Controlled trials by the medical scientist Dora Colebrook, supported by the Medical Research Council, indicated that light therapy was not effective for such a wide range of conditions. == Controversy == Red light therapy involves exposure to low levels of red light or near-infrared light, typically through lamps or masks. It is promoted for various skin-related benefits, including improved appearance and reduced signs of aging. However, there is currently insufficient scientific evidence to support many of these claims. There has been some indication that it may reduce inflammation associated with conditions such as acne or rosacea, but evidence supporting its anti-aging effects remains limited. Most existing research has focused on in-office treatments, while at-home devices are generally less powerful and precise, which may lead to inconsistent results. It is generally considered safe; however, if misused, red light therapy could cause eye or skin damage. == See also == Blood irradiation therapy Chromotherapy Crib A'Glow Free-running sleep Low level laser therapy Neuromodulation Neurostimulation Neurotechnology Photodynamic therapy Sun tanning UV-B lamps == References == == External links == Media related to Phototherapy at Wikimedia Commons Our Friend, the Sun: Images of Light Therapeutics from the Osler Library Collection, c. 1901–1944. Digital exhibition by the Osler Library of the History of Medicine, McGill University
Wikipedia/Phototherapy
Light harvesting materials harvest solar energy that can then be converted into chemical energy through photochemical processes. Synthetic light harvesting materials are inspired by photosynthetic biological systems such as the light harvesting complexes and pigments present in plants and some photosynthetic bacteria. The dynamic and efficient antenna complexes present in photosynthetic organisms have inspired the design of synthetic light harvesting materials that mimic light harvesting machinery in biological systems. Examples of synthetic light harvesting materials are dendrimers, porphyrin arrays and assemblies, organic gels, biosynthetic and synthetic peptides, organic-inorganic hybrid materials, and semiconductor materials (non-oxides, oxynitrides and oxysulfides). Synthetic and biosynthetic light harvesting materials have applications in photovoltaics, photocatalysis, and photopolymerization. == Photochemical Processes == === Organic Photovoltaic Cells === During photochemical processes employing donor and acceptor chromophores in organic solar cells, a photon is absorbed by the donor and an exciton is generated. The exciton diffuses to a donor/acceptor interface, or heterojunction, where an electron from the lowest unoccupied molecular orbital (LUMO) of the donor is transferred to the LUMO of the acceptor. This results in the formation of electron-hole pairs. When the photon is absorbed by the acceptor and the exciton reaches a heterojunction, an electron will then transfer from the HOMO of the donor to the HOMO of the acceptor. To ensure effective charge transfer, the continuous donor or acceptor domains must be smaller than the exciton diffusion length (< ~0.4 nm). === Light Harvesting Efficiency === The light harvesting efficiency of energy transfer in light harvesting materials can be enhanced either by decreasing the distance between the donor and acceptor or by designing a material that contains multiple antenna chromophores per acceptor (the antenna effect). Förster resonance energy transfer (FRET) efficiency corresponds to the light harvesting efficiency and is determined by the spectroscopic properties of the dyes/pigments or chromophores and the distances between the donor and acceptor; the limitations of FRET can be overcome by enhancing the antenna effect through modifying the stoichiometry of the electron donor, transmitter, and acceptor. == Photosynthetic biological systems == Photosynthetic biological systems utilize sunlight, an abundant and ubiquitous energy source, as metabolic fuel. The highest efficiency for the conversion of energy from the sun into biomass by plants is around 4.6% at 30 °C and 380 ppm of atmospheric CO2 for carbon fixation during photosynthesis. Natural light harvesting complexes have molecular machinery that makes possible the conversion of sunlight into chemical energy with almost 100% quantum efficiency. The ability of living organisms to harvest solar energy and achieve quantum efficiency near unity is the culmination of ~3.5 billion years of evolution. This efficiency is achieved in plants through a series of energy transfer steps carried out by pigment-protein complexes (e.g. Photosystem II). Pigment-protein complexes (PPCs) contain chromophore molecules, specifically chlorophylls and carotenoids, that are embedded in a protein matrix.
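Looking back at the organic photovoltaic constraint above, in which continuous donor or acceptor domains must be smaller than the exciton diffusion length, that length is commonly estimated as L_D = sqrt(D*tau) from the exciton diffusion coefficient D and lifetime tau. The sketch below is a back-of-the-envelope estimate with illustrative order-of-magnitude values, not measurements for any specific material.

```python
from math import sqrt

# Hedged sketch: exciton diffusion length L_D = sqrt(D * tau).
# D and tau are illustrative order-of-magnitude values for an organic
# semiconductor, not measured constants for any particular material.

D = 1e-7      # exciton diffusion coefficient, m^2/s (= 1e-3 cm^2/s)
tau = 0.5e-9  # exciton lifetime, s (sub-nanosecond)

L_D = sqrt(D * tau)                 # diffusion length in metres
print(f"L_D = {L_D * 1e9:.1f} nm")  # ~7 nm: sets the domain-size budget
```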
These pigment-protein complexes serve as antenna complexes that absorb sunlight; the harvested energy then travels hundreds of nanometers to the reaction center, where it powers the electron transfer chain essential to photosynthesis and the downstream metabolism of plants. In order for charge or energy transfer to occur in the multielectron redox processes of the electron transfer chain, charge separation must occur first, and it is induced by light harvesting. === Purple bacteria complexes === Purple bacteria, another group of photosynthetic organisms, also contain PPCs that are structurally different from the photosystems in plants but similar in terms of function. The exciton-transporting proteins found in purple bacteria such as Rhodospirillum photometricum or Rhodoblastus acidophilus are light harvesting complex 1 and light harvesting complex 2. Light harvesting complex 2 in the purple bacterium Rhodoblastus acidophilus is shown in Figure 2. The light harvesting complex in purple bacteria is multifunctional: at high light intensities, it typically switches into a quenched state through a conformational change of the PPC, and at low light intensities, it typically reverts to an unquenched state. These conformational changes occur in light harvesting complex 2 in order to manage the metabolic cost of protein synthesis in purple bacteria. === Complexes in green plants === Conformational changes of proteins in the PPCs of vascular or higher plants also occur on the basis of light intensity. At lower light intensities, for example on an overcast day, the sunlight absorbed by higher plants is converted to chemical energy for photosynthesis. When conditions allow for direct sunlight, the capacity of PPCs in higher plants to absorb and transfer energy exceeds the capacity of downstream metabolic or biochemical processes. During periods of high light intensity, plants and algae will enter a stage of non-photochemical quenching. == Design and characterization of synthetic materials == === Materials based on Porphyrins, Chlorophyll, and Carotenoids === Artificial light harvesting materials that serve as antennas are based on non-covalent supramolecular assemblies containing motifs inspired by the pigment molecules chlorophylls and carotenoids that are embedded in pigment-protein complexes in nature. The classes of pigments most commonly found in nature are the chlorophylls and bacteriochlorophylls; the synthetic analogs of these biological chromophore molecules are porphyrins, which are the most extensively used compounds in artificial light harvesting applications. The porphyrin moieties present in biological light harvesting complexes play a critical role in the efficient absorption of visible light; the energy harvested by the porphyrin-based molecules is then collected in the reaction center through an excitation energy transfer relay. The light-driven charge separation process occurs at the reaction center through the cooperation of two porphyrin derivatives. ==== Porphyrin and chlorophyll bioinspired materials ==== Supramolecular assemblies of synthetic porphyrin-based materials for light harvesting are commonly studied and utilized for electronic energy transfer. These supramolecular assemblies typically employ coordination and hydrogen bonding as an efficient means of tuning interactions and directionality between donor chromophores and acceptor fluorophores.
Zinc porphyrin is frequently coupled to free-base porphyrin in synthetic electronic energy transfer systems because of the well-separated absorption features of the two molecules. The zinc porphyrin serves as the donor and the free-base porphyrin as the acceptor, since the fluorescence of the zinc porphyrin overlaps with the absorption of the free-base porphyrin. Porphyrin arrays and oligomers have been combined with charge-separation molecules in order to emulate the charge-separation functions present in photosynthetic proteins, in addition to the light harvesting properties of biological light harvesting complexes. The charge-separation molecules usually combined with donor-chromophore zinc-metallated porphyrins are ferrocene, which serves as an electron donor, and fullerene, which serves as an electron acceptor. ==== Carotenoid bioinspired materials ==== Carotenoids are another class of pigment/dye molecules found in retinal photoreceptors and biological light harvesting systems (e.g. Photosystem I, Photosystem II, and Light Harvesting Complex II). When finely arranged with chlorophylls in biological photosynthetic systems, carotenoids effectively promote photoinduced charge separation and electron transfer. Carotenoids are highly conjugated and structurally very similar to polyacetylene oligomers. Naturally derived carotenoids have been combined with fullerene derivatives for photovoltaic applications. In photovoltaic devices, carotenoid molecules exhibit p-type semiconductor behavior, since their molecular structure is very similar to that of polyacetylene. Artificial dyad and triad systems in which carotenoids are covalently bound have been able to mimic the charge separation and light harvesting mechanisms present in phototrophic organisms. A carotenoid covalently bound to a porphyrin is a typical example of a carotenoid-containing dyad; the dyad can then be covalently bound to a fullerene to form a triad (Figure 3). The triad systems display electron transport that results in long-lasting charge-separated states. === Biomaterials === Natural light harvesting complexes contain proteins that combine through self-assembly with effective donor chromophores in order to promote light harvesting and energy transfer during photosynthesis; synthetic peptides can be designed to have optoelectronic properties that mimic this phenomenon in natural light harvesting complexes. Proteins in PPCs not only serve as a support for the arrangement of chromophores during light harvesting but also actively play a role in the photophysical dynamics of photosynthesis. Some biomimetic artificial light harvesting complexes have been designed with proteins and peptides that self-assemble in such a way that the chromophores in the complex are arranged for optimized light harvesting efficiency. Peptide self-assemblies and polypeptides modified with porphyrins have also been designed to have the dual function of charge separation and light harvesting. Other peptide donor and acceptor chromophore conjugates utilize the self-assembly of amyloid fibrils into a beta sheet, which arranges the chromophores in a way that is fine-tuned for efficient light harvesting. Synthetic peptides and proteins are one example of the biological materials utilized in artificial light harvesting systems; virus-templated assemblies and DNA origami have also been employed for light harvesting applications.
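A recurring theme in the donor-acceptor assemblies above is the steep distance dependence of Förster resonance energy transfer introduced under Light Harvesting Efficiency, E = 1/(1 + (r/R0)^6). The sketch below makes the point quantitatively; the Förster radius R0 used here is an illustrative assumption, not a value for any particular chromophore pair.

```python
# Hedged sketch: FRET efficiency versus donor-acceptor distance,
# E = 1 / (1 + (r / R0)**6). R0, the Foerster radius at which transfer
# is 50% efficient, is an illustrative assumption here.

def fret_efficiency(r_nm, r0_nm=5.0):
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

for r in (2.5, 5.0, 7.5, 10.0):
    print(f"r = {r:4.1f} nm -> E = {fret_efficiency(r):.3f}")
# Efficiency collapses from ~0.98 to ~0.02 as r grows from 0.5*R0 to
# 2*R0, which is why antenna architectures keep chromophores close.
```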
=== Organic gels and nanocrystals === Reversible molecular organic gel networks are held together by noncovalent interactions (e.g. hydrogen bonding, π-stacking, van der Waals interactions and donor–acceptor interactions). The gelator molecules can self-organize into one-dimensional arrays due to the directional nature of the intermolecular interactions, producing elongated fibrous structures that can serve as antenna molecules. The organic gels assemble in such a way that there is a proper arrangement of donor and acceptor chromophores, which is the principal requirement for efficient energy transfer. π-conjugated molecules are commonly used in organic gels since the behavior of these molecules depends on the orientation of the chromophores in the self-assemblies. Some examples of π-conjugated molecules employed in organic gels are oligo-p-phenylenevinylene, anthracene, pyrene and porphyrin derivatives. Organic and organometallic nanocrystals (NCs) are promising for light harvesting and energy applications because NCs can be solubilized, are capable of absorbing a large fraction of the solar spectrum, and have a tunable band gap due to quantum-confinement effects. Organic and organometallic crystals are commonly formed through noncovalent interactions, including hydrogen bonding, π–π stacking, and electrostatic interactions. Organic NCs can be composed of organic arrays that incorporate dye molecules such as boron dipyrromethene. Sun et al. developed two polymorphic organometallic nanocrystals, formed from platinum(II)-β-diketonate complexes, that demonstrated light harvesting and photoluminescent properties. Zeolite nanocrystals that allow for the supramolecular organization of organic dye molecules have also been designed for light harvesting. === Dendrimers === Since the late 1990s, much emphasis has been placed on the design of supramolecular species that can act as antenna molecules for artificial photosynthetic applications; many of these artificially designed antennas are dendrimers. Light harvesting dendritic molecular structures are designed to have a high abundance of light-collecting donor chromophores that transfer energy to an energy “sink” at the center of the dendrimer. An important consideration when designing dendrimers for light harvesting applications is that as the dendrimer generation increases, the number of terminal groups that serve as donor chromophores doubles; however, this results in an increased distance between the terminal groups and the energy-acceptor core, thereby decreasing energy transfer efficiency. Dendrimers can contain a large number of chromophoric groups, such as coumarin-based donor chromophores, in highly ordered arrays to enable effective energy transfer. The core (energy acceptor) of dendrimer molecules can be functionalized with porphyrins, fullerenes and metal complexes. Some reported dendrimer systems can achieve up to 99% energy transfer; one example of a dendrimer that achieves this efficiency has a perylene core and branches composed of coumarin units. === Nanocomposites === Nanomaterials with tunable band gaps can be combined into heterogeneous structures that self-assemble into stable abiotic structures with potential in artificial photosynthesis and bionic vision. The electronic and physical properties of graphene-based composites show promise for light energy conversion.
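As an aside on the quantum-confinement band-gap tunability noted for the nanocrystals above, a simple particle-in-a-sphere estimate shows how the confinement energy scales with crystal radius. The effective mass below is an illustrative assumption, and Coulomb corrections are ignored, so this is a scaling argument only.

```python
# Hedged sketch: size dependence of the confinement energy for a
# spherical nanocrystal, using the particle-in-a-sphere ground state
# dE = hbar^2 * pi^2 / (2 * m_eff * R^2). Coulomb terms are ignored and
# the effective mass is illustrative, so treat this as a scaling argument.

from math import pi

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # J per eV

m_eff = 0.2 * M_E        # illustrative effective mass

for R_nm in (1.0, 2.0, 5.0):
    R = R_nm * 1e-9
    dE = HBAR**2 * pi**2 / (2 * m_eff * R**2) / EV
    print(f"R = {R_nm} nm -> confinement shift ~ {dE:.2f} eV")
# The 1/R^2 scaling is what lets the absorption edge be tuned by size.
```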
One example of such a graphene-based composite employed negatively and positively charged graphene oxide multilayers; the layers stacked horizontally through electrostatic interactions, forming a horizontal heterostructure that was able to undergo light-to-ionic-energy conversion. Negatively charged graphene oxide can also be combined with positively charged polymer nanoparticles; the aggregation of polymers within polymer nanoparticles allows for a broader range of tunable responses to visible light compared to pristine polymers. The high extinction coefficients of the polymer aggregates allow for enhanced light harvesting as well as charge separation. The delocalization of the electrons of the polymer nanoparticles combined with the graphene allows for π–π* transitions, and the materials in the composite are energetically matched. === Organic and inorganic hybrids and inorganic nanomaterials === In organic-inorganic hybrid systems such as organic-inorganic hybrid perovskites and metal–organic frameworks (MOFs), the organic–inorganic interface is a critical parameter that controls the performance of light-harvesting devices. Lead-halide perovskite materials demonstrate exceptional photophysical properties and have optoelectronic applications. Halide perovskite materials more generally have high optical absorption and good charge transport, demonstrating their potential for photovoltaic applications and solar energy conversion. MOFs can be designed to have solar light harvesting properties through different synthetic strategies, such as using porphyrin-containing struts or metalloporphyrins as the primary organic building blocks. MOFs may also be functionalized through surface modification with quantum dots, or through the embedding of photosensitive ruthenium or osmium metal complexes into the MOF structure. Inorganic materials such as silicon nanostructures, inorganic oxide films (e.g. titanium oxide and indium oxide), and ultrathin two-dimensional inorganic materials (e.g. bismuth oxychloride, tin sulfide, and titanium sulfide nanosheets) have light harvesting and optoelectronic properties. Silicon is commonly used in solar cells, and in 1954 Bell Labs invented the first effective silicon solar cell, with an efficiency of 5%. The efficiency of the device invented by Bell Labs rapidly increased upon n-type and p-type doping and by 1961 reached 14.5%. Silicon is highly abundant and has high charge-carrier mobility and stability, allowing it to be widely used in photovoltaic and semiconductor applications. Currently, the most efficient single-junction device employing silicon has reached a solar conversion efficiency as high as 29.1%. Silicon nanostructures such as nanowires, nanocrystals, quantum dots, and porous nanoparticles have shown improvements over bulk or planar silicon due to enhanced charge separation and transfer, intrinsically higher specific volume, and surface curvature. Silicon nanostructures also allow for the quantum confinement effect, which can improve light absorption ranges and light-induced responses. Dye-sensitized solar cells frequently incorporate titanium dioxide as a principal component because it provides sensitizer adsorption as well as charge separation and electron transport characteristics. The dye molecules present in dye-sensitized solar cells, upon light harvesting, transfer excited electrons to the titanium dioxide, which then separates the charge.
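For the dye-sensitized cells just described, a useful figure of merit is the fraction of incident photons the dye layer actually absorbs, often written as LHE = 1 − 10^(−A) with the absorbance A given by the Beer–Lambert law. A minimal sketch follows, with illustrative dye parameters rather than data for any real dye.

```python
# Hedged sketch: light harvesting efficiency of a dye layer from the
# Beer-Lambert law, LHE = 1 - 10**(-A) with A = epsilon * c * l.
# epsilon, c and l below are illustrative values, not real dye data.

def lhe(epsilon_M_cm, conc_M, path_cm):
    absorbance = epsilon_M_cm * conc_M * path_cm
    return 1.0 - 10 ** (-absorbance)

# A high-extinction dye (epsilon ~ 5e4 M^-1 cm^-1) in a 10-micron film:
for c in (1e-3, 5e-3, 2e-2):
    print(f"c = {c:.0e} M -> LHE = {lhe(5e4, c, 1e-3):.2f}")
# Raising the effective chromophore concentration pushes LHE toward 1,
# one reason aggregates with high extinction coefficients help.
```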
Indium oxide sheets with oxygen vacancies have narrowed band gaps and enhanced charge-carrier properties that allow for efficient charge-carrier separation, making this material a potential candidate for light harvesting. Ultrathin bismuth oxychloride with oxygen vacancies also allows for enhanced light harvesting and charge separation properties. == Applications == === Photovoltaics === The field of organic photovoltaics in particular has developed rapidly since the late 1990s, and small solar cells have demonstrated power conversion efficiencies up to 13%. The abundance of solar power and the ability to leverage it for conversion to chemical energy via artificial photosynthesis could enable mass renewable energy sources. Understanding the fundamental processes of photosynthesis in biological systems is important to the development of solar renewable sources. Light-induced charge separation in photosynthetic organisms catalyzes the conversion of solar energy into chemical or metabolic energy, and this has inspired the design of synthetic light-harvesting materials that can be integrated into photovoltaic devices that generate electrical voltage and current upon absorption of photons. Excitonic networks are then formed for efficient energy transfer. Wide-ranging molecular and solid-state materials have applications in photovoltaics. In the design of photovoltaic devices, it is critical to take into account the effects of high pigment or chromophore concentration, the arrangement of chromophores, and the geometry of the antenna moieties embedded in light harvesting devices, in order to optimize power generation and maximize quantum efficiency. One common use of chromophores in solar cells is in dye-sensitized solar cells. The dynamic and responsive molecular machinery present in photosynthetic organisms, as well as the principles of self-assembly, have influenced the design of “smart” photovoltaic devices. === Photocatalysis === Semiconductive surfaces (e.g. metal oxides) functionalized with light harvesting materials (e.g. fullerenes, conductive polymers, porphyrin- and phthalocyanine-based systems, nanoparticles) can photocatalyze water oxidation or water dissociation in a photoanodic device. Solar energy conversion may be applied to photoelectrochemical water splitting. A majority of water-splitting systems employ inorganic semiconductor materials; however, organic semiconductor materials are gaining traction for this application. Oxynitrides and oxysulfides have also been designed for the photocatalysis of water splitting. === Photodynamic therapy === Photodynamic therapy is a medical treatment that employs photochemical processes, through the combination of light and a photosensitizer, to generate a cytotoxic effect in cancerous or diseased tissue. Examples of photosensitizers or light harvesting materials used to target cancer cells are semiconductor nanoparticles, ruthenium complexes, and nanocomplexes. Photosensitizers can generate singlet oxygen upon photoinduction, which plays an important role in photodynamic therapy; this capability has been displayed by titanium dioxide nanoparticles. == See also == Photosensitizer Photodynamic Therapy Photocatalysis Photoswitch == References ==
Wikipedia/Light_harvesting_materials
Cutaneous squamous-cell carcinoma (cSCC), also known as squamous-cell carcinoma of the skin or squamous-cell skin cancer, is one of the three principal types of skin cancer, alongside basal-cell carcinoma and melanoma. cSCC typically presents as a hard lump with a scaly surface, though it may also present as an ulcer. Onset and development often occur over several months. Compared to basal-cell carcinoma, cSCC is more likely to spread to distant areas. When confined to the epidermis, the outermost layer of the skin, the pre-invasive or in situ form of cSCC is termed Bowen's disease. The most significant risk factor for cSCC is extensive lifetime exposure to ultraviolet radiation from sunlight. Additional risk factors include prior scars, chronic wounds, actinic keratosis, lighter skin susceptible to sunburn, Bowen's disease, exposure to arsenic, radiation therapy, tobacco smoking, poor immune system function, previous basal-cell carcinoma, and HPV infection. The risk associated with UV radiation correlates with cumulative exposure rather than early-life exposure. Tanning beds have emerged as a significant source of UV radiation. Genetic predispositions, such as xeroderma pigmentosum and certain forms of epidermolysis bullosa, also increase susceptibility to cSCC. The condition originates from squamous cells located in the skin's upper layers. Diagnosis typically relies on skin examination and is confirmed through skin biopsy. Research, both in vivo and in vitro, indicates a crucial role for the upregulation of FGFR2, part of the fibroblast growth factor receptor immunoglobulin family, in cSCC cell progression. Mutations in the TPL2 gene lead to overexpression of FGFR2, which activates the mTORC1 and AKT pathways in primary and metastatic cSCC cell lines. Utilization of a "pan FGFR inhibitor" has been shown to reduce cell migration and proliferation in cSCC in vitro studies. Preventive measures against cSCC include minimizing exposure to ultraviolet radiation and the use of sunscreen. Surgical removal is the typical treatment method, employing simple excision for minor cases or Mohs surgery for more extensive instances. Other options include cryotherapy and radiation therapy. For cases with distant metastasis, chemotherapy or biologic therapy may be employed. As of 2015, approximately 2.2 million individuals globally were living with cSCC at any given time, constituting about 20% of all skin cancer cases. In the United States, approximately 12% of males and 7% of females are diagnosed with cSCC at some point in their lives. While the prognosis remains favorable in the absence of metastasis, upon distant spread the five-year survival rate is markedly reduced, to ~34%. In 2015, global deaths attributed to cSCC numbered around 52,000. The average age at diagnosis is approximately 66 years. Following successful treatment of an initial cSCC lesion, there is a substantial risk of developing subsequent lesions. == Signs and symptoms == SCC of the skin begins as a small nodule; as it enlarges, the center becomes necrotic and sloughs, and the nodule turns into an ulcer. These lesions generally develop from an actinic keratosis. Once keratinocytes begin to grow uncontrollably, they have the potential to become cancerous and produce cutaneous squamous-cell carcinoma.
The lesion caused by cSCC is often asymptomatic. It may take the form of a slow-growing ulcer or reddish skin plaque, with intermittent bleeding from the tumor, especially on the lip. The clinical appearance is highly variable. Usually the tumor presents as an ulcerated lesion with hard, raised edges; it may also take the form of a hard plaque or a papule, often with an opalescent quality and tiny blood vessels. The tumor can lie below the level of the surrounding skin, and eventually ulcerates and invades the underlying tissue. It commonly presents on sun-exposed areas (e.g. the back of the hand, scalp, lip, and superior surface of the pinna). On the lip, the tumor forms a small ulcer, which fails to heal and bleeds intermittently. There is often evidence of chronic skin photodamage, as in multiple actinic keratoses (solar keratoses). The tumor grows relatively slowly. === Spread === Unlike basal-cell carcinoma (BCC), squamous-cell carcinoma (SCC) has a higher risk of metastasis. The risk of metastasis is higher clinically in SCC arising in scars, on the lower lips, ears, or mucosa, and occurring in immunosuppressed and solid organ transplant patients. The risk of metastasis is also higher in SCCs that are > 2 cm in diameter, grow into the fat layer or along nerves, show lymphovascular invasion, have poorly differentiated cell architecture on histology, or have a thickness greater than 6 mm. == Causes == Cutaneous squamous-cell carcinoma is the second-most common cancer of the skin (after basal-cell carcinoma, but more common than melanoma). It usually occurs in areas exposed to the sun. Sunlight exposure and immunosuppression are risk factors for SCC of the skin, with chronic sun exposure being the strongest environmental risk factor. There is a risk of metastasis starting more than 10 years after the diagnosable appearance of squamous-cell carcinoma, but the risk is low, though much higher than with basal-cell carcinoma. Squamous-cell cancers of the lip and ears have high rates of local recurrence and distant metastasis. In a recent study, it has also been shown that the deletion or severe down-regulation of a gene titled Tpl2 (tumor progression locus 2) may be involved in the progression of normal keratinocytes into squamous-cell carcinoma. cSCC represents about 20% of the non-melanoma skin cancers; 80-90% of cSCCs with metastatic potential are located on the head and neck. Tobacco smoking also increases the risk for cutaneous squamous-cell carcinoma. The vast majority of cSCC cases are located on exposed skin and are often the result of ultraviolet exposure. cSCC usually occurs on portions of the body commonly exposed to the sun: the face, ears, neck, hands, or arms. The primary sign is a growing bump that may have a rough, scaly surface, and flat, reddish patches. cSCC carries a higher risk of metastasis than does basal-cell carcinoma and may spread to the regional lymph nodes. Erythroplasia of Queyrat (SCC in situ of the glans or prepuce in males, or the vulva in females) may be induced by human papillomavirus. It is reported to occur in the corneoscleral limbus. Erythroplasia of Queyrat may also occur on the anal mucosa or the oral mucosa. Genetically, cSCC tumors harbor high frequencies of NOTCH and p53 mutations, as well as less frequent alterations in the histone acetyltransferase EP300, the SWI/SNF chromatin remodeling complex subunit PBRM1, the DNA-repair deubiquitinase USP28, and the NF-κB signaling regulator CHUK.
A significant proportion of cSCC and its precursor lesions carry UV-induced p53 mutations. In fact, these mutations are present in up to 90% of cSCC cases. The detection of p53 mutations in precursor lesions indicates that this could be an early event in the development of squamous cell carcinoma. === Immunosuppression === People who have received solid organ transplants are at a significantly increased risk of developing squamous-cell carcinoma due to the use of chronic immunosuppressive medication. While the risk of developing all skin cancers increases with these medications, this effect is particularly severe for cSCC, with hazard ratios as high as 250 being reported, versus 40 for basal-cell carcinoma. The incidence of cSCC development increases with time posttransplant. Heart and lung transplant recipients are at the highest risk of developing cSCC, due to the more intensive immunosuppressive medications used. Cutaneous squamous-cell carcinoma in individuals on immunotherapy or with lymphoproliferative disorders (e.g. leukemia) tends to be much more aggressive, regardless of its location. The risk of cSCC, and of non-melanoma skin cancers generally, varies with the immunosuppressive drug regimen chosen. The risk is greatest with calcineurin inhibitors like cyclosporine and tacrolimus, and least with mTOR inhibitors, such as sirolimus and everolimus. The antimetabolites azathioprine and mycophenolic acid have an intermediate risk profile. == Diagnosis == Diagnosis is confirmed via skin biopsy of the tissue or tissues suspected to be affected by SCC. The pathological appearance of a squamous-cell cancer varies with the depth of the biopsy. For that reason, a biopsy extending from the surface down through the basilar epithelium to the subcutaneous tissue is necessary for correct diagnosis. A shave biopsy (see skin biopsy) might not acquire enough information for a diagnosis. An inadequate biopsy might be read as actinic keratosis with follicular involvement. A deeper biopsy down to the dermis or subcutaneous tissue might reveal the true cancer. An excision biopsy is ideal, but not practical in most cases. An incisional or punch biopsy is preferred. A shave biopsy is least ideal, especially if only the superficial portion is acquired. === Histological characteristics === Histopathologically, the epidermis in cSCC in situ (Bowen's disease) will show hyperkeratosis and parakeratosis. There will also be marked acanthosis with elongation and thickening of the rete ridges. These changes overlie keratinocytic cells, which are often highly atypical and may in fact have a more unusual appearance than invasive cSCC. The atypia spans the full thickness of the epidermis, with the keratinocytes demonstrating intense mitotic activity, pleomorphism, and greatly enlarged nuclei. They will also show a loss of maturity and polarity, giving the epidermis a disordered or "windblown" appearance. Two types of multinucleated cells may be seen: the first presents as a multinucleated giant cell, and the second appears as a dyskeratotic cell engulfed in the cytoplasm of a keratinocyte. Occasionally, cells of the upper epidermis will undergo vacuolization, demonstrating an abundant and strongly eosinophilic cytoplasm. A mild to moderate lymphohistiocytic infiltrate may be detected in the upper dermis. === In situ disease === Bowen's disease is essentially equivalent to, and used interchangeably with, cSCC in situ that has not invaded through the basement membrane.
Depending on the source, it is classified as precancerous or as cSCC in situ (technically cancerous but non-invasive). In cSCC in situ (Bowen's disease), atypical squamous cells proliferate through the whole thickness of the epidermis. The entire tumor is confined to the epidermis and does not invade into the dermis. The cells are often highly atypical under the microscope and may in fact look more unusual than the cells of some invasive squamous-cell carcinomas. Erythroplasia of Queyrat is a particular type of Bowen's disease that can arise on the glans or prepuce in males, and the vulva in females. It mainly occurs in uncircumcised males over the age of 40. === Invasive disease === In invasive cSCC, tumor cells infiltrate through the basement membrane. The infiltrate can be somewhat difficult to detect in the early stages of invasion; however, additional indicators such as full-thickness epidermal atypia and the involvement of hair follicles can be used to facilitate the diagnosis. Later stages of invasion are characterized by the formation of nests of atypical tumor cells in the dermis, often with a corresponding inflammatory infiltrate. === Degree of differentiation === == Prevention == Appropriate sun-protective clothing, use of broad-spectrum (UVA/UVB) sunscreen with at least SPF 50, and avoidance of intense sun exposure may prevent skin cancer. A 2016 review of sunscreen for preventing cutaneous squamous-cell carcinoma found insufficient evidence to demonstrate whether it was effective. == Management == Most cutaneous squamous-cell carcinomas are removed with surgery. A few selected cases are treated with topical medication. Surgical excision with a free margin of healthy tissue is a frequent treatment modality. Radiotherapy, given as external beam radiotherapy or as brachytherapy (internal radiotherapy), can also be used to treat cSCC. There is little evidence comparing the effectiveness of different treatments for non-metastatic cSCC. Cosibelimab (Unloxcyt) was approved for medical use in the United States in December 2024, for the treatment of adults with metastatic cutaneous squamous-cell carcinoma or locally advanced cutaneous squamous-cell carcinoma who are not candidates for curative surgery or curative radiation. Mohs surgery, considered the treatment of choice for squamous-cell carcinoma of the skin, is frequently utilized; physicians have also used the method for the treatment of squamous-cell carcinoma of the mouth, throat, and neck. A method equivalent to the CCPDMA standards can be utilized by a pathologist in the absence of a Mohs-trained physician. Radiation therapy is often used afterward in high-risk cancer or patient types. Radiotherapy can also be a standalone option for treating cSCC. As a non-invasive option, brachytherapy offers a painless possibility for treating areas that are difficult to operate on, such as the earlobes or genitals, though it is not limited to these. An example of this kind of therapy is the high-dose brachytherapy Rhenium-SCT, which makes use of the beta-emitting properties of rhenium-188. The radiation source is enclosed in a compound that is applied to a thin protective foil placed directly over the lesion. In this way, the radiation source can be applied to complex locations while minimizing radiation to healthy tissue. After removal of the cancer, closure of the skin for patients with a decreased amount of skin laxity involves a split-thickness skin graft. A donor site is chosen and enough skin is removed so that the donor site can heal on its own.
Only the epidermis and a partial thickness of dermis are taken from the donor site, which allows the donor site to heal. Skin can be harvested using either a mechanical dermatome or a Humby knife. Electrodessication and curettage (EDC) can be done on selected squamous-cell carcinomas of the skin. In areas where cSCC is known to be non-aggressive, and where the patient is not immunosuppressed, EDC can be performed with good to adequate cure rates. Treatment options for cSCC in situ (Bowen's disease) include photodynamic therapy with 5-aminolevulinic acid, cryotherapy, topical 5-fluorouracil or imiquimod, and excision. A meta-analysis showed evidence that PDT is more effective than cryotherapy and has better cosmetic outcomes. There is generally a lack of evidence comparing the effectiveness of all treatment options. High-risk squamous-cell carcinoma, defined as that occurring around the eye, ear, or nose, of large size, poorly differentiated, or rapidly growing, requires more aggressive, multidisciplinary management. Nodal spread is managed with surgical block dissection if there are palpable nodes or in cases of Marjolin's ulcers (though the benefit of prophylactic block lymph node dissection with Marjolin's ulcers is not proven) and with radiotherapy. Adjuvant therapy may be considered in those with high-risk cSCC even in the absence of evidence for local metastasis. Imiquimod (Aldara) has been used with success for squamous-cell carcinoma in situ of the skin and the penis, but the morbidity and discomfort of the treatment are severe. An advantage is the cosmetic result: after treatment, the skin resembles normal skin without the usual scarring and morbidity associated with standard excision. Imiquimod is not FDA-approved for any squamous-cell carcinoma. In general, squamous-cell carcinomas have a high risk of local recurrence, and up to 50% do recur. Frequent skin exams with a dermatologist are recommended after treatment. == Prognosis == The long-term outcome of squamous-cell carcinoma depends upon several factors: the sub-type of the carcinoma, available treatments, location and severity, and various patient health-related variables (accompanying diseases, age, etc.). Generally, the long-term outcome is positive, with a metastasis rate of 1.9-5.2% and a mortality rate of 1.5-3.4%. When it does metastasize, the most commonly involved organs are the lungs, brain, bone and other skin locations. In squamous-cell carcinoma occurring in immunosuppressed people (such as those with an organ transplant, human immunodeficiency virus infection, or chronic lymphocytic leukemia), the risk of developing cSCC and of metastasis is much higher than in the general population. One study found squamous-cell carcinoma of the penis had a much greater rate of mortality than some other forms of squamous-cell carcinoma, about 23%, although this relatively high mortality rate may be associated with possibly delayed diagnosis of the disease due to patients avoiding genital exams until the symptoms are debilitating, or refusal to submit to a possibly scarring operation upon the genitalia. == Epidemiology == The incidence of cutaneous squamous-cell carcinoma continues to rise around the world. This is theorized to be due to several factors, including an aging population, a greater incidence of immunocompromised individuals, and the increasing use of tanning beds. A recent study estimated that there were between 180,000 and 400,000 cases of cSCC in the United States in 2013.
Risk factors for cSCC vary with age, gender, race, geography, and genetics. The incidence of cSCC increases with age; those 75 years or older are at a 5-10 times increased risk of developing cSCC compared with those younger than 55 years old. Males are affected with cSCC at a ratio of 3:1 in comparison to females. Those who have light skin, red or blonde hair, and light-colored eyes are also at increased risk. Squamous-cell carcinoma of the skin can be found on all areas of the body but is most common on frequently sun-exposed areas, such as the face, legs and arms. Solid organ transplant recipients (heart, lung, liver, pancreas, among others) are also at a heightened risk of developing aggressive, high-risk cSCC. There are also a few rare congenital diseases that predispose to cutaneous malignancy. In certain geographic locations, exposure to arsenic in well water or from industrial sources may significantly increase the risk of cSCC. == Additional images == == See also == List of cutaneous conditions associated with increased risk of nonmelanoma skin cancer == References == == External links == DermNet NZ: Squamous cell carcinoma
Wikipedia/Bowen's_disease
Internal conversion is a transition from a higher to a lower electronic state in a molecule or atom. It is sometimes called "radiationless de-excitation", because no photons are emitted. It differs from intersystem crossing in that, while both are radiationless methods of de-excitation, the molecular spin state remains the same for internal conversion, whereas it changes for intersystem crossing. The energy of the electronically excited state is given off to vibrational modes of the molecule; the excitation energy is transformed into heat. == Examples == A classic example of this process is the fluorescence of quinine sulfate, which can be quenched by the use of various halide salts. The excited molecule can de-excite by increasing the thermal energy of the surrounding solvated ions. Several natural molecules undergo fast internal conversion. This ability to transform the excitation energy of a photon into heat can be a crucial property for photoprotection by molecules such as melanin. Fast internal conversion reduces the excited-state lifetime and thereby prevents bimolecular reactions. Bimolecular electron transfer always produces reactive chemical species, free radicals. Nucleic acids (specifically the single, free nucleotides, not those bound in a DNA/RNA strand) have an extremely short excited-state lifetime due to fast internal conversion. Both melanin and DNA have some of the fastest internal conversion rates. In applications that make use of bimolecular electron transfer, internal conversion is undesirable. For example, it is advantageous to have a long-lived excited state in Grätzel cells (dye-sensitized solar cells). == See also == Fluorescence spectroscopy Förster resonance energy transfer == References ==
Wikipedia/Internal_conversion_(chemistry)
Tomographic reconstruction is a type of multidimensional inverse problem in which the challenge is to yield an estimate of a specific system from a finite number of projections. The mathematical basis for tomographic imaging was laid down by Johann Radon. A notable example of its applications is reconstruction in computed tomography (CT), where cross-sectional images of patients are obtained in a non-invasive manner. Recent developments have seen the Radon transform and its inverse used for tasks related to realistic object insertion required for testing and evaluating computed tomography use in airport security. This article applies in general to reconstruction methods for all kinds of tomography, but some of the terms and physical descriptions refer directly to the reconstruction of X-ray computed tomography. == Introducing formula == The projection of an object, resulting from the tomographic measurement process at a given angle θ {\displaystyle \theta } , is made up of a set of line integrals (see Fig. 1). A set of many such projections under different angles organized in 2D is called a sinogram (see Fig. 3). In X-ray CT, the line integral represents the total attenuation of the beam of X-rays as it travels in a straight line through the object. As mentioned above, the resulting image is a 2D (or 3D) model of the attenuation coefficient. That is, we wish to find the image μ ( x , y ) {\displaystyle \mu (x,y)} . The simplest and easiest way to visualise the method of scanning is the system of parallel projection, as used in the first scanners. For this discussion we consider the data to be collected as a series of parallel rays, at position r {\displaystyle r} , across a projection at angle θ {\displaystyle \theta } . This is repeated for various angles. Attenuation occurs exponentially in tissue: I = I 0 exp ( − ∫ μ ( x , y ) d s ) {\displaystyle I=I_{0}\exp \left({-\int \mu (x,y)\,ds}\right)} where μ ( x , y ) {\displaystyle \mu (x,y)} is the attenuation coefficient as a function of position. Therefore, generally the total attenuation p {\displaystyle p} of a ray at position r {\displaystyle r} , on the projection at angle θ {\displaystyle \theta } , is given by the line integral: p θ ( r ) = ln ( I I 0 ) = − ∫ μ ( x , y ) d s {\displaystyle p_{\theta }(r)=\ln \left({\frac {I}{I_{0}}}\right)=-\int \mu (x,y)\,ds} Using the coordinate system of Figure 1, the value of r {\displaystyle r} onto which the point ( x , y ) {\displaystyle (x,y)} will be projected at angle θ {\displaystyle \theta } is given by: x cos θ + y sin θ = r {\displaystyle x\cos \theta +y\sin \theta =r\ } So the equation above can be rewritten as p θ ( r ) = ∫ − ∞ ∞ ∫ − ∞ ∞ f ( x , y ) δ ( x cos θ + y sin θ − r ) d x d y {\displaystyle p_{\theta }(r)=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }f(x,y)\delta (x\cos \theta +y\sin \theta -r)\,dx\,dy} where f ( x , y ) {\displaystyle f(x,y)} represents μ ( x , y ) {\displaystyle \mu (x,y)} and δ ( ) {\displaystyle \delta ()} is the Dirac delta function. This function is known as the Radon transform (or sinogram) of the 2D object.
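The Radon transform just defined is straightforward to compute numerically. The sketch below uses the scikit-image implementation on its standard test image; it assumes a reasonably recent scikit-image in which shepp_logan_phantom is available.

```python
# Hedged sketch: computing a sinogram (the Radon transform) numerically.
# Assumes a recent scikit-image; shepp_logan_phantom is its standard
# test object.

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon

image = shepp_logan_phantom()                  # 400x400 test object f(x, y)
theta = np.linspace(0.0, 180.0, 180, endpoint=False)  # projection angles

# Each column of the sinogram is one projection p_theta(r): the set of
# line integrals through the object at that angle.
sinogram = radon(image, theta=theta)
print(sinogram.shape)   # (detector positions r, number of angles)
```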
The Fourier transform of the projection can be written as P θ ( ω ) = ∫ − ∞ ∞ ∫ − ∞ ∞ f ( x , y ) exp [ − j ω ( x cos θ + y sin θ ) ] d x d y = F ( Ω 1 , Ω 2 ) {\displaystyle P_{\theta }(\omega )=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }f(x,y)\exp[-j\omega (x\cos \theta +y\sin \theta )]\,dx\,dy=F(\Omega _{1},\Omega _{2})} where Ω 1 = ω cos θ , Ω 2 = ω sin θ {\displaystyle \Omega _{1}=\omega \cos \theta ,\Omega _{2}=\omega \sin \theta } . P θ ( ω ) {\displaystyle P_{\theta }(\omega )} represents a slice of the 2D Fourier transform of f ( x , y ) {\displaystyle f(x,y)} at angle θ {\displaystyle \theta } . Using the inverse Fourier transform, the inverse Radon transform formula can be easily derived: f ( x , y ) = 1 2 π ∫ 0 π g θ ( x cos θ + y sin θ ) d θ {\displaystyle f(x,y)={\frac {1}{2\pi }}\int \limits _{0}^{\pi }g_{\theta }(x\cos \theta +y\sin \theta )d\theta } where g θ ( x cos θ + y sin θ ) {\displaystyle g_{\theta }(x\cos \theta +y\sin \theta )} is the derivative of the Hilbert transform of p θ ( r ) {\displaystyle p_{\theta }(r)} . In theory, the inverse Radon transformation would yield the original image. The projection-slice theorem tells us that if we had an infinite number of one-dimensional projections of an object taken at an infinite number of angles, we could perfectly reconstruct the original object, f ( x , y ) {\displaystyle f(x,y)} . However, there will only be a finite number of projections available in practice. Assuming f ( x , y ) {\displaystyle f(x,y)} has effective diameter d {\displaystyle d} and desired resolution is R s {\displaystyle R_{s}} , a rule of thumb for the number of projections needed for reconstruction is N > π d / R s {\displaystyle N>\pi d/R_{s}} . == Reconstruction algorithms == Practical reconstruction algorithms have been developed to implement the process of reconstruction of a three-dimensional object from its projections. These algorithms are designed largely based on the mathematics of the X-ray transform, statistical knowledge of the data acquisition process and geometry of the data imaging system. === Fourier-domain reconstruction algorithm === Reconstruction can be performed using interpolation. Assume N {\displaystyle N} projections of f ( x , y ) {\displaystyle f(x,y)} are generated at equally spaced angles, each sampled at the same rate. The discrete Fourier transform (DFT) on each projection yields sampling in the frequency domain. Combining all the frequency-sampled projections generates a polar raster in the frequency domain. The polar raster is sparse, so interpolation is used to fill the unknown DFT points, and reconstruction can be done through the inverse discrete Fourier transform. Reconstruction performance may be improved by designing methods that change the sparsity of the polar raster, improving the effectiveness of the interpolation. For instance, a concentric square raster in the frequency domain can be obtained by changing the angle between each projection as follows: θ ′ = R 0 max { | cos θ | , | sin θ | } {\displaystyle \theta '={\frac {R_{0}}{\max\{|\cos \theta |,|\sin \theta |\}}}} where R 0 {\displaystyle R_{0}} is the highest frequency to be evaluated. The concentric square raster improves computational efficiency by allowing all the interpolation positions to be on a rectangular DFT lattice. Furthermore, it reduces the interpolation error. However, the Fourier-transform algorithm has the disadvantage of producing inherently noisy output.
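The slice relation P_θ(ω) = F(ω cos θ, ω sin θ) above can be checked directly with discrete Fourier transforms. The sketch below verifies the θ = 0 case, where the projection onto the x-axis should match the k_y = 0 row of the 2D FFT of the image.

```python
# Hedged sketch: numerical check of the projection-slice theorem for
# theta = 0, using plain NumPy FFT conventions.

import numpy as np

rng = np.random.default_rng(0)
f = rng.random((64, 64))          # arbitrary test object f(x, y)

# Projection at theta = 0: integrate (sum) along y, leaving p(x).
projection = f.sum(axis=0)

# 1D FFT of the projection vs. the k_y = 0 row of the 2D FFT of f:
slice_from_projection = np.fft.fft(projection)
slice_from_image = np.fft.fft2(f)[0, :]

print(np.allclose(slice_from_projection, slice_from_image))  # True
```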
=== Back projection algorithm === In the practice of tomographic image reconstruction, often a stabilized and discretized version of the inverse Radon transform is used, known as the filtered back projection algorithm. With a sampled discrete system, the inverse Radon transform is f ( x , y ) = 1 2 π ∑ i = 0 N − 1 Δ θ i g θ i ( x cos θ i + y sin θ i ) {\displaystyle f(x,y)={\frac {1}{2\pi }}\sum _{i=0}^{N-1}\Delta \theta _{i}g_{\theta _{i}}(x\cos \theta _{i}+y\sin \theta _{i})} g θ ( t ) = p θ ( t ) ⋅ k ( t ) {\displaystyle g_{\theta }(t)=p_{\theta }(t)\cdot k(t)} where Δ θ {\displaystyle \Delta \theta } is the angular spacing between the projections and k ( t ) {\displaystyle k(t)} is a Radon kernel with frequency response | ω | {\displaystyle |\omega |} . The name back-projection comes from the fact that a one-dimensional projection needs to be filtered by a one-dimensional Radon kernel (back-projected) in order to obtain a two-dimensional signal. The filter used does not contain DC gain, so adding DC bias may be desirable. Reconstruction using back-projection allows better resolution than the interpolation method described above. However, it induces greater noise because the filter is prone to amplify high-frequency content. === Iterative reconstruction algorithm === The iterative algorithm is computationally intensive but it allows the inclusion of a priori information about the system f ( x , y ) {\displaystyle f(x,y)} . Let N {\displaystyle N} be the number of projections and D i {\displaystyle D_{i}} be the distortion operator for the i {\displaystyle i} th projection taken at an angle θ i {\displaystyle \theta _{i}} . { λ i } {\displaystyle \{\lambda _{i}\}} are a set of parameters to optimize the convergence of the iterations. f 0 ( x , y ) = ∑ i = 1 N λ i p θ i ( r ) {\displaystyle f_{0}(x,y)=\sum _{i=1}^{N}\lambda _{i}p_{\theta _{i}}(r)} f k ( x , y ) = f k − 1 ( x , y ) + ∑ i = 1 N λ i [ p θ i ( r ) − D i f k − 1 ( x , y ) ] {\displaystyle f_{k}(x,y)=f_{k-1}(x,y)+\sum _{i=1}^{N}\lambda _{i}[p_{\theta _{i}}(r)-D_{i}f_{k-1}(x,y)]} An alternative family of recursive tomographic reconstruction algorithms are the algebraic reconstruction techniques and iterative sparse asymptotic minimum variance. === Fan-beam reconstruction === Use of a noncollimated fan beam is common since a collimated beam of radiation is difficult to obtain. Fan beams will generate series of line integrals, not parallel to each other, as projections. The fan-beam system requires a 360-degree range of angles, which imposes mechanical constraints, but it allows faster signal acquisition time, which may be advantageous in certain settings such as in the field of medicine. Back projection follows a similar two-step procedure that yields reconstruction by computing weighted sums of back-projections obtained from filtered projections. === Deep learning reconstruction === Deep learning methods are now widely applied to image reconstruction and have achieved impressive results in various image reconstruction tasks, including low-dose denoising, sparse-view reconstruction, limited angle tomography and metal artifact reduction. An excellent overview can be found in the special issue of IEEE Transactions on Medical Imaging. One group of deep learning reconstruction algorithms applies post-processing neural networks to achieve image-to-image reconstruction, where input images are reconstructed by conventional reconstruction methods. Artifact reduction using the U-Net in limited angle tomography is such an example application.
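Before turning to the caveats of such data-driven methods below, the classical filtered back projection described above can be demonstrated end to end with scikit-image's iradon. This is a minimal sketch and assumes a scikit-image version whose iradon accepts the filter_name keyword.

```python
# Hedged sketch: filtered back projection with scikit-image's iradon.
# Assumes a scikit-image version whose iradon accepts filter_name.

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)

# Ramp (|omega|) filtering of each projection, then back projection:
reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")

error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {error:.4f}")
```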
Incorrect structures may occur, however, in images reconstructed by such completely data-driven post-processing methods, as displayed in the figure. Therefore, integration of known operators into the architecture design of neural networks appears beneficial, as described in the concept of precision learning. For example, direct image reconstruction from projection data can be learnt from the framework of filtered back-projection. Another example is to build neural networks by unrolling iterative reconstruction algorithms. Besides precision learning, using conventional reconstruction methods with a deep-learning reconstruction prior is an alternative approach to improving the image quality of deep learning reconstruction. == Tomographic reconstruction software == Tomographic systems have significant variability in their applications and geometries (locations of sources and detectors). This variability creates the need for very specific, tailored implementations of the processing and reconstruction algorithms. Thus, most CT manufacturers provide their own custom proprietary software. This is done not only to protect intellectual property, but may also be enforced by a government regulatory agency. Regardless, there are a number of general-purpose tomographic reconstruction software packages that have been developed over the last couple of decades, both commercial and open-source. Most of the commercial software packages that are available for purchase focus on processing data for benchtop cone-beam CT systems. A few of these software packages include Volume Graphics, InstaRecon, iTomography, Livermore Tomography Tools (LTT), and Cone Beam Software Tools (CST). Some noteworthy examples of open-source reconstruction software include: Reconstruction Toolkit (RTK), CONRAD, TomoPy, the ASTRA toolbox, PYRO-NN, ODL, TIGRE, and LEAP. == Gallery == Shown in the gallery is the complete process for tomography of a simple object and the subsequent tomographic reconstruction based on ART. == See also == Operation of computed tomography#Tomographic reconstruction Cone beam reconstruction Industrial computed tomography Industrial Tomography Systems plc == References == == Further reading == Avinash Kak & Malcolm Slaney (1988), Principles of Computerized Tomographic Imaging, IEEE Press, ISBN 0-87942-198-3. Bruyant, P.P. "Analytic and iterative reconstruction algorithms in SPECT" Journal of Nuclear Medicine 43(10):1343-1358, 2002 == External links == Slaney, A. C. Kak and Malcolm. "Principles of Computerized Tomographic Imaging". Slaney.org. Retrieved 7 September 2018. Insight ToolKit; open-source tomographic support software "TomoPy — TomoPy 1.1.3 documentation". Tomopy.readthedocs.org. Retrieved 7 September 2018. ASTRA (All Scales Tomographic Reconstruction Antwerp) toolbox; very flexible, fast open-source software for computed tomographic reconstruction NiftyRec; comprehensive open-source tomographic reconstruction software; Matlab and Python scriptable Open-source tomographic reconstruction and visualization tool "ITS plc - Electrical Process Tomography For Industrial Visualization". Itoms.com. Retrieved 7 September 2018.
Wikipedia/Reconstruction_algorithm
In physics and chemistry, photoemission orbital tomography (POT; sometimes called photoemission tomography) is a combined experimental and theoretical approach which was initially developed to reveal information about the spatial distribution of individual one-electron surface-state wave functions and later extended to study molecular orbitals. Experimentally, it uses angle-resolved photoemission spectroscopy (ARPES) to obtain constant binding energy photoemission angular distribution maps. In their pioneering work, Mugarza et al. in 2003 used a phase-retrieval method to obtain the wave function of electron surface states based on ARPES data acquired from stepped gold crystalline surfaces; they obtained the respective wave functions and, upon insertion into the Schrödinger equation, also the binding potential. More recently, photoemission maps, also known as tomograms (or momentum maps or k {\displaystyle k} -maps), have been shown to reveal information about the electron probability distribution in molecular orbitals. Theoretically, one rationalizes these tomograms as hemispherical cuts through the molecular orbital in momentum space. This interpretation relies on the assumption of a plane wave final state, i.e., the idea that the outgoing electron can be treated as a free electron, which can be further exploited to reconstruct real-space images of molecular orbitals on a sub-Ångström length scale in two or three dimensions. To date, POT has been applied to various organic molecules forming well-oriented monolayers on single crystal surfaces or to two-dimensional materials. == Theory == Within the framework of POT, the photo-excitation is treated as a single coherent process from an initial (molecular) orbital Ψ i {\displaystyle \Psi _{i}} to the final state Ψ f {\displaystyle \Psi _{f}} , which is referred to as the one-step-model of photoemission. The intensity distribution in the tomograms, I ( k x , k y ; E k i n ) {\displaystyle I(k_{x},k_{y};E_{\mathrm {kin} })} , is then given from Fermi's golden rule as I ( k x , k y ; E k i n ) ∝ | ⟨ Ψ f ( k x , k y ; E k i n ) | A → ⋅ p → | Ψ i ⟩ | 2 × δ ( E i + Φ + E k i n − ℏ ω ) . {\displaystyle I(k_{x},k_{y};E_{\mathrm {kin} })\propto \left|\langle \Psi _{f}(k_{x},k_{y};E_{\mathrm {kin} })|{\vec {A}}\cdot {\vec {p}}|\Psi _{i}\rangle \right|^{2}\times \delta \left(E_{i}+\Phi +E_{\mathrm {kin} }-\hbar \omega \right).} Here, k x {\displaystyle k_{x}} and k y {\displaystyle k_{y}} are the components of the emitted electron's wave vector parallel to the surface, which are related to the polar and azimuthal emission angles θ {\displaystyle \theta } and ϕ {\displaystyle \phi } defined in the figure as follows, k x = k sin ⁡ θ cos ⁡ ϕ {\displaystyle k_{x}=k\sin \theta \cos \phi } k y = k sin ⁡ θ sin ⁡ ϕ {\displaystyle k_{y}=k\sin \theta \sin \phi } where k {\displaystyle k} and E k i n = ℏ 2 k 2 2 m {\displaystyle E_{\mathrm {kin} }={\frac {\hbar ^{2}k^{2}}{2m}}} are the wave number and kinetic energy of the emitted electron, respectively, where ℏ {\displaystyle \hbar } is the reduced Planck constant and m {\displaystyle m} is the electron mass. The transition matrix element is given in the dipole approximation, where p → {\displaystyle {\vec {p}}} and A → {\displaystyle {\vec {A}}} , respectively, denote the momentum operator of the electron and the vector potential of the exciting electromagnetic wave.
In the independent electron approximation, the spectral function reduces to a delta function and ensures energy conservation, where Φ {\displaystyle \Phi } denotes the sample work function, E i {\displaystyle E_{i}} the binding energy of the initial state, and ℏ ω {\displaystyle \hbar \omega } the energy of the exciting photon. In POT, the evaluation of the transition matrix element is further simplified by approximating the final state by a plane wave. Then, the photocurrent I i {\displaystyle I_{i}} arising from one particular initial state i {\displaystyle i} is proportional to the Fourier transform Ψ ~ i ( k → ) = F { Ψ i ( r → ) } {\displaystyle {\tilde {\Psi }}_{i}({\vec {k}})={\mathcal {F}}\left\{\Psi _{i}({\vec {r}})\right\}} of the initial state wave function modulated by the weakly angle-dependent polarization factor A → ⋅ k → {\displaystyle {\vec {A}}\cdot {\vec {k}}} : I i ( k x , k y ) ∝ | A → ⋅ k → | 2 ⋅ | Ψ ~ i ( k x , k y ) | 2 with | k → | 2 = k x 2 + k y 2 + k z 2 = 2 m ℏ 2 E k i n {\displaystyle I_{i}(k_{x},k_{y})\propto \left|{\vec {A}}\cdot {\vec {k}}\right|^{2}\cdot \left|{\tilde {\Psi }}_{i}(k_{x},k_{y})\right|^{2}\quad {\textrm {with}}\quad |{\vec {k}}|^{2}=k_{x}^{2}+k_{y}^{2}+k_{z}^{2}={\frac {2m}{\hbar ^{2}}}E_{\mathrm {kin} }} As illustrated in the figure, the relationship between the real space orbital and its photoemission distribution can be represented by an Ewald's sphere-like construction. Thus, a one-to-one relation between the photocurrent and the molecular orbital density in reciprocal space can be established. Moreover, a reconstruction of molecular orbital densities in real space via an inverse Fourier transform combined with an iterative phase retrieval algorithm has also been demonstrated. == Experiment == The basic experimental requirements are a reasonably monoenergetic photon source (inert gas discharge lamps, synchrotron radiation or UV laser sources) and an angle-resolved photoelectron spectrometer. Ideally, a large angular distribution ( k {\displaystyle k} -area) should be collected. Much of the development of POT was carried out using a toroidal analyzer with p {\displaystyle p} -polarized synchrotron radiation. Here the spectrometer collects the emissions over a semicircle ( − 90 ∘ < θ < + 90 ∘ {\displaystyle -90^{\circ }<\theta <+90^{\circ }} ) in the plane of incidence and polarization, and the momentum maps are obtained by rotating the sample azimuth ( ϕ {\displaystyle \phi } ). A number of commercially available electron spectrometers have been shown to be suited to POT. These include large acceptance angle hemispherical analysers, spectrometers with photoemission electron microscopy (PEEM) lenses and time of flight (TOF) spectrometers. == Applications and future developments == POT has found many interesting applications, including the assignment of molecular orbital densities in momentum and real space, the deconvolution of spectra into individual orbital contributions beyond the limits of energy resolution, the extraction of detailed geometric information, and the identification of reaction products. Recently, the extension to the time domain has been demonstrated by combining time-resolved photoemission using high laser harmonics and a momentum microscope to measure the full momentum-space distribution of transiently excited electrons in organic molecules.
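To make the plane-wave relation above concrete, the following is a minimal sketch in Python/NumPy that builds a toy initial-state orbital on a 3D grid, takes its Fourier transform, and samples |Ψ̃_i(k_x, k_y)|² on the hemispherical cut fixed by the kinetic energy. The orbital, grid, and wave number are illustrative assumptions, and the weakly angle-dependent polarization factor |A·k|² is omitted for simplicity.

```python
import numpy as np

# --- model setup (illustrative values, atomic units) ---
n, L = 64, 20.0                       # grid points per axis, box size
x = np.linspace(-L / 2, L / 2, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# Toy initial-state orbital: a 2p_z-like function (not a real molecular orbital)
psi = Z * np.exp(-np.sqrt(X**2 + Y**2 + Z**2))

# 3D Fourier transform of the orbital and the matching momentum grid
psi_k = np.fft.fftshift(np.fft.fftn(psi))
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=L / n))

# Hemispherical cut: |k| is fixed by E_kin, so k_z = sqrt(k^2 - kx^2 - ky^2)
k_mag = 2.0                           # wave number set by E_kin = (hbar k)^2 / 2m
KX, KY = np.meshgrid(k, k, indexing="ij")
kz2 = k_mag**2 - KX**2 - KY**2
inside = kz2 > 0                      # emission only within the Ewald sphere

# Sample |FT{psi}|^2 at the grid plane nearest to k_z for each (kx, ky)
momentum_map = np.zeros((n, n))
iz = np.abs(k[None, None, :] - np.sqrt(np.where(inside, kz2, 0.0))[:, :, None]).argmin(axis=2)
ix, iy = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
momentum_map[inside] = np.abs(psi_k[ix, iy, iz])[inside] ** 2
```

For a p_z-like orbital this produces the characteristic ring-shaped momentum map; a quantitative treatment would restore the polarization factor and use a properly computed molecular orbital as input.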
The possibility to measure the spatial distribution of electrons in frontier molecular orbitals has stimulated discussions on the interpretation of the concept of orbitals itself. The present understanding is that the information retrieved from photoemission orbital tomography should be interpreted as Dyson orbitals. Approximating the photoelectron's final state by a plane wave has been viewed critically. Indeed, there are cases where the plane-wave final state approximation is problematic, including the proper description of the photon-energy dependence, the circular dichroism in the photoelectron angular distribution, and certain experimental geometries. Nevertheless, the usefulness of the plane wave final state approximation has been extended beyond the originally suggested case of π {\displaystyle \pi } -orbitals of large, planar π {\displaystyle \pi } -conjugated molecules to three-dimensional molecules, small organic molecules, and two-dimensional materials. Theoretical approaches beyond the plane wave final state approximation have also been demonstrated, including time-dependent density functional theory calculations and Green's function techniques. == References ==
Wikipedia/Photoemission_orbital_tomography
Ultrasound computer tomography (USCT), sometimes also called ultrasound computed tomography, ultrasound computerized tomography or simply ultrasound tomography, is a form of medical ultrasound tomography utilizing ultrasound waves as the physical phenomenon for imaging. It is mostly used for soft-tissue medical imaging, especially breast imaging. == Description == Ultrasound computer tomographs use ultrasound waves to create images. In the first measurement step, a defined ultrasound wave is generated, typically with piezoelectric ultrasound transducers, transmitted in the direction of the measurement object, and received with other or the same ultrasound transducers. While traversing and interacting with the object, the ultrasound wave is changed by the object and thereafter carries information about it. After the modulated waves have been recorded, this information can be extracted and used to create an image of the object in a second step. Unlike X-rays and other modalities that typically provide only a single type of information, ultrasound provides several kinds of information about the object for imaging: the attenuation the wave's sound pressure experiences indicates the object's attenuation coefficient, the time-of-flight of the wave gives speed-of-sound information, and the scattered wave indicates the echogenicity of the object (e.g. refraction index, surface morphology, etc.). Unlike conventional ultrasound sonography, which uses phased array technology for beamforming, most USCT systems utilize unfocused spherical waves for imaging. Most USCT systems aim for 3D imaging, either by synthesizing ("stacking") 2D images or by full 3D aperture setups. Another aim is quantitative imaging instead of only qualitative imaging. The idea of ultrasound computer tomography goes back to the 1950s with analogue compounding setups; in the mid-1970s the first "computed" USCT systems were built, utilizing digital technology. The "computer" in the USCT concept indicates the heavy reliance on computationally intensive, advanced digital signal processing, image reconstruction and image processing algorithms for imaging. The successful realization of USCT systems in recent decades was made possible by the continuously growing availability of computing power and data bandwidth provided by the digital revolution. == Setup == USCT systems designed for medical imaging of soft tissue typically aim for resolution on the order of centimeters to millimeters and therefore require ultrasound waves with frequencies on the order of megahertz. This typically requires water as a low-attenuation transmission medium between the ultrasound transducers and the object to retain suitable sound pressures. USCT systems share with tomography in general the fundamental architectural feature that the aperture (the active imaging elements) surrounds the object. For the distribution of ultrasound transducers around the measurement object, forming the aperture, multiple design approaches exist: there are mono-, bi- and multistatic transducer configurations. Common are 1D or 2D linear arrays of ultrasound transducers acting as emitters on one side of the object; on the opposing side, a similar array acting as receiver is placed, forming a parallel setup. Such setups are sometimes given the additional ability to move, in order to gather more information from additional angles.
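For such a parallel setup, the following is a minimal sketch in Python/NumPy of the transmission time-of-flight principle described above: under a straight-ray assumption, the measured time of flight between an emitter and the opposing receiver is the line integral of the slowness 1/c(x, y). The grid, sound-speed values, and pixel spacing are illustrative assumptions.

```python
import numpy as np

def time_of_flight(speed_map, emitter, receiver, n_steps=200):
    """Straight-ray time of flight through a 2D sound-speed map (m/s).

    Integrates the slowness 1/c along the emitter-receiver line; the grid
    spacing is assumed to be 1 mm per pixel (illustrative).
    """
    pixel = 1e-3                                   # meters per pixel
    ts = np.linspace(0.0, 1.0, n_steps)
    points = emitter[None, :] + ts[:, None] * (receiver - emitter)[None, :]
    ix = np.clip(points[:, 0].round().astype(int), 0, speed_map.shape[0] - 1)
    iy = np.clip(points[:, 1].round().astype(int), 0, speed_map.shape[1] - 1)
    path_len = np.linalg.norm(receiver - emitter) * pixel
    ds = path_len / n_steps
    return np.sum(ds / speed_map[ix, iy])          # seconds

# Water background (about 1480 m/s) with a faster inclusion (1550 m/s)
c = np.full((100, 100), 1480.0)
c[40:60, 40:60] = 1550.0
tof = time_of_flight(c, np.array([0.0, 50.0]), np.array([99.0, 50.0]))
mean_speed = 99 * 1e-3 / tof   # average sound speed along this ray
```

Repeating such measurements for many emitter-receiver pairs yields the set of slowness line integrals that the tomographic reconstruction methods described below can invert into a speed-of-sound image.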
While cost-efficient to build, such a parallel transmitter-receiver setup has the main disadvantage of a limited ability (or inability) to gather reflectivity information, as the aperture captures only transmission information. Another aperture approach is a ring of transducers, sometimes with the degree of freedom of motorized lifting for gathering additional information over the height for 3D imaging ("stacking"). Full 3D setups, with no inherent need for aperture movement, exist in the form of apertures formed by semi-spherically distributed transducers. While the most expensive setups, they offer the advantage of nearly uniform data gathered from many directions. They are also fast in data acquisition, as they require no time-consuming mechanical movements. == Imaging methods and algorithms == Tomographic reconstruction methods used in USCT systems for transmission-based imaging include the classical inverse Radon transform, the Fourier slice theorem, and derived algorithms (cone beam, etc.). As advanced alternatives, ART-based approaches are also utilized. For high-resolution, speckle-noise-reduced reflectivity imaging, synthetic aperture focusing techniques (SAFT), similar to radar's SAR and sonar's SAS, are widely used. Iterative wave-equation inversion approaches, imaging methods originating in seismology, are under academic research, but their use in real-world applications remains a challenge due to the enormous computational and memory burden. == Application and usage == Many USCT systems are designed for soft-tissue imaging and for breast cancer diagnosis specifically. As an ultrasound-based method with low sound pressures, USCT is a harmless and risk-free imaging method, suitable for periodic screening. As USCT setups are fixed or motor-driven and operate without direct contact with the breast, image reproduction is easier than with conventional, manually guided methods (e.g. breast ultrasound), which rely on the individual examiner's performance and experience. In comparison with conventional screening methods like mammography, USCT systems potentially offer increased specificity for breast cancer detection, as multiple breast-cancer-characteristic properties are imaged at the same time: speed of sound, attenuation and morphology. == See also == Medical ultrasound Tomography Ultrasound transmission tomography Ultrasound-modulated optical tomography == References ==
Wikipedia/Ultrasound_computer_tomography
Projectional radiography, also known as conventional radiography, is a form of radiography and medical imaging that produces two-dimensional images by X-ray radiation. The image acquisition is generally performed by radiographers, and the images are often examined by radiologists. Both the procedure and any resultant images are often simply called 'X-ray'. Plain radiography or roentgenography generally refers to projectional radiography (without the use of more advanced techniques such as computed tomography, which can generate 3D images). Plain radiography can also refer to radiography without a radiocontrast agent, or radiography that generates single static images, as contrasted to fluoroscopy, which is technically also projectional. == Equipment == === X-ray generator === Projectional radiographs generally use X-rays created by X-ray generators, which generate X-rays from X-ray tubes. === Grid === An anti-scatter grid may be placed between the patient and the detector to reduce the quantity of scattered x-rays that reach the detector. This improves the contrast resolution of the image, but also increases radiation exposure for the patient. === Detector === Detectors can be divided into two major categories: imaging detectors (such as photographic plates and X-ray film (photographic film), now mostly replaced by various digitizing devices like image plates or flat panel detectors) and dose measurement devices (such as ionization chambers, Geiger counters, and dosimeters used to measure the local radiation exposure, dose, and/or dose rate, for example, for verifying that radiation protection equipment and procedures are effective on an ongoing basis). === Shielding === Lead is the main material used by radiography personnel for shielding against scattered X-rays. == Image properties == Projectional radiography relies on the characteristics of X-ray radiation (quantity and quality of the beam) and knowledge of how it interacts with human tissue to create diagnostic images. X-rays are a form of ionizing radiation, meaning they have sufficient energy to potentially remove electrons from an atom, thus giving it a charge and making it an ion. === X-ray attenuation === When an exposure is made, X-ray radiation exits the tube as what is known as the primary beam. When the primary beam passes through the body, some of the radiation is absorbed in a process known as attenuation. Anatomy that is denser has a higher rate of attenuation than anatomy that is less dense, so bone will absorb more X-rays than soft tissue. What remains of the primary beam after attenuation is known as the remnant beam. The remnant beam is responsible for exposing the image receptor. Areas on the image receptor that receive the most radiation (portions of the remnant beam experiencing the least attenuation) will be more heavily exposed, and therefore will be processed as being darker. Conversely, areas on the image receptor that receive the least radiation (portions of the remnant beam experiencing the most attenuation) will be less exposed and will be processed as being lighter. This is why bone, which is very dense, is processed as 'white' on radiographs, while the lungs, which contain mostly air and are the least dense, show up as 'black'. === Density === Radiographic density is the measure of overall darkening of the image. Density is a logarithmic unit that describes the ratio between light hitting the film and light being transmitted through the film (that is, the base-10 logarithm of incident over transmitted intensity).
A higher radiographic density represents more opaque areas of the film, and lower density more transparent areas of the film. With digital imaging, however, density may be referred to as brightness. The brightness of the radiograph in digital imaging is determined by computer software and the monitor on which the image is being viewed. === Contrast === Contrast is defined as the difference in radiographic density between adjacent portions of the image, that is, the range between black and white on the final radiograph. High contrast, or short-scale contrast, means there is little gray on the radiograph, and there are fewer gray shades between black and white. Low contrast, or long-scale contrast, means there is much gray on the radiograph, and there are many gray shades between black and white. Closely related to radiographic contrast is the concept of exposure latitude. Exposure latitude is the range of exposures over which the recording medium (image receptor) will respond with a diagnostically useful density; in other words, this is the "flexibility" or "leeway" that a radiographer has when setting exposure factors. Images having a short scale of contrast will have narrow exposure latitude. Images having long-scale contrast will have a wide exposure latitude; that is, the radiographer will be able to utilize a broader range of technical factors to produce a diagnostic-quality image. Contrast is determined by the kilovoltage (kV; energy/quality/penetrability) of the X-ray beam and the tissue composition of the body part being radiographed. Selection of look-up tables (LUT) in digital imaging also affects contrast. Generally speaking, high contrast is necessary for body parts in which bony anatomy is of clinical interest (extremities, bony thorax, etc.). When soft tissue is of interest (e.g. abdomen or chest), lower contrast is preferable in order to accurately demonstrate all of the soft tissue tones in these areas. === Geometric magnification === Geometric magnification results from the detector being farther away from the X-ray source than the object. In this regard, the source-detector distance or SDD is a measurement of the distance between the generator and the detector. Alternative names are source/focus to detector/image-receptor/film (the latter used when using X-ray film) distance (SID, FID or FRD). The estimated radiographic magnification factor (ERMF) is the ratio of the source-detector distance (SDD) over the source-object distance (SOD). The size of the object is given as: S i z e o b j e c t = S i z e p r o j e c t i o n E R M F {\displaystyle Size_{object}={\frac {Size_{projection}}{ERMF}}} , where Sizeprojection is the size of the projection that the object forms on the detector. On lumbar and chest radiographs, it is anticipated that the ERMF is between 1.05 and 1.40. Because of the uncertainty of the true size of objects seen on projectional radiography, their sizes are often compared to other structures within the body, such as dimensions of the vertebrae, or empirically by clinical experience. The source-detector distance (SDD) is roughly related to the source-object distance (SOD) and the object-detector distance (ODD) by the equation SOD + ODD = SDD. === Geometric unsharpness === Geometric unsharpness is caused by the X-ray generator not creating X-rays from a single point but rather from an area, whose extent is measured as the focal spot size. Geometric unsharpness increases proportionally to the focal spot size, as well as to the estimated radiographic magnification factor (ERMF).
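As a short worked example of the magnification relations above (all numbers here are illustrative, not clinical values):

```python
# Geometric magnification: ERMF = SDD / SOD, size_object = size_projection / ERMF
sdd = 180.0            # source-detector distance in cm (illustrative)
odd = 20.0             # object-detector distance in cm
sod = sdd - odd        # SOD + ODD = SDD  ->  SOD = 160 cm

ermf = sdd / sod       # 180 / 160 = 1.125, within the 1.05-1.40 range quoted above
size_projection = 5.4  # size measured on the detector, in cm

size_object = size_projection / ermf   # 5.4 / 1.125 = 4.8 cm estimated true size
print(f"ERMF = {ermf:.3f}, estimated object size = {size_object:.1f} cm")
```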
=== Geometric distortion === Organs will have different relative distances to the detector depending on which direction the X-rays come from. For example, chest radiographs are preferably taken with X-rays coming from behind (called a "posteroanterior" or "PA" radiograph). However, if the patient cannot stand, the radiograph often needs to be taken with the patient lying in a supine position (called a "bedside" radiograph) with the X-rays coming from above ("anteroposterior" or "AP"), and geometric magnification will then cause, for example, the heart to appear larger than it actually is because it is farther away from the detector. === Scatter === In addition to using an anti-scatter grid, increasing the ODD alone can improve image contrast by decreasing the amount of scattered radiation that reaches the receptor. However, this needs to be weighed against increased geometric unsharpness if the SDD is not also proportionally increased. == Imaging variations by target tissue == Projection radiography uses X-rays in different amounts and strengths depending on what body part is being imaged: Hard tissues such as bone require a relatively high-energy photon source, and typically a tungsten anode is used with a high voltage (50-150 kVp) on a 3-phase or high-frequency machine to generate bremsstrahlung or braking radiation. Bony tissue and metals are denser than the surrounding tissue, and thus, by absorbing more of the X-ray photons, they prevent the film from being exposed as much. Wherever dense tissue absorbs or stops the X-rays, the resulting X-ray film is unexposed and appears translucent blue, whereas the black parts of the film represent lower-density tissues such as fat, skin, and internal organs, which could not stop the X-rays. This is usually used to see bony fractures and foreign objects (such as ingested coins), and for finding bony pathology such as osteoarthritis, infection (osteomyelitis), cancer (osteosarcoma), as well as for growth studies (leg length, achondroplasia, scoliosis, etc.). Soft tissues are seen with the same machine as hard tissues, but a "softer" or less-penetrating X-ray beam is used. Tissues commonly imaged include the lungs and heart shadow in a chest X-ray, the air pattern of the bowel in abdominal X-rays, the soft tissues of the neck, and the orbits by a skull X-ray before an MRI to check for radiopaque foreign bodies (especially metal); the soft tissue shadows in X-rays of bony injuries are also examined by the radiologist for signs of hidden trauma (for example, the famous "fat pad" sign on a fractured elbow). == Projectional radiography terminology == NOTE: The simplified word 'view' is often used to describe a radiographic projection. AP - Antero-Posterior PA - Postero-Anterior DP - Dorsal-Plantar Lateral - Projection taken with the central ray perpendicular to the midsagittal plane Oblique - Projection taken with the central ray at an angle to any of the body planes. Described by the angle of obliquity and the portion of the body the X-ray beam exits; right or left and posterior or anterior. For example, a 45 degree Right Anterior Oblique of the Cervical Spine.
Flexion - Joint is radiographed while in flexion Extension - Joint is radiographed while in extension Stress Views - Typically taken of joints with external force applied in a direction that is different from the main movement of the joint. A test of stability. Weight-bearing - Generally with the subject standing up HBL, HRL, HCR or CTL - Horizontal Beam Lateral, Horizontal Ray Lateral, Horizontal Central Ray, or Cross Table Lateral. Used to obtain a lateral projection, usually when patients are unable to move. Prone - Patient lies on their front Supine - Patient lies on their back Decubitus - Patient lying down. Further described by the downside body surface: dorsal (backside down), ventral (frontside down), or lateral (left or right side down). OM - occipito-mental, an imaginary positioning line extending from the menti (chin) to the occiput (particularly the external occipital protuberance) Cranial or Cephalad - Tube angulation towards the head Caudal - Tube angulation towards the feet == By target organ or structure == === Breasts === Projectional radiography of the breasts is called mammography. This has been used mostly on women to screen for breast cancer, but is also used to view male breasts, and used in conjunction with a radiologist or a surgeon to localise suspicious tissues before a biopsy or a lumpectomy. Breast implants designed to enlarge the breasts reduce the viewing ability of mammography, and require more time for imaging as more views need to be taken. This is because the material used in the implant is very dense compared to breast tissue, and looks white (clear) on the film. The radiation used for mammography tends to be softer (has a lower photon energy) than that used for the harder tissues. Often a tube with a molybdenum anode is used with about 30 000 volts (30 kV), giving a range of X-ray energies of about 15-30 keV. Many of these photons are "characteristic radiation" of a specific energy determined by the atomic structure of the target material (Mo-K radiation). === Chest === Chest radiographs are used to diagnose many conditions involving the chest wall, including its bones, and also structures contained within the thoracic cavity including the lungs, heart, and great vessels. Conditions commonly identified by chest radiography include pneumonia, pneumothorax, interstitial lung disease, heart failure, bone fracture and hiatal hernia. Typically an erect postero-anterior (PA) projection is preferred. Chest radiographs are also used to screen for job-related lung disease in industries such as mining, where workers are exposed to dust. For some conditions of the chest, radiography is good for screening but poor for diagnosis. When a condition is suspected based on chest radiography, additional imaging of the chest can be obtained to definitively diagnose the condition or to provide evidence in favor of the diagnosis suggested by initial chest radiography. Unless a fractured rib is suspected of being displaced, and therefore likely to cause damage to the lungs and other tissue structures, an X-ray of the chest is not necessary as it will not alter patient management. === Abdomen === In children, abdominal radiography is indicated in the acute setting in suspected bowel obstruction, gastrointestinal perforation, foreign body in the alimentary tract, suspected abdominal mass and intussusception (the latter as part of the differential diagnosis). Yet CT is the best alternative for diagnosing intra-abdominal injury in children.
For acute abdominal pain in adults, an abdominal X-ray has low sensitivity and accuracy in general. Computed tomography provides an overall better basis for surgical strategy planning, and possibly fewer unnecessary laparotomies. Abdominal X-ray is therefore not recommended for adults presenting in the emergency department with acute abdominal pain. The standard abdominal X-ray protocol is usually a single anteroposterior projection in supine position. A Kidneys, Ureters, and Bladder projection (KUB) is an anteroposterior abdominal projection that covers the levels of the urinary system, but does not necessarily include the diaphragm. === Axial skeleton === ==== Head ==== Cerebral angiography allows visualization of blood vessels in and around the brain; a contrast agent is injected prior to the radiographs of the head. Orbital radiography images both left and right eye sockets, generally including the frontal and maxillary sinuses. Dental radiography uses a small radiation dose with high penetration to view teeth, which are relatively dense. A dentist may examine a painful tooth and gum using X-ray equipment. The machines used are typically single-phase pulsating DC, the oldest and simplest sort. Dental technicians or the dentist may run these machines; radiographers are not required by law to be present. A derivative technique from projectional radiography used in dental radiography is orthopantomography. This is a panoramic imaging technique of the upper and lower jaw using focal plane tomography, where the X-ray generator and X-ray detector are simultaneously moved so as to keep a consistent exposure of only the plane of interest during image acquisition. Sinus - The standard protocol in the UK is OM with open mouth. Facial Bones - The standard protocol in the UK is OM and OM 30°. In case of trauma, the standard UK protocol is to have a CT scan of the skull instead of projectional radiography. A skeletal survey including the skull can be indicated in, for example, multiple myeloma. ==== Other axial skeleton ==== The spine (that is, the vertebral column). A projectional radiograph of the spine confers an effective dose of approximately 1.5 mSv, comparable to a background radiation equivalent time of 6 months. Cervical spine: The standard projections in the UK are AP and Lateral. Peg projection with trauma only. Obliques and Flexion and Extension on special request. In the US, five or six projections are common; a Lateral, two 45 degree obliques, an AP axial (Cephalad), an AP "Open Mouth" for C1-C2, and a Cervicothoracic Lateral (Swimmer's) to better visualize C7-T1 if necessary. Special projections include a Lateral with Flexion and Extension of the cervical spine, an Axial for C1-C2 (Fuchs or Judd method), and an AP Axial (Caudad) for articular pillars. Thoracic Spine - AP and Lateral in the UK. In the US, an AP and Lateral are basic projections. Obliques 20 degrees from lateral may be ordered to better visualize the zygapophysial joint. Lumbar Spine - AP and Lateral +/- L5/S1 view in the UK, with obliques and Flexion and Extension requests being rare. In the US, basic projections include an AP, two Obliques, a Lateral, and a Lateral L5-S1 spot to better visualize the L5-S1 interspace. Special projections are AP Right and Left bending, and Laterals with Flexion and Extension. Pelvis - AP only in the UK, with SIJ projections (prone) on special request.
Sacrum and Coccyx: In the US, if both bones are to be examined, separate cephalad and caudad AP axial projections are obtained for the sacrum and coccyx respectively, as well as a single Lateral of both bones. Ribs: In the US, common rib projections are based on the location of the area of interest. These are obtained with shorter wavelengths/higher frequencies/higher levels of radiation than a standard CXR. Anterior area of interest - a PA chest X-ray, a PA projection of the ribs, and a 45 degree Anterior Oblique with the non-interest side closest to the image receptor. Posterior area of interest - a PA chest X-ray, an AP projection of the ribs, and a 45 degree Posterior Oblique with the side of interest closest to the image receptor. Sternum: The standard projections in the UK are PA chest and lateral sternum. In the US, the two basic projections are a 15 to 20 degree Right Anterior Oblique and a Lateral. Sternoclavicular Joints - Usually ordered in the US as a single PA and right and left 15 degree Anterior Obliques. === Shoulders === These include: AP-projection 40° posterior oblique after Grashey The body has to be rotated about 30 to 45 degrees towards the shoulder to be imaged, and the standing or sitting patient lets the arm hang. This method reveals the joint gap and the vertical alignment towards the socket. Transaxillary projection The arm should be abducted 80 to 100 degrees. This method reveals: The horizontal alignment of the humerus head in respect to the socket, and the lateral clavicle in respect to the acromion. Lesions of the anterior and posterior socket border or of the tuberculum minus. Possible non-closure of the acromial apophysis. The coraco-humeral interval Y-projection The lateral contour of the shoulder should be positioned in front of the film in a way that the longitudinal axis of the scapula continues parallel to the path of the rays. This method reveals: The horizontal centralization of the humerus head and socket. The osseous margins of the coraco-acromial arch and hence the supraspinatus outlet canal. The shape of the acromion This projection has a low tolerance for errors and accordingly needs proper execution. The Y-projection can be traced back to Wijnblath's cavitas-en-face projection published in 1933. In the UK, the standard projections of the shoulder are AP and Lateral Scapula or Axillary Projection. === Extremities === A projectional radiograph of an extremity confers an effective dose of approximately 0.001 mSv, comparable to a background radiation equivalent time of 3 hours. The standard projection protocols in the UK are: Clavicle - AP and AP Cranial Humerus - AP and Lateral Elbow - AP and Lateral. Radial head projections available on request Radius and Ulna - AP and Lateral Wrist - DP and Lateral Scaphoid - DP with Ulna deviation, Lateral, Oblique and DP with 30° angulation Hip joint: AP and Lateral. The Lauenstein projection is a form of examination of the hip joint emphasizing the relationship of the femur to the acetabulum. The knee of the affected leg is flexed, and the thigh is drawn up to nearly a right angle. This is also called the frog-leg position. Applications include X-ray of hip dysplasia. Hand - DP and Oblique Fingers - DP and Lateral Thumb - AP and Lateral Femur - AP and Lateral Knee - AP and Lateral. Intercondylar projections on request Patella - Skyline projection Tibia and Fibula - AP and Lateral Ankle - AP/Mortice and Lateral Calcaneum - Axial and Lateral Foot / Toes - Dorsoplantar, Oblique and Lateral.
Certain suspected conditions require specific projections. For example, skeletal signs of rickets are seen predominantly at sites of rapid growth, including the proximal humerus, distal radius, distal femur and both the proximal and the distal tibia. Therefore, a skeletal survey for rickets can be accomplished with anteroposterior radiographs of the knees, wrists, and ankles. == General disease mimics == Radiological disease mimics are visual artifacts, normal anatomic structures or harmless variants that may simulate diseases or abnormalities. In projectional radiography, general disease mimics include jewelry, clothes and skin folds. In general medicine a disease mimic shows symptoms and/or signs like those of another. == See also == Medical imaging in pregnancy, including projectional radiography Radiography Medical imaging X-ray X-ray generator X-ray detector Radiographer Digital radiography Tomography Anatomical terms of location == References == == External links == Online Radiography Positioning Manual Nice Guidelines The Human Skeleton
Wikipedia/Projectional_radiograph
Three-dimensional electrical capacitance tomography (3D ECT), also known as electrical capacitance volume tomography (ECVT), is a non-invasive 3D imaging technology applied primarily to multiphase flows. It was introduced in the early 2000s as an extension of conventional two-dimensional ECT. In conventional electrical capacitance tomography, sensor plates are distributed around a surface of interest. The measured capacitance between plate combinations is used to reconstruct 2D images (tomograms) of the material distribution. Because the ECT sensor plates are required to have lengths on the order of the domain cross-section, 2D ECT does not provide the required resolution in the axial dimension. In ECT, the fringing field from the edges of the plates is viewed as a source of distortion to the final reconstructed image and is thus mitigated by guard electrodes. 3D ECT exploits this fringing field and expands it through 3D sensor designs that deliberately establish an electric field variation in all three dimensions. In 3D tomography, the data are acquired in 3D geometry, and the reconstruction algorithm produces the three-dimensional image directly, in contrast to 2D tomography, where 3D information might be obtained by stacking 2D slices reconstructed individually. The image reconstruction algorithms are similar in nature to those of ECT; nevertheless, the reconstruction problem in 3D ECT is more complicated. The sensitivity matrix of a 3D sensor is more ill-conditioned, and the overall reconstruction problem is more ill-posed compared to ECT. The 3D ECT approach to sensor design allows direct 3D imaging of the enclosed geometry. The second commonly used name, electrical capacitance volume tomography (ECVT), was introduced by W. Warsito, Q. Marashdeh, and L.-S. Fan in 2007. == Principles == === Capacitance and Field Equations in 3D ECT === Two metal electrodes held at different electric potentials V {\displaystyle V} and separated by a finite distance will induce an electric field E {\displaystyle E} in the region between and surrounding them. The field distribution is determined by the geometry of the problem and the constitutive medium properties such as permittivity ε {\displaystyle \varepsilon } and conductivity σ {\displaystyle \sigma } . Assuming a static or quasi-static regime and the presence of a lossless dielectric medium, such as a perfect insulator, in the region between the plates, the field obeys the following equation: ∇ . ( ε ∇ φ ) = 0 {\displaystyle \nabla .(\varepsilon \nabla \varphi )=0} where φ {\displaystyle \varphi } denotes the electric potential distribution. In a homogeneous medium with uniform ε {\displaystyle \varepsilon } , this equation reduces to the Laplace equation. In a lossy medium with finite conductivity, such as water, the field obeys the generalized Ampere equation, ∇ × H = σ E + j ω ε E {\displaystyle \nabla \times H=\sigma E+j\omega \varepsilon E} By taking the divergence of this equation and using the fact that E = − ∇ φ {\displaystyle E=-\nabla \varphi } , it follows: ∇ . ( ( σ + j ω ε ) ∇ φ ) = 0 {\displaystyle \nabla .((\sigma +j\omega \varepsilon )\nabla \varphi )=0} when the plates are excited by a time-harmonic voltage potential with frequency ω {\displaystyle \omega } .
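The following is a minimal finite-difference sketch of the electrostatic equation above, ∇·(ε∇φ) = 0, solved by Jacobi iteration on a 2D grid with two electrode strips held at fixed potentials; from the resulting field, the stored energy yields the inter-electrode capacitance (this energy relation is given in the next paragraph). The geometry, permittivity values, and iteration count are illustrative assumptions, not a sensor design.

```python
import numpy as np

n = 80
eps = np.ones((n, n))               # relative permittivity; uniform background
eps[30:50, 30:50] = 3.0             # illustrative dielectric inclusion

phi = np.zeros((n, n))              # grounded outer boundary (shield)
tx = (slice(0, 1), slice(20, 60))   # TX electrode strip on the top wall, V = 1
rx = (slice(n - 1, n), slice(20, 60))  # RX electrode strip on the bottom wall, V = 0
phi[tx] = 1.0

for _ in range(5000):               # Jacobi iteration for div(eps grad phi) = 0
    eE = 0.5 * (eps[1:-1, 1:-1] + eps[1:-1, 2:])   # face permittivities
    eW = 0.5 * (eps[1:-1, 1:-1] + eps[1:-1, :-2])  # (arithmetic averages)
    eN = 0.5 * (eps[1:-1, 1:-1] + eps[:-2, 1:-1])
    eS = 0.5 * (eps[1:-1, 1:-1] + eps[2:, 1:-1])
    phi[1:-1, 1:-1] = (eE * phi[1:-1, 2:] + eW * phi[1:-1, :-2] +
                       eN * phi[:-2, 1:-1] + eS * phi[2:, 1:-1]) / (eE + eW + eN + eS)
    phi[tx], phi[rx] = 1.0, 0.0     # re-impose electrode potentials (Dirichlet)

# E = -grad(phi); energy W = 1/2 int eps |E|^2 dv gives C = 2 W / V^2
ey, ex = np.gradient(-phi)
w = 0.5 * np.sum(eps * (ex**2 + ey**2))
capacitance = 2 * w / 1.0**2        # in units of eps0 per unit depth (2D model)
```

Repeating such a forward solve while perturbing the permittivity of each pixel is one way to build, numerically, the sensitivity matrix discussed later for image reconstruction.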
The capacitance C {\displaystyle C} is a measure of the electric energy W {\displaystyle W} stored in the medium, which can be quantified via the following relation: W = 1 2 ∫ ε E 2 d v = 1 2 C V 2 {\displaystyle W={\frac {1}{2}}\int _{}^{}\varepsilon E^{2}\,dv={\frac {1}{2}}CV^{2}} where E 2 {\displaystyle E^{2}} is the square magnitude of the electric field. The capacitance changes as a nonlinear function of the dielectric permittivity ε {\displaystyle \varepsilon } because the electric field distribution in the above integral is also a function of ε {\displaystyle \varepsilon } . === Soft-Field Tomography === Soft-field tomography refers to a set of imaging modalities such as electrical capacitance tomography (ECT), electrical impedance tomography (EIT), electrical resistivity tomography (ERT), etc., wherein electric (or magnetic) field lines undergo changes in the presence of a perturbation in the medium. This is in contrast to hard-field tomography, such as X-ray CT, where the field lines do not change in the presence of a test subject. A fundamental characteristic of soft-field tomography is its ill-posedness. This contributes to making it more challenging to achieve good spatial resolution in soft-field tomography as compared to hard-field tomography. A number of techniques, such as Tikhonov regularization, can be used to alleviate the ill-posed problem. The figure at the right shows a comparison in image resolution between 3D ECT and MRI. === 3D ECT Measurement Acquisition Systems === The hardware of 3D ECT systems consists of the sensing electrode plates, the data acquisition circuitry, and the computer to control the overall system and process the data. ECT is a non-intrusive and non-invasive imaging modality due to its contactless operation. Prior to the actual measurements, a calibration and normalization procedure is necessary to cancel out the effects of stray capacitance and of any insulating wall between the electrodes and the region of interest to be imaged. After calibration and normalization, the measurements can be divided into a sequence of acquisitions where two separate electrodes are involved: one electrode (TX) is excited with an AC voltage source in the quasi-electrostatic regime, typically below 10 MHz (AC method), or with a pulse signal, typically lasting a few microseconds (pulse method), while a second electrode (RX) is placed at ground potential and used for measuring the resultant current. The remaining electrodes are also placed at ground potential. This process is repeated for all possible electrode pairs. Note that reversing the roles of the TX and RX electrodes would result in the same mutual capacitance due to reciprocity. As a result, for 3D ECT systems with N plates, the number of independent measurements is equal to N(N-1)/2. This process is typically automated through the data acquisition circuitry. Depending on the operating frequency, number of plates and frame rate per second of the measurement system, one full measurement cycle can vary in duration; however, this is on the order of a few seconds or less. One of the most critical parts of three-dimensional systems is sensor design. As the previous discussion suggests, increasing the number of electrodes also increases the amount of independent information about the region of interest. However, this results in smaller electrodes, which in turn lowers the signal-to-noise ratio.
Increasing the electrode size, on the other hand, results in a non-uniform charge distribution over the plates, which may exacerbate the ill-posedness of the problem. The sensor dimension is also limited by the gaps between the sensing electrodes. These are important due to fringe effects. The use of guard plates between electrodes has been shown to reduce these effects. Based on the intended application, tomographic sensors can be composed of one or more layers along the axial direction. The three-dimensional tomogram is not obtained from a merging of 2D scans but rather from the sensitivities of discretized 3D voxels. The design of the electrodes is also dictated by the shape of the domain under investigation. Some domains can be relatively simple geometries (cylindrical, rectangular prism, etc.) where symmetrical electrode placement can be used. However, complex geometries (corner joints, T-shaped domains, etc.) require specially designed electrodes to properly surround the domain. The flexibility of ECT makes it very useful for field applications where the sensing plates cannot be placed symmetrically. Since the Laplace equation lacks a characteristic length (such as the wavelength in the Helmholtz equation), the fundamental physics of the 3D ECT problem is scalable in size as long as quasi-static regime properties are preserved. === Image Reconstruction Methods for 3D ECT === Reconstruction methods address the inverse problem of 3D ECT imaging, i.e., determining the volumetric permittivity distribution from the mutual capacitance measurements. Traditionally, the inverse problem is handled through linearization of the (nonlinear) relationship between the capacitance and the material permittivity using the Born approximation. Typically, this approximation is only valid for small permittivity contrasts. For other cases, the nonlinear nature of the electric field distribution poses a challenge for both 2D and 3D image reconstruction, making reconstruction methods an active research area in pursuit of better image quality. Reconstruction methods for ECT can be categorized as iterative and non-iterative (single-step) methods. Examples of non-iterative methods are linear back projection (LBP) and the direct method based on singular value decomposition and Tikhonov regularization. These algorithms are computationally inexpensive; however, their tradeoff is less accurate images without quantitative information. Iterative methods can be roughly classified into projection-based and optimization-based methods. Some of the linear projection iterative algorithms used for 3D ECT include Newton-Raphson, Landweber iteration, steepest descent, algebraic and simultaneous reconstruction techniques, and model-based iteration. Similar to single-step methods, these algorithms use a linearized sensitivity matrix for the projections to obtain the permittivity distribution inside the domain. Projection-based iterative methods typically provide better images than non-iterative algorithms, yet require more computational resources. The second type of iterative reconstruction method comprises optimization-based reconstruction algorithms such as neural network optimization. These methods need more computational resources than the previously mentioned methods, along with added implementation complexity. Optimization-based reconstruction methods employ multiple objective functions and use an iterative process to minimize them.
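As a concrete instance of the projection-based iterative methods mentioned above, the following is a minimal sketch of the Landweber iteration for the linearized problem g ≈ S x, where S is a precomputed sensitivity matrix mapping the discretized permittivity distribution x to the normalized capacitance measurements g. The matrix, data, step size, and clipping range are illustrative assumptions.

```python
import numpy as np

def landweber(S, g, n_iter=200, alpha=None):
    """Landweber iteration for the linearized ECT problem g = S x.

    S : (num_measurements, num_voxels) sensitivity matrix
    g : normalized capacitance measurements
    """
    if alpha is None:
        # A step size below 2 / ||S||_2^2 guarantees convergence of the iteration
        alpha = 1.0 / np.linalg.norm(S, 2) ** 2
    x = S.T @ g                               # linear back projection as initial guess
    for _ in range(n_iter):
        x = x + alpha * S.T @ (g - S @ x)     # gradient step on ||g - S x||^2
        x = np.clip(x, 0.0, 1.0)              # optional projection to the physical range
    return x
```

The clipping step is a simple example of incorporating a priori information (here, normalized permittivity bounds) into the iteration, which is part of why iterative methods outperform single-step ones.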
The images resulting from optimization-based methods contain fewer artifacts from the nonlinear nature of the problem and tend to be more reliable for quantitative applications. === Displacement-Current Phase Tomography (DCPT) === Displacement-current phase tomography is an imaging modality that relies on the same hardware as ECT. 3D ECT does not make use of the real part (conductance component) of the obtained mutual admittance measurements. This component of the measurement is related to the material losses in the region of interest (conductivity and/or dielectric losses). DCPT utilizes the full admittance information by means of the small-angle phase component of these complex-valued data. DCPT can only be used when the electrodes are excited with an AC voltage. It applies only to domains that include material losses, otherwise the measured phase will be zero (the real part of the admittance will be zero). DCPT is designed to be used with the same reconstruction algorithms designed for 3D ECT. Therefore, DCPT can be used simultaneously with 3D ECT to image the spatial loss-tangent distribution of the medium along with its spatial relative permittivity distribution from ECT. === Multi-Frequency 3D ECT Operation === Multiphase flows are invariably complex. Advanced measuring techniques are required to monitor and quantify phase holdups in such multiphase flows. Due to their relatively fast speed of acquisition and non-intrusive characteristics, 2D and 3D ECT are widely used in industry for flow monitoring. However, the flow decomposition and monitoring capabilities of ECT for multiphase flows containing three or more phases (e.g., a combination of oil, air, and water) are somewhat limited. Multi-frequency excitations and measurements have been exploited and successfully used in ECT image reconstruction in those cases. Multi-frequency measurements allow the exploitation of the Maxwell-Wagner-Sillars (MWS) effect on the response of the measured data (e.g., admittance, capacitance, etc.) as a function of excitation frequency. This effect was first described by Maxwell and later studied by Wagner and Sillars. The MWS effect is a consequence of surface-migration polarization at the interface between materials when at least one of them is conducting. Typically a dielectric material exhibits a Debye-type relaxation effect at microwave frequencies. However, due to the presence of the MWS effect (or MWS polarization), a mixture containing at least one conducting phase will exhibit this relaxation at much lower frequencies. The MWS effect depends on several factors, such as the volume fraction of each phase, phase orientation, conductivity and other mixture parameters. The Wagner formula for dilute mixtures and the Bruggeman formula for dense mixtures are among the most notable formulations of the effective dielectric constant. Hanai's formulation of the complex dielectric constant, an extension of the Bruggeman formula, is instrumental in analyzing the MWS effect for the complex dielectric constant.
Hanai's formula for the complex dielectric constant reads ( ε 1 ∗ − ε 2 ∗ ε 1 ∗ − ε ∗ ) 3 ε ∗ ε 2 ∗ = 1 ( 1 − ϕ ) 3 {\displaystyle \left({\frac {\varepsilon _{1}^{*}-\varepsilon _{2}^{*}}{\varepsilon _{1}^{*}-\varepsilon ^{*}}}\right)^{3}{\frac {\varepsilon ^{*}}{\varepsilon _{2}^{*}}}={\frac {1}{(1-\phi )^{3}}}} where ε 1 ∗ {\displaystyle \varepsilon _{1}^{*}} , ε 2 ∗ {\displaystyle \varepsilon _{2}^{*}} , and ε ∗ {\displaystyle \varepsilon ^{*}} are the complex effective permittivities of the dispersed phase, continuous phase, and mixture, respectively, and ϕ {\displaystyle \phi } is the volume fraction of the dispersed phase. Knowing that a mixture will exhibit dielectric relaxation due to the MWS effect, this additional measuring dimension can be exploited to decompose multiphase flows when at least one of the phases is conducting. The figure to the right shows the reconstructed images of the flow model, conducting phase, and non-conducting phases extracted from experimental data by exploiting the MWS effect. === 3D ECT Velocimetry === Velocimetry refers to techniques used to measure the velocity of fluids. The use of the sensitivity gradient enables the reconstruction of 3D velocity profiles using an ECT sensor, which can readily provide information on fluid dynamics. The sensitivity gradient is defined as F = ∇ S = a ^ x ∂ S ∂ x + a ^ y ∂ S ∂ y + a ^ z ∂ S ∂ z {\displaystyle F=\nabla S={\hat {a}}_{x}{\frac {\partial S}{\partial x}}+{\hat {a}}_{y}{\frac {\partial S}{\partial y}}+{\hat {a}}_{z}{\frac {\partial S}{\partial z}}} where S {\displaystyle S} is the sensitivity distribution of a 3D ECT sensor as shown to the right. Upon application of the sensitivity gradient, 3D and 2D velocity profiles corresponding to the figure above are shown in the figure to the right. The application of the sensitivity gradient provides significant improvement over more traditional (cross-correlation based) velocimetry, exhibiting better image quality and requiring less computational time. Another advantage of sensitivity-gradient-based velocimetry is its compatibility with conventional image reconstruction algorithms used in 3D ECT. == Advantages == === Modular === The basic requirements of 3D ECT sensors are simple, and sensors can therefore be very modular in design. Tomographic sensors require only conductive electrodes which are electrically isolated from one another and are also not shorted through the medium being inspected by the sensor. Additionally, there must be a way to excite and detect signals to and from each electrode. The lack of constraints on the sensor design allows it to be made out of a variety of materials and to take a plethora of forms, including flexible-walled, high-temperature, high-pressure, thin-walled, elbowed and flat sensors. With three-dimensional sensors, the electrode configuration becomes modular as well, without the need to fabricate new sensors. === Safe === 3D ECT is low-energy, low-frequency, and non-radioactive, making it safe to employ in any situation where toxic waste, high voltage, or electromagnetic radiation is a concern. The low-energy nature of the technology also makes it suitable for remote locations where power is in short supply. On many occasions, a simple solar-powered battery may prove sufficient to power a 3D ECT device. === Scalable === 3D ECT operates at very long wavelengths, typically using frequencies below 10 MHz to excite the electrodes. These long wavelengths allow the technology to operate under the quasi-electrostatic regime.
As long as the diameter of the sensor is much smaller than the wavelength, these assumptions remain valid. For instance, when exciting with a 2 MHz AC signal, the wavelength is 149.9 meters. Sensor diameters are typically designed well below this limit. Additionally, the capacitance strength, C {\displaystyle C} , scales proportionally according to the electrode area, A {\displaystyle A} , and the distance between the plates, d {\displaystyle d} , i.e. the diameter of the sensor. So as a sensor diameter becomes larger, if the plate area scales accordingly, any given sensor design can easily be scaled up or down with minimal effect on the signal strength. C ∝ A d {\displaystyle C\varpropto {\frac {A}{d}}} === Low Cost & Profile === Compared to other sensing and imaging equipment such as gamma radiation, X-ray, or MRI machines, 3D ECT remains relatively cheap to manufacture and operate. Part of this quality is due to its low-energy emissions, which do not require any additional mechanisms for containing waste or insulating high power outputs. Adding to the low cost is the availability of a wide variety of materials from which to fabricate a sensor. The electronics can also be placed remotely from the sensor itself, which allows standard-environment electronics to be utilized for data acquisition even when the sensor is subjected to extreme temperatures or other conditions which typically make it difficult to employ electronic instrumentation. === High Temporal Resolution (Fast) === In general terms, the method of data acquisition used alongside 3D ECT is very fast. Data can be sampled from the sensor several thousand times per second, depending on the number of plate pairs in the sensor design and the analog design of the data acquisition system (i.e. clock speed, parallel circuitry, etc.). The potential for collecting data very quickly makes the technology very attractive to industries with processes that occur or transport material at high speeds. This is in great contrast to MRI, which has high spatial resolution but often very poor temporal resolution. == Challenges for Spatial Resolution in 3D ECT == Spatial resolution is a fundamental challenge in 2D and 3D ECT. Spatial resolution is limited by the soft-field nature of ECT and the fact that the interrogating electric field is quasi-static in nature. The latter property implies that the potential distribution between the plates is a solution of the Laplace equation. As a consequence, there cannot be any relative minima or maxima for the potential distribution between the plates, and hence no focal spots can be produced. In order to increase spatial resolution, two basic strategies can be pursued. The first strategy consists of enriching the measurement data. This can be done by (a) adaptive acquisitions with synthetic electrodes, (b) spatio-temporal sampling using additional measurements obtained when objects are in different positions inside the sensor, (c) multi-frequency operation to exploit permittivity variations with frequency due to the MWS effect, and (d) combining ECT with other sensing modalities, either based on the same hardware (such as DCPT) or on additional hardware (such as microwave tomography). The second strategy to increase spatial resolution consists in the development of multi-stage image reconstructions that incorporate a priori information and training data sets, and spatial adaptivity.
== Applications == === Multi-Phase Flow === Multi-phase flow refers to the simultaneous flow of materials of different physical states or chemical compositions, and is heavily involved in the petroleum, chemical and biochemical industries. In the past, 3D ECT has been extensively tested in a wide range of multi-phase flow systems in laboratory as well as industrial settings. ECT's unique ability to obtain real-time, non-invasive spatial visualization of systems with complex geometries under different temperature and pressure conditions at relatively low cost renders it favorable for both fundamental fluid mechanics research and applications in large-scale processing industries. Recent research efforts exploring these two aspects are summarized below. ==== Gas-Solid ==== The gas-solid fluidized bed is a typical gas-solid flow system, and has been widely employed in chemical industries due to its superior heat and mass transfer and its solids transport and handling characteristics. 3D ECT has been successfully applied to gas-solid fluidized bed systems for measuring system properties and visualizing dynamic behaviors. An example is the study of the choking phenomenon in a 0.1 m ID gas-solid circulating fluidized bed with a 12-channel cylindrical ECT sensor, where the formation of a slug during the transition to choking is clearly recorded by 3D ECT. Another experiment studies bubbling gas-solid fluidization in a 0.05 m ID column, where the solid holdup, bubble shape and frequency obtained from ECT are validated against MRI measurements. The flexibility of the 3D ECT sensor geometry also enables imaging of bends, taperings and other non-uniform sections of gas-solid flow reactors. For example, a horizontal gas jet penetrating into a cylindrical gas-solid fluidized bed can be imaged with a modified 3D ECT sensor, and information such as the penetration length and width of the jet, as well as the jet's coalescence behavior with the bubbles in the fluidized bed, can be obtained from 3D ECT. Another example is 3D ECT imaging of the riser and bend of a gas-solid circulating fluidized bed (CFB). A core-annulus flow structure in both the riser and the bend, and a solid accumulation in the horizontal section of the bend, are identified from quantitative images. ==== Gas-Liquid ==== The gas-liquid bubble column is a typical gas-liquid flow system that is widely used in petrochemical and biochemical processes. The bubbling flow phenomena have been extensively researched with computational fluid dynamics methods as well as traditional invasive measurement techniques. ECT possesses the unique ability to obtain real-time quantitative visualization of an entire gas-liquid flow field. An example is the study of the dynamics of spiral bubble plumes in bubble columns. 3D ECT is shown to be able to capture the spiral motion of bubble plumes, the structures of large-scale liquid vortices, and gas holdup distributions. Another example of the application of 3D ECT in gas-liquid systems is the study of a cyclonic gas-liquid separator, where a gas-liquid mixture enters a horizontal column tangentially and creates a swirling flow field in which gas and liquid are separated by centrifugal force. ECT successfully captures the liquid distribution inside the vessel and the off-centered gas core drifting phenomenon. The quantitative results match mechanistic models.
==== Gas-Liquid-Solid ==== The trickle bed reactor (TBR) is a typical three-phase gas-liquid-solid system, with applications in the petroleum, petrochemical, biochemical, electrochemical and water treatment industries. In a TBR, gas and liquid flow downward concurrently through packed solid materials. Depending on the gas and liquid flow rates, a TBR can exhibit different flow regimes, including trickling flow, pulsating flow and dispersed-bubble flow. 3D ECT has been successfully used to image the turbulent pulsating flow in a TBR, from which detailed pulse structure and pulse velocity can be obtained. === Combustion (High Temperature and Flame) === Most of the gas-solid flow systems in chemical industries operate at elevated temperatures for optimal reaction kinetics. Under such harsh conditions, many laboratory measurement techniques are no longer applicable. However, ECT has the potential for high-temperature applications due to its simple and robust design and non-invasive nature, which allows insulating materials to be embedded in the sensor for heat resistance. Currently the high-temperature 3D ECT technology is under rapid development, and research efforts are being made to address the engineering issues associated with high temperatures. 3D ECT has been utilized at temperatures up to 650 °C to image and characterize fluidized beds such as those used in fluidized bed reactors, fluid catalytic cracking and fluidized bed combustion. The application of this technology to high-temperature fluidized beds has allowed in-depth analysis of how temperature affects flow behavior in the beds. For instance, in a slugging fluidized bed with a large column-height-to-diameter ratio and Geldart Group D particles, increasing the temperature up to 650 °C changes the density and viscosity of the gas but has a negligible effect on slugging behavior such as slug velocity and frequency. === Non-Destructive Testing (NDT) === In the infrastructure inspection industry, it is desirable to use equipment that inspects embedded components non-invasively. Issues such as corroded steel, water penetration, and air voids are often buried within concrete or other solid members. Here, non-destructive testing (NDT) methods must be used to avoid compromising the integrity of the structure. 3D ECT has been used in this field for the non-destructive testing of external tendons on post-tensioned bridges. These structures are filled with steel cables and protective grouting or grease. In this application, a mobile, remotely controlled 3D ECT device is placed around the external tendon and scans its interior. The tomographic device can then extract information about the quality of the grouting or grease within the tendon in real time. It can also determine the size and location of any air voids or moisture within the tendon. Finding these issues is a critical task for bridge inspectors, as air and moisture pockets within the tendons can lead to corrosion of the steel cables and failure of the tendon, which puts the bridge at risk of structural damage. == See also == Electrical capacitance tomography Electrical impedance tomography Electrical resistivity tomography Process tomography == References ==
Wikipedia/Electrical_capacitance_volume_tomography
Muon tomography or muography is a technique that uses cosmic ray muons to generate two- or three-dimensional images of volumes using information contained in the Coulomb scattering of the muons. Since muons are much more deeply penetrating than X-rays, muon tomography can be used to image through much thicker material than X-ray-based tomography such as CT scanning. The muon flux at the Earth's surface is such that a single muon passes through an area the size of a human hand per second. Since its development in the 1950s, muon tomography has taken many forms, the most important of which are muon transmission radiography and muon scattering tomography. Muography tracks the number of muons that pass through the target volume to determine the density of otherwise inaccessible internal structure. Muography is a technique similar in principle to radiography (imaging with X-rays) but capable of surveying much larger objects. Since muons are less likely to interact, stop and decay in low-density matter than in high-density matter, a larger number of muons will travel through the low-density regions of target objects in comparison to higher-density regions. The apparatuses record the trajectory of each event to produce a muogram that displays the matrix of the resulting numbers of transmitted muons after they have passed through objects up to multiple kilometers in thickness. The internal structure of the object, imaged in terms of density, is displayed by converting muograms to muographic images. Muon tomography imagers are under development for detecting nuclear material in road transport vehicles and cargo containers in support of non-proliferation. Another application is the use of muon tomography to monitor potential underground sites used for carbon sequestration. == Etymology and use == The term muon tomography is based on the word "tomography", a word produced by combining Ancient Greek tomos "cut" and graphe "drawing." The technique produces cross-sectional images (not projection images) of large-scale objects that cannot be imaged with conventional radiography. Some authors hence see this modality as a subset of muography. Muography was named by Hiroyuki K. M. Tanaka. There are two explanations for the origin of the word "muography": (A) a combination of the elementary particle muon and Greek γραφή (graphé) "drawing," together suggesting the meaning "drawing with muons"; and (B) a shortened combination of "muon" and "radiography." Although these techniques are related, they differ in that radiography uses X-rays to image the inside of objects on the scale of meters, while muography uses muons to image the inside of objects on the scale of hectometers to kilometers. == Invention of muography == === Precursor technologies === Twenty years after Carl David Anderson and Seth Neddermeyer discovered in 1936 that muons were generated from cosmic rays, Australian physicist E.P. George made the first known attempt to measure the areal density of the rock overburden of the Guthega-Munyang tunnel (part of the Snowy Mountains Hydro-Electric Scheme) with cosmic ray muons, using a Geiger counter. Although he succeeded in measuring the areal density of the rock overburden above the detector, and even matched the result from core samples, the Geiger counter's lack of directional sensitivity made imaging impossible.
In a famous experiment in the 1960s, Luis Alvarez used muon transmission imaging to search for hidden chambers in the Pyramid of Chephren (Khafre) in Giza, although none were found at the time; a later effort discovered a previously unknown void in the Great Pyramid. In all cases the information about the absorption of the muons was used as a measure of the thickness of the material crossed by the cosmic ray particles. === First muogram === The first muogram was produced in 1970 by a team led by American physicist Luis Walter Alvarez, who installed detection apparatus in the Belzoni Chamber of the Pyramid of Khafre to search for hidden rooms within the structure. He recorded the number of muons after they had passed through the Pyramid. With the invention of this particle-tracking technique, he worked out methods to generate the muogram as a function of the muons' arrival angles. After the apparatus had been exposed to the Pyramid for several months, the generated muogram was compared with the results of computer simulations, and he concluded that there were no hidden chambers in the pyramid. === Film muography === Tanaka and Niwa’s pioneering work created film muography, which uses nuclear emulsion. Exposures of nuclear emulsions were taken in the direction of the volcano and then analyzed with a newly invented scanning microscope, custom built for the purpose of identifying particle tracks more efficiently. Film muography enabled them to obtain the first interior imaging of an active volcano in 2007, revealing the structure of the magma pathway of Asama volcano. === Real-time muography === In 1968, Alvarez's group used spark chambers with a digital readout for their Pyramid experiment. Tracking data from the apparatus was recorded onto magnetic tape in the Belzoni Chamber; the data were then analyzed by the IBM 1130 computer, and later by the CDC 6600 computer, located at Ain Shams University and Lawrence Radiation Laboratory, respectively. Strictly speaking these were not real-time measurements. Real-time muography requires muon sensors to convert the muon's kinetic energy into a number of electrons in order to process muon events as electronic data rather than as chemical changes on film. Electronic tracking data can be processed almost instantly with an adequate computer processor; in contrast, film muography data have to be developed before the muon tracks can be observed. Real-time tracking of muon trajectories produces real-time muograms that would be difficult or impossible to obtain with film muography. === High-resolution muography === The MicroMegas detector has a positioning resolution of 0.3 mm, more than an order of magnitude finer than that of scintillator-based apparatus (10 mm), and thus has the capability to create better angular resolution for muograms. == Applications == === Geology === Muons have been used to image magma chambers to predict volcanic eruptions. Kanetada Nagamine et al. continue active research into the prediction of volcanic eruptions through cosmic ray attenuation radiography. Minato used cosmic ray counts to radiograph a large temple gate. Emil Frlež et al. reported using tomographic methods to track the passage of cosmic-ray muons through cesium iodide crystals for quality control purposes. All of these studies have been based on finding some part of the imaged material that has a lower density than the rest, indicating a cavity. Muon transmission imaging is the most suitable method for acquiring this type of information.
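To make the transmission measurement concrete, the toy model below converts rock thickness into a transmitted muon fraction using two stylized assumptions: a minimum-ionizing energy loss of about 2 MeV per g/cm², and an integral open-sky intensity falling as a power law above a reference energy. Both numbers are rough illustrations rather than a calibrated spectrum; real analyses use measured spectra and their zenith-angle dependence.

```python
DEDX = 2.0e-3   # mean energy loss, GeV cm^2/g (minimum-ionizing, illustrative)
GAMMA = 1.7     # illustrative power-law index of the integral muon intensity
E_REF = 10.0    # reference energy (GeV) above which the power law is assumed

def transmitted_fraction(opacity_gcm2):
    """Fraction of muons above E_REF that can cross the given opacity."""
    e_min = max(DEDX * opacity_gcm2, E_REF)  # energy needed to cross the rock
    return (e_min / E_REF) ** -GAMMA

RHO_ROCK = 2.65  # nominal rock density, g/cm^3
for path_m in (100, 300, 500):
    solid = transmitted_fraction(RHO_ROCK * path_m * 100)        # full rock
    void = transmitted_fraction(RHO_ROCK * (path_m - 50) * 100)  # 50 m cavity
    print(f"{path_m} m of rock: T = {solid:.2e}; with a 50 m cavity: "
          f"T = {void:.2e} ({void / solid:.1f}x more muons)")
```

The excess of muons along lines of sight that cross a cavity is what appears as a bright region in a muogram.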
In 2021, Giovanni Leone and his group showed that volcanic eruption frequency is related to the amount of volcanic material moving through a near-surface conduit in an active volcano. ==== Vesuvius ==== The Mu-Ray project has been using muography to image Vesuvius, famous for its eruption of 79 AD, which destroyed local settlements including Pompeii and Herculaneum. The project is funded by the Istituto Nazionale di Fisica Nucleare (INFN, Italian National Institute for Nuclear Physics) and the Istituto Nazionale di Geofisica e Vulcanologia (Italian National Institute for Geophysics and Volcanology). The volcano last erupted in 1944. The goal of this project, pursued by scientists in Italy, France, the US and Japan, is to "see" inside the volcano. The technology can be applied to volcanoes around the world to gain a better understanding of when they will erupt. ==== Etna ==== The ASTRI SST-2M Project is using muography to generate internal images of the magma pathways of the Etna volcano. Etna's last major eruption, in 1669, caused widespread damage. Monitoring the magma flows with muography may help to predict the direction in which lava from future eruptions will flow. From August 2017 to October 2019, time-sequential muographic imaging of the Etna edifice was conducted to study differences in density levels that would indicate interior volcanic activity. Findings of this research included the imaging of a cavity forming prior to a crater floor collapse, the identification of an underground fracture, and the imaging of the formation of a new vent in 2019 which became active and subsequently erupted. ==== Stromboli ==== Nuclear emulsion apparatuses have been used to collect data near the Stromboli volcano. Emulsion scanning improvements developed during the course of the Oscillation Project with Emulsion tRacking Apparatus (OPERA experiment) led to film muography. Unlike other muography particle trackers, nuclear emulsion can acquire high angular resolution without electricity. An emulsion-based tracker has been collecting data at Stromboli since December 2011. Over a period of five months in 2019, an experiment using nuclear emulsion muography was conducted at Stromboli. Emulsion films were prepared in Italy and analyzed in Italy and Japan. The images revealed a low-density zone at the summit of the volcano, which is thought to influence the stability of the “Sciara del Fuoco” slope (the source of many landslides). ==== Puy de Dôme ==== Since 2010, a muographic imaging survey has been conducted at the dormant volcano Puy de Dôme in France, using existing closed buildings located directly beneath the southern and eastern sides of the volcano for equipment testing and experiments. Preliminary muographs have revealed previously unknown density features at the top of Puy de Dôme that have been confirmed with gravimetric imaging. A joint measurement was conducted by French and Italian research groups in 2013–2014, during which different strategies for improved detector designs were tested, particularly their capacities to reduce background noise. ==== Underground water monitoring ==== Muography has been applied to monitoring groundwater and bedrock saturation levels in a landslide area in response to major rainfall events. The measurement results were compared with borehole groundwater level measurements and rock resistivity.
==== Glaciers ==== The applicability of muography to glacier studies was first demonstrated with a survey of the top portion of the Aletsch Glacier in the Central European Alps. In 2017, a Japanese/Swiss collaboration conducted a larger-scale muographic imaging experiment at the Eiger Glacier to determine the bedrock geometry beneath active glaciers in the steep alpine environment of the Jungfrau region in Switzerland. Five to six double-side-coated emulsion films were set in frames with stainless steel plates for shielding and installed in three regions of a railway tunnel located underneath the targeted glacier. Production of the emulsion films was done in Switzerland and analysis in Japan. The eroding bedrock beneath the glacier and the glacier–bedrock boundary were successfully imaged for the first time, and the methodology provided important information on subglacial mechanisms of bedrock erosion. ==== Mining ==== TRIUMF and its spin-off company Ideon Technologies developed a muograph designed specifically for surveys of possible uranium deposit sites using industry-standard boreholes. === Civil engineering === Muography has been used to map the inside of large civil engineering structures, such as dams, and their surroundings for safety and risk prevention purposes. Muographic imaging was applied to the identification of hidden construction shafts located above the Alfreton Old Tunnel (constructed in 1862) in the UK. === Nuclear reactors === Muography was applied to investigating the condition of the nuclear reactors damaged in the Fukushima nuclear disaster, and helped to confirm the near-complete meltdown of the reactor cores. ==== Nuclear waste imaging ==== Tomographic techniques can be effective for non-invasive nuclear waste characterization and for nuclear material accountancy of spent fuel inside dry storage containers. Cosmic muons can improve the accuracy of data on nuclear waste and dry storage containers (DSCs), and imaging of DSCs exceeds the IAEA detection target for nuclear material accountancy. In Canada, spent nuclear fuel is stored in large pools (fuel bays or wet storage) for a nominal period of 10 years to allow for sufficient radioactive cooling. The challenges and issues of nuclear waste characterization are covered at great length in the literature and are summarized below. Historical waste: a non-traceable waste stream poses a challenge for characterization, and different types of waste must be distinguished (tanks with liquids, fabrication facilities to be decontaminated before decommissioning, interim waste storage sites, etc.). Some waste forms may be difficult or impossible to measure and characterize (e.g. encapsulated alpha/beta emitters, heavily shielded waste). Direct measurements, i.e. destructive assay, are not possible in many cases, and non-destructive assay (NDA) techniques are required, which often do not provide conclusive characterization. The homogeneity of the waste needs characterization (e.g. sludge in tanks, inhomogeneities in cemented waste). The condition of the waste and the waste package must also be assessed: breach of containment, corrosion, voids, etc. Accounting for all of these issues can take a great deal of time and effort. Muon tomography can be used to assess the characterization of waste, its radiation cooling, and the condition of the waste container. ==== Los Alamos Concrete Reactor ==== In the summer of 2011, a reactor mockup was imaged using the Mini Muon Tracker (MMT) at Los Alamos. The MMT consists of two muon trackers made up of sealed drift tubes.
In the demonstration, cosmic-ray muons were measured as they passed through a physical arrangement of concrete and lead, materials similar to those of a reactor. The mockup consisted of two layers of concrete shielding blocks with a lead assembly in between; one tracker was installed at a height of 2.5 metres (8 ft 2 in), and another tracker was installed at ground level on the other side. Lead with a conical void similar in shape to the melted core of the Three Mile Island reactor was imaged through the concrete walls. It took three weeks to accumulate 8×10^4 muon events. The analysis was based on the point of closest approach, where the track pairs were projected to the mid-plane of the target and the scattering angle was plotted at the intersection. The test object was successfully imaged, even though it was significantly smaller than the objects expected at Fukushima Daiichi, the target of the proposed Fukushima Muon Tracker (FMT). ==== Fukushima application ==== On March 11, 2011, a magnitude-9.0 earthquake, followed by a tsunami, caused a prolonged nuclear crisis at the Fukushima Daiichi power plant. Though the reactors are stabilized, complete shutdown will require knowledge of the extent and location of the damage to the reactors. A cold shutdown was announced by the Japanese government in December 2011, and a new phase of nuclear cleanup and decommissioning was started. However, it is hard to plan the dismantling of the reactors without any realistic estimate of the extent of the damage to the cores, and knowledge of the location of the melted fuel. Since the radiation levels are still very high inside the reactor core, it is not likely anyone can go inside to assess the damage. The Fukushima Daiichi Tracker (FDT) was proposed to assess the extent of the damage from a safe distance. A few months of measurements with muon tomography will show the distribution of the reactor core material. From that, a plan can be made for reactor dismantlement, potentially shortening the time of the project by many years. In August 2014, Decision Sciences International Corporation announced that it had been awarded a contract by Toshiba Corporation (Toshiba) to support the reclamation of the Fukushima Daiichi nuclear complex with the use of Decision Sciences' muon tracking detectors. Industrial muography has found an application in reactor inspection. It was used to locate the nuclear fuel in the Fukushima Daiichi nuclear power plant, which was damaged by the 2011 Tōhoku earthquake and tsunami. === Non-proliferation === The Nuclear Non-proliferation Treaty (NPT) signed in 1968 was a major step in the non-proliferation of nuclear weapons. Under the NPT, non-nuclear weapon states were prohibited from, among other things, possessing, manufacturing or acquiring nuclear weapons or other nuclear explosive devices. All signatories, including nuclear weapon states, were committed to the goal of total nuclear disarmament. The Comprehensive Nuclear-Test-Ban Treaty (CTBT) bans all nuclear explosions in any environment. Tools such as muon tomography can help to stop the spread of nuclear material before it is armed into a weapon. The New START treaty signed by the US and Russia aims to reduce the nuclear arsenal by as much as a third. Its verification involves a number of logistically and technically difficult problems, and new methods of warhead imaging are of crucial importance for the success of mutual inspections. Muon tomography can be used for treaty verification thanks to several important factors.
It is a passive method; it is safe for humans and will not apply an artificial radiological dose to the warhead. Cosmic rays are much more penetrating than gamma or X-rays. Warheads can be imaged in a container behind significant shielding and in the presence of clutter. Exposure times depend on the object and the detector configuration (a few minutes if optimized). While special nuclear material (SNM) detection can be reliably confirmed, and discrete SNM objects can be counted and localized, the system can be designed not to reveal potentially sensitive details of the object's design and composition. The Multi-Mode Passive Detection System (MMPDS) port scanner, located in Freeport, Bahamas, can detect shielded nuclear material as well as explosives and contraband. The scanner is large enough for a cargo container to pass through, making it a scaled-up version of the Mini Muon Tracker. It then produces a 3-D image of what is scanned. Tools such as the MMPDS can be used to prevent the spread of nuclear weapons. The safe but effective use of cosmic rays can be implemented in ports to help non-proliferation efforts, or even in cities, under overpasses, or at the entrances to government buildings. === Archaeology === ==== Egyptian pyramids ==== In 2015, 45 years after Alvarez’s experiment, the ScanPyramids Project, which is composed of an international team of scientists from Egypt, France, Canada, and Japan, started using muography and thermography imaging techniques to survey the Giza pyramid complex. In 2017, scientists involved in the project discovered a large cavity, named "ScanPyramids Big Void", above the Grand Gallery of the Great Pyramid of Giza. In 2023, "a corridor-shaped structure" was found in Khufu's Pyramid using cosmic-ray muons. It was named "ScanPyramids North Face Corridor". ==== Mexican pyramids ==== The third largest pyramid in the world, the Pyramid of the Sun, situated near Mexico City in the ancient city of Teotihuacan, was surveyed with muography. One of the motivations of the team was to discover whether inaccessible chambers inside the Pyramid might hold the tomb of a Teotihuacan ruler. The apparatus was transported in components and then reassembled inside a small tunnel leading to an underground chamber directly underneath the pyramid. A low-density region approximately 60 meters wide was reported as a preliminary result, which has led some researchers to suggest that the structure of the pyramid might have been weakened and is in danger of collapse. In 2020, the US National Science Foundation awarded a US-Mexico international group a grant to use muography to investigate El Castillo, the largest pyramid in Chichen Itza. ==== Mt. Echia ==== A three-dimensional muography experiment was done in the underground tunnels of Mt Echia (in Naples, Italy) with two muon detectors, MU-RAY and MIMA, which successfully imaged two known cavities and discovered one previously unknown cavity. Mt Echia is the site of the earliest settlement of Naples, in the 8th century BC, and is riddled with underground cavities and tunnels. Using measurements from three different locations in the underground tunnels, a 3D reconstruction was created for the unknown cavity. The method used for this experiment could be applied to other archeological targets to check the structural integrity of ancient sites and to potentially discover hidden historical regions within known sites.
==== China's imperial chambers ==== Yuanyuan Liu of Beijing Normal University and her group showed the feasibility of using muography to image the underground chamber of the tomb of the first emperor of China. === Planetary science === ==== Mars ==== Muography may potentially be implemented to image extraterrestrial objects such as the geology of Mars. Cosmic rays are numerous and omnipresent in outer space. Therefore, the process by which cosmic rays interact with the Earth’s atmosphere to generate pions and other mesons that subsequently decay into muons is predicted to occur in the atmospheres of other planets as well. It has been calculated that the atmosphere of Mars is sufficient to produce a horizontal muon flux suitable for practical muography, roughly equivalent to the Earth’s muon flux. It may therefore be viable to include a high-resolution muography apparatus in a future space mission to Mars, for instance aboard a rover. Accurate images of the density of Martian structures could be used for surveying sources of ice or water. ==== Small Solar System bodies ==== The NASA Innovative Advanced Concepts (NIAC) program is assessing whether muography may be used for imaging the density structures of small Solar System bodies (SSBs). While SSBs tend to generate a lower muon flux than the Earth's atmosphere, some fluxes are sufficient to allow for muography of objects roughly 1 km or less in diameter. The program includes calculating the muon flux for each potential target, creating imaging simulations, and considering the engineering challenges of building a more lightweight, compact apparatus appropriate for such a mission. === Hydrospheric muography === The Hyper-kilometric Submarine Deep Detector (HKMSDD) was designed to operate muographic observations autonomously under the sea at reasonable cost by combining linear arrays of muographic sensor modules with underwater tube structures. In undersea muography, time-dependent mass movements within or consisting of targeted gigantic fluid bodies and submerged solid bodies can be imaged more precisely than with land-based muography. Time-dependent fluctuations of the muon flux due to atmospheric pressure variations are suppressed when muography is conducted under the seafloor, thanks to the “inverse barometric effect” (IBE) of seawater. Low atmospheric pressure, such as that observed at the center of a cyclone, draws seawater up; high atmospheric pressure, on the other hand, pushes seawater down. Barometric fluctuations of the muon flux are therefore mostly compensated by the IBE beneath the sea. === Carbon capture and storage === The success of carbon capture and storage (CCS) hinges upon being able to reliably contain the materials within the storage sites. It has been proposed to use muography as a monitoring tool for CCS. In 2018, a two-month study supported the feasibility of muography monitoring for CCS. It was completed in the UK at the Boulby Mine site, in a borehole 1.1 kilometres (3,600 ft) deep. == Technique variants == === Muon scattering tomography (MST) === Muon scattering tomography was first proposed by Chris Morris and his group at Los Alamos National Laboratory (LANL). This technique is capable of locating the source of a muon's Rutherford scattering by tracking the incoming and outgoing muon tracks at the target.
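The localization step is commonly implemented as a point-of-closest-approach (PoCA) calculation: the incoming and outgoing tracks are treated as straight rays, and the midpoint of their closest approach is taken as the scattering vertex. The sketch below is a minimal, idealized version (a single scatter and perfect tracking are assumed), and the example event is invented for illustration.

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Closest approach between the rays p_in + t*d_in and p_out + s*d_out;
    returns the estimated scattering vertex and the scattering angle."""
    d_in, d_out = d_in / np.linalg.norm(d_in), d_out / np.linalg.norm(d_out)
    w0 = p_in - p_out
    a, b, c = d_in @ d_in, d_in @ d_out, d_out @ d_out
    d, e = d_in @ w0, d_out @ w0
    denom = a * c - b * b                  # approaches 0 for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    vertex = 0.5 * ((p_in + t * d_in) + (p_out + s * d_out))
    angle = np.arccos(np.clip(d_in @ d_out, -1.0, 1.0))
    return vertex, angle

# Hypothetical event: a muon enters vertically along the z-axis and exits
# slightly deflected; the true scattering point is at (0, 0, -0.5).
p_in, d_in = np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])
p_out, d_out = np.array([0.02, 0.0, -1.0]), np.array([0.04, 0.0, -1.0])
vertex, theta = poca(p_in, d_in, p_out, d_out)
print(vertex.round(3), f"{np.degrees(theta):.1f} deg")
```

Accumulating many such vertices, weighted by scattering angle, builds up a density map in which high-atomic-number regions stand out.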
Since radiation lengths tend to be shorter for higher-atomic-number materials, larger scattering angles are expected for the same path length. This makes the technique more sensitive to differences between materials within structures, and it can therefore be used for imaging heavy metals hidden inside light materials. On the other hand, this technique is not suitable for imaging void structures or light materials located inside heavy materials. LANL and its spinoff company Decision Sciences applied the MST technique to image the interiors of large trucks and other storage containers in order to detect nuclear materials. A similar system using MST was developed at the University of Glasgow and its spin-off company Lynkeos Technology for monitoring the integrity of nuclear waste containers at the Sellafield storage site. With muon scattering tomography, both incoming and outgoing trajectories for each particle are reconstructed. The technique has been shown to be useful for finding materials with a high atomic number, such as uranium, against a background of lower-atomic-number material. Since the development of this technique at Los Alamos, a few different companies have started to use it for several purposes, most notably for detecting nuclear cargo entering ports and crossing borders. The Los Alamos National Laboratory team has built a portable Mini Muon Tracker (MMT). This muon tracker is constructed from sealed aluminum drift tubes, which are grouped into twenty-four 1.2-meter-square (4 ft) planes. The drift tubes measure particle coordinates in X and Y with a typical accuracy of several hundred micrometers. The MMT can be moved via a pallet jack or a forklift. If nuclear material has been detected, it is important to be able to measure details of its construction in order to correctly evaluate the threat. Muon tomography uses multiple-scattering radiography. In addition to energy loss and stopping, cosmic rays undergo Coulomb scattering. The angular distribution is the result of many single scatters. This results in an angular distribution that is Gaussian in shape with tails from large-angle single and plural scattering. The scattering provides a novel method for obtaining radiographic information with charged particle beams. More recently, scattering information from cosmic ray muons has been shown to be a useful method of radiography for homeland security applications. When the thickness of the medium, and hence the number of interactions, becomes large, the angular dispersion can be modelled as Gaussian, with the dominant part of the multiple-scattering polar-angular distribution given by d N d θ = 1 2 π θ 0 2 exp ⁡ ( − θ 2 2 θ 0 2 ) , {\displaystyle {\frac {\mathrm {d} N}{\mathrm {d} \theta }}={\frac {1}{2\pi \theta _{0}^{2}}}\,\exp {\left(-{\frac {\theta ^{2}}{2\theta _{0}^{2}}}\right)},} where θ is the muon scattering angle and θ0, the standard deviation of the scattering angle, is given approximately by θ 0 = 14.1 M e V p c β X X 0 . {\displaystyle \theta _{0}={\frac {14.1\,\mathrm {MeV} }{pc\beta }}{\sqrt {\frac {X}{X_{0}}}}.} The muon momentum and velocity are p and β, respectively, c is the speed of light, X is the length of the scattering medium, and X0 is the radiation length of the material. This needs to be convolved with the cosmic ray momentum spectrum in order to describe the angular distribution. The image can then be reconstructed with the aid of GEANT4 simulations.
These runs include input and output vectors, X → {\displaystyle {\vec {X}}} in and X → {\displaystyle {\vec {X}}} out, for each incident particle. The incident flux projected to the core location was used to normalize transmission radiography (the attenuation method). From there the calculations are normalized for the zenith angle of the flux. === Muon Momentum Integrated Tomography System === Despite the various benefits of using cosmic ray muons for imaging large and dense objects, e.g., spent nuclear fuel casks and nuclear reactors, their wider application is often limited by the naturally low muon flux at sea level, approximately 10,000 m−2 min−1. To overcome this limitation, two important quantities, the scattering angle θ and the momentum p, must be measured for each muon event. To measure cosmic ray muon momentum in the field, a fieldable muon spectrometer using multi-layer pressurized gas Cherenkov radiators has been developed, and spectrometer-assisted tomography shows improved muon scattering tomography resolution. === Muon computational axial tomography (Mu-CAT) === Mu-CAT is a technique which combines multiple projected muographic images to create a 3D muography image. In principle, it is similar to the medical imaging used in radiology (CAT scans) to obtain three-dimensional internal images of the body. While medical CAT scanners use an X-ray generator rotating around the target object, Mu-CAT uses multiple detectors around the target object and naturally occurring muons as probes. Either tomographic reconstruction techniques or inverse-problem methods are applied to the data from the Mu-CAT observations to reconstruct 3D images. Mu-CAT revealed the three-dimensional position of a fractured zone below the crater floor of an active volcano related to a past eruption that had caused a large pyroclastic and lava flow on its northern slope. === Cosmic Ray Inspection and Passive Tomography (CRIPT) === The Cosmic Ray Inspection and Passive Tomography (CRIPT) detector is a Canadian muon tomography project which tracks muon scattering events while simultaneously estimating the muon momentum. The CRIPT detector is 5.3 metres (17 ft) tall and has a mass of 22 tonnes (22 long tons; 24 short tons). The majority of the detector mass is located in the muon momentum spectrometer, a feature unique to CRIPT among muon tomography projects. After initial construction and commissioning at Carleton University in Ottawa, Canada, the CRIPT detector was moved to Atomic Energy of Canada Limited's Chalk River Laboratories. The CRIPT detector is presently examining the limitations on detection time for border security applications, limitations on muon tomography image resolution, nuclear waste stockpile verification, and space weather observation through muon detection. == Technical aspects == The apparatus is a muon-tracking device that consists of muon sensors and recording media. There are several different kinds of muon sensors used in muography apparatuses: plastic scintillators, nuclear emulsions, or gaseous ionization detectors. The recording medium is either the film itself or digital magnetic or electronic memory. The apparatus is directed towards the target volume and the muon sensor is exposed until enough muon events have been recorded to form a statistically sufficient muogram, after which (in post-processing) a muograph displaying the average density along each muon path is created. == Advantages == There are several advantages that muography has over traditional geophysical surveys.
First, muons are naturally abundant, traveling from the atmosphere towards the Earth’s surface. This abundant muon flux is nearly constant, so muography can be used worldwide. Second, because of the high contrast resolution of muography, a small void of less than 0.001% of the entire volume can be distinguished. Finally, the apparatus has much lower power requirements than other imaging techniques, since it uses natural probes rather than relying on artificially generated signals. == Process == In the field of muography, the transmission coefficient is defined as the ratio of the muon flux transmitted through the object to the incident muon flux. By applying the muon's range through matter to the open-sky muon energy spectrum, the fraction of the incident muon flux that is transmitted through the object can be analytically derived. A muon's range, defined as the distance it can traverse in matter before stopping, depends on its energy. For example, 1 TeV muons have a continuous slowing down approximation range (CSDA range) of 2500 m water equivalent (m.w.e.) in silicon dioxide, whereas the range is reduced to 400 m.w.e. for 100 GeV muons. The range also varies with the material: 1 TeV muons have a CSDA range of 1500 m.w.e. in lead. The numbers (or, later, colors) forming a muogram are displayed in terms of the transmitted number of muon events. Each pixel in the muogram is a two-dimensional unit based on the angular resolution of the apparatus. The degeneracy whereby muography cannot distinguish certain density distributions is called the "volume effect": a large amount of low-density material and a thin layer of high-density material can cause the same attenuation of the muon flux. Therefore, in order to avoid false results arising from the volume effect, the exterior shape of the volume has to be accurately determined and used when analyzing the data. == References ==
Wikipedia/Muon_tomography
Cryogenic electron tomography (cryoET) is an imaging technique used to reconstruct high-resolution (~1–4 nm) three-dimensional volumes of samples, often (but not limited to) biological macromolecules and cells. cryoET is a specialized application of transmission electron cryomicroscopy (CryoTEM) in which samples are imaged as they are tilted, resulting in a series of 2D images that can be combined to produce a 3D reconstruction, similar to a CT scan of the human body. In contrast to other electron tomography techniques, samples are imaged under cryogenic conditions (< −150 °C). For cellular material, the structure is immobilized in non-crystalline, vitreous ice, allowing it to be imaged without dehydration or chemical fixation, which would otherwise disrupt or distort biological structures. == Description of technique == In electron microscopy (EM), samples are imaged in a high vacuum. Such a vacuum is incompatible with biological samples such as cells; the water would boil off, and the difference in pressure would explode the cell. In room-temperature EM techniques, samples are therefore prepared by fixation and dehydration. Another approach to stabilizing biological samples, however, is to freeze them (cryo-electron microscopy or cryoEM). As in other electron cryomicroscopy techniques, samples for cryoET (typically small cells such as Bacteria, Archaea, or viruses) are prepared in standard aqueous media and applied to an EM grid. The grid is then plunged into a cryogen, for example liquid ethane, with a sufficiently large specific heat that cooling is rapid enough that water molecules do not have time to rearrange into a crystalline lattice. The resulting water state is called low-density amorphous ice, or commonly "vitreous ice" for its glass-like nature. This form of ice preserves native cellular structures, such as lipid membranes, that would normally be disrupted by hexagonal or other ordered ice upon slower freezing. Plunge-frozen samples are subsequently kept at liquid-nitrogen temperatures through storage and imaging so that the water never warms enough to crystallize. Samples are imaged in a transmission electron microscope (TEM). As in other electron tomography techniques, the sample is tilted to different angles relative to the electron beam (typically every 2–3 degrees from about −60° to +60°), and an image is acquired at each angle. This tilt series of images can then be computationally reconstructed into a three-dimensional view of the object of interest. This is called a tomogram, or tomographic reconstruction. === Potential for high-resolution in situ imaging === One of the most commonly cited benefits of cryoET is the ability to reconstruct 3D volumes of individual objects (proteins, cells, etc.), rather than requiring many copies of the object, as crystallographic methods or other cryoEM imaging methods like single particle analysis do. CryoET is considered to be an in situ method when used on an unperturbed cell or other system, since plunge-freezing of sufficiently thin samples fixes the specimen in place fast enough to cause minimal changes to atomic positioning. Thick samples, greater than ~500 nm, require additional treatment, such as high-pressure freezing, to promote vitrification throughout the sample. == Considerations == === Sample thickness === In transmission electron microscopy (TEM), because electrons interact strongly with matter, samples must be kept very thin so that they do not darken due to multiple elastic scattering events.
Therefore, in cryoET, samples are generally less than ~500 nm thick. For this reason, most cryoET studies have focused on purified macromolecular complexes, viruses, or small cells such as those of many species of Bacteria and Archaea. For example, cryoET has been used to understand the encapsulation of 12 nm protein cage nanoparticles inside 60 nm virus-like nanoparticles. Larger cells, and even tissues, can be prepared for cryoET by thinning, either by cryo-sectioning or by focused ion beam (FIB) milling. In cryo-sectioning, frozen blocks of cells or tissue are sectioned into thin samples with a cryo-microtome. In FIB-milling, plunge-frozen samples are exposed to a focused beam of ions, typically gallium, that precisely whittles away material from the top and bottom of a sample, leaving a thin lamella suitable for cryoET imaging. === Signal-to-noise ratio === For structures that are present in multiple copies in one or multiple tomograms, higher resolution (even ≤1 nm) can be obtained by subtomogram averaging. Similar to single particle analysis, subtomogram averaging computationally combines images of identical objects to increase the signal-to-noise ratio. == Limitations == === Radiation damage === Electron microscopy is known to degrade biological samples swiftly, compared with samples in materials science and physics, due to radiation damage. In most other electron microscopy-based methods for imaging biological samples, combining the signal from many different sample copies has been the general way of overcoming this problem (e.g. crystallography, single particle analysis). In cryoET, instead of taking many images of different sample copies, many images are taken of one area. Consequently, the fluence (number of electrons imparted per unit area) on the sample is around 2–5× higher than in single particle analysis. Tomography of much more resilient materials achieves drastically higher resolution than typical biological imaging, suggesting that radiation damage is the greatest limitation to cryoET of biological samples. === Depth resolution === The strong interaction of electrons with matter also results in an anisotropic resolution effect. As the sample is tilted during imaging, the electron beam interacts with a thicker apparent sample along the optical axis of the microscope at higher tilt angles. In practice, tilt angles greater than approximately 60–70° do not yield much information and are therefore not used. This results in a "missing wedge" of information in the final tomogram that decreases resolution parallel to the electron beam. The term "missing wedge" originates from the Fourier transform of the tomogram, in which an empty wedge is apparent because the sample is not tilted to 90°. The missing wedge results in a lack of resolution in sample depth, as the missing information is mostly along the z-axis. The missing wedge is also a problem in 3D electron crystallography, where it is usually solved by merging multiple datasets that overlap each other or through symmetry expansion where possible. Both of these solutions rely on the nature of crystallography, and so neither can be applied to tomography. === Segmentation === A major obstacle in cryoET is identifying structures of interest within complicated cellular environments. Solutions such as correlated cryo-fluorescence light microscopy and super-resolution light microscopy (e.g. cryo-PALM) can be integrated with cryoET.
In these techniques, a sample containing a fluorescently-tagged protein of interest is plunge-frozen and first imaged in a light microscope equipped with a special stage to allow the sample to be kept at sub-crystallization temperatures (< −150 °C). The location of the fluorescent signal is identified and the sample is transferred to the CryoTEM, where the same location is then imaged at high resolution by cryoET. == See also == Electron microscopy Electron tomography Transmission electron cryomicroscopy Transmission electron microscopy == References == == External links == Getting started in cryo-EM course (Caltech)
Wikipedia/Cryogenic_electron_tomography
Neutron tomography is a form of computed tomography in which three-dimensional images are produced by detecting the absorbance of neutrons from a neutron source. It creates a three-dimensional image of an object by combining multiple planar images taken with a known separation, and can achieve resolutions down to 25 μm. Whilst its resolution is lower than that of X-ray tomography, it can be useful for specimens that show low contrast between the matrix and the object of interest; for instance, fossils with a high carbon content, such as plants or vertebrate remains. Neutron tomography can have the unfortunate side-effect of leaving imaged samples radioactive if they contain appreciable levels of certain elements such as cobalt; in practice, however, this neutron activation is low and short-lived, so the method is considered non-destructive. The increasing availability of neutron imaging instruments at research reactors and spallation sources via peer-reviewed user access programs has seen neutron tomography achieve increasing impact across diverse applications including earth sciences, palaeontology, cultural heritage, materials research and engineering. In 2022, it was reported in the journal Gondwana Research that an ornithopod dinosaur had been serendipitously discovered by neutron tomography in the gut contents of Confractosuchus, a Cretaceous crocodyliform from the Winton Formation of central Queensland, Australia. This was the first time that a dinosaur had been discovered using neutron tomography, and to this day the partially digested dinosaur remains entirely embedded within the surrounding matrix. == Further reading == Winkler, B. (2006). "Applications of Neutron Radiography and Neutron Tomography". Reviews in Mineralogy and Geochemistry. 63 (1): 459–471. Bibcode:2006RvMG...63..459W. doi:10.2138/rmg.2006.63.17. Schwarz, D.; Vontobel, P. L.; Eberhard, H.; Meyer, C. A.; Bongartz, G. (2005). "Neutron tomography of internal structures of vertebrate remains: a comparison with X-ray computed tomography" (PDF). Palaeontologia Electronica. 8 (30). Mays, C.; Cantrill, D. J.; Stilwell, J. D.; Bevitt, J. J. (2017). "Neutron tomography of Austrosequoia novae-zeelandiae comb. nov. (Late Cretaceous, Chatham Islands, New Zealand): implications for Sequoioideae phylogeny and biogeography". Journal of Systematic Palaeontology. 16 (7): 551–570. doi:10.1080/14772019.2017.1314898. S2CID 133375313. == References ==
Wikipedia/Neutron_tomography
The computed tomography imaging spectrometer (CTIS) is a snapshot imaging spectrometer which can ultimately produce the three-dimensional (i.e. spatial and spectral) hyperspectral datacube of a scene. == History == The CTIS was conceived separately by Takayuki Okamoto and Ichirou Yamaguchi at Riken (Japan), and by F. Bulygin and G. Vishnakov in Moscow (Russia). The concept was subsequently further developed by Michael Descour, at the time a PhD student at the University of Arizona, under the direction of Prof. Eustace Dereniak. The first research experiments based on CTIS imaging were conducted in the field of molecular biology. Several improvements of the technology have been proposed since then, in particular regarding the hardware: dispersive elements providing more information on the datacube, and enhanced calibration of the system. The enhancement of the CTIS was also fueled by the general development of larger image sensors. Although not as widely used as other spectrometers, CTIS has been employed in applications ranging from the military to ophthalmology and astronomy. == Image formation == === Optical layout === In a CTIS instrument, a field stop is placed at the image plane of an objective lens, after which a lens collimates the light before it passes through a disperser (such as a grating or a prism). Finally, a re-imaging lens maps the dispersed image of the field stop onto a large-format detector array. === Resulting image === The information that the CTIS acquires can be seen as the three-dimensional datacube of the scene. Of course, this cube does not exist in physical space as mechanical objects do, but this representation helps to build intuition about what the image is capturing: the shapes in the acquired image can be considered as projections (in a mechanical sense) of the datacube. The central projection, called the 0th order of diffraction, is the sum of the datacube along the spectral axis (hence, this projection acts as a panchromatic camera). In an image of the numeral "5", for example, one can clearly read the number in the central projection, but with no information regarding the spectrum of the light. All the other projections result from "looking" at the cube obliquely and hence contain a mixture of spatial and spectral information. From a discrete point of view, where the datacube is considered as a stack of spectral slices, one can understand these projections as a partial spread of the stack of slices, similarly to a magician spreading his cards in order for an audience member to pick one of them. It is important to note that for typical spectral dispersions and the typical size of a sensor, the spectral information of a given slice heavily overlaps with that of neighboring slices. In the side projections of such a "5" image, the number is not clearly readable (loss of spatial information), but some spectral information is available (i.e. some wavelengths appear brighter than others). Hence, the image contains multiplexed information regarding the datacube. The number and layout of the projections depend on the type of diffracting element employed. In particular, more than one order of diffraction can be captured. == Datacube reconstruction == The resulting image contains all of the information of the datacube.
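The multiplexing described above can be reproduced with a toy forward model. The sketch below assumes an idealized disperser that shifts each spectral slice one pixel further out per band in the ±1st orders; the dispersion law, cube contents, and array sizes are all invented for illustration, and a real instrument requires a calibrated system matrix.

```python
import numpy as np

# Toy datacube: 8x8 spatial pixels, 5 spectral bands (values are arbitrary).
ny, nx, nbands = 8, 8, 5
cube = np.zeros((ny, nx, nbands))
cube[2:6, 2:6, 1] = 1.0          # an object bright in band 1
cube[3:5, 3:5, 3] = 2.0          # a smaller object bright in band 3

detector = np.zeros((3 * ny, 5 * nx))
y0, x0 = ny, 2 * nx              # top-left corner of the undispersed image

# 0th diffraction order: all bands superimposed (a panchromatic image).
detector[y0:y0 + ny, x0:x0 + nx] += cube.sum(axis=2)

# +/-1st orders: each band lands one pixel further out than the previous
# one, so neighbouring spectral slices overlap and are multiplexed.
for k in range(nbands):
    for sign in (+1, -1):
        off = sign * (nx + k)
        detector[y0:y0 + ny, x0 + off:x0 + off + nx] += cube[:, :, k]

# The total signal is conserved in each of the three captured orders.
print(detector.sum(), 3 * cube.sum())
```

Inverting this linear mapping, encoded here implicitly by the shifts, is exactly the reconstruction problem discussed next.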
It is necessary to carry out a reconstruction algorithm to convert this image back into the 3D spatio-spectral space. Hence, the CTIS is a computational imaging system. === Link to X-ray computed tomography === Conceptually, one can consider each of the projections of the datacube in a manner analogous to the X-ray projections measured by medical X-ray computed tomography instruments used to estimate the volume distribution within a patient's body. Hence, the most widely used algorithms for CTIS reconstruction are the same as those used in the X-ray CT field. In particular, the algorithm used by Descour is directly taken from a seminal work in X-ray CT reconstruction. Since then, slightly more elaborate techniques have been employed, in the same way that (though not to the same extent) X-ray CT reconstruction has improved since the 1980s. === Difficulties === Compared to the X-ray CT field, CTIS reconstruction is notoriously more difficult. In particular, the number of projections resulting from a CTIS acquisition is typically far smaller than in X-ray CT. This results in a blurrier reconstruction, following the projection-slice theorem. Moreover, unlike X-ray CT, where projections are acquired all around the patient, the CTIS, like all imaging systems, only acquires the scene from a single point of view, and hence many projection angles are unobtainable. == References == == External links == A fast reconstruction algorithm for the computed tomography imaging spectrometer (CTIS) is documented in the paper: Larz White, W. Bryan Bell, Ryan Haygood, "Accelerating computed tomographic imaging spectrometer reconstruction using a parallel algorithm exploiting spatial shift-invariance", Opt. Eng. 59(5), 055110 (2020).
Wikipedia/Computed_tomography_imaging_spectrometer
Discrete tomography focuses on the problem of reconstruction of binary images (or finite subsets of the integer lattice) from a small number of their projections. In general, tomography deals with the problem of determining shape and dimensional information of an object from a set of projections. From the mathematical point of view, the object corresponds to a function and the problem posed is to reconstruct this function from its integrals or sums over subsets of its domain. In general, the tomographic inversion problem may be continuous or discrete. In continuous tomography both the domain and the range of the function are continuous and line integrals are used. In discrete tomography the domain of the function may be either discrete or continuous, and the range of the function is a finite set of real, usually nonnegative numbers. In continuous tomography, when a large number of projections is available, accurate reconstructions can be made by many different algorithms. It is typical for discrete tomography that only a few projections (line sums) are used. In this case, conventional techniques all fail. A special case of discrete tomography deals with the problem of the reconstruction of a binary image from a small number of projections. The name discrete tomography is due to Larry Shepp, who organized the first meeting devoted to this topic (DIMACS Mini-Symposium on Discrete Tomography, September 19, 1994, Rutgers University). == Theory == Discrete tomography has strong connections with other mathematical fields, such as number theory, discrete mathematics, computational complexity theory and combinatorics. In fact, a number of discrete tomography problems were first discussed as combinatorial problems. In 1957, H. J. Ryser found a necessary and sufficient condition for a pair of vectors to be the two orthogonal projections of a discrete set. In the proof of his theorem, Ryser also described a reconstruction algorithm, the very first reconstruction algorithm for a general discrete set from two orthogonal projections. In the same year, David Gale found the same consistency conditions, but in connection with the network flow problem. Another result of Ryser's is the definition of the switching operation by which discrete sets having the same projections can be transformed into each other. The problem of reconstructing a binary image from a small number of projections generally leads to a large number of solutions. It is desirable to limit the class of possible solutions to only those that are typical of the class of images containing the image being reconstructed, by using a priori information such as convexity or connectedness. === Theorems === Reconstructing (finite) planar lattice sets from their 1-dimensional X-rays is an NP-hard problem if the X-rays are taken from m ≥ 3 {\displaystyle m\geq 3} lattice directions (for m = 2 {\displaystyle m=2} the problem is in P). The reconstruction problem is highly unstable for m ≥ 3 {\displaystyle m\geq 3} (meaning that a small perturbation of the X-rays may lead to completely different reconstructions) and stable for m = 2 {\displaystyle m=2} . Coloring a grid using k {\displaystyle k} colors with the restriction that each row and each column has a specific number of cells of each color is known as the ( k − 1 ) {\displaystyle (k-1)} -atom problem in the discrete tomography community. The problem is NP-hard for k ≥ 3 {\displaystyle k\geq 3} .
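Ryser's constructive proof suggests a simple greedy reconstruction from two orthogonal projections: fill the rows in decreasing order of row sum, always placing the 1s in the columns that still need the most entries. The sketch below follows that idea; the function name and the example projections are illustrative.

```python
def reconstruct_binary(row_sums, col_sums):
    """Greedy (Gale-Ryser style) reconstruction of a 0/1 matrix with the
    given row and column sums; returns None if the projections are
    inconsistent."""
    n, m = len(row_sums), len(col_sums)
    if (sum(row_sums) != sum(col_sums)
            or any(r > m for r in row_sums) or any(c > n for c in col_sums)):
        return None
    grid = [[0] * m for _ in range(n)]
    remaining = list(col_sums)
    for i in sorted(range(n), key=lambda i: -row_sums[i]):
        # Put this row's 1s into the columns with the largest remaining need.
        cols = sorted(range(m), key=lambda j: -remaining[j])[:row_sums[i]]
        if any(remaining[j] == 0 for j in cols):
            return None
        for j in cols:
            grid[i][j] = 1
            remaining[j] -= 1
    return grid

for row in reconstruct_binary([2, 3, 1], [2, 2, 1, 1]):
    print(row)   # one of possibly many binary images with these projections
```

That several distinct matrices can satisfy the same two projections is precisely the non-uniqueness, and the role of switching operations, discussed above.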
== Algorithms == Among the reconstruction methods one can find algebraic reconstruction techniques (e.g., DART), greedy algorithms (for which approximation guarantees are known), and Monte Carlo algorithms. == Applications == Various algorithms have been applied in image processing, medicine, three-dimensional statistical data security problems, computer-tomograph-assisted engineering and design, electron microscopy and materials science, including the 3DXRD microscope. A form of discrete tomography also forms the basis of nonograms, a type of logic puzzle in which information about the rows and columns of a digital image is used to reconstruct the image. == See also == Geometric tomography == References == == External links == Euro DT (a Discrete Tomography Wiki site for researchers) Tomography applet by Christoph Dürr PhD thesis on discrete tomography (2012): Tomographic segmentation and discrete tomography for quantitative analysis of transmission tomography data
Wikipedia/Discrete_tomography
In radiography, focal plane tomography is tomography (imaging a single plane, or slice, of an object) performed by simultaneously moving the X-ray generator and X-ray detector so as to keep a consistent exposure of only the plane of interest during image acquisition. This was the main method of obtaining tomographs in medical imaging until the late 1970s. It has since been largely replaced by more advanced imaging techniques such as CT and MRI. It remains in use today in a few specialized applications, such as for acquiring orthopantomographs of the jaw in dental radiography. Focal plane tomography’s development began in the 1930s as a means of reducing the problem of superimposition of structures that is inherent to projectional radiography. It was invented in parallel by, among others, the French physician André Bocage, the Italian radiologist Alessandro Vallebona and the Dutch radiologist Bernard George Ziedses des Plantes. == Technique == Focal plane tomography generally uses mechanical movement of an X-ray source and film in unison to generate a tomogram using the principles of projective geometry. Synchronizing the movement of the radiation source and detector, which travel in opposite directions, causes structures that are not in the focal plane being studied to blur out. === Limitations === The blurring provided by focal plane tomography is only marginally effective, since it occurs only along the direction of tube travel (the X plane). Moreover, since focal plane tomography uses plain X-rays, it is not particularly effective at resolving soft tissues. The increased availability and power of computers in the 1960s and 70s gave rise to new imaging techniques such as CT and MRI, which use computational (in addition to or in lieu of mechanical) methods to acquire and process tomographic image data, and which do not suffer from the limitations of focal plane tomography. == Variants == Initially focal plane tomography used simple linear movements. The technique advanced through the mid-twentieth century, however, steadily producing sharper images and offering a greater ability to vary the thickness of the cross-section being examined. This was achieved through the introduction of more complex, pluridirectional devices that can move in more than one plane and perform more effective blurring. === Linear tomography === This is the most basic form of conventional tomography. The X-ray tube moves from point "A" to point "B" above the patient, while the detector (such as a cassette holder or "bucky") moves simultaneously under the patient from point "B" to point "A". The fulcrum, or pivot point, is set to the area of interest. In this manner, the points above and below the focal plane are blurred out, just as the background is blurred when panning a camera during exposure. It is rarely used now, having largely been replaced by computed tomography (CT). === Poly tomography === This was achieved using a more advanced X-ray apparatus that allows for more sophisticated and continuous movements of the X-ray tube and film. With this technique, a number of complex synchronous geometrical movements could be programmed, such as hypocycloidal, circular, figure-8, and elliptical. Philips Medical Systems, for example, produced one such device called the 'Polytome'. This pluridirectional unit was still in use into the 1990s, as small or difficult anatomy, such as the inner ear, was still hard to image with the CT scanners of that time. As the resolution of CT scanners improved, this procedure too was taken over by CT.
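The blur geometry of the linear tomography described above can be traced with a few lines of ray arithmetic. In the sketch below, the tube at height H above the fulcrum plane sweeps horizontally while the film, at depth F below that plane, translates in the opposite direction at the speed that keeps fulcrum-plane points stationary; all dimensions are invented for illustration.

```python
H, F, SWEEP = 1.0, 0.3, 0.2   # tube height, film depth, half-sweep (metres)

def film_coord(x, z, s):
    """Film-frame x-coordinate of the shadow of a point at height z above
    the fulcrum plane, for tube position s (ray: tube -> point -> film)."""
    x_lab = s + (x - s) * (H + F) / (H - z)  # intersection with the film plane
    return x_lab + (F / H) * s               # convert to the moving film frame

for z in (0.0, 0.05, -0.05):
    spots = [film_coord(0.0, z, s) for s in (-SWEEP, 0.0, SWEEP)]
    blur = max(spots) - min(spots)
    print(f"z = {z:+.2f} m: image smeared over {blur * 1000:5.1f} mm")
```

A point in the fulcrum plane (z = 0) maps to a single film coordinate for every tube position, while points a few centimetres above or below it smear over tens of millimetres, which is the selective blurring the technique relies on.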
=== Poly tomography === This was achieved using a more advanced X-ray apparatus that allowed more sophisticated and continuous movements of the X-ray tube and film. With this technique, a number of complex synchronous geometrical movements could be programmed, such as hypocycloidal, circular, figure-8, and elliptical. Philips Medical Systems, for example, produced one such device, called the 'Polytome'. This pluridirectional unit was still in use into the 1990s, since small or anatomically intricate structures, such as the inner ear, were still difficult to image with the CT scanners of that era. As the resolution of CT scanners improved, this procedure was superseded by CT. === Zonography === This is a variant of linear tomography in which a limited arc of movement is used, resulting in less blurring than linear tomography. It is still used in some centres for visualising the kidney during an intravenous urogram (IVU), though it too is being supplanted by CT. === Panoramic radiograph === Panoramic radiography is the only common tomographic examination still in use. It makes use of a complex movement to allow the radiographic examination of the mandible, as if it were a flat bone. It is commonly performed in dental practices and is often referred to as a "Panorex", though this is a trademark of a specific company and not a generic term. == See also == Tomography
Wikipedia/Focal_plane_tomography
Ultrasound-modulated optical tomography (UOT), also known as acousto-optic tomography (AOT), is a hybrid imaging modality that combines light and sound; it is a form of tomography involving ultrasound. It is used in imaging of biological soft tissues and has potential applications for early cancer detection. As a hybrid modality that uses both light and sound, UOT provides some of the best features of both: the optical component provides strong contrast and sensitivity (both molecular and functional), while the ultrasound component allows for high resolution as well as high imaging depth. However, the difficulty of tackling the two fundamental problems with UOT (low SNR in deep tissue and short speckle decorrelation time) has caused UOT to evolve relatively slowly; most work in the field is limited to theoretical simulations or phantom/sample studies. == Basic Description of Acousto-Optic Tomography == In UOT, ultrasound transducers are used to apply ultrasound waves to a medium, usually some biological tissue. Applying these ultrasound waves, or an ultrasound field, to a region of tissue changes the optical properties of the tissue in time and space. This region of ultrasound-modulated tissue is the region of interest (ROI) to be analyzed. Photons are then sent into the tissue from some source, such as a laser. Eventually, despite the strength of optical scattering in tissue, some of these photons will pass through the ROI. The photons that pass through the ROI are changed according to the modulation of the tissue; this causes the photons to be "tagged". Typically, this tagging shifts the frequency of the light by the frequency of the ultrasound field. Sufficiently coherent light traveling through a medium creates a speckle pattern. Modulating the ultrasound field applied to the ROI causes the speckle patterns to change, due to the three modulation mechanisms explained below. The changes of these speckle patterns are used to derive various properties of the tissue during reconstruction and analysis. Optical properties that can be derived include the optical absorption coefficient, the optical scattering coefficient, and the fluence in the region of interest; UOT can also be used to derive mechanical properties. == Use of Light and Sound in Conjunction == Optical imaging modalities typically rely on ballistic photons to collect and convey information. However, as a result of strong optical scattering in tissue, conventional imaging modalities struggle to image deep into tissue, past the optical diffusion limit (typically about 1 mm into tissue). Various imaging modalities have been developed to peer deeper into tissue, such as diffuse optical tomography (DOT) and optical coherence tomography (OCT). While OCT has excellent spatial resolution (3.5 and 7 micrometer resolutions, axially and laterally respectively), its imaging depth is limited to the millimeter range (e.g. 2.5 mm). DOT has excellent penetration, on the scale of centimeters, but suffers from inferior resolution (~1 cm). To address the difficulty of deep optical imaging, hybrid ultrasound and optical imaging modalities have been developed, namely UOT and photoacoustic imaging (PAI). Both imaging modalities use diffuse photons, which typically cannot be used to transmit information from deep within the tissue.
This is because strong optical scattering makes it incredibly difficult to determine where the photons have traveled and how many scattering events they have undergone. UOT and PAI have different ways of effectively transferring information from the diffuse photons within the tissue back to the system, which allows for centimeter imaging depths (deeper than DOT) while retaining high spatial resolutions (millimeters to hundreds of micrometers). Photoacoustic imaging sends a pulse of light into tissue; as photons are absorbed by the tissue, the resulting temperature rise causes thermoelastic expansion. This launches a propagating pressure wave, which is collected by ultrasound transducers. Thus, PAI uses photons to acquire information deep inside tissue and uses ultrasound to transmit that information to the system. In contrast, UOT relies on photons for information transmission and uses ultrasound to acquire information. Ultrasound is used in UOT to "tag", or identify, photons that have passed through the region of interest; these photons can be trusted to carry information back to the system regarding the region of interest. The modulation of the ultrasound accordingly changes the optical properties of the tissue, which can be used to derive both optical and mechanical properties. The difference in the use of ultrasound between PAI and UOT means that different types of information can be derived from the two modalities. PAI is proficient at delivering information regarding optical absorption; UOT can provide information regarding both optical absorption and optical scattering. Thus, UOT has a strong advantage over PAI in that UOT can provide information about tissue and organ structure as well as tissue metabolism. == Advantages == UOT utilizes all photons within deep tissue; as long as they have not been absorbed and they pass through the ROI, those photons can still be used to convey information. Thus, UOT can achieve imaging depths (exploiting diffuse photons) deeper than 9 cm into the body while retaining high spatial resolutions (set by the dimensions of the ultrasound focus), on the millimeter scale, as of 2017. UOT can be used to derive mechanical and optical properties of the tissue; compared to PAI, which can only derive optical absorption, UOT can derive both absorption and scattering properties. == Disadvantages == There are two fundamental problems with UOT. First, UOT has a very low signal-to-noise ratio in deep tissue; at an imaging depth of several centimeters, the ratio between untagged and tagged photons can be greater than 100:1 and even 1000:1. As the tagged photons are the photons that carry the information, this leads to difficulties in recovering data against strong background noise (untagged photons). The cause is that deep tissue has a large diffuse volume, so only a small fraction of the diffuse light passes through the ultrasound focus. Because of this weak signal-to-noise ratio, compensatory methods are required for any practical UOT system. One way to tackle this problem is through effective filtering techniques, which allow for a large decrease in the background noise of untagged photons; the other is to improve the sensitivity of the UOT system to tagged photons. Second, UOT data comes from speckles resulting from scattering of photons through tissue. As biological tissue is constantly in motion on a microscopic scale, UOT imaging in vivo results in a very short speckle decorrelation time of less than 1 millisecond in biological tissue.
Thus, UOT systems typically require high temporal resolution in order to derive data from stable speckle patterns. == Basic Concepts == UOT is built upon the modulation of light due to the effects of ultrasonic waves on the optical properties of the testing medium. The target within the testing medium is irradiated by a laser beam and a focused ultrasonic wave. The reemitted light propagating through the ultrasonic field or ultrasonic focal zone then carries information about the local optical and acoustic properties of this zone. These properties are used to reconstruct images showing the interior of the medium. === Mechanisms === There are three important mechanisms behind UOT technology. 1. Incoherent modulation of light due to ultrasound-induced variations in the optical properties of the medium. As the ultrasonic wave propagates through the medium, the mass density of the medium is changed by the vibration. This variation in mass density in turn influences the local optical properties: the local absorption coefficient, scattering coefficient, and index of refraction are all modulated. With different optical properties, features of the reemitted light (such as intensity) are modified. 2. Variations in optical phase in response to ultrasound-induced displacements of scatterers. This mechanism mainly describes the effect at a microscopic level. Under the vibration from the focused ultrasound, the local scatterers within the medium are displaced. When coherent light passes through such a region, the displacement of the scatterers changes the optical phase accumulated along each free path, and the reemitted light forms a modulated speckle pattern. 3. Variations in optical phase in response to ultrasonic modulation of the index of refraction of the background medium. Similar to the second mechanism, the mass-density modulation caused by the ultrasonic vibration also modulates the medium's index of refraction, which changes the phase accumulated along each free path as light passes through the ultrasonic region, again producing a modulated speckle pattern. In summary, these three mechanisms describe how the ultrasonic wave can modulate the light intensity (mechanism 1) and the light phase, forming a speckle pattern (mechanisms 2 and 3). Mechanisms 2 and 3 require a coherent light source; mechanism 1 does not. When coherent light sources are used, mechanism 1 can be disregarded, as its effect is negligible compared to mechanisms 2 and 3. These three mechanisms are the fundamental building blocks required to design a UOT system. == UOT analytic model == In the analytic model, two approximations are made: (1) the optical wavelength is much shorter than the mean free path (weak-scattering approximation), and (2) the ultrasound-induced change in the optical path length is much less than the optical wavelength (weak-modulation approximation). Under the first approximation, the ensemble-averaged field correlations contributed by different paths of the same length can be treated as identical, since the differences between the correlations from different paths are negligible. With this assumption, the autocorrelation of the electric field can be written as $G_1(\tau)=\int p(s)\,\langle E_s(t)E_s^{*}(t+\tau)\rangle\,ds$, where the angle brackets denote ensemble and time averaging.
Here $E_s$ denotes the unit-amplitude electric field of the scattered light along a path of length $s$, and $p(s)$ denotes the probability density function of $s$. In the analytic UOT model, the light source is treated as an optical plane wave normally incident on a slab of thickness $d$, and the transmitted light is captured by a point detector. After applying diffusion theory, the autocorrelation becomes $$G_1(\tau)=\frac{(d/l_t')\,\sinh\!\left\{[\varepsilon(1-\cos\omega_a\tau)]^{1/2}\right\}}{\sinh\!\left\{(d/l_t')\,[\varepsilon(1-\cos\omega_a\tau)]^{1/2}\right\}},$$ where $$\varepsilon=6(\delta_n+\delta_d)(n_0k_0A)^2,\qquad \delta_n=(\alpha_{n1}+\alpha_{n2})\eta^2,$$ $$\alpha_{n1}=\tfrac{1}{2}\,k_al_t'\arctan(k_al_t'),\qquad \alpha_{n2}=\frac{\alpha_{n1}}{k_al_t'/\arctan(k_al_t')-1},\qquad \delta_d=\tfrac{1}{6}.$$ In these equations, $\omega_a$ is the acoustic angular frequency; $n_0$ is the background index of refraction; $k_0$ is the magnitude of the optical wave vector in vacuo; $A$ is the acoustic amplitude, which is proportional to the acoustic pressure; $k_a$ is the magnitude of the acoustic wave vector; $l_t'$ is the optical transport mean free path; $\eta$ is the elasto-optical coefficient; and $\rho$ is the mass density. The quantities $\delta_n$ and $\delta_d$ represent the average modulation of the light per free path by the ultrasound-induced changes in index of refraction and in scatterer displacement, respectively. Having derived the autocorrelation, the Wiener–Khinchin theorem is applied, which connects $G_1$ with the spectral density of the modulated speckle through the Fourier transform $$S(\omega)=\int_{-\infty}^{\infty}G_1(\tau)\exp(i\omega\tau)\,d\tau.$$ For simplicity, the carrier term $\exp(-i\omega_0t)$ is dropped, so $\omega$ here represents angular frequency relative to that of the unmodulated light; for example, $\omega=0$ corresponds to the spectral density at the absolute angular frequency $\omega_0$. Since $G_1$ is an even autocorrelation function, the spectral intensity at the $n$-th harmonic of the acoustic frequency can be written as $$I_n=\frac{1}{T_a}\int_0^{T_a}\cos(n\omega_a\tau)\,G_1(\tau)\,d\tau,$$ where $n$ is the harmonic order and $T_a$ is the acoustic period. Since the frequency spectrum is symmetric about $\omega_0$, the one-sided modulation depth is defined as $M_1=I_1/I_0$. Now consider the condition imposed by the second approximation (weak modulation): in this regime, the term $(d/l_t')\,\varepsilon^{1/2}$ is much smaller than 1.
Expanding the $\sinh$ functions in $G_1$ to lowest order then gives $$G_1(\tau)=1-\frac{1}{6}\left(\frac{d}{l_t'}\right)^{2}\varepsilon\,[1-\cos(\omega_a\tau)],$$ so the one-sided modulation depth simplifies to $$M_1=\frac{1}{12}\left(\frac{d}{l_t'}\right)^{2}\varepsilon.$$ Since $\varepsilon=6(\delta_n+\delta_d)(n_0k_0A)^2$, the acoustic amplitude $A$ and the modulation depth $M_1$ are quadratically related. Such quadratic modulation can be captured by a Fabry–Pérot interferometer; alternatively, the ratio between the observed AC signal and the observed DC signal (known as the apparent modulation depth) carries enough information to represent the modulation as well. In conclusion, in the UOT analytic model, with the help of the weak-scattering approximation, the weak-modulation approximation, diffusion theory, and the Wiener–Khinchin theorem, the relationship between the acoustic amplitude and the modulated light can be obtained.
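These scalings can be checked numerically. The following Python sketch is illustrative only: the slab thickness, $\varepsilon$, and acoustic frequency are assumed values. It evaluates the analytic $G_1(\tau)$ over one acoustic period, computes $I_0$ and $I_1$ by direct averaging, and compares $M_1=I_1/I_0$ with the weak-modulation prediction.

```python
import numpy as np

# Numerical sanity check of the analytic model above (all values assumed).
d_over_lt = 10.0   # slab thickness in units of the transport mean free path
eps = 1e-4         # epsilon = 6 (delta_n + delta_d) (n0 k0 A)^2, assumed small
f_a = 1e6          # acoustic frequency (1 MHz)
w_a, T_a = 2 * np.pi * f_a, 1.0 / f_a

m = 20000
tau = np.arange(m) / m * T_a                          # one full acoustic period
y = eps * (1.0 - np.cos(w_a * tau))
x = np.sqrt(np.maximum(y, 1e-300))                    # avoid 0/0 at tau = 0
G1 = d_over_lt * np.sinh(x) / np.sinh(d_over_lt * x)  # analytic autocorrelation

# I_n = (1/T_a) * integral of cos(n w_a tau) G1(tau) over one period,
# approximated by uniform averaging (exact for periodic integrands).
I0 = G1.mean()
I1 = (np.cos(w_a * tau) * G1).mean()
print(f"M1, numerical        : {I1 / I0:.3e}")
print(f"M1, weak-modulation  : {d_over_lt**2 * eps / 12:.3e}")  # agrees to ~1% here
# Since eps scales as A^2, doubling the acoustic amplitude quadruples M1.
```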
== Single Frequency UOT == Single-frequency UOT leverages the frequency shift induced by the physical displacement of scatterers. This shifts the frequency of the incident light by the ultrasound frequency for photons that travel through the ultrasound focal region, creating so-called "tagged" photons. Measuring the intensity variation of these tagged photons gives information on the optical properties of the ultrasound focal region, so an image can be formed by tracking the intensity of tagged photons as the ultrasound scans the tissue. The most restrictive challenge in practical application of single-frequency UOT for imaging in tissues is the poor signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). To address this, there has been work on integrating different detection mechanisms to attenuate the untagged-photon signal to the point where the tagged-photon signal is more easily measured. The filters are implemented in the measurement pipeline between the light-collection location on the sample and the photodiode used to construct the image. Several filtering methods have been studied in an effort to determine the most promising approach, and other detection methods have been developed to address the same issue. The four most prevalent detection techniques that have been studied are speckle contrast imaging, photorefractive detection, off-axis holography, and spectral hole burning. === Detection Techniques === ==== Speckle Contrast Imaging ==== Speckle contrast imaging is the most straightforward approach to detecting the tagged photons, as it requires no additional equipment or reference beams. The idea of this method is simply to take a long-exposure image of the transmitted light, called the speckle field, with an exposure time much longer than the ultrasound period. Next, the contrast of the speckles, defined as the standard deviation divided by the average intensity, is calculated. This contrast value is correlated with the ratio of tagged to untagged photons contained in a given speckle. As the number of tagged photons increases, the measured contrast decreases, indicating a difference between tissue regions. The challenge in using this method is that it does not directly address the SNR issue present in UOT: because the tagged-photon signal is much lower than the untagged-photon signal, the changes in contrast are correspondingly small across the tissue. Speckle contrast imaging therefore struggles to achieve good performance compared with methods that amplify the tagged signal or attenuate the untagged one. ==== Off-axis Holography ==== Off-axis holography uses interference between the speckle field and a reference beam to improve detection. In this setup, the speckle field and reference beam are superimposed onto a detector, with the reference beam arriving at an incident angle. The interference between these two beams is the recorded measurement. The beating between the untagged photons and the reference beam, which occurs at the ultrasound frequency, averages out over a sufficiently long integration time; as a result, the integrated interference signal contains no untagged-photon intensity at the ultrasound frequency. The fast Fourier transform (FFT) of the integrated interference signal can be taken, and the amplitude present at the ultrasound frequency is due only to the tagged photons. The inverse FFT then retrieves the speckle field containing only the tagged-photon contributions. The need to take the FFT and inverse FFT to obtain images drastically increases the computational load, and since higher-resolution images necessarily increase the dimensionality of the measurement, the computational requirements for high-frame-rate data quickly become prohibitive.
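The frequency-domain selection that this class of methods relies on can be illustrated with a toy single-pixel time series. In the Python sketch below, all field amplitudes, frequencies, and the noise level are assumptions, and a real off-axis system performs this per camera pixel with a spatial carrier; the point is only that the tagged field, being frequency-shifted by the ultrasound, beats against the strong static background at exactly the ultrasound frequency, so that one FFT bin isolates the tagged contribution.

```python
import numpy as np

# Toy single-pixel illustration of isolating tagged light at the ultrasound
# frequency. All amplitudes, frequencies, and noise levels are assumptions.
rng = np.random.default_rng(0)
f_us, f_s, n = 2e6, 50e6, 5000           # ultrasound freq, sampling rate, samples
t = np.arange(n) / f_s                   # chosen so f_us falls on an exact FFT bin

E_ref, E_untag, E_tag = 1.0, 0.30, 0.01  # reference >> untagged >> tagged
phase = rng.uniform(0, 2 * np.pi)        # unknown speckle phase of tagged light

# The tagged field is frequency-shifted by f_us, so its beat against the
# static reference and untagged light oscillates at f_us; untagged light
# contributes only a DC term in this simplified model.
field = E_ref + E_untag + E_tag * np.exp(1j * (2 * np.pi * f_us * t + phase))
intensity = np.abs(field) ** 2 + rng.normal(0, 1e-3, n)  # crude noise stand-in

spectrum = np.abs(np.fft.rfft(intensity)) / n
freqs = np.fft.rfftfreq(n, 1 / f_s)
k = np.argmin(np.abs(freqs - f_us))
print(f"beat amplitude recovered at f_us : {2 * spectrum[k]:.4f}")
print(f"expected 2*(E_ref+E_untag)*E_tag : {2 * (E_ref + E_untag) * E_tag:.4f}")
# The strong static background sits in the DC bin and does not contaminate
# the ultrasound-frequency bin.
```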
==== Photorefractive Detection ==== Photorefractive detection uses materials that exhibit a photorefractive effect to process the transmitted light. The speckle field and a reference beam are superimposed at an angle within the material, where the two beams interfere and generate an index grating. This grating diffracts part of the reference field, which then replicates the initial speckle field. Finally, the speckle field and the diffracted reference replicating it can selectively interfere on a larger detector to obtain the UOT image; by subtracting a baseline image taken without ultrasound modulation, the tagged-photon intensity can be extracted. The primary obstacle for the photorefractive method is the excessively long response time of the material (~100 ms). This is too long for in vivo tissue imaging, where the speckle decorrelation time is ~0.1–1 ms. As a result, while photorefractive detection has relatively good noise characteristics, the difficulty of translating the technology to real tissue imaging has prevented focused development. ==== Spectral Hole Burning ==== Spectral hole burning utilizes materials doped with rare-earth ions that act as a spectral band-pass filter. The material exhibits inhomogeneous broadening, which allows its absorption spectrum to be selectively altered. The material is cryogenically cooled and excited with a pump beam at the desired pass frequency. A certain fraction of the doped ions absorb photons from the pump beam and are excited out of their ground state for a short time interval. During this time, the absorption spectrum is altered such that a spectral "hole" of decreased absorption is burned around the pump-beam frequency. The width of the spectral hole depends on the properties of the inhomogeneous broadening of the material, and the decrease in absorption depends on the pump-beam intensity. Any light that passes through the material while the spectral hole is present is greatly attenuated at frequencies other than the pump frequency. Therefore, by using a pump frequency that closely matches the expected frequency of the tagged-photon signal, the noise from the untagged photons can be effectively attenuated. Compared to the other detection methods, spectral hole burning possesses the best CNR performance. Additionally, this approach offers an excellent étendue compared to the others and is immune to speckle decorrelation. The étendue is essentially a measure of the size of the collection field with respect to both the acceptance angle and the area of the detector. Consequently, spectral hole burning has seen the most work in recent years within the field of UOT. However, practical limitations have stunted its transition into in vivo imaging applications: the method needs rare-earth materials (though this has become less of an issue with progress in materials capabilities), and the filter material must be cryogenically cooled to less than 5 K. == Time-Resolved Frequency-Swept UOT (Forward model) == For single-frequency UOT, the axial resolution along the ultrasonic axis is limited by the elongated ultrasonic focal zone. To improve the axial resolution, the frequency-swept UOT scheme was designed. In this system, the object is placed in a tank filled with scattering medium, with an ultrasound absorber at the bottom of the tank to prevent reflections of the ultrasound. A function generator produces a frequency-swept signal; after passing through a power amplifier and a transformer, this signal drives the ultrasonic transducer, which launches a focused ultrasonic beam of time-varying frequency into the medium and the target. Meanwhile, a laser beam perpendicular to the ultrasonic beam illuminates the scattering medium. On the other side of the light source, a photomultiplier tube (PMT), modulated by a reference signal from the same function generator, detects the transmitted light within the tank and converts the optical signal to an electrical signal. The electrical signal then passes through an amplifier and an oscilloscope and is stored for processing. From the stored data, plots of spectral intensity versus frequency at multiple points can be generated (the first spectrum is generated as a reference, produced by the optical signal far from the object). Each spectrum can then be converted to a 1D image showing the interior of the medium along the acoustic (z) direction, and in the end all the 1D images are pieced together to generate a full view of the inside of the medium. In summary, a frequency-swept (chirped) ultrasonic wave encodes laser light traversing the acoustic axis with position-dependent frequencies, and decoding the transmitted light provides resolution along the acoustic axis. This scheme is analogous to frequency encoding in MRI.
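A toy calculation shows how the chirp maps depth to beat frequency. In the Python sketch below, the chirp rate, sound speed, and depths are illustrative assumptions, not parameters of a reported system: light tagged at depth z carries the transmit chirp delayed by the acoustic time of flight z/v_s, and mixing it with an undelayed reference chirp yields a constant beat at rate·z/v_s, exactly as in FMCW ranging.

```python
import numpy as np

# Toy model of chirped (frequency-swept) axial encoding; all values assumed.
v_s = 1500.0                 # speed of sound in tissue, m/s
rate = 5e9                   # chirp rate, Hz per second
f0, f_s, n = 1e6, 20e6, 2**14
t = np.arange(n) / f_s

def chirp(delay):
    """Transmit chirp as seen after an acoustic time of flight `delay`."""
    tt = t - delay
    return np.cos(2 * np.pi * (f0 * tt + 0.5 * rate * tt**2))

depths = [0.015, 0.030]                        # tagged regions at 15 mm and 30 mm
signal = sum(chirp(z / v_s) for z in depths)   # light tagged at each depth
mixed = signal * chirp(0.0)                    # mix against the reference chirp

spec = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(n, 1 / f_s)
spec[(freqs < 5e3) | (freqs > 3e5)] = 0        # keep only the beat-frequency band
for _ in depths:
    k = np.argmax(spec)
    print(f"peak at {freqs[k]/1e3:6.1f} kHz -> depth {freqs[k]*v_s/rate*1e3:5.1f} mm")
    spec[np.abs(freqs - freqs[k]) < 1e4] = 0   # suppress leakage, find next peak
# Expected beats: rate*z/v_s = 50 kHz (15 mm) and 100 kHz (30 mm).
```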
== Development == UOT was first proposed as a method for virus detection in 2013. Recent advances in UOT (2020 onwards) include (1) the development of coded ultrasound transmissions for SNR gain in acousto-optic imaging (AOI), (2) the development of homodyne time-of-flight AOI, (3) the use of super-resolution techniques to push UOT beyond the acoustic diffraction limit, and (4) the use of coaxial interferometry to better enable modern high-performance cameras for parallel detection of UOT signals. Levi et al. discovered that the use of coded sequences of acoustic pulses can turn the speckle modulation at every time instant into the sum of acoustically modulated regions. Coded-transmission AOI (CT-AOI) can keep the spatial resolution of single-cycle ultrasound pulses while increasing SNR by half the square root of the number of cycles; using 79 cycles, Levi et al. obtained an experimental fourfold increase in SNR. As a follow-up to their previous work, Levi et al. developed a homodyne AOI scheme enabling the detection of tagged light with a single low-gain photodetector, yielding a fourfold SNR increase over more traditional high-gain photodetectors such as photomultiplier tubes. In this homodyne time-of-flight AOI system, the reemitted light is not detected directly but is instead interfered with a reference beam in a homodyne configuration. The interference leads to an optical amplification of the ultrasound-modulated light, enabling its detection with low-gain photodetectors whose bandwidth is higher than the acousto-optic modulation frequency. This setup does not temporally integrate the signal, allowing much more flexibility regarding speckle decorrelation, because the measurement signal can be divided in post-processing to analyze time windows within which the speckles are stable. Doktofsky et al. utilized super-resolution optical fluctuation imaging (SOFI) techniques to gain large improvements in spatial resolution in UOT: the naturally fluctuating speckle grains present in UOT images are analogous to the blinking fluorophores in SOFI, which enables super-resolution. Normally, single-pixel detectors (e.g. photodiodes) are used in UOT; these detectors suffer from limited dynamic range, which causes difficulties at low modulation depth. The modulation depth can be enhanced by using multiple pixels in parallel; the enhancement is equal to N^(1/2), where N is the number of pixels. Modern high-performance cameras have millions of pixels but low temporal resolution (slow frame rate and long exposure time). Lin et al. tackled this issue by designing a system with paired illumination from two co-propagating beams with slightly different optical frequencies.
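As a quick arithmetic check of the two scalings just quoted (the formulas come from the text above; the megapixel count is an arbitrary example):

```python
import math

# Back-of-envelope check of the quoted SNR / modulation-depth scalings.
n_cycles = 79
print(f"coded-transmission SNR gain ~ sqrt(79)/2 = {math.sqrt(n_cycles) / 2:.1f}x")
# ~4.4x, consistent with the experimental ~4x increase reported by Levi et al.

n_pixels = 1_000_000   # a hypothetical 1-megapixel camera for parallel detection
print(f"parallel-detection enhancement ~ sqrt(N) = {math.sqrt(n_pixels):.0f}x")
```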
Wikipedia/Ultrasound-modulated_optical_tomography
Electrical capacitance tomography (ECT) is a method for determining the dielectric permittivity distribution in the interior of an object from external capacitance measurements. It is a close relative of electrical impedance tomography and is proposed as a method for industrial process monitoring. Although capacitance sensing methods were already in widespread use, the idea of using capacitance measurements to form images is attributed to Maurice Beck and co-workers at UMIST in the 1980s. Although usually called tomography, the technique differs from conventional tomographic methods, in which high-resolution images are formed of slices of a material. The measurement electrodes, which are metallic plates, must be sufficiently large to give a measurable change in capacitance. This means that very few electrodes are used, typically eight to sixteen. An N-electrode system can only provide N(N−1)/2 independent measurements, so the technique is limited to producing very low-resolution images of approximate slices (a worked count is shown below). However, ECT is fast and relatively inexpensive. == Applications == Applications of ECT include the measurement of the flow of fluids in pipes and measurement of the concentration of one fluid in another, or the distribution of a solid in a fluid. ECT enables the visualization of multiphase flow, which plays an important role in the technological processes of the chemical, petrochemical and food industries. Due to its very low spatial resolution, ECT has not yet been used in medical diagnostics. Potentially, ECT may have medical applications similar to those of electrical impedance tomography, such as monitoring lung function or detecting ischemia or hemorrhage in the brain. == See also == Three-dimensional electrical capacitance tomography Electrical impedance tomography Electrical resistivity tomography Industrial Tomography Systems Process tomography
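The measurement budget quoted above can be made concrete with a minimal Python sketch (the electrode counts are chosen to match the typical range given in the text):

```python
# Independent capacitance measurements available from an N-electrode ECT
# sensor: each unordered electrode pair gives one reading, i.e. N*(N-1)/2.
for n in (8, 12, 16):
    print(f"{n:2d} electrodes -> {n * (n - 1) // 2:3d} independent measurements")
# 8 -> 28, 12 -> 66, 16 -> 120: far fewer unknowns can be resolved than the
# thousands of pixels in a conventional CT slice, hence the low resolution.
```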
Wikipedia/Electrical_capacitance_tomography
Network tomography is the study of a network's internal characteristics using information derived from endpoint data. The word tomography is used to link the field, in concept, to other processes that infer the internal characteristics of an object from external observation, as is done in MRI or PET scanning (even though the term tomography strictly refers to imaging by slicing). The field is a recent development in electrical engineering and computer science, dating from 1996. Network tomography seeks to map the path data take through the Internet by examining information from "edge nodes", the computers at which the data originate and from which they are requested. The field is useful for engineers attempting to develop more efficient computer networks. Data derived from network tomography studies can be used to increase quality of service by limiting packet loss on links and improving routing. == Recent developments == There have been many published papers and tools in the area of network tomography, which aim to monitor the health of various links in a network in real time. These can be classified into loss and delay tomography. === Loss tomography === Loss tomography aims to find "lossy" links in a network by sending active "probes" from various vantage points in the network or the Internet (a minimal sketch of the underlying linear model is given below). === Delay tomography === The area of delay tomography has also attracted attention in the recent past. It aims to find link delays using end-to-end probes sent from vantage points, which can potentially help isolate links with large queueing delays caused by congestion. == More applications == Network tomography may be able to infer network topology using end-to-end probes. Topology discovery involves a tradeoff between accuracy and overhead: with network tomography, the emphasis is on achieving as accurate a picture of the network as possible with minimal overhead, whereas other topology discovery techniques using SNMP or route analytics aim for greater accuracy with less emphasis on overhead reduction. Network tomography may also find links that are shared by multiple paths (and can thus become potential bottlenecks in the future), and it may improve the control of a smart grid. == See also == Network science Computer network
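The linear model underlying loss tomography fits in a few lines of Python. The sketch below is a hypothetical toy, not a real measurement tool: the three-link topology, the probe paths, and the per-link delivery rates are invented, and a real system must additionally handle probe noise, non-identifiable links, and nonnegativity constraints.

```python
import numpy as np

# Minimal sketch of end-to-end loss tomography (topology and rates invented).
# If link i delivers a probe with probability s_i, a path's success probability
# is the product of its links' s_i, so in log space the measurements are linear
# in the per-link loss x_i = -log(s_i):  A @ x = -log(p_path).

# 3-link chain probed from several vantage points; rows are paths, columns
# are links (1 = the path traverses that link).
A = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)

true_s = np.array([0.99, 0.95, 0.90])   # per-link delivery probabilities (assumed)
p_path = np.exp(A @ np.log(true_s))     # what the end-to-end probes would observe

x, *_ = np.linalg.lstsq(A, -np.log(p_path), rcond=None)
print("estimated link delivery rates:", np.round(np.exp(-x), 4))
print("true link delivery rates     :", true_s)
# Delay tomography uses the same linear framework (sums of per-link delays);
# in both cases rank(A) determines which links are identifiable.
```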
Wikipedia/Network_tomography
Magnetic induction tomography (MIT) is an imaging technique used to image the electromagnetic properties of an object by using the eddy current effect. It is also called electromagnetic induction tomography, electromagnetic tomography (EMT), eddy current tomography, and eddy current testing. == Applications == The method is used in nondestructive testing and geophysics, and has potential applications in medicine. It is also used to generate 3D images of passive electromagnetic properties, which has applications in brain imaging, cryosurgery monitoring in medical imaging, and metal flow visualization in metalworking processes. Recently, eddy current sensors have been used to scan metal additive manufacturing processes layer by layer, producing eddy current tomography images; the company AMiquam has been developing this technology since 2020.
Wikipedia/Magnetic_induction_tomography
Multiscale tomography (or multi-length-scale tomography) is a form of tomography spanning many orders of magnitude in resolution, often utilizing several different forms of tomography together to do so. The forms of tomography combined in the process depend on what is being studied and the level of detail needed. Each form of tomography has an optimal resolution range across which it can function, but many modern materials and applications require information beyond the range of any single form of tomography. Combining information from many forms of tomography can provide a holistic view of the system under study, and is important for computer simulations.
Wikipedia/Multiscale_tomography