The forward problem of electrocardiology is a computational and mathematical approach to study the electrical activity of the heart through the body surface. [1] The principal aim of this study is to computationally reproduce an electrocardiogram (ECG), which has important clinical relevance in defining cardiac pathologies such as ischemia and infarction, or in testing pharmaceutical interventions. Given their important functionality and relatively small invasiveness, electrocardiography techniques are used quite often as clinical diagnostic tests. It is thus natural to try to computationally reproduce an ECG, which means mathematically modelling the cardiac behaviour inside the body. [1] The three main parts of a forward model for the ECG are a model of the electrical activity of the heart, a model of the electrical propagation inside the torso, which is regarded as a passive conductor, and suitable coupling conditions between the two. Thus, to obtain an ECG, a mathematical electrical cardiac model must be considered, coupled with a diffusive model in a passive conductor that describes the electrical propagation inside the torso. [1] The coupled model is usually a three-dimensional model expressed in terms of partial differential equations. Such a model is typically solved by the finite element method for the spatial discretization and by semi-implicit numerical schemes based on finite differences for the time discretization. However, the computational cost of such techniques, especially in three-dimensional simulations, is quite high. Thus, simplified models are often considered, for example solving the heart electrical activity independently from the problem on the torso. To provide realistic results, three-dimensional anatomically realistic models of the heart and the torso must be used. [1] Another possible simplification is a dynamical model made of three ordinary differential equations. [3]
The electrical activity of the heart is caused by the flow of ions across the cell membrane, between the intracellular and extracellular spaces, which determines a wave of excitation along the heart muscle that coordinates the cardiac contraction and, thus, the pumping action of the heart that enables it to push blood through the circulatory system. The modelling of cardiac electrical activity is therefore related to the modelling of the flow of ions at a microscopic level, and to the propagation of the excitation wave along the muscle fibers at a macroscopic level. [1][4] Among the mathematical models at the macroscopic level, Willem Einthoven and Augustus Waller defined the ECG through the conceptual model of a dipole rotating around a fixed point, whose projection on the lead axis determined the lead recordings. A two-dimensional reconstruction of the heart activity in the frontal plane was then possible using Einthoven's limb leads I, II and III as a theoretical basis. [5] Later on, the rotating cardiac dipole was considered inadequate and was replaced by multipolar sources moving inside a bounded torso domain. The main shortcoming of the methods used to quantify these sources is their lack of detail, which is however very relevant for realistically simulating cardiac phenomena. [4] On the other hand, microscopic models try to represent the behaviour of single cells and to connect them through their electrical properties. [6][7][8] These models present challenges related to the different scales that need to be captured, in particular considering that, especially for large-scale phenomena such as re-entry or the body surface potential, the collective behaviour of the cells matters more than that of any single cell. [4] The third option for modelling the electrical activity of the heart is a so-called "middle-out approach", where the model incorporates both lower and higher levels of detail.
This option considers the behaviour of a block of cells, called a continuum cell, thus avoiding scale and detail problems. The resulting model is called the bidomain model, which is often replaced by its simplification, the monodomain model. [4] The basic assumption of the bidomain model is that the heart tissue can be divided into two ohmic conducting continuous media, connected but separated by the cell membrane. These media are called the intracellular and extracellular regions, the former representing the cellular tissue and the latter the space between cells. [2][1] The standard formulation of the bidomain model, including a dynamical model for the ionic current, is the following [2]

\[ \begin{cases} \nabla \cdot ({\boldsymbol{\sigma}}_i \nabla V_m) + \nabla \cdot ({\boldsymbol{\sigma}}_i \nabla u_e) = A_m \left( C_m \dfrac{\partial V_m}{\partial t} + I_{\text{ion}}(V_m, w) \right) + I_{\text{app}} & \text{in } \Omega_H \\ \nabla \cdot \left( ({\boldsymbol{\sigma}}_i + {\boldsymbol{\sigma}}_e) \nabla u_e \right) + \nabla \cdot ({\boldsymbol{\sigma}}_i \nabla V_m) = 0 & \text{in } \Omega_H \\ \dfrac{\partial w}{\partial t} + g(V_m, w) = 0 & \text{in } \Omega_H \end{cases} \]

where \(V_m\) and \(u_e\) are the transmembrane and extracellular potentials respectively, \(I_{\text{ion}}\) is the ionic current, which also depends on a so-called gating variable \(w\) (accounting for cellular-level ionic behaviour), and \(I_{\text{app}}\) is an external current applied to the domain.
Moreover, \({\boldsymbol{\sigma}}_i\) and \({\boldsymbol{\sigma}}_e\) are the intracellular and extracellular conductivity tensors, \(A_m\) is the surface-to-volume ratio of the cell membrane, and \(C_m\) is the membrane capacitance per unit area. Here the domain \(\Omega_H\) represents the heart muscle. [2] The boundary conditions for this version of the bidomain model follow from the assumption that there is no flow of intracellular current out of the heart, which means that

\[ {\boldsymbol{\sigma}}_i \nabla V_m \cdot \mathbf{n} + {\boldsymbol{\sigma}}_i \nabla u_e \cdot \mathbf{n} = 0 \quad \text{on } \Sigma \]

where \(\Sigma = \partial \Omega_H\) denotes the boundary of the heart domain and \(\mathbf{n}\) is the outward unit normal to \(\Sigma\). [2] The monodomain model is a simplification of the bidomain model that, despite some unphysiological assumptions, is able to represent realistic electrophysiological phenomena, at least as far as the transmembrane potential \(V_m\) is concerned. [2][1] The standard formulation is the following partial differential equation, whose only unknown is the transmembrane potential \(V_m\):

\[ \chi C_m \frac{\partial V_m}{\partial t} - \nabla \cdot \left( {\boldsymbol{\sigma}}_i \frac{\lambda}{1+\lambda} \nabla V_m \right) + \chi I_{\text{ion}} = I_{\text{app}} \quad \text{in } \Omega_H \]

where \(\lambda\) is a parameter relating the intracellular and extracellular conductivity tensors. [2] The boundary condition used for this model is [9]

\[ \left( {\boldsymbol{\sigma}}_i \nabla V_m \right) \cdot \mathbf{n} = 0 \quad \text{on } \partial \Omega_H. \]
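To illustrate the kind of computation involved, the following is a minimal one-dimensional sketch of monodomain-style propagation, using explicit finite differences with zero-flux boundaries. A cubic (Nagumo-type) reaction term stands in for the physiological ionic current, and all parameter values are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Dimensionless 1D monodomain-style cable:  dV/dt = D * d2V/dx2 - I_ion(V),
# with zero-flux (no-current) boundaries, solved by explicit finite differences.
nx, dx, dt = 200, 0.1, 0.001
D, a = 1.0, 0.1                      # diffusion coefficient, excitation threshold (assumed)

V = np.zeros(nx)
V[:10] = 1.0                         # initial stimulus at the left end of the fiber

def I_ion(V):
    """Cubic bistable stand-in for the ionic current: rest at V=0, excited at V=1."""
    return 10.0 * V * (V - a) * (V - 1.0)

for _ in range(2000):                # integrate up to t = 2
    lap = np.empty_like(V)
    lap[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
    lap[0] = 2 * (V[1] - V[0]) / dx**2        # ghost-point reflection: zero flux
    lap[-1] = 2 * (V[-2] - V[-1]) / dx**2
    V = V + dt * (D * lap - I_ion(V))
# A depolarization front has propagated rightward from the stimulated region.
```

The explicit scheme is only stable for small enough `dt` (here `dt*D/dx**2 = 0.1`); semi-implicit schemes, as mentioned above, relax this restriction.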
In the forward problem of electrocardiography, the torso is seen as a passive conductor, and its model can be derived starting from Maxwell's equations under a quasi-static assumption. [1][2] The standard formulation consists of a partial differential equation with one unknown scalar field, the torso potential \(u_T\). Basically, the torso model is the following generalized Laplace equation

\[ \nabla \cdot ({\boldsymbol{\sigma}}_T \nabla u_T) = 0 \quad \text{in } \Omega_T, \]

where \({\boldsymbol{\sigma}}_T\) is the conductivity tensor and \(\Omega_T\) is the domain surrounding the heart, i.e. the human torso. [2] As for the bidomain model, the torso model can be derived from Maxwell's equations and the continuity equation after some assumptions. First of all, since the electrical and magnetic activity inside the body is generated at low frequency, a quasi-static assumption can be considered. Thus, the body can be viewed as a passive conductor, which means that its capacitive, inductive and propagative effects can be ignored. [1] Under the quasi-static assumption, Maxwell's equations are [1]

\[ \begin{cases} \nabla \cdot \mathbf{E} = \dfrac{\rho}{\epsilon_0} \\ \nabla \times \mathbf{E} = 0 \\ \nabla \cdot \mathbf{B} = 0 \\ \nabla \times \mathbf{B} = \mu_0 \mathbf{J} \end{cases} \]

and the continuity equation is [1]

\[ \nabla \cdot \mathbf{J} = 0. \]

Since its curl is zero, the electric field can be represented by the gradient of a scalar potential field, the torso potential:

\[ \mathbf{E} = -\nabla u_T \quad \quad (1) \]

where the negative sign means that the current flows from higher to lower potential regions.
[1] The total current density \(\mathbf{J}\) can then be expressed in terms of the conduction current and other applied currents, \(\mathbf{J} = {\boldsymbol{\sigma}}_T \mathbf{E} + \mathbf{J}_{\text{app}}\), so that the continuity equation gives [1]

\[ \nabla \cdot ({\boldsymbol{\sigma}}_T \mathbf{E} + \mathbf{J}_{\text{app}}) = 0. \quad \quad (2) \]

Then, substituting (1) into (2),

\[ \nabla \cdot ({\boldsymbol{\sigma}}_T \nabla u_T) = \nabla \cdot \mathbf{J}_{\text{app}} = I_v \]

in which \(I_v\) is the current per unit volume. [1] Finally, since aside from the heart there is no current source inside the torso, the current per unit volume can be set to zero, giving the generalized Laplace equation, which represents the standard formulation of the diffusive problem inside the torso [1]

\[ \nabla \cdot ({\boldsymbol{\sigma}}_T \nabla u_T) = 0 \quad \text{in } \Omega_T. \]

The boundary conditions account for the properties of the medium surrounding the torso, i.e. the air around the body. Air has essentially null conductivity, which means that no current can flow out of the torso. This translates into the following equation [1]

\[ {\boldsymbol{\sigma}}_T \nabla u_T \cdot \mathbf{n}_T = 0 \quad \text{on } \Gamma_T \]

where \(\mathbf{n}_T\) is the unit outward normal to the torso and \(\Gamma_T\) is the torso boundary, i.e. the torso surface. [1][2] Usually, the torso is considered to have isotropic conductivity, which means that the current flows in the same way in all directions. However, the torso is not an empty or homogeneous envelope, but contains different organs characterized by different conductivity coefficients, which can be obtained experimentally. A simple example of conductivity parameters in a torso that accounts for the bones and the lungs is reported in the following table.
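As a toy illustration of the torso problem, the sketch below solves the generalized Laplace equation on a square "torso" with a prescribed potential on two inner "epicardial" patches (dipole-like Dirichlet data) and a no-flux outer boundary, using finite differences and Jacobi iteration. The geometry, grid size, and conductivity values are made-up assumptions for illustration only.

```python
import numpy as np

n = 41
sigma = np.ones((n, n))              # baseline torso conductivity (arbitrary units)
sigma[8:18, 5:15] = 0.2              # a low-conductivity "lung" block (assumed value)

u = np.zeros((n, n))
heart = np.zeros((n, n), dtype=bool) # cells where the epicardial potential is prescribed
heart[18:23, 14:19] = True
u[18:23, 14:19] = 1.0                # dipole-like epicardial data: +1 on one patch...
heart[18:23, 22:27] = True
u[18:23, 22:27] = -1.0               # ...and -1 on the other

# Face conductivities: averages between neighbouring cells (computed once).
c = sigma[1:-1, 1:-1]
sN = 0.5 * (sigma[:-2, 1:-1] + c)
sS = 0.5 * (sigma[2:, 1:-1] + c)
sW = 0.5 * (sigma[1:-1, :-2] + c)
sE = 0.5 * (sigma[1:-1, 2:] + c)

for _ in range(3000):                # Jacobi sweeps for div(sigma grad u) = 0
    new = (sN * u[:-2, 1:-1] + sS * u[2:, 1:-1]
           + sW * u[1:-1, :-2] + sE * u[1:-1, 2:]) / (sN + sS + sW + sE)
    u[1:-1, 1:-1] = np.where(heart[1:-1, 1:-1], u[1:-1, 1:-1], new)
    # Homogeneous Neumann (no-flux) outer boundary via ghost reflection.
    u[0, :], u[-1, :] = u[1, :], u[-2, :]
    u[:, 0], u[:, -1] = u[:, 1], u[:, -2]
# u now approximates the torso surface potential generated by the epicardial data.
```

In a real forward solver this step is carried out with finite elements on an anatomical torso mesh; the Jacobi/finite-difference combination here only shows the structure of the boundary-value problem.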
[2] The coupling between the heart electrical activity model and the torso model is achieved by means of suitable boundary conditions at the epicardium, i.e. at the interface between the heart and the torso. [1][2] The heart-torso model can be fully coupled, if a perfect electrical transmission between the two domains is assumed, or uncoupled, if the heart electrical model and the torso model are solved separately with a limited or imperfect exchange of information between them. [2] The complete coupling between the heart and the torso is obtained by imposing a perfect electrical transmission condition between them. This is done with the following two equations, which establish a relationship between the extracellular potential and the torso potential [2]

\[ \begin{aligned} u_e &= u_T & &\text{on } \Sigma \\ ({\boldsymbol{\sigma}}_e \nabla u_e) \cdot \mathbf{n}_e &= -({\boldsymbol{\sigma}}_T \nabla u_T) \cdot \mathbf{n}_T & &\text{on } \Sigma. \end{aligned} \]

These equations ensure the continuity of both the potential and the current across the epicardium. [2] Using these boundary conditions, it is possible to obtain two different fully coupled heart-torso models, considering either the bidomain or the monodomain model for the heart electrical activity. From the numerical viewpoint, the two models are computationally very expensive and have similar computational costs. [2] Boundary conditions representing a perfect electrical coupling between the heart and the torso are the classical and most commonly used ones. However, between the heart and the torso lies the pericardium, a double-walled sac containing a serous fluid, which has a specific effect on the electrical transmission.
Considering the capacitive \(C_p\) and resistive \(R_p\) effects of the pericardium, alternative boundary conditions that take this effect into account can be formulated as follows [10]

\[ \begin{aligned} R_p {\boldsymbol{\sigma}}_e \nabla u_e \cdot \mathbf{n} &= R_p C_p \frac{\partial (u_T - u_e)}{\partial t} + (u_T - u_e) & &\text{on } \Sigma \\ {\boldsymbol{\sigma}}_e \nabla u_e \cdot \mathbf{n} &= {\boldsymbol{\sigma}}_T \nabla u_T \cdot \mathbf{n} & &\text{on } \Sigma. \end{aligned} \]

The fully coupled heart-torso model, considering the bidomain model for the heart electrical activity, in its complete form is [2]

\[ \begin{cases} \nabla \cdot ({\boldsymbol{\sigma}}_i \nabla V_m) + \nabla \cdot ({\boldsymbol{\sigma}}_i \nabla u_e) = A_m \left( C_m \dfrac{\partial V_m}{\partial t} + I_{\text{ion}}(V_m, w) \right) + I_{\text{app}} & \text{in } \Omega_H \\ \nabla \cdot \left( ({\boldsymbol{\sigma}}_i + {\boldsymbol{\sigma}}_e) \nabla u_e \right) + \nabla \cdot ({\boldsymbol{\sigma}}_i \nabla V_m) = 0 & \text{in } \Omega_H \\ \dfrac{\partial w}{\partial t} + g(V_m, w) = 0 & \text{in } \Omega_H \\ \nabla \cdot ({\boldsymbol{\sigma}}_T \nabla u_T) = 0 & \text{in } \Omega_T \\ ({\boldsymbol{\sigma}}_i \nabla V_m) \cdot \mathbf{n} + ({\boldsymbol{\sigma}}_i \nabla u_e) \cdot \mathbf{n} = 0 & \text{on } \Sigma \\ u_e = u_T & \text{on } \Sigma \\ ({\boldsymbol{\sigma}}_e \nabla u_e) \cdot \mathbf{n} = ({\boldsymbol{\sigma}}_T \nabla u_T) \cdot \mathbf{n} & \text{on } \Sigma \\ ({\boldsymbol{\sigma}}_T \nabla u_T) \cdot \mathbf{n} = 0 & \text{on } \Gamma_T \end{cases} \]

where the first four equations are the partial differential equations representing the bidomain model, the ionic model and the torso model, while the remaining ones represent the boundary conditions for the bidomain and torso models and the coupling conditions between them. [2] The fully coupled heart-torso model considering the monodomain model for the electrical activity of the heart is more complicated than the bidomain problem. Indeed, the coupling conditions relate the torso potential to the extracellular potential, which is not computed by the monodomain model. Thus, it is necessary to use also the second equation of the bidomain model (under the same assumptions under which the monodomain model is derived), yielding: [2]

\[ \nabla \cdot \left( ({\boldsymbol{\sigma}}_i + {\boldsymbol{\sigma}}_e) \nabla u_e \right) + \nabla \cdot ({\boldsymbol{\sigma}}_i \nabla V_m) = 0 \quad \text{in } \Omega_H. \]

This way, the coupling conditions do not need to be changed, and the complete heart-torso model is composed of two different blocks: the monodomain model with its ionic model for the transmembrane potential, and the problem for the extracellular and torso potentials with the coupling conditions at the epicardium. [2] The fully coupled heart-torso models are very detailed, but they are also computationally expensive to solve. [2] A possible simplification is provided by the so-called uncoupled assumption, in which the heart is considered completely electrically isolated from the torso. [2] Mathematically, this is done by imposing that the current cannot flow across the epicardium, from the heart to the torso, namely [2]

\[ {\boldsymbol{\sigma}}_e \nabla u_e \cdot \mathbf{n} = 0 \quad \text{on } \Sigma. \]

Applying this equation to the boundary conditions of the fully coupled models, it is possible to obtain two uncoupled heart-torso models, in which the electrical model can be solved separately from the torso model, reducing the computational cost. [2] The uncoupled version of the fully coupled heart-torso model that uses the bidomain model to represent the electrical activity of the heart is composed of two separate parts: the bidomain model with its ionic model, complemented by the isolated-heart boundary conditions, and the torso problem, whose boundary data at the epicardium come from the computed extracellular potential. [2] As in the case of the fully coupled heart-torso model that uses the monodomain model, in the corresponding uncoupled model the extracellular potential also needs to be computed. In this case, three different and independent problems must be solved: the monodomain model with its ionic model for the transmembrane potential, the problem for the extracellular potential inside the heart, and the torso problem. [2] Solving the fully coupled or the uncoupled heart-torso models allows one to obtain the electrical potential generated by the heart at every point of the human torso, and in particular on the whole torso surface. Defining the electrode positions on the torso, it is possible to find the time evolution of the potential at such points.
Then, electrocardiograms can be computed, for example according to the 12 standard leads, using the following formulas [2]

\[ \begin{cases} \mathrm{I} = u_T(L) - u_T(R) \\ \mathrm{II} = u_T(F) - u_T(R) \\ \mathrm{III} = u_T(F) - u_T(L) \\ \mathrm{aVR} = \frac{3}{2} \left( u_T(R) - u_W \right) \\ \mathrm{aVL} = \frac{3}{2} \left( u_T(L) - u_W \right) \\ \mathrm{aVF} = \frac{3}{2} \left( u_T(F) - u_W \right) \\ \mathrm{V}_i = u_T(V_i) - u_W \quad i = 1, \dots, 6 \end{cases} \]

where \(u_W = (u_T(L) + u_T(R) + u_T(F))/3\) and \(L, R, F, \{V_i\}_{i=1}^6\) are the standard locations of the electrodes. [2] The heart-torso models are expressed in terms of partial differential equations whose unknowns are functions of both space and time. They are in turn coupled with an ionic model, usually expressed in terms of a system of ordinary differential equations. A variety of numerical schemes can be used to solve these problems. Usually, the finite element method is applied for the space discretization and semi-implicit finite-difference schemes for the time discretization. [1][2] Uncoupled heart-torso models are the simplest to treat numerically, because the heart electrical model can be solved separately from the torso one, so that classical numerical methods can be applied to each of them. This means that the bidomain and monodomain models can be solved, for example, with a backward differentiation formula for the time discretization, while the problems for the extracellular potential and the torso potential can be solved by applying only the finite element method, because they are time independent.
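The lead definitions above translate directly into code. The sketch below assumes the torso potentials at the electrode sites have already been computed by a forward solver; the sample values are placeholders.

```python
def standard_leads(u_T):
    """Compute the 12 standard ECG leads from the torso potentials u_T at the
    electrode sites L, R, F and V1..V6, with u_W the Wilson central terminal."""
    u_W = (u_T["L"] + u_T["R"] + u_T["F"]) / 3.0
    leads = {
        "I":   u_T["L"] - u_T["R"],
        "II":  u_T["F"] - u_T["R"],
        "III": u_T["F"] - u_T["L"],
        "aVR": 1.5 * (u_T["R"] - u_W),
        "aVL": 1.5 * (u_T["L"] - u_W),
        "aVF": 1.5 * (u_T["F"] - u_W),
    }
    for i in range(1, 7):                       # precordial leads V1..V6
        leads[f"V{i}"] = u_T[f"V{i}"] - u_W
    return leads

# Placeholder potentials (in mV) at one time instant.
sample = {"L": 0.3, "R": -0.2, "F": 0.1,
          "V1": 0.05, "V2": 0.12, "V3": 0.2,
          "V4": 0.25, "V5": 0.22, "V6": 0.15}
leads = standard_leads(sample)
```

Note that Einthoven's relation I + III = II and the identity aVR + aVL + aVF = 0 hold automatically from these definitions, which provides a quick consistency check on a computed ECG.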
[1][2] The fully coupled heart-torso models, instead, are more complex and need more sophisticated numerical methods. For example, the fully coupled heart-torso model that uses the bidomain model for the electrical simulation of the cardiac behaviour can be solved with domain decomposition techniques, such as a Dirichlet-Neumann domain decomposition. [2][11] To simulate an electrocardiogram using the fully coupled or uncoupled models, a three-dimensional reconstruction of the human torso is needed. Today, diagnostic imaging techniques such as MRI and CT can provide sufficiently accurate images to reconstruct anatomical human parts in detail and, thus, obtain a suitable torso geometry. For example, the Visible Human Data [13] is a useful dataset to create a three-dimensional torso model detailed with internal organs, including the skeletal structure and muscles. [1] Even if the results are quite detailed, solving a three-dimensional model is usually quite expensive. A possible simplification is a dynamical model based on three coupled ordinary differential equations. [3] The quasi-periodicity of the heart beat is reproduced by a three-dimensional trajectory around an attracting limit cycle in the \((x, y)\) plane.
The principal peaks of the ECG, namely P, Q, R, S and T, are described at fixed angles \(\theta_P, \theta_Q, \theta_R, \theta_S\) and \(\theta_T\), which give the following three ODEs [3]

\[ \begin{aligned} x' &= \alpha x - \omega y \\ y' &= \alpha y + \omega x \\ z' &= -\sum_{i \in \{P,Q,R,S,T\}} a_i \, \Delta\theta_i \exp\left( -\Delta\theta_i^2 / 2b_i^2 \right) - (z - z_0) \end{aligned} \]

with \(\alpha = 1 - \sqrt{x^2 + y^2}\), \(\Delta\theta_i = (\theta - \theta_i) \bmod 2\pi\) and \(\theta = \operatorname{atan2}(y, x)\). The equations can be easily solved with classical numerical algorithms, such as Runge-Kutta methods for ODEs. [3]
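Under these definitions, the three ODEs can be integrated with a classical fourth-order Runge-Kutta scheme, for example as follows. The angles, amplitudes, and widths below are illustrative choices, not values from the text, and the phase difference \(\Delta\theta_i\) is wrapped to \([-\pi, \pi)\), the usual reading of the mod operation in this model.

```python
import math

# Illustrative peak parameters (assumed, not from the text).
THETA = {"P": -math.pi / 3, "Q": -math.pi / 12, "R": 0.0,
         "S": math.pi / 12, "T": math.pi / 2}
A = {"P": 1.2, "Q": -5.0, "R": 30.0, "S": -7.5, "T": 0.75}
B = {"P": 0.25, "Q": 0.1, "R": 0.1, "S": 0.1, "T": 0.4}
OMEGA = 2 * math.pi          # angular velocity: one beat per second
Z0 = 0.0

def rhs(state):
    """Right-hand side of the three-ODE dynamical ECG model."""
    x, y, z = state
    alpha = 1.0 - math.hypot(x, y)       # attraction toward the unit limit cycle
    theta = math.atan2(y, x)
    dz = -(z - Z0)
    for i in THETA:
        # Signed phase difference wrapped to [-pi, pi).
        dtheta = (theta - THETA[i] + math.pi) % (2 * math.pi) - math.pi
        dz -= A[i] * dtheta * math.exp(-dtheta**2 / (2 * B[i]**2))
    return (alpha * x - OMEGA * y, alpha * y + OMEGA * x, dz)

def rk4_step(state, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = rhs(state)
    k2 = rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = rhs(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Integrate one simulated heart beat; z(t) approximates the ECG trace.
state, h = (1.0, 0.0, 0.0), 1e-3
ecg = []
for _ in range(1000):
    state = rk4_step(state, h)
    ecg.append(state[2])
```

The trajectory stays near the unit circle in the \((x, y)\) plane, while the Gaussian terms pull \(z\) up and down as the phase sweeps past each peak angle, producing the P-QRS-T morphology.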
https://en.wikipedia.org/wiki/Forward_problem_of_electrocardiology
The fossa ovalis is a depression in the right atrium of the heart, at the level of the interatrial septum, the wall between the right and left atria. The fossa ovalis is the remnant of a thin fibrous sheet that covered the foramen ovale during fetal development. During fetal development, the foramen ovale allows blood to pass from the right atrium to the left atrium, bypassing the nonfunctional fetal lungs while the fetus obtains its oxygen from the placenta. A flap of tissue called the septum primum acts as a valve over the foramen ovale during that time. After birth, the introduction of air into the lungs causes the pressure in the pulmonary circulatory system to drop. This change in pressure pushes the septum primum against the atrial septum, closing the foramen. [1] The septum primum and atrial septum eventually fuse together to form a complete seal, leaving a depression called the fossa ovalis. By age two, about 75% of people have a completely sealed fossa ovalis. An unfused fossa ovalis is called a patent foramen ovale. Depending on the circumstances, a patent foramen ovale may be completely asymptomatic, or may require surgery. [1] The limbus of the fossa ovalis (annulus ovalis) is the prominent oval margin of the fossa ovalis in the right atrium. It is most distinct above and at the sides of the fossa ovalis; below, it is deficient. A small slit-like valvular opening is occasionally found at the upper margin of the fossa, leading upward beneath the limbus into the left atrium; it is the remains of the fetal aperture, the foramen ovale, between the two atria. Almost immediately after the infant is born, the foramen ovale and ductus arteriosus close. The major changes made by the body occur at the first breath (in the case of heart and lung function) and up to weeks after birth (such as the liver's enzyme synthesis).
As the foramen ovale closes, the edge of the septum secundum in the right atrium becomes the annulus ovalis, and the depression beneath it becomes the fossa ovalis. [2][unreliable medical source?] This enables respiration and circulation independent from the mother's placenta. With the child's first breath, the lungs send oxygenated blood to the left atrium. As a result, pressure in the left atrium is higher than that in the right, and the increased pressure holds the interatrial flap (which covers the foramen ovale) shut, thereby closing the foramen ovale as well. [2] In normal development, the closed foramen ovale fuses with the interatrial wall. During the first breath, vasoconstriction causes the ductus arteriosus to close, and during adult years, tissue occludes what was once the ductus arteriosus, creating the ligamentum arteriosum. [3] Aneurysms can occur in adulthood if the foramen ovale has not closed correctly. An aneurysm happens when an artery becomes enlarged in a localized area due to weakening of the arterial wall. [4] When this type of aneurysm occurs in the area of the fossa ovalis, an enlarged pouch is formed. This pouch can protrude into the right atrium or the left atrium. This aneurysm results from abnormally increased pressure within the heart. Even if the foramen ovale does seal shut, an aneurysm may occur, usually on the side of the right atrium. If the aneurysm stretches too far, it can narrow the opening of the inferior vena cava. [5] This type of aneurysm can result from plaque build-up in the arteries from coronary heart disease, as well as from diseases of the aortic valve or mitral valve. Surgery may be useful in helping to cope with the aneurysm. If the atrial septum does not close properly, it leads to a patent foramen ovale (PFO).
This type of defect generally works like a flap valve, opening during certain conditions of increased pressure in the chest, such as straining during a bowel movement, coughing, or sneezing. With enough pressure, blood may travel from the right atrium to the left. If there is a clot in the right side of the heart, it can cross the PFO, enter the left atrium, and travel out of the heart and to the brain, causing a stroke. If the clot travels into a coronary artery, it can cause a heart attack. [6]
https://en.wikipedia.org/wiki/Fossa_ovalis_(heart)
Foster Kennedy syndrome is a constellation of findings associated with tumors of the frontal lobe. [1] Although Foster Kennedy syndrome is sometimes called "Kennedy syndrome", [2] it should not be confused with Kennedy disease, or spinal and bulbar muscular atrophy, which is named after William R. Kennedy. Pseudo-Foster Kennedy syndrome is defined as one-sided optic atrophy with papilledema in the other eye but with the absence of a mass. [3] The syndrome is defined by the following changes: optic atrophy in the eye on the side of the lesion and papilledema in the contralateral eye. [4] The presence of anosmia (loss of smell) ipsilateral to the eye demonstrating optic atrophy was historically associated with this syndrome, but is now understood not to be strictly associated with all cases. [4] This syndrome is due to optic nerve compression, olfactory nerve compression, and increased intracranial pressure (ICP) secondary to a mass (such as a meningioma or plasmacytoma, usually an olfactory groove meningioma). [5][6] Other symptoms are present in some cases, such as nausea and vomiting, memory loss and emotional lability (i.e., frontal lobe signs). [6] A brain tumor can be visualized very well on a CT scan, but MRI gives better detail and is the preferred study. Clinical localization of brain tumors may be possible by virtue of specific neurologic deficits or symptom patterns. A tumor at the base of the frontal lobe produces inappropriate behavior, optic nerve atrophy on the side of the tumor, and papilledema of the contralateral eye; anosmia on the side of the tumor may be found in certain cases of progressive disease. [4] The treatment, and therefore the prognosis, varies depending upon the underlying tumour. [6] While awaiting surgical removal, any increased intracranial pressure is treated with high-dose steroids (e.g., dexamethasone). [citation needed] The syndrome was first extensively described by Robert Foster Kennedy, an Irish neurologist who spent most of his career working in the United States of America, in 1911.
[7] However, the first mention of the syndrome came from William Gowers in 1893. Schultz-Zehden described the symptoms again in 1905. A later description was written by Wilhelm Uhthoff in 1915. [8]
https://en.wikipedia.org/wiki/Foster_Kennedy_syndrome
The Fothergillian Medal has been awarded by the Medical Society of London since 1787. [1] The first recipient was William Falconer. [1] It was awarded to Edward Jenner in 1803. [2]
https://en.wikipedia.org/wiki/Fothergillian_Medal
The fourth heart sound or S4 is an extra heart sound that occurs during late diastole, immediately before the normal two "lub-dub" heart sounds (S1 and S2). It occurs just after atrial contraction, immediately before the systolic S1, and is caused by the atria contracting forcefully in an effort to overcome an abnormally stiff or hypertrophic ventricle. This produces a rhythm classically compared to the cadence of the word "Tennessee." [1][full citation needed][2] One can also use the phrase "A-stiff-wall" to help with the cadence (A: S4, stiff: S1, wall: S2), as well as the pathology of the S4 sound. [3] The normal heart sounds, S1 and S2, are produced during the closing of the atrioventricular valves and semilunar valves, respectively. The closing of these valves produces a brief period of turbulent flow, which produces sound. [4] The S4 sound occurs, by definition, immediately before S1, while the atria of the heart are vigorously contracting. [5] It manifests as a vibration of 20 to 30 Hz within the ventricle. [5] While the mechanism is not absolutely certain, it is generally accepted that S4 is caused by stiffening of the walls of the ventricles (usually the left), which produces abnormally turbulent flow as the atria contract to force blood into the ventricle. [5][4] This occurs, for example, in conditions that decrease ventricular compliance, such as left ventricular hypertrophy. [4] S4 is sometimes audible in the elderly due to a more rigid ventricle. When loud, it is a sign of a pathologic state, [6] usually a failing left ventricle. If the problem lies with the left ventricle, the gallop rhythm will be heard best at the cardiac apex. It will become more apparent with exercise, with the patient lying on the left-hand side, or with the patient holding expiration.
If the problem is in the right ventricle, the abnormal sound will be most evident on the lower left-hand side of the sternum and will get louder with exercise and quick, deep inspiration. [7] S4 has also been termed an atrial gallop or a presystolic gallop because of its occurrence late in the heart cycle. It is a type of gallop rhythm by virtue of having an extra sound; the other gallop rhythm is called S3. The two are quite different, but they may sometimes occur together, forming a quadruple gallop. If the heart rate is also very fast (tachycardia), it can become difficult to distinguish between S3 and S4, thus producing a single sound called a summation gallop. The S4 heart sound itself does not require treatment; rather, plans should be laid to stop the progression of whatever causes the underlying ventricular dysfunction. The S4 heart sound is a secondary manifestation of a primary disease process, and treatment should be focused on the underlying, primary disease.
https://en.wikipedia.org/wiki/Fourth_heart_sound
Fractional flow reserve (FFR) is a diagnostic technique used in coronary catheterization. FFR measures pressure differences across a coronary artery stenosis (narrowing, usually due to atherosclerosis) to determine the likelihood that the stenosis impedes oxygen delivery to the heart muscle (myocardial ischemia). [ 1 ] Fractional flow reserve is defined as the pressure after (distal to) a stenosis relative to the pressure before the stenosis. [ 2 ] The result is an absolute number; an FFR of 0.80 means that a given stenosis causes a 20% drop in blood pressure. In other words, FFR expresses the maximal flow down a vessel in the presence of a stenosis compared to the maximal flow in the hypothetical absence of the stenosis. During coronary catheterization, a catheter is inserted into the femoral (groin) or radial (wrist) artery using a sheath and guidewire. FFR uses a small sensor on the tip of the wire (commonly a transducer) to measure pressure, temperature and flow to determine the exact severity of the lesion. This is done during maximal blood flow (hyperemia), which can be induced by injecting products such as adenosine or papaverine. A pullback of the pressure wire is performed, and pressures are recorded across the vessel. [ 3 ] When interpreting FFR measurements, higher values indicate a non-significant stenosis, whereas lower values indicate a significant lesion. There is no absolute cut-off point at which an FFR measurement is considered abnormal; however, reviews of clinical trials show a cut-off range between 0.75 and 0.80 has been used when determining significance. [ 4 ] Fractional flow reserve (FFR) is the ratio of maximum blood flow distal to a stenotic lesion to normal maximum flow in the same vessel. It is calculated as the pressure ratio FFR = Pd / Pa, where Pd is the pressure distal to the lesion and Pa is the pressure proximal to the lesion.
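Because FFR reduces to a simple pressure ratio, the calculation and the cut-off interpretation described above can be illustrated with a short script. This is a minimal sketch only: the function names and example pressures are invented for demonstration, and the single 0.80 cut-off is one choice from the 0.75–0.80 range reported in clinical-trial reviews.

```python
def fractional_flow_reserve(p_distal: float, p_proximal: float) -> float:
    """Compute FFR = Pd / Pa from mean pressures measured at maximal hyperemia.

    p_distal:   mean pressure distal to the stenosis (e.g. mmHg)
    p_proximal: mean pressure proximal to the stenosis (same units)
    """
    if p_proximal <= 0:
        raise ValueError("proximal pressure must be positive")
    return p_distal / p_proximal


def classify_ffr(ffr: float, cutoff: float = 0.80) -> str:
    """Classify a lesion against a cut-off; values at or below it are significant."""
    return "hemodynamically significant" if ffr <= cutoff else "non-significant"


# Illustrative pressures: distal 72 mmHg, proximal (aortic) 90 mmHg.
# 72 / 90 = 0.80, i.e. the stenosis causes a 20% pressure drop.
ffr = fractional_flow_reserve(72, 90)
print(round(ffr, 2), classify_ffr(ffr))
```

At the 0.80 cut-off the example lesion is classified as significant; with the stricter 0.75 cut-off (the other end of the reported range) the same 0.80 reading would be non-significant, which is why the document notes that no absolute cut-off exists.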
The decision to perform a percutaneous coronary intervention (PCI) is usually based on angiographic results alone. Angiography can be used for the visual evaluation of the inner diameter of a vessel. In ischemic heart disease, deciding which narrowing is the culprit lesion is not always clear-cut. Fractional flow reserve can provide a functional evaluation by measuring the pressure decline caused by a vessel narrowing. [ 4 ] FFR has certain advantages over other techniques used to evaluate narrowed coronary arteries, such as coronary angiography, intravascular ultrasound or CT coronary angiography. For example, FFR takes into account collateral flow, which can render an anatomical blockage functionally unimportant. Also, standard angiography can underestimate or overestimate narrowing, because it only visualizes contrast inside a vessel. [ 5 ] Finally, when compared to other indices of vessel narrowing, FFR seems to be less vulnerable to variability between patients. [ 6 ] Other techniques can also provide information which FFR cannot. Intravascular ultrasound, for example, can provide information on plaque vulnerability, whereas FFR measures are determined only by plaque thickness. There are newly developed technologies that can assess both plaque vulnerability and FFR from CT by measuring the vasodilatory capacity of the arterial wall. FFR allows real-time estimation of the effects of a narrowed vessel, and allows for simultaneous treatment with balloon dilatation and stenting. On the other hand, FFR is an invasive procedure for which non-invasive (less drastic) alternatives exist, such as cardiac stress testing. In this test, physical exercise or intravenous medication (adenosine/dobutamine) is used to increase the workload and oxygen demand of the heart muscle, and ischemia is detected using ECG changes or nuclear imaging.
In the DEFER study, fractional flow reserve was used to determine the need for stenting in patients with intermediate single-vessel disease. In patients with a stenosis and an FFR of less than 0.75, outcomes were significantly worse. In patients with an FFR of 0.75 or more, however, stenting did not influence outcomes. [ 7 ] The Fractional Flow Reserve versus Angiography for Multivessel Evaluation (FAME) study evaluated the role of FFR in patients with multivessel coronary artery disease. [ 8 ] In 20 centers in Europe and the United States, 1005 patients undergoing percutaneous coronary intervention with drug-eluting stent implantation were randomized to intervention based on angiography alone or on fractional flow reserve in addition to angiography. In the angiography arm of the study, all suspicious-looking lesions were stented. In the FFR arm, only angiographically suspicious lesions with an FFR of 0.80 or less were stented. In the patients whose care was guided by FFR, fewer stents were used (1.9±1.3 per patient, versus 2.7±1.2 in the angiography group). After one year, the primary endpoint of death, nonfatal myocardial infarction, and repeat revascularization was lower in the FFR group (13.2% versus 18.3%), largely attributable to fewer stenting procedures and their associated complications. There was also a non-significant difference in the number of patients with residual angina (81% versus 78%). In the FFR group, hospital stay was slightly shorter (3.4 vs 3.7 days) and procedural costs were lower ($5,332 vs $6,007). FFR did not prolong the procedure (around 70 minutes in both groups).
https://en.wikipedia.org/wiki/Fractional_flow_reserve
Francesco Della Valle (Puccianiello, Caserta, 2 February 1858 – 27 July 1937) was an Italian physician and general who served as General Director of Military Health from 1920 to 1925. After graduating in 1883, Della Valle joined the Military Health Service and in 1888, when he was a young lieutenant, he was assigned to the Ministry of War. Della Valle spent much of his career in the corps, acquiring extensive skills in the organization of the Royal Army's health system and in preventive medicine among the troops. [ 1 ] At the outbreak of the First World War, Della Valle was in charge of the Military Health Office of the Supreme Mobilized Command as a medical colonel, from where he led the effort to keep up with the dramatic daily increase in the needs of the forces at the front. According to him, "it was a matter of coordinating and directing the entire medical resources of the country for the war, of utilizing men and things as much as possible for the proper execution of the service, with sufficient guarantees of breadth and technical capacity". [ 2 ] From mid-1915, Della Valle was a member of the Vigilance Commission set up by the General Intendency of the Supreme Command for "the need to activate and integrate the prophylactic services", facilitating interaction between civil and military health authorities in the fight against infectious diseases in war zones. [ 3 ] This interaction often met with difficulty; Della Valle said that "it must be considered that no service was as much in contrast with the action of the Command as the health service because of its tyrannical requirements for war". [ 4 ] He acknowledged the impossibility of making accurate diagnoses of unconfirmed types of tuberculosis during the expeditious medical examinations for recruitment. [ 5 ] On 9 September 1917, Francesco Della Valle was promoted to Major General for his outstanding merits.
[ 6 ] Over the 41 months of the war, the Italian military medical system, under his coordination, "had to manage the transport, care and hospitalization of over two and a half million wounded and sick people, making use of the Military Health Corps and the Italian Red Cross apparatus (medical personnel and 'Dames of the Red Cross') assisted by voluntary nurses from welfare committees such as those of the Knights of Malta, the Order of Saints Maurice and Lazarus and the Jesuits". [ 7 ] In his role, Della Valle contributed to the development of the health chain for the triage and sorting of the wounded from the battlefront to the most suitable hospitals of different capacities, to the introduction of surgical ambulances (mobile operating rooms of the army, run by highly trained doctors and equipped with several tents and vehicles), to the interaction between the military and civilian health systems, and to the establishment of the Università Castrense [ 8 ] in San Giorgio di Nogaro (UD) for the emergency training of the large number of military doctors needed to assist the wounded. [ 9 ] During the war period, in addition to this organizational and innovative action, Francesco Della Valle encouraged the establishment of consultancies and specialized sections in military hospitals in the war zone, trying to overcome the vision of war medicine as capables à tout faire (capable of doing everything). [ 10 ] He paid particular attention to the containment of camp epidemics and, more generally, to the problem of infectious diseases on the front line that would inevitably affect civil society and territorial public health.
Della Valle was active in fighting the cholera epidemic of the First World War, [ 11 ] for which he was awarded the silver medal for public health merit in 1916, and in the use of hospitals and convalescence depots for the isolation of patients suffering from, or suspected of carrying, infectious diseases, despite considerable difficulties related to the frequent clearing of health facilities due to the needs of operations in war zones. [ 12 ] In 1917, he introduced typhoid vaccinations for mobilized troops. [ 13 ] From 1920 to 1925, after being promoted to lieutenant general, Della Valle headed the newly established General Directorate of Military Health, promoting the reorganization of the Army's health system that emerged from the Great War. In this role, Della Valle organized and chaired the 2nd International Congress of Military Medicine and Pharmacy [ 14 ] in Rome in 1923, gaining appreciation for the preparation of the event and for the quantity and quality of the scientific papers presented. [ 15 ] The congress was an emblematic review of how the war had pushed medicine to improve and to experiment with intervention techniques, hygiene and organizational practices, and vaccination practices. From 1920, Della Valle was an ordinary academician of the Royal Medical Academy of Rome, and from 1923 he was also the honorary president of the Permanent International Military Health Committee. [ 16 ] Della Valle took particular care, especially during and after the war, to bear witness to the activity and sacrifice of the military doctors, career and complementary officers, and health assistants who died, were wounded or were disabled in the war, including through diseases contracted during military service.
In the years following the end of the war, he was also the president of the Committee for the celebration of doctors who died in the war; in this position, Della Valle promoted and oversaw (entrusting its coordination to Medical Captain Federico Bocchetti) a two-year project for the collection of documents and testimonies, which in 1924 resulted in the publication of the 'Libro d'oro. I medici italiani ai loro eroi'. [ 17 ] The last institutional position held by Francesco Della Valle was that of president of the Superior Commission for War Pensions, which he retained until 1 February 1933. [ 18 ] 1. Combattere, Curare, Istruire – Padova "Capitale al fronte" e l'Università Castrense. 19 October 2018 / 6 January 2019 - Padova MUSME Museo di Storia della Medicina. Thematic exhibition promoted by the Department of Culture of the Municipality of Padua and the MUSME Foundation ( https://www.musme.it/combattere-curare-istruire/ and https://padovacultura.padovanet.it/it/attivita-culturali/combattere-curare-istruire )
https://en.wikipedia.org/wiki/Francesco_Della_Valle
Francesco Racanelli (1904–1978) was an Italian doctor, pranotherapist and writer, [ 1 ] and the originator of an unconventional therapy that he called in Italian : medicina bioradiante or "bio-radiant medicine". Francesco Racanelli was born in 1904 in Sannicandro di Bari , Puglia , Italy. [ 1 ] He believed that he possessed a gift which, much later in his life, he called "bio-radiant energy", and that there was a "vital fluid" which "emanated" from "particularly gifted people". [ 2 ] He began to practice on people. As a result, he was prosecuted for the illegal practice of medicine. To avoid further legal problems, he studied medicine [ 3 ] and qualified as a doctor. He worked as a healer and lecturer in Florence. He treated wounded people in Florence during the Liberation of Italy . [ 4 ] Francesco Racanelli died in Orbetello, in 1978. Francesco Racanelli wrote several books, some of which were translated into French and German. They include: This biographical article about an Italian writer or poet is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Francesco_Racanelli
Francisco Lopera (June 10, 1951 – September 10, 2024) was a Colombian neurologist who made major discoveries in the field of Alzheimer's disease. He was a professor at the University of Antioquia in Medellín. He identified the world's largest extended family with Alzheimer's, which he studied for decades, and identified the genetic cause of their disease. [ 1 ] For his studies, he was awarded the $100,000 Potamkin Prize in 2024. Lopera died on September 10, 2024, at the age of 73. [ 2 ]
https://en.wikipedia.org/wiki/Francisco_Lopera
Frank A. Gough (June 28, 1872 – August 15, 1938) was an American orthodontist who graduated from the Angle School of Orthodontia. He was the first person to open an orthodontic practice in the borough of Brooklyn. [ 1 ] [ 2 ] He was born in 1872 in North East, Pennsylvania. He was initially interested in working in the civil service but later became interested in dentistry. He attended New York University College of Dentistry and obtained his dental degree in 1896. He was the fourth dentist ever to receive a license to practice dentistry in the State of New York. Around 1900 he became interested in orthodontics and applied to the Angle School of Orthodontia in St. Louis. He eventually attended the school with his classmates Lloyd Steel Lourie, Herbert A. Pullen and Richard Summa. After his orthodontic course, Gough stayed in Hornellsville, Pennsylvania, to assist a dentist for some time. He then moved to Brooklyn, where he started his orthodontic practice in an office in his home. He was the first person to open a practice of orthodontics in Brooklyn, and he practiced there for the next 30 years. He was also a Fellow of the American College of Dentists and of the Edward H. Angle Society of Orthodontists. In 1927, Gough joined the Brooklyn Rotary Club, and he became its treasurer, vice-president, and president in 1926. He was also a member of the New York State Crippled Children's Society and the New York Mausoleum Association. Gough was married to Allie B. Ellsworth, who died in 1937. After her death, Gough suffered a coronary thrombosis and gave up his practice and his organizational activities. He then returned to his home in North East, Pennsylvania, where he died in 1938. They had two daughters: Helen Gough, who practiced in Brooklyn as an orthodontist, and Mrs. Charles Gough, who lived in New York. This dentistry article is a stub . You can help Wikipedia by expanding it . This biographical article related to medicine in the United States is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Frank_A._Gough
Frank E. Beube (July 1, 1904 – June 14, 1995) [ 1 ] was a Canadian-born American periodontist and a pioneer in the field of periodontics. [ 2 ] Beube was born in Kingston, Ontario, Canada. He graduated from the University of Toronto Faculty of Dentistry in 1930 before emigrating to the United States. [ 3 ] Beube served as director of the Department of Periodontology after having volunteered on the faculty in the 1930s. [ 4 ] Beube died in New York on June 14, 1995, at the age of 90. [ 5 ] In honor of his service to the Department of Periodontology, the postgraduate periodontics conference room on VC-7 of the Columbia University College of Dental Medicine is called the Frank Beube Conference Room. This biographical article related to medicine in the United States is a stub . You can help Wikipedia by expanding it . This dentistry article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Frank_Beube
Frank M. Casto (30 May 1875 – 25 April 1965) was an American orthodontist who attended the Angle School of Orthodontia in 1902. He was the President of the American Association of Orthodontists in 1909 and of the American Dental Association in 1935. [ 1 ] Casto was born in Blanchester, Ohio. He obtained his dental degree in 1898 from Ohio State University College of Dentistry. In 1900 he received his medical degree and thereafter, in 1902, his pharmacy degree, from the Ohio State University College of Medicine and Pharmacy School respectively. He then served as a professor at the school until 1904. [ 2 ] He also served as the dean of the School of Dentistry at Western Reserve University from 1917 to 1937. The annual AAO meeting of 1909 was held at his home in Cleveland, Ohio. He served as president of numerous organizations, such as the Cleveland Dental Association, the Northern Ohio Dental Association, the Ohio State Dental Association, and the American Dental Association (1935–1936). In addition, he served as vice-president of the Pan American Orthodontic Association and was Supreme Grand Master of the Delta Sigma Delta dental fraternity in 1928. He moved to La Jolla, California, in 1937 and practiced orthodontics there until his retirement in 1952. Casto held a reserve commission in the United States Navy and was also an active member of the La Jolla Post of the American Legion and of the Military Order of the World Wars. Casto, along with Dr. McCoy, played an important role in the functioning of the American Association of Orthodontists in its first 10 years of existence. Casto married Florence M. Andrus on 20 February 1902, and they had three children: William, Ruth, and Florence. This dentistry article is a stub . You can help Wikipedia by expanding it . This biographical article related to medicine in the United States is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Frank_M._Casto
Franz Alexander Nissl (9 September 1860, Frankenthal – 11 August 1919, Munich) was a German psychiatrist and medical researcher, noted as a neuropathologist. Nissl was born in Frankenthal to Theodor Nissl and Maria Haas. Theodor taught Latin in a Catholic school and wanted Franz to become a priest; however, Franz entered the Ludwig Maximilian University of Munich to study medicine, and later specialized in psychiatry. One of Nissl's university professors was Bernhard von Gudden. Von Gudden's assistant, Sigbert Josef Maria Ganser, suggested that Nissl write an essay on the pathology of the cells of the cortex of the brain. When the medical faculty offered a competition for a prize in neurology in 1884, Nissl undertook the brain-cortex study. He used alcohol as a fixative and developed a staining technique that allowed the demonstration of several new nerve-cell constituents. Nissl won the prize, and wrote his doctoral dissertation on the same topic in 1885. [ 1 ] Professor von Gudden was the judge in Nissl's essay competition, and he was so impressed with the study that he offered Nissl an assistantship at the Fürstenried castle southwest of Munich, where one of his responsibilities was to care for the mad Prince Otto. Nissl accepted, and remained in that post from 1885 until 1888. There was a small laboratory at the castle, which enabled Nissl to continue his neuropathological research. In 1888 Nissl moved to the institution at Blankenheim. In 1889 he went to Frankfurt as second in position under Emil Sioli (1852–1922) at the Städtische Irrenanstalt (municipal asylum). There he met the neurologist Ludwig Edinger and the neuropathologist Karl Weigert, who was developing a neuroglial stain. This work motivated Nissl to study mental and nervous diseases by relating them to observable changes in glial cells, blood elements, blood vessels and brain tissue in general. In Frankfurt Nissl became acquainted with Alois Alzheimer, and they collaborated over seven years.
They became close friends, [ 2 ] jointly editing the Histologische und histopathologische Arbeiten über die Grosshirnrinde (Histological and Histopathological Studies of the Cerebral Cortex, 1904–1921). In 1895 Emil Kraepelin invited Nissl to become assistant physician at the University of Heidelberg. By 1904 he was a full professor at that institution, and became director of the Department of Psychiatry when Kraepelin moved to Munich. The burden of teaching and administration, combined with poor research facilities, forced Nissl to leave many scientific projects unfinished. He also suffered from a kidney disease. During World War I he was charged with administering a large military hospital. In 1918 Kraepelin again invited Nissl to accept a research position at the Deutsche Forschungsanstalt für Psychiatrie in Munich. After one year in that position, where he performed research alongside Korbinian Brodmann and Walther Spielmeyer, he died in 1919 of kidney disease. Nissl was of small stature, with poor posture, and had a birthmark on the left side of his face. He never married, and his life revolved entirely around his work. [ 3 ] One day, as a practical joke, Nissl (who was an active campaigner against human consumption of alcohol) placed a row of empty beer bottles outside his laboratory and made sure that Kraepelin heard that he could be found lying under his desk, dead drunk. Nissl was a competent pianist. Nazi physician Hugo Spatz (1888–1969) told of their first meeting, when Spatz applied for a position in Nissl's laboratory. Nissl was busy that morning and asked the student to come to his home at twelve. When Spatz came to the house at noon, Nissl was not there, and the housekeeper finally opined that the Professor must have meant twelve midnight, so Spatz returned that night. Nissl was at home then, but Spatz had to wait in the anteroom for half an hour until Nissl had finished the piano sonata that he was playing. The conversation lasted until daybreak.
Nissl was possibly the greatest neuropathologist of his day and also a fine clinician who popularised the use of spinal puncture, [ 4 ] which had been introduced by Heinrich Quincke. Nissl also examined the neural connections between the human cortex and thalamic nuclei; he was in the midst of this study at the time of his death. An example of his research philosophy is taken from his 1896 writings: The Nissl method is the staining of the cell body, and in particular the endoplasmic reticulum. This is done by using various basic dyes (e.g. aniline, thionine, or cresyl violet) to stain the negatively charged RNA blue, and is used to highlight important structural features of neurons. The Nissl substance (rough endoplasmic reticulum) appears dark blue due to the staining of ribosomal RNA, giving the cytoplasm a mottled appearance. Individual granules of extranuclear RNA are named Nissl granules (ribosomes). DNA present in the nucleus stains a similar color.
https://en.wikipedia.org/wiki/Franz_Nissl
Franz Schuh (17 October 1804, Scheibbs, Scheibbs District, Lower Austria – 22 December 1865) was an Austrian pathologist and surgeon. In 1831 he obtained his medical doctorate in Vienna, afterwards serving as an assistant to Joseph Wattmann (1789–1866). In 1836 he worked as a professor at the Lyceum in Salzburg, returning to Vienna the following year as primary surgeon at the general hospital. In 1841 he became an associate professor in Vienna, where in 1842 he was appointed head of the second surgical clinic. In Vienna, he was a colleague of the physician Joseph Škoda (1805–1881), and an instructor to the Austrian-American dermatopathologist Carl Heitzmann (1836–1896). He died in December 1865 from a malignant fever and blood poisoning, possibly due to a septic infection. Franz Schuh was a medical pioneer who advanced scientific surgical practices in Vienna. He is remembered for his pathophysiological research and his investigations of new surgical methods. He is credited with performing the first successful pericardiocentesis (pericardial aspiration) in 1840, and in January 1847 he was the first Austrian physician to use ether as an anesthetic on a human patient. [ 1 ] In 1906, the thoroughfare Franz-Schuh-Gasse in the Favoriten district of Vienna was named in his honor.
https://en.wikipedia.org/wiki/Franz_Schuh_(physician)
Franz Freiherr von Pitha (born Jiří František Piťha , [ 1 ] 8 February 1810 – 29 December 1875) was an Austrian surgeon. He was rector of the Charles University in Prague in 1854–1855. Pitha was born in Řakom near Klatovy (today part of Dolany in the Czech Republic ). In 1836 he received his medical doctorate at Prague , and was later a professor of surgery at Charles University in Prague , and at Josephs Academy (Josephinum) in Vienna , where he was chair of surgery from 1857 to 1874. During the Italian Wars of Independence , Pitha was chief of field medical services. In this role he made advancements in Austrian military hygiene, and also gained experience regarding battle-related injuries. He published a treatise titled Verletzungen und Krankheiten der Extremitäten (Injuries and Diseases of the Extremities), as a result of his war-time experiences. Pitha was instrumental in acquiring a position for Theodor Billroth (1829–1894) at the medical faculty in Vienna, and with Billroth he published an important textbook on surgery called Handbuch der allgemeinen und speciellen Chirurgie mit Einschluss der topographischen Anatomie, Operations- und Verbandlehre (Textbook of General and Specialized Surgery with the Inclusion of Topographical Anatomy, Operations and Bandaging Skills). Pitha was knight of the Order of Leopold and recipient of the Order of the Iron Crown 2nd class. [ 2 ] He was ennobled in 1859 and raised to the baronial rank in January 1875. [ 3 ] He had three daughters with his wife Emilia née Barter. [ 4 ] This biographical article related to medicine in Austria is a stub . You can help Wikipedia by expanding it . This biographical article of a European noble is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Franz_von_Pitha
Freddy Homburger (8 February 1916, Sankt Gallen, Switzerland – 25 September 2001, Cambridge, Massachusetts) was a Swiss-born oncologist. Homburger came to the United States in 1941 to continue his medical education, becoming a citizen in 1952. [ 1 ] In 1948 Homburger was appointed head of the Tufts College Medical School cancer research and cancer control units. [ 2 ] In 1973, Homburger was studying the cause of cancer and its relation to cigarette smoking. He succeeded in inducing laryngeal cancer in hamsters exposed to cigarette smoke; however, the Council for Tobacco Research, which was underwriting his research, forbade him to publish his results as long as he referred to the growths as "cancerous", and threatened to ruin him financially. Homburger, and his suppressed research, gained prominence in 1997 in tobacco-related lawsuits. [ 3 ] [ 4 ] Homburger was also an artist and an aviator, and served as Honorary Consul for Switzerland in Boston from 1966 to 1986. [ 5 ] Homburger and his wife Regina Thürlimann collected art [ 6 ] [ 7 ] and engaged in philanthropic activity. [ 8 ] This article about a Swiss scientist is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Freddy_Homburger
Frederick Newland-Pedley (1855 – 1944) was a British physician and dentist known for his contributions to the fields of military dentistry and dental education. Newland-Pedley was born in 1855. [ 1 ] He studied at Dulwich College and Guy's Hospital, becoming first a physician, then a member of the Royal College of Surgeons of England in 1881 and a fellow of the same in 1885. In 1880 he received his L.D.S. degree from the Royal Dental Hospital in Leicester Square. [ 1 ] At Guy's Hospital he was appointed as a dental surgeon in 1887. [ 1 ] [ 2 ] When the dentist he had been assisting became seriously ill, Newland-Pedley took up his responsibilities and also began the hospital's first dental department at his own expense. [ 1 ] [ 2 ] In 1888 he proposed the idea of a dental school to the hospital's board; the school opened with twelve chairs the next year, in 1889. [ 1 ] [ 3 ] [ 4 ] From February to June 1900, Newland-Pedley volunteered in the British Army during the Boer War, becoming the army's first appointed field dentist. [ 5 ] [ 6 ] [ 7 ] He brought his own supplies and equipment, setting up in a tent to treat soldiers' dental problems. [ 8 ] [ 9 ] Returning from the war, he proposed to the British Army that a more permanent dental service be instituted. [ 10 ] [ 11 ] [ 12 ] This resulted in four dentists being sent on contract to the Boer War. [ 6 ] [ 13 ] During World War I, Newland-Pedley again served as a volunteer dental surgeon, this time at Rouen. [ 1 ] Following the war, he practiced in London as a dentist, ultimately retiring to Italy. He died at Lake Como, Italy, on 4 May 1944. [ 1 ] His headstone can be found along the northern wall of the Santa Maria Assunta church graveyard, located in Santa Maria Rezzonico, on the north-west shore of Lake Como, Italy.
https://en.wikipedia.org/wiki/Frederick_Newland-Pedley
Frederick T. West (July 6, 1893 – January 18, 1989) was an American orthodontist and a graduate of the Dewey School of Orthodontia. He was president of the American Association of Orthodontists in 1954 and in 1965 was selected as the fifth honorary member of the Edward Angle Society of Orthodontists. [ 1 ] He was born in Sacramento, California, and attended Christian Brothers High School (Sacramento, California) in 1910. He attended Saint Mary's College of California, where he received his college degree in 1914. He then attended the College of Physicians and Surgeons Dental School in San Francisco and graduated in 1917. After this he continued his studies at the Dewey School of Orthodontia, where he studied orthodontics for six weeks. In 1919, he started teaching at the Physicians and Surgeons school, where he served as a faculty member for 43 years. The school later changed its name to the University of the Pacific Arthur A. Dugoni School of Dentistry; it survived a financial disaster in 1923, when West saved it from closing by loaning it money. West became the curator of a library which held Dr. Spencer Atkinson's collection of 15,000 skulls. In 1982 he received the Albert H. Ketcham Memorial Award, the highest honor in the field of orthodontics. [ 2 ] The University of the Pacific Arthur A. Dugoni School of Dentistry hosts an annual lecture series in honor of Dr. West. This dentistry article is a stub . You can help Wikipedia by expanding it . This biographical article related to medicine in the United States is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Frederick_T._West
Free Open Access Medical Education (FOAM or FOAMed) refers to a dynamic collection of online resources and tools designed to promote lifelong learning in emergency medicine. It is also a community and an ethos embraced by healthcare professionals globally. The term was coined in June 2012 at the International Conference on Emergency Medicine in Dublin. [ 1 ] The FOAM movement arose in response to the broader Open Educational Resources (OER) movement, reflecting the growing demand for freely available, high-quality educational materials. [ 2 ] It gained momentum with the rapid growth of social media and digital platforms, which revolutionised how medical information is accessed and shared. This shift allowed healthcare professionals to exchange knowledge beyond traditional academic and institutional boundaries, fostering a collaborative global learning environment. [ 3 ] FOAM is defined by its accessibility and flexibility. Resources are freely available to anyone, anywhere, at any time, making it an inclusive tool for medical education. [ 1 ] By leveraging a variety of digital platforms, such as blogs, podcasts, and social media, FOAM fosters an interactive and collaborative learning environment. The movement is characterised by the rapid dissemination of information, enabling healthcare professionals to remain informed about the latest developments in medical science. [ 4 ] The FOAM ecosystem encompasses a diverse range of resources, including educational blogs, podcasts, and social media posts. Other formats, such as online question banks, YouTube videos, and Massive Open Online Courses (MOOCs), also play a significant role in the movement. [ 2 ] These resources cater to various learning styles and preferences, ensuring they have a wide and meaningful impact. [ 5 ] FOAM has profoundly influenced medical education. Surveys suggest that over 90% of both pre-clinical and clinical medical students use FOAM resources on a weekly basis.
[ 6 ] By promoting equitable access to high-quality resources, FOAM has bridged educational gaps and ensured medical knowledge is disseminated widely, particularly in underserved regions. [ 7 ] It has also facilitated the rapid dissemination of clinical knowledge, bridging the gap between research findings and practical application in medical practice. [ 5 ] Despite its benefits, FOAM faces several challenges. Concerns have been raised about the lack of formal peer review and quality control. [ 8 ] Resource creation is often geographically concentrated, potentially excluding perspectives from lower-resource regions. [ 7 ] Additionally, challenges related to intellectual property , regulation, and the influence of prominent figures whose views may dominate discussions have also been noted. [ 5 ] The future of FOAM lies in its integration with traditional medical education curricula . Academic institutions are increasingly involved in creating high-quality resources to ensure a balance between innovation and academic rigour. [ 9 ] Efforts are also underway to develop robust quality assessment tools, such as the Social Media Index, which can standardise the evaluation of FOAM materials. [ 5 ] As the movement grows, it will likely continue to complement traditional educational methods and expand global access to medical knowledge. [ 3 ] The FOAM ecosystem includes several prominent platforms, each offering unique resources for healthcare professionals. These include:
https://en.wikipedia.org/wiki/Free_Open_Access_Medical_Education
A free gingival graft is a type of gingival grafting performed to correct acquired deficiencies of the gum tissue around teeth or dental implants . Besides autologous tissues, xenogeneic collagen matrices are used for gingival augmentation after dental implantation. [ 1 ] Simultaneous injection of stem cells may improve grafting outcomes through enhanced vascularization and epithelialization of the affected tissues. [ 2 ]
https://en.wikipedia.org/wiki/Free_gingival_graft
Freedom House Ambulance Service was the first emergency medical service in the United States to be staffed by paramedics with medical training beyond basic first aid. [ 1 ] [ 2 ] Founded in 1967 to serve the predominantly Black Hill District of Pittsburgh, Pennsylvania , it was staffed entirely by African Americans. [ 3 ] [ 4 ] Freedom House Ambulance Service broke medical ground by training its personnel to previously unheard-of standards of emergency medical care for patients en route to hospitals. [ 3 ] [ 5 ] [ 6 ] The paramedic training and ambulance design standards pioneered in the Freedom House Ambulance Service would set the standard for emergency care nationally and even internationally. [ 2 ] [ 5 ] Despite its successes, the ambulance service was closed eight years after it began operating. [ 5 ] Prior to the mid-1960s, ambulance service in the US was typically provided by either the police or a local funeral home. [ 2 ] [ 5 ] Such services provided, at most, basic first aid and rapid transportation to a hospital. [ 2 ] [ 5 ] [ 4 ] In police-operated ambulances, the ambulance crew would typically load the patient into the back of a police van and rush to the hospital. [ 2 ] The U.S. medical system had yet to incorporate advances in emergency care made in battlefield medicine. Suburbanization in the U.S. following World War II led to more car accidents and more injuries far from hospitals, exacerbating this lack of medical care provided en route to hospitals. [ 6 ] In 1966, the National Academy of Sciences published a white paper titled " Accidental Death and Disability: The Neglected Disease of Modern Society ." [ 6 ] The paper stated that up to 50,000 deaths each year were the result of inadequate ambulance crews and lack of suitable hospitals within range, drawing attention to the need for improved pre-hospital care.
[ 2 ] The severity of the situation in Pittsburgh was brought home when the former Governor of Pennsylvania and former mayor of Pittsburgh, David L. Lawrence , suffered a heart attack and was transported to a local hospital by police. [ 6 ] [ 3 ] Lawrence had no brain activity when he arrived at the hospital and died after being removed from life support, [ 6 ] a death that could have been avoided with adequate pre-hospital care, in the view of the physician who treated him, Peter Safar . [ 6 ] In Pittsburgh, the city police handled ambulance service within the city, transporting patients via paddy wagon, while funeral homes provided ambulance service in the suburbs. [ 5 ] [ 4 ] Wait times were often longer for service in predominantly black neighborhoods, [ 4 ] especially in the economically depressed Hill District . [ 2 ] Additionally, tension between police and the community made many reluctant to call the police. [ 2 ] The program received its initial funding from Lyndon Johnson's War on Poverty and the Maurice Falk Fund. [ 3 ] [ 2 ] [ 5 ] The Falk Fund was headed by Phil Hallen, a former ambulance driver, who was seeking to improve responses to medical emergencies as well as create employment opportunities for African-American men in Pittsburgh. [ 2 ] [ 6 ] [ 7 ] Upon hearing that Hallen was working to improve ambulance service in Pittsburgh, Safar reached out to him. [ 5 ] Safar's daughter had died of an asthma attack following transportation to the hospital without provision of care en route, [ 5 ] and he had previously worked on emergency pre-hospital care, including the development of cardiopulmonary resuscitation and advocating its use by laypeople. [ 5 ] He offered his ideas on how a new standard of care could be provided by the new ambulance service. [ 2 ] [ 5 ] His ideas included intense paramedic training and improved ambulance design. [ 2 ] [ 5 ] Hallen contacted Freedom House Enterprises to help recruit paramedics for the new ambulance service.
[ 5 ] At that time, Freedom House Enterprises worked on civil rights projects including voter registration and organizing NAACP meetings, as well as offering job training and assistance with job searches to black Pittsburghers. [ 2 ] [ 5 ] [ 6 ] Freedom House agreed to partner on the ambulance program. [ 8 ] The first cohort of Freedom House Ambulance Service recruits consisted of 25 black men recruited from the Hill District, a low-income, predominantly black neighborhood . [ 1 ] [ 9 ] At the time, local media referred to residents of the neighborhood as the "unemployables," [ 5 ] and the recruits included men who had suffered long-term unemployment. [ 3 ] Half of the recruits had not graduated high school. [ 3 ] Some had criminal records, including felonies. The recruits also included veterans of the Vietnam War . [ 5 ] [ 6 ] Dr. Peter Safar designed and implemented the paramedics' training, a 32-week, [ 5 ] 300-hour course that included anatomy, physiology, CPR, advanced first aid, nursing, and defensive driving. [ 2 ] Those who had not completed high school were helped in completing their GEDs. [ 6 ] [ 3 ] Dr. Safar worked with Dr. Ron Stewart and Dr. Paul Paris to create a training curriculum that would soon shape paramedicine across the globe. Dr. Safar would soon meet a young and ambitious Dr. Nancy Caroline , who, while completing her medical schooling, would assist Safar, Stewart and Paris in compiling the new curriculum for Freedom House paramedics. This was Emergency Care in the Streets , and Caroline eventually became Freedom House's first medical director. [ 10 ] Stewart and Paris were also in the process of attempting to create a place where people in the Pittsburgh region could come to study emergency medicine. [ citation needed ] The Freedom House Ambulance Service program began in 1967, [ 6 ] and started officially operating in 1968 with two ambulances.
[ 5 ] [ 1 ] Prior to receiving their own ambulances, the Freedom House paramedics were pressed into service to help people injured during the King assassination riots in 1968, riding along with police on ambulance duty. [ 6 ] The city contracted Freedom House Ambulance Service to handle emergency transportation in the downtown area and some predominantly black neighborhoods. [ 2 ] They came to be known for the high standard of care they provided and were frequently requested by callers over the police. [ 3 ] Freedom House Ambulance Service responded to almost 5,800 calls in their first year, [ 2 ] [ 1 ] and transported more than 4,600 patients, primarily in African-American neighborhoods in Pittsburgh. [ 1 ] According to data collected by Dr. Safar, the paramedics saved 200 lives in their first year of operations. [ 2 ] Where slow service to black neighborhoods by the police had been a point of tension, [ 2 ] the Freedom House paramedics had a response time of less than ten minutes in most neighborhoods. [ 5 ] In 1974, Dr. Nancy Caroline became the medical director of Freedom House upon being recruited by Dr. Safar. [ 5 ] She arranged ongoing training for the paramedics in such unprecedented areas as intubation , cardiac care, and I.V. drug administration. [ 3 ] [ 5 ] [ 6 ] The training Dr. Caroline provided would become the basis for the first paramedic curriculum, written by Caroline and adopted by the federal government in 1975. [ 5 ] [ 2 ] The data and studies conducted by Dr. Caroline shaped EMS practices for Magen David Adom . [ 3 ] The Freedom House paramedics' relationships with the communities they served also aided in their effectiveness. According to one documentary maker who chronicled their history: Freedom House [paramedics] had compassion for the community... They told me when you walk into a person’s home, you’re a guest. That’s the No.1 thing they brought to the table: They cared. They addressed everybody by their names.
They respected them and asked permission before providing treatment. [ 6 ] During a deadly surge in heroin use, the paramedics were able to contact local drug dealers and provide information on identifying signs of an overdose. The paramedics also notified them that they would provide medical assistance in case of emergencies without legal repercussions for those who sold or used the drugs. [ 3 ] This effort was followed by a dramatic drop in fatal overdoses in the city. [ 3 ] Freedom House Ambulance Service became a model across the U.S. and internationally, [ 2 ] and was awarded a major grant to develop the first national standards for paramedics. [ 5 ] Miami, Los Angeles, and Jacksonville would all follow the Freedom House model. [ 5 ] Additionally, the ambulance model designed by Dr. Safar and proven through use by Freedom House paramedics was adopted by the National Highway Traffic Safety Administration as the official ambulance standard. [ 3 ] Despite these successes, the Freedom House paramedics faced racism from hospital staff and patients, as well as discrimination by the city government. The paramedics were sometimes assumed by hospital staff to be orderlies and were asked to mop the floor. [ 2 ] White patients were often surprised by or resentful of black paramedics, [ 3 ] and would sometimes refuse to be touched or helped by them. [ 2 ] Peter F. Flaherty , an opponent of the Freedom House Ambulance Service, became mayor in 1970. [ 5 ] The mayor opposed public/private partnerships, believing services paid for by the city should be directly overseen by it. [ 5 ] Phil Hallen of the Falk Fund stated that he believed racism was also a factor in Flaherty's opposition to the service. [ 2 ] Op-eds printed at the time accused the mayor of trying to eliminate the ambulance service to pander to the police union. [ 5 ] Dr.
Safar echoed this view, stating “racial prejudices with white police officers eager to maintain control of ambulances city-wide” were the cause of efforts to end Freedom House's ambulance services in the city. [ 5 ] Freedom House Ambulance Service's request to expand its contract with the city to cover additional parts of the city was denied by the mayor, despite the service's strong record. [ 6 ] [ 5 ] This denied them the chance to serve more affluent neighborhoods in which they would likely have been more able to collect the fees they charged for ambulance service. [ 5 ] During Flaherty's time as mayor, the city began providing payment for the ambulance contract late, [ 5 ] and cut its portion of the ambulance service's operating budget by 50%. [ 2 ] [ 5 ] Flaherty also signed an ordinance barring the use of ambulance sirens in the downtown area, with noise complaints given as the reason. [ 5 ] [ 2 ] [ 3 ] This slowed the paramedics' transport of patients to hospitals as well as their response times, allowing the police to reach more calls before them. [ 5 ] In 1974, the mayor announced plans for a citywide ambulance system to be staffed by police officers trained as paramedics. [ 5 ] Faced with resistance from city council member Eugene DePasquale, the mayor agreed to fund the Freedom House Ambulance Service contract for one more year. [ 5 ] At the end of that year, the mayor announced the creation of a citywide ambulance service to be staffed by non-police paramedics and the end of the contract with Freedom House. [ 2 ] [ 5 ] The Freedom House Ambulance Service closed on October 15, 1975. [ 5 ] All of the paramedics initially hired to staff the new city ambulance service which succeeded it were white. [ 5 ] [ 2 ] The former medical director of the Freedom House Ambulance Service, Dr.
Nancy Caroline , accepted a position as medical director of the new city ambulance service on the conditions that the Freedom House paramedics and dispatchers also be hired and that Freedom House ambulance crews be kept together. [ 5 ] While the Freedom House paramedics were hired, their crews were broken up, in violation of the agreement. [ 5 ] Those with criminal records were fired. [ 6 ] Pass/fail exams were instituted, covering materials the Freedom House paramedics had not been taught, resulting in the dismissal of many. [ 5 ] [ 6 ] Most of those remaining were reassigned to non-medical or non-essential work. [ 2 ] Many were placed in positions overseen by white employees with less experience. [ 6 ] Of the 26 Freedom House employees who joined the city ambulance service, only half remained a year later. [ 5 ] Ultimately, only five remained with the city ambulance service, and only one was promoted into a leadership position. [ 6 ] In the late 1990s, 98% of the Pittsburgh Bureau of Emergency Medical Services staff were white. [ 2 ] There are two documentaries about Freedom House. In January 2023, Pittsburgh local television station WQED released a documentary called "Freedom House Ambulance: the FIRST Responders." [ 11 ] The documentary "Heroes On Call" began streaming on February 4, 2025, on Very Local, a subsidiary of the Hearst Television broadcasting company. [ 12 ] [ 13 ] In the television show The Pitt , a patient featured in season 1, episode 8 was a former Freedom House Ambulance medic. The show is set in the ER of the fictional Pittsburgh Trauma Medical Hospital. The patient, Willie Alexander, is an 81-year-old man with dementia whose pacemaker is discovered to be detached. Despite his dementia, Alexander demonstrates some familiarity with medical terms. The audience later learns that Alexander has extensive medical knowledge from his time as a Freedom House Ambulance medic. The character of Dr.
Robby, the attending physician in the show, describes the Freedom House medics as "the first paramedics in the country," and credits Freedom House with inspiring the modern 911 EMS system.
https://en.wikipedia.org/wiki/Freedom_House_Ambulance_Service
Fremitus is a vibration transmitted through the body. [ 1 ] In common medical usage, it usually refers to assessment of the lungs by the vibration intensity felt on the chest wall ( tactile fremitus ) and/or heard through a stethoscope on the chest wall with certain spoken words ( vocal fremitus ), although there are several other types. When a person speaks, the vocal cords create vibrations ( vocal fremitus ) in the tracheobronchial tree and through the lungs and chest wall, where they can be felt ( tactile fremitus ). [ 2 ] This is usually assessed with the healthcare provider placing the flat of their palms on the chest wall and then asking the patient to repeat a phrase containing low-frequency vowels such as "blue balloons" or "toys for tots" (the original diphthong used was the German word neunundneunzig , but the translation to the English 'ninety-nine' was a higher-frequency diphthong and thus not as effective in eliciting fremitus). An increase in tactile fremitus indicates denser or inflamed lung tissue, which can be caused by diseases such as pneumonia . A decrease suggests air or fluid in the pleural spaces or a decrease in lung tissue density, which can be caused by diseases such as chronic obstructive pulmonary disease or asthma . [ 2 ] Pleural fremitus is a palpable vibration of the wall of the thorax caused by friction between the parietal and visceral pleura of the lungs. [ 3 ] See pleural friction rub for the auditory analog of this sign. Dental fremitus appears when teeth move. This can be assessed by feeling and looking at teeth while the mouth is opened and closed. [ 4 ] Periodontal fremitus occurs in either of the alveolar bones when an individual sustains trauma from occlusion . [ 5 ] It is a result of teeth exhibiting at least slight mobility rubbing against the adjacent walls of their sockets, the volume of which has been expanded ever so slightly by inflammatory responses, bone resorption or both.
As a test to determine the severity of periodontal disease , a patient is told to close his or her mouth into maximum intercuspation and is asked to grind his or her teeth ever so slightly. Fingers placed in the labial vestibule against the alveolar bone can detect fremitus. [ citation needed ] Rhonchal fremitus , also known as bronchial fremitus, is a palpable vibration produced during breathing caused by partial airway obstruction. The obstruction can be due to mucus or other secretions in the airway, [ 6 ] : 411 bronchial hyperreactivity , or tumors. See rhonchus (rhonchi) for the auditory analog of this sign. Tactile fremitus , known by many other names including pectoral fremitus, tactile vocal fremitus, or just vocal fremitus, is a vibration felt on the patient's chest during low frequency vocalization. [ 6 ] : 409 Commonly, the patient is asked to repeat a phrase while the examiner feels for vibrations by placing a hand over the patient's chest or back. Phrases commonly used in English include, 'boy oh boy' and 'toy boat' ( diphthong phrases), as well as 'blue balloons' and 'Scooby-Doo'. 'Ninety-nine' is classically included, however, this is a misinterpretation of the original German report, in which "neunundneunzig" was the low-frequency diphthong of choice. [ 7 ] Tactile fremitus is normally more intense in the right second intercostal space, as well as in the interscapular region, as these areas are closest to the bronchial trifurcation (right side) or bifurcation (left side). Tactile fremitus is pathologically increased over areas of consolidation and decreased or absent over areas of pleural effusion or pneumothorax (when there is air outside the lung in the chest cavity, preventing lung expansion). [ citation needed ] The reason for increased fremitus in a consolidated lung is the fact that the sound waves are transmitted with less decay in a solid or fluid medium (the consolidation) than in a gaseous medium (aerated lung). 
Conversely, the reason for decreased fremitus in a pleural effusion or pneumothorax (or any pathology separating the lung tissue itself from the body wall) is that this increased space diminishes or entirely prevents sound transmission. It has been suggested that the artifacts caused by eliciting tactile fremitus during breast ultrasonography can be used to differentiate between benign and malignant tumors. [ 8 ] Tussive fremitus is a vibration felt on the chest when the patient coughs. [ 6 ] : 411 Pericardial fremitus is a vibration felt on the chest wall due to the friction of the surfaces of the pericardium over each other. See pericardial friction rub for the auditory analog of this sign. [ 9 ] Hydatid fremitus is a vibratory sensation felt on palpating a hydatid cyst . [ 10 ]
https://en.wikipedia.org/wiki/Fremitus
Frenuloplasty is the surgical alteration of a frenulum when its presence restricts range of motion between interconnected tissues. Two of the common sites for a frenuloplasty are:
https://en.wikipedia.org/wiki/Frenuloplasty
The frequency following response ( FFR ), also referred to as the frequency following potential ( FFP ), is an evoked potential generated by periodic or nearly-periodic auditory stimuli. [ 1 ] [ 2 ] Part of the auditory brainstem response (ABR), the FFR reflects sustained neural activity integrated over a population of neural elements: "the brainstem response...can be divided into transient and sustained portions, namely the onset response and the frequency-following response (FFR)". [ 3 ] It is often phase-locked to the individual cycles of the stimulus waveform and/or the envelope of the periodic stimuli. [ 4 ] It has not been well studied with respect to its clinical utility, although it can be used as part of a test battery for helping to diagnose auditory neuropathy . This may be in conjunction with, or as a replacement for, otoacoustic emissions . [ 5 ] In 1930, Wever and Bray discovered a potential that came to be called the "Wever-Bray effect". [ 6 ] [ 7 ] They originally believed that the potential originated from the cochlear nerve , but it was later discovered that the response is non-neural and cochlear in origin, specifically from the outer hair cells . [ 8 ] [ 9 ] This phenomenon came to be known as the cochlear microphonic (CM). The FFR may have been accidentally discovered back in 1930; however, renewed interest in defining the FFR did not occur until the mid-1960s. Several researchers raced to publish the first detailed account of the FFR, but the term "FFR" was coined by Worden and Marsh in 1968 to describe the CM-like neural components recorded directly from several brainstem nuclei (research based on Jewett and Williston's work on click ABRs). [ 2 ] The recording procedures for the scalp-recorded FFR are essentially the same as for the ABR.
A montage of three electrodes is typically utilized: an active electrode, located either at the top of the head or the top of the forehead; a reference electrode, located on an earlobe, the mastoid, or a high vertebra; and a ground electrode, located either on the other earlobe or in the middle of the forehead. [ 10 ] [ 11 ] The FFR can be evoked by sinusoids, complex tones, steady-state vowels, tonal sweeps, or consonant-vowel syllables. The duration of those stimuli is generally between 15 and 150 milliseconds, with a rise time of 5 milliseconds. The polarity of successive stimuli can be either fixed or alternating. There are many reasons for, and effects of, alternating polarity. When stimulus delivery technology is not properly shielded, the electromagnetic acoustic transducer may induce the stimulus directly into the electrodes. This is known as a stimulus artifact, and researchers and clinicians seek to avoid it, as it contaminates the true recorded response of the nervous system. If stimulus polarities alternate, and responses are averaged over both polarities, the stimulus artifact is effectively eliminated. This is because the artifact changes polarity with the physical stimuli, and thus will average to nearly zero over time. Direct physiological responses that follow stimulus polarity, such as the CM, however, will likewise be absent from this average. Subtracting the responses to the two polarities instead yields the portions of the signal canceled out in the average. Such decomposition of the responses is not readily possible if the stimuli have constant polarity. [ 12 ] [ 13 ] Due to the lack of specificity at low levels, the FFR has yet to make its way into clinical settings. Only recently has the FFR been evaluated for encoding complex sound and binaural processing.
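The algebra of this alternating-polarity trick can be sketched numerically. The following is an illustrative simulation only, with invented signal shapes, frequencies, and amplitudes (not taken from real recordings): the response to each polarity is modeled as a polarity-invariant neural component plus a polarity-following component (CM and stimulus artifact) plus noise, so averaging the two polarities cancels the polarity-following part while the half-difference recovers it.

```python
import numpy as np

# Illustrative simulation of alternating-polarity averaging/subtraction.
# All signal shapes and amplitudes are invented for demonstration.
rng = np.random.default_rng(0)
t = np.arange(0, 0.05, 1e-4)  # one 50 ms epoch sampled at 10 kHz

# Polarity-invariant neural FFR component (e.g. envelope following).
neural = np.abs(np.sin(2 * np.pi * 100 * t))
# Polarity-following components: cochlear microphonic + stimulus artifact.
following = 0.5 * np.sin(2 * np.pi * 300 * t)

def record(polarity):
    """One recorded epoch for a stimulus of the given polarity (+1 or -1)."""
    noise = 0.01 * rng.standard_normal(t.size)
    return neural + polarity * following + noise

resp_pos = record(+1)  # e.g. condensation stimuli
resp_neg = record(-1)  # e.g. rarefaction stimuli

# Averaging over both polarities cancels the polarity-following terms,
# leaving the neural FFR; the half-difference recovers CM + artifact.
avg = (resp_pos + resp_neg) / 2
diff = (resp_pos - resp_neg) / 2
```

In practice many epochs of each polarity are averaged first; the single-epoch-per-polarity version above only shows why the sum isolates the neural response and the difference isolates the polarity-following components.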
[ 14 ] [ 15 ] [ 16 ] There may be uses for the information the FFR can provide regarding steady-state, time-variant, and speech signals for better understanding of individuals with hearing loss and its effects, and of people with psychopathology. [ 17 ] [ 18 ] FFR distortion products (FFR DPs) could supplement low-frequency (< 1000 Hz) DPOAEs . [ 1 ] FFRs have the potential to be used to evaluate the neural representation of speech sounds processed by the different strategies employed by users of cochlear implants , primarily identification and discrimination of speech. Also, phase-locked neural activity reflected in the FFR has been successfully used to predict auditory thresholds. [ 14 ] Currently, there is renewed interest in using the FFR to evaluate: the role of neural phase-locking in the encoding of complex sounds in normally hearing and hearing-impaired subjects, the encoding of voice pitch, binaural hearing, and the characteristics of the neural version of cochlear nonlinearity. [ 1 ] Furthermore, it has been demonstrated that the temporal pattern of phase-locked brainstem neural activity generating the FFR may contain information relevant to the binaural processes underlying spatial release from masking (SRM) in challenging listening environments. [ 19 ]
https://en.wikipedia.org/wiki/Frequency_following_response
Frequency Specific Microcurrent (FSM) or Frequency Specific Microcurrent Therapy (FSMT) is the practice of introducing a mild electrical current into an area of damaged soft tissue . Practitioners claim that the introduced current enhances the healing process underway in that same tissue. Critics, such as David Gorski , call proponents' claims that the technique alters body tissue's vibrational amplitude pseudoscience . [ 1 ] Frequencies are applied simultaneously on two channels so that they intersect or cross in the area to be treated. Practitioners hold that both frequencies need to accurately reflect the condition causing the problem (such as inflammation or scarring) and the tissue being affected (such as a nerve or the spinal cord) for the treatment to be successful. [ citation needed ] A 2012 systematic review of physical therapies for Achilles tendinopathy found limited evidence, from a single randomized clinical trial, suggesting FSM may be an effective therapy. [ 2 ] Skeptics note that FSM is another form of vibration medicine and that there is no good evidence that injured tissue takes on a “different vibrational characteristic”. [ 1 ] In addition to the implausibility of the underlying mechanism, critics argue that the treatment lacks a body of research establishing either the phenomenon or the clinical claims. [ 3 ] A 1994 review of electronic devices as potential cancer treatments by the American Cancer Society found the methods questionable and ineffective, and strongly advised against using them. [ 4 ] Another criticism is that the champion of the modality is a discredited chiropractor. [ 5 ]
https://en.wikipedia.org/wiki/Frequency_specific_microcurrent
Frits F.W. Prinzen is an expert on cardiac pacing therapies, both for bradycardia and for heart failure ( cardiac resynchronisation therapy , CRT). He was born July 2, 1954, in Hilversum . He earned a master's degree in medical biology from Utrecht University in 1978, and a PhD in physiology from Maastricht University in 1982. [ 1 ] His main research topic is cardiac mechanics and long-term structural and functional adaptations to various conditions, with emphasis on asynchronous electrical activation and cardiac resynchronisation. [ 2 ] In 1995, he took a sabbatical year during which he worked at Johns Hopkins University in Baltimore, MD, USA, on MRI tagging of the heart. This work was published in a well-cited article in the Journal of the American College of Cardiology . [ 3 ] His work led to improved cardiological treatments, especially in the field of cardiac pacing. Together with cardiologists and industrial partners he improved and developed pacemakers, pacing wires, and implantation methods. [ 4 ] An example was published in NEJM in 2007, [ 5 ] describing the case of a child with heart failure who directly benefited from a change in the site of the pacemaker wire. This finding was later confirmed in a large clinical trial, published in 2013 in Circulation . [ 6 ] In his lab it was shown for the first time that pacing the left side of the interventricular septum maintained cardiac function. [ 7 ] [ 8 ] This pacing strategy has been adopted in clinical practice. [ 9 ] Prinzen was awarded the CARIM commitment award in 2016. [ 10 ] He is co-author of over 280 scientific articles, with over 16,000 citations and an h-index of 70. [ 11 ] He contributed to Clinical Cardiac Pacing, Defibrillation and Resynchronization Therapy . [ 12 ]
https://en.wikipedia.org/wiki/Frits_Prinzen
The frontal lobe is the largest of the four major lobes of the brain in mammals , and is located at the front of each cerebral hemisphere (in front of the parietal lobe and the temporal lobe ). It is separated from the parietal lobe by a groove called the central sulcus and from the temporal lobe by a deeper groove called the lateral sulcus (Sylvian fissure). The most anterior rounded part of the frontal lobe (though not well-defined) is known as the frontal pole, one of the three poles of the cerebrum . [ 1 ] The frontal lobe is covered by the frontal cortex . [ 2 ] The frontal cortex includes the premotor cortex and the primary motor cortex – parts of the motor cortex . The front part of the frontal cortex is made up of the prefrontal cortex . The nonprimary motor cortex is a functionally defined portion of the frontal lobe. There are four principal gyri in the frontal lobe. The precentral gyrus is directly anterior to the central sulcus , runs parallel to it, and contains the primary motor cortex, which controls voluntary movements of specific body parts. Three horizontally arranged subsections of the frontal gyrus are the superior frontal gyrus , the middle frontal gyrus , and the inferior frontal gyrus . The inferior frontal gyrus is divided into three parts – the orbital part , the triangular part and the opercular part . [ 3 ] The frontal lobe contains most of the dopaminergic neurons in the cerebral cortex . The dopaminergic pathways are associated with reward , attention , short-term memory tasks, planning , and motivation . Dopamine tends to limit and select sensory information coming from the thalamus to the forebrain . [ 4 ] The frontal lobe is the largest lobe of the brain and makes up about a third of the surface area of each hemisphere. [ 3 ] On the lateral surface of each hemisphere, the central sulcus separates the frontal lobe from the parietal lobe. The lateral sulcus separates the frontal lobe from the temporal lobe .
The frontal lobe can be divided into a lateral, polar, orbital (above the orbit ; also called basal or ventral ), and medial part. Each of these parts consists of a particular gyrus . The gyri are separated by sulci ; for example, the precentral gyrus lies in front of the central sulcus and behind the precentral sulcus . The superior and middle frontal gyri are divided by the superior frontal sulcus . The middle and inferior frontal gyri are divided by the inferior frontal sulcus . In humans, the frontal lobe reaches full maturity only after the 20s; the prefrontal cortex, in particular, continues maturing into the second and third decades of life, [ 5 ] marking the cognitive maturity associated with adulthood. A small amount of atrophy , however, is normal in the aging person's frontal lobe. Fjell, in 2009, studied atrophy of the brain in people aged 60–91 years. The 142 healthy participants were scanned using MRI . Their results were compared to those of 122 participants with Alzheimer's disease . A follow-up one year later showed there to have been a marked volumetric decline in those with Alzheimer's and a much smaller decline (averaging 0.5%) in the healthy group. [ 6 ] These findings corroborate those of Coffey, who in 1992 indicated that the frontal lobe decreases in volume approximately 0.5–1% per year. [ 7 ] The entirety of the frontal cortex can be considered the "action cortex", much as the posterior cortex is considered the "sensory cortex". It is devoted to action of one kind or another: skeletal movement, ocular movement, speech control, and the expression of emotions. In humans, the largest part of the frontal cortex, the prefrontal cortex (PFC), is responsible for internal, purposeful mental action, commonly called reasoning or prefrontal synthesis . The function of the PFC involves the ability to project future consequences that result from current actions.
PFC functions also include override and suppression of socially unacceptable responses as well as differentiation of tasks. The PFC also plays an important part in integrating longer non-task based memories stored across the brain. These are often memories associated with emotions derived from input from the brain's limbic system . The frontal lobe modifies those emotions, generally to fit socially acceptable norms. [ citation needed ] Psychological tests that measure frontal lobe function include finger tapping (as the frontal lobe controls voluntary movement), the Wisconsin Card Sorting Test , and measures of language , numeracy skills , [ 8 ] and decision making, [ 9 ] all of which are controlled by the frontal lobe. Damage to the frontal lobe can occur in a number of ways and result in many different consequences. Transient ischemic attacks (TIAs), also known as mini-strokes, and strokes are common causes of frontal lobe damage in older adults (65 and over). These strokes and mini-strokes can occur due to the blockage of blood flow to the brain or as a result of the rupturing of an aneurysm in a cerebral artery . Other ways in which injury can occur include traumatic brain injuries incurred following accidents, diagnoses such as Alzheimer's disease or Parkinson's disease (which cause dementia symptoms), and frontal lobe epilepsy (which can occur at any age). [ 10 ] Very often, frontal lobe damage is recognized in those with prenatal alcohol exposure . Common effects of damage to the frontal lobe are varied. Patients who have experienced frontal lobe trauma may know the appropriate response to a situation but display inappropriate responses to those same situations in real life [ citation needed ] . Similarly, emotions that are felt may not be expressed in the face or voice. For example, someone who is feeling happy would not smile, and the voice would be devoid of emotion. 
Along the same lines, though, the person may also exhibit excessive, unwarranted displays of emotion. Depression is common in stroke patients. Also common is a loss of or decrease in motivation. Someone might not want to carry out normal daily activities and would not feel "up to it". [ 10 ] Those who are close to the person who has experienced the damage may notice changes in behavior. [ 11 ] The case of Phineas Gage was long considered exemplary of these symptoms, though more recent research has suggested that accounts of his personality change have been poorly evidenced. The frontal lobe is the same part of the brain that is responsible for executive functions such as planning for the future, judgment, decision-making skills, attention span , and inhibition. These functions can decrease in someone whose frontal lobe is damaged. [ 10 ] Consequences that are seen less frequently are also varied. Confabulation may be the most frequently indicated "less common" effect. In the case of confabulation, someone gives false information while maintaining the belief that it is the truth. In a small number of patients, uncharacteristic cheerfulness can be noted. This effect is seen mostly in patients with lesions to the right frontal portion of the brain. [ 10 ] [ 12 ] Another infrequent effect is that of reduplicative paramnesia , in which patients believe that the location in which they currently reside is a replica of one located somewhere else. Similarly, those who experience Capgras syndrome after frontal lobe damage believe that an identical "replacement" has taken the identity of a close friend, relative, or other person and is posing as that person. This last effect is seen mostly in schizophrenic patients who also have a neurological disorder in the frontal lobe. [ 10 ] [ 13 ] In the human frontal cortex, a set of genes undergo reduced expression after age 40 and especially after age 70. 
[ 14 ] This set includes genes that have key functions in synaptic plasticity important in learning and memory, vesicular transport and mitochondrial function . During aging , DNA damage is markedly increased in the promoters of the genes displaying reduced expression in the frontal cortex. In cultured human neurons, these promoters are selectively damaged by oxidative stress. [ 14 ] Individuals with HIV associated neurocognitive disorders accumulate nuclear and mitochondrial DNA damage in the frontal cortex. [ 15 ] A report from the National Institute of Mental Health says a variant of the COMT gene that reduces dopamine activity in the prefrontal cortex is related to poorer performance and inefficient functioning of that brain region during working-memory tasks, and to a slightly increased risk for schizophrenia . [ 16 ] In the early 20th century, a medical treatment for mental illness , first developed by Portuguese neurologist Egas Moniz , involved damaging the pathways connecting the frontal lobe to the limbic system . A frontal lobotomy (sometimes called frontal leucotomy) successfully reduced distress but at the cost of often blunting the subject's emotions, volition and personality . The indiscriminate use of this psychosurgical procedure, combined with its severe side effects and a mortality rate of 7.4 to 17 per cent, [ 17 ] earned it a bad reputation. The frontal lobotomy has largely died out as a psychiatric treatment. More precise psychosurgical procedures are still used, although rarely. They may include anterior capsulotomy (bilateral thermal lesions of the anterior limbs of the internal capsule ) or the bilateral cingulotomy (involving lesions of the anterior cingulate gyri ) and might be used to treat otherwise untreatable obsessional disorders or clinical depression . 
Theories of frontal lobe function can be separated into four categories: Other theories include: It may be highlighted that the theories described above differ in their focus on certain processes/systems or construct-lets. [ clarification needed ] Stuss (1999) remarks that the question of homogeneity (single construct) or heterogeneity (multiple processes/systems) of function "may represent a problem of semantics and/or incomplete functional analysis rather than an unresolvable dichotomy" (p. 348). However, further research will be needed to show whether a unified theory of frontal lobe function that fully accounts for the diversity of functions is achievable. Many scientists had thought that the frontal lobe was disproportionately enlarged in humans compared to other primates. This was thought to be an important feature of human evolution and seen as the primary reason why human cognition differs from that of other primates. However, this view in relation to great apes has since been challenged by neuroimaging studies. Using magnetic resonance imaging to determine the volume of the frontal cortex in humans, all extant ape species, and several monkey species, it was found that the human frontal cortex was not relatively larger than the cortex of other great apes , but was relatively larger than the frontal cortex of lesser apes and the monkeys. [ 22 ] The higher cognition of humans is instead seen to relate to a greater connectedness given by neural tracts that do not affect the cortical volume. [ 22 ] This is also evident in the pathways of the language network connecting the frontal and temporal lobes. [ 23 ]
https://en.wikipedia.org/wiki/Frontal_lobe
Fronto-cerebellar dissociation is the disconnection and independent function of frontal and cerebellar regions of the brain . It is characterized by inhibited communication between the two regions, and is notably observed in cases of ADHD , schizophrenia , alcohol use disorder , and heroin use. The frontal and cerebellar regions make distinctive contributions to cognitive performance, with the left-frontal activations being responsible for selecting a response to a stimulus , while the right-cerebellar activation is responsible for the search for a given response to a stimulus. Left-frontal activation increases when there are many appropriate responses to a stimulus, and right-cerebellar activation increases when there is a single appropriate response to a stimulus. A person with dissociated frontal and cerebellar regions may have difficulties with selecting a response to a stimulus, or difficulties with response initiation. [ 1 ] Fronto-cerebellar dissociation can often result in either the frontal lobe or the cerebellum becoming more active in place of the less active region as a compensatory effect. [ 2 ] [ 3 ] Neuroanatomical studies in non-human primates have shown connections between the cerebellum and non-motor cortical areas of the frontal lobe. [ 4 ] Fronto-cerebellar circuitry is important to processes such as language , memory , and thought . Positron emission tomography has shown that when a subject was shown a noun and asked to state an associated verb, left-frontal and right-cerebellar activation was greater than if asked to simply speak the noun aloud. Generating an associated word required more fronto-cerebellar coactivation, indicating a difference between semantic retrieval and verbal working memory . Additionally, completing a word when presented a stem of the word resulted in greater left-frontal activation when there were many possible completions to the stem. 
Right-cerebellar activation was greater when there were fewer possible completions to the stem. These results indicate that the left-frontal activations correlate with selection of a response among many possible responses. [ 1 ] [ 5 ] Two distinct fronto-cerebellar pathways have been identified through evoking and measuring field potentials on the cerebellar surface of rats . The first path originates from the PrL (prelimbic) sub-region of the mPFC ( medial prefrontal cortex ), and the second originates from the M2 region ( premotor cortex ). The path from the PrL cortex was found to be 5 milliseconds slower than the path from the M2 cortex. This difference in speed was part of the evidence supporting the idea of two pathways from independent origins. In addition, larger amplitude responses were recorded during PrL stimulation, further supporting that the pathways were separate. [ 6 ] The PrL cortex and M2 cortex are involved in actions such as eyeblink conditioning , action initiation and termination, conflict-monitoring between automatic and voluntary behavioral strategies, attention, and direct and indirect eye movement control. They are also involved in even higher cognitive functions such as stimulus-outcome encoding and automatization of recurrent actions. [ 7 ] There is a reduction in functional neurological connectivity in alcohol dependent subjects, specific to the fronto-cerebellar circuits. Similar dissociation is not seen between prefrontal and premotor cortex, nor between parietal cortex and cerebellum, nor between temporal cortex and cerebellum in people who drink excessive amounts of alcohol. In alcoholic subjects, there is often decreased glucose metabolic rates in the frontal lobe, irregular glucose metabolism in frontal, parietal, and cerebellar regions, irregular concentrations of N-acetyl-aspartate and choline in the cerebellum, and abnormal grey matter volume in the nodes of fronto-cerebellar circuits. 
[ 8 ] Studies indicate reorganization of fronto-cerebellar circuitry and fronto-cerebellar dysfunction among heroin addicted subjects. In task related fMRI , addicts showed an inverse correlation between cerebellum activation and prefrontal cortex activity. This indicates that when chronic use damages the prefrontal cortex, the importance of the cerebellum to external incentive stimuli increases. This drug-induced damage of fronto-cerebellar circuitry may result in the cerebellum taking a larger role in long-term emotional memory, behavioral sensitization , and inflexible behavior. [ 9 ] In healthy individuals, fronto-cerebellar neural networks mediate selective and sustained attention. Children and adults with Attention Deficit Hyperactivity Disorder ( ADHD ) show reduced functional connectivity relative to healthy controls in fronto-cerebellar networks. Such dissociation is correlated with the characteristic functions of ADHD such as set-shifting and set maintenance , higher level and selective attention, interference control, motor inhibition, integration across space and time, planning, decision making, temporal foresight, and working memory. In individuals with ADHD, there is often increased cerebellar activity due to a compensatory effect related to reduced frontal activity. [ 2 ] [ 3 ] Fronto-cerebellar dissociation has been associated with higher cognitive defects in individuals with schizophrenia. Behaviors such as anhedonia , the inability to experience pleasure from activities usually found enjoyable, and ambivalence , the state of having simultaneous, conflicting feelings toward a person or thing, are both attributed to fronto-cerebellar abnormalities in schizophrenic patients. [ 10 ] In healthy individuals, infant motor development is characterized by increasing grey matter density in the fronto-cerebellar pathways. 
Schizophrenic individuals often have delayed motor development, which can result in fronto-cerebellar abnormalities and decreased executive function in adulthood. [ 11 ]
https://en.wikipedia.org/wiki/Fronto-cerebellar_dissociation
Frontotemporal lobar degeneration ( FTLD ) is a pathological process that occurs in frontotemporal dementia . It is characterized by atrophy in the frontal lobe and temporal lobe of the brain , with sparing of the parietal and occipital lobes . [ 1 ] [ 2 ] Common proteinopathies that are found in FTLD include the accumulation of tau proteins and TAR DNA-binding protein 43 (TDP-43). Mutations in the C9orf72 gene have been established as a major genetic contribution to FTLD, although defects in the granulin (GRN) and microtubule-associated proteins (MAPs) are also associated with it. [ 3 ] There are three main histological subtypes found at post-mortem: Two groups independently categorized the various forms of TDP-43 associated disorders. Both classifications were considered equally valid by the medical community, but the physicians and researchers in question have jointly proposed a compromise classification to avoid confusion. [ 6 ] In December 2021 the structure of TDP-43 was resolved with cryo-EM [ 7 ] [ 8 ] but shortly after it was argued that in the context of FTLD-TDP the protein involved could be TMEM106B (which has also been resolved with cryo-EM), rather than TDP-43. [ 9 ] [ 10 ] There have been numerous advances in descriptions of genetic causes of FTLD, and the related disease amyotrophic lateral sclerosis . Mutations in all of the above genes cause a very small fraction of the FTLD spectrum. Most of the cases are sporadic (no known genetic cause). For diagnostic purposes, magnetic resonance imaging (MRI) and ([18F]fluorodeoxyglucose) positron emission tomography (FDG-PET) are applied. They measure either atrophy or reductions in glucose utilization. The three clinical subtypes of frontotemporal lobar degeneration, frontotemporal dementia, semantic dementia and progressive nonfluent aphasia , are characterized by impairments in specific neural networks. 
[ 17 ] The first subtype, frontotemporal dementia, mainly affects a frontomedian network and impairs social cognition . Semantic dementia is mainly related to the inferior temporal poles and amygdalae ; brain regions enabling conceptual knowledge, semantic information processing, and social cognition , whereas progressive nonfluent aphasia affects the entire left frontotemporal network for phonological and syntactical processing. [ citation needed ] There is no known treatment. United States Senator Pete Domenici ( R - NM ) suffered from FTLD, and the illness was the main reason for his October 4, 2007 announcement of retirement at the end of his term in office. [ 18 ] American film director, producer, and screenwriter Curtis Hanson died as a result of FTLD on September 20, 2016. [ 19 ] British journalist Ian Black died from the disease on January 22, 2023. [ 20 ]
https://en.wikipedia.org/wiki/Frontotemporal_lobar_degeneration
The frozen section procedure is a pathological laboratory procedure to perform rapid microscopic analysis of a specimen. It is used most often in oncological surgery . [ 1 ] The technical name for this procedure is cryosection . The microtome device that cold cuts thin blocks of frozen tissue is called a cryotome . [ 2 ] Slides produced by frozen section are of lower quality than those produced by formalin-fixed, paraffin-embedded tissue processing. While a diagnosis can be rendered in many cases, fixed tissue processing is preferred in many conditions for more accurate diagnosis. The intraoperative consultation is the name given to the whole intervention by the pathologist , which includes not only frozen section but also gross evaluation of the specimen, examination of cytology preparations taken on the specimen (e.g. touch imprints), and aliquoting of the specimen for special studies (e.g. molecular pathology techniques, flow cytometry). The report given by the pathologist is often limited to a "benign" or "malignant" diagnosis, and communicated to the surgeon operating via intercom. When operating on a previously confirmed malignancy, the main purpose of the pathologist is to inform the surgeon if the resection margin is clear of residual cancer, or if residual cancer is present at the resection margin. The method of processing is usually done with the bread loafing technique, but margin-controlled surgery ( CCPDMA ) can be performed using a variety of tissue cutting and mounting methods, including Mohs surgery . The frozen section procedure as practiced today in medical laboratories is based on the description by Dr Louis B. Wilson in 1905. Wilson developed the technique from earlier reports at the request of Dr William Mayo , surgeon and one of the founders of the Mayo Clinic . [ 3 ] Earlier reports by Dr Thomas S. 
Cullen at Johns Hopkins Hospital in Baltimore also involved frozen section, but only after formalin fixation, and pathologist Dr William Welch, also at Hopkins, experimented with Cullen's procedure but without clinical consequences. Hence, Wilson is generally credited with truly pioneering the procedure (Gal & Cagle, 2005). [ 4 ] The key instrument for cryosection is the cryostat , which is essentially a microtome inside a freezer. The microtome can be compared to a very accurate "deli" slicer, capable of slicing sections as thin as 1 micrometre. The usual histology slice is cut at 5 to 10 micrometres. The surgical specimen is placed on a metal tissue disc, which is then secured in a chuck and frozen rapidly to about –20 to –30 °C. The specimen is placed in a gel-like embedding medium, usually OCT , which consists of polyethylene glycol and polyvinyl alcohol ; this compound is known by many names and when frozen has the same density as frozen tissue. At this temperature, most tissues become rock-hard. Usually a lower temperature is required for fat or lipid rich tissue. Each tissue has a preferred temperature for processing. Subsequently, it is cut frozen with the microtome portion of the cryostat, the section is picked up on a glass slide and stained (usually with hematoxylin and eosin , the H&E stain ). The preparation of the sample is much more rapid than with the traditional histology technique (around 10 minutes vs 16 hours). However, the technical quality of the sections is much lower. The entire laboratory can occupy a space of less than 9 square feet (0.84 m 2 ), and minimal ventilation is required compared to a standard wax embedded specimen laboratory. [ citation needed ] Steps of cryotomy: The principal use of the frozen section procedure is the examination of tissue while surgery is taking place. This may be for various reasons. In the performance of Mohs surgery , it is a simple method for real-time margin control of a surgical specimen. 
If a tumor appears to have metastasized , a sample of the suspected metastasis is sent for cryosection to confirm its identity. This will help the surgeon decide whether there is any point in continuing the operation. Usually, aggressive surgery is performed only if there is a chance to cure the patient. If the tumor has metastasized, surgery is usually not curative, and the surgeon will choose a more conservative surgery, or no resection at all. If a tumor has been resected but it is unclear whether the resection margin is free of tumor, an intraoperative consultation is requested to assess the need to make a further resection for clear margins. In a sentinel node procedure , a sentinel node containing tumor tissue prompts a further lymph node dissection, while a benign node will avoid such a procedure. [ citation needed ] If surgery is explorative, rapid examination of a lesion might help identify the possible cause of a patient's symptoms. It is important to note, however, that the pathologist is very limited by the poor technical quality of the frozen sections. A final diagnosis is rarely offered intraoperatively. [ citation needed ] Rarely, cryosections are used to detect the presence of substances lost in the traditional histology technique, for example lipids. They can also be used to detect some antigens masked by formalin. The cryostat is available in a small portable device weighing less than 80 lb (36 kg), to a large stationary device 500 lb (230 kg) or more. The entire histologic laboratory can be carried in one portable box, making frozen section histology a possible tool in primitive medicine. A Cochrane systematic review published in 2016 analysed all studies that reported diagnostic accuracy of frozen sections in women undergoing surgery for suspicious tumor in ovary. The review concluded that for tumors that were clearly either benign or malignant on frozen section, the accuracy of the diagnosis was good, as confirmed later by regular biopsy. 
On the contrary, where the frozen section diagnosis was a borderline tumor, neither confirming nor ruling out cancer, the diagnosis was less accurate. The review suggests that in such situations of uncertainty, surgeons may choose to perform additional surgery in this group of women at the time of their initial surgery in order to reduce the need for a second operation, as on average one in five of these women were subsequently found to have cancer. [ 5 ] An ultracryotome , a device very similar to the cryotome, can cut ultrathin blocks of tissue, and that tissue can be observed by transmission electron microscopy . The cutting thickness of an ultracryotome is on the order of tens of nanometers. The ultrastructural properties can be studied without embedding the tissue, so the molecular preservation is better. [ 6 ]
https://en.wikipedia.org/wiki/Frozen_section_procedure
Full arch restoration in dentistry refers to the comprehensive reconstruction or rehabilitation of an entire dental arch , which can include all teeth in the upper or lower jaw . [ 1 ] [ 2 ] This procedure is also known as full mouth reconstruction or full mouth rehabilitation. [ 3 ] [ 4 ] Full arch restoration involves creating a single prosthesis to replace 10 to 14 teeth. Typically, the front areas of the jaw maintain more bone volume suitable for implants, whereas the back regions often suffer greater bone loss. This sequence occurs due to the typical loss of molars initially, followed by premolars, while the front teeth tend to remain intact for the longest duration. As time passes, there is a noticeable reduction in both the height and width of the alveolar ridge following tooth loss. [ 5 ] The indications for full-arch restoration include: The two main types of full-arch restorations in dentistry are fixed implant-supported restorations and removable implant-supported overdentures. Prosthetics can be temporary or permanent . Temporary prosthetics are essential in implant-supported full-arch restorations. Temporary prosthetics in full arch restoration refer to provisional dental appliances that are used to replace missing teeth during the healing phase after implant surgery. These temporary prosthetics are designed to provide immediate aesthetics and function while the final permanent prosthesis is being fabricated. They are typically worn for a period of several months until the implants have fully integrated with the jawbone and the final restoration can be placed. Temporary prosthetics help maintain the patient's appearance and ability to eat and speak comfortably during the healing process. Permanent prosthetics in full arch restoration are the final, long-term dental appliances used to replace missing teeth and restore function and aesthetics in patients with extensive tooth loss. 
These prosthetics are custom-designed and fabricated to fit precisely onto dental implants that have integrated with the jawbone. Permanent prosthetics can include fixed dental bridges, implant-supported dentures, or full-arch implant-supported prostheses. They are typically made from durable materials such as ceramic, zirconia, or metal alloys, and are designed to closely resemble natural teeth in both appearance and function. [ 6 ] Approaches for securing a prosthesis onto a bar: The choice of full-arch restoration depends on factors such as the patient's oral health, bone structure, budget, and treatment preferences. A thorough evaluation by a dentist or prosthodontist is necessary to determine the most suitable treatment plan for each individual case. Implants featuring a prominent threading pattern are especially important when being inserted into a socket that was recently vacated by an extracted tooth. This approach ensures initial stability, relying on 3–4 mm of the implant's tip securely fitting into the bone tissue. The size and length of these implants are chosen according to the specific clinical scenario, taking into account the patient's anatomical characteristics and the state of the bone tissue. The initial stage commences with the acquisition of 3D representations of the patient's jaw by merging digital data from CBCT scans with optical data gathered from an intraoral scanner. Various software platforms excel in handling this data, spanning from converting CBCT images into 3D files to creating prosthetic models. Leading software choices for these processes include Dental Wings, 3Shape and Exocad. Additionally, documenting the soft tissue state and existing teeth is crucial, accomplished through photographic records. Following this, the expert needs to strategize the placement of implants and generate a preliminary model of the dentition. It is vital to position the screw shaft exits on the inner aspect of the prosthesis. 
Precise digital representations of implants, screws, and abutments are critical at this juncture to guarantee their accurate positioning within the digital jaw model. Once the suitable implants have been chosen and their positions planned, the next step involves crafting a navigation template, often referred to as a surgical template. The template plays a crucial role in ensuring the precise placement of implants according to the digital blueprint and at the proper angles. The surgical or navigation template, together with the provisional prosthesis, undergoes thorough assessment and any required adjustments using either a plaster model or a 3D-printed representation of the jaw. Both models, encompassing the restorations, need to be validated within an articulator to confirm their precision. The procedure adapts according to the initial clinical state. For patients with a prolonged absence of teeth, the process typically follows a straightforward approach. The benefits include: Contraindications: Rehabilitation treatments involving full arch dental implants may encounter complications and failures. In general, complications may be related to the patient's systemic compromise, increased functional demand, surgical technique, post-operative care, design and type of prosthesis, etc. The overall success rate for dental implants is between 90 and 100% according to the study. [ 12 ] Common prosthetic issues following the installation of an implant-supported prosthesis include mucositis, loosening or breakage of the abutment screw or prosthetic parts, and fracture of the acrylic or porcelain structure. Although most complications resolve favorably in follow-up appointments, it is essential to establish an adequate surgical and prosthetic management protocol to achieve predictable and successful long-term results. [ 13 ] [ 14 ]
https://en.wikipedia.org/wiki/Full_arch_restoration
Full mouth disinfection typically refers to an intense course of treatment for periodontitis usually involving scaling and root planing in combination with adjunctive use of local antimicrobial adjuncts to periodontal treatment such as chlorhexidine in various ways of application. The aim is to complete the debridement of all periodontal pocket areas within a short time frame, such as 24 hours, in order to minimize the chance of reinfection of the pockets with pathogens coming from other oral niches such as the tongue, tonsils and non-treated periodontal pockets. Eberhard (2022) [ 1 ] published a Cochrane review (systematic review and meta-analysis ) which found modest benefit for full mouth disinfection, but the superiority (or otherwise) of the intervention had not at the time of review been conclusively demonstrated. Current recommendations support its use as equal and equivalent to other established effective treatment modalities.
https://en.wikipedia.org/wiki/Full_mouth_disinfection
Fulminant ( / ˈ f ʊ l m ɪ n ən t / ) is a medical descriptor for any event or process that occurs suddenly and escalates quickly, and is intense and severe to the point of lethality, i.e., it has an explosive character. [ 1 ] The word comes from Latin fulmināre , to strike with lightning . There are several diseases described by this adjective: Beyond these particular uses, the term is used more generally as a descriptor for sudden-onset medical conditions that are immediately threatening to life or limb. Some viral hemorrhagic fevers , such as Ebola , Lassa fever , and Lábrea fever , may kill in as little as two to five days. Diseases that cause rapidly developing lung edema , such as some kinds of pneumonia , may kill in a few hours. It was said of the " black death " (the pneumonic form of bubonic plague ) that some of its victims would die in a matter of hours after the initial symptoms appeared. Other pathologic conditions that may be fulminating in character are acute respiratory distress syndrome , asthma , acute anaphylaxis , septic shock , sweating sickness , and disseminated intravascular coagulation . The term is generally not used to refer to immediate death by trauma, [ 3 ] such as gunshot wound, but can refer to trauma-induced secondary conditions, such as commotio cordis , a sudden cardiac arrest caused by a blunt, non-penetrating trauma to the precordium , which causes ventricular fibrillation of the heart. Cardiac arrest and stroke in certain parts of the brain , such as in the brainstem (which controls cardiovascular and respiratory system functions), and massive hemorrhage of the great arteries (such as in perforation of the walls by trauma or by sudden opening of an aneurysm of the aorta ) may be very quick, causing "fulminant death". Sudden infant death syndrome (SIDS) is still a mysterious cause of respiratory arrest in infants. 
Certain infections of the brain, such as rabies , meningococcal meningitis , acute measles encephalitis, or primary amebic meningoencephalitis , can kill within hours to days after symptoms appear. Some toxins , such as cyanide , may also provoke fulminant death. Abrupt hyperkalemia provoked by intravenous injection of potassium chloride leads to fulminant death by cardiac arrest.
https://en.wikipedia.org/wiki/Fulminant
The Functional Capacity Index ( FCI ) is a measure of a person's level of function for the following 12 months after sustaining some form of illness or injury . [ 1 ] The FCI incorporates ten physical functions and gives each a numerical value on a scale of 0 to 100, with 100 representing no limitations on a person's everyday function. [ 2 ] [ 3 ] [ 4 ]
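The shape of the index described above — ten function scores combined into a single 0–100 value, where 100 means no limitation — can be sketched as follows. This is an illustrative sketch only: the function names, the equal weighting, and the helper `fci_like_score` are our assumptions, not the actual FCI scoring rules, which use the instrument's own dimensions and weights.

```python
# Hypothetical sketch of an FCI-style aggregate: ten function scores
# (each 0-100, with 100 = no limitation) averaged into one 0-100 index.
# The real Functional Capacity Index defines its own functions and
# weighting; the names and equal weighting here are illustrative only.

FUNCTIONS = [
    "function_1", "function_2", "function_3", "function_4", "function_5",
    "function_6", "function_7", "function_8", "function_9", "function_10",
]  # placeholders for the ten physical functions

def fci_like_score(scores):
    """Equal-weight average of ten 0-100 function scores."""
    if len(scores) != len(FUNCTIONS):
        raise ValueError("expected one score per function")
    if any(not 0 <= s <= 100 for s in scores):
        raise ValueError("scores must lie in [0, 100]")
    return sum(scores) / len(scores)

print(fci_like_score([100] * 10))        # no limitation anywhere -> 100.0
print(fci_like_score([100] * 9 + [50]))  # one function half-impaired -> 95.0
```

The point of the sketch is the boundary behaviour: a person with no limitation in any function scores exactly 100, and any impairment in any single function pulls the aggregate below 100.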
https://en.wikipedia.org/wiki/Functional_Capacity_Index
Functional disorders are a group of recognisable medical conditions which are due to changes in the functioning of the systems of the body rather than to a disease affecting the structure of the body. [ 1 ] Functional disorders are common and complex phenomena that pose challenges to medical systems. Traditionally in medicine, the body is thought of as consisting of different organ systems , but it is less well understood how the systems interconnect or communicate [ citation needed ] . Functional disorders can affect the interplay of several organ systems (for example gastrointestinal, respiratory, musculoskeletal or neurological), leading to multiple and variable symptoms. Less commonly there is a single prominent symptom or organ system affected. Most symptoms that are caused by structural disease can also be caused by a functional disorder. Because of this, individuals often undergo many medical investigations before the diagnosis is clear. Though research is growing to support explanatory models of functional disorders, structural scans such as MRIs , or laboratory investigations such as blood tests, do not usually explain the symptoms or the symptom burden [ citation needed ] . This difficulty in 'seeing' the processes underlying the symptoms of functional disorders has often resulted in these conditions being misunderstood and sometimes stigmatised within medicine and society. Despite being associated with high disability, functional symptoms are not a threat to life, and are considered modifiable with appropriate treatment. [ citation needed ] Functional disorders are mostly understood as conditions characterised by a number of shared features. There are many different functional disorder diagnoses that might be given depending on the symptom or syndrome that is most troublesome. There are many examples of symptoms that individuals may experience; some of these include persistent or recurrent pain, fatigue, weakness, shortness of breath or bowel problems. 
Single symptoms may be assigned a diagnostic label, such as "functional chest pain", "functional constipation" or "functional seizures". Characteristic collections of symptoms might be described as one of the functional somatic syndromes . [ 2 ] A syndrome is a collection of symptoms; somatic means 'of the body'. Examples of functional somatic syndromes include: irritable bowel syndrome ; cyclic vomiting syndrome ; some persistent fatigue and chronic pain syndromes, such as fibromyalgia (chronic widespread pain) or chronic pelvic pain ; interstitial cystitis ; functional neurologic disorder ; and multiple chemical sensitivity . [ citation needed ] Most medical specialties define their own functional somatic syndromes, and a patient may end up with several of these diagnoses without understanding how they are connected. There is overlap in symptoms between all the functional disorder diagnoses. For example, it is not uncommon to have a diagnosis of irritable bowel syndrome (IBS) alongside chronic widespread pain/fibromyalgia. [ 3 ] All functional disorders share risk factors and factors that contribute to their persistence. Increasingly, researchers and clinicians are recognising the relationships between these syndromes. [ citation needed ] The terminology for functional disorders has been fraught with confusion and controversy, with many different terms used to describe them. Functional disorders are sometimes equated with, or mistakenly confused with, diagnostic categories such as "somatoform disorders", "medically unexplained symptoms", "psychogenic symptoms" or "conversion disorders". Many historical terms are now no longer thought of as accurate, and are considered by many to be stigmatising. [ 4 ] Psychiatric illnesses have historically also been considered functional disorders in some classification systems, as they often fulfil the criteria above. Whether a given medical condition is termed a functional disorder depends in part on the state of knowledge. 
Some diseases, such as epilepsy , were historically categorized as functional disorders but are no longer classified that way. [ citation needed ] Functional disorders can affect individuals of all ages, ethnic groups and socioeconomic backgrounds. In clinical populations, functional disorders are common and have been found to present in around one-third of consultations in both specialist practice [ 5 ] and primary care. [ 6 ] Chronic courses of disorders are common and are associated with high disability, health-care usage and social costs. [ 7 ] Rates differ in the clinical population compared with the general population, and will vary depending on the criteria used to make the diagnosis. For example, irritable bowel syndrome is thought to affect 4.1%, [ 8 ] and fibromyalgia 0.2–11.4% of the global population. [ 9 ] A recent large study carried out on population samples in Denmark showed the following: In total, 16.3% of adults reported symptoms fulfilling the criteria for at least one Functional Somatic Syndrome, and 16.1% fulfilled criteria for Bodily Distress Syndrome. [ 10 ] The diagnosis of functional disorders is usually made in the healthcare setting most often by a doctor — this could be a primary care physician or family doctor, hospital physician or specialist in the area of psychosomatic medicine or a consultant-liaison psychiatrist. The primary care physician or family doctor will generally play an important role in coordinating treatment with a secondary care clinician if necessary. The diagnosis is essentially clinical, whereby the clinician undertakes a thorough medical and mental health history and physical examination. Diagnosis should be based on the nature of the presenting symptoms, and is a "rule in" as opposed to "rule out" diagnosis — this means it is based on the presence of positive symptoms and signs that follow a characteristic pattern. 
There is usually a process of clinical reasoning to reach this point, and assessment might require several visits, ideally with the same doctor. In the clinical setting, there are no laboratory or imaging tests that can consistently be used to diagnose these conditions; however, as is the case with all diagnoses, additional diagnostic tests (such as blood tests or diagnostic imaging) will often be undertaken to consider the presence of underlying disease. There are, however, diagnostic criteria that can be used to help a doctor assess whether an individual is likely to suffer from a particular functional syndrome. These are usually based on the presence or absence of characteristic clinical signs and symptoms. Self-report questionnaires may also be useful. There has been a tradition of separate diagnostic classification systems for "somatic" and "mental" disorders. Currently, the 11th version of the International Classification of Diseases ( ICD-11 ) has specific diagnostic criteria for certain disorders which would be considered by many clinicians to be functional somatic disorders, such as IBS, chronic widespread pain/fibromyalgia, and dissociative neurological symptom disorder. [ 11 ] In the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders ( DSM-5 ), the older term somatoform ( DSM-IV ) has been replaced by somatic symptom disorder , a disorder characterised by persistent somatic (physical) symptoms and associated psychological problems to the degree that they interfere with daily functioning and cause distress (APA, 2022). Bodily distress disorder is a related term in the ICD-11. Somatic symptom disorder and bodily distress disorder have significant overlap with functional disorders and are often assigned if someone would benefit from psychological therapies addressing psychological or behavioural factors which contribute to the persistence of symptoms. 
However, people with symptoms partly explained by structural disease (for example, cancer) may also meet the criteria for diagnosis of functional disorders, somatic symptom disorder and bodily distress disorder. [ 12 ] It is not unusual for a functional disorder to coexist with another diagnosis (for example, functional seizures can coexist with epilepsy, [ 13 ] or irritable bowel syndrome with inflammatory bowel disease [ 14 ] ). This is important to recognise, as additional treatment approaches might be indicated in order that the patient achieves adequate relief from their symptoms. The diagnostic process is considered an important step in order for treatment to move forward successfully. When healthcare professionals are giving a diagnosis and carrying out treatment, it is important to communicate openly and honestly and not to fall into the trap of dualistic concepts – that is, "either mental or physical" thinking – or attempt to "reattribute" symptoms to a predominantly psychosocial cause. [ 15 ] It is often important to recognise the need to cease unnecessary additional diagnostic testing once a clear diagnosis has been established. [ 16 ] Explanatory models that support our understanding of functional disorders take into account the multiple factors involved in symptom development. A personalised, tailored approach is usually needed in order to consider the factors which relate to that individual's biomedical, psychological, social, and material environment. [ 17 ] More recent functional neuroimaging studies have suggested malfunctioning of neural circuits involved in stress processing, emotional regulation, self-agency, interoception, and sensorimotor integration. [ 18 ] A recent article in Scientific American proposed that important brain changes suspected in the pathophysiology of functional neurological disorder include increased activity of the amygdala and decreased activity within the right temporoparietal junction . 
[ 19 ] Healthcare professionals might find it useful to consider three main categories of factors: predisposing, precipitating, and perpetuating (maintaining) factors. Predisposing factors are those that make the person more vulnerable to the onset of a functional disorder; they include biological, psychological and social factors. Like all health conditions, some people are probably predisposed to develop functional disorders due to their genetic make-up. However, no single genes have been identified that are associated with functional disorders. Epigenetic mechanisms (mechanisms that affect the interaction of genes with their environment) are likely to be important, and have been studied in relation to the hypothalamic–pituitary–adrenal axis . [ 20 ] Other predisposing factors include current or prior somatic/physical illness or injury, and endocrine, immunological or microbial factors. [ 21 ] Functional disorders are diagnosed more frequently in female patients. [ 22 ] Medical bias possibly contributes to this sex difference in diagnosis: women are more likely than men to be diagnosed with a functional disorder by doctors. [ 23 ] People with functional disorders also have higher rates of pre-existing mental and physical health conditions, including depression and anxiety disorders, [ 24 ] post-traumatic stress disorder, [ 25 ] multiple sclerosis and epilepsy. [ 26 ] Personality style has been suggested as a risk factor in the development of functional disorders, but the effect of any individual personality trait is variable and weak. [ 27 ] [ 28 ] Alexithymia (difficulty recognising and naming emotions) has been widely studied in patients with functional disorders and is sometimes addressed as part of treatment. [ 29 ] Migration, and cultural and family understandings of illness, are also factors that influence the chance of an individual developing a functional disorder. 
[ 30 ] Being exposed to illness in the family while growing up, or having parents who are healthcare professionals, are sometimes considered risk factors. Adverse childhood experiences and traumatic experiences of all kinds are known to be important risk factors. [ 31 ] [ 32 ] [ 25 ] Newer hypotheses have suggested that minority stressors may play a role in the development of functional disorders in marginalized communities. [ 33 ] [ 34 ] Precipitating factors are those that, for some patients, appear to trigger the onset of a functional disorder. Typically, these involve either an acute cause of physical or emotional stress (for example an operation, a viral illness, a car accident, or a sudden bereavement) or a period of intense and prolonged overload of chronic stressors (for example relationship difficulties, job or financial stress, or caring responsibilities). Not all affected individuals will be able to identify obvious precipitating factors, and some functional disorders develop gradually over time. Perpetuating factors are those that contribute to the functional disorder becoming a persistent condition and that maintain symptoms. These can include the condition of physiological systems, including the immune and neuroimmune systems, the endocrine system, the musculoskeletal system, the sleep-wake cycle , and the brain and nervous system , as well as the person's thoughts and experience, their experience of the body, and their social situation and environment. All these layers interact with each other. Illness mechanisms are important therapeutically as they are seen as potential targets of treatment. [ 35 ] The exact illness mechanisms that are responsible for maintaining an individual's functional disorder should be considered on an individual basis. However, various models have been suggested to account for how symptoms develop and continue. 
For some people there seems to be a process of central sensitisation, [ 36 ] chronic low-grade inflammation [ 37 ] or altered stress reactivity mediated through the hypothalamic-pituitary-adrenal (HPA) axis (Fischer et al., 2022). For some people attentional mechanisms are likely to be important. [ 38 ] Commonly, illness perceptions, behaviours and expectations (Henningsen, Van den Bergh et al., 2018) contribute to maintaining an impaired physiological condition. Perpetuating illness mechanisms are often conceptualized as "vicious cycles", which highlights the non-linear patterns of causality characteristic of these disorders. [ 39 ] Other people adopt a pattern of trying to achieve a lot on "good days", which results in exhaustion in the days following and a flare-up of symptoms; this has led to various energy management tools being used in the patient community, such as "Spoon Theory". [ 40 ] Depression, PTSD, sleep disorders, and anxiety disorders can also perpetuate functional disorders and should be identified and treated where they are present. Side effects or withdrawal effects of medication often need to be considered. Iatrogenic factors such as the lack of a clear diagnosis, not feeling believed or taken seriously by a healthcare professional, multiple (invasive) diagnostic procedures, ineffective treatments and not getting an explanation for symptoms can increase worry and unhelpful illness behaviours. Stigmatising medical attitudes and unnecessary medical interventions (tests, surgeries or drugs) can also cause harm and worsen symptoms. [ 41 ] Functional disorders can be treated successfully and are considered reversible conditions. Treatment strategies should integrate biological, psychological and social perspectives. The body of research around evidence-based treatment in functional disorders is growing. [ 42 ] With regard to self-management, there are many basic things that can be done to optimise recovery. 
Learning about and understanding the condition is helpful in itself. [ 43 ] Many people are able to use bodily complaints as a signal to slow down and reassess their balance between exertion and recovery. Bodily complaints can also be used as a signal to begin incorporating stress reduction and balanced lifestyle measures (routine, regular activity and relaxation, diet, social engagement) that can help reduce symptoms and are central to improving quality of life. Mindfulness practice can be helpful for some people. [ 44 ] Family members or friends can also be helpful in supporting recovery. Most affected people benefit from support and encouragement in this process, ideally through a multi-disciplinary team with expertise in treating functional disorders. The aim of treatment overall is to first create the conditions necessary for recovery, and then plan a programme of rehabilitation to re-train mind-body connections, making use of the body's ability to change. Particular strategies can be taught to manage bowel symptoms, pain or seizures. [ 42 ] Though medication alone should not be considered curative in functional disorders, medication to reduce symptoms might be indicated in some instances, for example where mood or pain is a significant issue preventing adequate engagement in rehabilitation. It is important to address accompanying factors such as sleep disorders, pain, depression and anxiety, and concentration difficulties. Physiotherapy may be relevant for exercise and activation programs, or when weakness or pain is a problem. [ 45 ] Psychotherapy might be helpful to explore a pattern of thoughts, actions and behaviours that could be driving a negative cycle – for example tackling illness expectations or preoccupations about symptoms. 
[ 46 ] Some existing evidence-based treatments include cognitive behavioural therapy (CBT) for functional neurological disorder; [ 47 ] physiotherapy for functional motor symptoms; [ 48 ] and dietary modification or gut-targeting agents for irritable bowel syndrome. [ 49 ] Despite some progress in the last decade, people with functional disorders continue to suffer subtle and overt forms of discrimination by clinicians, researchers and the public. Stigma is a common experience for individuals who present with functional symptoms and is often driven by historical narratives and factual inaccuracies. Because functional disorders do not usually have specific biomarkers or findings on the structural imaging typically undertaken in routine clinical practice, symptoms may be misunderstood, invalidated, or dismissed, leading to adverse experiences when individuals seek help. [ 50 ] Part of this stigma is also driven by theories around " mind body dualism ", which frequently surfaces as an area of importance for patients, researchers and clinicians in the realm of functional disorders. Artificial separation of the mind/brain/body (for example the use of phrases such as "physical versus psychological" or "organic versus non-organic") furthers misunderstanding and misconceptions around these disorders, and only serves to hinder progress in the scientific domain and for patients seeking treatment. Some patient groups have fought to have their illnesses not classified as functional disorders, because in some insurance-based health-care systems these have attracted lower insurance payments. [ 51 ] Current research is moving away from dualistic theories, and recognises the importance of the whole person, both mind and body, in the diagnosis and treatment of these conditions. People with functional disorders frequently describe experiences of doubt, blame, and of being seen as less 'genuine' than those with other disorders. 
Some clinicians perceive that individuals with functional disorders are imagining their symptoms or malingering, or doubt the level of voluntary control they have over their symptoms. As a result, individuals with these disorders often wait long periods of time to be seen by specialists and receive appropriate treatment. [ 52 ] Currently, there is a lack of specialised treatment services for functional disorders in many countries. [ 53 ] However, research is growing in this area, and it is hoped that the implementation of the increased scientific understanding of functional disorders and their treatment will allow effective clinical services supporting individuals with functional disorders to develop. [ 54 ] Patient membership organisations/advocate groups have been instrumental in gaining recognition for individuals with these disorders. [ 55 ] [ 56 ] Directions for research involve understanding more about the processes underlying functional disorders, identifying what leads to symptom persistence, and improving integrated care/treatment pathways for patients. Research into the biological mechanisms which underpin functional disorders is ongoing. Understanding how stress affects the body over a lifetime, [ 57 ] for example via the immune, [ 58 ] [ 59 ] endocrine [ 20 ] and autonomic nervous systems, is important (Ying-Chih et al. 2020; Tak et al. 2011; Nater et al. 2011). Subtle dysfunctions of these systems, for example through low-grade chronic inflammation, [ 60 ] [ 61 ] or dysfunctional breathing patterns, [ 62 ] are increasingly thought to underlie functional disorders and their treatment. However, more research is needed before these theoretical mechanisms can be used clinically to guide treatment for an individual patient.
https://en.wikipedia.org/wiki/Functional_disorder
Functional drug sensitivity testing (f-DST) is an in-vitro diagnostic test method in functional precision medicine. [ 1 ] It was developed to personalize the choice among cytotoxic drugs and drug combinations for patients with an indication for systemic chemotherapy in specific cancer types. [ 2 ] [ 3 ] [ 4 ] [ 5 ] f-DST is performed by various in-vitro diagnostic methods which have in common that they quantify the reactions of an individual patient's cancer tissue when exposed to cytotoxic drugs. As substrate, testing methods initially require live cancer tissue from the individual patient (metastases or primary tumor). Since the 1970s, randomized controlled trials have been used to assess new cancer therapies. [ 6 ] Randomized trials assess average effects of treatment across a patient population, [ 7 ] but cytotoxic chemotherapies are known to have different effects and side-effect profiles for different individuals. Precision medicine aims to match patients with the best available treatment for their tumor. [ 8 ] f-DST methodology comprises three basic steps: pre-analytical processing of cancerous tissue samples, cultivation of the testable cellular product (i.e. tumoroids, organoids, cell clusters, single cells), and subsequent exposure to the clinically most important cytotoxic agents (5-FU, oxaliplatin, irinotecan, individually or in combinations such as CAPOX, FOLFOX, FOLFIRI and FOLFOXIRI), as well as to other cytotoxic substances. [ 9 ] [ 10 ] Biopsy samples containing live cancer tissue are processed to obtain the required type, histologic organization and number of carcinomatous cells. These could be isolated cells, cell clusters, organoids or tumoroids of defined sizes. The processed cancer specimens, cells or organized cell aggregates, are then cultured in stem cell media to increase in number and expand into a sufficient number of testable cancer cell aggregates, depending on the test model used. 
[ 11 ] Tumor cells can be grown in different environments, including 3-dimensional organoids , on chips that simulate the tumor microenvironment , or xenografted onto animal models. [ 1 ] After defined periods of culture, often between 3 and 7 days, cells or organized cell aggregates are counted and transferred to drug-screening arrays, where they are exposed to defined concentrations of the cytotoxic drugs or drug combinations in question. Measurement methods and statistical analyses usually focus on cell/cell-aggregate behaviour in vitro under exposure to the test drugs after defined periods of time. In vitro reactions of the patient-derived cancer cells or cell aggregates following exposure to standardized cytotoxic drug concentrations over a specified time are then calculated relative to positive and negative controls and/or calibration curves obtained from reference populations. [ 12 ] f-DST provides information on an individual patient's tumoroid / organoid / cell-cluster / cell vulnerabilities towards cytotoxic chemotherapies in vitro. [ 13 ] f-DST requires repeat, fresh cancerous-tissue biopsy procedures, which are not standard of care in the routine diagnostic workup of solid tumor patients in all stages of the disease. [ 14 ] If organoids or tumoroids are cultured for the purpose of f-DST, results become available between 14 and 21 days after the bioptic procedure. [ 15 ] Like other functional testing methods (e.g. antibiograms), none of the current f-DST methods claims to fully replicate the intricate interactions of tumor tissue within a patient's body. However, information obtained by f-DST is being clinically investigated with regard to relevant endpoints such as progression-free survival. [ 16 ] f-DST is an emerging in vitro diagnostic tool. It has the potential to shift the current average cytotoxic drug efficacy / side-effect risk balance of classic systemic chemotherapies for which no individual biomarkers exist.
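The calculation step described above — scaling a drug-exposed well's readout against positive and negative controls — can be sketched in a few lines. This is a generic illustration of percent-of-control normalization, a common scheme in cell-viability assays; the function name and plate values are hypothetical and not taken from any specific f-DST platform:

```python
def normalized_viability(signal, neg_control, pos_control):
    """Express a raw well readout (e.g. a luminescence count) as percent viability.

    neg_control: mean readout of untreated (vehicle-only) wells,
                 defined as 100% viability.
    pos_control: mean readout of wells treated with a fully lethal
                 reference agent, defined as 0% viability.
    """
    return 100.0 * (signal - pos_control) / (neg_control - pos_control)

# Hypothetical plate: untreated wells average 20000 counts,
# fully killed wells 500 counts; a drug-exposed well reads 8000.
print(round(normalized_viability(8000, neg_control=20000, pos_control=500), 1))
# prints 38.5
```

Under this convention, a consistently low normalized viability across replicate wells for a given drug would be read as higher in-vitro sensitivity of that patient's cells to the drug; real f-DST pipelines additionally fit dose-response curves across several drug concentrations.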
https://en.wikipedia.org/wiki/Functional_drug_sensitivity_testing
Functional medicine ( FM ) is a form of alternative medicine that encompasses many unproven and disproven methods and treatments. [ 1 ] [ 2 ] [ 3 ] At its essence, it is a rebranding of complementary and alternative medicine (CAM), [ 4 ] and as such is pseudoscientific , [ 5 ] and has been described as a form of quackery . [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 4 ] In the United States, FM practices have been ruled ineligible for course credits by the American Academy of Family Physicians because of concerns they may be harmful. [ 10 ] [ 11 ] Functional medicine was created by Jeffrey Bland, [ 12 ] who founded The Institute for Functional Medicine (IFM) in the early 1990s as part of one of his companies, HealthComm. [ 13 ] IFM, which promotes functional medicine, became a registered non-profit in 2001. [ 14 ] Mark Hyman became an IFM board member and prominent promoter. [ 12 ] [ 14 ] David Gorski has written that FM is not well-defined and performs "expensive and generally unnecessary tests". [ 15 ] Gorski says FM's vagueness is a deliberate tactic that makes functional medicine difficult to challenge. [ 16 ] Proponents of functional medicine oppose established medical knowledge and reject its models, instead adopting a model of disease based on the notion of "antecedents", "triggers", and "mediators". These are meant to correspond to the underlying causes of health issues, the immediate causes, and the particular characteristics of a person's illness. A functional medicine practitioner devises a "matrix" from these factors to serve as the basis for treatment. [ 17 ] Treatments, practices, and concepts are generally not supported by medical evidence . [ 1 ] Jonathan Stea writes that functional medicine, integrative medicine, and CAM "are marketing terms designed to confuse patients, promote pseudoscience, and sow distrust in mainstream medicine." 
[ 18 ] FM practitioners claim to diagnose and treat conditions that research studies have found not to exist, such as adrenal fatigue and numerous imbalances in body chemistry. [ 19 ] [ 20 ] For instance, contrary to scientific evidence, Joe Pizzorno , a major figure in FM, claimed that 25% of people in the United States have heavy metal poisoning and need to undergo detoxification . [ 10 ] Many scientists state that such detox supplements are a waste of time and money. [ 21 ] Detox has also been called a "mass delusion". [ 22 ] In 2014, the American Academy of Family Physicians withdrew course credits for functional medicine courses, having identified some of its treatments as "harmful and dangerous". [ 10 ] In 2018, it partly lifted the ban, but only to allow overview classes, not to teach its practice. [ 11 ] The opening of centers for functional medicine at the Cleveland Clinic and George Washington University was described by David Gorski as an "unfortunate" example of quackery infiltrating academic medical centers. [ 4 ]
https://en.wikipedia.org/wiki/Functional_medicine
Functional neurological symptom disorder ( FNSD ), also referred to as dissociative neurological symptom disorder ( DNSD ), is a condition in which patients experience neurological symptoms such as weakness , movement problems , sensory symptoms, and convulsions . As a functional disorder , there is, by definition, no known disease process affecting the structure of the body, yet the person experiences symptoms relating to their body's function. Symptoms of functional neurological disorders are clinically recognisable, but are not categorically associated with a definable organic disease. [ 2 ] [ 3 ] The intended contrast is with an organic brain syndrome , where a pathology (disease process) that affects the body's physiology can be identified. The diagnosis is made based on positive signs and symptoms in the history and examination during consultation with a neurologist. [ 4 ] Physiotherapy is particularly helpful for patients with motor symptoms (e.g., weakness, problems with gait , movement disorders), and tailored cognitive behavioral therapy has the best evidence in patients with non-epileptic seizures . [ 5 ] [ 6 ] There are a great number of symptoms experienced by those with a functional neurological disorder. While these symptoms are very real, their origins are complex, since they can be associated with severe psychological trauma and idiopathic neurological dysfunction. [ 7 ] The core symptoms are those of motor or sensory dysfunction or episodes of altered awareness: [ 8 ] [ 9 ] [ 10 ] [ 11 ] A systematic review found that stressful life events and childhood neglect were significantly more common in patients with FNSD than in the general population, although some patients report no stressors. [ 12 ] Converging evidence from several studies using different techniques and paradigms has now demonstrated distinctive brain activation patterns associated with functional deficits, unlike those seen in actors simulating similar deficits. 
[ 13 ] The new findings advance current understanding of the mechanisms involved in this disorder, and offer the possibility of identifying markers of the condition and of patients' prognosis. [ 14 ] [ 15 ] FNSD has been reported as a rare occurrence in the period following general anesthesia. [ 16 ] A diagnosis of a functional neurological disorder is dependent on positive features from the history and examination. [ 17 ] Positive features of functional weakness on examination include Hoover's sign , in which weakness of hip extension normalizes with contralateral hip flexion. [ 18 ] Signs of functional tremor include entrainment and distractibility. The patient with tremor should be asked to copy rhythmical movements with one hand or foot. If the tremor of the other hand entrains [ clarification needed ] to the same rhythm or stops, or if the patient has trouble copying a simple movement, this may indicate a functional tremor. Functional dystonia usually presents with an inverted ankle posture or a clenched fist. [ 19 ] Positive features of dissociative or non-epileptic seizures include prolonged motionless unresponsiveness, long-duration episodes (>2 minutes) and symptoms of dissociation prior to the attack. These signs can be usefully discussed with patients when the diagnosis is being made. [ 20 ] [ 21 ] [ 22 ] [ 23 ] Patients with functional movement disorders and limb weakness may experience symptom onset triggered by an episode of acute pain, a physical injury or physical trauma. They may also experience symptoms when faced with a psychological stressor, but this is not the case for most patients. Patients with functional neurological disorders are more likely to have a history of another illness such as irritable bowel syndrome, chronic pelvic pain or fibromyalgia, but this cannot be used to make a diagnosis. [ 24 ] FNSD does not show up on blood tests or structural brain imaging such as magnetic resonance imaging (MRI) or CT scanning. 
However, this is also the case for many other neurological conditions, so negative investigations should not be used alone to make the diagnosis. FNSD can occur alongside other neurological diseases, and tests may show non-specific abnormalities which cause confusion for doctors and patients. [ 24 ] The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition ( DSM-5 ) lists the following diagnostic criteria for functional neurological symptom disorder: An acute episode of functional neurological symptom disorder is defined by the presence of symptoms for less than six months, while a persistent episode involves symptoms lasting more than six months. FNSD can also carry the specifier of with or without a psychological stressor. Epidemiological studies and meta-analyses have shown higher rates of depression and anxiety in patients with FNSD compared to the general population, but rates are similar to those in patients with other neurological disorders such as epilepsy or Parkinson's disease . This is often attributed to years of misdiagnosis and accusations of malingering. [ 26 ] [ 27 ] [ 28 ] [ 29 ] Multiple sclerosis has some overlapping symptoms with FNSD, potentially a source of misdiagnosis. [ 30 ] Non-epileptic seizures account for about 1 in 7 referrals to neurologists after an initial episode, while functional weakness has a similar prevalence to multiple sclerosis . [ 31 ] Treatment requires a firm and transparent diagnosis based on positive features which both health professionals and patients can feel confident about. [ 15 ] It is essential that the health professional confirms that this is a common problem which is genuine, not imagined, and not a diagnosis of exclusion. [ 32 ] A multi-disciplinary approach to treating functional neurological disorder is recommended. 
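The DSM-5 duration specifier described above (acute versus persistent) amounts to a simple threshold rule. The sketch below is purely illustrative: the function name is an assumption, and DSM-5 does not specify how to classify an episode of exactly six months, so that boundary choice is also an assumption of this sketch.

```python
def fnsd_episode_specifier(symptom_duration_months: float) -> str:
    """Illustrative sketch of the DSM-5 duration specifier for FNSD:
    an acute episode has symptoms for less than six months, while a
    persistent episode has symptoms lasting more than six months.
    Treating exactly six months as persistent is an assumption of
    this sketch, not part of DSM-5."""
    return "acute" if symptom_duration_months < 6 else "persistent"
```

For example, `fnsd_episode_specifier(3)` returns `"acute"` and `fnsd_episode_specifier(9)` returns `"persistent"`.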
Treatment options can include: [ 17 ] Physiotherapy with someone who understands functional disorders may be the initial treatment of choice for patients with motor symptoms such as weakness, gait (walking) disorder and movement disorders. Nielsen et al. have reviewed the medical literature on physiotherapy for functional motor disorders up to 2012 and concluded that the available studies, although limited, mainly report positive results. [ 33 ] For many patients with FNSD, accessing treatment can be difficult. Availability of expertise is limited, and they may feel that they are being dismissed or told "it's all in your head", especially if psychological input is part of the treatment plan. Some medical professionals are uncomfortable explaining and treating patients with functional symptoms. Changes in the diagnostic criteria, increasing evidence, literature about how to make the diagnosis and how to explain it, and changes in medical training are slowly changing this. [ 34 ] Wessely and White have argued that FNSD may merely be an unexplained somatic symptom disorder . [ 35 ] FNSD remains a stigmatized condition in the healthcare setting. [ 36 ] [ 37 ] Functional neurologic disorder is a more recent and inclusive term for what is sometimes referred to as conversion disorder. [ 38 ] Throughout its history, many patients have been misdiagnosed with conversion disorder when they had organic disorders such as tumors, epilepsy, or vascular diseases. This has led to patient deaths, a lack of appropriate care, and suffering for the patients. [ 39 ] There is a growing understanding that symptoms are real and distressing, and are caused by an incorrect functioning of the brain rather than being imagined or feigned. [ 38 ]
https://en.wikipedia.org/wiki/Functional_neurological_deficit
A functional symptom is a medical symptom with no known physical cause. [ 1 ] In other words, there is no structural or pathologically defined disease to explain the symptom. The use of the term 'functional symptom' does not assume psychogenesis , only that the body is not functioning as expected. [ 2 ] Functional symptoms are increasingly viewed within a framework in which 'biological, psychological, interpersonal and healthcare factors' should all be considered to be relevant for determining the aetiology and treatment plans. [ 3 ] Historically, there has often been fierce debate about whether certain problems are predominantly related to an abnormality of structure (disease) or are psychosomatic in nature (secondary gain), and what were at one stage posited to be functional symptoms are sometimes later reclassified as organic as investigative techniques improve. [ 4 ] It is well established that psychosomatic symptoms are a real phenomenon, so this potential explanation is often plausible; however, the co-occurrence of a range of psychological symptoms and functional weakness does not imply that one causes the other. For example, symptoms associated with migraine , epilepsy , schizophrenia , multiple sclerosis , stomach ulcers , myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS), Lyme disease and many other conditions have all tended historically at first to be explained largely as physical manifestations of the patient's psychological state of mind, until such time as new physiological knowledge is eventually gained. Another specific example is functional constipation , which may have psychological or psychiatric causes. However, one type of apparently functional constipation, anismus , may have a neurological (physical) basis. This is also an issue when the patient is involved in litigation, such as injuries from motor vehicle accidents or work injuries involving workers' compensation benefits and disputes. 
Studies have shown that unsettled claims affect the level of complaints, and many medical studies do not include data from cases where outcomes may have been tainted by the inclusion of patients involved in workers' compensation cases. [ 5 ] Whilst misdiagnosis of functional symptoms does occur, in neurology, for example, this appears to occur no more frequently than misdiagnosis of other neurological or psychiatric syndromes. However, in order to be quantified, misdiagnosis has to be recognized as such, which can be problematic in such a challenging field as medicine. A common trend is to see functional symptoms and syndromes such as fibromyalgia , irritable bowel syndrome and functional neurological symptoms such as functional weakness as symptoms in which both biological and psychological factors are relevant, without one necessarily being dominant. [ 6 ] Functional weakness is weakness of an arm or leg without evidence of damage to, or a disease of, the nervous system. Patients with functional weakness experience symptoms of limb weakness which can be disabling and frightening, such as problems walking or a 'heaviness' down one side, dropping things, or a feeling that a limb just doesn't feel normal or 'part of them'. Functional weakness may also be described as functional neurological symptom disorder (FNSD), functional neurological disorder (FND) or functional neurological symptoms. If the symptoms are caused by a psychological trigger, they may be diagnosed as 'dissociative motor disorder' or conversion disorder (CD). To the patient and the doctor it often looks as if there has been a stroke or as if the patient has symptoms of multiple sclerosis . However, unlike these conditions, with functional weakness there is no permanent damage to the nervous system, which means that it can get better or even go away completely. The diagnosis should usually be made by a consultant neurologist so that other neurological causes can be excluded. 
The diagnosis should be made on the basis of positive features in the history and the examination (such as Hoover's sign ). [ 7 ] It is dangerous to make the diagnosis simply because tests are normal. Neurologists diagnose wrongly about 5% of the time (a rate similar to that for many other conditions). The most effective treatment is physiotherapy ; however, it is also helpful for patients to understand the diagnosis, and some may find CBT helps them to cope with the emotions associated with being unwell. For those with conversion disorder, psychological therapy is key to their treatment, as it is emotional or psychological factors which are causing their symptoms. Giveway weakness (also "give-away weakness", "collapsing weakness", etc.) refers to a symptom where a patient's arm or leg can initially provide resistance against an examiner's touch, but then suddenly "gives way" and provides no further muscular resistance. It can also be seen if the examinee is not cooperating with the exam and does not produce a full effort. This may sometimes be associated with secondary gain from being injured.
https://en.wikipedia.org/wiki/Functional_symptom
A fusion beat occurs when electrical impulses from different sources act upon the same region of the heart at the same time. [ 1 ] If they act upon the ventricular chambers, the result is called a ventricular fusion beat , whereas colliding currents in the atrial chambers produce atrial fusion beats . Ventricular fusion beats can occur when the heart's natural rhythm and the impulse from a pacemaker coincide to activate the same part of a ventricle at the same time, causing visible variation in the configuration and height of the QRS complex of an electrocardiogram reading of the heart's activity. [ 2 ] This contrasts with the pseudofusion beat, wherein the pacemaker impulse does not affect the complex of the natural beat of the heart. Pseudofusion beats are normal. Rare or isolated fusion beats caused by pacemakers are normal as well, but if they occur too frequently they may reduce cardiac output and so can require adjustment of the pacemaker. [ 3 ]
https://en.wikipedia.org/wiki/Fusion_beat
Future Oncology is a peer-reviewed medical journal established in 2005 and published by Future Medicine. It covers all aspects of oncology . The editors-in-chief are Ron Allison (21st Century Oncology) and Jackson Orem ( Uganda Cancer Institute ). The journal is abstracted and indexed in CINAHL Plus , Chemical Abstracts , Current Contents /Clinical Medicine, EMBASE / Excerpta Medica , Index Medicus / MEDLINE / PubMed , Science Citation Index Expanded , and Scopus . According to the Journal Citation Reports , the journal has a 2022 impact factor of 3.3, ranking it 152nd out of 217 journals in the category "Oncology". [ 1 ]
https://en.wikipedia.org/wiki/Future_Oncology
The term " Clinical research center " (CRC) or " General clinical research center " ( GCRC ) refers to any designated medical facility used to conduct clinical research , such as at a hospital or medical clinic . [ 1 ] They have been used to perform clinical trials of various medical procedures. The medical profession has had specific uses for CRC facilities, including awarding grants to support various types of research. For example, the U.S. National Institutes of Health had, for years, issued GCRC grants, but later changed to awarding a Clinical and Translational Science Award (CTSA). Many hospitals or clinics have included a wing, ward, or other area titled as "Clinical Research Center" (with capitalized words). Some examples of CRC facilities are:
https://en.wikipedia.org/wiki/GCRC
In histology , the GFAP stain is done to determine whether cells contain glial fibrillary acidic protein , a protein found in glial cells . It is useful for determining whether a tumour is of glial origin. [ 1 ]
https://en.wikipedia.org/wiki/GFAP_stain
The Gruppo Italiano per lo Studio della Sopravvivenza nell'Infarto Miocardico ( GISSI ) (Italian group for the study of the survival of myocardial infarction) is a cardiology research group founded as a collaboration between two Italian organisations – the Mario Negri Institute for Pharmacological Research and the Associazione Nazionale dei Medici Cardiologi Ospedalieri (ANMCO). [ 1 ] [ 2 ] Four large-scale clinical trials (GISSI 1, [ 3 ] GISSI 2, [ 3 ] GISSI 3, [ 4 ] GISSI Prevention [ 5 ] ) have involved over 60,000 people with acute myocardial infarction (AMI). [ 1 ] [ 5 ] [ 6 ]
https://en.wikipedia.org/wiki/GISSI
GLUT1 deficiency syndrome , also known as GLUT1-DS , De Vivo disease or glucose transporter type 1 deficiency syndrome, is an autosomal dominant genetic metabolic disorder associated with a deficiency of GLUT1 , the protein that transports glucose across the blood–brain barrier. [ 1 ] Glucose transporter type 1 deficiency syndrome has an estimated birth incidence of 1 in 90,000 [ 2 ] to 1 in 24,300. [ 3 ] This birth incidence translates to an estimated prevalence of 3,000 to 7,000 cases in the U.S. [ 2 ] GLUT1 deficiency is characterized by an array of signs and symptoms including mental and motor developmental delays, infantile seizures refractory to anticonvulsants, ataxia , dystonia , dysarthria , opsoclonus , spasticity , other paroxysmal neurologic phenomena and sometimes deceleration of head growth, also known as microcephaly . The presence and severity of symptoms vary considerably between affected individuals. Individuals with the disorder generally have frequent seizures (epilepsy), often beginning in the first months of life. In newborns, the first sign of the disorder may be involuntary eye movements that are rapid and irregular. [ 4 ] Patients typically begin to experience seizures between three and six months of age, but some occur much later. [ 5 ] Other seizure types may occur, including generalized tonic-clonic, focal, myoclonic, atypical absence, atonic or unclassified. [ 5 ] Mothers of infants with this disorder usually have uneventful pregnancies and deliveries, with the child appearing normal and within typical birth weight and length ranges. Infants with GLUT1 deficiency syndrome have a normal head size at birth, but the growth of the brain and skull is slow, in severe cases resulting in an abnormally small head size ( microcephaly ). [ 4 ] Typically, seizures start between one and four months in 90% of cases, with abnormal eye movements and apneic episodes preceding the onset of seizures in some cases. 
[ 6 ] Seizures usually are complex to begin with and later become more generalized. Seizure frequency is variable, and a history of decreasing frequency during times of ketosis may prompt a diagnosis. It is estimated that 10% of individuals with GLUT1 deficiency do not have seizures, and symptoms are typically less severe in these cases. [ 7 ] Most of these non-epileptic cases will still have developmental delay, intellectual delays and movement disorders such as ataxia, alternating hemiplegia or dystonia. [ 7 ] Some symptoms may be present all the time (like walking difficulties), while other signs may come and go (like seizures or poor balance). [ 8 ] These findings can be clustered under three major domains: cognition, behavior and movement. [ 8 ] The syndrome can cause infantile seizures refractory to anticonvulsive drugs, developmental delay, acquired microcephaly and neurologic manifestations including spasticity, hypotonia and ataxia. [ 9 ] The frequency, severity and types of seizures may vary considerably among GLUT1 deficiency patients and do not necessarily correspond to the severity of other symptoms. Most seizures in GLUT1 deficiency patients are not easily treated with anti-seizure medications. A minority of GLUT1 deficiency patients (approximately 10%) do not experience seizures. [ 5 ] Cognitive symptoms often become apparent as developmental milestones are delayed. Cognitive deficits range from subtle learning difficulties to severe intellectual disabilities. Often speech and language are impaired. [ 5 ] Behavioral symptoms affect relations with other people and may include short attention span, intractability, and delays in achieving age-appropriate behaviors. Sociability with peers, however, is a strength in GLUT1 deficiency patients. [ 5 ] Movement symptoms relate to the quality of motor functions. Walking may be delayed or difficult because legs are stiff (spasticity), balance is poor (ataxia) or posture is twisted (dystonia). 
Fine motor deficits may affect speech quality and manipulative skills, such as writing. These abnormalities may be constant or intermittent (paroxysmal). [ 5 ] Paroxysmal exercise-induced dyskinesia (PED) may also be present. [ 10 ] Other intermittent symptoms may include headaches, confusion, and loss of energy. Episodes of confusion, lack of energy/stamina, and/or muscle twitches may occur, particularly during periods without food. [ 7 ] Some young patients experience occasional abnormal eye movements that may resemble opsoclonus or nystagmus. [ 5 ] The eye movements that some GLUT1 deficiency patients exhibit are rapid and multidirectional, and there is often a head movement in the same direction as the eye movement. [ 11 ] These abnormal eye movements were recently named aberrant gaze saccades. [ 11 ] Hemiplegia or alternating intermittent hemiplegia may occur in some patients and mimic stroke-like symptoms. [ 12 ] Another characteristic of GLUT1 deficiency is that symptoms are sensitive to food (e.g. symptoms can be temporarily improved by intake of carbohydrates), and symptoms may be worse in the morning upon and just after waking. [ 5 ] All symptoms may be aggravated or triggered by factors such as hunger, fatigue, heat, anxiety, and sickness. The symptom picture for each patient may evolve and change over time as children with GLUT1 deficiency grow and develop through adolescence and into adulthood. [ 5 ] Data on adult GLUT1-DS are just emerging. [ 13 ] Changes in symptomatology over time include a shift from infantile-childhood onset epilepsy to adolescent-adult onset movement disorders including PED. The GLUT1 protein that transports glucose across the blood–brain barrier is encoded by the SLC2A1 gene, located on chromosome 1. [ 8 ] In GLUT1 deficiency syndrome, one of the two copies of the gene is damaged by a mutation and an insufficient amount of protein is made. As a result, insufficient glucose passes the blood–brain barrier. 
Having less functional GLUT1 protein reduces the amount of glucose available to brain cells, which affects brain development and function. [ 14 ] Because glucose is the primary source of fuel for the brain, patients with GLUT1 deficiency have insufficient cellular energy to permit normal brain growth and function. [ 8 ] Around 90% of cases of GLUT1 deficiency syndrome are de novo mutations of the SLC2A1 gene (a mutation not present in the parents, but present in one of the two copies of the gene in the baby), although it can be inherited. [ 15 ] GLUT1 deficiency can be inherited in an autosomal dominant manner. A person with GLUT1 deficiency syndrome has a 50% chance of passing along the altered SLC2A1 gene to his or her offspring. [ 16 ] In a study of brain slices from a GLUT1 mouse model, physiological glucose concentration was found to be a modulator of frequency oscillations, including less frequent 30–50 Hz (gamma) oscillations. [ 17 ] Early diagnosis is crucial in order to initiate treatment during the important early stages of brain development. To make a proper diagnosis, it is important to know the various symptoms of GLUT1 deficiency and how those symptoms evolve with age. [ 18 ] GLUT1 deficiency is diagnosed based on the clinical features in combination with determining the glucose concentration in the cerebrospinal fluid (CSF), obtained through a lumbar puncture (spinal tap), and/or a genetic analysis. [ 13 ] A low glucose value in CSF (<2.2 mmol/L) or a lowered CSF/plasma glucose ratio (<0.4) is indicative of GLUT1 deficiency. A genetic mutation in the SLC2A1 gene also confirms the diagnosis, although mutations have not been identified in approximately 15% of GLUT1 deficiency patients. [ 19 ] A highly specialized lab test called the red blood cell uptake assay may confirm GLUT1 deficiency but is not commercially available. [ 20 ] Anti-seizure medications are generally not effective, since they do not provide nourishment to the starved brain. 
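The CSF thresholds quoted above (glucose <2.2 mmol/L, or a CSF/plasma glucose ratio <0.4) can be expressed as a simple screening check. The sketch below is illustrative only, not a clinical tool; the function name and return convention are assumptions, and, as noted above, diagnosis also depends on clinical features and genetic analysis.

```python
def csf_findings_suggest_glut1(csf_glucose_mmol_l: float,
                               plasma_glucose_mmol_l: float) -> bool:
    """Return True when CSF findings are suggestive of GLUT1 deficiency
    under the thresholds quoted in the text: absolute CSF glucose below
    2.2 mmol/L, or a CSF-to-plasma glucose ratio below 0.4.
    Illustrative sketch only; not a diagnostic tool."""
    ratio = csf_glucose_mmol_l / plasma_glucose_mmol_l
    return csf_glucose_mmol_l < 2.2 or ratio < 0.4
```

For example, a CSF glucose of 1.8 mmol/L with a plasma glucose of 5.0 mmol/L (ratio 0.36) meets both criteria, while 3.5 mmol/L against the same plasma value (ratio 0.70) meets neither.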
[ 8 ] Once diagnosed, a medically supervised ketogenic diet is usually recommended, as it can help to control seizures. [ 21 ] The ketogenic diet is the current standard of care, with 80% of patients having >90% seizure reduction [ 13 ] and some movement disorders improving in approximately two thirds of GLUT1 deficiency patients. [ 18 ] There is also some evidence of cognitive benefits for GLUT1 deficiency patients on a ketogenic diet, and most parents report improved energy, alertness, balance, coordination, and concentration, [ 18 ] especially when the diet is started early in childhood. The ketogenic diet is high in fat and low in protein and carbohydrates, with up to 90% of calories obtained from fat. Since the diet is low in carbohydrates, the body gets little glucose, normally its main energy source. The fat in the diet is converted by the liver into ketone bodies , which causes a buildup of ketones in the bloodstream, called ketosis . Ketone bodies are transported across the blood–brain barrier by means other than the GLUT1 protein and thus serve as an alternative fuel for the brain when glucose is not available. [ 22 ] While ketogenic diets have proven effective at controlling seizures and relieving some movement disorders in many GLUT1 deficiency patients, some patients do not respond as well as others. In addition, some critical symptoms, including cognitive deficits and certain movement difficulties, tend to persist in GLUT1 deficiency patients treated with a ketogenic diet, raising the question of whether GLUT1 deficiency is caused simply by a lack of proper brain energy or whether more complicated and widespread systems and processes are affected. [ 18 ] The ketogenic diet must be carefully crafted and tailored to meet the needs of each patient and reduce the risk of side effects. 
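The arithmetic behind the ratios described above can be sketched briefly. The 9 kcal/g (fat) and 4 kcal/g (protein, carbohydrate) calorie factors are the standard Atwater values; the function name and example gram figures below are illustrative, not a clinical prescription.

```python
def keto_profile(fat_g, protein_g, carb_g):
    """Return the ketogenic ratio (fat : protein+carb by weight) and the
    percentage of calories obtained from fat, using the standard Atwater
    factors: 9 kcal/g for fat, 4 kcal/g for protein and carbohydrate."""
    ratio = fat_g / (protein_g + carb_g)
    kcal_fat = fat_g * 9
    kcal_total = kcal_fat + (protein_g + carb_g) * 4
    return ratio, 100 * kcal_fat / kcal_total

# A classic "4:1" meal: 40 g fat against 10 g of protein and carbohydrate combined.
ratio, pct_fat = keto_profile(fat_g=40, protein_g=7, carb_g=3)
print(f"ratio {ratio:.0f}:1, {pct_fat:.0f}% of calories from fat")  # 4:1, 90%
```

This makes visible why a 4:1 ratio by weight corresponds to roughly 90% of calories from fat, the figure quoted for the classic diet.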
It should only be used under the care of medical professionals and dietitians, and it may take some time to establish the ideal ratio of fat to proteins and carbohydrates and other diet variables for each individual patient to experience optimal tolerance and benefits. Variations on the ketogenic diet, including the Modified Atkins Diet and diets based on MCT oil, have also been shown to be beneficial for some GLUT1 deficiency patients. [ 18 ] While the classic ketogenic diet is commonly used for younger children, compliance with it can be difficult for older children and adults. In recent years, the Modified Atkins Diet and MCT oil based diets have gained increasing acceptance among doctors treating these groups and may be more feasible for quality of life and compliance. [ 13 ] There is growing empirical evidence that these diets can provide at least some of the benefits of the classical ketogenic diet for some GLUT1 deficiency patients. [ 18 ] Ketone esters are an area of dietary therapy currently under investigation for potential treatment of GLUT1 deficiency and other medical conditions. Ketone esters are synthetic ketones that break down into natural ketones when metabolized. Ketone esters have been shown in recent research to improve seizures and movement disorders in GLUT1 deficient mice, but human studies have not yet been conducted. [ 18 ] Triheptanoin (C7 oil), a triglyceride oil synthesized from castor beans, [ 18 ] is an investigational pharmaceutical-grade medical food that has shown potential as a treatment for a number of inherited metabolic diseases. When metabolized by the body, C7 oil produces ketones similar to those produced on a ketogenic diet, in addition to other types of ketones that are thought to fulfill further metabolic requirements in the absence of sufficient glucose. [ 18 ] A phase 3 clinical trial, however, failed to find an improvement in patients with GLUT1 DS with disabling movement disorders. 
The inhibition of insulin production to increase blood glucose using the medicine diazoxide, in combination with continuous glucose monitoring, has been successful in one adolescent. The increased blood glucose also increases the availability of glucose in the brain, through increased transfer via the GLUT1 protein. She became seizure-free, became more physically active, and had improved cognition. [ 23 ] Researchers are studying gene therapy as a possible effective treatment for Glut1 deficiency. [ 24 ] [ 25 ] Therapies and rehabilitative services are beneficial, since most GLUT1 deficiency patients experience movement disturbances as well as speech and language disorders. Occupational, physical, and speech/language therapies are standard for most patients, especially in childhood. [ 18 ] Many families greatly benefit from other therapies such as aquatic therapy, hippotherapy, specific learning strategies, and behavioral therapy. [ 18 ] Areas of weakness for Glut1 patients include lowered IQ and adaptive behavior scores, expressive-language deficits, weakness in fine motor skills, limited visual attention to details, weakness in abstract analytical skills, and weakness in transfer of learning to new contexts. [ citation needed ]
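The CSF thresholds cited earlier in this article (CSF glucose < 2.2 mmol/L, or a CSF-to-plasma glucose ratio < 0.4) can be expressed as a simple screening check. This is a sketch of the published cut-offs only; the function name is invented here, and a positive screen is not a diagnosis — genetic confirmation is still required.

```python
def glut1_csf_screen(csf_glucose_mmol_l, plasma_glucose_mmol_l):
    """Flag CSF findings suggestive of GLUT1 deficiency.

    Cut-offs from the text: CSF glucose below 2.2 mmol/L, or a
    CSF/plasma glucose ratio below 0.4. Returns True when either
    criterion is met.
    """
    low_csf = csf_glucose_mmol_l < 2.2
    low_ratio = csf_glucose_mmol_l / plasma_glucose_mmol_l < 0.4
    return low_csf or low_ratio

print(glut1_csf_screen(1.8, 5.0))  # True: both criteria met
print(glut1_csf_screen(3.5, 5.0))  # False: 3.5 mmol/L, ratio 0.7
```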
https://en.wikipedia.org/wiki/GLUT1_deficiency
Enlarged Board of Appeal of the European Patent Office G 1/07 is a decision of the Enlarged Board of Appeal of the European Patent Office (EPO), which was issued on February 15, 2010. The Enlarged Board of Appeal notably decided that, under the European Patent Convention (EPC),
https://en.wikipedia.org/wiki/G_1/07
In the context of surgery or dental surgery , a gag is a device used to hold the patient 's mouth open when working in the oral cavity , or to force the mouth open when it cannot open naturally because of forward dislocation of the jaw joint's intraarticular cartilage pad. Applications for medical gags include oral surgery and airway management . Gag designs, like other medical instrument designs, are often named after their inventors. Common examples of medical gags include the Jennings , Whitehead , and Hallam gags. These types of gags are also used in sexual fetish or bondage play. See Gag (BDSM) § Medical .
https://en.wikipedia.org/wiki/Gag_(medicine)
Gait abnormality is a deviation from normal walking ( gait ). Watching a patient walk is an important part of the neurological examination. Normal gait requires that many systems, including strength, sensation and coordination, function in an integrated fashion. Many common problems in the nervous system and musculoskeletal system will show up in the way a person walks. [ 1 ] Patients with musculoskeletal pain, weakness or limited range of motion often present conditions such as Trendelenburg's sign , limping , myopathic gait and antalgic gait . Patients who have peripheral neuropathy also experience numbness and tingling in their hands and feet. This can cause ambulation impairment, such as trouble climbing stairs or maintaining balance . Gait abnormality is also common in persons with nervous system problems such as cauda equina syndrome , multiple sclerosis , Parkinson's disease (with characteristic Parkinsonian gait ), Alzheimer's disease , vitamin B 12 deficiency , myasthenia gravis , normal pressure hydrocephalus , and Charcot–Marie–Tooth disease . Research has shown that neurological gait abnormalities are associated with an increased risk of falls in older adults. [ 2 ] Orthopedic corrective treatments may also result in gait abnormality, such as lower extremity amputation , healed fractures , and arthroplasty (joint replacement). Difficulty in ambulation that results from chemotherapy is generally temporary in nature, though recovery times of six months to a year are common. Likewise, difficulty in walking due to arthritis or joint pains (antalgic gait) sometimes resolves spontaneously once the pain is gone. [ 3 ] [ 4 ] Hemiplegic persons have circumduction gait, where the affected limb moves through an arc away from the body, and those with cerebral palsy often have scissoring gait . [ citation needed ]
https://en.wikipedia.org/wiki/Gait_abnormality
Gait analysis is the systematic study of animal locomotion , more specifically the study of human motion, using the eye and the brain of observers, augmented by instrumentation for measuring body movements, body mechanics , and the activity of the muscles. [ 1 ] Gait analysis is used to assess and treat individuals with conditions affecting their ability to walk. It is also commonly used in sports biomechanics to help athletes run more efficiently and to identify posture-related or movement-related problems in people with injuries. The study encompasses quantification (introduction and analysis of measurable parameters of gaits ), as well as interpretation, i.e. drawing various conclusions about the animal (health, age, size, weight, speed etc.) from its gait pattern. The pioneers of scientific gait analysis were Aristotle in De Motu Animalium (On the Gait of Animals) [ 2 ] and, much later in 1680, Giovanni Alfonso Borelli in a work also titled De Motu Animalium (I et II) . In the 1890s, the German anatomist Christian Wilhelm Braune and Otto Fischer published a series of papers on the biomechanics of human gait under loaded and unloaded conditions. [ 3 ] With the development of photography and cinematography, it became possible to capture image sequences that reveal details of human and animal locomotion that were not noticeable by watching the movement with the naked eye. Eadweard Muybridge and Étienne-Jules Marey were pioneers of these developments in the late 1800s. For example, serial photography first revealed the detailed sequence of the horse " gallop ", which was usually misrepresented in paintings made prior to this discovery. 
Although much early research was done using film cameras, the widespread application of gait analysis to humans with pathological conditions such as cerebral palsy , Parkinson's disease , and neuromuscular disorders , began in the 1970s with the availability of video camera systems that could produce detailed studies of individual patients within realistic cost and time constraints. The development of treatment regimes, often involving orthopedic surgery , based on gait analysis results, advanced significantly in the 1980s. Many leading orthopedic hospitals worldwide now have gait labs that are routinely used to design treatment plans and for follow-up monitoring. [ citation needed ] Development of modern computer based systems occurred independently during the late 1970s and early 1980s in several hospital based research labs, some through collaborations with the aerospace industry. [ 4 ] Commercial development soon followed with the emergence of commercial television and later infrared camera systems in the mid-1980s. A typical gait analysis laboratory has several cameras (video or infrared) placed around a walkway or a treadmill, which are linked to a computer. The patient has markers located at various points of reference of the body (e.g., iliac spines of the pelvis, ankle malleolus, and the condyles of the knee), or groups of markers applied to half of the body segments. The patient walks down the catwalk or the treadmill and the computer calculates the trajectory of each marker in three dimensions. A model is applied to calculate the movement of the underlying bones. This gives a complete breakdown of the movement of each joint. One common method is to use Helen Hayes Hospital marker set, [ 5 ] in which a total of 15 markers are attached on the lower body. The 15 marker motions are analyzed analytically, and it provides angular motion of each joint. 
[ citation needed ] To calculate the kinetics of gait patterns, most labs have floor-mounted load transducers, also known as force platforms, which measure the ground reaction forces and moments, including the magnitude, direction and location (called the center of pressure). The spatial distribution of forces can be measured with pedobarography equipment. Adding this to the known dynamics of each body segment enables the solution of equations based on the Newton–Euler equations of motion permitting computations of the net forces and the net moments of force about each joint at every stage of the gait cycle. The computational method for this is known as inverse dynamics. [ citation needed ] This use of kinetics, however, does not result in information for individual muscles but muscle groups, such as the extensors or flexors of the limb. To detect the activity and contribution of individual muscles to movement, it is necessary to investigate the electrical activity of muscles. Many labs also use surface electrodes attached to the skin to detect the electrical activity or electromyogram (EMG) of muscles. In this way it is possible to investigate the activation times of muscles and, to some degree, the magnitude of their activation—thereby assessing their contribution to gait. Deviations from normal kinematic, kinetic or EMG patterns are used to diagnose specific pathologies, predict the outcome of treatments, or determine the effectiveness of training programs. [ citation needed ] Gait is modulated or modified by many factors, and changes in the normal gait pattern can be transient or permanent. The factors can be of various types: The parameters taken into account for the gait analysis are as follows: Gait analysis involves measurement, [ 7 ] where measurable parameters are introduced and analyzed, and interpretation, where conclusions about the subject (health, age, size, weight, speed, etc.) are drawn. 
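A basic kinematic quantity described above — the angular motion of a joint recovered from marker trajectories — can be sketched as follows. This is a minimal illustration, not a lab's actual pipeline: the marker coordinates are invented, and real systems apply full biomechanical models (e.g. the Helen Hayes marker set) rather than a single included angle.

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Included angle (degrees) at `joint` between the segments to
    `proximal` and `distal` markers, e.g. hip-knee-ankle markers
    from one motion-capture frame."""
    u = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
    v = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against tiny floating-point overshoot outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical marker positions (metres) for one frame of a gait trial:
hip, knee, ankle = [0.0, 0.0, 1.0], [0.0, 0.0, 0.5], [0.1, 0.0, 0.0]
print(joint_angle(hip, knee, ankle))  # included angle at the knee marker
```

Evaluating this angle frame by frame over a recorded walk yields the joint-angle curves that gait labs compare against normative data.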
The analysis is the measurement of the following: It consists of the calculation of speed, stride length, cadence, and so on. These measurements are carried out through: Pressure measurement systems are an additional way to measure gait by providing insights into pressure distribution, contact area, center of force movement and symmetry between sides. These systems typically provide more than just pressure information; additional information available from these systems includes force , timing and spatial parameters. Different methods for assessing pressure are available, like a pressure measurement mat or walkway (longer in length to capture more foot strikes), as well as in-shoe pressure measurement systems (where sensors are placed inside the shoe). [ 17 ] [ 18 ] [ 19 ] Many pressure measurement systems integrate with additional types of analysis systems, like motion capture, EMG or force plates, to provide a comprehensive gait analysis. [ citation needed ] Kinetics is the study of the forces involved in the production of movements. Electromyography is the study of patterns of muscle activity during gait. Gait analysis is used to analyze the walking ability of humans and animals, so this technology can be used for the following applications: Pathological gait may reflect compensations for underlying pathologies, or be responsible for causation of symptoms in itself. Cerebral palsy and stroke patients are commonly seen in gait labs. The study of gait allows diagnoses and intervention strategies to be made, as well as permitting future developments in rehabilitation engineering . Aside from clinical applications, gait analysis is used in professional sports training to optimize and improve athletic performance. Gait analysis techniques allow for the assessment of gait disorders and the effects of corrective orthopedic surgery. 
[ 20 ] Options for treatment of cerebral palsy include the artificial paralysis of spastic muscles using Botox or the lengthening, re-attachment or detachment of particular tendons . Corrections of distorted bony anatomy are also undertaken ( osteotomy ). [ 20 ] Observation of gait is also beneficial for diagnoses in chiropractic and osteopathic professions as hindrances in gait may be indicative of a misaligned pelvis or sacrum. As the sacrum and ilium biomechanically move in opposition to each other, adhesions between the two of them via the sacrospinous or sacrotuberous ligaments (among others) may suggest a rotated pelvis. Both doctors of chiropractic and osteopathic medicine use gait to discern the listing of a pelvis and can employ various techniques to restore a full range of motion to areas involved in ambulatory movement. Chiropractic adjustment of the pelvis has shown a trend in helping restore gait patterns [ 21 ] [ 22 ] as has osteopathic manipulative therapy (OMT). [ 23 ] [ 24 ] By studying the gait of non-human animals, more insight can be gained about the mechanics of locomotion, which has diverse implications for understanding the biology of the species in question as well as locomotion more broadly. Gait recognition is a type of behavioral biometric authentication that recognizes and verifies people by their walking style and pace. [ 25 ] [ 26 ] Advances in gait recognition have led to the development of techniques for forensics use since each person can have a gait defined by unique measurements such as the locations of ankle, knee, and hip. [ 27 ] In 2018, there were reports that the Government of China had developed surveillance tools based on gait analysis, allowing them to uniquely identify people, even if their faces are obscured. [ 28 ] [ 29 ]
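The gait-recognition idea described above — identifying a person from measurements such as the locations of ankle, knee, and hip — can be illustrated with a toy nearest-template matcher. The feature names, values, and threshold here are entirely hypothetical; real biometric systems use far richer features and learned models.

```python
import math

def gait_distance(a, b):
    """Euclidean distance between two gait feature vectors, e.g.
    (stride length in m, cadence scaled, knee angle range in deg)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, threshold=5.0):
    """Return the enrolled identity closest to `probe`, or None if no
    template is within `threshold` (an unknown walker)."""
    name, template = min(gallery.items(), key=lambda kv: gait_distance(probe, kv[1]))
    return name if gait_distance(probe, template) <= threshold else None

# Hypothetical enrolled templates and a new observation:
gallery = {"alice": [1.40, 11.2, 62.0], "bob": [1.15, 10.1, 55.0]}
print(identify([1.38, 11.0, 61.0], gallery))  # matches alice's template
```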
https://en.wikipedia.org/wiki/Gait_analysis
A galactagogue , or galactogogue (from Greek : γάλα [γαλακτ-], milk, + ἀγωγός, leading), also known as a lactation inducer or milk booster , is a substance that promotes lactation in humans and other animals. [ 1 ] [ 2 ] It may be synthetic , plant -derived, or endogenous . Galactagogues may be used to induce lactation and to treat low milk supply . Synthetic galactagogues such as domperidone and metoclopramide interact with the dopamine system in such a way as to increase the production of prolactin ; specifically, by blocking the D 2 receptor . [ 3 ] There is some evidence to suggest that mothers who are unable to meet their infants' breastfeeding needs may benefit from galactogogues. [ 4 ] [ 5 ] A more recent study questions the effectiveness of commercial lactation cookies, finding no significant difference. [ 6 ] Galactagogues may be considered when non-pharmacologic interventions are found to be insufficient. [ 7 ] [ 8 ] For example, domperidone may be an option for mothers of preterm babies who, at over 14 days from delivery and after full lactation support, still have difficulty expressing breast milk in sufficient quantity for their child's needs. [ 9 ] Lactation induction may also be possible in certain circumstances for women planning to adopt an infant. [ 10 ] Domperidone (like metoclopramide, a D 2 receptor antagonist) is not approved for enhanced lactation in the USA. [ 11 ] [ 12 ] By contrast, Australian guidelines consider domperidone to be the preferred galactagogue when non-pharmacological approaches have proved insufficient. [ 7 ] Unlike metoclopramide, domperidone does not cross the blood–brain barrier and does not tend to have adverse effects such as drowsiness or depression. 
[ 7 ] Other drugs which may increase lactation include: Progestogens like progesterone , medroxyprogesterone acetate , and cyproterone acetate have been found to produce lobuloalveolar development of the breasts , which is important for lactation as milk is produced in the mammary lobules . [ 14 ] [ 15 ] [ 16 ] Herbals and foods used as galactagogues have little or no scientific evidence of efficacy, and the identity and purity of herbals are concerns because of inadequate testing requirements. [ 17 ] The herbals most commonly cited as galactagogues are: [ 17 ] Other herbals that have been claimed to be galactagogues include:
https://en.wikipedia.org/wiki/Galactagogue
“I get the chance to not only watch the future happen, but I can actually be a part of it and create it,” says Ugandan entrepreneur Emmanuel Kasigazi. Duyen Nguyen | MIT Open Learning Like millions of others during the global Covid-19 lockdowns, Emmanuel Kasigazi, an entrepreneur from Uganda, turned to YouTube to pass the time. But he wasn’t following an influencer or watching music videos. A lifelong learner, Kasigazi was scouring the video-sharing platform for educational resources. Since 2013, when he got his first smartphone, Kasigazi has been charting his own learning journey through YouTube, educating himself on subjects as diverse as psychology and artificial intelligence. And it was while searching for the answer to an AI-related question that Kasigazi first discovered MIT OpenCourseWare (OCW). “The search results showed MIT lectures, and I thought, ‘Which MIT is this?’” recalls Kasigazi, who admits he was initially skeptical as he opened the OCW YouTube channel. To his amazement, he found hundreds of courses there — not only clips, but complete lectures that he could follow alongside the students in MIT classrooms. He searched for more information on OCW and tried the channel on different browsers to triple-check its credibility. “Here they were, all these courses by one of the best — if not the best — schools in tech in the world, and they were free. For a long time I couldn’t believe it. I told everyone I knew,” he remembers. For Kasigazi, the channel became a gateway to other open education resources, including the OpenCourseWare website and MITx courses, both part of MIT Open Learning. 
“I always had the questions — I grew up on science cartoons like ‘Dexter’s Laboratory’ and ‘Pinky and the Brain’ — so I would go on YouTube to try to find answers to these questions, and I found this whole other world,” he says. OCW launched its YouTube channel in 2008, and this August passed 4 million subscribers. While introductory computer science, math, and physics are the most-visited courses on the OCW website, the most popular YouTube videos reflect a more diverse range of interests, including a lecture about piloting a fighter jet aircraft, an introduction to the human brain, and an introduction to financial terms and concepts. Through this extensive collection, Kasigazi explains that he’s been able to explore “the things I love,” while also studying cloud computing, data science, and AI — fields that he plans to pursue in graduate studies. He says, “This is what OpenCourseWare has enabled me to do: I get the chance to not only watch the future happen, but I can actually be a part of it and create it.” Understanding humanity through the liberal arts When Kasigazi was young, a beloved aunt recognized his natural curiosity and steered him toward the best schools. “I owe her everything,” he says, “everything I am is because of her.” Thanks to his excellent grades he received an academic scholarship from the Ugandan government to attend Makerere University, one of the top universities in sub-Saharan Africa, where he earned a degree in information systems. Having pursued IT for its practical applications, Kasigazi admits that he was initially more interested in the science and theory behind computers than “the coding bits of it.” “I love the concept of it — how we are trying to make these machines,” he says, explaining that he’s long been drawn to the social sciences and humanities, particularly psychology and philosophy. 
“I’m interested in how we work as human beings, because everything we do is for, with, and around human beings,” says Kasigazi, who considers psychology to be foundational to almost every field. “Whatever it is you’re teaching these kids, they’re going to be dealing with people. So first teach them what people think, how they act — that was my drive to love psychology.” Kasigazi has also turned to OCW to brush up on his coding skills, watching 6.0001 (Introduction to Computer Science and Programming Using Python) lectures with Professor Ana Bell and reviewing the instructor-paced version with Professor Eric Grimson now on MITx. “I am proud to say MIT OCW has made me fall in love with coding … it makes sense like it never has before,” he says. Nurturing a worldview In 2014 Kasigazi moved to South Sudan, which had only recently emerged from a civil war as an independent nation. Fresh out of university, he was there to teach computer skills and graphic design — some of his students included members of the new country’s government — but his time in South Sudan quickly became a learning experience for him, too. “When you grow up in your community, you have this bubble. We all experience it — it’s a human thing,” he reflects. “For the first time, I realized that everything I knew is not a given. Everything I grew up knowing is not universal.” With his worldview newly broadened, he began to nurture his interest in psychology, philosophy, and the sciences, watching crash courses, explainer videos, and other content on the subject. “It’s entertainment, to me, at the same time that it’s a passion,” he says. Today Kasigazi runs his own company, which he started in 2012 with friends and resumed when he returned to Uganda seven years ago. Since coming across the OCW YouTube channel, Kasigazi has worked through all of the freely available MIT psychology courses. 
Professor John Gabrieli’s 9.00SC (Introduction to Psychology) has particularly resonated with him, even prompting him to reach out to Gabrieli. “As much as I’d been getting some knowledge on psychology over the years online, it wasn’t as deep and as interesting or captivating as your classes were,” he wrote. “From your teaching style, to the explanations, to the topics, to how you make people understand a topic, to the experiments mentioned and referenced, to how you approach questions and later make one think deeper about them.” “The message from Emmanuel is deeply touching about the joy of learning,” says Gabrieli, who is also an investigator at the McGovern Institute. “I am so grateful to OCW for making this course on psychology open to the world, and to Emmanuel for so delightfully sharing what this course meant to him.” New courses are added regularly to both the OCW website and YouTube channel. Kasigazi, who’s currently enjoying 9.13 (Introduction to the Human Brain) from professor and McGovern Institute investigator Nancy Kanwisher, looks forward to discovering what new worlds of knowledge they’ll open. Reposted from https://news.mit.edu on November 7, 2022.
common_crawl_ocw.mit.edu_0
“I get the chance to not only watch the future happen, but I can actually be a part of it and create it,” says Ugandan entrepreneur Emmanuel Kasigazi. Duyen Nguyen | MIT Open Learning Like millions of others during the global Covid-19 lockdowns, Emmanuel Kasigazi, an entrepreneur from Uganda, turned to YouTube to pass the time. But he wasn’t following an influencer or watching music videos. A lifelong learner, Kasigazi was scouring the video-sharing platform for educational resources. Since 2013, when he got his first smartphone, Kasigazi has been charting his own learning journey through YouTube, educating himself on subjects as diverse as psychology and artificial intelligence. And it was while searching for the answer to an AI-related question that Kasigazi first discovered MIT OpenCourseWare (OCW). “Here they were, all these courses by one of the best — if not the best — schools in tech in the world, and they were free. For a long time I couldn’t believe it. I told everyone I knew." “The search results showed MIT lectures, and I thought, ‘Which MIT is this?’” recalls Kasigazi, who admits he was initially skeptical as he opened the OCW YouTube channel. To his amazement, he found hundreds of courses there — not only clips, but complete lectures that he could follow alongside the students in MIT classrooms. He searched for more information on OCW and tried the channel on different browsers to triple-check its credibility. “Here they were, all these courses by one of the best — if not the best — schools in tech in the world, and they were free. For a long time I couldn’t believe it. I told everyone I knew,” he remembers. For Kasigazi, the channel became a gateway to other open education resources, including the OpenCourseWare website and MITx courses, both part of MIT Open Learning. 
“I get the chance to not only watch the future happen, but I can actually be a part of it and create it,” says Ugandan entrepreneur Emmanuel Kasigazi.

Duyen Nguyen | MIT Open Learning

Like millions of others during the global Covid-19 lockdowns, Emmanuel Kasigazi, an entrepreneur from Uganda, turned to YouTube to pass the time. But he wasn’t following an influencer or watching music videos. A lifelong learner, Kasigazi was scouring the video-sharing platform for educational resources.

Since 2013, when he got his first smartphone, Kasigazi has been charting his own learning journey through YouTube, educating himself on subjects as diverse as psychology and artificial intelligence. And it was while searching for the answer to an AI-related question that Kasigazi first discovered MIT OpenCourseWare (OCW).

“The search results showed MIT lectures, and I thought, ‘Which MIT is this?’” recalls Kasigazi, who admits he was initially skeptical as he opened the OCW YouTube channel. To his amazement, he found hundreds of courses there — not only clips, but complete lectures that he could follow alongside the students in MIT classrooms. He searched for more information on OCW and tried the channel on different browsers to triple-check its credibility.

“Here they were, all these courses by one of the best — if not the best — schools in tech in the world, and they were free. For a long time I couldn’t believe it. I told everyone I knew,” he remembers.

For Kasigazi, the channel became a gateway to other open education resources, including the OpenCourseWare website and MITx courses, both part of MIT Open Learning.
“I always had the questions — I grew up on science cartoons like ‘Dexter’s Laboratory’ and ‘Pinky and the Brain’ — so I would go on YouTube to try to find answers to these questions, and I found this whole other world,” he says.

OCW launched its YouTube channel in 2008, and this August passed 4 million subscribers. While introductory computer science, math, and physics are the most-visited courses on the OCW website, the most popular YouTube videos reflect a more diverse range of interests, including a lecture about piloting a fighter jet aircraft, an introduction to the human brain, and an introduction to financial terms and concepts.

Through this extensive collection, Kasigazi explains that he’s been able to explore “the things I love,” while also studying cloud computing, data science, and AI — fields that he plans to pursue in graduate studies. He says, “This is what OpenCourseWare has enabled me to do: I get the chance to not only watch the future happen, but I can actually be a part of it and create it.”

Understanding humanity through the liberal arts

When Kasigazi was young, a beloved aunt recognized his natural curiosity and steered him toward the best schools. “I owe her everything,” he says, “everything I am is because of her.” Thanks to his excellent grades he received an academic scholarship from the Ugandan government to attend Makerere University, one of the top universities in sub-Saharan Africa, where he earned a degree in information systems.

Having pursued IT for its practical applications, Kasigazi admits that he was initially more interested in the science and theory behind computers than “the coding bits of it.” “I love the concept of it — how we are trying to make these machines,” he says, explaining that he’s long been drawn to the social sciences and humanities, particularly psychology and philosophy.
“I’m interested in how we work as human beings, because everything we do is for, with, and around human beings,” says Kasigazi, who considers psychology to be foundational to almost every field. “Whatever it is you’re teaching these kids, they’re going to be dealing with people. So first teach them what people think, how they act — that was my drive to love psychology.”

Kasigazi has also turned to OCW to brush up on his coding skills, watching 6.0001 (Introduction to Computer Science and Programming Using Python) lectures with Professor Ana Bell and reviewing the instructor-paced version with Professor Eric Grimson now on MITx. “I am proud to say MIT OCW has made me fall in love with coding … it makes sense like it never has before,” he says.

Nurturing a worldview

In 2014 Kasigazi moved to South Sudan, which had only recently emerged from a civil war as an independent nation. Fresh out of university, he was there to teach computer skills and graphic design — some of his students included members of the new country’s government — but his time in South Sudan quickly became a learning experience for him, too.

“When you grow up in your community, you have this bubble. We all experience it — it’s a human thing,” he reflects. “For the first time, I realized that everything I knew is not a given. Everything I grew up knowing is not universal.”

With his worldview newly broadened, he began to nurture his interest in psychology, philosophy, and the sciences, watching crash courses, explainer videos, and other content on the subject. “It’s entertainment, to me, at the same time that it’s a passion,” he says.

Today Kasigazi runs his own company, which he started in 2012 with friends and resumed when he returned to Uganda seven years ago. Since coming across the OCW YouTube channel, Kasigazi has worked through all of the freely available MIT psychology courses.
Professor John Gabrieli’s 9.00SC (Introduction to Psychology) has particularly resonated with him, even prompting him to reach out to Gabrieli. “As much as I’d been getting some knowledge on psychology over the years online, it wasn’t as deep and as interesting or captivating as your classes were,” he wrote. “From your teaching style, to the explanations, to the topics, to how you make people understand a topic, to the experiments mentioned and referenced, to how you approach questions and later make one think deeper about them.”

“The message from Emmanuel is deeply touching about the joy of learning,” says Gabrieli, who is also an investigator at the McGovern Institute. “I am so grateful to OCW for making this course on psychology open to the world, and to Emmanuel for so delightfully sharing what this course meant to him.”

New courses are added regularly to both the OCW website and YouTube channel. Kasigazi, who’s currently enjoying 9.13 (Introduction to the Human Brain) from professor and McGovern Institute investigator Nancy Kanwisher, looks forward to discovering what new worlds of knowledge they’ll open.

Reposted from https://news.mit.edu on November 7, 2022.

We hope you’ve been inspired by this story and by OCW’s effort to meet the needs of learners eager to enhance their knowledge, lift up their communities, and change the world for the benefit of everyone. Please consider supporting our work with a donation, or if giving isn’t possible right now, we’d love to hear how OCW has made a difference in your life or classroom. We’d appreciate it!
Professor John Gabrieli’s 9.00SC (Introduction to Psychology) lectures have particularly resonated with him, even prompting him to reach out to Gabrieli. “As much as I’d been getting some knowledge on psychology over the years online, it wasn’t as deep and as interesting or captivating as your classes were,” he wrote. “From your teaching style, to the explanations, to the topics, to how you make people understand a topic, to the experiments mentioned and referenced, to how you approach questions and later make one think deeper about them.” “The message from Emmanuel is deeply touching about the joy of learning,” says Gabrieli, who is also an investigator at the McGovern Institute. “I am so grateful to OCW for making this course on psychology open to the world, and to Emmanuel for so delightfully sharing what this course meant to him.” New courses are added regularly to both the OCW website and YouTube channel. Kasigazi, who’s currently enjoying 9.13 (Introduction to the Human Brain) from professor and McGovern Institute investigator Nancy Kanwisher, looks forward to discovering what new worlds of knowledge they’ll open. Reposted from https://news.mit.edu on November 7, 2022. We hope you’ve been inspired by this story and by OCW’s effort to meet the needs of learners eager to enhance their knowledge, lift up their communities, and change the world for the benefit of everyone. Please consider supporting our work with a donation, or if giving isn’t possible right now, we’d love to hear how OCW has made a difference in your life or classroom. We’d appreciate it!
The previous method uses a straight-line approximation to get a new argument between old ones, once the values of f at the old arguments have opposite signs. This is usually a smart thing to do, unless the endpoints of the interval in which your solution is trapped (in columns s and t above) converge very slowly. To avoid that possibility we can, once the function f has opposite signs at two points, say a and b, evaluate it at the midpoint m = (a + b)/2, and replace one endpoint by m. This will cut the size of the interval in which the solution must lie in half. This is slow convergence compared to the best of Newton's algorithm or the variants discussed above, but it is steady and effective and will always give a definite improvement in accuracy in a fixed number of steps. Since 2 to the tenth power is a bit more than one thousand (it's 1024), the size of the interval goes down by a factor of at least 1000 for every ten iterations, and so if it starts at something like 1, after 35 or so steps you will have the answer to ten decimal places. How does the algorithm go? We start with two arguments a and b, and suppose a < b. We evaluate f(a), f(b), and f(m), where m = (a + b)/2. If f(m) and f(a) have the same sign you replace a by m and keep b, while otherwise you keep a and replace b by m. In a spreadsheet you can put your initial guesses in aa2 and ab2, put "= aa2/2+ab2/2" in ac2, and put "= f(aa2)" in ad2 and copy that to ae2 and af2. You can then put "= if(ae2*af2>0, aa2,ac2)" in aa3 and "= if(ae2*af2>0,ac2,ab2)" in ab3, copy down, and you are done. Unless there is an error lurking somewhere, or you started with ad2 and ae2 having the same sign, this will shrink the initial a-to-b interval to less than 10^-10 of its original length after 35 or so steps. Exercise 13.9 Compare performance of this algorithm with the previous ones on the same examples. Any comments?
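The spreadsheet recipe above can also be sketched as a short program. A minimal Python version (the helper name `bisect` and the example function are illustrative, not part of the text):

```python
def bisect(f, a, b, steps=35):
    """Bisection: repeatedly halve [a, b], keeping the half on which f changes sign."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(steps):
        m = a / 2 + b / 2      # midpoint, as in spreadsheet cell ac2
        fm = f(m)
        if fm * fa > 0:        # f(m) and f(a) share a sign: the root lies in [m, b]
            a, fa = m, fm
        else:                  # otherwise the root lies in [a, m]
            b, fb = m, fm
    return a / 2 + b / 2

# Example: the solution of x^2 = 2 trapped between 1 and 2
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
```

Thirty-five halvings shrink the bracketing interval by a factor of 2^35, roughly 3.4 × 10^10, which matches the ten-decimal-place claim above.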
# fm_casestudy_1_0_DownloadData.r # # * Install/load R packages # * Collect historical financial data from internet # * Create time series data matrix: casestudy1.data0.0 # Closing prices on stocks (BAC, GE, JDSU, XOM) # Closing values of indexes (SP500) # Yields on constant maturity US rates/bonds (3MO, 1YR, 5YR, 10 YR) # Closing price on crude oil spot price # 0. Install and load packages ---- # # 0.1 Install packages --- # Set ind.install0 to TRUE if running script for first time on a computer # or updating the packages ind.install0<-FALSE # if (ind.install0){ install.packages("quantmod") install.packages("tseries") install.packages("vars") install.packages("fxregime") install.packages("coefplot") } # 0.2 Load packages into R session library("quantmod") ## Warning: package 'quantmod' was built under R version 3.1.3 ## Loading required package: xts ## Warning: package 'xts' was built under R version 3.1.3 ## Loading required package: zoo ## Warning: package 'zoo' was built under R version 3.1.3 ## ## Attaching package: 'zoo' ## ## The following objects are masked from 'package:base': ## ## as.Date, as.Date.numeric ## ## Loading required package: TTR ## Warning: package 'TTR' was built under R version 3.1.3 ## Version 0.4-0 included new data defaults. See ?getSymbols. 
library("tseries") ## Warning: package 'tseries' was built under R version 3.1.3 library("vars") ## Warning: package 'vars' was built under R version 3.1.3 ## Loading required package: MASS ## Warning: package 'MASS' was built under R version 3.1.3 ## Loading required package: strucchange ## Warning: package 'strucchange' was built under R version 3.1.3 ## Loading required package: sandwich ## Warning: package 'sandwich' was built under R version 3.1.3 ## Loading required package: urca ## Warning: package 'urca' was built under R version 3.1.3 ## Loading required package: lmtest ## Warning: package 'lmtest' was built under R version 3.1.3 library("fxregime") ## Warning: package 'fxregime' was built under R version 3.1.3 # 1. Load data into R session ---- # 1.1 Stock Price Data from Yahoo # Apply quantmod(sub-package TTR) function # getYahoodata # # Returns historical data for any symbol at the website # http://finance.yahoo.com # # 1.1.1 Set start and end date for collection in YYYYMMDD (numeric) format date.start<-20000101 date.end<-20150331 # format(Sys.Date(),"%Y%m%d") # 1.1.2 Collect historical data for S&P 500 Index SP500 <- getYahooData("^GSPC", start=date.start, end=date.end) chartSeries(SP500[,1:5]) # 1.1.3 Collect historical data for 4 stocks GE <- getYahooData("GE", start=date.start, end=date.end) BAC <- getYahooData("BAC", start=date.start, end=date.end) JDSU <- getYahooData("JDSU", start=date.start, end=date.end) XOM <- getYahooData("XOM", start=date.start, end=date.end) AEP <- getYahooData("AEP", start=date.start, end=date.end) DUK <- getYahooData("DUK", start=date.start, end=date.end) chartSeries(GE[,1:5]) chartSeries(BAC[,1:5]) chartSeries(JDSU[,1:5]) chartSeries(XOM[,1:5]) chartSeries(AEP[,1:5]) chartSeries(DUK[,1:5]) # 1.1.4 Details of data object GE from getYahoodata # # GE is a matrix object with # row dimension equal to the number of dates # column dimension equal to 9 is.matrix(GE) ## [1] TRUE dim(GE) ## [1] 3834 9 # Print out the first and 
last parts of the matrix: head(GE) ## Open High Low Close Volume Unadj.Close Div ## 2000-01-03 32.00652 32.15034 31.20898 31.37894 35166578 150.0000 NA ## 2000-01-04 30.80366 30.96055 30.12378 30.12378 35248799 144.0000 NA ## 2000-01-05 30.07149 30.75136 29.82306 30.07149 43489038 143.7500 NA ## 2000-01-06 29.94074 30.73829 29.83615 30.47353 31666460 145.6719 NA ## 2000-01-07 30.96055 31.77118 30.75136 31.65351 32093817 151.3125 NA ## 2000-01-10 31.94114 32.22879 31.61428 31.64044 24262291 151.2500 NA ## Split Adj.Div ## 2000-01-03 NA NA ## 2000-01-04 NA NA ## 2000-01-05 NA NA ## 2000-01-06 NA NA ## 2000-01-07 NA NA ## 2000-01-10 NA NA tail(GE) ## Open High Low Close Volume Unadj.Close Div Split Adj.Div ## 2015-03-24 25.38 25.48 25.27 25.27 25801300 25.27 NA NA NA ## 2015-03-25 25.23 25.33 24.91 24.91 34897400 24.91 NA NA NA ## 2015-03-26 24.80 24.92 24.67 24.80 32504400 24.80 NA NA NA ## 2015-03-27 24.92 24.92 24.71 24.86 28320600 24.86 NA NA NA ## 2015-03-30 24.98 25.20 24.97 25.12 27281400 25.12 NA NA NA ## 2015-03-31 25.09 25.09 24.81 24.81 34940900 24.81 NA NA NA # Some attributes of the object GE mode(GE) # storage mode of GE is "numeric" ## [1] "numeric" class(GE) # object-oriented class(es) of GE are "xts" and "zoo" ## [1] "xts" "zoo" # xts is an extensible time-series object from the package xts # zoo is an object storing ordered observations in a vector or matrix with an index attribute # Important zoo functions # coredata() extracts or replaces core data # index() extracts or replaces the (sort.by) index of the object # 1.2 Federal Reserve Economic Data (FRED) from the St. 
Louis Federal Reserve # Apply quantmod function # getSymbols( seriesname, src="FRED") # # Returns historical data for any symbol at the website # http://research.stlouisfed.org/fred2/ # # Series name | Description # # DGS3MO | 3-Month Treasury, constant maturity rate # DGS1 | 1-Year Treasury, constant maturity rate # DGS5 | 5-Year Treasury, constant maturity rate # DGS10 | 10-Year Treasury, constant maturity rate # # DAAA | Moody's Seasoned Aaa Corporate Bond Yield # DBAA | Moody's Seasoned Baa Corporate Bond Yield # # DCOILWTICO | Crude Oil Prices: West Texas Intermediate (WTI) - Cushing, Oklahoma # # 1.2.1 Default setting collects entire series # and assigns to object of same name as the series getSymbols("DGS3MO", src="FRED") ## As of 0.4-0, 'getSymbols' uses env=parent.frame() and ## auto.assign=TRUE by default. ## ## This behavior will be phased out in 0.5-0 when the call will ## default to use auto.assign=FALSE. getOption("getSymbols.env") and ## getOptions("getSymbols.auto.assign") are now checked for alternate defaults ## ## This message is shown once per session and may be disabled by setting ## options("getSymbols.warning4.0"=FALSE). See ?getSymbols for more details. 
## [1] "DGS3MO" getSymbols("DGS1", src="FRED") ## [1] "DGS1" getSymbols("DGS5", src="FRED") ## [1] "DGS5" getSymbols("DGS10", src="FRED") ## [1] "DGS10" getSymbols("DAAA", src="FRED") ## [1] "DAAA" getSymbols("DBAA", src="FRED") ## [1] "DBAA" getSymbols("DCOILWTICO", src="FRED") ## [1] "DCOILWTICO" # Each object is a 1-column matrix with time series data # The column-name is the same as the object name is.matrix(DGS3MO) # ## [1] TRUE dim(DGS3MO) ## [1] 8687 1 head(DGS3MO) ## DGS3MO ## 1982-01-04 11.87 ## 1982-01-05 12.20 ## 1982-01-06 12.16 ## 1982-01-07 12.17 ## 1982-01-08 11.98 ## 1982-01-11 12.49 tail(DGS3MO) ## DGS3MO ## 2015-04-14 0.02 ## 2015-04-15 0.02 ## 2015-04-16 0.02 ## 2015-04-17 0.01 ## 2015-04-20 0.03 ## 2015-04-21 0.03 mode(DGS3MO) ## [1] "numeric" class(DGS3MO) ## [1] "xts" "zoo" # # 2.0 Merge data series together # 2.1 Create data frame with all FRED series from 2000/01/01 on # # Useful functions/methods for zoo objects # merge() # lag (lag.zoo) # diff() # window.zoo() # # na.locf() # replace NAs by last previous non-NA # rollmean(), rollmax() # compute rolling functions, column-wise fred.data0<-merge( DGS3MO, DGS1, DGS5, DGS10, DAAA, DBAA, DCOILWTICO)["2000::2015-03"] tail(fred.data0) ## DGS3MO DGS1 DGS5 DGS10 DAAA DBAA DCOILWTICO ## 2015-03-24 0.02 0.24 1.37 1.88 3.50 4.41 47.03 ## 2015-03-25 0.04 0.25 1.41 1.93 3.52 4.46 48.75 ## 2015-03-26 0.03 0.28 1.47 2.01 3.61 4.56 51.41 ## 2015-03-27 0.04 0.27 1.42 1.95 3.53 4.48 48.83 ## 2015-03-30 0.04 0.27 1.41 1.96 3.55 4.51 48.66 ## 2015-03-31 0.03 0.26 1.37 1.94 3.52 4.49 47.72 # Determine data dimensions dim(fred.data0) ## [1] 3977 7 class(fred.data0) ## [1] "xts" "zoo" # Check first and last rows in object head(fred.data0) ; tail(fred.data0) ## DGS3MO DGS1 DGS5 DGS10 DAAA DBAA DCOILWTICO ## 2000-01-03 5.48 6.09 6.50 6.58 7.75 8.27 NA ## 2000-01-04 5.43 6.00 6.40 6.49 7.69 8.21 25.56 ## 2000-01-05 5.44 6.05 6.51 6.62 7.78 8.29 24.65 ## 2000-01-06 5.41 6.03 6.46 6.57 7.72 8.24 24.79 ## 2000-01-07 5.38 
6.00 6.42 6.52 7.69 8.22 24.79 ## 2000-01-10 5.42 6.07 6.49 6.57 7.72 8.27 24.71 ## DGS3MO DGS1 DGS5 DGS10 DAAA DBAA DCOILWTICO ## 2015-03-24 0.02 0.24 1.37 1.88 3.50 4.41 47.03 ## 2015-03-25 0.04 0.25 1.41 1.93 3.52 4.46 48.75 ## 2015-03-26 0.03 0.28 1.47 2.01 3.61 4.56 51.41 ## 2015-03-27 0.04 0.27 1.42 1.95 3.53 4.48 48.83 ## 2015-03-30 0.04 0.27 1.41 1.96 3.55 4.51 48.66 ## 2015-03-31 0.03 0.26 1.37 1.94 3.52 4.49 47.72 # Count the number of NAs in each column apply(is.na(fred.data0),2,sum) ## DGS3MO DGS1 DGS5 DGS10 DAAA DBAA ## 164 164 164 164 164 164 ## DCOILWTICO ## 150 # Plot the rates series all together opar<-par() par(fg="blue",bg="black", col.axis="gray", col.lab="gray", col.main="blue", col.sub="blue") ts.plot(as.ts(fred.data0[,1:6]),col=rainbow(6),bg="black", main="FRED Data: Rates") ## Warning in xy.coords(x = matrix(rep.int(tx, k), ncol = k), y = x, log = ## log): NAs introduced by coercion ## Warning in xy.coords(x, y): NAs introduced by coercion par(opar) ## Warning in par(opar): graphical parameter "cin" cannot be set ## Warning in par(opar): graphical parameter "cra" cannot be set ## Warning in par(opar): graphical parameter "csi" cannot be set ## Warning in par(opar): graphical parameter "cxy" cannot be set ## Warning in par(opar): graphical parameter "din" cannot be set ## Warning in par(opar): graphical parameter "page" cannot be set # add legend to plot legend(x=0,y=2, legend=dimnames(fred.data0)[[2]][1:6], lty=rep(1,times=6), col=rainbow(6), cex=0.75) # Plot the Crude Oil Price chartSeries(to.monthly(fred.data0[,"DCOILWTICO"]), main="FRED Data: Crude Oil (WTI)") ## Warning in to.period(x, "months", indexAt = indexAt, name = name, ...): ## missing values removed from data chartSeries(to.monthly(XOM[,1:5])) # 2.2 Merge the closing prices for the stock market data series yahoo.data0<-merge(BAC$Close, GE$Close, JDSU$Close, XOM$Close, SP500$Close, AEP$Close, DUK$Close) # Replace the index of yahoo data with date values that do not include 
hours/minutes dimnames(yahoo.data0)[[2]]<-c("BAC","GE","JDSU","XOM","SP500","AEP","DUK") yahoo.data0.0<-zoo(x=coredata(yahoo.data0), order.by=as.Date(time(yahoo.data0))) # fred.data0 is already indexed by this scale # 2.3 Merge the yahoo and Fred data together # 2.3.1 merge with all dates casestudy1.data0<-merge(yahoo.data0.0, fred.data0) dim(casestudy1.data0) ## [1] 3977 14 head(casestudy1.data0) ## BAC GE JDSU XOM SP500 AEP DUK ## 2000-01-03 15.61121 31.37894 752.00 27.45076 1455.22 14.97396 40.60250 ## 2000-01-04 14.68461 30.12378 684.50 26.92497 1399.42 15.15257 41.23363 ## 2000-01-05 14.84576 30.07149 633.00 28.39281 1402.11 15.71819 42.91664 ## 2000-01-06 16.11480 30.47353 599.00 29.86064 1403.45 15.80750 44.07370 ## 2000-01-07 15.69179 31.65351 719.75 29.77301 1441.47 16.01588 45.23077 ## 2000-01-10 15.14791 31.64044 801.50 29.35676 1457.60 15.95634 45.17818 ## DGS3MO DGS1 DGS5 DGS10 DAAA DBAA DCOILWTICO ## 2000-01-03 5.48 6.09 6.50 6.58 7.75 8.27 NA ## 2000-01-04 5.43 6.00 6.40 6.49 7.69 8.21 25.56 ## 2000-01-05 5.44 6.05 6.51 6.62 7.78 8.29 24.65 ## 2000-01-06 5.41 6.03 6.46 6.57 7.72 8.24 24.79 ## 2000-01-07 5.38 6.00 6.42 6.52 7.69 8.22 24.79 ## 2000-01-10 5.42 6.07 6.49 6.57 7.72 8.27 24.71 tail(casestudy1.data0) ## BAC GE JDSU XOM SP500 AEP DUK DGS3MO DGS1 DGS5 ## 2015-03-24 15.61 25.27 13.48 84.52 2091.50 57.25 76.06 0.02 0.24 1.37 ## 2015-03-25 15.41 24.91 13.04 84.86 2061.05 55.91 74.96 0.04 0.25 1.41 ## 2015-03-26 15.42 24.80 13.03 84.32 2056.15 55.33 74.35 0.03 0.28 1.47 ## 2015-03-27 15.31 24.86 12.98 83.58 2061.02 55.90 75.00 0.04 0.27 1.42 ## 2015-03-30 15.52 25.12 13.14 85.63 2086.24 56.58 75.90 0.04 0.27 1.41 ## 2015-03-31 15.39 24.81 13.12 85.00 2067.89 56.25 76.78 0.03 0.26 1.37 ## DGS10 DAAA DBAA DCOILWTICO ## 2015-03-24 1.88 3.50 4.41 47.03 ## 2015-03-25 1.93 3.52 4.46 48.75 ## 2015-03-26 2.01 3.61 4.56 51.41 ## 2015-03-27 1.95 3.53 4.48 48.83 ## 2015-03-30 1.96 3.55 4.51 48.66 ## 2015-03-31 1.94 3.52 4.49 47.72 
apply(is.na(casestudy1.data0),2,sum) ## BAC GE JDSU XOM SP500 AEP ## 143 143 143 143 143 143 ## DUK DGS3MO DGS1 DGS5 DGS10 DAAA ## 143 164 164 164 164 164 ## DBAA DCOILWTICO ## 164 150 # 2.3.2 Subset out days when SP500 is not missing (not == NA) index.notNA.SP500<-which(is.na(coredata(casestudy1.data0$SP500))==FALSE) casestudy1.data0.0<-casestudy1.data0[index.notNA.SP500,] head(casestudy1.data0.0) ## BAC GE JDSU XOM SP500 AEP DUK ## 2000-01-03 15.61121 31.37894 752.00 27.45076 1455.22 14.97396 40.60250 ## 2000-01-04 14.68461 30.12378 684.50 26.92497 1399.42 15.15257 41.23363 ## 2000-01-05 14.84576 30.07149 633.00 28.39281 1402.11 15.71819 42.91664 ## 2000-01-06 16.11480 30.47353 599.00 29.86064 1403.45 15.80750 44.07370 ## 2000-01-07 15.69179 31.65351 719.75 29.77301 1441.47 16.01588 45.23077 ## 2000-01-10 15.14791 31.64044 801.50 29.35676 1457.60 15.95634 45.17818 ## DGS3MO DGS1 DGS5 DGS10 DAAA DBAA DCOILWTICO ## 2000-01-03 5.48 6.09 6.50 6.58 7.75 8.27 NA ## 2000-01-04 5.43 6.00 6.40 6.49 7.69 8.21 25.56 ## 2000-01-05 5.44 6.05 6.51 6.62 7.78 8.29 24.65 ## 2000-01-06 5.41 6.03 6.46 6.57 7.72 8.24 24.79 ## 2000-01-07 5.38 6.00 6.42 6.52 7.69 8.22 24.79 ## 2000-01-10 5.42 6.07 6.49 6.57 7.72 8.27 24.71 tail(casestudy1.data0.0) ## BAC GE JDSU XOM SP500 AEP DUK DGS3MO DGS1 DGS5 ## 2015-03-24 15.61 25.27 13.48 84.52 2091.50 57.25 76.06 0.02 0.24 1.37 ## 2015-03-25 15.41 24.91 13.04 84.86 2061.05 55.91 74.96 0.04 0.25 1.41 ## 2015-03-26 15.42 24.80 13.03 84.32 2056.15 55.33 74.35 0.03 0.28 1.47 ## 2015-03-27 15.31 24.86 12.98 83.58 2061.02 55.90 75.00 0.04 0.27 1.42 ## 2015-03-30 15.52 25.12 13.14 85.63 2086.24 56.58 75.90 0.04 0.27 1.41 ## 2015-03-31 15.39 24.81 13.12 85.00 2067.89 56.25 76.78 0.03 0.26 1.37 ## DGS10 DAAA DBAA DCOILWTICO ## 2015-03-24 1.88 3.50 4.41 47.03 ## 2015-03-25 1.93 3.52 4.46 48.75 ## 2015-03-26 2.01 3.61 4.56 51.41 ## 2015-03-27 1.95 3.53 4.48 48.83 ## 2015-03-30 1.96 3.55 4.51 48.66 ## 2015-03-31 1.94 3.52 4.49 47.72 
apply(is.na(casestudy1.data0.0)==TRUE, 2,sum) ## BAC GE JDSU XOM SP500 AEP ## 0 0 0 0 0 0 ## DUK DGS3MO DGS1 DGS5 DGS10 DAAA ## 0 28 28 28 28 28 ## DBAA DCOILWTICO ## 28 14 # Remaining missing values are for interest rates and the crude oil spot price # There are days when the stock market is open but the bond market and/or commodities market # is closed # For the rates and commodity data, replace NAs with previous non-NA values casestudy1.data0.00<-na.locf(casestudy1.data0.0) apply(is.na(casestudy1.data0.00),2,sum) # Only 1 NA left, the first DCOILWTICO value ## BAC GE JDSU XOM SP500 AEP ## 0 0 0 0 0 0 ## DUK DGS3MO DGS1 DGS5 DGS10 DAAA ## 0 0 0 0 0 0 ## DBAA DCOILWTICO ## 0 1 save(file="casestudy_1_0.RData", list=c("casestudy1.data0.00"))
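The `na.locf` step above carries the last observed value forward over missing days. A minimal pure-Python sketch of the same idea, using `None` for NA (this is an illustration of the concept, not zoo's exact semantics; note how a leading missing value, like the first DCOILWTICO entry above, has nothing earlier to carry forward and stays missing):

```python
def locf(values):
    """Replace each None with the most recent non-None value
    (last observation carried forward); leading Nones stay None."""
    filled, last = [], None
    for v in values:
        if v is not None:
            last = v
        filled.append(last)
    return filled

# Example: a rate series with market-holiday gaps
print(locf([None, 5.48, None, None, 5.43]))  # [None, 5.48, 5.48, 5.48, 5.43]
```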
# fm_casestudy_1_rcode_CAPM.r #Execute the R-script ``fm_casestudy_1_0_DownloadData.r'' # creates the time-series matrix $casestudy1.data0.00$ # and saves it in R-workspace ``casestudy_1_0.Rdata'. #source("fm_casestudy_1_0.r") # 0.1 Load libraries ---- library("zoo") ## Warning: package 'zoo' was built under R version 3.1.3 ## ## Attaching package: 'zoo' ## ## The following objects are masked from 'package:base': ## ## as.Date, as.Date.numeric library("quantmod") ## Warning: package 'quantmod' was built under R version 3.1.3 ## Loading required package: xts ## Warning: package 'xts' was built under R version 3.1.3 ## Loading required package: TTR ## Warning: package 'TTR' was built under R version 3.1.3 ## Version 0.4-0 included new data defaults. See ?getSymbols. library("coefplot") ## Warning: package 'coefplot' was built under R version 3.1.3 ## Loading required package: ggplot2 ## Warning: package 'ggplot2' was built under R version 3.1.3 library ("graphics") library("ggplot2") ##### # 0.2 Load R dataset: casestudy_1_0.Rdata ---- load("casestudy_1_0.RData") dim(casestudy1.data0.00) ## [1] 3834 14 names(casestudy1.data0.00) ## [1] "BAC" "GE" "JDSU" "XOM" "SP500" ## [6] "AEP" "DUK" "DGS3MO" "DGS1" "DGS5" ## [11] "DGS10" "DAAA" "DBAA" "DCOILWTICO" head(casestudy1.data0.00) ## BAC GE JDSU XOM SP500 AEP DUK ## 2000-01-03 15.61121 31.37894 752.00 27.45076 1455.22 14.97396 40.60250 ## 2000-01-04 14.68461 30.12378 684.50 26.92497 1399.42 15.15257 41.23363 ## 2000-01-05 14.84576 30.07149 633.00 28.39281 1402.11 15.71819 42.91664 ## 2000-01-06 16.11480 30.47353 599.00 29.86064 1403.45 15.80750 44.07370 ## 2000-01-07 15.69179 31.65351 719.75 29.77301 1441.47 16.01588 45.23077 ## 2000-01-10 15.14791 31.64044 801.50 29.35676 1457.60 15.95634 45.17818 ## DGS3MO DGS1 DGS5 DGS10 DAAA DBAA DCOILWTICO ## 2000-01-03 5.48 6.09 6.50 6.58 7.75 8.27 NA ## 2000-01-04 5.43 6.00 6.40 6.49 7.69 8.21 25.56 ## 2000-01-05 5.44 6.05 6.51 6.62 7.78 8.29 24.65 ## 2000-01-06 5.41 6.03 6.46 6.57 
7.72 8.24 24.79 ## 2000-01-07 5.38 6.00 6.42 6.52 7.69 8.22 24.79 ## 2000-01-10 5.42 6.07 6.49 6.57 7.72 8.27 24.71 tail(casestudy1.data0.00) ## BAC GE JDSU XOM SP500 AEP DUK DGS3MO DGS1 DGS5 ## 2015-03-24 15.61 25.27 13.48 84.52 2091.50 57.25 76.06 0.02 0.24 1.37 ## 2015-03-25 15.41 24.91 13.04 84.86 2061.05 55.91 74.96 0.04 0.25 1.41 ## 2015-03-26 15.42 24.80 13.03 84.32 2056.15 55.33 74.35 0.03 0.28 1.47 ## 2015-03-27 15.31 24.86 12.98 83.58 2061.02 55.90 75.00 0.04 0.27 1.42 ## 2015-03-30 15.52 25.12 13.14 85.63 2086.24 56.58 75.90 0.04 0.27 1.41 ## 2015-03-31 15.39 24.81 13.12 85.00 2067.89 56.25 76.78 0.03 0.26 1.37 ## DGS10 DAAA DBAA DCOILWTICO ## 2015-03-24 1.88 3.50 4.41 47.03 ## 2015-03-25 1.93 3.52 4.46 48.75 ## 2015-03-26 2.01 3.61 4.56 51.41 ## 2015-03-27 1.95 3.53 4.48 48.83 ## 2015-03-30 1.96 3.55 4.51 48.66 ## 2015-03-31 1.94 3.52 4.49 47.72 # 0.3 Define functions ---- # function to compute daily returns of a symbol fcn.compute.r.daily.symbol0<-function( symbol0="SP500", rdbase0=casestudy1.data0.00, scaleLog=FALSE){ indexcol.symbol0<-match(symbol0, dimnames(rdbase0)[[2]],nomatch=0) if (indexcol.symbol0==0){ return(NULL)} if(scaleLog==FALSE){ r.daily.symbol0<-zoo( x=exp(as.matrix(diff(log(rdbase0[,indexcol.symbol0]))))-1, order.by=as.Date(time(rdbase0)[-1])) dimnames(r.daily.symbol0)[[2]]<-paste("r.daily.",symbol0,sep="") return(r.daily.symbol0) } if(scaleLog==TRUE){ r.daily.symbol0<-zoo( x=(as.matrix(diff(log(rdbase0[,indexcol.symbol0])))), #order.by=time(rdbase0)[-1]) order.by=as.Date(time(rdbase0)[-1])) dimnames(r.daily.symbol0)[[2]]<-paste("dlog.daily.",symbol0,sep="") return(r.daily.symbol0) } return(NULL) } fcn.compute.r.daily.riskfree<-function(rdbase0=casestudy1.data0.00){ r.daily.riskfree<-(.01*coredata(rdbase0[-1,"DGS3MO"]) * diff(as.numeric(time(rdbase0)))/360) dimnames(r.daily.riskfree)[[2]]<-"r.daily.riskfree" r.daily.riskfree0<-zoo( x=as.matrix(r.daily.riskfree), #order.by=time(rdbase0)[-1]) order.by=as.Date(time(rdbase0)[-1])) 
return(r.daily.riskfree0) } # function to compute submatrix of kperiod returns fcn.rollksub<-function(x,kperiod=2,...){ x.0<-filter(as.matrix(coredata(x)), f=rep(1,times=kperiod), sides=1) n0=floor(length(x)/kperiod) indexsub0<-seq(kperiod,length(x),kperiod) x.00<-x.0[indexsub0] return(x.00) } # 1.1 Define list.symbol.00 list.symbol.00<-c("GE","BAC","XOM","JDSU") # 2.0 Setup CAPM for symbol0 ---- symbol0<-list.symbol.00[1] #symbol0<-list.symbol.00[2] #symbol0<-list.symbol.00[3] #symbol0<-list.symbol.00[4] # 2.1 Plot time series of symbol0, SP500 and risk-free asset ----- # symbol0, SP500, and the risk-free interest rate opar<-par() # set graphics parameter to 3 panels par(mfcol=c(3,1)) plot(casestudy1.data0.00[,symbol0],ylab="Price", main=paste(symbol0, " Stock",sep="")) plot(casestudy1.data0.00[,"SP500"], ylab="Value",main="S&P500 Index") plot(casestudy1.data0.00[,"DGS3MO"], ylab="Rate" , main="3-Month Treasury Rate (Constant Maturity)") # reset graphics parameter to 1 panel par(mfcol=c(1,1)) # 2.2 Compute the returns series ---- r.daily.symbol0<-fcn.compute.r.daily.symbol0(symbol0, casestudy1.data0.00) r.daily.SP500<-fcn.compute.r.daily.symbol0("SP500", casestudy1.data0.00) r.daily.riskfree<-fcn.compute.r.daily.riskfree(casestudy1.data0.00) # Note for returns time series of riskfree asset # holding periods vary from 1-3 days corresponding # to mid-week and over-weekend returns) # plot(r.daily.riskfree) # Compute excess returns of symbol0 and SP500 r.daily.symbol0.0 <-r.daily.symbol0 - r.daily.riskfree dimnames(r.daily.symbol0.0)[[2]]<-paste("r.daily.",symbol0,".0",sep="") r.daily.SP500.0<-r.daily.SP500 - r.daily.riskfree dimnames(r.daily.SP500.0)[[2]]<-"r.daily.SP500.0" dim(r.daily.SP500.0) ## [1] 3833 1 # 2.3 Create r.daily.data0 = merged series ---- # and display first and last sets of rows dimnames(r.daily.symbol0)[[2]] ## [1] "r.daily.GE" # Note: r.daily.symbol0 has names that use symbol0 r.daily.data0<-merge(r.daily.symbol0, r.daily.SP500, r.daily.riskfree, 
r.daily.symbol0.0, r.daily.SP500.0) head(r.daily.data0) ## r.daily.GE r.daily.SP500 r.daily.riskfree r.daily.GE.0 ## 2000-01-04 -0.0400000000 -0.038344670 0.0001508333 -0.0401508333 ## 2000-01-05 -0.0017359722 0.001922189 0.0001511111 -0.0018870833 ## 2000-01-06 0.0133694590 0.000955674 0.0001502778 0.0132191812 ## 2000-01-07 0.0387214059 0.027090400 0.0001494444 0.0385719615 ## 2000-01-10 -0.0004129203 0.011189973 0.0004516667 -0.0008645869 ## 2000-01-11 0.0016527601 -0.013062514 0.0001508333 0.0015019268 ## r.daily.SP500.0 ## 2000-01-04 -0.0384955037 ## 2000-01-05 0.0017710780 ## 2000-01-06 0.0008053962 ## 2000-01-07 0.0269409552 ## 2000-01-10 0.0107383063 ## 2000-01-11 -0.0132133472 tail(r.daily.data0) ## r.daily.GE r.daily.SP500 r.daily.riskfree r.daily.GE.0 ## 2015-03-24 -0.007852375 -0.006139421 5.555556e-07 -0.007852931 ## 2015-03-25 -0.014246142 -0.014558905 1.111111e-06 -0.014247253 ## 2015-03-26 -0.004415897 -0.002377502 8.333333e-07 -0.004416731 ## 2015-03-27 0.002419355 0.002368563 1.111111e-06 0.002418244 ## 2015-03-30 0.010458568 0.012236645 3.333333e-06 0.010455235 ## 2015-03-31 -0.012340764 -0.008795776 8.333333e-07 -0.012341598 ## r.daily.SP500.0 ## 2015-03-24 -0.006139977 ## 2015-03-25 -0.014560016 ## 2015-03-26 -0.002378335 ## 2015-03-27 0.002367452 ## 2015-03-30 0.012233312 ## 2015-03-31 -0.008796610 # 2.4 Excess Returns plot: symbol0 vs SP500 ---- par(mfcol=c(1,1)) plot(r.daily.SP500.0, r.daily.symbol0.0, main=symbol0) abline(h=0,v=0) # 3. 
Linear Regression for CAPM ---- #The linear regression model is fit using the R-function lm(): options(show.signif.stars=FALSE) # (by default the output from summary.lm() uses stars (*) # to indicate significant coefficients; # automated processing of the output encounters errors # with asterisks so the option to not show them is necessary) lmfit0<-lm(r.daily.symbol0.0 ~ r.daily.SP500.0, x=TRUE, y=TRUE) # 3.1 Apply lm(), summary.lm() ---- # The components of lmfit0: names(lmfit0) ## [1] "coefficients" "residuals" "effects" "rank" ## [5] "fitted.values" "assign" "qr" "df.residual" ## [9] "xlevels" "call" "terms" "model" ## [13] "x" "y" lmfit0.summary<-summary.lm(lmfit0) # The components of lmfit0.summary: names(lmfit0.summary) ## [1] "call" "terms" "residuals" "coefficients" ## [5] "aliased" "sigma" "df" "r.squared" ## [9] "adj.r.squared" "fstatistic" "cov.unscaled" print(lmfit0.summary) ## ## Call: ## lm(formula = r.daily.symbol0.0 ~ r.daily.SP500.0, x = TRUE, y = TRUE) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.159282 -0.005513 -0.000422 0.005185 0.144856 ## ## Coefficients: ## Estimate Std. 
Error t value Pr(>|t|) ## (Intercept) -5.225e-05 2.138e-04 -0.244 0.807 ## r.daily.SP500.0 1.175e+00 1.673e-02 70.238 <2e-16 ## ## Residual standard error: 0.01323 on 3831 degrees of freedom ## Multiple R-squared: 0.5629, Adjusted R-squared: 0.5628 ## F-statistic: 4933 on 1 and 3831 DF, p-value: < 2.2e-16 # Under CAPM, the intercept should be zero tstat.intercept<-round(lmfit0.summary$coefficients["(Intercept)", "t value"],digits=4) pvalue.intercept<-round(lmfit0.summary$coefficients["(Intercept)", "Pr(>|t|)"],digits=4) # The $t$-statistic for the intercept is: print(tstat.intercept) ## [1] -0.2444 # The p-value for testing whether the intercept is zero is print(pvalue.intercept) ## [1] 0.8069 # 3.2 R-squared plot ---- lmfit0.rsquared<-lmfit0.summary$r.squared lmfit0.rsquared.pvalue<-pf( q=lmfit0.summary$fstatistic[["value"]], df1=lmfit0.summary$fstatistic[["numdf"]], df2=lmfit0.summary$fstatistic[["dendf"]], lower.tail=FALSE) length(as.numeric(lmfit0$fitted.values)) ## [1] 3833 length(as.numeric(lmfit0$y)) ## [1] 3833 modelname0<-"CAPM" plot(x=lmfit0$fitted.values, y=lmfit0$y, xlab="Fitted Values", ylab="Actual Values", main=paste(c(modelname0, "\n Plot: Actual vs Fitted Values\n", "(R-Squared = ",round(lmfit0.rsquared,digits=4), ")"),collapse="")) abline(a=0,b=1,col='green',lwd=2) # 3.3 Coefficients Plot ---- library("coefplot") par(mfcol=c(1,1)) coefplot(lmfit0, lwdInner=4, lwdOuter=1, title=paste(modelname0, "\n Coefficients Plot",sep=""), plot=TRUE) # 3.4a Residuals Analysis: Histogram/Gaussian Fits---- std.residuals<-sort(as.numeric(lmfit0$residuals)) par(mfcol=c(1,1)) hist(std.residuals, freq=FALSE, nclass=100, main=paste(c(modelname0 , "\n Histogram of Std. 
Residuals \n", "Normal Fits: MLE(Green) and Robust(Blue)"), collapse="")) std.residuals.mean=mean(std.residuals) std.residuals.stdev.mle=sqrt(mean(std.residuals^2) - (mean(std.residuals))^2) std.residuals.stdev.robust=IQR(std.residuals)/1.3490 std.residuals.density.mle<-dnorm(std.residuals, mean=std.residuals.mean, sd=std.residuals.stdev.mle) std.residuals.density.robust<-dnorm(std.residuals, mean=std.residuals.mean, sd=std.residuals.stdev.robust) lines(std.residuals, std.residuals.density.mle, col='green',lwd=2) lines(std.residuals, std.residuals.density.robust, col='blue',lwd=2) # 3.4b Residuals Analysis: QQPlot/Gaussian Fits---- par(mfcol=c(1,1)) qqnorm(std.residuals, main=paste(modelname0, "\n Normal QQ Plot of Std. Residuals",sep="")) abline(a=0,b=sqrt(var(std.residuals)), col='green', lwd=3) abline(a=0,b=IQR(std.residuals)/1.3490, col='blue', lwd=3) title(sub=paste("Residual Sigma Fits: MLE(Green) and Robust(Blue)")) # 3.4c Residuals Analysis: MLE-Percentile Histogram ---- par(mfcol=c(1,1)) nclass0=100 # bins for the percentile histograms hist(100.*pnorm(std.residuals,mean=std.residuals.mean, sd=std.residuals.stdev.mle), nclass=nclass0, xlab="Fitted Percentile", main=paste(c(modelname0, "\nHistogram of Residuals\nMLE-Fitted Percentiles"), collapse="")) abline(h=length(std.residuals)/nclass0, col="green",lwd=2) # 3.4d Residuals Analysis: Robust-Fitted Percentile Histogram ---- par(mfcol=c(1,1)) hist(100.*pnorm(std.residuals,mean=std.residuals.mean, sd=std.residuals.stdev.robust), nclass=nclass0, xlab="Fitted Percentile", main=paste(c(modelname0, "\nHistogram of Residuals\nRobust-Fitted Percentiles"), collapse="")) abline(h=length(std.residuals)/nclass0, col="blue",lwd=2) # 3.5 Regression Diagnostics: Leverage/Hat Values ---- # For the functions hatvalues() and influence.measures() # the input argument lmfit0 must handle indexes properly y00=coredata(r.daily.symbol0.0) x00=coredata(r.daily.SP500.0) lmfit00<-lm(y00~x00, x=TRUE, y=TRUE) # This replacement code snippet eliminates warnings and
errors returned from # the function influence.measures(lmfit0) below lmfit0.hat<-zoo(x=hatvalues(lmfit00), order.by=as.Date(names(lmfit00$fitted.values))) par(mfcol=c(1,1)) plot(lmfit0.hat, ylab="Leverage/Hat Value", main=paste(c(modelname0, "\nLeverage/Hat Values"), collapse="")) # Note the cases are time points of the time series data # The financial crisis of 2008 is evident # 3.6 Regression Diagnostics Plot---- # The R function plot.lm() generates a # 2x2 display of plots for various regression diagnostic statistics: oldpar=par(no.readonly=TRUE) layout(matrix(c(1,2,3,4),2,2)) # optional 4 graphs/page plot(lmfit00) par(oldpar) #######ADDITION # Some useful R functions # anova.lm(): conduct an Analysis of Variance for the linear regression model, detailing the computation of the F-statistic for no regression structure. # #anova.lm(lmfit0) # influence.measures(): compute regression diagnostics evaluating case influence for the linear regression model; includes `hat' matrix, case-deletion statistics for the regression coefficients and for the residual standard deviation. # 3.7 Apply influence.measures() ---- # Compute influence measures (case-deletion statistics) lmfit0.inflm<-influence.measures(lmfit00) # Table counts of influential/non-influential cases # as measured by the hat/leverage statistic.
table(lmfit0.inflm$is.inf[,"hat"]) ## ## FALSE TRUE ## 3682 151 # 3.8 Plot data, fitted model, influential cases ---- # selective highlighting of influential cases plot(r.daily.SP500.0, r.daily.symbol0.0, main=paste(symbol0," vs SP500 Data \n OLS Fit (Green line)\n High-Leverage Cases (red points)\n High Cooks Dist (blue Xs)",sep=""), cex.main=0.8) abline(h=0,v=0) abline(lmfit0, col=3, lwd=3) # Plot cases with high leverage as red (col=2) "o"s index.inf.hat<-which(lmfit0.inflm$is.inf[,"hat"]==TRUE) points(r.daily.SP500.0[index.inf.hat], r.daily.symbol0.0[index.inf.hat], col=2, pch="o") # Plot cases with high cooks distance as big (cex=2) blue (col=4) "X"s index.inf.cook.d<-which(lmfit0.inflm$is.inf[,"cook.d"]==TRUE) dim(r.daily.SP500.0) ## [1] 3833 1 dim(r.daily.SP500.0) ## [1] 3833 1 points(r.daily.SP500.0[index.inf.cook.d], r.daily.symbol0.0[index.inf.cook.d], col=4, pch="X", cex=2.)
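The leverage values used in Sections 3.5-3.8 above can be checked against their definition. Below is a minimal self-contained sketch (simulated data, not the CAPM returns above) confirming that `hatvalues()` returns the diagonal of the hat matrix H = X (X'X)^{-1} X':

```r
# Sketch: hatvalues() equals the diagonal of the hat matrix H = X (X'X)^{-1} X'
# Simulated data; illustrative only, not the CAPM fit above.
set.seed(1)
x <- rnorm(50)
y <- 2 * x + rnorm(50)
fit <- lm(y ~ x)
X <- model.matrix(fit)                       # n x p design matrix (intercept + slope)
H <- X %*% solve(t(X) %*% X) %*% t(X)        # hat ("projection") matrix
all.equal(unname(diag(H)), unname(hatvalues(fit)))  # TRUE
```

The diagonal entries sum to p (here 2), which motivates the usual high-leverage flag of roughly 2p/n or 3p/n used by `influence.measures()`.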
# ---- source: common_crawl_ocw.mit.edu_9 ----
# Problem_10_9_26.r # Problem 10.26 of Rice # Hampson and Walker data on heats of sublimation of # platinum,iridium, and rhodium # To install these packages, uncomment the next two lines #install.packages("MASS") #install.packages("boot") library(MASS) ## Warning: package 'MASS' was built under R version 3.1.3 library(boot) ## Warning: package 'boot' was built under R version 3.1.3 x.platinum=scan(file="Rice 3e Datasets/ASCII Comma/Chapter 10/platinum.txt") x.iridium=scan(file="Rice 3e Datasets/ASCII Comma/Chapter 10/iridium.txt") x.rhodium=scan(file="Rice 3e Datasets/ASCII Comma/Chapter 10/rhodium.txt") # Parts (a)-(d) x=x.platinum par(mfcol=c(2,2)) hist(x,main="Platinum") stem(x) ## ## The decimal point is at the | ## ## 132 | 7 ## 134 | 134578889900224488 ## 136 | 36 ## 138 | ## 140 | 2 ## 142 | 3 ## 144 | ## 146 | 58 ## 148 | 8 boxplot(x) plot(x) x=x.rhodium par(mfcol=c(2,2)) hist(x,main="Rhodium") stem(x) ## ## The decimal point is at the | ## ## 126 | 4 ## 127 | ## 128 | ## 129 | ## 130 | ## 131 | 111112234569 ## 132 | 123456677899 ## 133 | 000333455558 ## 134 | 12 ## 135 | 7 boxplot(x) plot(x) x=x.iridium par(mfcol=c(2,2)) hist(x,main="Iridium") stem(x) ## ## The decimal point is 1 digit(s) to the right of the | ## ## 13 | 7 ## 14 | ## 14 | 5 ## 15 | 2 ## 15 | 999 ## 16 | 00000000000000001113 ## 16 | ## 17 | 4 boxplot(x) plot(x) plot(c(10:length(x)), x[10:length(x)], main="Iridium") # (e) For Platinum, observations 8,9,10, and 14,5 seem much higher than the rest. # They do not seem iid. # (e) For Rhodium, the first 2 observations are very variable (low then high). # and the observations after about number 25 seem different from those before. # These data do not seem iid either. # (e) For Iridium, the first 4 observations steadily increase and observation 8 seems much higher # than the rest. These data do not appear iid.
Also, plotting the observations from 10 on, # there seems to be time series dependence -- closer observations in the sequence have more similar # values. # (f) Measures of location # For Rhodium x=x.rhodium mean(x); mean(x,trim=.1);mean(x,trim=.2);median(x); huber(x,k=1.5)[[1]] ## [1] 132.42 ## [1] 132.4781 ## [1] 132.5292 ## [1] 132.65 ## [1] 132.4921 # All the estimates are very close # The mean is the lowest, it is affected by the lowest value # The trimmed means exclude the extreme values and are similar to the median. # For Iridium x=x.iridium mean(x); mean(x,trim=.1);mean(x,trim=.2);median(x); huber(x,k=1.5)[[1]] ## [1] 158.8148 ## [1] 159.5478 ## [1] 159.8412 ## [1] 159.8 ## [1] 159.8545 # The mean is much lower than the other estimates # The mean is the lowest, it is affected by the lowest value # The trimmed means exclude the extreme values and are similar to the median. # (g) Standard error of the sample mean and approximate 90 percent conf. interval # First for rhodium: x=x.rhodium x.stdev=sqrt(var(x)) x.mean.sterr=x.stdev/sqrt(length(x)) print(x.mean.sterr) ## [1] 0.2273369 alpha=0.10 z.upperalphahalf=qnorm(1-alpha/2) x.mean=mean(x) x.mean.ci<-c(-1,1)*z.upperalphahalf* x.mean.sterr + x.mean print(x.mean); print(x.mean.ci) ## [1] 132.42 ## [1] 132.0461 132.7939 stats1.rhodium=c(x.mean, x.mean.ci) # Second for iridium x=x.iridium x.stdev=sqrt(var(x)) x.mean.sterr=x.stdev/sqrt(length(x)) print(x.mean.sterr) ## [1] 1.197917 alpha=0.10 z.upperalphahalf=qnorm(1-alpha/2) x.mean=mean(x) x.mean.ci<-c(-1,1)*z.upperalphahalf* x.mean.sterr + x.mean print(x.mean); print(x.mean.ci) ## [1] 158.8148 ## [1] 156.8444 160.7852 stats1.iridium=c(x.mean, x.mean.ci) # parts (i), (j), (k). 
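Part (g) above builds the 90 percent interval with a normal (z) quantile. With samples of modest size a Student-t quantile is slightly wider; a quick sketch of the difference (the df value below is illustrative, not taken from the data):

```r
# z vs t critical values for a two-sided 90% confidence interval
alpha <- 0.10
qnorm(1 - alpha/2)         # 1.644854, the z quantile used in part (g)
qt(1 - alpha/2, df = 25)   # 1.708141 (illustrative df; use length(x) - 1 for the real data)
```

For the sample sizes here the two intervals differ by only a few percent of the standard error, so the z-based interval is a reasonable approximation.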
# Bootstrap confidence intervals of different location measures fcn.median<- function(x, d) { return(median(x[d])) } fcn.mean<- function(x, d) { return(mean(x[d])) } fcn.trimmedmean <- function(x, d, trim=0) { return(mean(x[d], trim/length(x))) } # NB: trim/length(x) shrinks the trim fraction toward zero, so this effectively bootstraps the untrimmed mean (hence t0 = 132.42 below, the sample mean, rather than the trimmed means from part (f)) fcn.huber<-function(x,d){ x.huber=huber(x[d],k=1.5) return(x.huber[[1]]) } # First for rhodium set.seed(1) x=x.rhodium # Bootstrap analysis of sample 20% trimmed mean: x.boot.trimmedmean.2= boot(x, fcn.trimmedmean,trim=.2, R=1000) par(mfcol=c(1,2)) plot(x.boot.trimmedmean.2) title(paste("Trimmed Mean: StDev=",as.character(round(sqrt(var(x.boot.trimmedmean.2$t)),digits=4)),sep=""), adj=1) print(x.boot.trimmedmean.2$t0) ## [1] 132.42 print(sqrt(var(x.boot.trimmedmean.2$t))) ## [,1] ## [1,] 0.2214742 stats2.trim20.rhodium<-c(x.boot.trimmedmean.2$t0,(sqrt(var(x.boot.trimmedmean.2$t)))) boot.ci(x.boot.trimmedmean.2,conf=.90, type="basic") ## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS ## Based on 1000 bootstrap replicates ## ## CALL : ## boot.ci(boot.out = x.boot.trimmedmean.2, conf = 0.9, type = "basic") ## ## Intervals : ## Level Basic ## 90% (132.1, 132.8 ) ## Calculations and Intervals on Original Scale # Bootstrap analysis of sample 10% trimmed mean: x.boot.trimmedmean.1= boot(x, fcn.trimmedmean,trim=.1, R=1000) par(mfcol=c(1,2)) plot(x.boot.trimmedmean.1) title(paste("Trimmed Mean: StDev=",as.character(round(sqrt(var(x.boot.trimmedmean.1$t)),digits=4)),sep=""), adj=1) print(x.boot.trimmedmean.1$t0) ## [1] 132.42 print(sqrt(var(x.boot.trimmedmean.1$t))) ## [,1] ## [1,] 0.2195023 stats2.trim10.rhodium<-c(x.boot.trimmedmean.1$t0,(sqrt(var(x.boot.trimmedmean.1$t)))) boot.ci(x.boot.trimmedmean.1,conf=.90, type="basic") ## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS ## Based on 1000 bootstrap replicates ## ## CALL : ## boot.ci(boot.out = x.boot.trimmedmean.1, conf = 0.9, type = "basic") ## ## Intervals : ## Level Basic ## 90% (132.1, 132.8 ) ## Calculations and Intervals on Original Scale # Bootstrap analysis of sample
median: x.boot.median= boot(x, fcn.median, R=1000) plot(x.boot.median) title(paste("Median: StDev=",as.character(round(sqrt(var(x.boot.median$t)),digits=4)),sep=""), adj=1) print(x.boot.median$t0) ## [1] 132.65 print(sqrt(var(x.boot.median$t))) ## [,1] ## [1,] 0.2090489 boot.ci(x.boot.median,conf=.90, type="basic") ## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS ## Based on 1000 bootstrap replicates ## ## CALL : ## boot.ci(boot.out = x.boot.median, conf = 0.9, type = "basic") ## ## Intervals : ## Level Basic ## 90% (132.3, 133.0 ) ## Calculations and Intervals on Original Scale stats2.median.rhodium<-c(x.boot.median$t0,(sqrt(var(x.boot.median$t)))) stats1.rhodium ## [1] 132.4200 132.0461 132.7939 # The bootstrap estimate and standard errors are given by: stats2.trim20.rhodium ## [1] 132.4200000 0.2214742 stats2.trim10.rhodium ## [1] 132.4200000 0.2195023 stats2.median.rhodium ## [1] 132.6500000 0.2090489 # For Rhodium these are all about the same # (k). The approximate 90% confidence intervals are: boot.ci(x.boot.trimmedmean.2,conf=.90, type="basic") ## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS ## Based on 1000 bootstrap replicates ## ## CALL : ## boot.ci(boot.out = x.boot.trimmedmean.2, conf = 0.9, type = "basic") ## ## Intervals : ## Level Basic ## 90% (132.1, 132.8 ) ## Calculations and Intervals on Original Scale boot.ci(x.boot.trimmedmean.1,conf=.90, type="basic") ## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS ## Based on 1000 bootstrap replicates ## ## CALL : ## boot.ci(boot.out = x.boot.trimmedmean.1, conf = 0.9, type = "basic") ## ## Intervals : ## Level Basic ## 90% (132.1, 132.8 ) ## Calculations and Intervals on Original Scale boot.ci(x.boot.median,conf=.90, type="basic") ## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS ## Based on 1000 bootstrap replicates ## ## CALL : ## boot.ci(boot.out = x.boot.median, conf = 0.9, type = "basic") ## ## Intervals : ## Level Basic ## 90% (132.3, 133.0 ) ## Calculations and Intervals on Original Scale # These are all
essentially equal for rhodium # These intervals are all about the same as that based on part (g) stats1.rhodium ## [1] 132.4200 132.0461 132.7939 # ############## # Second for iridium set.seed(1) x=x.iridium # Bootstrap analysis of sample 20% trimmed mean: x.boot.trimmedmean.2= boot(x, fcn.trimmedmean,trim=.2, R=1000) par(mfcol=c(1,2)) plot(x.boot.trimmedmean.2) title(paste("Trimmed Mean: StDev=",as.character(round(sqrt(var(x.boot.trimmedmean.2$t)),digits=4)),sep=""), adj=1) print(x.boot.trimmedmean.2$t0) ## [1] 158.8148 print(sqrt(var(x.boot.trimmedmean.2$t))) ## [,1] ## [1,] 1.181086 stats2.trim20.iridium<-c(x.boot.trimmedmean.2$t0,(sqrt(var(x.boot.trimmedmean.2$t)))) boot.ci(x.boot.trimmedmean.2,conf=.90, type="basic") ## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS ## Based on 1000 bootstrap replicates ## ## CALL : ## boot.ci(boot.out = x.boot.trimmedmean.2, conf = 0.9, type = "basic") ## ## Intervals : ## Level Basic ## 90% (157.0, 160.8 ) ## Calculations and Intervals on Original Scale # Bootstrap analysis of sample 10% trimmed mean: x.boot.trimmedmean.1= boot(x, fcn.trimmedmean,trim=.1, R=1000) par(mfcol=c(1,2)) plot(x.boot.trimmedmean.1) title(paste("Trimmed Mean: StDev=",as.character(round(sqrt(var(x.boot.trimmedmean.1$t)),digits=4)),sep=""), adj=1) print(x.boot.trimmedmean.1$t0) ## [1] 158.8148 print(sqrt(var(x.boot.trimmedmean.1$t))) ## [,1] ## [1,] 1.185398 stats2.trim10.iridium<-c(x.boot.trimmedmean.1$t0,(sqrt(var(x.boot.trimmedmean.1$t)))) boot.ci(x.boot.trimmedmean.1,conf=.90, type="basic") ## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS ## Based on 1000 bootstrap replicates ## ## CALL : ## boot.ci(boot.out = x.boot.trimmedmean.1, conf = 0.9, type = "basic") ## ## Intervals : ## Level Basic ## 90% (157.0, 160.9 ) ## Calculations and Intervals on Original Scale # Bootstrap analysis of sample median: x.boot.median= boot(x, fcn.median, R=1000) plot(x.boot.median) title(paste("Median:
StDev=",as.character(round(sqrt(var(x.boot.median$t)),digits=4)),sep=""), adj=1) print(x.boot.median$t0) ## [1] 159.8 print(sqrt(var(x.boot.median$t))) ## [,1] ## [1,] 0.2127957 boot.ci(x.boot.median,conf=.90, type="basic") ## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS ## Based on 1000 bootstrap replicates ## ## CALL : ## boot.ci(boot.out = x.boot.median, conf = 0.9, type = "basic") ## ## Intervals : ## Level Basic ## 90% (159.5, 160.1 ) ## Calculations and Intervals on Original Scale stats2.median.iridium<-c(x.boot.median$t0,(sqrt(var(x.boot.median$t)))) stats1.iridium ## [1] 158.8148 156.8444 160.7852 stats2.trim20.iridium ## [1] 158.814815 1.181086 stats2.trim10.iridium ## [1] 158.814815 1.185398 stats2.median.iridium ## [1] 159.8000000 0.2127957 ############## # The bootstrap estimate and standard errors are given by: stats2.trim20.iridium ## [1] 158.814815 1.181086 stats2.trim10.iridium ## [1] 158.814815 1.185398 stats2.median.iridium ## [1] 159.8000000 0.2127957 # For iridium the median has much lower standard error than the trimmed means # (k).
The approximate 90% confidence intervals are: boot.ci(x.boot.trimmedmean.2,conf=.90, type="basic") ## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS ## Based on 1000 bootstrap replicates ## ## CALL : ## boot.ci(boot.out = x.boot.trimmedmean.2, conf = 0.9, type = "basic") ## ## Intervals : ## Level Basic ## 90% (157.0, 160.8 ) ## Calculations and Intervals on Original Scale boot.ci(x.boot.trimmedmean.1,conf=.90, type="basic") ## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS ## Based on 1000 bootstrap replicates ## ## CALL : ## boot.ci(boot.out = x.boot.trimmedmean.1, conf = 0.9, type = "basic") ## ## Intervals : ## Level Basic ## 90% (157.0, 160.9 ) ## Calculations and Intervals on Original Scale boot.ci(x.boot.median,conf=.90, type="basic") ## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS ## Based on 1000 bootstrap replicates ## ## CALL : ## boot.ci(boot.out = x.boot.median, conf = 0.9, type = "basic") ## ## Intervals : ## Level Basic ## 90% (159.5, 160.1 ) ## Calculations and Intervals on Original Scale # The interval using the median is much smaller than that for the trimmed means # These intervals are all smaller than that based on part (g) stats1.iridium ## [1] 158.8148 156.8444 160.7852
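The boot() calls above can be demystified with a hand-rolled version. This sketch bootstraps the median of a simulated stand-in sample (the real analysis uses the rhodium/iridium data read in above), including the "basic" interval that boot.ci() reports:

```r
# Hand-rolled nonparametric bootstrap of the median (mirrors boot(x, fcn.median, R=1000))
# Stand-in sample, illustrative only -- not the rhodium data.
set.seed(1)
x <- rnorm(26, mean = 132.4, sd = 1)
boots <- replicate(1000, median(sample(x, replace = TRUE)))  # resample with replacement
se.boot <- sd(boots)                                         # bootstrap standard error
# "basic" 90% bootstrap interval: 2*theta.hat minus the upper/lower bootstrap quantiles
ci.basic <- 2 * median(x) - quantile(boots, c(0.95, 0.05))
```

The basic interval reflects the bootstrap distribution around the sample estimate, which is why it can differ slightly from the simple normal-theory interval of part (g).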
# ---- source: common_crawl_ocw.mit.edu_10 ----
# Problem_14_9_39.r # 1.0 Read in data ---- # See Problem 14.9.39 # Data from Knafl et al. (1984) tankvolume=read.table(file="Rice 3e Datasets/ASCII Comma/Chapter 14/tankvolume.txt", sep=",",stringsAsFactors = FALSE, header=TRUE) Volume=tankvolume$Volume Pressure=tankvolume$Pressure # (a). Plot pressure versus volume. The relationship appears linear. plot(Volume, Pressure) #summary(Volume) # (b). Calculate the linear regression of pressure on volume lmfit1=lm( Pressure~ Volume) summary(lmfit1) ## ## Call: ## lm(formula = Pressure ~ Volume) ## ## Residuals: ## Min 1Q Median 3Q Max ## -28.429 -15.610 2.047 10.819 36.634 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -257.301 9.430 -27.29 <2e-16 *** ## Volume 2316.469 9.243 250.61 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 19.44 on 19 degrees of freedom ## Multiple R-squared: 0.9997, Adjusted R-squared: 0.9997 ## F-statistic: 6.28e+04 on 1 and 19 DF, p-value: < 2.2e-16 abline(lmfit1,col='green') # Plot the residuals versus volume plot(Volume, lmfit1$residuals) # # The residuals plot shows a non-linear relationship with volume # # (c). Fit Pressure as a quadratic function of volume. VolumeSq=Volume*Volume lmfit2=lm(Pressure ~ Volume + VolumeSq) summary(lmfit2) ## ## Call: ## lm(formula = Pressure ~ Volume + VolumeSq) ## ## Residuals: ## Min 1Q Median 3Q Max ## -18.645 -7.189 1.944 7.371 15.528 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -204.995 9.274 -22.104 1.70e-14 *** ## Volume 2164.032 23.052 93.877 < 2e-16 *** ## VolumeSq 83.191 12.276 6.777 2.39e-06 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.'
0.1 ' ' 1 ## ## Residual standard error: 10.6 on 18 degrees of freedom ## Multiple R-squared: 0.9999, Adjusted R-squared: 0.9999 ## F-statistic: 1.057e+05 on 2 and 18 DF, p-value: < 2.2e-16 plot(Volume, lmfit2$residuals) abline(h=0,col='gray') # The fit looks much better, but the residuals at specific volume # levels tend to be all positive or all negative together. # There is variability within a given Volume level which is smaller # than variability across Volume levels. # There appear to be two sources of variability: across volume levels and within.
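A note on the quadratic fit in (c): constructing VolumeSq by hand is equivalent to using poly(..., raw = TRUE) inside the formula, which avoids the extra variable. A sketch on simulated data (not the tank data):

```r
# Hand-built quadratic term vs poly(, raw=TRUE): identical fitted values
# Simulated data, illustrative only.
set.seed(1)
v <- runif(21)
p <- -200 + 2000 * v + 80 * v^2 + rnorm(21, sd = 10)
f1 <- lm(p ~ v + I(v^2))                 # I() protects ^2 inside a formula
f2 <- lm(p ~ poly(v, 2, raw = TRUE))     # raw polynomial basis, same column space
all.equal(unname(fitted(f1)), unname(fitted(f2)))  # TRUE
```

With raw = FALSE (the default) poly() uses orthogonal polynomials: the coefficients change but the fitted values and residuals are the same.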
# ---- source: common_crawl_ocw.mit.edu_11 ----
# Problem 9.11.1 # A coin is thrown independently 10 times with P(heads)=p # To test null hypothesis p=.5 versus alternative that it is not, # reject if 0 or 10 heads observed # # Let X be the number of heads observed. # X ~ Binomial(size=10,prob=p) # # a). The significance level of the test is the # probability of rejecting the null when it is true. # (chance of getting 10 tails or getting 10 heads in a row) sig.level=2*(.5^10) print(sig.level) ## [1] 0.001953125 # b). What is the power of the test if P(heads)=.1? # The power is the chance of a binomial(size=10,prob=.1) equalling 0 or 10 # This can be computed using the binomial pmf: dbinom(0,size=10,prob=.1) + dbinom(10,size=10,prob=.1) ## [1] 0.3486784 # Or using the binomial cdf: pbinom(0,size=10,prob=.1) + (1-pbinom(9,size=10,prob=.1)) ## [1] 0.3486784 # Problem 9.11.6. Consider tossing the coin until a head comes up # Define X to be the total number of tosses. # The variable Y=(X-1) has a geometric distribution in R # with pmf function dgeom(x, prob) args(dgeom) ## function (x, prob, log = FALSE) ## NULL x.grid=seq(1,15) dgeom(0:10,prob=.5) ## [1] 0.5000000000 0.2500000000 0.1250000000 0.0625000000 0.0312500000 ## [6] 0.0156250000 0.0078125000 0.0039062500 0.0019531250 0.0009765625 ## [11] 0.0004882812 dgeom(0:10,prob=.1) ## [1] 0.10000000 0.09000000 0.08100000 0.07290000 0.06561000 0.05904900 ## [7] 0.05314410 0.04782969 0.04304672 0.03874205 0.03486784 x.grid.probs.h0=dgeom(x.grid-1, prob=.5) x.grid.probs.h1=dgeom(x.grid-1, prob=.1) x.grid.likeratio=x.grid.probs.h0/x.grid.probs.h1 plot(x.grid, x.grid.likeratio,xlab="x", ylab="LikeRatio") # a). If the prior probabilities are equal, which # outcomes favor H0? # # These are the values of x for which the likelihood ratio exceeds 1. x.grid[which(x.grid.likeratio>1)] # x=1,2, or 3 ## [1] 1 2 3 # The values which favor H1 are the complement, values greater than 3. # b).
If the prior odds P(H0)/P(H1)=10, # then the outcomes that favor H0 are those for which the # posterior odds exceed 1, which are those for which the # likelihood ratio exceeds 1/10: x.grid[which(x.grid.likeratio>.1)] ## [1] 1 2 3 4 5 6 7 # c). What is the significance level of a test that # rejects H0 if X >= 8 prob.h0=0.5 # Equals 1-Prob(accept H0 | H0) sig.level=1-sum(dgeom(c(0:(7-1)), prob=prob.h0)) print(sig.level) ## [1] 0.0078125 # This should be close to sum(dgeom(7:50,prob=prob.h0)) ## [1] 0.0078125 # d) The power of the test is the probability of rejecting # given prob=.1 prob.h1=.1 power.h1=1-sum(dgeom(c(0:(7-1)), prob=prob.h1)) print(power.h1) ## [1] 0.4782969 # This should be close to sum(dgeom(7:50,prob=prob.h1)) ## [1] 0.4736585
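The significance level in part (c) can be cross-checked by simulation: under H0, P(X >= 8) is just the chance that the first 7 tosses are all tails, 0.5^7. A quick Monte Carlo sketch:

```r
# Monte Carlo check of the level of the "reject if X >= 8" test
set.seed(1)
x.sim <- rgeom(1e5, prob = 0.5) + 1   # rgeom counts failures before the first success,
                                      # so +1 gives X = total tosses until the first head
mean(x.sim >= 8)                      # should be close to 0.5^7 = 0.0078125
```

The same simulation with prob = 0.1 approximates the power computed in part (d).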
# ---- source: common_crawl_ocw.mit.edu_12 ----
# Problem 9.11.3 # X is a binomial(size=100,prob=p) random variable. # Plot of the probability mass function of X for p=.5 x.grid=seq(0,100,1) x.grid.pdf=dbinom(x.grid,size=100,prob=.5) plot(x.grid, x.grid.pdf) # Consider the test of Null: p=0.5 vs Alternative p not =.5 # that rejects when abs(x-50)>10 # # The normal approximation for X is # Normal with mean = 100*p and Variance = 100*p*(1-p). # a). What is alpha, the level of the test? # The test rejects when # abs(x-50) > 10 # The standardized x value is # z=(x-50)/sigma.x # Under the null distribution # sigma.x=sqrt(100*p*(1-p))=sqrt(100*.5*.5)=5 # So, the test rejects when # abs(z) > 10/5 =2 # and the level of the test is this probability, 0.0455. sigma.x=sqrt(100*.5*(1-.5)) test.level=2*(1-pnorm(10/sigma.x)) # b). Graph the power as a function of p grid.p=seq(0,1,.01) # For each case of p, compute the rejection points of the # standardized x value # z.rejectlow=(40-100*p)/sqrt(p*(1-p)*100) # z.rejecthigh=(60-100*p)/sqrt(p*(1-p)*100) z.rejectlow=(40-100*grid.p)/sqrt(grid.p*(1-grid.p)*100) z.rejecthigh=(60-100*grid.p)/sqrt(grid.p*(1-grid.p)*100) grid.p.power= pnorm(z.rejectlow) + 1-pnorm(z.rejecthigh) plot(grid.p, grid.p.power,xlab="P(Heads)",ylab="Power") # Note that the alpha (significance level) of the test # is the value of the power for the null hypothesis p=.5 min(grid.p.power) ## [1] 0.04550026 grid.p[which(grid.p.power==min(grid.p.power))] ## [1] 0.5
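The level 0.0455 above comes from the normal approximation without a continuity correction; because X is discrete, the exact binomial level of the |x-50| > 10 rule is somewhat smaller. A quick check:

```r
# Exact level: reject when X <= 39 or X >= 61 under X ~ Binomial(100, 0.5)
exact.level <- pbinom(39, size = 100, prob = 0.5) +
               (1 - pbinom(60, size = 100, prob = 0.5))
print(exact.level)   # about 0.035, vs 0.0455 from the normal approximation
```

Applying a continuity correction (evaluating the normal tail areas at 39.5 and 60.5) brings the approximation much closer to the exact value.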
# ---- source: common_crawl_ocw.mit.edu_13 ----
# Rproject10_flow_occ_regressions.r # 1.0 Read in data ---- # See Problem 10.9.50, # data from: http://pems.eecs.berkeley.edu # For each of three lanes, the # flow (number of cars) # occupancy (percentage of time a car was over the loop) # # 1740 5-minute intervals # Lane 1 farthest left lane, lane 2 center, lane 3 farthest right flowocc=read.table(file="Rice 3e Datasets/ASCII Comma/Chapter 10/flow-occ.txt", sep=",",stringsAsFactors = FALSE, header=TRUE) Timestamp2 = strptime(flowocc$Timestamp, "%m/%d/%Y %H:%M:%S") #plot(Timestamp2, flowocc$Lane.1.Occ) #plot(flowocc$Lane.1.Occ) flowocc$Timestamp=Timestamp2 lmfit1=lm(Lane.3.Occ ~ Lane.1.Occ, data=flowocc) plot(flowocc$Lane.1.Occ, flowocc$Lane.3.Occ) lmfit1=lm(Lane.3.Occ ~ Lane.1.Occ, data=flowocc) abline(lmfit1,col="green") plot(flowocc$Lane.1.Occ, lmfit1$residuals) abline(h=0,col="gray") qqnorm(lmfit1$residuals) # Consider two subsets ind.subset1=(flowocc$Lane.1.Occ < .18) ind.subset2=(flowocc$Lane.1.Occ > .18) # For first subset: plot(flowocc$Lane.1.Occ[ind.subset1], flowocc$Lane.3.Occ[ind.subset1]) lmfit1.subset1=lm(Lane.3.Occ ~ Lane.1.Occ, data=flowocc, weights=1*ind.subset1) abline(lmfit1.subset1,col="green") plot(flowocc$Lane.1.Occ[ind.subset1], lmfit1.subset1$residuals[ind.subset1]) abline(h=0,col="gray") qqnorm(lmfit1.subset1$residuals[ind.subset1]) # For second subset: plot(flowocc$Lane.1.Occ[ind.subset2], flowocc$Lane.3.Occ[ind.subset2]) lmfit1.subset2=lm(Lane.3.Occ ~ Lane.1.Occ, data=flowocc, weights=1*ind.subset2) abline(lmfit1.subset2,col="green") plot(flowocc$Lane.1.Occ[ind.subset2], lmfit1.subset2$residuals[ind.subset2]) abline(h=0,col="gray") qqnorm(lmfit1.subset2$residuals[ind.subset2])
# ---- source: common_crawl_ocw.mit.edu_14 ----
# Rproject11_Tablets_TwoSampleT.r # 1.0 Read in data ---- # See Example 12.2A of Rice # # Measurements of chlorpheniramine maleate in tablets made by seven laboratories # Nominal dosage equal to 4mg # 10 measurements per laboratory # # Sources of variability # within labs # between labs # Note: read.table has trouble parsing header row if (FALSE){tablets1=read.table(file="Rice 3e Datasets/ASCII Comma/Chapter 12/tablets1.txt", sep=",",stringsAsFactors = FALSE, quote="\'", header=TRUE) } # Read in matrix and label columns tablets1=read.table(file="Rice 3e Datasets/ASCII Comma/Chapter 12/tablets1.txt", sep=",",stringsAsFactors = FALSE, skip=1, header=FALSE) dimnames(tablets1)[[2]]<-paste("Lab",c(1:7),sep="") tablets1 ## Lab1 Lab2 Lab3 Lab4 Lab5 Lab6 Lab7 ## 1 4.13 3.86 4.00 3.88 4.02 4.02 4.00 ## 2 4.07 3.85 4.02 3.88 3.95 3.86 4.02 ## 3 4.04 4.08 4.01 3.91 4.02 3.96 4.03 ## 4 4.07 4.11 4.01 3.95 3.89 3.97 4.04 ## 5 4.05 4.08 4.04 3.92 3.91 4.00 4.10 ## 6 4.04 4.01 3.99 3.97 4.01 3.82 3.81 ## 7 4.02 4.02 4.03 3.92 3.89 3.98 3.91 ## 8 4.06 4.04 3.97 3.90 3.89 3.99 3.96 ## 9 4.10 3.97 3.98 3.97 3.99 4.02 4.05 ## 10 4.04 3.95 3.98 3.90 4.00 3.93 4.06 # Replicate Figure 12.1 of Rice boxplot(tablets1) # 2.1 Define a function to implement Two-sample t test ---- fcn.TwoSampleTTest<-function(x,y, conf.level=0.95, digits0=4){ # conf.level=0.95; digits0=4 x.mean=mean(x) y.mean=mean(y) x.var=var(x) y.var=var(y) x.n=length(x) y.n=length(y) sigmasq.pooled=((x.n-1)*x.var + (y.n-1)*y.var)/(x.n + y.n -2) # Print out statistics for each sample and pooled estimate of standard deviation cat("Sample Statistics:\n") print(t(data.frame( x.mean, x.n, x.stdev=sqrt(x.var), y.mean, y.n, y.stdev=sqrt(y.var), sigma.pooled=sqrt(sigmasq.pooled) ))) cat("\n t test Computations") mean.diff=x.mean-y.mean mean.diff.sterr=sqrt(sigmasq.pooled)*sqrt( 1/x.n + 1/y.n) mean.diff.tstat=mean.diff/mean.diff.sterr tstat.df=x.n + y.n -2 mean.diff.tstat.pvalue=2*pt(-abs(mean.diff.tstat), df=tstat.df) 
print(t(data.frame( mean.diff=mean.diff, mean.diff.sterr=mean.diff.sterr, tstat=mean.diff.tstat, df=tstat.df, pvalue=mean.diff.tstat.pvalue))) # Confidence interval for difference alphahalf=(1-conf.level)/2. t.critical=qt(1-alphahalf, df=tstat.df) mean.diff.confInterval=mean.diff +c(-1,1)*mean.diff.sterr*t.critical cat(paste("\n", 100*conf.level, " Percent Confidence Interval:\n ")) cat(paste(c( "[", round(mean.diff.confInterval[1],digits=digits0), ",", round(mean.diff.confInterval[2],digits=digits0),"]"), collapse="")) # invisible(return(NULL)) } # 2.2 Apply function fcn.TwoSampleTTest ---- # Compare Lab7 to Lab4 fcn.TwoSampleTTest(tablets1$Lab7, tablets1$Lab4) ## Sample Statistics: ## [,1] ## x.mean 3.99800000 ## x.n 10.00000000 ## x.stdev 0.08482662 ## y.mean 3.92000000 ## y.n 10.00000000 ## y.stdev 0.03333333 ## sigma.pooled 0.06444636 ## ## t test Computations [,1] ## mean.diff 0.07800000 ## mean.diff.sterr 0.02882129 ## tstat 2.70633286 ## df 18.00000000 ## pvalue 0.01445587 ## ## 95 Percent Confidence Interval: ## [0.0174,0.1386] # Compare output to t.test t.test(tablets1$Lab7, tablets1$Lab4, var.equal=TRUE) ## ## Two Sample t-test ## ## data: tablets1$Lab7 and tablets1$Lab4 ## t = 2.7063, df = 18, p-value = 0.01446 ## alternative hypothesis: true difference in means is not equal to 0 ## 95 percent confidence interval: ## 0.01744872 0.13855128 ## sample estimates: ## mean of x mean of y ## 3.998 3.920 # 3.1 Conduct One-Way ANOVA ---- # Analysis of Variance with R function aov() # Create vector variables # yvec = Dependent variable yvec=as.vector(data.matrix(tablets1)) # xvec.factor.Lab = Independent variable (of factor type in R) xvec.factor.Lab=as.factor(as.vector(col(data.matrix(tablets1)))) table(xvec.factor.Lab) ## xvec.factor.Lab ## 1 2 3 4 5 6 7 ## 10 10 10 10 10 10 10 plot(as.numeric(xvec.factor.Lab), yvec) # One-way ANOVA using r function aov() tablets1.aov<-aov(yvec ~ xvec.factor.Lab) tablets1.aov.summary<-summary(tablets1.aov) # Replicate Table in 
Example 12.2.A, p. 483 of Rice print(tablets1.aov.summary) ## Df Sum Sq Mean Sq F value Pr(>F) ## xvec.factor.Lab 6 0.1247 0.020790 5.66 9.45e-05 *** ## Residuals 63 0.2314 0.003673 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 # Construct model tables tablets1.aov.model.tables<-model.tables(tablets1.aov, type="means",se=TRUE) print(tablets1.aov.model.tables) ## Tables of means ## Grand mean ## ## 3.984571 ## ## xvec.factor.Lab ## xvec.factor.Lab ## 1 2 3 4 5 6 7 ## 4.062 3.997 4.003 3.920 3.957 3.955 3.998 ## ## Standard errors for differences of means ## xvec.factor.Lab ## 0.0271 ## replic. 10 # Validate standard error for difference of means ResidualsMeanSq=0.003673 J=nrow(tablets1) sqrt(ResidualsMeanSq/J) # standard error of each mean ## [1] 0.01916507 sqrt(ResidualsMeanSq/J)*sqrt(2) # standard error of difference of two means ## [1] 0.02710351 #tablets1.aov.model.tables<-model.tables(tablets1.aov, type="effects",se=TRUE) #print(tablets1.aov.model.tables) # 3.2 Confidence intervals for pairwise differences ---- # Create confidence intervals on differences between means # Studentized range statistic # Tukey's 'Honest Significant Difference' method # Apply R function TukeyHSD(): TukeyHSD(tablets1.aov) ## Tukey multiple comparisons of means ## 95% family-wise confidence level ## ## Fit: aov(formula = yvec ~ xvec.factor.Lab) ## ## $xvec.factor.Lab ## diff lwr upr p adj ## 2-1 -0.065 -0.147546752 0.017546752 0.2165897 ## 3-1 -0.059 -0.141546752 0.023546752 0.3226101 ## 4-1 -0.142 -0.224546752 -0.059453248 0.0000396 ## 5-1 -0.105 -0.187546752 -0.022453248 0.0045796 ## 6-1 -0.107 -0.189546752 -0.024453248 0.0036211 ## 7-1 -0.064 -0.146546752 0.018546752 0.2323813 ## 3-2 0.006 -0.076546752 0.088546752 0.9999894 ## 4-2 -0.077 -0.159546752 0.005546752 0.0830664 ## 5-2 -0.040 -0.122546752 0.042546752 0.7578129 ## 6-2 -0.042 -0.124546752 0.040546752 0.7140108 ## 7-2 0.001 -0.081546752 0.083546752 1.0000000 ## 4-3 -0.083 -0.165546752 -0.000453248 
0.0478900 ## 5-3 -0.046 -0.128546752 0.036546752 0.6204148 ## 6-3 -0.048 -0.130546752 0.034546752 0.5720976 ## 7-3 -0.005 -0.087546752 0.077546752 0.9999964 ## 5-4 0.037 -0.045546752 0.119546752 0.8178759 ## 6-4 0.035 -0.047546752 0.117546752 0.8533629 ## 7-4 0.078 -0.004546752 0.160546752 0.0760155 ## 6-5 -0.002 -0.084546752 0.080546752 1.0000000 ## 7-5 0.041 -0.041546752 0.123546752 0.7362355 ## 7-6 0.043 -0.039546752 0.125546752 0.6912252 # # Compare Tukey HSD confidence interval for Lab7 vs Lab4 fcn.TwoSampleTTest(tablets1$Lab7,tablets1$Lab4) ## Sample Statistics: ## [,1] ## x.mean 3.99800000 ## x.n 10.00000000 ## x.stdev 0.08482662 ## y.mean 3.92000000 ## y.n 10.00000000 ## y.stdev 0.03333333 ## sigma.pooled 0.06444636 ## ## t test Computations [,1] ## mean.diff 0.07800000 ## mean.diff.sterr 0.02882129 ## tstat 2.70633286 ## df 18.00000000 ## pvalue 0.01445587 ## ## 95 Percent Confidence Interval: ## [0.0174,0.1386] # Note P-value from t-test is 0.0144 while TukeyHSD is 0.076 # 4. Implement One-Way Anova with lm() ---- lmfit.oneway.Lab=lm(yvec ~ xvec.factor.Lab,x=TRUE, y=TRUE) lmfit.oneway.Lab.summary<-summary(lmfit.oneway.Lab) print(lmfit.oneway.Lab.summary) ## ## Call: ## lm(formula = yvec ~ xvec.factor.Lab, x = TRUE, y = TRUE) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.1880 -0.0245 0.0060 0.0410 0.1130 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 4.06200 0.01917 211.948 < 2e-16 *** ## xvec.factor.Lab2 -0.06500 0.02710 -2.398 0.019449 * ## xvec.factor.Lab3 -0.05900 0.02710 -2.177 0.033248 * ## xvec.factor.Lab4 -0.14200 0.02710 -5.239 1.99e-06 *** ## xvec.factor.Lab5 -0.10500 0.02710 -3.874 0.000257 *** ## xvec.factor.Lab6 -0.10700 0.02710 -3.948 0.000201 *** ## xvec.factor.Lab7 -0.06400 0.02710 -2.361 0.021316 * ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## Residual standard error: 0.06061 on 63 degrees of freedom ## Multiple R-squared: 0.3503, Adjusted R-squared: 0.2884 ## F-statistic: 5.66 on 6 and 63 DF, p-value: 9.453e-05 # Compare regression output from lm() with anova output from aov() print(tablets1.aov.summary) ## Df Sum Sq Mean Sq F value Pr(>F) ## xvec.factor.Lab 6 0.1247 0.020790 5.66 9.45e-05 *** ## Residuals 63 0.2314 0.003673 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 # Residual standard error equals the square root of (Mean Sq for Residuals) print(lmfit.oneway.Lab.summary$sigma) ## [1] 0.06060541 print((lmfit.oneway.Lab.summary$sigma)^2) ## [1] 0.003673016 # Compare to Mean Sq for Residuals in tablets1.aov.summary # F-statistic, degrees of freedom, and P-values are the same
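The aov()/lm() agreement above can also be cross-checked outside R. The following is a minimal sketch in plain Python (no statistics libraries) that rebuilds the one-way ANOVA table by hand from the tablets data; the measurements are transcribed from the read-in step of the script.

```python
# Cross-check of the one-way ANOVA table for the tablets data
# (Rice Example 12.2.A), computed from first principles rather than aov().
labs = {
    "Lab1": [4.13, 4.07, 4.04, 4.07, 4.05, 4.04, 4.02, 4.06, 4.10, 4.04],
    "Lab2": [3.86, 3.85, 4.08, 4.11, 4.08, 4.01, 4.02, 4.04, 3.97, 3.95],
    "Lab3": [4.00, 4.02, 4.01, 4.01, 4.04, 3.99, 4.03, 3.97, 3.98, 3.98],
    "Lab4": [3.88, 3.88, 3.91, 3.95, 3.92, 3.97, 3.92, 3.90, 3.97, 3.90],
    "Lab5": [4.02, 3.95, 4.02, 3.89, 3.91, 4.01, 3.89, 3.89, 3.99, 4.00],
    "Lab6": [4.02, 3.86, 3.96, 3.97, 4.00, 3.82, 3.98, 3.99, 4.02, 3.93],
    "Lab7": [4.00, 4.02, 4.03, 4.04, 4.10, 3.81, 3.91, 3.96, 4.05, 4.06],
}
all_obs = [x for grp in labs.values() for x in grp]
grand_mean = sum(all_obs) / len(all_obs)

# Between-groups and within-groups sums of squares
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in labs.values())
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in labs.values())

df_between = len(labs) - 1             # 6
df_within = len(all_obs) - len(labs)   # 63
ms_between = ss_between / df_between
ms_within = ss_within / df_within      # equals (residual standard error)^2
f_stat = ms_between / ms_within
```

The sums of squares (0.1247 between, 0.2314 within), the residual mean square (0.003673), and F = 5.66 should match the aov() summary printed above.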
# Rproject11_Tablets_TwoSampleT.r # 1.0 Read in data ---- # See Example 12.2A of Rice # # Measurements of chlorpheniramine maleate in tablets made by seven laboratories # Nominal dosage equal to 4mg # 10 measurements per laboratory # # Sources of variability # within labs # between labs # Note: read.table has trouble parsing header row if (FALSE){tablets1=read.table(file="Rice 3e Datasets/ASCII Comma/Chapter 12/tablets1.txt", sep=",",stringsAsFactors = FALSE, quote="\'", header=TRUE) } # Read in matrix and label columns tablets1=read.table(file="Rice 3e Datasets/ASCII Comma/Chapter 12/tablets1.txt", sep=",",stringsAsFactors = FALSE, skip=1, header=FALSE) dimnames(tablets1)[[2]]<-paste("Lab",c(1:7),sep="") tablets1 ## Lab1 Lab2 Lab3 Lab4 Lab5 Lab6 Lab7 ## 1 4.13 3.86 4.00 3.88 4.02 4.02 4.00 ## 2 4.07 3.85 4.02 3.88 3.95 3.86 4.02 ## 3 4.04 4.08 4.01 3.91 4.02 3.96 4.03 ## 4 4.07 4.11 4.01 3.95 3.89 3.97 4.04 ## 5 4.05 4.08 4.04 3.92 3.91 4.00 4.10 ## 6 4.04 4.01 3.99 3.97 4.01 3.82 3.81 ## 7 4.02 4.02 4.03 3.92 3.89 3.98 3.91 ## 8 4.06 4.04 3.97 3.90 3.89 3.99 3.96 ## 9 4.10 3.97 3.98 3.97 3.99 4.02 4.05 ## 10 4.04 3.95 3.98 3.90 4.00 3.93 4.06 # Replicate Figure 12.1 of Rice boxplot(tablets1) # 2. 
Comparing Two Independent Samples ---- boxplot(tablets1[,c("Lab4","Lab7")]) x=tablets1$Lab4 y=tablets1$Lab7 # 2.1 Define a function to implement Two-sample t test ---- fcn.TwoSampleTTest<-function(x,y, conf.level=0.95, digits0=4){ # conf.level=0.95; digits0=4 x.mean=mean(x) y.mean=mean(y) x.var=var(x) y.var=var(y) x.n=length(x) y.n=length(y) sigmasq.pooled=((x.n-1)*x.var + (y.n-1)*y.var)/(x.n + y.n -2) # Print out statistics for each sample and pooled estimate of standard deviation cat("Sample Statistics:\n") print(t(data.frame( x.mean, x.n, x.stdev=sqrt(x.var), y.mean, y.n, y.stdev=sqrt(y.var), sigma.pooled=sqrt(sigmasq.pooled) ))) cat("\n t test Computations") mean.diff=x.mean-y.mean mean.diff.sterr=sqrt(sigmasq.pooled)*sqrt( 1/x.n + 1/y.n) mean.diff.tstat=mean.diff/mean.diff.sterr tstat.df=x.n + y.n -2 mean.diff.tstat.pvalue=2*pt(-abs(mean.diff.tstat), df=tstat.df) print(t(data.frame( mean.diff=mean.diff, mean.diff.sterr=mean.diff.sterr, tstat=mean.diff.tstat, df=tstat.df, pvalue=mean.diff.tstat.pvalue))) # Confidence interval for difference alphahalf=(1-conf.level)/2. 
t.critical=qt(1-alphahalf, df=tstat.df) mean.diff.confInterval=mean.diff +c(-1,1)*mean.diff.sterr*t.critical cat(paste("\n", 100*conf.level, " Percent Confidence Interval:\n ")) cat(paste(c( "[", round(mean.diff.confInterval[1],digits=digits0), ",", round(mean.diff.confInterval[2],digits=digits0),"]"), collapse="")) # invisible(return(NULL)) } # 2.2 Apply function fcn.TwoSampleTTest ---- # Compare Lab7 to Lab4 fcn.TwoSampleTTest(tablets1$Lab7, tablets1$Lab4) ## Sample Statistics: ## [,1] ## x.mean 3.99800000 ## x.n 10.00000000 ## x.stdev 0.08482662 ## y.mean 3.92000000 ## y.n 10.00000000 ## y.stdev 0.03333333 ## sigma.pooled 0.06444636 ## ## t test Computations [,1] ## mean.diff 0.07800000 ## mean.diff.sterr 0.02882129 ## tstat 2.70633286 ## df 18.00000000 ## pvalue 0.01445587 ## ## 95 Percent Confidence Interval: ## [0.0174,0.1386] # # 2.3 Compare to built-in r function t.test() ---- t.test(tablets1$Lab7, tablets1$Lab4, var.equal=TRUE) ## ## Two Sample t-test ## ## data: tablets1$Lab7 and tablets1$Lab4 ## t = 2.7063, df = 18, p-value = 0.01446 ## alternative hypothesis: true difference in means is not equal to 0 ## 95 percent confidence interval: ## 0.01744872 0.13855128 ## sample estimates: ## mean of x mean of y ## 3.998 3.920 # 2.4 Compare to Linear Regression ---- x=tablets1$Lab4 y=tablets1$Lab7 # Create yvec and xmat matrix yvec=c(x,y) xmat.col1=0*yvec +1 xmat.col2=c(0*x, (1 + 0*y)) xmat=cbind(xmat.col1, xmat.col2) plot(xmat.col2, yvec) # 2.4.1 Fit linear regression lmfit=lm(yvec ~ xmat.col2,x=TRUE, y=TRUE) abline(lmfit, col='green') # x matrix used by lm() is same as xmat # print(abs(xmat-lmfit$x)) print(summary(lmfit)) ## ## Call: ## lm(formula = yvec ~ xmat.col2, x = TRUE, y = TRUE) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.1880 -0.0245 0.0010 0.0440 0.1020 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 3.92000 0.02038 192.348 <2e-16 *** ## xmat.col2 0.07800 0.02882 2.706 0.0145 * ## --- ## Signif. 
codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.06445 on 18 degrees of freedom ## Multiple R-squared: 0.2892, Adjusted R-squared: 0.2497 ## F-statistic: 7.324 on 1 and 18 DF, p-value: 0.01446 # t value / P-value for xmat.col2 estimate same as two-sample t test # F-statistic in lm() equal to square of t value # (p-values of F and t are same)
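The pooled two-sample t computation in fcn.TwoSampleTTest above can be mirrored in plain Python as a cross-check. The p-value step needs a t CDF (e.g. scipy.stats.t.sf), which the standard library does not provide, so only the t statistic, its standard error, and the degrees of freedom are reproduced here.

```python
import statistics

# Pooled two-sample t statistic for the same Lab7-vs-Lab4 comparison;
# data transcribed from the tablets1 table above.
lab4 = [3.88, 3.88, 3.91, 3.95, 3.92, 3.97, 3.92, 3.90, 3.97, 3.90]
lab7 = [4.00, 4.02, 4.03, 4.04, 4.10, 3.81, 3.91, 3.96, 4.05, 4.06]

def two_sample_t(x, y):
    nx, ny = len(x), len(y)
    mx, my = statistics.mean(x), statistics.mean(y)
    # Pooled variance: weighted average of the two sample variances
    s2p = ((nx - 1) * statistics.variance(x)
           + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    se = (s2p * (1 / nx + 1 / ny)) ** 0.5
    t = (mx - my) / se
    return mx - my, se, t, nx + ny - 2

diff, se, t, df = two_sample_t(lab7, lab4)
```

The mean difference (0.078), standard error (0.0288), t statistic (2.7063), and df (18) should agree with both fcn.TwoSampleTTest and t.test(..., var.equal=TRUE).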
# RProject12_ChisquareTest.r # De Veaux, Velleman and Bock (2014) Example # 1. Tattoo/HepC Two-Way Table ---- tableA=data.frame( HepC=rbind(TattooParlor=17, TattooElsewhere=8, NoTattoo=22), NoHepC=rbind(TattooParlor=35, TattooElsewhere=53, NoTattoo=491) ) print(tableA) ## HepC NoHepC ## TattooParlor 17 35 ## TattooElsewhere 8 53 ## NoTattoo 22 491 # # 2. Conduct ChiSquare Test of Independence ---- # Custom function implementing chisqtest fcn.chisqtest<-function(tableA){ cat("\n Two-Way Table: \n") print(tableA) n.total=sum(as.vector(tableA)) cat("\n Total Counts in Table: ", n.total,"\n") # Compute marginal probabilities of # TattooStatus and of HepCStatus probs.TattooStatus=rowSums(tableA)/n.total probs.HepCStatus=colSums(tableA)/n.total cat("\n MLEs of row level probabilities\n") print(probs.TattooStatus) cat("\n MLEs of column level probabilities\n") print(probs.HepCStatus) # Compute table of fitted cell probabilities and # expected counts assuming independence of two factors tableA.fittedprobs=as.matrix(probs.TattooStatus)%*% t( as.matrix(probs.HepCStatus) ) cat("\n Fitted cell probabilities assuming independence\n") print(tableA.fittedprobs) tableA.expected=n.total* tableA.fittedprobs cat("\n Expected Counts assuming independence \n") print(tableA.expected) # Compute standardized residuals fitted table tableA.chisqresiduals=((tableA - tableA.expected))/sqrt(tableA.expected) cat("\n Table of Chi-Square Residuals by cell\n") print(tableA.chisqresiduals) # Compute table of chi-square test statistic contributions tableA.chisqterms=((tableA - tableA.expected)^2)/tableA.expected cat("\n Table of Chi-Square statistic terms by cell\n") print(tableA.chisqterms) tableA.chisqStatistic=sum(as.vector(tableA.chisqterms)) cat("\n Chi-Square Statistic: ",tableA.chisqStatistic,"\n") df.tableA=(nrow(tableA)-1)*(ncol(tableA)-1) cat("\n degrees of freedom: ", df.tableA, "\n") tableA.chisqStatistic.pvalue=1- pchisq(tableA.chisqStatistic, df=df.tableA) cat("\n P-Value : ", 
tableA.chisqStatistic.pvalue, "\n\n") } fcn.chisqtest(tableA) ## ## Two-Way Table: ## HepC NoHepC ## TattooParlor 17 35 ## TattooElsewhere 8 53 ## NoTattoo 22 491 ## ## Total Counts in Table: 626 ## ## MLEs of row level probabilities ## TattooParlor TattooElsewhere NoTattoo ## 0.08306709 0.09744409 0.81948882 ## ## MLEs of column level probabilities ## HepC NoHepC ## 0.07507987 0.92492013 ## ## Fitted cell probabilities assuming independence ## HepC NoHepC ## TattooParlor 0.006236667 0.07683043 ## TattooElsewhere 0.007316090 0.09012800 ## NoTattoo 0.061527116 0.75796170 ## ## Expected Counts assuming independence ## HepC NoHepC ## TattooParlor 3.904153 48.09585 ## TattooElsewhere 4.579872 56.42013 ## NoTattoo 38.515974 474.48403 ## ## Table of Chi-Square Residuals by cell ## HepC NoHepC ## TattooParlor 6.627811 -1.8883383 ## TattooElsewhere 1.598143 -0.4553290 ## NoTattoo -2.661238 0.7582168 ## ## Table of Chi-Square statistic terms by cell ## HepC NoHepC ## TattooParlor 43.927885 3.5658214 ## TattooElsewhere 2.554061 0.2073245 ## NoTattoo 7.082189 0.5748927 ## ## Chi-Square Statistic: 57.91217 ## ## degrees of freedom: 2 ## ## P-Value : 2.657874e-13 # 3. Apply built-in R function chisq.test() ---- print(chisq.test(tableA, correct=FALSE)) ## Warning in chisq.test(tableA, correct = FALSE): Chi-squared approximation ## may be incorrect ## ## Pearson's Chi-squared test ## ## data: tableA ## X-squared = 57.9122, df = 2, p-value = 2.658e-13 # # 4. 
Specify Two-Way Table aggregating Tattoo ---- tableB=data.frame( HepC=rbind(Tattoo=25, NoTattoo=22), NoHepC=rbind(Tattoo=88, NoTattoo=491) ) print(tableB) ## HepC NoHepC ## Tattoo 25 88 ## NoTattoo 22 491 # Apply fcn.chisqtest() and chisq.test() ---- fcn.chisqtest(tableB) ## ## Two-Way Table: ## HepC NoHepC ## Tattoo 25 88 ## NoTattoo 22 491 ## ## Total Counts in Table: 626 ## ## MLEs of row level probabilities ## Tattoo NoTattoo ## 0.1805112 0.8194888 ## ## MLEs of column level probabilities ## HepC NoHepC ## 0.07507987 0.92492013 ## ## Fitted cell probabilities assuming independence ## HepC NoHepC ## Tattoo 0.01355276 0.1669584 ## NoTattoo 0.06152712 0.7579617 ## ## Expected Counts assuming independence ## HepC NoHepC ## Tattoo 8.484026 104.516 ## NoTattoo 38.515974 474.484 ## ## Table of Chi-Square Residuals by cell ## HepC NoHepC ## Tattoo 5.670263 -1.6155220 ## NoTattoo -2.661238 0.7582168 ## ## Table of Chi-Square statistic terms by cell ## HepC NoHepC ## Tattoo 32.151885 2.6099112 ## NoTattoo 7.082189 0.5748927 ## ## Chi-Square Statistic: 42.41888 ## ## degrees of freedom: 1 ## ## P-Value : 7.367551e-11 chisq.test(tableB) ## ## Pearson's Chi-squared test with Yates' continuity correction ## ## data: tableB ## X-squared = 39.8894, df = 1, p-value = 2.688e-10 chisq.test(tableB,correct=FALSE) ## ## Pearson's Chi-squared test ## ## data: tableB ## X-squared = 42.4189, df = 1, p-value = 7.368e-11 # 5. 
Specify Recidivism Study Two-Way Table ---- tableC=data.frame( ReOffended=rbind(FGC=46, Control=77), NoReOffence=rbind(FGC=186, Control=149)) print(tableC) ## ReOffended NoReOffence ## FGC 46 186 ## Control 77 149 # Apply fcn.chisqtest() and chisq.test() ---- fcn.chisqtest(tableC) ## ## Two-Way Table: ## ReOffended NoReOffence ## FGC 46 186 ## Control 77 149 ## ## Total Counts in Table: 458 ## ## MLEs of row level probabilities ## FGC Control ## 0.5065502 0.4934498 ## ## MLEs of column level probabilities ## ReOffended NoReOffence ## 0.268559 0.731441 ## ## Fitted cell probabilities assuming independence ## ReOffended NoReOffence ## FGC 0.1360386 0.3705116 ## Control 0.1325204 0.3609294 ## ## Expected Counts assuming independence ## ReOffended NoReOffence ## FGC 62.30568 169.6943 ## Control 60.69432 165.3057 ## ## Table of Chi-Square Residuals by cell ## ReOffended NoReOffence ## FGC -2.065737 1.251714 ## Control 2.092979 -1.268221 ## ## Table of Chi-Square statistic terms by cell ## ReOffended NoReOffence ## FGC 4.267269 1.566788 ## Control 4.380560 1.608385 ## ## Chi-Square Statistic: 11.823 ## ## degrees of freedom: 1 ## ## P-Value : 0.0005850347 chisq.test(tableC, correct=FALSE) ## ## Pearson's Chi-squared test ## ## data: tableC ## X-squared = 11.823, df = 1, p-value = 0.000585
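The Pearson chi-square computation in fcn.chisqtest above reduces to a few lines of arithmetic. A minimal plain-Python version, applied to the tattoo/hepatitis-C table (tableA):

```python
# Pearson chi-square test of independence for the 3x2 tattoo/HepC table.
observed = [
    [17, 35],   # TattooParlor:    HepC, NoHepC
    [8, 53],    # TattooElsewhere
    [22, 491],  # NoTattoo
]
n = sum(sum(row) for row in observed)
row_tot = [sum(row) for row in observed]
col_tot = [sum(col) for col in zip(*observed)]

# Expected counts under independence: (row total) * (column total) / n
expected = [[r * c / n for c in col_tot] for r in row_tot]

# Chi-square statistic: sum of (O - E)^2 / E over all cells
chisq = sum(
    (observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
    for i in range(len(observed)) for j in range(len(observed[0]))
)
df = (len(observed) - 1) * (len(col_tot) - 1)
```

The statistic (57.912 on 2 df) should match both fcn.chisqtest(tableA) and chisq.test(tableA, correct=FALSE).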
# Rproject1_script1.r # # Distributions derived from the normal distribution # See Chapter 6 of Rice for distribution theory # 1. R functions for Normal distribution ---- # random number generator of normal distribution help(rnorm) ## starting httpd help server ... done # Family of functions # dnorm() density function # pnorm() cumulative distribution function # qnorm() quantile function # rnorm() random number generator # Syntax of functions -- from help(rnorm) output # dnorm(x, mean = 0, sd = 1, log = FALSE) # pnorm(q, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE) # qnorm(p, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE) # rnorm(n, mean = 0, sd = 1) # 1.1 rnorm() ---- # random normals # Generate vector x of 1000 iid N(0,1) random values x=rnorm(1000) # NOTE: Top right panel of Rstudio, Environment tab lists x # Plot the series of values in x plot(x, main="1.1 1000 Random N(0,1) Values") # Plot a histogram of the values in x hist(x, main="1.2a Histogram of 1000 Random N(0,1) Values") # # help(hist) #command to display help file on r function hist() # Replot histogram using probability density scale hist(x,probability="TRUE", main="1.2b Histogram of 1000 Random N(0,1) Values\n (density scale)") # 1.2 dnorm() #---- # density function of Normal distribution # Create vector of x values x.grid=seq(-3,3,.01) # Compute vector of corresponding density values density.grid=dnorm(x.grid,mean=0, sd=1) # use R function lines() to add to current plot lines(x.grid,density.grid,col='blue') # 1.3 pnorm() ----- # cdf (cumulative distribution function) of Normal distribution cdf.grid=pnorm(x.grid,mean=0,sd=1) plot(x.grid,cdf.grid,type="l",col="blue", main="1.3 CDF of N(0,1)") # 1.4 qnorm() ---- # compute quantiles (percentiles) qnorm(.95) ## [1] 1.644854 qnorm(.995) ## [1] 2.575829 qnorm(.9995) ## [1] 3.290527 qnorm(.05) ## [1] -1.644854 qnorm(.005) ## [1] -2.575829 qnorm(.0005) ## [1] -3.290527 # 2.
R functions for Chi-square distribution ---- # random number generator of chisquare distribution # See: help(rchisq) # Syntax of functions -- from help(qchisq) output # dchisq(x, df, ncp = 0, log = FALSE) # pchisq(q, df, ncp = 0, lower.tail = TRUE, log.p = FALSE) # qchisq(p, df, ncp = 0, lower.tail = TRUE, log.p = FALSE) # rchisq(n, df, ncp = 0) # 2.1 rchisq() ---- # random chisquare # Generate vector v of 1000 chisquare (df=3) random values v=rchisq(1000,df=3) # NOTE: Top right panel of Rstudio, Environment tab lists v # Plot the series of values in v plot(v, main="2.1 1000 Random Chisquare(df=3) Values") # Plot a histogram of the values in v hist(v, main="2.2a Histogram of 1000 Random\n Chisquare(df=3) Values", nclass=50) # # Replot histogram using probability density scale # Plot a histogram of the values in v hist(v,probability="TRUE", nclass=50, main="2.2b Histogram of 1000 Random\n Chisquare(df=3) Values\n (density scale)") # 2.2 dchisq() #---- # density function of chisquared distribution # Create vector of x values v.grid=seq(0, max(v),.01) # Compute vector of corresponding density values density.v.grid=dchisq(v.grid,df=3) # use R function lines() to add to current plot lines(v.grid,density.v.grid,col='blue') # 2.3 pchisq() ----- # cdf (cumulative distribution function) of chisquared distribution cdf.v.grid=pchisq(v.grid,df=3) plot(v.grid,cdf.v.grid,type="l",col="blue", main="2.3 CDF of Chisquare Dist (df=3)") # 2.4 qchisq() ---- # compute quantiles (percentiles) qchisq(.95,df=3) ## [1] 7.814728 qchisq(.995,df=3) ## [1] 12.83816 qchisq(.9995,df=3) ## [1] 17.73 qchisq(.05,df=3) ## [1] 0.3518463 qchisq(.005 ,df=3) ## [1] 0.07172177 qchisq(.0005,df=3) ## [1] 0.01527897 # # 3. Simulate Chisquare r.v.s from normal r.v.s ---- # 3.1 A chisquare(df=3) r.v.
is sum of 3 squared N(0,1) r.v.s v.sim=(rnorm(1000))^2 + (rnorm(1000))^2 + (rnorm(1000))^2 plot(v.sim, main="3.1 1000 Random Chisquare(df=3) Values \n (Derived from N(0,1) r.v.s)") hist(v.sim,nclass=50, main="3.2 Histogram of 1000 Random Chisquare(df=3) Values\n (Derived from N(0,1) r.v.s)") # # Replot histogram using probability density scale hist(v.sim,probability="TRUE", nclass=50, main="3.2 Histogram of 1000 Random Chisquare(df=3) Values\n (Derived from N(0,1) r.v.s)\n (density scale)") # use R function lines() to add to current plot lines(v.grid,density.v.grid,col='blue') # 4. Simulate t distribution from Normal r.v.s---- # (t with 3 degrees of freedom) # 4.1 Apply theorem of t dist ---- # = N(0,1)/sqrt(chisq/df) t.sim = x / sqrt(v.sim/3) # Plot the series of values in t.sim plot(t.sim, main="4.1 1000 Random t (df=3) Values") # Plot a histogram of the values in t.sim hist(t.sim, main="4.2a Histogram of 1000 Random t(df=3) Values",nclass=50) # # Replot histogram using probability density scale hist(t.sim,probability="TRUE", nclass=50, main="4.2b Histogram of 1000 Random t(df=3) Values\n (density scale)") ## R functions relating to t distribution # See help(rt) # # dt(x, df, ncp, log = FALSE) # pt(q, df, ncp, lower.tail = TRUE, log.p = FALSE) # qt(p, df, ncp, lower.tail = TRUE, log.p = FALSE) # rt(n, df, ncp) # Note: ncp is non-centrality parameter # corresponds to mean of normal distribution in numerator of t # # 4.2 dt() #---- # density function of t distribution # Create vector of x values t.grid=seq(min(t.sim),max(t.sim),.01) # Compute vector of corresponding density values density.t.grid=dt(t.grid,df=3) # use R function lines() to add to current plot lines(t.grid,density.t.grid,col='blue') # 4.3 pt() ----- # cdf (cumulative distribution function) of t dist with df degrees of freedom cdf.t.grid=pt(t.grid,df=3) plot(t.grid,cdf.t.grid,type="l",col="blue", main="4.3 CDF of t Dist (df=3)") # 4.4 qt() ---- # compute quantiles (percentiles) qt(.95,df=3) ##
[1] 2.353363 qt(.995,df=3) ## [1] 5.840909 qt(.9995,df=3) ## [1] 12.92398 qt(.05,df=3) ## [1] -2.353363 qt(.005 ,df=3) ## [1] -5.840909 qt(.0005,df=3) ## [1] -12.92398 #
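The qnorm()/pnorm() values above can be cross-checked without R. Python's standard library covers only the normal case (statistics.NormalDist); the chi-square and t quantiles would need a third-party package such as scipy.stats, so only the normal quantiles are verified in this sketch.

```python
from statistics import NormalDist

z = NormalDist()          # standard normal: mean 0, sd 1
q95 = z.inv_cdf(0.95)     # analogue of qnorm(.95)  -> about 1.644854
q995 = z.inv_cdf(0.995)   # analogue of qnorm(.995) -> about 2.575829
p = z.cdf(1.644854)       # analogue of pnorm(1.644854) -> about 0.95
```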
# Rproject2_script1_gamma_MOM.r # 1.0 Read in data ---- # LeCam and Neyman Precipitation Data from Rice 3e Datasets # From Rice, p. 414: # rainfall of summer storms, in inches, measured by network of rain gauges # southern Illinois for the years 1960-1964 # measurements are average amount of rainfall from each storm # file0.60<-"Rice 3e Datasets\\ASCII Comma\\Chapter 10\\illinois60.txt" file0.60data<-scan(file=file0.60,sep=",") file0.61<-"Rice 3e Datasets\\ASCII Comma\\Chapter 10\\illinois61.txt" file0.61data<-scan(file=file0.61,sep=",") file0.62<-"Rice 3e Datasets\\ASCII Comma\\Chapter 10\\illinois62.txt" file0.62data<-scan(file=file0.62,sep=",") file0.63<-"Rice 3e Datasets\\ASCII Comma\\Chapter 10\\illinois63.txt" file0.63data<-scan(file=file0.63,sep=",") file0.64<-"Rice 3e Datasets\\ASCII Comma\\Chapter 10\\illinois64.txt" file0.64data<-scan(file=file0.64,sep=",") data.precipitation<-c(file0.60data, file0.61data, file0.62data, file0.63data, file0.64data) # 2. Display the data ---- # 2.1 Histograms with different bin-counts (nclass) ---- par(mfcol=c(1,1)) hist(data.precipitation,nclass=15, main="Precipitation Data (nclass=15)\n(Le Cam and Neyman)") hist(data.precipitation,nclass=50, main="Precipitation Data (nclass=50)\n(Le Cam and Neyman)") # 2.2 Index plot ---- plot(data.precipitation, main="Precipitation Data") # 3.0 Parameter Estimation of Gamma Distribution ---- # 3.1 Method of moments estimates ---- # Compute first moment (mean) and variance (second moment minus square of first moment) data.precipitation.xbar=mean(data.precipitation) data.precipitation.var=mean(data.precipitation^2) - (mean(data.precipitation))^2 # Compute MOM estimates per theory lambdahat.mom=(data.precipitation.xbar)/data.precipitation.var alphahat.mom=(data.precipitation.xbar^2)/data.precipitation.var print(lambdahat.mom) ## [1] 1.684175 print(alphahat.mom) ## [1] 0.3779155 # 3.2 Simulation of MOM Estimates ---- # 3.2.1 Define function used in simulation ---- # Define function computing 
MOM estimates of the rate and shape parameters fcn.fitdistr.mom.gamma<-function(x){ x.mean=mean(x) x.var=mean(x^2) - (x.mean^2) rate.mom=x.mean/x.var shape.mom=(x.mean^2)/x.var result=c(shape=shape.mom, rate=rate.mom) return(result) } data.precipitation.mom.gamma<-fcn.fitdistr.mom.gamma(data.precipitation) print(data.precipitation.mom.gamma) ## shape rate ## 0.3779155 1.6841748 shape.mom=data.precipitation.mom.gamma["shape"] rate.mom=data.precipitation.mom.gamma["rate"] # 3.2.2 Set up simulation objects ---- nsimulation=1000 n0=length(data.precipitation) output.mom.simulation<-matrix(nrow=nsimulation, ncol=2) dimnames(output.mom.simulation)[[2]]<-c("shape","rate") # 3.2.3 Conduct simulation ---- for (j in c(1:nsimulation)){ #j=1 data.simulation=rgamma(n0,shape=shape.mom, rate=rate.mom) data.simulation.fitdistr<-fcn.fitdistr.mom.gamma(data.simulation) output.mom.simulation[j, 1]<-data.simulation.fitdistr["shape"] output.mom.simulation[j, 2]<-data.simulation.fitdistr["rate"] # print(j) # print(output.mom.simulation[j,]) } # 3.2.4 Display MOM estimates sampling distributions ---- # Displays for shape parameter hist(output.mom.simulation[,"shape"],nclass=50) simulation.shape.mom.Mean=mean(output.mom.simulation[,"shape"]) simulation.shape.mom.StandardError=sqrt(var(output.mom.simulation[,"shape"])) print(shape.mom) ## shape ## 0.3779155 print(simulation.shape.mom.Mean) ## [1] 0.3968352 print(simulation.shape.mom.StandardError) ## [1] 0.06454836 # Add red vertical(v) lines at simulation mean +/- 1 standard error abline(v=simulation.shape.mom.Mean + c(-1,0,1)*simulation.shape.mom.StandardError, col=c("red","red","red"),lwd=c(2,2,2)) # Add blue vertical(v) line at true shape parameter for simulation abline(v=shape.mom,col="blue",lwd=2) # Displays for rate parameter hist(output.mom.simulation[,"rate"],nclass=50) simulation.rate.mom.Mean=mean(output.mom.simulation[,"rate"]) simulation.rate.mom.StandardError=sqrt(var(output.mom.simulation[,"rate"])) print(rate.mom) ## rate ## 
1.684175 print(simulation.rate.mom.Mean) ## [1] 1.781198 print(simulation.rate.mom.StandardError) ## [1] 0.3549565 # Add red vertical(v) lines at simulation mean +/- 1 standard error abline(v=simulation.rate.mom.Mean + c(-1,0,1)*simulation.rate.mom.StandardError, col=c("red","red","red"),lwd=c(2,2,2)) # Add blue vertical(v) line at true rate parameter for simulation abline(v=rate.mom,col="blue",lwd=2)
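The MOM formulas applied above "per theory" follow in one step from the first two moments of the gamma distribution: for \(X \sim Gamma(\alpha, \lambda)\), \(E[X] = \alpha/\lambda\) and \(Var(X) = \alpha/\lambda^2\). Equating these to the sample moments \(\bar x\) and \(\hat\sigma^2 = \overline{x^2} - \bar x^2\) and solving the two equations gives \[ \hat\lambda_{MOM} = {\bar x \over \hat\sigma^2}, \qquad \hat\alpha_{MOM} = {\bar x^2 \over \hat\sigma^2}, \] which are exactly lambdahat.mom and alphahat.mom computed in section 3.1.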
# Rproject2_script2_gamma_MOMwithMLE.r # 0.0 Load library library("MASS") # 1.0 Read in data ---- # LeCam and Neyman Precipitation Data from Rice 3e Datasets # From Rice, p. 414: # rainfall of summer storms, in inches, measured by network of rain gauges # southern Illinois for the years 1960-1964 # measurements are average amount of rainfall from each storm # #C:\PseudoFdrive\MIT\mit18443\rstudio18443\Rice 3e Datasets\ASCII Comma\Chapter 10 file0.60<-"Rice 3e Datasets\\ASCII Comma\\Chapter 10\\illinois60.txt" file0.60data<-scan(file=file0.60,sep=",") file0.61<-"Rice 3e Datasets\\ASCII Comma\\Chapter 10\\illinois61.txt" file0.61data<-scan(file=file0.61,sep=",") file0.62<-"Rice 3e Datasets\\ASCII Comma\\Chapter 10\\illinois62.txt" file0.62data<-scan(file=file0.62,sep=",") file0.63<-"Rice 3e Datasets\\ASCII Comma\\Chapter 10\\illinois63.txt" file0.63data<-scan(file=file0.63,sep=",") file0.64<-"Rice 3e Datasets\\ASCII Comma\\Chapter 10\\illinois64.txt" file0.64data<-scan(file=file0.64,sep=",") data.precipitation<-c(file0.60data, file0.61data, file0.62data, file0.63data, file0.64data) # 2. 
Display the data ---- # 2.1 Histograms with different bin-counts (nclass) ---- par(mfcol=c(1,1)) hist(data.precipitation,nclass=15, main="Precipitation Data (nclass=15)\n(Le Cam and Neyman)") hist(data.precipitation,nclass=50, main="Precipitation Data (nclass=50)\n(Le Cam and Neyman)") # 2.2 Index plot ---- plot(data.precipitation, main="Precipitation Data") # 3.0 Parameter Estimation of Gamma Distribution ---- # 3.1 Method of moments estimates ---- # Compute first moment (mean) and variance (second moment minus square of first moment) data.precipitation.xbar=mean(data.precipitation) data.precipitation.var=mean(data.precipitation^2) - (mean(data.precipitation))^2 # Compute MOM estimates per theory lambdahat.mom=(data.precipitation.xbar)/data.precipitation.var alphahat.mom=(data.precipitation.xbar^2)/data.precipitation.var print(lambdahat.mom) ## [1] 1.684175 print(alphahat.mom) ## [1] 0.3779155 # 3.2 Maximum Likelihood Estimates ---- options(warn=-1) data.precipitation.fitdistr.mle<-fitdistr(data.precipitation,densfun="gamma") print(data.precipitation.fitdistr.mle) ## shape rate ## 0.44080171 1.96476293 ## (0.03376322) (0.24743990) # 3.2 Simulation of MOM Estimates ---- # 3.2.1 Define function used in simulation ---- # Define function computing MOM estimates of the rate and shape parameters fcn.fitdistr.mom.gamma<-function(x){ x.mean=mean(x) x.var=mean(x^2) - (x.mean^2) rate.mom=x.mean/x.var shape.mom=(x.mean^2)/x.var result=c(shape=shape.mom, rate=rate.mom) return(result) } data.precipitation.mom.gamma<-fcn.fitdistr.mom.gamma(data.precipitation) print(data.precipitation.mom.gamma) ## shape rate ## 0.3779155 1.6841748 shape.mom=data.precipitation.mom.gamma["shape"] rate.mom=data.precipitation.mom.gamma["rate"] data.precipitation.fitdistr.mle<-fitdistr(data.precipitation,densfun="gamma") print(data.precipitation.fitdistr.mle) ## shape rate ## 0.44080171 1.96476293 ## (0.03376322) (0.24743990) shape.mle=data.precipitation.fitdistr.mle$estimate["shape"]
rate.mle=data.precipitation.fitdistr.mle$estimate["rate"] # 3.2.2 Set up simulation objects ---- nsimulation=1000 n0=length(data.precipitation) output.mom.simulation<-matrix(nrow=nsimulation, ncol=2) dimnames(output.mom.simulation)[[2]]<-c("shape","rate") output.mle.simulation<-matrix(nrow=nsimulation, ncol=2) dimnames(output.mle.simulation)[[2]]<-c("shape","rate") options(warn=-1) # 3.2.3 Conduct simulation ---- for (j in c(1:nsimulation)){ #j=1 data.simulation=rgamma(n0,shape=shape.mle, rate=rate.mle) data.simulation.fitdistr.mom<-fcn.fitdistr.mom.gamma(data.simulation) output.mom.simulation[j, 1]<-data.simulation.fitdistr.mom["shape"] output.mom.simulation[j, 2]<-data.simulation.fitdistr.mom["rate"] # # For MLE use function fitdistr from the library MASS # turn off warnings options(warn=-1) # data.simulation.fitdistr.mle<-fitdistr(data.simulation,densfun="gamma") data.simulation.fitdistr.mle<-suppressWarnings(fitdistr(data.simulation,densfun="gamma")) output.mle.simulation[j, "shape"]<-data.simulation.fitdistr.mle$estimate["shape"] output.mle.simulation[j, "rate"]<-data.simulation.fitdistr.mle$estimate["rate"] # print(j) # print(output.mom.simulation[j,]) } # 4 Display/Compare sampling distributions of MOMs and MLEs---- par(mfcol=c(2,1)) # 4.1.1 Displays for MOM of shape parameter ---- hist(output.mom.simulation[,"shape"],nclass=50) simulation.shape.mom.Mean=mean(output.mom.simulation[,"shape"]) simulation.shape.mom.StandardError=sqrt(var(output.mom.simulation[,"shape"])) print(shape.mom) ## shape ## 0.3779155 print(simulation.shape.mom.Mean) ## [1] 0.4580915 print(simulation.shape.mom.StandardError) ## [1] 0.06982183 # Add red vertical(v) lines at simulation mean +/- 1 standard error abline(v=simulation.shape.mom.Mean + c(-1,0,1)*simulation.shape.mom.StandardError, col=c("red","red","red"),lwd=c(2,2,2)) # Add blue vertical(v) line at true shape parameter for simulation abline(v=shape.mle,col="blue",lwd=2) # 4.1.2 Displays for MLE of shape parameter 
hist(output.mle.simulation[,"shape"],nclass=50) simulation.shape.mle.Mean=mean(output.mle.simulation[,"shape"]) simulation.shape.mle.StandardError=sqrt(var(output.mle.simulation[,"shape"])) print(shape.mle) ## shape ## 0.4408017 print(simulation.shape.mle.Mean) ## [1] 0.4446394 print(simulation.shape.mle.StandardError) ## [1] 0.0337602 # Add red vertical(v) lines at simulation mean +/- 1 standard error abline(v=simulation.shape.mle.Mean + c(-1,0,1)*simulation.shape.mle.StandardError, col=c("green","green","green"),lwd=c(3,3,3)) # Add blue vertical(v) line at true shape parameter for simulation abline(v=shape.mle,col="blue",lwd=2) ########## # # # 4.2.1 Displays of MOM for Rate parameter ---- par(mfcol=c(2,1)) hist(output.mom.simulation[,"rate"],nclass=50) simulation.rate.mom.Mean=mean(output.mom.simulation[,"rate"]) simulation.rate.mom.StandardError=sqrt(var(output.mom.simulation[,"rate"])) print(rate.mom) ## rate ## 1.684175 print(simulation.rate.mom.Mean) ## [1] 2.058181 print(simulation.rate.mom.StandardError) ## [1] 0.3775248 # Add red vertical(v) lines at simulation mean +/- 1 standard error abline(v=simulation.rate.mom.Mean + c(-1,0,1)*simulation.rate.mom.StandardError, col=c("red","red","red"),lwd=c(2,2,2)) # Add blue vertical(v) line at true rate parameter for simulation abline(v=rate.mle,col="blue",lwd=2) # 4.2.2 Displays of MLE for Rate parameter ---- # hist(output.mle.simulation[,"rate"],nclass=50) simulation.rate.mle.Mean=mean(output.mle.simulation[,"rate"]) simulation.rate.mle.StandardError=sqrt(var(output.mle.simulation[,"rate"])) print(rate.mle) ## rate ## 1.964763 print(simulation.rate.mle.Mean) ## [1] 1.99772 print(simulation.rate.mle.StandardError) ## [1] 0.2488804 # Add red vertical(v) lines at simulation mean +/- 1 standard error abline(v=simulation.rate.mle.Mean + c(-1,0,1)*simulation.rate.mle.StandardError, col=c("red","red","red"),lwd=c(2,2,2)) # Add blue vertical(v) line at true rate parameter for simulation 
abline(v=rate.mle,col="blue",lwd=2) # 4.3. Boxplot comparisons of estimates #help(boxplot) par(mfcol=c(1,2)) boxplot(cbind(MOM=output.mom.simulation[,"shape"],MLE= output.mle.simulation[,"shape"]), main="Shape Parameter\nSampling Distributions\n(simulated)") boxplot(cbind(MOM=output.mom.simulation[,"rate"],MLE= output.mle.simulation[,"rate"]), main="Rate Parameter\nSampling Distributions\n(simulated)") par(mfcol=c(1,1)) plot(y=output.mom.simulation[,"shape"],x= output.mle.simulation[,"shape"], xlab="MLE",ylab="MOM", main="Shape Parameter\nSampling Distributions\n(simulated)") # Add green vertical(v) lines at MLE simulation mean +/- 1 standard error abline(v=simulation.shape.mle.Mean + c(-1,0,1)*simulation.shape.mle.StandardError, col=c("green","green","green"),lwd=c(3,3,3)) # Add blue vertical(v) line at true shape parameter for simulation abline(v=shape.mle,col="blue",lwd=2) # Add red horizontal (h) lines at MOM simulation mean +/- 1 standard error abline(h=simulation.shape.mom.Mean + c(-1,0,1)*simulation.shape.mom.StandardError, col=c("red","red","red"),lwd=c(2,2,2)) # Add blue horizontal(h) line at true shape parameter for simulation abline(h=shape.mle,col="blue",lwd=2) # par(mfcol=c(1,1)) plot(y=output.mom.simulation[,"rate"],x= output.mle.simulation[,"rate"], xlab="MLE",ylab="MOM", main="Rate Parameter\nSampling Distributions\n(simulated)") # Add green vertical(v) lines at MLE simulation mean +/- 1 standard error abline(v=simulation.rate.mle.Mean + c(-1,0,1)*simulation.rate.mle.StandardError, col=c("green","green","green"),lwd=c(3,3,3)) # Add blue vertical(v) line at true rate parameter for simulation abline(v=rate.mle,col="blue",lwd=2) # Add red horizontal (h) lines at MOM simulation mean +/- 1 standard error abline(h=simulation.rate.mom.Mean + c(-1,0,1)*simulation.rate.mom.StandardError, col=c("red","red","red"),lwd=c(2,2,2)) # Add blue horizontal(h) line at true rate parameter for simulation abline(h=rate.mle,col="blue",lwd=2) #
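The simulation loop in section 3.2.3 above can be condensed into a short Python analogue: repeatedly draw gamma samples at fixed (shape, rate) and recompute the method-of-moments estimates. The shape/rate values below mirror the MLE fit printed above; the sample size, replication count, and seed are illustrative choices, not the ones used in the R run.

```python
import random
import statistics

# Sampling distribution of the gamma MOM shape estimate, by simulation.
random.seed(18443)
shape_true, rate_true = 0.4408, 1.9648   # from the fitdistr() MLE above
n_obs, n_sim = 200, 300                  # illustrative sizes

shape_hats = []
for _ in range(n_sim):
    # random.gammavariate is parameterized by (shape, scale); scale = 1/rate
    x = [random.gammavariate(shape_true, 1.0 / rate_true) for _ in range(n_obs)]
    m = statistics.fmean(x)
    v = statistics.fmean(xi * xi for xi in x) - m * m  # MOM variance
    shape_hats.append(m * m / v)                       # alpha-hat = xbar^2 / s^2

boot_mean = statistics.fmean(shape_hats)
boot_se = statistics.stdev(shape_hats)
```

As in the R output, the simulated mean of the MOM shape estimate sits somewhat above the generating value, illustrating the small-sample bias that the histograms and ablines display.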
Section \(8.5.1\) of Rice discusses multinomial cell probabilities. Data consisting of: \[ X_1, X_2, \ldots, X_m\] are counts in cells \(1, \ldots, m\) and follow a multinomial distribution \[f(x_1, \ldots, x_m \mid p_1, \ldots, p_m ) = { n! \over \prod_{j=1}^m x_j ! } \prod_{j=1}^m p_j ^{x_j}\] where \((p_1, \ldots, p_m)\) is the vector of cell probabilities with \(\sum_{j=1}^m p_j=1,\) and \(n=\sum_{j=1}^m x_j\) is the total count. These data arise from a random sample of single-count multinomial random variables, which are a generalization of Bernoulli random variables (\(m\) distinct outcomes versus 2 distinct outcomes). Suppose \(W_1, W_2, \ldots, W_n\) are iid single-trial \(Multinomial(1, (p_1, \ldots, p_m))\) random variables: The sample space of each \(W_i\) is \({\cal W} = \{1, 2, \ldots, m\}\), a set of \(m\) distinct outcomes. \(P( W_i=k) = p_k,\) \(k=1,2, \ldots, m.\) Define \(X_k = \sum_{i=1}^n 1( W_i =k)\) (the sum of indicators of outcome \(k\)), \(k=1, \ldots, m,\) and set \(X=(X_1, \ldots, X_m).\) The log-likelihood is \[\begin{array}{lcl} \ell(p_1, \ldots, p_m) &=& log [ f(x_1, \ldots, x_m \mid p_1, \ldots, p_m )] \\ &=& log (n!) - \sum_{j=1}^m log( x_j !) + \sum_{j=1}^m x_j log(p_j) \\\end{array}\] The MLE of \((p_1, \ldots, p_m)\) maximizes \(\ell(p_1, \ldots, p_m)\) (with \(x_1, \ldots, x_m\) fixed!) The maximum is achieved where the gradient is zero. Constraint: \(\sum_{j=1}^m p_j =1\) Apply the method of Lagrange multipliers Solution: \(\hat p_j = x_j /n\), \(j=1, \ldots, m.\) Note: if any \(x_j =0,\) then \(\hat p_j=0\) is obtained as a limit. Example: Hardy-Weinberg equilibrium. Equilibrium frequency of genotypes: \(AA\), \(Aa\), and \(aa\) \(P(a)=\theta\) and \(P(A)=1-\theta\) Equilibrium probabilities of genotypes: \((1-\theta)^2\), \(2(\theta)(1-\theta)\), and \(\theta^2.\) Multinomial Data: \((X_1, X_2, X_3)\) corresponding to counts of \(AA\), \(Aa\), and \(aa\) in a sample of size \(n.\) See, e.g.
http://www.nature.com/scitable/definition/hardy-weinberg-equation-299 http://www.nature.com/scitable/definition/hardy-weinberg-equilibrium-122

Sample Data \[\begin{array}{|l|lll|l|}\hline \text{Genotype}& AA & Aa & aa & \text{Total} \\ \text{Count} &X_1 & X_2 & X_3 & n\\ \text{Frequency} & 342 & 500 & 187 & 1029 \\ \hline \end{array}\]

Estimate from the \(aa\) cell alone: \(\tilde \theta = (\tilde p_3)^{1/2} = (X_3/n)^{1/2}\) \(=\sqrt{187/1029}=.4263.\)

Solve for the MLE: \[\begin{array}{rcl} \ell(\theta ) &=& \log( f(x_1, x_2, x_3 \mid p_1(\theta), p_2(\theta), p_3(\theta)))\\ &=& \log\left( { n! \over x_1! x_2! x_3!} p_1(\theta)^{x_1} p_2(\theta)^{x_2} p_3(\theta)^{x_3} \right)\\ &=&x_1 \log((1-\theta)^2) + x_2 \log( 2 \theta ( 1-\theta)) + x_3 \log ( \theta^2) + (\text{non-}\theta \; \text{terms})\\ &=& (2 x_1 + x_2) \log(1-\theta) + (2x_3 + x_2) \log(\theta) + (\text{non-}\theta \; \text{terms})\\ &&\\ \Longrightarrow \;\; \hat \theta & =& {2 x_3 + x_2 \over 2 x_1 + 2x_2 + 2x_3} = {2 x_3 + x_2 \over 2n} = 0.4247\\ \end{array} \] Which estimate is better? Conduct a parametric bootstrap simulation!
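The Lagrange-multiplier step invoked in the multinomial MLE derivation above can be written out explicitly (a standard computation, included here for completeness). Dropping the terms that do not involve the \(p_j\):

\[\begin{array}{lcl} L(p_1, \ldots, p_m, \lambda) &=& \sum_{j=1}^m x_j \log(p_j) + \lambda \left(1 - \sum_{j=1}^m p_j\right) \\ {\partial L \over \partial p_j} = {x_j \over p_j} - \lambda = 0 &\Longrightarrow& p_j = x_j / \lambda, \;\; j=1, \ldots, m \\ \sum_{j=1}^m p_j = 1 &\Longrightarrow& \lambda = \sum_{j=1}^m x_j = n, \;\;\; \hat p_j = x_j / n \\ \end{array}\]

The constraint pins down the multiplier \(\lambda\) as the total count \(n\), giving the cell proportions as the MLE.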
Suppose \(R \sim Rayleigh(\theta);\) then the density of \(R\) is given by (Rice p. 321) \[f(r \mid \theta) = \displaystyle {r \over \theta^2} \exp \left( { -r^2 \over 2 \theta^2} \right)\] The cumulative distribution function of \(R\) is \[F_R(r) = 1 - \exp \left( {-r^2 \over 2 \theta^2 }\right) \] Note that \({d \over dr}F_R(r) = f(r \mid \theta).\)

Data consisting of \[ R_1, R_2, \ldots, R_n\] are i.i.d. \(Rayleigh(\theta)\) random variables. The likelihood function is \[\begin{array}{lcl} lik(\theta) &=& f(r_1, \ldots, r_n \mid \theta)= \prod_{i=1}^n f(r_i \mid \theta) \\ &=& \prod_{i=1}^n \displaystyle \big [ {r_i \over \theta^2} \exp \left( { -r_i^2 \over 2 \theta^2 } \right ) \big ] \\ \end{array} \] The log-likelihood function is \[\begin{array}{lcl} \ell(\theta) &=& \log [lik(\theta)] \\ &=& [\sum_1^n \log(r_i)] - 2n \log(\theta) - {1 \over \theta^2} \sum_1^n [r_i^2/2] \\ \end{array} \] The MLE solves \({d \over d \theta} \ell(\theta) =0\): \[\begin{array}{lcl} 0 &=& {d \over d \theta}(\ell(\theta)) \\ &=& - 2n \left( {1 \over \theta } \right) + 2 \left({1 \over \theta^3}\right) \sum_1^n [r_i^2/2] \\ \Longrightarrow \;\;\; \hat \theta_{MLE} &=& \left({ 1 \over n }\sum_1^n [r_i^2/2]\right)^{1/2} \\ \end{array}\]

The first moment of the \(Rayleigh(\theta)\) distribution is \[\begin{array}{lcl} \mu_1 &=& E[R \mid \theta ] = \int_0^\infty r f(r \mid \theta ) dr \\ &=& \int_0^\infty r { r \over \theta^2} \exp({-r^2 \over 2 \theta^2}) dr\\ &=& {1 \over \theta^2} \int_0^\infty r^2 \exp({-r^2 \over 2 \theta^2}) dr\\ &=& {1 \over \theta^2} \int_0^\infty v \cdot \exp({-v \over 2 \theta^2}) \big[ {dv \over 2 \sqrt{v}}\big] \;\; (\text{change of variables: } v=r^2)\\ &=& {1 \over 2 \theta^2} \int_0^\infty v^{{3 \over 2} -1} \cdot \exp({-v \over 2 \theta^2}) {dv }\\ &=& {1 \over 2 \theta^2} \Gamma({3 \over 2}) (2 \theta^2)^{3 \over 2} \\ &=& \sqrt{2}\, \theta\, \Gamma({3 \over 2} ) = \sqrt{2}\, \theta \times ({1 \over 2}) \Gamma({1 \over 2}) \\ &=& \theta \times {\sqrt{\pi} \over \sqrt{2} }\\ \end{array} \] (using the 
facts that \(\Gamma(n+1)=n\Gamma(n)\) and \(\Gamma({1 \over 2}) = \sqrt{\pi}\)).

The MOM estimate solves: \[\begin{array}{rcl} \mu_1 &=& \hat \mu_1 = {1 \over n} \sum_{i=1}^n R_i = \overline{R}\\ \theta \times {\sqrt{\pi} \over \sqrt{2}} &=& \overline{R}\\ \Longrightarrow \;\; \hat \theta_{MOM} &=& \overline{R} \times {\sqrt{2} \over \sqrt{\pi}} \\ \end{array} \]

The approximate variance of the MLE is \(Var(\hat \theta_{MLE}) \approx { 1 \over n I(\theta)}\) where \[\begin{array}{lcl}I(\theta) &=& E[-{d^2 \over d \theta^2}(\log(f(x \mid \theta)))] \\ &=& E[-{d^2 \over d \theta^2}[\log({ x \over \theta^2} \exp(-{x^2 \over 2 \theta^2}) )]]\\ &=& E[-{d \over d \theta}[- {2 \over \theta} + x^2 \theta^{-3}]]\\ &=& E[ -[ {2 \over \theta^2} - 3 x^2 \theta^{-4} ]] \\ &=& 3 \theta^{-4} E[x^2] - {2 \over \theta^2} = 3 \theta^{-4} (2 \theta^2) - {2 \over \theta^2}\\ &=& {4 \over \theta^2}\\ \end{array}\] So, \(Var(\hat \theta_{MLE}) \approx { \theta^2 \over 4 n}.\)

The MOM estimate \(\hat \theta_{MOM} = \overline{R} \times {\sqrt{2} \over \sqrt{\pi}} \) has variance: \(Var(\hat \theta_{MOM}) = ({\sqrt{2} \over \sqrt{\pi}})^2 Var(\overline{R}) = ({2 \over \pi}) {Var(R) \over n} \) where \[\begin{array}{lcl} Var(R) &=& E[R^2] - (E[R])^2 \\ &=& 2 \theta^2 - ( \sqrt{\pi \over 2}\, \theta)^2 \\ &=& \theta^2( 2 - {\pi \over 2}) \\ \end{array} \] So, \(Var(\hat \theta_{MOM}) = \theta^2 (2 - {\pi \over 2}) ({2 \over \pi})({1 \over n}) = \theta^2 ( { 4 \over \pi} -1) ({1 \over n}) \approx {\theta^2 \over n} \times 0.2732.\) This exceeds the approximate \(Var(\hat \theta_{MLE}) \approx {\theta^2 \over n} \times 0.25.\)
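The variance comparison above (\(0.25\,\theta^2/n\) for the MLE versus \(0.2732\,\theta^2/n\) for the MOM estimate) can be checked by simulation. The course scripts are in R; the sketch below is an independent check in pure-stdlib Python, sampling via the inverse CDF — solving \(F_R(r)=u\) from the distribution function above gives \(r = \theta\sqrt{-2\log(1-u)}\). The parameter values (\(\theta=2\), \(n=100\)) are illustrative choices, not from the course.

```python
import math
import random

def rayleigh_sample(theta, n, rng):
    # Inverse-CDF sampling: solving F_R(r) = u gives r = theta*sqrt(-2*log(1-u))
    return [theta * math.sqrt(-2.0 * math.log(1.0 - rng.random())) for _ in range(n)]

def theta_mle(r):
    # MLE: square root of the mean of r_i^2 / 2
    return math.sqrt(sum(v * v for v in r) / (2.0 * len(r)))

def theta_mom(r):
    # MOM: sample mean times sqrt(2/pi)
    return math.sqrt(2.0 / math.pi) * sum(r) / len(r)

def sampling_variances(theta=2.0, n=100, reps=2000, seed=1):
    # Simulate the sampling distribution of both estimators
    rng = random.Random(seed)
    mles, moms = [], []
    for _ in range(reps):
        r = rayleigh_sample(theta, n, rng)
        mles.append(theta_mle(r))
        moms.append(theta_mom(r))
    def var(v):
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    return var(mles), var(moms)
```

With \(\theta=2\) and \(n=100\) the theoretical approximations are \(Var(\hat\theta_{MLE}) \approx \theta^2/(4n) = 0.01\) and \(Var(\hat\theta_{MOM}) \approx 0.2732\,\theta^2/n \approx 0.0109\), so the simulated MLE variance should come out smaller.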
# Rproject3_script1_multinomial_simulation.r
#
# Parametric Bootstrap simulation of sampling distributions for alternate estimators
# Multinomial Counts: Hardy Weinberg Equilibrium
# 1.1 Trinomial data----
# from Example 8.5.1.A, p. 273 of Rice.
#
# x=(X1,X2,X3) # counts of multinomial cells (1,2,3)
# Erythrocyte antigen blood types of n=1029 Hong Kong population in 1937.
x<-c(342,500,187)
# 1.2 Two estimators: Multinomial MLE and Binomial(X3) MLE ----
x.n=sum(x)
x.theta.mle=(2*x[3] + x[2])/(2*sum(x))
x.theta.binomialx3=sqrt(x[3]/sum(x))
print(x.theta.mle)
## [1] 0.4246842
print(x.theta.binomialx3)
## [1] 0.4262978
# 2.0 Simulate sampling distribution of estimators ----
# n.simulations (20,000) trials (the sampling distribution) of the two estimators
# For each trial, generate a sample of size n=1029 from the multinomial distribution
# w.samplespace=c(1,2,3); the outcomes/cells of the multinomial
# prob=fcn.probs.hardyweinberg(x.theta.mle); the cell probabilities
#
# Compute estimates of the Hardy-Weinberg theta parameter
# by Multinomial MLE and Binomial(X3) MLE
# 2.1 R functions for the simulation ----
# function computing cell probabilities given Hardy-Weinberg theta parameter
fcn.probs.hardyweinberg<-function(theta0){
  probs=c((1-theta0)**2, 2*theta0*(1-theta0), theta0**2)
  return(probs)
}
# function computing cell counts given w.sample, a sample of single-outcome multinomial
# random variables (comparable to Bernoulli outcomes underlying a binomial)
fcn.w.sample.counts<-function(w.sample, w.samplespace){
  result=0*w.samplespace
  for (j.outcome in c(1:length(w.samplespace))){
    result[j.outcome]<-sum(w.sample==w.samplespace[j.outcome])
  }
  return(result)
}
# 2.2 Validate functions for single trial example ----
args(sample)
## function (x, size, replace = FALSE, prob = NULL)
## NULL
w.samplespace=c(1,2,3)
w.sample= sample(x=w.samplespace,size=x.n,replace=TRUE,
  prob=fcn.probs.hardyweinberg(x.theta.mle))
w.sample.counts<-fcn.w.sample.counts(w.sample, w.samplespace)
par(mfcol=c(1,1)) 
plot(w.sample,main=paste("Single Multinomial Trial (n=", as.character(x.n),")",sep="")) print(table(w.sample)) ## w.sample ## 1 2 3 ## 351 489 189 print(w.sample.counts) ## [1] 351 489 189 # 2.3 Conduct Simulation ---- n.simulations=20000 data.simulations<-matrix(NA,nrow=n.simulations, ncol=2) dimnames(data.simulations)[[2]]<-c("MLE","Alternate") for (j.simulation in c(1:n.simulations)){ j.w.sample= sample(x=w.samplespace,size=x.n,replace=TRUE, prob=fcn.probs.hardyweinberg(x.theta.mle)) j.w.sample.counts<-fcn.w.sample.counts(j.w.sample, w.samplespace) x.j=j.w.sample.counts # print(x.j) x.j.theta.mle=(2*x.j[3] + x.j[2])/(2*sum(x.j)) x.j.theta.binomialx3=sqrt(x.j[3]/sum(x.j)) data.simulations[j.simulation,1]=x.j.theta.mle data.simulations[j.simulation,2]=x.j.theta.binomialx3 } # 3. Simulation Results ---- # 3.1 Histogram of estimators sampling distributions ----- par(mfcol=c(2,1)) print(simulations.means<-apply(data.simulations,2,mean)) ## MLE Alternate ## 0.4248393 0.4246122 print(simulation.stdevs<-sqrt(apply(data.simulations,2,var))) ## MLE Alternate ## 0.01083837 0.01402613 hist(data.simulations[,1], main="Multinomial MLE") abline(v=simulations.means[1] +c(-1,0,1)*simulation.stdevs[1], col=c(3,3,3)) hist(data.simulations[,2], main="Binomial(X3) MLE") abline(v=simulations.means[2] +c(-1,0,1)*simulation.stdevs[2], col=c(2,2,2)) # 3.2 Histograms of estimation errors for each estimator ---- data.simulations.error=data.simulations-x.theta.mle par(mfcol=c(2,1)) hist(data.simulations.error[,1] , main="Error of Multinomial MLE") print(simulations.error.means<-apply(data.simulations.error,2,mean)) ## MLE Alternate ## 1.551749e-04 -7.192265e-05 print(simulations.error.stdevs<-sqrt(apply(data.simulations.error,2,var))) ## MLE Alternate ## 0.01083837 0.01402613 abline(v=simulations.error.means[1] +c(-1,0,1)*simulations.error.stdevs[1], col=c(3,3,3)) hist(data.simulations.error[,2], main="Error of Binomial(X3) MLE") abline(v=simulations.error.means[2] 
+c(-1,0,1)*simulations.error.stdevs[2], col=c(2,2,2)) # 3.3 Boxplot comparing sampling distributions ---- par(mfcol=c(1,1)) boxplot(data.simulations, main="Simulated Sampling Distribution\nHardy-Weinberg Parameter") # 3.4 Scatterplot of joint distribution of estimates ---- plot(data.simulations[,1], data.simulations[,2],xlab="Multinomial MLE", ylab="Binomial(X3) MLE") abline(v=simulations.means[1] +c(-1,0,1)*simulation.stdevs[1], col=c(3,3,3)) abline(h=simulations.means[2] +c(-1,0,1)*simulation.stdevs[2], col=c(2,2,2)) # 4. Bootstrap Confidence Interval ---- head(data.simulations.error) ## MLE Alternate ## [1,] -0.0004859086 -0.004124118 ## [2,] 0.0126336249 0.022746688 ## [3,] -0.0068027211 -0.008771333 ## [4,] 0.0034013605 0.002751979 ## [5,] 0.0048590865 -0.007604675 ## [6,] 0.0072886297 0.013972683 quantile(data.simulations.error[,1],probs=c(.05,.95)) ## 5% 95% ## -0.01797862 0.01797862 quantile(data.simulations.error[,2],probs=c(.05,.95)) ## 5% 95% ## -0.02303547 0.02274669 x.theta.mle ## [1] 0.4246842 x.theta.binomialx3 ## [1] 0.4262978 # 4.1 Approximate 90% confidence interval based on MLE ---- approx.CI.limits.90percent.mle=x.theta.mle +c(- quantile(data.simulations.error[,1], probs=.95), - quantile(data.simulations.error[,1],probs=.05)) approx.CI.limits.90percent.mle ## 95% 5% ## 0.4067055 0.4426628 # 4.2 Approximate 90% confidence interval based on BINOMIALX3 ---- approx.CI.limits.90percent.binomialx3=x.theta.binomialx3 +c(- quantile(data.simulations.error[,2], probs=.95), - quantile(data.simulations.error[,2],probs=.05)) approx.CI.limits.90percent.binomialx3 ## 95% 5% ## 0.4035511 0.4493333
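The confidence-interval construction in sections 4.1 and 4.2 above — the estimate minus the upper and lower quantiles of the simulated estimation errors — is the pivotal bootstrap interval. A minimal Python sketch of the same logic; the simple floor-index quantile helper is an illustrative stand-in, not a reproduction of R's `quantile` interpolation:

```python
def empirical_quantile(values, p):
    # Simple empirical quantile: the value at index floor(p * n) of the sorted sample
    s = sorted(values)
    return s[min(int(p * len(s)), len(s) - 1)]

def pivotal_bootstrap_ci(estimate, bootstrap_errors, level=0.90):
    # CI = [estimate - q_{1-alpha/2}(errors), estimate - q_{alpha/2}(errors)]
    # where errors are (bootstrap estimate - original estimate)
    alpha = 1.0 - level
    upper_err = empirical_quantile(bootstrap_errors, 1.0 - alpha / 2.0)
    lower_err = empirical_quantile(bootstrap_errors, alpha / 2.0)
    return (estimate - upper_err, estimate - lower_err)
```

For symmetric errors the interval is symmetric about the estimate: with errors on the grid \(-1, -0.99, \ldots, 1\) and estimate 10, the 90% interval is (9.1, 10.9).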
# Rproject3_script1_chromatin.r # 1.0 Read in data ---- # See problem 8.10.45, p 321 of Rice. # # Three experiments where 100-200 measurements of 2-dimensional distances were determined. # file0.short<-"Rice 3e Datasets\\ASCII Comma\\Chapter 8\\chromatin\\data32.txt" data.short<-scan(file=file0.short,sep=",") file0.medium<-"Rice 3e Datasets\\ASCII Comma\\Chapter 8\\chromatin\\data05.txt" data.medium<-scan(file=file0.medium,sep=",") file0.long<-"Rice 3e Datasets\\ASCII Comma\\Chapter 8\\chromatin\\data33.txt" data.long<-scan(file=file0.long,sep=",") # 2. Display the data ---- # 2.1 Histograms with different bin-counts (nclass) ---- nclass0=15 par(mfcol=c(3,1)) hist(data.short,nclass=nclass0, main=paste(c("Chromatin/short Data \n (nclass=",as.character(nclass0),")"), collapse="")) hist(data.medium,nclass=nclass0, main=paste(c("Chromatin/medium Data \n (nclass=",as.character(nclass0),")"), collapse="")) hist(data.long,nclass=nclass0, main=paste(c("Chromatin/long Data \n (nclass=",as.character(nclass0),")"), collapse="")) # 2.2 Index plots ---- par(mfcol=c(3,1)) plot(data.short, main="Chromatin/short Data") plot(data.medium, main="Chromatin/medium Data") plot(data.long, main="Chromatin/long Data") # 3.0 Functions computing MLE and MOM estimates ---- fcn.rayleigh.mle<-function(x){ theta.mle=sqrt(mean(.5*x*x)) return(theta.mle) } fcn.rayleigh.mom<-function(x){ theta.mom=sqrt(2./pi)*mean(x) return(theta.mom) } # 4. 
Analyse Chromatin/Short data # 4.1 Compute MLE and MOM Estimates ---- fcn.rayleigh.mle(data.short) ## [1] 1.123778 fcn.rayleigh.mom(data.short) ## [1] 1.170563 # 4.2 Plot density function of Rayleigh ----- data0=data.short main.line1="Chromatin/short Data" x.grid<-seq(0,max(data0)*1.1,max(data0)/1000) theta0.mle=fcn.rayleigh.mle(data0) theta0.mom=fcn.rayleigh.mom(data0) x.grid.density.mle<- (x.grid/(theta0.mle^2))* exp( - (x.grid^2)/(2*theta0.mle^2)) x.grid.density.mom<- (x.grid/(theta0.mom^2))* exp( - (x.grid^2)/(2*theta0.mom^2)) par(mfcol=c(1,1)) hist(data0, nclass=15, xlim=c(0, 1.1*max(data0)), probability=TRUE, main=paste(main.line1,"\n MLE (Green) and MOM (Red)",sep="")) lines(x.grid, x.grid.density.mle, type="l",col="green") lines(x.grid, x.grid.density.mom, type="l",col="red")
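The density overlay at the end of the script can be sanity-checked analytically: the Rayleigh density peaks at its mode \(r = \theta\), where \(f(\theta \mid \theta) = e^{-1/2}/\theta\). A small Python check of the density formula used for `x.grid.density.mle` (the \(\theta\) value below is the MLE the script reports for the chromatin/short data):

```python
import math

def rayleigh_density(r, theta):
    # f(r | theta) = (r / theta^2) * exp(-r^2 / (2 theta^2))
    return (r / theta**2) * math.exp(-r * r / (2.0 * theta**2))
```

This gives a quick way to eyeball whether the fitted curve drawn over the histogram is plotted correctly: it should rise to height \(e^{-1/2}/\hat\theta\) at \(r = \hat\theta\) and fall on either side.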
# Project4_Bayesian_HardyWeinberg.r # Multinomial Counts: Hardy Weinberg Equilibrium # 1.0 Trinomial data from Example 8.5.1.A, p. 273 of Rice. # # x=(X1,X2,X3) # counts of multinomial cells (1,2,3) # Erythrocyte antigen blood types of n=1029 Hong Kong population in 1937. x<-c(342,500,187) # Two estimators: Multinomial MLE and Binomial(X3) MLE x.n=sum(x) x.theta.mle=(2*x[3] + x[2])/(2*sum(x)) x.theta.binomialx3=sqrt(x[3]/sum(x)) print(x.theta.mle) ## [1] 0.4246842 print(x.theta.binomialx3) ## [1] 0.4262978 # 2.1 R functions # function computing cell probabilities given Hardy-Weinberg theta parameter fcn.probs.hardyweinberg<-function(theta0){ probs=c((1-theta0)**2, 2*theta0*(1-theta0), theta0**2) return(probs) } # 3. Bayesian inference with uniform prior dgrid=.001 theta.grid=seq(0,1,dgrid) # Compute likelihood of x # First, compute matrix with multinomial probabilities (3 columns) for each theta (row) probs.grid=t(apply(as.matrix(theta.grid),1, fcn.probs.hardyweinberg)) plot(theta.grid, probs.grid[,1],type="l",col='red', xlab="theta", ylab="Cell Probability", main="Hardy Weinberg Cell Probabilities \n 1 (Red), 2 (Green), 3 (Blue)") lines(theta.grid, probs.grid[,2],type="l",col='green') lines(theta.grid, probs.grid[,3],type="l",col='blue') # Compute likelihood function given x at theta.grid values # Issue: scaling of likelihood; use log scale and normalize loglike.grid<-( log(probs.grid) %*% as.matrix(x)) plot(theta.grid, loglike.grid) loglike.grid.norm0<-loglike.grid - max(loglike.grid) plot(theta.grid, loglike.grid.norm0) # # Convert from Log scale to Original Scale of Likelihood like.grid.norm0 =exp(loglike.grid.norm0) like.grid.norm0<-(1/dgrid)*like.grid.norm0/sum(like.grid.norm0) plot(theta.grid,like.grid.norm0, type="l") length(like.grid.norm0) ## [1] 1001 length(theta.grid) ## [1] 1001 # For uniform prior, the posterior is the normalized likelihiood plot(theta.grid, like.grid.norm0[,1],type="l", xlab="theta", ylab="Density", xlim=c(.3,.6), main="Figure 8.10 
Posterior Density\nHardy-Weinberg Model theta" )
# Compute 95% posterior interval (credible interval) numerically
index.quantile.025=which(cumsum(like.grid.norm0*dgrid) >.025)[1]
posterior.llimit=theta.grid[index.quantile.025]
index.quantile.975=which(cumsum(like.grid.norm0*dgrid) >.975)[1]
posterior.ulimit=theta.grid[index.quantile.975]
abline(v=c(posterior.llimit, posterior.ulimit), col='blue')
abline(v=x.theta.mle, col='green')
print(x.theta.mle)
## [1] 0.4246842
print(c(posterior.llimit, posterior.ulimit))
## [1] 0.403 0.446
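The grid construction in the script — normalize the multinomial likelihood over a \(\theta\) grid, then read quantiles off the cumulative sum — is straightforward to reproduce. A pure-Python sketch with the same data and grid spacing (an independent check, not the course code):

```python
import math

def hw_posterior_interval(counts=(342, 500, 187), dgrid=0.001, level=0.95):
    # Uniform-prior posterior for the Hardy-Weinberg theta via a grid:
    # normalize the multinomial likelihood, then invert the cumulative sum.
    x1, x2, x3 = counts
    # interior grid only, to avoid log(0) at theta = 0 or 1
    grid = [i * dgrid for i in range(1, int(1 / dgrid))]
    loglike = [x1 * 2 * math.log(1 - t)
               + x2 * math.log(2 * t * (1 - t))
               + x3 * 2 * math.log(t) for t in grid]
    m = max(loglike)                      # stabilize before exponentiating
    like = [math.exp(v - m) for v in loglike]
    total = sum(like)
    post = [v / total for v in like]      # posterior mass on the grid
    alpha = 1.0 - level
    cum, lo, hi = 0.0, None, None
    for t, p in zip(grid, post):
        cum += p
        if lo is None and cum > alpha / 2:
            lo = t
        if hi is None and cum > 1 - alpha / 2:
            hi = t
    return lo, hi
```

With the blood-type counts this should land close to the (0.403, 0.446) interval printed above.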
# Rproject4_Bayesian_Poisson.r # Example 8.4.A: Counts of asbestos fibers on filters # Steel et al. 1980 # 1. Data ---- x=c(31,29,19,18,31,28, 34,27,34,30,16,18, 26,27,27,18,24,22, 28,24,21,17,24) # 2. Parameter Estimation of Poisson---- # Suppose x is a sample of size n=23 from a Poisson(lambda) distribution # # 2.1 Estimate of lambda (MOM and MLE are the same) lambda.hat=mean(x) print(lambda.hat) ## [1] 24.91304 # par(mfcol=c(1,1)) # 2.2 Plot histograms of sample data # Vary argument nclass= for number of bins hist(x, xlim=c(0,1.5*max(x)), probability=TRUE) hist(x,nclass=15, xlim=c(0,1.5*max(x)), probability=TRUE) hist(x,nclass=10, xlim=c(0,1.5*max(x)), probability=TRUE) # Add plot of pmf function for fitted Poisson distribution x.grid=seq(0,1.5*max(x),1) x.probs=dpois(x.grid, lambda=lambda.hat) lines(x.grid,x.probs, type="h", col='blue',lwd=4) # lambda.hat.sterror=sqrt(lambda.hat/length(x)) print(lambda.hat.sterror) ## [1] 1.040757 # 3. Approximate Confidence Interval for lambda ---- # # Confidence Level: 90% lambda.CI.Limits=lambda.hat + c(-1,1)*qnorm(.95)*lambda.hat.sterror print(lambda.CI.Limits) ## [1] 23.20115 26.62494 # Note: qnorm(.95) ## [1] 1.644854 qnorm(.05) #= -qnorm(.95) ## [1] -1.644854 # 4. 
Bayesian Approach ---- # # 4.1 Prior distribution for lambda ---- # Gamma(shape=alpha0, rate=nu0) distribution with # prior mean = 15 # prior variance = 5*5 # # For a Gamma distribution # mean = mu = alpha0/nu0 # variance =sigsq= alpha0/(nu0*nu0) # # Solving for Gamma parameters # nu0 = mu/sigsq nu0= 15/25 # alpha0=mu*nu0 alpha0=nu0*15 lambda.grid=seq(0.1,30,.1) lambda.grid.priorpmf=dgamma(lambda.grid, shape=alpha0, rate=nu0) priorpmf.grid =dgamma(lambda.grid, shape=alpha0, rate=nu0) # 4.2 Plot prior, likelihood, and posterior ---- #par(mfcol=c(3,1)) # Plot Prior Density plot(lambda.grid, priorpmf.grid, col='red', type="l", main="Prior Density") # Plot Likelihood # Sum(x) is Poisson(lambda.sum) where lambda.sum=n*lambda likelihood.grid=dpois(sum(x),lambda=length(x)*lambda.grid) plot(lambda.grid,likelihood.grid, main="Likelihood of lambda", col='green',type="l") # Plot Posterior # Posterior parameters alpha1= alpha0 + sum(x) nu1 = nu0 + length(x) posteriorpmf.grid=dgamma(lambda.grid,shape=alpha1, rate=nu1) plot(lambda.grid, posteriorpmf.grid, main="Posterior Density", col='blue', type="l") # Plot of all densities together ---- par(mfcol=c(1,1)) plot(lambda.grid, posteriorpmf.grid,ylab="density", main="Densities \n Gamma Prior/Posterior (red/blue)\n", type="n") lines(lambda.grid, priorpmf.grid, col='red') lines(lambda.grid, posteriorpmf.grid, col='blue') # Add case of Uniform prior ---- lines(lambda.grid, (1+0*priorpmf.grid)*.1/30, col='orange') posteriorpmf.uniformprior.grid=dgamma(lambda.grid,shape=1+ sum(x), rate= length(x)) lines(lambda.grid, posteriorpmf.uniformprior.grid, col='green') title(main="\n\nUniform Prior/Posterior (orange/green)") # 5. 
Point and Interval Estimates ---- # 5.1 First posterior distribution # Parameters/attributes posterior.mean=alpha1/nu1 posterior.stdev=sqrt(alpha1/(nu1*nu1)) posterior.mode= (alpha1-1)/nu1 # 90% posterior predictive interval posterior.llimit=qgamma(.05, shape=alpha1, rate=nu1) posterior.ulimit=qgamma(.95, shape=alpha1, rate=nu1) # Summary table bayes1.attributes=data.frame( mode=posterior.mode, mean=posterior.mean, stdev=posterior.stdev, ulimit=posterior.ulimit, llimit=posterior.llimit) # 5.2 Second posterior distribution # Parameters/attributes # Reset posterior parameters corresponding to uniform prior alpha1=sum(x) nu1=length(x) posterior.mean=alpha1/nu1 posterior.stdev=sqrt(alpha1/(nu1*nu1)) posterior.mode= (alpha1-1)/nu1 # 90% posterior predictive interval posterior.llimit=qgamma(.05, shape=alpha1, rate=nu1) posterior.ulimit=qgamma(.95, shape=alpha1, rate=nu1) # Summary table bayes2.attributes=data.frame( mode=posterior.mode, mean=posterior.mean, stdev=posterior.stdev, ulimit=posterior.ulimit, llimit=posterior.llimit) # 5.3 MLE Estimates/Confidence Interval mle.attributes=data.frame( mode=lambda.hat, mean=NA, stdev=sqrt(lambda.hat/length(x)), ulimit=lambda.hat + qnorm(.95)*sqrt(lambda.hat/length(x)), llimit=lambda.hat + qnorm(.05)*sqrt(lambda.hat/length(x))) estimates.table=data.frame(cbind(Bayes1=t(bayes1.attributes), Bayes2=t(bayes2.attributes), MLE=t(mle.attributes)) ) dimnames(estimates.table)[[2]]<-c("Bayes 1", "Bayes 2", "Maximum Likelihood") print(estimates.table) ## Bayes 1 Bayes 2 Maximum Likelihood ## mode 24.618644 24.869565 24.913043 ## mean 24.661017 24.913043 NA ## stdev 1.022232 1.040757 1.040757 ## ulimit 26.366182 26.649296 26.624937 ## llimit 23.004027 23.226222 23.201150
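The posterior parameters used throughout this script come from Gamma-Poisson conjugacy: a \(Gamma(\alpha_0, \nu_0)\) prior combined with \(n\) Poisson counts summing to \(\sum x_i\) gives a \(Gamma(\alpha_0 + \sum x_i,\; \nu_0 + n)\) posterior. A quick check of the "Bayes 1" column of the table:

```python
data = [31, 29, 19, 18, 31, 28, 34, 27, 34, 30, 16, 18,
        26, 27, 27, 18, 24, 22, 28, 24, 21, 17, 24]

def gamma_poisson_update(alpha0, nu0, counts):
    # Conjugate update: shape adds the total count, rate adds the sample size
    return alpha0 + sum(counts), nu0 + len(counts)

# Prior with mean 15 and variance 25: nu0 = mean/var, alpha0 = mean*nu0
nu0 = 15 / 25
alpha0 = 15 * nu0
alpha1, nu1 = gamma_poisson_update(alpha0, nu0, data)
posterior_mean = alpha1 / nu1
posterior_mode = (alpha1 - 1) / nu1
```

This reproduces the posterior mean 24.661 and mode 24.619 reported for Bayes 1 (\(\alpha_1 = 582\), \(\nu_1 = 23.6\)).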
# RProject5_HypothesisTesting.r # 1.0 Two coins: coin0 and coin1 # # P(Head | coin0)=0.5 and P(Head | coin1) =0.7 prob.coin0=0.5 prob.coin1=0.7 #help(sample) # 2.0 Choose a coin at random, count number of heads in 10 tosses prob.coin.random=sample(c(prob.coin0,prob.coin1), size=1) x=rbinom(n=1, size=10, prob=prob.coin.random) x ## [1] 6 # 3. Likelihood Table # For each outcome of X (column 1) # report Likelihood of coin0 (column2) and of coin1 (column 3) # X= number of heads on 10 tosses of coin list.x=0:10 pdf.coin0=dbinom(list.x, size=10,prob=prob.coin0) pdf.coin1=dbinom(list.x, size=10,prob=prob.coin1) likelihood.table=cbind( x=list.x, like.coin0=pdf.coin0, like.coin1=pdf.coin1, likeratio= pdf.coin0/pdf.coin1) likelihood.table ## x like.coin0 like.coin1 likeratio ## [1,] 0 0.0009765625 0.0000059049 165.38171688 ## [2,] 1 0.0097656250 0.0001377810 70.87787866 ## [3,] 2 0.0439453125 0.0014467005 30.37623371 ## [4,] 3 0.1171875000 0.0090016920 13.01838588 ## [5,] 4 0.2050781250 0.0367569090 5.57930823 ## [6,] 5 0.2460937500 0.1029193452 2.39113210 ## [7,] 6 0.2050781250 0.2001209490 1.02477090 ## [8,] 7 0.1171875000 0.2668279320 0.43918753 ## [9,] 8 0.0439453125 0.2334744405 0.18822323 ## [10,] 9 0.0097656250 0.1210608210 0.08066710 ## [11,] 10 0.0009765625 0.0282475249 0.03457161 # Plot pmf function of X under H0 and H1 par(mfcol=c(1,1)) plot(list.x, pdf.coin0, xlab="x", ylab="p(x | theta)", ylim=c(0, max(c(pdf.coin0, pdf.coin1)))) points(list.x, pdf.coin1, col='red') title(main="p(x | theta) for H0 (Black) and H1 (Red)") title(sub=paste( "H0: theta = ", prob.coin0, ", H1: theta = ", prob.coin1, collapse="")) # Plot Likelihood Ratio H0:H1 as a function of X par(mfcol=c(1,1)) plot(likelihood.table[,"x"], likelihood.table[,"likeratio"], xlab="x", ylab= "LikeRatio", main="Likelihood Ratio H0:H1") list.levels=c(1/10, 1,10) abline(h=1, col='grey') abline(h=list.levels, col=c(2,3,4)) plot(likelihood.table[,"x"], log(likelihood.table[,"likeratio"]), xlab="x", ylab= "Log 
LikeRatio", main="Likelihood Ratio HO:H1") abline(h=0,col='gray') abline(h=log(list.levels), col=c(2,3,4)) paste(list.levels, collapse=", ") ## [1] "0.1, 1, 10" title(sub=paste("LR Levels: ", paste(list.levels, collapse=", "), sep="")) # 3.0 Bayes Rule accepts H0 if Likelihood Ratio > c # where c=P(H1)/P(H0); the prior odds of H1 to H0 # 4.0 Consider decision rule for each value of x*=0,1, ...,10 # # d.x*=1 if x >=x* #Create table of probability of errors list.xstar=c(-1:10) table.errorProbs<-matrix(NA, nrow=length(list.xstar), ncol=3) for (j.xstar in c(1:length(list.xstar))){ xstar=list.xstar[j.xstar] table.errorProbs[j.xstar,1]=xstar prob.rejectH0.given.H0=1-pbinom(xstar,size=10,prob=prob.coin0) table.errorProbs[j.xstar, 2]=prob.rejectH0.given.H0 prob.acceptH0.givenH1 = pbinom(xstar,size=10,prob=prob.coin1) table.errorProbs[j.xstar, 3]=prob.acceptH0.givenH1 } dimnames(table.errorProbs)[[2]]<-c("xstar", "P(Reject H0 | H0)", "P(Accept H0 | H1)") print(table.errorProbs) ## xstar P(Reject H0 | H0) P(Accept H0 | H1) ## [1,] -1 1.0000000000 0.0000000000 ## [2,] 0 0.9990234375 0.0000059049 ## [3,] 1 0.9892578125 0.0001436859 ## [4,] 2 0.9453125000 0.0015903864 ## [5,] 3 0.8281250000 0.0105920784 ## [6,] 4 0.6230468750 0.0473489874 ## [7,] 5 0.3769531250 0.1502683326 ## [8,] 6 0.1718750000 0.3503892816 ## [9,] 7 0.0546875000 0.6172172136 ## [10,] 8 0.0107421875 0.8506916541 ## [11,] 9 0.0009765625 0.9717524751 ## [12,] 10 0.0000000000 1.0000000000 # plot(table.errorProbs[,1], table.errorProbs[,2], xlab="x* (Reject H0 if X > x*)", ylab="P(Reject H0 | H0)", main="P(Type I Error)") par(mfcol=c(1,1)) plot(table.errorProbs[,1], table.errorProbs[,3], xlab="x* (Reject H0 if X > x*)", ylab="P(Accept H0 |H1)", main="P(Type II Error)") par(mfcol=c(1,1)) plot(table.errorProbs[,2], table.errorProbs[,3], xlab="P(Reject H0 | H0)", ylab="P(Accept H0 | H1)", main="Risk Points of Decision Rules\nP(Type II Error) vs P(Type I Error)" ) # Add Risk Points of decision rules d2.x* (opposite of 
rules d.x*)
# for constants xstar in list.xstar
# d.xstar rejects H0 if x > xstar
# d2.xstar rejects H0 if x <= xstar
table.errorProbs2<-matrix(NA, nrow=length(list.xstar), ncol=3)
for (j.xstar in c(1:length(list.xstar))){
  xstar=list.xstar[j.xstar]
  table.errorProbs2[j.xstar,1]=xstar
  prob.rejectH0.given.H0=pbinom(xstar,size=10,prob=prob.coin0)
  table.errorProbs2[j.xstar, 2]=prob.rejectH0.given.H0
  prob.acceptH0.givenH1 = 1-pbinom(xstar,size=10,prob=prob.coin1)
  table.errorProbs2[j.xstar, 3]=prob.acceptH0.givenH1
}
points(table.errorProbs2[,2], table.errorProbs2[,3],pch="x",col="red")
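The likelihood-ratio column in the table of section 3 has a closed form: the binomial coefficients cancel in the ratio, so \(LR(x) = 0.5^{10} / (0.7^x\, 0.3^{10-x})\), and for example \(LR(0) = (5/3)^{10} \approx 165.38\), matching the first row. A sketch of the computation:

```python
from math import comb

def likelihood_ratio(x, n=10, p0=0.5, p1=0.7):
    # LR of H0 to H1 for x heads in n tosses; comb(n, x) cancels in the ratio
    f0 = comb(n, x) * p0**x * (1 - p0) ** (n - x)
    f1 = comb(n, x) * p1**x * (1 - p1) ** (n - x)
    return f0 / f1
```

The ratio is monotone decreasing in \(x\), which is why the Neyman-Pearson rules here all take the form "reject H0 when X > x*".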
# Project6_LRTest_HardyWeinberg.r # Multinomial Counts: Hardy Weinberg Equilibrium # 1.0 Trinomial data from Example 8.5.1.A, p. 273 of Rice. # # x=(X1,X2,X3) # counts of multinomial cells (1,2,3) # Erythrocyte antigen blood types of n=1029 Hong Kong population in 1937. x<-c(342,500,187) # Two estimators: Multinomial MLE and Binomial(X3) MLE x.n=sum(x) x.theta.mle=(2*x[3] + x[2])/(2*sum(x)) x.theta.binomialx3=sqrt(x[3]/sum(x)) print(x.theta.mle) ## [1] 0.4246842 print(x.theta.binomialx3) ## [1] 0.4262978 # 2.1 R functions # function computing cell probabilities given Hardy-Weinberg theta parameter fcn.probs.hardyweinberg<-function(theta0){ probs=c((1-theta0)**2, 2*theta0*(1-theta0), theta0**2) return(probs) } # 3. Generalized Likelihood Ratio Test ---- # 3.1 Construct vectors of Observed and Expected counts counts.Observed=x probs.mle=fcn.probs.hardyweinberg(x.theta.mle) counts.sum=sum(x) counts.Expected=counts.sum*probs.mle labels.BloodType=c("M","MN","N") # 3.2 Compute LRStat # Conduct level alpha=0.05 test # Compute P-Value component.LRStat=2*counts.Observed*log(counts.Observed/counts.Expected) LRStat=sum(component.LRStat) print(LRStat) ## [1] 0.03249863 # # Under Null Hypothesis, LRStat is a chi-square r.v. with # q=(m-1) -1 = (3-1)-1=1 degrees of freedom # # For level alpha test, determine critical value of LRStat alpha=.05 q=3-1-1 chisq.criticalValue=qchisq(p=1-alpha,df=q) print(chisq.criticalValue) ## [1] 3.841459 # Test accepted at alpha level (LRStat < chisq.criticalValue) # # Compute P-value of LRStat LRStat.pvalue=1-pchisq(LRStat, df=q) print(LRStat.pvalue) ## [1] 0.8569376 # 4. 
Pearson ChiSquare Test ---- # 4.1 Construct vectors of Observed and Expected counts counts.Observed=x probs.mle=fcn.probs.hardyweinberg(x.theta.mle) counts.sum=sum(x) counts.Expected=counts.sum*probs.mle labels.BloodType=c("M","MN","N") # 4.2 Compute Pearson ChiSqStat # Conduct level alpha=0.05 test # Compute P-Value component.ChiSqStat=((counts.Observed-counts.Expected)^2 )/counts.Expected ChiSqStat=sum(component.ChiSqStat) print(ChiSqStat) ## [1] 0.03250408 # # Under Null Hypothesis, ChiSqStat is a chi-square r.v. with # q=(m-1) -1 = (3-1)-1=1 degrees of freedom # # For level alpha test, determine critical value of ChiSqStat alpha=.05 q=3-1-1 chisq.criticalValue=qchisq(p=1-alpha,df=q) print(chisq.criticalValue) ## [1] 3.841459 # Test accepted at alpha level (ChiSqStat < chisq.criticalValue) # # Compute P-value of ChiSqStat ChiSqStat.pvalue=1-pchisq(ChiSqStat, df=q) print(ChiSqStat.pvalue) ## [1] 0.8569258 ############################ # 5. Create Table for Both Tests ---- table.lrtests<-data.frame( Observed=counts.Observed, Expected=counts.Expected, LRStat.j=component.LRStat, ChiSqStat.j=component.ChiSqStat) # Add last row equal to sums of columns # Use r function apply(X=, MARGIN=, FUN=) table.lrtests.0<-rbind( table.lrtests, t(as.matrix(apply(X=table.lrtests,MARGIN=2,FUN=sum)))) dimnames(table.lrtests.0)[[1]]<-c(labels.BloodType,"Total/Sum") print(table.lrtests.0) ## Observed Expected LRStat.j ChiSqStat.j ## M 342 340.587 2.83189894 0.005862327 ## MN 500 502.826 -5.63617628 0.015883284 ## N 187 185.587 2.83677597 0.010758471 ## Total/Sum 1029 1029.000 0.03249863 0.032504082 print(data.frame(cbind(LRStat.pvalue=LRStat.pvalue, ChiSqStat.pvalue=ChiSqStat.pvalue), row.names=c("P-Value"))) ## LRStat.pvalue ChiSqStat.pvalue ## P-Value 0.8569376 0.8569258
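The whole generalized likelihood ratio test of section 3 reduces to a few lines. For \(q = 1\) degree of freedom the chi-square tail probability can be written with the error function, \(P(\chi^2_1 > t) = 1 - \mathrm{erf}(\sqrt{t/2})\), so no statistics library is needed for this check:

```python
import math

def hardy_weinberg_lr_test(counts=(342, 500, 187)):
    x1, x2, x3 = counts
    n = x1 + x2 + x3
    theta = (2 * x3 + x2) / (2 * n)  # multinomial MLE of theta
    probs = ((1 - theta) ** 2, 2 * theta * (1 - theta), theta ** 2)
    expected = [n * p for p in probs]
    # LRStat = 2 * sum of Observed * log(Observed / Expected)
    lr = 2 * sum(o * math.log(o / e) for o, e in zip(counts, expected))
    pvalue = 1 - math.erf(math.sqrt(lr / 2))  # chi-square tail, df = 1
    return lr, pvalue
```

This reproduces the LRStat 0.0325 and p-value 0.857 printed above.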
# Rproject6_LRTest_Poisson.r # Example 8.4.A: Counts of asbestos fibers on filters # Steel et al. 1980 # 1. Data ---- x=c(31,29,19,18,31,28, 34,27,34,30,16,18, 26,27,27,18,24,22, 28,24,21,17,24) # x= rpois(n=100, lambda=24.91) # 2. Parameter Estimation of Poisson---- # Suppose x is a sample of size n=23 from a Poisson(lambda) distribution # # 2.1 Estimate of lambda (MOM and MLE are the same) lambda.hat=mean(x) print(lambda.hat) ## [1] 24.91304 # par(mfcol=c(1,1)) # 2.2 Plot histograms of sample data # Vary argument nclass= for number of bins hist(x, xlim=c(0,1.5*max(x)), probability=TRUE) hist(x,nclass=15, xlim=c(0,1.5*max(x)), probability=TRUE) hist(x,nclass=10, xlim=c(0,1.5*max(x)), probability=TRUE) # Add plot of pmf function for fitted Poisson distribution x.grid=seq(0,1.5*max(x),1) x.probs=dpois(x.grid, lambda=lambda.hat) lines(x.grid,x.probs, type="h", col='blue',lwd=4) # lambda.hat.sterror=sqrt(lambda.hat/length(x)) print(lambda.hat.sterror) ## [1] 1.040757 # 3. Likelihood Ratio Test Based on Histogram nclass0=10 hist.0<-hist(x,nclass=nclass0) print(hist.0$counts) ## [1] 5 1 2 3 1 5 2 2 2 n.bins=length(hist.0$counts) print(hist.0$breaks) ## [1] 16 18 20 22 24 26 28 30 32 34 # cdf values at the breaks hist.0.breaks.MLEFITTED.cdf=ppois(hist.0$breaks, lambda=lambda.hat) # bin probabilities are the differences hist.0.pbin=diff(hist.0.breaks.MLEFITTED.cdf) #### # 3.1 Construct vectors of Observed and Expected counts counts.Observed=hist.0$counts probs.mle=hist.0.pbin counts.sum=sum(counts.Observed) counts.Expected=counts.sum*probs.mle #labels.BloodType=c("M","MN","N") # 3.2 Compute LRStat # Conduct level alpha=0.05 test # Compute P-Value component.LRStat=2*counts.Observed*log(counts.Observed/counts.Expected) LRStat=sum(component.LRStat) print(LRStat) ## [1] 15.82175 # # Under Null Hypothesis, LRStat is a chi-square r.v. 
with # q=(m-1) -1 degrees of freedom # where m=number of bins # For level alpha test, determine critical value of LRStat alpha=.05 m=n.bins q=m-1-1 print(q) ## [1] 7 chisq.criticalValue=qchisq(p=1-alpha,df=q) print(chisq.criticalValue) ## [1] 14.06714 # Test rejected at alpha level (LRStat > chisq.criticalValue) # # Compute P-value of LRStat LRStat.pvalue=1-pchisq(LRStat, df=q) print(LRStat.pvalue) ## [1] 0.02679569 # 4. Pearson ChiSquare Test ---- # 4.1 Construct vectors of Observed and Expected counts counts.Observed=hist.0$counts probs.mle=hist.0.pbin counts.sum=sum(counts.Observed) counts.Expected=counts.sum*probs.mle # 4.2 Compute Pearson ChiSqStat # Conduct level alpha=0.05 test # Compute P-Value component.ChiSqStat=((counts.Observed-counts.Expected)^2 )/counts.Expected ChiSqStat=sum(component.ChiSqStat) print(ChiSqStat) ## [1] 16.84007 # # Under Null Hypothesis, LRStat is a chi-square r.v. with # q=(m-1) -1 degrees of freedom # where m=number of bins # For level alpha test, determine critical value of LRStat alpha=.05 m=n.bins q=m-1-1 print(q) ## [1] 7 chisq.criticalValue=qchisq(p=1-alpha,df=q) print(chisq.criticalValue) ## [1] 14.06714 # Test accepted at alpha level (ChiSqStat < chisq.criticalValue) # # Compute P-value of ChiSqStat ChiSqStat.pvalue=1-pchisq(ChiSqStat, df=q) print(ChiSqStat.pvalue) ## [1] 0.01845707 ############################ # 5. 
Create Table for Both Tests ---- table.lrtests<-data.frame( Observed=counts.Observed, Expected=counts.Expected, LRStat.j=component.LRStat, ChiSqStat.j=component.ChiSqStat) # Add last row equal to sums of columns # Use r function apply(X=, MARGIN=, FUN=) table.lrtests.0<-rbind( table.lrtests, t(as.matrix(apply(X=table.lrtests,MARGIN=2,FUN=sum)))) labels.tablerows<-paste("Bin_",c(1:n.bins),"[", hist.0$breaks[1:(n.bins)],",", hist.0$breaks[2:(n.bins+1)],"]",sep="") dimnames(table.lrtests.0)[[1]]<-c(labels.tablerows,"Total/Sum") print(table.lrtests.0) ## Observed Expected LRStat.j ChiSqStat.j ## Bin_1[16,18] 5 1.2812509 13.6160105 10.79343231 ## Bin_2[18,20] 1 2.1902179 -1.5680021 0.64679349 ## Bin_3[20,22] 2 3.0734068 -1.7185579 0.37489412 ## Bin_4[22,24] 3 3.6030113 -1.0989460 0.10092185 ## Bin_5[24,26] 1 3.5810485 -2.5513113 1.86029635 ## Bin_6[26,28] 5 3.0554534 4.9250991 1.23754511 ## Bin_7[28,30] 2 2.2621571 -0.4926865 0.03038089 ## Bin_8[30,32] 2 1.4669015 1.2399793 0.19373762 ## Bin_9[32,34] 2 0.8399652 3.4701678 1.60206703 ## Total/Sum 23 21.3534126 15.8217528 16.84006878 print(data.frame(cbind(LRStat.pvalue=LRStat.pvalue, ChiSqStat.pvalue=ChiSqStat.pvalue), row.names=c("P-Value"))) ## LRStat.pvalue ChiSqStat.pvalue ## P-Value 0.02679569 0.01845707
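The binned Pearson statistic can be reproduced without R: build the Poisson cdf by accumulating pmf terms, difference it at the histogram breaks, and sum \((O-E)^2/E\). A pure-Python sketch using the counts and breaks printed by the script above:

```python
import math

def poisson_cdf(k, lam):
    # P(X <= k) by accumulating pmf terms iteratively (avoids factorial overflow)
    pmf = math.exp(-lam)
    cdf = pmf
    for i in range(1, k + 1):
        pmf *= lam / i
        cdf += pmf
    return cdf

def binned_pearson(counts, breaks, lam):
    # Bin probabilities are differences of the cdf at the breaks (as in diff(ppois(...)))
    cdf_at_breaks = [poisson_cdf(b, lam) for b in breaks]
    pbin = [b - a for a, b in zip(cdf_at_breaks, cdf_at_breaks[1:])]
    n = sum(counts)
    expected = [n * p for p in pbin]
    return sum((o - e) ** 2 / e for o, e in zip(counts, expected))

counts = [5, 1, 2, 3, 1, 5, 2, 2, 2]            # hist.0$counts from the script
breaks = [16, 18, 20, 22, 24, 26, 28, 30, 32, 34]
lam = 573 / 23                                   # lambda.hat = mean(x) = 24.913...
chisq = binned_pearson(counts, breaks, lam)
```

This should match the Pearson statistic 16.84 in the table; note the first bin alone contributes about 10.8 of it, driven by the excess of observations in [16, 18].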
# Rproject7_5_beeswax.r
# 1.0 Read in data ----
# See Example 10.2.1 A, data from White, Riethof, and Kushnir (1960)
#
file0.beeswax<-"Rice 3e Datasets/ASCII Comma/Chapter 10/beeswax.txt"
data.beeswax<-read.table(file=file0.beeswax,sep=",", header=TRUE)
head(data.beeswax)
##   MeltingPoint Hydrocarbon
## 1        63.78       14.27
## 2        63.45       14.80
## 3        63.58       12.28
## 4        63.08       17.09
## 5        63.40       15.10
## 6        64.42       12.92
#x.label="MeltingPoint"
x.label="Hydrocarbon"
x=data.beeswax[,x.label]
# 2.0 Plots of the data
# 2.1 Index Plot
plot(x, ylab=x.label, main="Beeswax Data (White et. al. 1960)")
# 2.2 Plot of the ECDF
plot(ecdf(x), verticals=TRUE,
  col.points='blue', col.hor='red', col.vert='green',
  main=paste("ECDF of Beeswax ", x.label,sep=""))
# 2.3 Histogram
hist(x, main="Histogram (Counts)")
hist(x, main="Histogram (Density)", probability=TRUE)
# Add plot of Fitted Normal
grid.x<-seq(.95*min(x), 1.05*max(x), .01)
x.mean=mean(x)
x.var=var(x)
grid.x.normdensity=dnorm(grid.x, mean=x.mean, sd=sqrt(x.var))
lines(grid.x, grid.x.normdensity, col='green')
# 2.4 Normal QQ Plot
qqnorm(x)
#
# Add line with slope=sqrt(x.var) and intercept=x.mean
abline(a=x.mean, b=sqrt(x.var), col='green')
# 3.0 Compute selected quantiles
list.probs=c(.10,.25,.50,.75,.9)
print(data.frame(cbind(prob=list.probs,
  quantile=quantile(x,probs=list.probs))))
##     prob quantile
## 10% 0.10   13.676
## 25% 0.25   14.070
## 50% 0.50   14.570
## 75% 0.75   15.115
## 90% 0.90   15.470
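R's quantile() interpolates linearly between order statistics by default (its "type 7" rule), which is how the table above is produced. A Python sketch of that rule, run on hypothetical numbers rather than the beeswax data:

```python
def quantile_type7(data, p):
    """R's default (type 7) sample quantile: interpolate linearly
    at position h = (n-1)*p among the sorted observations."""
    xs = sorted(data)
    h = (len(xs) - 1) * p
    lo = int(h)  # floor, since h >= 0
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (h - lo) * (xs[hi] - xs[lo])

# With five equally spaced points, the quartile lands on an observation
q25 = quantile_type7([1, 2, 3, 4, 5], 0.25)  # h = 1.0, so exactly xs[1]
q10 = quantile_type7([1, 2, 3, 4, 5], 0.10)  # h = 0.4, interpolated
```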
# Rproject7_6_lifetimes.r
# 1.0 Read in data ----
# See Example 10.2.1 A, data from White, Riethof, and Kushnir (1960)
#
# Lifetime in days of animals who died within 2 years
# Groups 1-5, 72 animals per group
# Control group, 107 animals
gpigs1=scan(file="Rice 3e Datasets/ASCII Comma/Chapter 10/gpigs1.txt",sep=",")
gpigs2=scan(file="Rice 3e Datasets/ASCII Comma/Chapter 10/gpigs2.txt",sep=",")
gpigs3=scan(file="Rice 3e Datasets/ASCII Comma/Chapter 10/gpigs3.txt",sep=",")
gpigs4=scan(file="Rice 3e Datasets/ASCII Comma/Chapter 10/gpigs4.txt",sep=",")
gpigs5=scan(file="Rice 3e Datasets/ASCII Comma/Chapter 10/gpigs5.txt",sep=",")
gpigscontrol=scan(file="Rice 3e Datasets/ASCII Comma/Chapter 10/gpigscontrol.txt",sep=",")
# Plot the data (check for errors)
plot(gpigs1)
plot(gpigs2)
plot(gpigs3)
plot(gpigs4)
plot(gpigs5)
plot(gpigscontrol)
# 2. Compute empirical survival functions for each group ---
# 2.1 Define fcn.ecdf
#     empirical cdf of time-to-failure
# Inputs
#   x (times of failures)
#   n (total number of items/individuals)
# For jth smallest failure, set ecdf = j/(n+1)
fcn.ecdf<-function(x, n=72){
  if (sum(1*(diff(x)<0))==0){
    x.0=x
    x.0.ecdf=c(1:length(x.0))/(n+1)
    return(x.0.ecdf)}else{return(NULL)}
}
gpigs1.esf<-1-fcn.ecdf(gpigs1)
gpigs2.esf<-1-fcn.ecdf(gpigs2)
gpigs3.esf<-1-fcn.ecdf(gpigs3)
gpigs4.esf<-1-fcn.ecdf(gpigs4)
gpigs5.esf<-1-fcn.ecdf(gpigs5)
gpigscontrol.esf<-1-fcn.ecdf(gpigscontrol,n=107)
# 3.0 Plot Empirical Survival Functions ---
xlim0=c(0,800.)
ylim0=c(0,1.)
plot(gpigscontrol,gpigscontrol.esf, xlim=xlim0, ylim=ylim0,
  main="Empirical Survival Functions \n Figure 10.2 (Rice)",
  xlab="Days Elapsed", ylab="Proportion of live animals",
  type="l", col='black')
cols.groups<-rainbow(5)
lines(gpigs1, gpigs1.esf, lty=1, col=cols.groups[1])
lines(gpigs2, gpigs2.esf, lty=1, col=cols.groups[2])
lines(gpigs3, gpigs3.esf, lty=1, col=cols.groups[3])
lines(gpigs4, gpigs4.esf, lty=1, col=cols.groups[4])
lines(gpigs5, gpigs5.esf, lty=1, col=cols.groups[5])
legend(x=550,y=1.0,
  legend=c("Control",paste("Group ", c("I","II","III","IV","V"),sep="")),
  lty=rep(1,times=6),
  col=c('black', cols.groups), cex=.8)
# Redo plot with log scale
plot(gpigscontrol,log(gpigscontrol.esf), xlim=xlim0,
  ylim=c(log(.005),log(1.0)),
  main="Log Empirical Survival Functions \n Figure 10.2 (Rice)",
  xlab="Days Elapsed", ylab="Log Proportion of live animals",
  type="l", col='black')
cols.groups<-rainbow(5)
lines(gpigs1, log(gpigs1.esf), lty=1, col=cols.groups[1])
lines(gpigs2, log(gpigs2.esf), lty=1, col=cols.groups[2])
lines(gpigs3, log(gpigs3.esf), lty=1, col=cols.groups[3])
lines(gpigs4, log(gpigs4.esf), lty=1, col=cols.groups[4])
lines(gpigs5, log(gpigs5.esf), lty=1, col=cols.groups[5])
legend(x=0,y=-3.0,
  legend=c("Control",paste("Group ", c("I","II","III","IV","V"),sep="")),
  lty=rep(1,times=6),
  col=c('black', cols.groups), cex=.8)
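The .esf vectors above are one minus a plotting-position ECDF: the j-th smallest failure time gets ECDF value j/(n+1), so the survival estimate there is 1 − j/(n+1). The same construction in a Python sketch, on a tiny made-up sample:

```python
def empirical_survival(times, n=None):
    """Pair each ordered failure time with 1 - j/(n+1), the
    plotting-position estimate of the survival function S(t).
    Pass n explicitly when some items have not yet failed."""
    xs = sorted(times)
    if n is None:
        n = len(xs)
    return [(t, 1.0 - j / (n + 1.0)) for j, t in enumerate(xs, start=1)]

surv = empirical_survival([20, 5, 10])  # n = 3, so steps of 1/4
```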
# Rproject8_1_windspeed.r
# 1.0 Read in data ----
# See Problem 10.9.39
# data from Simiu and Filliben (1975), analysis of extreme winds
windspeed=read.table(file="Rice 3e Datasets/ASCII Comma/Chapter 10/windspeed.txt",
  sep=",",stringsAsFactors = FALSE)
windspeed.0=t(windspeed[,-1])
dimnames(windspeed.0)<-list(c(1:nrow(windspeed.0)), windspeed[,1])
head(windspeed.0)
##   Cairo Alpena TatoushIsland Williston Richmond Burlington Eastport Canton
## 1    35     38            68        38       46         40       53     51
## 2    38     43            51        50       48         47       41     53
## 3    33     41            65        40       41         43       54     50
## 4    35     39            61        35       43         40       49     44
## 5    40     41            68        38       37         41       60     46
## 6    38     38            54        41       47         50       54     44
##   Yuma Duluth Valentine Charleston Eureka OklahomaCity Baker Sheridan
## 1   32     54        38         52     35           56    27       44
## 2   32     49        44         49     35           41    30       38
## 3   29     49        44         46     46           44    29       41
## 4   32     46        43         43     46           57    28       43
## 5   32     50        39         50     35           48    28       40
## 6   37     45        36         37     35           43    30       38
##   BlockIsland Winnemucca NorthHead KeyWest CorpusChristi
## 1          60         36        69      32            47
## 2          54         32        65      40            41
## 3          65         36        70      39            41
## 4          63         32        63      44            40
## 5          56         38        73      41            90
## 6          59         41        65      40            38
boxplot(windspeed.0)
#boxplot(windspeed.0, horizontal=TRUE)
boxplot(windspeed.0, horizontal=FALSE,las=2,cex.lab=.5)
#RProject8_2_ks_test.r
set.seed(0)
par(mfcol=c(2,2)) # 2x2 Panel of Plots
# Generate random sample from N(0,1) distribution
#
x=rnorm(50)
# 1.1 Index plot of sample
plot(x)
# 1.2 Normal QQ Plot of sample
qqnorm(x)
# 1.3 Empirical CDF of sample
plot.ecdf(x)
# 1.4 Theoretical and empirical CDF together
par(mfcol=c(1,1))
x=rt(50,df=15)
plot.ecdf(x)
x.grid=seq(-3,3,.001)
x.grid.pnorm<-pnorm(x.grid)
lines(x.grid,x.grid.pnorm,col='green',type="l")
# Compute Kolmogorov Smirnov Test Statistic
x.ecdf<-ecdf(x)
x.ecdf(x)
## [1] 0.80 0.84 0.74 0.52 0.58 0.68 0.06 0.76 0.66 0.28 0.16 0.34 0.70 0.92
## [15] 0.48 0.78 0.44 0.82 0.64 0.96 0.02 0.90 0.22 0.72 0.18 0.42 0.32 1.00
## [29] 0.94 0.46 0.50 0.12 0.54 0.56 0.62 0.36 0.24 0.14 0.30 0.88 0.40 0.08
## [43] 0.38 0.20 0.04 0.10 0.26 0.60 0.86 0.98
KSstat=max(abs(x.ecdf(x)-pnorm(x)))
x.KSstat=x[KSstat==abs(x.ecdf(x)-pnorm(x))]
abline(v=x.KSstat, col='gray')
title(paste("KS-stat = ",as.character(round(KSstat,digits=3)),sep=""),adj=1)
# 2. Apply R function ks.test() to sample
ks.test(x, y="pnorm")
##
##  One-sample Kolmogorov-Smirnov test
##
## data: x
## D = 0.19902, p-value = 0.03273
## alternative hypothesis: two-sided
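The KSstat line above measures the discrepancy only at the sample points, as max|ECDF(x_i) − F(x_i)|; ks.test() also checks the ECDF value just below each jump, which is why its D can differ slightly from a hand computation. A Python sketch of the statistic as the script computes it, checked against a uniform null instead of the normal so the arithmetic is easy to follow:

```python
def ks_stat_at_points(sample, cdf):
    """max_i |ECDF(x_i) - F(x_i)| over the sample points, mirroring
    max(abs(x.ecdf(x) - pnorm(x))) in the script above.  The exact
    KS statistic would also examine the ECDF just below each jump."""
    xs = sorted(sample)
    n = len(xs)
    return max(abs((i + 1) / n - cdf(x)) for i, x in enumerate(xs))

# Uniform(0,1) null hypothesis: F is the identity on [0,1]
d = ks_stat_at_points([0.1, 0.4, 0.8], lambda u: u)
```

Here the ECDF takes values 1/3, 2/3, 1 at the three points, so the largest gap is |2/3 − 0.4|.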
# RProject8_3_bootstrap_location.r
# Problem 10.26 of Rice
# Hampson and Walker data on heats of sublimation of
# platinum, iridium, and rhodium
# To install these packages, uncomment the next two lines
#install.packages("MASS")
#install.packages("boot")
library(MASS)
## Warning: package 'MASS' was built under R version 3.1.3
library(boot)
## Warning: package 'boot' was built under R version 3.1.3
x.platinum=scan(file="Rice 3e Datasets/ASCII Comma/Chapter 10/platinum.txt")
x.iridium=scan(file="Rice 3e Datasets/ASCII Comma/Chapter 10/iridium.txt")
x.rhodium=scan(file="Rice 3e Datasets/ASCII Comma/Chapter 10/rhodium.txt")
# Parts (a)-(d)
x=x.platinum
hist(x)
stem(x)
##
## The decimal point is at the |
##
## 132 | 7
## 134 | 134578889900224488
## 136 | 36
## 138 |
## 140 | 2
## 142 | 3
## 144 |
## 146 | 58
## 148 | 8
boxplot(x)
plot(x)
# (e)
# Measurements 8,9,10 are all very high as are measurements 14 and 15
# The data do not appear independent
# (f) Measures of location
mean(x); mean(x,trim=.1); mean(x,trim=.2); median(x); huber(x,k=1.5)[[1]]
## [1] 137.0462
## [1] 136.3091
## [1] 135.2875
## [1] 135.1
## [1] 135.3841
# (g) Standard error of the sample mean and approximate 90 percent conf. interval
x.stdev=sqrt(var(x))
x.mean.sterr=x.stdev/sqrt(length(x))
print(x.mean.sterr)
## [1] 0.8724542
alpha=0.10
z.upperalphahalf=qnorm(1-alpha/2)
x.mean=mean(x)
x.mean.ci<-c(-1,1)*z.upperalphahalf* x.mean.sterr + x.mean
print(x.mean); print(x.mean.ci)
## [1] 137.0462
## [1] 135.6111 138.4812
# parts (i), (j), (k).
# Bootstrap confidence intervals of different location measures
fcn.median<- function(x, d) {
  return(median(x[d]))
}
fcn.mean<- function(x, d) {
  return(mean(x[d]))
}
fcn.trimmedmean <- function(x, d, trim=0) {
  # Note: trim/length(x) passes a near-zero trim fraction to mean(),
  # so with trim=.2 this behaves like the untrimmed mean (compare the
  # bootstrap t0 values below)
  return(mean(x[d], trim/length(x)))
}
fcn.huber<-function(x,d){
  x.huber=huber(x[d],k=1.5)
  return(x.huber[[1]])
}
# Bootstrap analysis of sample mean:
x.boot.mean= boot(x, fcn.mean, R=1000)
plot(x.boot.mean)
print(x.boot.mean$t0)
## [1] 137.0462
print(sqrt(var(x.boot.mean$t)))
##           [,1]
## [1,] 0.8406306
boot.ci(x.boot.mean,conf=.95, type="basic")
## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 1000 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = x.boot.mean, conf = 0.95, type = "basic")
##
## Intervals :
## Level      Basic
## 95%   (135.2, 138.5 )
## Calculations and Intervals on Original Scale
title(paste("Sample Mean: StDev=",
  as.character(round(sqrt(var(x.boot.mean$t)),digits=4)),sep=""), adj=1)
# Bootstrap analysis of sample median:
x.boot.median= boot(x, fcn.median, R=1000)
plot(x.boot.median)
title(paste("Median: StDev=",
  as.character(round(sqrt(var(x.boot.median$t)),digits=4)),sep=""), adj=1)
print(x.boot.median$t0)
## [1] 135.1
print(sqrt(var(x.boot.median$t)))
##          [,1]
## [1,] 0.238632
boot.ci(x.boot.median,conf=.95, type="basic")
## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 1000 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = x.boot.median, conf = 0.95, type = "basic")
##
## Intervals :
## Level      Basic
## 95%   (134.4, 135.4 )
## Calculations and Intervals on Original Scale
# Bootstrap analysis of sample trimmed mean:
x.boot.trimmedmean= boot(x, fcn.trimmedmean,trim=.2, R=1000)
plot(x.boot.trimmedmean)
title(paste("Trimmed Mean: StDev=",
  as.character(round(sqrt(var(x.boot.trimmedmean$t)),digits=4)),sep=""), adj=1)
print(x.boot.trimmedmean$t0)
## [1] 137.0462
print(sqrt(var(x.boot.trimmedmean$t)))
##          [,1]
## [1,] 0.833879
boot.ci(x.boot.trimmedmean,conf=.95, type="basic")
## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 1000 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = x.boot.trimmedmean, conf = 0.95, type = "basic")
##
## Intervals :
## Level      Basic
## 95%   (135.3, 138.6 )
## Calculations and Intervals on Original Scale
# Bootstrap analysis of Huber M estimate:
x.boot.huber= boot(x, fcn.huber, R=1000)
plot(x.boot.huber)
title(paste("Huber M Est: StDev=",
  as.character(round(sqrt(var(x.boot.huber$t)),digits=4)),sep=""), adj=1)
print(x.boot.huber$t0)
## [1] 135.3841
print(sqrt(var(x.boot.huber$t)))
##           [,1]
## [1,] 0.3827326
boot.ci(x.boot.huber,conf=.95, type="basic")
## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 1000 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = x.boot.huber, conf = 0.95, type = "basic")
##
## Intervals :
## Level      Basic
## 95%   (134.4, 135.9 )
## Calculations and Intervals on Original Scale
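All four boot() calls above do the same thing: resample the data with replacement, recompute the location estimate on each resample, and summarize the spread of the replicated values. A minimal Python sketch of that loop, reporting the bootstrap standard error and a percentile interval (boot.ci's "basic" interval is a further refinement of the percentile one), on made-up data rather than the platinum measurements:

```python
import random
import statistics

def bootstrap(data, stat, R=1000, seed=0):
    """Return (standard error, approx. 95% percentile interval) of
    `stat` over R bootstrap resamples drawn with replacement."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                  for _ in range(R))
    se = statistics.stdev(reps)
    return se, (reps[int(0.025 * R)], reps[int(0.975 * R) - 1])

# Hypothetical sample of 26 values (same size as the platinum data)
se, ci = bootstrap(list(range(1, 27)), statistics.mean, R=500)
```

With a seeded generator the replicates are reproducible; for real work one would use R's boot package or numpy's vectorized resampling instead of this loop.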
# RProject8_4_density.r
require(graphics)
set.seed(0)
x=ifelse(runif(100)<.5, rnorm(100) +5,rnorm(100))
# x=ifelse(runif(100)<.5, rgamma(100,shape=3,scale=2),rnorm(100))
# x=ifelse(runif(100)<.5, (rnorm(100) +5)^2,(rnorm(100)^2))
par(mfcol=c(2,2))
x.density1<-density(x,bw="sj")
hist(x,nclass=50,probability=TRUE,
  main=paste("Density Estimate (bw='sj')",
    "\nN = ", as.character(length(x))," Bandwidth=",
    as.character(round(x.density1$bw,digits=3)), collapse=""))
lines(x.density1$x, x.density1$y, col="green")
rug(x)
x.density1<-density(x,bw="nrd0")
hist(x,nclass=50,probability=TRUE,
  main=paste("Density Estimate (bw='nrd0')",
    "\nN = ", as.character(length(x))," Bandwidth=",
    as.character(round(x.density1$bw,digits=3)), collapse=""))
lines(x.density1$x, x.density1$y, col="green")
rug(x)
x.density1<-density(x,bw="nrd0",adjust=.2)
hist(x,nclass=50,probability=TRUE,
  main=paste("Density Estimate (bw='nrd0 x .2')",
    "\nN = ", as.character(length(x))," Bandwidth=",
    as.character(round(x.density1$bw,digits=3)), collapse=""))
lines(x.density1$x, x.density1$y, col="green")
rug(x)
x.density1<-density(x,bw="nrd0",adjust=4.)
hist(x,nclass=50,probability=TRUE,
  main=paste("Density Estimate (bw='nrd0 x 4')",
    "\nN = ", as.character(length(x))," Bandwidth=",
    as.character(round(x.density1$bw,digits=3)), collapse=""))
lines(x.density1$x, x.density1$y, col="green")
rug(x)
### Alternate Kernels:
par(mfcol=c(1,1))
(kernels <- eval(formals(density.default)$kernel))
## [1] "gaussian"     "epanechnikov" "rectangular"  "triangular"
## [5] "biweight"     "cosine"       "optcosine"
## show the kernels in the R parametrization
plot (density(0, bw = 1), xlab = "",
  main = "R's density() kernels with bw = 1")
for(i in 2:length(kernels))
  lines(density(0, bw = 1, kernel = kernels[i]), col = i)
legend(1.5,.4, legend = kernels, col = seq(kernels),
  lty = 1, cex = .8, y.intersp = 1)
#
stem(x)
##
## The decimal point is at the |
##
## -2 | 9
## -0 | 5411109977665543311110
##  0 | 0002235678990123455789
##  2 | 02436788999
##  4 | 011112222456666789000011112233456777
##  6 | 13335664
args(stem)
## function (x, scale = 1, width = 80, atom = 1e-08)
## NULL
#help(stem)
stem(x,scale=2)
##
## The decimal point is at the |
##
## -2 | 9
## -1 | 541110
## -0 | 9977665543311110
##  0 | 000223567899
##  1 | 0123455789
##  2 | 024
##  3 | 36788999
##  4 | 011112222456666789
##  5 | 000011112233456777
##  6 | 1333566
##  7 | 4
stem(x,scale=3)
##
## The decimal point is at the |
##
## -2 | 9
## -2 |
## -1 | 5
## -1 | 41110
## -0 | 99776655
## -0 | 43311110
##  0 | 000223
##  0 | 567899
##  1 | 01234
##  1 | 55789
##  2 | 024
##  2 |
##  3 | 3
##  3 | 6788999
##  4 | 0111122224
##  4 | 56666789
##  5 | 0000111122334
##  5 | 56777
##  6 | 1333
##  6 | 566
##  7 | 4
boxplot(x)
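With the Gaussian kernel, density() is just an average of normal bumps centered at the observations, with the bandwidth acting as the bumps' standard deviation — which is exactly what the adjust= factors above rescale. A Python sketch of the estimator:

```python
import math

def gaussian_kde(sample, bandwidth):
    """Kernel density estimate: average of Gaussian densities of
    standard deviation `bandwidth` centered at each data point."""
    n = len(sample)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def fhat(t):
        return norm * sum(math.exp(-0.5 * ((t - x) / bandwidth) ** 2)
                          for x in sample)
    return fhat

# Two well-separated points give a clearly bimodal estimate
fhat = gaussian_kde([0.0, 4.0], bandwidth=1.0)
```

Shrinking the bandwidth (R's adjust=.2) makes the bumps spikier and the estimate noisier; enlarging it (adjust=4) merges the modes, as in the four panels above.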
Reading 11: Debugging

Objectives

The topic of today’s class is systematic debugging. Sometimes you have no choice but to debug – particularly when the bug is found only when you plug the whole system together, or is reported by a user after the system is deployed, in which case it may be hard to localize it to a particular module. For those situations, we can suggest a systematic strategy for more effective debugging.

Reproduce the Bug

Start by finding a small, repeatable test case that produces the failure. If the bug was found by regression testing, then you’re in luck; you already have a failing test case in your test suite. If the bug was reported by a user, it may take some effort to reproduce the bug. For graphical user interfaces and multithreaded programs, a bug may be hard to reproduce consistently if it depends on timing of events or thread execution. Nevertheless, any effort you put into making the test case small and repeatable will pay off, because you’ll have to run it over and over while you search for the bug and develop a fix for it. Furthermore, after you’ve successfully fixed the bug, you’ll want to add the test case to your regression test suite, so that the bug never crops up again. Once you have a test case for the bug, making this test work becomes your goal.

Here’s an example. Suppose you have written this function:

/**
 * Find the most common word in a string.
 * @param text string containing zero or more words, where a word
 *     is a string of alphanumeric characters bounded by nonalphanumerics.
 * @return a word that occurs maximally often in text, ignoring alphabetic case.
 */
public static String mostCommonWord(String text) { ... }

A user passes the whole text of Shakespeare’s plays into your method, something like mostCommonWord(allShakespearesPlaysConcatenated), and discovers that instead of returning a predictably common English word like "the" or "a", the method returns something unexpected, perhaps "e".
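Shrinking a huge failing input like this can itself be semi-automated. Below is a hypothetical Python sketch (not part of the 6.005 handout): fails stands for any predicate that runs the buggy code on an input and reports whether the failure still occurs, and the loop repeatedly keeps whichever half of the input still fails.

```python
def reduce_failing_input(text, fails):
    """Greedy binary reduction: while either half of the input still
    triggers the failure, discard the other half.  Stops when neither
    half alone reproduces the bug, or one character remains."""
    assert fails(text), "start from a known-failing input"
    while len(text) > 1:
        mid = len(text) // 2
        if fails(text[:mid]):
            text = text[:mid]
        elif fails(text[mid:]):
            text = text[mid:]
        else:
            break  # the bug needs material from both halves
    return text

# Toy stand-in for the real bug: "fails" whenever an 'e' is present
minimal = reduce_failing_input("abcdefgh", lambda s: "e" in s)
```

Real reducers (delta debugging) also try removing smaller chunks when neither half fails on its own, but even this crude version can cut Shakespeare down to a single speech in a few dozen runs.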
Shakespeare’s plays have 100,000 lines containing over 800,000 words, so this input would be very painful to debug by normal methods, like print-debugging and breakpoint-debugging. Debugging will be easier if you first work on reducing the size of the buggy input to something manageable that still exhibits the same (or very similar) bug:

- does the first half of Shakespeare show the same bug? (Binary search! Always a good technique. More about this below.)
- does a single play have the same bug?
- does a single speech have the same bug?

Once you’ve found a small test case, find and fix the bug using that smaller test case, and then go back to the original buggy input and confirm that you fixed the same bug.

Understand the Location and Cause of the Bug

To localize the bug and its cause, you can use the scientific method:

- Study the data. Look at the test input that causes the bug, and the incorrect results, failed assertions, and stack traces that result from it.
- Hypothesize. Propose a hypothesis, consistent with all the data, about where the bug might be, or where it cannot be. It’s good to make this hypothesis general at first.
- Experiment. Devise an experiment that tests your hypothesis. It’s good to make the experiment an observation at first – a probe that collects information but disturbs the system as little as possible.
- Repeat. Add the data you collected from your experiment to what you knew before, and make a fresh hypothesis. Hopefully you have ruled out some possibilities and narrowed the set of possible locations and reasons for the bug.

Let’s look at these steps in the context of the mostCommonWord() example, fleshed out a little more with three helper methods:

/**
 * Find the most common word in a string.
 * @param text string containing zero or more words,
 *     where a word is a string of alphanumeric
 *     characters bounded by nonalphanumerics.
 * @return a word that occurs maximally often in text,
 *     ignoring alphabetic case.
 */
public static String mostCommonWord(String text) {
    ...
    words = splitIntoWords(text);
    ...
    frequencies = countOccurrences(words);
    ...
    winner = findMostCommon(frequencies);
    ...
    return winner;
}

/** Split a string into words ... */
private static List<String> splitIntoWords(String text) { ... }

/** Count how many times each word appears ... */
private static Map<String,Integer> countOccurrences(List<String> words) { ... }

/** Find the word with the highest frequency count ... */
private static String findMostCommon(Map<String,Integer> frequencies) { ... }

1. Study the Data

One important form of data is the stack trace from an exception. Practice reading the stack traces that you get, because they will give you enormous amounts of information about where and what the bug might be. The process of isolating a small test case may also give you data that you didn’t have before. You may even have two related test cases that bracket the bug in the sense that one succeeds and one fails. For example, maybe mostCommonWord("c c, b") is broken, but mostCommonWord("c c b") is fine.

2. Hypothesize

It helps to think about your program as modules, or steps in an algorithm, and try to rule out whole sections of the program at once. The flow of data in mostCommonWord() is shown at right. If the symptom of the bug is an exception in countOccurrences(), then you can rule out everything downstream, specifically findMostCommon(). Then you would choose a hypothesis that tries to localize the bug even further. You might hypothesize that the bug is in splitIntoWords(), corrupting its results, which then cause the exception in countOccurrences(). You would then use an experiment to test that hypothesis. If the hypothesis is true, then you would have ruled out countOccurrences() as the source of the problem. If it’s false, then you would rule out splitIntoWords().

3. Experiment

A good experiment is a gentle observation of the system without disturbing it much.
It might be:

- Run a different test case. The test case reduction process discussed above used test cases as experiments.
- Insert a print statement or assertion in the running program, to check something about its internal state.
- Set a breakpoint using a debugger, then single-step through the code and look at variable and object values.

It’s tempting to try to insert fixes to the hypothesized bug, instead of mere probes. This is almost always the wrong thing to do. First, it leads to a kind of ad hoc guess-and-test programming, which produces awful, complex, hard-to-understand code. Second, your fixes may just mask the true bug without actually removing it. For example, if you’re getting an ArrayOutOfBoundsException, try to understand what’s going on first. Don’t just add code that avoids or catches the exception, without fixing the real problem.

Other tips

Bug localization by binary search. Debugging is a search process, and you can sometimes use binary search to speed up the process. For example, in mostCommonWord(), the data flows through three helper methods. To do a binary search, you would divide this workflow in half, perhaps guessing that the bug is found somewhere between the first helper method call and the second, and insert probes (like breakpoints, print statements, or assertions) there to check the results. From the answer to that experiment, you would further divide in half.

Prioritize your hypotheses. When making your hypothesis, you may want to keep in mind that different parts of the system have different likelihoods of failure. For example, old, well-tested code is probably more trustworthy than recently-added code. Java library code is probably more trustworthy than yours. The Java compiler and runtime, operating system platform, and hardware are increasingly more trustworthy, because they are more tried and tested. You should trust these lower levels until you’ve found good reason not to.

Swap components.
If you have another implementation of a module that satisfies the same interface, and you suspect the module, then one experiment you can do is to try swapping in the alternative. For example, if you suspect your binarySearch() implementation, then substitute a simpler linearSearch() instead. If you suspect java.util.ArrayList, you could swap in java.util.LinkedList instead. If you suspect the Java runtime, run with a different version of Java. If you suspect the operating system, run your program on a different OS. If you suspect the hardware, run on a different machine. You can waste a lot of time swapping unfailing components, however, so don’t do this unless you have good reason to suspect a component.

Make sure your source code and object code are up to date. Pull the latest version from the repository, and delete all your binary files and recompile everything (in Eclipse, this is done by Project → Clean).

Get help. It often helps to explain your problem to someone else, even if the person you’re talking to has no idea what you’re talking about. Lab assistants and fellow 6.005 students usually do know what you’re talking about, so they’re even better.

Sleep on it. If you’re too tired, you won’t be an effective debugger. Trade latency for efficiency.

Fix the Bug

Once you’ve found the bug and understand its cause, the third step is to devise a fix for it. Avoid the temptation to slap a patch on it and move on. Ask yourself whether the bug was a coding error, like a misspelled variable or interchanged method parameters, or a design error, like an underspecified or insufficient interface. Design errors may suggest that you step back and revisit your design, or at the very least consider all the other clients of the failing interface to see if they suffer from the bug too. Think also whether the bug has any relatives. If I just found a divide-by-zero error here, did I do that anywhere else in the code? Try to make the code safe from future bugs like this. Also consider what effects your fix will have. Will it break any other code?

Finally, after you have applied your fix, add the bug’s test case to your regression test suite, and run all the tests to assure yourself that (a) the bug is fixed, and (b) no new bugs have been introduced.

Summary

In this reading, we looked at how to debug systematically:

- reproduce the bug as a test case, and put it in your regression suite
- find the bug using the scientific method
- fix the bug thoughtfully, not slapdash

Thinking about our three main measures of code quality:

- Safe from bugs. We’re trying to prevent them and get rid of them.
- Easy to understand. Techniques like static typing, final declarations, and assertions are additional documentation of the assumptions in your code. Variable scope minimization makes it easier for a reader to understand how the variable is used, because there’s less code to look at.
- Ready for change. Assertions and static typing document the assumptions in an automatically-checkable way, so that when a future programmer changes the code, accidental violations of those assumptions are detected.
Seeking to understand and transform the world’s energy systems, MIT researchers and students investigate all aspects of energy. They discover new ways of generating and storing energy, as in creating biofuels from plant waste and in holding electricity from renewable sources in cost-effective, high-capacity batteries. They create models and design experiments to determine how we can improve energy efficiency at all scales, from nanostructures and photovoltaic cells to large power plants and smart electrical grids. They analyze how people make decisions about energy, whether as individual consumers or whole nations, and they forecast what the social and environmental consequences of these decisions might be. In fact, the study of energy is so important and so pervasive at MIT that the MIT Energy Initiative has devised an undergraduate Energy Studies Minor, which develops the expertise needed to reshape how the world uses energy. The Energy Studies Minor consists of a core of foundational subjects, complemented by a choice of electives that allow students to tailor their Energy Minor to their particular interests. Many of the Energy Minor subjects are represented on OCW, and listed below. In addition to its core and elective courses, some other energy courses which are not officially part of the Energy Minor program are also listed. The Energy Studies Minor is built on a core of foundational subjects in energy science, economics, social science, and technology/engineering. Energy Minor elective courses allow students to tailor their program to their particular interests. These energy courses on OCW are not officially part of the Energy Minor program, but may be of interest.
Like so many of the big challenges taken on at MIT, environmental and sustainability issues demand an interdisciplinary perspective. From declining fisheries to acute urban pollution to record-breaking global temperatures, the evidence of human impact on the environment continues to mount. And at the same time, the environment shapes us, as human society and institutions are built upon our connection to the weather, land, water, and other species. What can we learn from ecological systems and cycles? What solutions will allow people and the planet to thrive? MIT scholars, students and alumni are working to understand and help us make progress toward a more sustainable and just world. This core mission draws upon all of the fields represented at MIT: not just science, engineering, and technology, but also the humanities, arts, economics, history, architecture, urban planning, management, policy, and more. This OCW course collection is inspired by two interdisciplinary MIT programs. Many of the undergraduate courses fall within the undergraduate Environment and Sustainability Minor managed by MIT’s Environmental Solutions Initiative (ESI); the OCW course list employs the undergraduate minor’s four topic pillars. Many of the graduate-level courses are part of the MIT Sloan School of Management Sustainability Certificate curriculum. Also check out the MIT Climate Portal for many other Creative Commons-licensed open educational resources on climate change, including brief Explainer articles on key topics and the award-winning TILClimate podcast with educator guides.
What is MIT OpenCourseWare? MIT OpenCourseWare is a free and open publication of material from thousands of MIT courses, covering the entire MIT curriculum and used by millions of learners and educators around the world.

Do I have to register or sign up? No enrollment or registration. Freely browse and use OCW materials at your own pace. There's no signup, and no start or end dates.

Why is OCW made free to the public? Knowledge is your reward. Use OCW to guide your own life-long learning, or to teach others. We don't offer credit or certification for using OCW.

Are there any restrictions on use of OCW materials? Made for sharing. Download files for later. Send to friends and colleagues. Modify, remix, and reuse (just remember to cite OCW as the source.)

Please see our Help pages for answers to many common questions about OCW. If you did not find the answer in the OCW help pages, please fill out the form below and a staff member will respond as soon as possible.
Course Meeting Times

Lectures: 2 sessions / week, 1.5 hours / session
Recitations: 1 session / week, 2 hours / session

Course Description

This course gives an introduction to probability and statistics, introducing students to quantitative uncertainty analysis and risk assessment with emphasis on engineering applications. The course focuses on probability theory and its applications, with a smaller module at the end covering basic topics in statistics (parameter estimation, hypothesis testing and regression analysis). The probability modules cover events and their probability, the total probability and Bayes’ theorems, discrete and continuous random variables and vectors, the Bernoulli trial sequence and Poisson process models, conditional distributions, functions of random variables and vectors, statistical moments, second-moment uncertainty propagation and second-moment conditional analysis, and various probability models such as the exponential, gamma, normal, lognormal, uniform, beta and extreme-type distributions. Throughout the subject, concepts are illustrated with examples from various areas of engineering and everyday life.

Prerequisites

None in probability and statistics, but familiarity with elementary linear algebra (vectors, matrices) and calculus (derivatives, integrals) is assumed.

Textbook

Ang, Alfredo H-S., and Wilson H. Tang. Probability Concepts in Engineering: Emphasis on Applications to Civil and Environmental Engineering. 2nd ed. New York, NY: John Wiley & Sons, 2006. ISBN: 9780471720645.

Grading

Group Work

Students are encouraged to discuss all course material with one another except for homework assignments, which should be individually done. Interaction to solve suggested problems is appropriate.

Use of Old Solutions

Students should not seek to obtain previous years’ solutions to homeworks. If they have knowledge of such solutions, they should so indicate for each homework problem. Problems for which students had previous knowledge of the solutions will not be used for grading.
In 1814, a man who has since been called the “Chief Benefactor of Boston” had an idea. It was so stupendous that it was then considered a weird, impossible dream. Yet the train of consequences resulting from that idea has been largely responsible for Boston’s greatness today. One hundred and twelve years ago, Uriah Cotting, on behalf of a group of men who together formed the Boston and Roxbury Mill Corporation, applied to the legislature of Massachusetts for a charter which should empower the Company to build a series of dams connecting Boston, Brookline, and Roxbury; to use these dams as toll roadways, and to develop water power by the tidal flow in and out of the Back Bay. This Bay was so called to distinguish it from the harbor - or Front Bay, and from the South Bay. It was at that time a shallow sheet of water, spotted here and there by marshy islands and flats. Charles Street, Boston Neck, and the Roxbury mainland marked the shore line, but when the tide was unusually low much of the entire expanse was bare. The project of Uriah Cotting and his associates marked the first attempt at development in this area. In spite of much opposition, the legislature granted the charter of the Mill Corporation, slipping it through rather secretly at a session when only fifty members were present. Governor Strong signed the bill, and work was soon begun. The Mill Dam was built from the Common at the foot of Beacon hill to the solid land at Sewall’s Point, now the junction of Brookline and Commonwealth Avenues. It was a toll thoroughfare, today known as Beacon Street. When first opened to travel it formed a new, short way between Boston, the Brighton road, and the Punch Bowl road, which ran westward from the outer end of the dam. It was here that the famous Punch Bowl Tavern was located. The stream of traffic that passes through Governor Square today represents the growth during many years of that which flowed over the old Mill Dam highway.
Connecting the Mill Dam with Gravelly Point - a promontory extending from Roxbury to what is now the corner of Massachusetts and Commonwealth Avenues - was built the Cross Dam. The two enclosed the power company’s receiving basin. In the neighborhood of Gravelly Point, the Roxbury town landing had been located. With the completion of the Cross Dam and the availability of water power, the Point became the center of a manufacturing community. Near here were grist mills, soap and candle works, a fulling mill, a looking glass and a carpet works. Today Gravelly Point is still the center of a business community. Despite the passage of years and a multitude of changes, we see in the Massachusetts Avenue neighborhood the development of the old-time Cross Dam Community. Uriah Cotting did not live to see his project completed. He was succeeded by Loammi Baldwin, who finished, in 1821, the construction of the dams. The area enclosed by them formed a tidal basin which soon became a nuisance, an eyesore, and a menace to the health of the city. The building of railways and dissatisfaction among the mill interests with the available power foretold further development. The public voice began to urge that the flats of the basin be filled in and new land be made, as had already been done along the harbor front. By 1844, two railroads had been laid across the flats - the Boston and Worcester Railroad and the Boston and Providence line. The rails of these lines could be used to transport material - “clean gravel and earth” - easily and cheaply. By making use of the dams as retaining walls, sand dredged from the bed of the Charles River could be used to make land on the flats. Several plans for the development of the district were proposed, and, in 1852, a legislative committee recommended that the district be filled in. 
That the narrow and winding streets of the older city somewhat preyed on the minds of the citizens is realized when it is observed that the new district was to be “laid out in rectangular plots, with wide streets.” Ordinary streets were to be a hundred feet wide between buildings, while the central boulevard - Commonwealth Avenue - was to have the unprecedented width for Boston of two hundred and forty feet! Imagine the meaning of this to those who could hardly even conceive of a street over thirty feet wide. It was the accepted program that the State should pay for the work of filling in the basin, and should be repaid by the sale of the new land. Certain lots were to be set aside for museums, schools, charities, and so forth. Other spaces were to be left for parks and playgrounds, and the balance to be sold for residences. But before actual work could be begun, there was much wrangling and disagreement. The town of Roxbury, perhaps a bit jealous of her larger neighbor, refused at first to disclaim title to the bottom lands within her boundaries. The powerful water power company held out for better terms. Petty bickerings and politics delayed operations several years, but finally, in 1859, the Back Bay was attacked with sand, gravel, and earth. Progress was slow. By 1874, the dry land extended only as far out as Gloucester Street. As the filled area increased in size, building went on apace. The fashionable families began to desert the South End for new and magnificent homes along the wide, parked streets of the newly made section. The name Back Bay, instead of an epithet applied to a sheet of shallow water and mud flats, became a synonym for fashion and culture. Beacon Street - the former Mill Dam - was soon lined with the now familiar brown stone homes. The parallel streets were given the names that in the early days had been given to parts of Washington Street - Newbury and Marlborough. 
Commonwealth Avenue soon began to take on an atmosphere of luxury a bit above its neighbors. Clubs and hotels appeared - and the back bay was Back Bay. The finishing touches at the Fenway were added between 1882 and 1885, marking the first step in the famous Metropolitan Park system of Boston. For a decade or more the pressure in the congested downtown section of Boston has caused many to seek a business home in the wider spaces of the newer city. Very gradually trade has crept westward into Back Bay. New business centers have grown up in the neighborhood. The modern apartment house has, in many cases, displaced the residence of the ‘70’s. Recognizing the trend of the times, the Old Colony Trust Company has established a new office to serve this growing neighborhood. This office occupies the ground floor of a fine new building at the corner of Massachusetts and Commonwealth Avenues, on the exact spot where the old Cross Dam joined the mainland at Gravelly Point, the only bit of lower Commonwealth Avenue that is not man-made land.
common_crawl_ocw.mit.edu_41
You have the option of proposing your own application of interest for the project. To do so, you will first need to write a short one-page proposal: - Briefly describe the application in mind and the model you want to use (this could be a reference to a paper on the topic from which you want to use the model). - Specify the dataset you want to work with (with the link to the dataset). - Discuss what you want to do with the data. You should email your proposal to both the Teaching Assistant and instructor by Lecture 18. Be sure to specify your research question (Part II) and your proposed approach (Part III). Don’t worry about the bumps that you may face by going with your own project; we will help you get it done. If the findings are novel and interesting, they can result in a publication, and you can continue working on it as a paid job at the MIT Institute for Data, Systems, and Society (IDSS) over the summer. Here are some papers (and ideas based on them) that may help you come up with an idea for your proposal: - Battiston et al. (2012) DebtRank: Too Central to Fail? Financial Networks, the FED and Systemic Risk. Analyze inter-bank connections to understand how the network (for example, distribution of degrees, centralities, …) changes with the state of the economy (e.g., in a volatile economy like 2008 versus a more stable economy in the years before). Consider the systemic risk in the network, such as how defaults would spread from bank to bank. Does it depend on the centrality of the defaulting banks or the underlying network structure? A good database for this idea is provided by the International Monetary Fund website. - Akbarpour et al. (2018) Just a Few Seeds More: Value of Network Information for Diffusion (PDF). Given some novel information and a social network, which individuals would you inform in order to maximize its spread if you can only contact a few people? 
First define an underlying model of diffusion for a network and then compare several seeding strategies for different realistic networks. Compare their performance and its dependence on the underlying network structure and diffusion model. Does it make sense to use centrality-based measures, or to exploit the friendship paradox? How do these strategies compare to random seeding? - Shah and Zaman (2011) Rumors in a Network: Who’s the Culprit? (PDF - 1.2MB). Consider a rumor that originates at a single node and spreads through a network. If you only observe the end result of the spread (which nodes end up hearing the rumor), can you come up with an algorithm to determine which node initiated the rumor? This paper proposes an algorithm based on what they call rumor centrality. See if you can apply it to some realistic or simulated networks. Propose alternative algorithms and test their performance. You can also look at the datasets available at the following databases to come up with an idea: - The Colorado Index of Complex Networks (ICON) - The Koblenz Network Collection (KONECT) - Stanford Large Network Dataset Collection Timeline Each individual (or group if you are working with someone else) will have to write and submit a report along with their Python code.1 The report should describe the motivation of your project, the network dataset, the descriptive information from Part I, the research question, model and thesis from Part II, and finally your approach and results from Part III. The Part I descriptive write-up will be due before the final report, in order to make sure everyone is on track. You will also have to give a 15-minute presentation on your project during the last week of class. There will be the following deadlines. - Lecture 21: Part I descriptive write-up. - Lecture 25: Oral Presentation (15 minutes). - One week after Lecture 25: Final project report (including all code). 1 You may use other programming languages, though Python is preferred.
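As a starting point for the seeding comparison described in the Akbarpour et al. idea, here is a minimal sketch in Python (the course's preferred language). Everything in it is an illustrative assumption rather than part of the assignment: it uses the independent cascade model as the diffusion process, a toy hub-and-spoke graph in place of a realistic network, an arbitrary propagation probability p, and Monte Carlo averaging to compare degree-based seeding against random seeding.

```python
import random
from collections import deque

def independent_cascade(adj, seeds, p, rng):
    """One run of the independent cascade model.

    adj: dict mapping node -> list of neighbor nodes
    seeds: initially informed nodes
    p: probability that an informed node informs each uninformed neighbor (one try each)
    Returns the set of nodes that end up informed.
    """
    informed = set(seeds)
    frontier = deque(seeds)
    while frontier:
        node = frontier.popleft()
        for nbr in adj[node]:
            if nbr not in informed and rng.random() < p:
                informed.add(nbr)
                frontier.append(nbr)
    return informed

def average_spread(adj, seeds, p, runs=500, seed=0):
    """Monte Carlo estimate of the expected number of informed nodes."""
    rng = random.Random(seed)
    return sum(len(independent_cascade(adj, seeds, p, rng)) for _ in range(runs)) / runs

# Toy graph: node 0 is a hub with spokes 1-9, plus a chain 10-11-12 hanging off node 1.
edges = [(0, i) for i in range(1, 10)] + [(1, 10), (10, 11), (11, 12)]
adj = {n: [] for n in range(13)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

k = 2  # seeding budget: how many nodes we may contact
rng = random.Random(42)
random_seeds = rng.sample(sorted(adj), k)
degree_seeds = sorted(adj, key=lambda n: len(adj[n]), reverse=True)[:k]

print("random seeding :", average_spread(adj, random_seeds, p=0.3))
print("degree seeding :", average_spread(adj, degree_seeds, p=0.3))
```

Because the toy graph has a dominant hub, degree-based seeding will usually reach more nodes here than a random pick; on realistic networks the size of that gap depends on the degree distribution and the diffusion model, which is exactly the comparison the project idea asks you to carry out.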
common_crawl_ocw.mit.edu_42
Course Description This subject provides an introduction to the mechanics of materials and structures. You will be introduced to and become familiar with all relevant physical properties and fundamental laws governing the behavior of materials and structures, and you will learn how to solve a variety of problems of interest to civil and environmental engineers. While there will be a chance for you to put your mathematical skills obtained in 18.01, 18.02, and eventually 18.03 to use in this subject, the emphasis is on the physical understanding of why a material or structure behaves the way it does in the engineering design of materials and structures. Course Info Learning Resource Types: Problem Sets, Lecture Notes
common_crawl_ocw.mit.edu_43
Course Description 1.050 is a sophomore-level engineering mechanics course, commonly labelled “Statics and Strength of Materials” or “Solid Mechanics I.” This course introduces students to the fundamental principles and methods of structural mechanics. Topics covered include: static equilibrium, force resultants, support conditions, analysis of determinate planar structures (beams, trusses, frames), stresses and strains in structural elements, states of stress (shear, bending, torsion), statically indeterminate systems, displacements and deformations, introduction to matrix methods, elastic stability, and approximate methods. Design exercises are used to encourage creative student initiative and systems thinking. Course Info Learning Resource Types: Simulations, Course Introduction, Activity Assignments, Problem Sets with Solutions, Design Assignments
common_crawl_ocw.mit.edu_44
Scope and Background This is your third project and substantially different from the first two. It is actually a large scale planning project and as such, it is open ended. Its complexity requires that the design be done by a team with each team member concentrating on a specialty. You should follow the design process as given to you in the lectures. The intent of this project is to have you define the constraints (boundary conditions) as part of the problem formulation. While it may be unusual to have such a “wide open” project, it is a good way to start with any planning or design project to assume very relaxed boundary conditions. This will give you the freedom to come up with innovative solutions even when you add constraints. In this particular exercise you will not only have the opportunity to plan and design a different “Back Bay” but will also be able to compare what was actually done with what one (you!!) could have done. Imagine Boston in the topographical shape of the late 18th century! When the first settlers founded Boston, its topography looked much different from what it looks like now. At that time Boston was a pear-shaped peninsula, which was connected to the mainland only through a narrow neck. The peninsula was bordered by large tidal flats and had many inlets and coves. One of these, North Cove, was cut off by a mill dam/causeway as early as 1640. On a small scale, the shoreline was changed more or less continuously from that time onwards. However, changes on a larger scale did not occur until the beginning of the 19th century. Design Task You are charged with the development of the Back Bay area based on the topography as it existed around 1800 but using modern construction technology and satisfying present-day requirements. You have to create a mixed residential-commercial area with 50,000 inhabitants. You are completely free in your choice of buildings, access (transportation), utilities, foundations, and so on. 
The result of your work should be in the form of a rough plan indicating residential/commercial zones, major streets, and other access. In an accompanying report (max. 4 pages, double-spaced) you should explain how your design satisfies the boundary conditions and how it addresses structural, aesthetic, and environmental concerns. You should also indicate whether what you propose will entail medium or high costs. More specific comments on the deliverables are given on the next page. Very important: Do not assume that what has been done, i.e. the present Back Bay, is the best solution! Back Bay Pre-Handout (PDF - 1.4 MB)
common_crawl_ocw.mit.edu_45
Course Meeting Times Lectures: 1 session / week, 1 hour / session Labs: 1 session / week, 3 hours / session Subject Description In Sophomore Design, 1.101, you will be challenged with three design tasks: a first concerning water resources/treatment, a second concerning structural design, and a third focusing on the conceptual (re)design of a large system, Boston’s Back Bay. The first two tasks require the design, fabrication, and testing of hardware. Several laboratory experiments will be carried out and lectures will be presented to introduce students to the conceptual and experimental basis for design in both domains. Course Conduct All students are expected to attend the Tuesday lectures. The class will be split into two sections, A and B, for the laboratory work. Section A will meet on Wednesday afternoon, B on Thursday afternoon. Students will work in groups of 3 to 5 on all laboratory tasks. Groups will be defined by the faculty in charge but with an ear open to requests for specific partners. Group composition will vary from one task to the next. The most successful experiments in science and engineering are those in which you know what the outcome will be before you start. Indeed, you cannot design an experiment without knowing something about the range of possible values of variables and parameters, a safe loading of the structure, an instrument’s sensitivity to some external disturbance, and the like. So the procedures for each experiment are to be read before the start of lab, even though these can only sketch out what needs to be done to effect a measurement. Certain constraints are printed in bold within these descriptions. These constraints are to be strictly observed. In part this is for safety reasons, in part because we do not want to break something that is not supposed to be broken, overload an instrument, or flood the basement. If you are not sure, ask your lab instructor. Each lab session will begin with an orientation. 
The faculty in charge will respond to questions and convey essential, often tacit, knowledge regarding the smooth conduct of the experiment or ways to carry through a design/fabrication/testing task. The lab instructor, TA, and technical instructor will be available throughout the lab session to provoke your thinking in response to questions that you pose. Course Requirements You are required to keep a lab notebook. You are to use ink; make no erasures. Draw a line through that text which you find faulty or erroneous. Sketches of apparatus may be made in pencil. Make sure you record all relevant dimensions (don’t neglect to record the units), variables, settings, materials, and information that will enable you to write the report without coming back to the lab to check up on the value of a critical parameter. Reserve a section of your lab notebook for documenting your design tasks. While we will periodically review your notebooks, these are primarily for your reading, not ours. They should provide you with a record to draw upon in writing up the reports, which are primarily for our reading and evaluation. You will prepare, as individuals, a one- to two-page report on each experiment done in the lab. Ordinarily you will have time to complete this report and hand it in before the end of the lab session. Details about the content and format of these will be prescribed at the start of the lab. You will prepare and submit, as a group, a report on each of the three design tasks. Details, again, will be defined at the introduction to the design task. Written reports will be judged on their clarity and legibility as well as their technical content. Reports are to be submitted as hard copy unless otherwise indicated. The use of appropriate software applications is encouraged if such use promotes clarity and legibility. Groups will be called upon to describe and review their design efforts to the class as a whole at the times noted on the syllabus. 
Individually, you are required to develop materials, abstracted from each of your three design reports, for a portfolio. There will be occasional, short reading assignments. Safety Attention to safety is absolutely required of all participants. Key elements of shop safety include knowledge of proper machine operation, full attention to the task, and use of protective equipment. Proper shop housekeeping, including maintaining an organized workspace and cleaning up after yourself, is essential. See and comply with the Safety Instructions as well as instructor directions (you will be asked to acknowledge that you have received and read this document). Any person deemed to be working in a manner that presents a hazard to themselves or others may be removed from the course at the discretion of the instructors. Tools and supplies: Kits of hand tools will be signed out to each team, which will be responsible for their security and their return. Supplies will also be provided for prototyping. Students who wish to use tools and materials other than those supplied should speak with an instructor first. CEE Shop Protocol (PDF) Hand and Power Tool Safety (PDF) Grading Grades will be based both on participation and on the design and laboratory reports. Participation comprises attendance, attitude, and industry in class, as well as the keeping of a clear and accurate project notebook. Overall weighting will be as follows: There is no final exam.
common_crawl_ocw.mit.edu_46
Course Description This is a foundation subject in modern software development techniques for engineering and information technology. The design and development of component-based software (using C# and .NET) is covered; data structures and algorithms for modeling, analysis, and visualization; basic problem-solving techniques; web services; and the management and maintenance of software. It includes a treatment of topics such as sorting and searching algorithms and numerical simulation techniques, and serves as a foundation for in-depth exploration of image processing, computational geometry, finite element methods, network methods, and e-business applications. This course is a core requirement for the Information Technology M.Eng. program. This class was also offered in Course 13 (Department of Ocean Engineering) as 13.470J. In 2005, ocean engineering subjects became part of Course 2 (Department of Mechanical Engineering), and the 13.470J designation was dropped in favor of 2.159J.
common_crawl_ocw.mit.edu_47