http://tex.stackexchange.com/questions/84966/dvipdfm-horizontally-crops-image?answertab=votes

# dvipdfm horizontally crops image
On Windows 7 64-bit with MiKTeX 2.9, latex compiles my document correctly and the DVI shows all images correctly. Then dvipdfm produces a correct PDF except for one image that is cropped horizontally. To reproduce, download the example and then:
latex test.tex
yap test.dvi # correct
dvipdfm test.dvi
AcroRd32.exe test.pdf # cropped
Question: how can I prevent this behavior?
Welcome to tex.sx! Some users are paranoid and would prefer not to download suspicious .zip files. If possible, you should reduce your .tex file to a minimal example showing the problem, and then include it in your question as a code snippet. – T. Verron Nov 30 '12 at 13:45
## 1 Answer
The conversion from PostScript to PDF is configured in dvipdfmx.cfg:
%% Ghostscript (PS-to-PDF and PDF-to-PDF):
%%
%% ps2pdf is a front-end to gs. For a complete list of options, see
%% http://ghostscript.com/doc/current/Ps2pdf.htm#Options
%%
%% By default, gs encodes all images contained in a PS file using
%% the lossy DCT (i.e., JPEG) filter. This often leads to inferior
%% result (see the discussion at http://electron.mit.edu/~gsteele/pdf/).
%% The "-dAutoFilterXXXImages" and "-dXXXImageFilter" options used
%% below force all images to be encoded with the lossless Flate (zlib,
%% same as PNG) filter. Note that if the PS file already contains DCT
%% encoded images (which is possible in PS level 2), then these images
%% will also be re-encoded using Flate. To turn the conversion off,
%% simply remove the options mentioned above.
%%
%% Also note that PAPERSIZE=a0 is specified below. This converts PS
%% files (including EPS) to A0 papersize PDF. This is necessary to
%% prevent gs from clipping PS figure at some papersize. (A0 above
%% simply means large size paper.) If you have figures even larger
%% than A0, and their llx=lly=0, you can use "-dEPSCrop" instead of
%% "-sPAPERSIZE=a0"
%%
%% In TeX Live, we use the rungs wrapper instead of ps2pdf, because we
%% must omit the -dSAFER which ps2pdf specifies: in order for pstricks
%% to work with xetex,
%% /usr/local/texlive/*/texmf-dist/dvips/pstricks/pstricks.pro (for
%% example) needs to be accessed. (Also, it is better to use our
%% supplied gs on Windows.) You can also add -dNOSAFER to the ps2pdf
%% command line.
%%
%% Incidentally, especially in TL, more than one dvipdfmx.cfg may be
%% extant. You can find the one that is active by running:
%% kpsewhich -progname=dvipdfmx -format='other text files' dvipdfmx.cfg
%%
D "rungs -q -dNOPAUSE -dBATCH -sPAPERSIZE=a0 -sDEVICE=pdfwrite -dCompatibilityLevel=%v -dAutoFilterGrayImages=false -dGrayImageFilter=/FlateEncode -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode -sOutputFile='%o' '%i' -c quit"
figure02.eps has a size of 3854 bp x 1882 bp, which is larger than A0 (841 mm x 1189 mm).
Thus you can either reconfigure dvipdfmx with -dEPSCrop or convert the file manually to PDF:
ps2pdf -dEPSCrop figure02.eps
And run ebb to get a bounding box data file for driver dvipdfmx of LaTeX's graphics package:
ebb -x figure02.pdf
Then the image can be included as PDF file:
\documentclass{article}
\usepackage[dvipdfmx]{graphicx}
\begin{document}
\includegraphics[width=0.9\columnwidth,keepaspectratio=true]{figure02.pdf}
\end{document}
This also speeds up the compilation, because the EPS figure does not need to be converted each time dvipdfmx runs.
http://math.stackexchange.com/questions/148905/how-to-calculate-the-function-of-an-interval

# How to Calculate the function of an interval?
I need to calculate $f((1,4])$ for the function $$f(x)=x^2-4x+3.$$
The answers I can choose from are:
a) [0,3] b) [-1,0) c) (0,3] d) [-1,3] e) (-1,0) f) (0,3)
Can someone guide me? It may be something simple but I don't know how to proceed. Thank you very much!
It helps to make a drawing – Egbert May 23 '12 at 18:44
As Egbert said you can draw a picture and look at the $y$ axis and see which points on it have a corresponding $x$ value in that interval: wolframalpha.com/input/?i=y%3Dx^2%E2%88%924x%2B3%2C+x%3D1+to+4 – Keivan May 23 '12 at 18:51
We want to get a good grasp of $f(x)$. One way I would recommend is to draw the graph $y=f(x)$. (If necessary, you might have some software do the drawing, but don't necessarily trust the result.) Regrettably, I will have to do things without a picture.
By completing the square, we see that $f(x)=(x-2)^2-4+3=(x-2)^2-1$. So the curve $y=f(x)$ is a parabola. Now we can trace out $f(x)$ as $x$ travels from $1$ to $4$.
At $x=1$ (which is not in the interval $(1,4]$), we have $f(x)=0$. Then as $x$ travels from $1$ to $2$, $f(x)$ decreases, until it reaches $-1$ at $x=2$. So the vertex of the parabola is at $(2,-1)$. Then, as $x$ increases from $2$ to $4$, $(x-2)^2-1$ increases from $-1$ to $3$.
So all values from $-1$ to $3$, inclusive, are taken on by $f(x)$, as $x$ travels over the interval $(1,4]$. The answer is therefore $[-1,3]$.
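The reasoning can also be checked numerically. The sketch below (my illustration, not part of the original answer) samples f densely over (1,4] and inspects the extremes:

```python
# Numerically check the image of f(x) = x^2 - 4x + 3 over the interval (1, 4].
# The claim: the set of values taken on is [-1, 3].

def f(x):
    return x**2 - 4*x + 3

# Dense samples just inside (1, 4]; x = 1 itself is excluded from the interval.
n = 100_000
xs = [1 + 3 * k / n for k in range(1, n + 1)]
values = [f(x) for x in xs]

print(min(values))  # approaches -1 (attained at the vertex x = 2)
print(max(values))  # 3.0 (attained at x = 4)
```

Note that even though x = 1 is excluded, the value f(1) = 0 is still attained elsewhere in the interval (at x = 3), which is why the image is the closed interval [-1, 3].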
Thank you so much! Been having trouble with this type of exercise :)! – Grozav Alex Ioan May 23 '12 at 19:04
Note that $f(x)$ can be factorized as $(x-3)(x-1)$, so that the zeros are at 3 and 1. $f$ is negative between 1 and 3. The minimum is at 2, at which the value is -1. Now you should be able to draw the parabola which is the graph of $f$ and find the maximum of $f$ in $[1,4]$.
The graph of your function $f$ is a parabola that opens up. Its vertex has $x$-coordinate $x={-(-4)\over 2\cdot 1}=2$ (the vertex of the graph of $y=ax^2+bx+c$ has $x$-coordinate $-b\over 2a$). So, evaluate $f(2)$ (this gives the minimum value over $(1,4]$), $f(1)$ and $f(4)$. From those values you can determine $f((1,4])$.
You can save even more time by exploiting symmetry: since the line through the vertex of a parabola is a line of symmetry, the maximum value of $f$ over $(1,4]$ is $f(4)$ ($2$ is closer to $1$ than to $4$).
https://brilliant.org/practice/fermats-little-theorem/
## Basic Applications of Modular Arithmetic
Solve integer equations, determine remainders of powers, and much more with the power of Modular Arithmetic.
# Fermat's Little Theorem
Fermat's little theorem states that if $$a$$ and $$p$$ are coprime positive integers, with $$p$$ prime, then $$a^{p-1} \bmod p = 1$$.
Which of the following congruences satisfies the conditions of this theorem?
$\begin{eqnarray} 1^4 \bmod 5 &=& 1 \\ 2^4 \bmod 5 &=& 1 \\ 3^4 \bmod 5 &=& 1 \\ 4^4 \bmod 5 &=& 1 \end{eqnarray}$
We are given that the 4 congruences above are true. Is the following congruence true as well?
$5^4 \bmod 5 = 1$
True or false?
$42^6 \bmod 7 = 1.$
What is the remainder when $$3^{456}$$ is divided by 7?
$\large 32^{23} \pmod {23} =\, ?$
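These quiz items can be checked with Python's three-argument `pow`, which performs modular exponentiation; the snippet is an illustration, not part of the original problem set:

```python
# Fermat's little theorem: a^(p-1) mod p = 1 when p is prime and gcd(a, p) = 1.

# The four congruences with p = 5 all hold:
for a in (1, 2, 3, 4):
    print(a, pow(a, 4, 5))  # each prints 1

# 5 and 5 are not coprime, so the theorem does not apply: 5^4 mod 5 = 0.
print(pow(5, 4, 5))

# 42 is divisible by 7, so 42^6 mod 7 = 0, not 1.
print(pow(42, 6, 7))

# 3^456 mod 7: since 3^6 = 1 (mod 7) and 456 = 6 * 76, the remainder is 1.
print(pow(3, 456, 7))

# 32^23 mod 23: by the related form a^p = a (mod p), 32^23 = 32 = 9 (mod 23).
print(pow(32, 23, 23))
```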
https://gowanusballroom.com/can-there-be-induced-emf-in-an-open-loop/

## Can there be induced emf in an open loop?
An emf will never be created because it is an open circuit.
Why is the emf in any closed loop zero?
If you move your rectangular loop in a constant magnetic field, you will have zero EMF. Each rod of the rectangle would have an EMF if it were moving independently in the magnetic field. Any closed conducting loop will never have an induced current when moving in a uniform magnetic field.
### How is emf induced in a magnetic field?
An emf is induced in the coil when a bar magnet is pushed in and out of it. Emfs of opposite signs are produced by motion in opposite directions, and the emfs are also reversed by reversing poles. The same results are produced if the coil is moved rather than the magnet—it is the relative motion that is important.
What is the formula of induced emf?
An emf induced by motion relative to a magnetic field is called a motional emf. This is represented by the equation emf = LvB, where L is length of the object moving at speed v relative to the strength of the magnetic field B.
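As a quick numerical illustration of emf = LvB (the values below are made-up assumptions, not from the page):

```python
# Motional emf: emf = L * v * B, as given in the text.
# The numbers are illustrative assumptions only.

L = 0.5   # length of the conductor, in meters
v = 2.0   # speed relative to the field, in m/s
B = 0.1   # magnetic field strength, in tesla

emf = L * v * B
print(emf)  # 0.1 volts
```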
## How do I calculate emf?
If we know the resulting energy and the amount of charge passing through the cell, this is the simplest way to calculate the EMF.

The Formula for Calculating the EMF: ε = E/Q, where

ε : electromotive force
E : the energy in the circuit
Q : the charge of the circuit
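A minimal numerical illustration of ε = E/Q (the values are assumptions for illustration only):

```python
# EMF from energy and charge: emf = E / Q.

E = 12.0   # joules supplied by the cell (assumed value)
Q = 4.0    # coulombs of charge moved through the cell (assumed value)

emf = E / Q
print(emf)  # 3.0 volts
```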
What is open loop in physics?
A control system in which an input alters the output, but the output has no feedback loop and therefore no effect on the input.
### Which is true about the emf induced in a loop by a magnet?
Which is true about the emf induced in a loop by a magnet? It is equal to the inverse of the rate at which the magnetic flux through the loop changes. It is equal to the magnetic flux through the loop.
Which law gives the direction of induced emf?
Lenz's law
According to Lenz's law, the direction of induced emf or current in a circuit is such as to oppose the cause that produces it. Faraday's laws of electromagnetic induction give the magnitude of the emf.
## How do you find the direction of an induced emf?
Stretch the forefinger, middle finger and the thumb of the right hand mutually perpendicular to each other. If the forefinger points in the direction of the magnetic field and the thumb gives the direction of the motion of the conductor, then the middle finger gives the direction of the induced current.
### What is the difference between induced EMF and motional EMF?
You get an induced emf whenever there's a changing magnetic flux through a loop. If the changing emf is due to some kind of motion of a conductor in a magnetic field, you would call it a "motional emf". For example, if a loop moves into or out of a region of field, or rotates, or a bar rolls along a rail, you'd get a "motional" induced emf.
What are the factors on which induced EMF depends?
– Number of turns in the coil (N)
– Face area of coil (A)
– Strength of magnetic field (B)
– Angular velocity of the coil (ω)
## What is the cause of induced EMF?
The most basic cause of an induced EMF is a change in magnetic flux. This can happen for two reasons:
1. Placing the electric conductor in the presence of a changing magnetic field. As the EMF generated depends on either a small change in area vector or magnetic field, this will cause an induction of EMF.
2.
https://fr.scribd.com/document/87297652/Control-Systems-Introductions
# Introduction to Feedback Control Systems
Lecture by D Robinson on Feedback Control Systems for Biomedical Engineering, Fall 1994 Re-Edited by T Haslwanter
1. INTRODUCTION
  1.1. Block Diagrams
  1.2. What Good is Negative feedback?
2. REMINDER & A LITTLE HISTORY
  2.1. Sine Waves
  2.2. Bode Diagram
  2.3. Fourier Series
  2.4. Superposition
  2.5. Fourier Transforms
  2.6. Laplace Transforms
  2.7. Cook-Book Example
  2.8. Mechanical Systems
  2.9. Membrane Transport
3. FEEDBACK AND DYNAMICS
  3.1. Static Feedback
  3.2. Feedback with dynamics
  3.3. Oscillations
  3.4. Delays: A Threat to Stability
4. SUMMARY

LITERATURE
## 1. Introduction
Your body is jammed full of feedback control systems. A few examples of things so controlled are: Blood pressure; blood volume; body temperature; circulating glucose levels (by insulin); blood partial pressures of carbon dioxide and oxygen (PCO2 , PO2 ); pH; hormone levels; thousands of proteins that keep cells alive and functioning. Without these control systems, life would not be possible. These are just a few examples. Open a physiology textbook at random and you will be reading about some part of some control system. (As a challenge, try to think of a process that is not under feedback control.) It would be impossible to understand biological systems without using systems analysis and understanding negative feedback.
## 1.1. Block Diagrams
The first step in analyzing any system is to lay it out in a block diagram. Here is an example: The Baroreceptor Reflex:
Figure 1
Variables or signals (e.g., HR, F, P) flow along the lines from box to box. The boxes or blocks represent physical processes (a muscle, a gland, a nerve, an electric circuit, a motor, your furnace, etc.) that receives an input signal, does something to it, and produces an output signal. Conventionally, signals along the main path flow from left to right so at the right is the final output: the variable being controlled is, in this case, arterial blood pressure, P. On the left is the input, desired blood pressure, Po, that drives the whole system.
CS: Blood pressure, P, is sensed by the brain by ingenious transducers called baroreceptors located in the carotid sinus (CS). These are nerve cells which discharge with action potentials all the time. When P goes up, the walls of the CS are stretched, depolarizing the branches of these nerve cells so they discharge at a faster rate, Rb. Thus Rb reports to the brain what P is. Obviously if one is to control anything, you first have to measure it. Medulla: This is the part of the brainstem to which Rb is sent. Its cells have a built-in discharge rate we call Po since it represents the desired blood pressure. The total system wants to keep P equal to Po (e.g.,
100 mm Hg). Po is called a setpoint. These cells are represented by a summing junction (sj), which shows that their discharge rate Ra (a for autonomic nervous system) is the difference Po − Rb. Since Rb represents P, Ra reflects Po − P. If P ≠ Po, there is an error, so Ra is an error signal.
SA: A specialized patch in the heart called the sino-atrial node is your pacemaker. It is a clock that initiates each heartbeat. The autonomic nervous system (Ra) can make the heart rate, HR, speed up or slow down. LV: The left ventricle is a pump. With each beat it ejects a volume of blood called the stroke volume. The output flow F is a product: stroke volume [ml/beat] x HR [beats/sec] = F [ml/sec]. R: The resistance of the vascular beds (arteries, capillaries, veins) relates pressure to flow. P=RF is the Ohm's law of hydraulics (E=RI).
D: If nothing ever went wrong, P would always equal Po and there would be no need for feedback. But that is not the case. There are many things that will disturb (D) P. Exercise is the obvious example; it causes blood vessels in muscle to open up, which would cause P to drop to dangerous levels. Thus D represents anything that would change P.
In words - if P drops for some reason, Rb decreases, Ra increases as does HR and F, thus increasing P back towards its original value. This is an obvious example of how negative feedback works and why it is useful. Obviously, one must know the anatomy of a system, but an anatomical diagram is not a substitute for a block diagram, because it does not show the variables which are essential for analysis. Most other biological control systems can be laid out in the form of Fig. 1 for further analysis. What follows is how to do that analysis and what it tells us. A basic concept for each block is the ratio of the output signal to the input, which is loosely called the gain. If, for example, HR doubles, so will F. Consequently, F/HR is more or less a constant, at least in steady state, and is called the gain of that box. The gain of an electronic amplifier is (output voltage)/(input voltage) and is dimensionless, but that's a special case. In general the gains of boxes can be anything. The gain of CS in Fig. 1 is (spikes /sec)/mmHg. The gain of SA is (beats/sec)/(spikes/sec) and so on. We will go into this in more detail later but get used to the general idea: gain = output/input.
## 1.2. What Good is Negative feedback?
Mother Nature discovered negative feedback millions of years ago and obviously found it to be GOOD (we discovered it in the 1930s in the labs of Ma Bell). So what's so good about it? Let's take Fig. 1 and reduce it to its bare, universal form. Let the gains of SA, LV and R be multiplied and call the net result G. Let the feedback gain Rb/P be 1.0 for this example.
So a reduced version of Fig. 1 is:
where
- C is a command input
- R is a response or output
- D is a disturbance
- E is the error, C − R
- G is the net forward gain, sometimes called the open loop gain because if you removed the feedback path (which, with no block, has an implied gain of 1.0), the open loop gain R/C would be G, usually much larger than 1.0.
Figure 2
We want to find the output in terms of the inputs C and D. On the left, E = C − R; on the right, R = D + GE. Eliminating E:

$$R = D + G(C - R)$$

or

$$R(1 + G) = D + GC$$

or

$$R = \frac{1}{1+G}\,D + \frac{G}{1+G}\,C \qquad (1)$$
This equation says it all (it's recommended you memorize it). Any disturbance D is attenuated by 1/(1 + G). If G is 100, then only 1% of D is allowed to affect R. Even if G is only 10, then only 10% of D gets through, which is a lot better than no protection at all. In words, if D should cause R to decrease, an error E is immediately created. That is amplified by G to create an output of G opposite and nearly equal to D, thereby returning R close to its original value. (This protection is so important for living systems, Mother Nature adopted it when we were just single cell animals floating in the sea and has used it ever since.) But external disturbances are not the only source of problems. Suppose the parameters of the system change. Consider a system that does not use negative feedback:
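Equation (1) is easy to explore numerically. The sketch below (not from the lecture; values chosen for illustration) plugs in numbers to show how a disturbance is attenuated by 1/(1 + G):

```python
# Closed-loop response from eq (1): R = D/(1+G) + G*C/(1+G).

def closed_loop(C, D, G):
    return D / (1 + G) + G * C / (1 + G)

C = 1.0   # command (desired output)
D = 0.5   # disturbance

# With G = 100, only ~1% of the disturbance reaches the output,
# and R stays close to the commanded value.
print(closed_loop(C, D, 100))    # ~0.995

# With no feedback (R = D + C for a unity-gain system), the
# disturbance passes through in full.
print(D + C)                     # 1.5

# Robustness to parameter changes: a 50% drop in G barely moves
# the closed-loop gain R/C (take D = 0).
print(closed_loop(C, 0.0, 100))  # ~0.990
print(closed_loop(C, 0.0, 50))   # ~0.980
```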
R = D + AC
To make this comparable to Fig. 2, let the nominal value of A be 1.0.
Figure 2a
As we already can see, R is affected 100% by D. There is no protection against external disturbances. But what if A gets sick and its gain dropped by 50%? Then the output drops by 50%. No protection against changes in parameters (in Fig. 2a, the gain A is the only parameter, in any real system there would be many more parameters such as the R's, L's and C's of an electric circuit).
What happens with feedback? What happens if G changes in Fig. 2? From eq (1) (let D = 0 for now)
$$\frac{R}{C} = \frac{G}{1+G}$$
If G has a nominal value of 100, R/C = 100/101 = 0.99, close to the desired closed-loop gain of 1.00. Now let G drop by 50%. Then R/C = 50/51 = 0.98. So a 50% drop in G caused only a 1% drop in closed-loop performance, R/C. Even if G dropped by a factor of 10, to a gain of 10, still R/C = 10/11 = 0.91, an 8% decrease. In general, so long as G> 1.0, then G /(1 + G ) will be close to 1.0. To repeat
$$R = \frac{1}{1+G}\,D + \frac{G}{1+G}\,C$$
Feedback protects the system against changes in internal parameters.
This is why Mother Nature uses NEGATIVE FEEDBACK.

Summary
1. You can't study biological systems without systems analysis.
2. The first thing to do is draw the block diagram showing the negative feedback path explicitly.
3. And now you know why negative feedback is used again and again in biological (or any other) systems.

So far, we have not needed to worry about dynamics (differential equations), just algebra, because that's all you need for the main message. But feedback greatly affects the dynamics of a system. For example, feedback can cause a system to oscillate. So from here on, we study system dynamics.
## 2. Reminder & A Little History
You have probably already learned about Laplace transforms and Bode diagrams. These tools are essential for systems analysis so we review what is relevant here, with some historical comments thrown in to show you where Laplace transforms came from. You know that, given any time course f(t), its Laplace transform F(s) is,
$$F(s) = \int_0^\infty f(t)\,e^{-st}\,dt \qquad (2a)$$
while its inverse is,

$$f(t) = \int_S F(s)\,e^{st}\,ds \qquad (2b)$$
where the integration is over the s-plane, S. From this you should recall a variety of input transforms and their output inverses. If, for example, the input was a unit impulse function, $f(t) = \delta(t)$, then $F(s) = 1$. If an output function F(s) in Laplace transform was $\frac{A}{s+\alpha}$, then its inverse, back in the time domain, was $f(t) = A e^{-\alpha t}$.
Using these transforms and their inverses can allow you to calculate the output of a system for any input, but without knowing where eqs 2 came from, it is called the cook book method. I have a hard time telling anyone to use a method without knowing where it came from. So the following is a brief historical tale of what led up to the Laplace transform - how it came to be. You are not responsible for the historical developments that follow but you are expected to know the methods of Laplace transforms.
## 2.1. Sine Waves
Sine waves almost never occur in biological systems but physiologists still use them experimentally. This is mostly because they wish to emulate systems engineering and use its trappings. But why do systems engineers use sine waves? They got started because the rotating machinery that generated electricity in the late 19th century produced sine waves. This is still what you get from your household outlet. This led, in the early 20th century, to methods of analyzing circuits with impedances due to resistors (R), capacitors (1/jωC) and inductors (jωL) - all based on sine waves.¹ These methods showed one big advantage of thinking in terms of sine waves: it is very easy to describe what a linear system does to a sine wave input. The output is another sine wave of the same frequency, with a different amplitude and a phase shift. The ratio of the output amplitude to the input amplitude is the gain. Consequently, the transfer function of a system, at a given frequency, can be described by just two numbers: gain and phase.
¹ $\sqrt{-1} = j$. In other fields it is often denoted by i.
Example:
Figure 3
Elementary circuit analysis tells us that the ratio of the output to the input sine wave is
$$G(j\omega) = \frac{V_2(j\omega)}{V_1(j\omega)} = \frac{1/j\omega C}{R + 1/j\omega C} = \frac{1}{j\omega RC + 1} \qquad (3)$$
(If this isn't clear, you had better check out an introduction to electronic circuits). Note we are using complex notation and what is sometimes called phasors. Through Euler's identity
$$e^{j\omega t} = \cos(\omega t) + j\sin(\omega t) \qquad (4)$$
we can use the more compact form $e^{j\omega t}$ for the input instead of the clumsy forms, cosine and sine. The only purpose is to keep the arithmetic as simple as possible. Now we want to generalize and get away from electric circuits. As we shall see, many other systems, mechanical and chemical as well as electrical, have similar transfer functions G(jω). That is why we replace V2/V1 with G to generalize to all such other systems. We can even replace RC with the time constant T since it is the more intrinsic parameter characterizing this class of systems. So, for this example, we think in terms of:

Figure 4

where

$$G(j\omega) = \frac{1}{j\omega T + 1} \qquad (5)$$

and X and Y can be any of a wide variety of variables as illustrated in Fig. 1. This type of system is called a first-order lag because the differential equation that describes it is first-order and because it creates a phase lag. From (5), the gain is
$$|G| = \frac{1}{|j\omega T + 1|} = \frac{1}{\sqrt{(\omega T)^2 + 1}} \qquad (6)$$
The phase, or angle, of G, is

$$\angle G = -\tan^{-1}(\omega T) \qquad (7)$$
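Equations (6) and (7) can be evaluated directly; a minimal sketch, with an assumed time constant:

```python
import math

# Gain and phase of the first-order lag G(jw) = 1/(jwT + 1),
# from eqs (6) and (7).

def lag_gain(w, T):
    return 1.0 / math.sqrt((w * T)**2 + 1.0)

def lag_phase_deg(w, T):
    return -math.degrees(math.atan(w * T))

T = 0.1  # time constant in seconds (illustrative value)

# At the corner frequency w = 1/T the gain is 1/sqrt(2) and the
# phase is -45 degrees.
print(lag_gain(1 / T, T))        # ~0.707
print(lag_phase_deg(1 / T, T))   # ~-45.0

# Well below the corner the gain is near 1; well above it falls off.
print(lag_gain(0.01 / T, T))     # ~1.0
print(lag_gain(100 / T, T))      # ~0.01
```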
For any frequency ω, these two equations tell you what the first-order lag does to a sine wave input.
## 2.2. Bode Diagram
There is an insightful way of displaying the information contained in eqs (6) and (7):
If you plot them on a linear-linear scale they look like this:

Figure 5

You can see that as frequency ω (= 2πf) goes up, the gain goes down, approaching zero, and the phase lag increases to −90°.
Bode decided to plot the gain on a log-log plot. The reason is that if you have two blocks or transfer functions in cascade, G(jω) and H(jω), the Bode plot of their product is just the sum of each individually; that is,

$$\log(G(j\omega)H(j\omega)) = \log(G(j\omega)) + \log(H(j\omega))$$
Also by doing this you stretch out the low-frequency part of the axis and get a clearer view of the system's frequency behavior. The result will appear:
Figure 6
An interesting frequency is ω = 1/T, since then, from (6), |G| = 1/√2 = 0.707 and ∠G = −tan⁻¹(1) = −45°.
Below this frequency, log(|G|) can be closely approximated by a horizontal straight line at zero (log(1) = 0). At high frequencies, above ω = 1/T, |G| falls off in another straight line with a slope of −20 db/dec, which means it falls by a factor of 10 (20 db) if ω increases by a factor of 10. The phase is a linear-log plot. The main point in Fig. 6 is a very simple way to portray what this lag element does to sine wave inputs ($e^{j\omega t}$) of any frequency. |G| is often measured in decibels, defined as: decibels = db = 20 log(|G|). (The 20 is 10 × 2. The 10 is because these are decibels, not bels (in honor of Alexander Graham), and the 2 is because this was originally defined as a power ratio as in $(V_2/V_1)^2$.) The next problem is: suppose the input isn't a sine wave? Like speech signals and radar.
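As a small numerical aside (not in the lecture), the decibel definition gives the familiar corner-frequency value:

```python
import math

# Decibels: db = 20 * log10(|G|).  At the corner frequency of a
# first-order lag, |G| = 1/sqrt(2), the familiar "-3 db point".

def to_db(gain):
    return 20.0 * math.log10(gain)

print(to_db(1.0))                # 0.0 db
print(to_db(1 / math.sqrt(2)))   # ~-3.01 db
print(to_db(0.1))                # -20.0 db (a factor of 10 down)
```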
## 2.3. Fourier Series
Well, a periodic signal, like a square wave or a triangular wave, would be a step up to something more interesting than just a sine wave. Fourier series is a way of approximating most periodic signals as a sum of sine waves. The sine waves are all harmonics of the fundamental frequency ω₀. Specifically, the periodic wave form f(t) can be written
$$f(t) = a_0 + \sum_{n=1}^{\infty} a_n \cos(n\omega_0 t) + \sum_{n=1}^{\infty} b_n \sin(n\omega_0 t) \qquad (8)$$
The problem is only to find the right constants a0, an, bn. They can be found from
$$a_0 = \frac{1}{T}\int_0^T f(t)\,dt$$

$$a_k = \frac{2}{T}\int_0^T f(t)\cos(k\omega_0 t)\,dt$$

$$b_k = \frac{2}{T}\int_0^T f(t)\sin(k\omega_0 t)\,dt \qquad (9)$$
$$f(t) = \frac{4}{\pi}\left[\sin \omega_0 t + \frac{1}{3}\sin 3\omega_0 t + \frac{1}{5}\sin 5\omega_0 t + \cdots\right]$$
This figure also illustrates how adding the 3rd and 5th harmonics to the fundamental frequency fills out the sharp corners and flattens the top, thus approaching a square wave. So how does all this help us to figure out the output of a transfer function if the input is a square wave or some other periodic function? To see this, we must recall superposition.
Figure 7
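The partial sums of the square-wave series can be computed directly; this sketch (not from the lecture) assumes ω₀ = 1 and a ±1 square wave:

```python
import math

# Partial sums of the square-wave Fourier series
#   f(t) = (4/pi) * [sin(w0 t) + sin(3 w0 t)/3 + sin(5 w0 t)/5 + ...]
# with w0 = 1 assumed for illustration.

def square_partial_sum(t, n_harmonics):
    total = 0.0
    k = 1
    for _ in range(n_harmonics):
        total += math.sin(k * t) / k
        k += 2  # odd harmonics only
    return 4.0 / math.pi * total

# More harmonics -> closer to the ideal square-wave value of 1 at t = pi/2.
for n in (1, 3, 10, 500):
    print(n, square_partial_sum(math.pi / 2, n))
```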
2.4. Superposition
All linear systems obey superposition. A linear system is one described by linear equations. y = kx is linear (k is a constant). y = sin(x), y = x², y = log(x) are obviously not. Even y = x + k is not linear; one consequence of superposition is that if you double the input (x), the output (y) must double and, here, it doesn't. The differential equation

$$a\frac{d^2x}{dt^2} + b\frac{dx}{dt} + cx = y$$

is linear.

$$a\frac{d^2x}{dt^2} + bx\frac{dx}{dt} + cx^2 = y$$

is not, for two reasons which I hope are obvious.
The definitive test is that if input x1(t) produces output y1(t) and x2(t) produces y2(t), then the input ax1(t)+bx2(t) must produce the output ay1(t)+by2(t). This is superposition and is a property of linear systems (which is only what we are dealing with).
Figure 8
So, if we can break down f(t) into a bunch of sine waves, the Bode diagram can quickly give us the gain and phase for each harmonic. This will give us all the output sine waves and all we have to do is add them all up and - voila! -the desired output! This is illustrated in Fig. 8. The input f(t) (the square wave is only for illustration) is decomposed into the sum of a lot of harmonics on the left using (9) to find their amplitudes. Each is passed through G(j). G(jk0) has a gain and a phase shift which, if G(j) is a first-order lag, can be calculated from (6) and (7) or read off the Bode diagram in Fig. 6. The resulting sinusoids G(jk0) sin k0t can then all be added up as on the right to produce the final desired output shown at lower right. Fig. 8 illustrates the basic method of all transforms including Laplace transforms so it is important to understand the concept (if not the details). In different words, f(t) is taken from the time domain by the transform into the frequency domain. There, the system's transfer function operates on the frequency components to produce output components still in the frequency domain. The inverse transform assembles those components and converts the result back into the time domain, which is where you want your answer. Obviously you couldn't do this without linearity and superposition. For Fourier series, eq (9) is the transform, (8) is the inverse. One might object that dealing with an infinite sum of sine waves could be tedious. Even with the rule of thumb that the first 10 harmonics is good enough for most purposes, the arithmetic would be daunting. Of course with modern digital computers, the realization would be quite easy. But before computers, the scheme in Fig. 8 was more conceptual than practical.
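The scheme of Fig. 8 can be sketched in a few lines for a square wave driving the first-order lag of eq (5). All parameter values here are assumptions, and the harmonic-sum result is cross-checked against a direct time-domain integration:

```python
import math

# Fig. 8 scheme for a +/-1 square wave through the lag G(jw) = 1/(jwT + 1):
# each input harmonic is scaled by |G| and shifted by the phase of G,
# then the modified sinusoids are summed.

T = 1.0            # lag time constant (assumed)
W0 = 1.0           # fundamental frequency of the square wave (assumed)
N_HARMONICS = 199  # odd harmonics 1, 3, ..., 199

def output_via_harmonics(t):
    total = 0.0
    for k in range(1, N_HARMONICS + 1, 2):
        amp = 4.0 / (math.pi * k)                      # input harmonic amplitude
        gain = 1.0 / math.sqrt((k * W0 * T)**2 + 1.0)  # |G| at this harmonic
        phase = -math.atan(k * W0 * T)                 # phase lag at this harmonic
        total += amp * gain * math.sin(k * W0 * t + phase)
    return total

# Cross-check: integrate T*dy/dt + y = x directly (Euler), long enough
# to reach steady state, then compare with the harmonic sum.
dt = 1e-3
y = 0.0
t = 0.0
for _ in range(int(20 * 2 * math.pi / dt)):   # 20 periods
    x = 1.0 if math.sin(W0 * t) >= 0 else -1.0
    y += dt * (x - y) / T
    t += dt

print(output_via_harmonics(t), y)  # the two estimates agree closely
```

Note how the lag suppresses the high harmonics (gain falls as 1/k), so the output is a smoothed, phase-lagged version of the square wave whose peaks never reach ±1.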
But it didn't matter because Fourier series led quickly to Fourier transforms. After all, periodic functions are pretty limited in practical applications where aperiodic signals are more common. Fourier transforms can deal with them.
## 2.5. Fourier Transforms
These transforms rely heavily on the exponential form of sine waves, e^{jωt}. Recall (eq 4) that

$$e^{j\omega t} = \cos(\omega t) + j\sin(\omega t)$$

If you write the same equation for e^{−jωt} and then add and subtract the two, you get the inverses:

$$\cos(n\omega_0 t) = \frac{e^{jn\omega_0 t} + e^{-jn\omega_0 t}}{2}, \qquad \sin(n\omega_0 t) = \frac{e^{jn\omega_0 t} - e^{-jn\omega_0 t}}{2j} \tag{10}$$

where we have used the harmonics nω₀ for ω.
It is easiest to derive the Fourier transform from the Fourier series, but first we have to put the Fourier series in its complex form. If you now go back to (8) and (9) and plug in (10), you will eventually get

$$f(t) = \sum_{n=-\infty}^{\infty} c_n\, e^{jn\omega_0 t} \tag{11}$$

and

$$c_n = \frac{1}{T}\int_0^T f(t)\, e^{-jn\omega_0 t}\, dt \tag{12}$$

(The derivation can be found in textbooks.) These are the inverse transform and transform, respectively, for the Fourier series in complex notation. The use of (10) introduces negative frequencies (e^{−jωt}), but they are just a mathematical convenience. It will turn out in (11) that, when all the positive- and negative-frequency terms are combined, you are left with only real functions of positive frequencies. Again, the reason for using (11) and (12) instead of (8) and (9) is, as you can easily see, mathematical compactness.
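Equation (12) is easy to check numerically. The sketch below uses an assumed example (a unit square wave of period 1 and a particular grid size, neither from the text) and approximates c_n by a Riemann sum over one period; for this square wave the coefficients should come out as c_n = −2j/(nπ) for odd n and 0 for even n.

```python
import numpy as np

# Assumed example: square wave of period T = 1 that is +1 on the first
# half-period and -1 on the second.
T = 1.0
w0 = 2.0 * np.pi / T
t = np.linspace(0.0, T, 20000, endpoint=False)
dt = t[1] - t[0]
f = np.where(t < T / 2, 1.0, -1.0)

def c(n):
    # Eq (12): c_n = (1/T) * integral over one period of f(t) e^{-j n w0 t} dt,
    # approximated by a Riemann sum (accurate for periodic integrands).
    return np.sum(f * np.exp(-1j * n * w0 * t)) * dt / T

# Expected: c(1) ~ -2j/pi, c(2) ~ 0, c(3) ~ -2j/(3*pi)
```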
Figure 9
To get to the Fourier transform from here, do the obvious, as shown in Fig. 9. We take a rectangular pulse for f(t) (only for purposes of illustration). f(t) is a periodic function with a period of T (we've chosen −T/2 to T/2 instead of 0 to T for simplicity). Now keep the rectangular pulse constant and let T get larger and larger. What happens in (11) and (12)? Well, the difference between harmonics, ω₀, is getting smaller. Recall that

$$\omega_0 = \frac{2\pi}{T}$$

(frequency is the inverse of period), so in the limit ω₀ → dω. Thus,

$$\frac{1}{T} = \frac{d\omega}{2\pi}$$

The harmonic frequencies nω₀ merge into the continuous variable ω,

$$n\omega_0 \to \omega$$

From (12), as T → ∞, the cₙ would go to 0, but their product Tcₙ does not, and it is called F(ω), the Fourier transform. Making these substitutions in (11) and (12) gives
$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\, e^{j\omega t}\, d\omega \tag{13}$$

and

$$F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\, dt \tag{14}$$
These are the transform (14) and its inverse (13). See the subsequent pages for examples. F(ω) is the spectrum of f(t). It shows, if you plot it out, the frequency ranges in which the energy in f(t) lies, so it's very useful in speech analysis, radio engineering and music reproduction.

In terms of Fig. 8, everything is conceptually the same except that we now have all possible frequencies, not just harmonics. That sounds even worse computationally, but if we can express F(ω) mathematically, the integration in (13) can be performed to get us back into the time domain. Even if it can't, there are now FFT computer programs that have a fast method of finding the Fourier transform, so using this transform in practice is not difficult.

A minor problem is that if the area under f(t) is infinite, as in the unit step u(t), F(ω) can blow up. There are ways around this, but the simplest is to move on to the Laplace transform.
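As a concrete check of (14), take a unit rectangular pulse of width 1 centered at t = 0 (an assumed illustration, not the pulse in Fig. 9). Its transform can be worked out analytically, F(ω) = 2 sin(ω/2)/ω with F(0) = 1, and a direct numerical quadrature of (14) should agree:

```python
import numpy as np

t = np.linspace(-0.5, 0.5, 100001)   # support of the pulse
dt = t[1] - t[0]
f = np.ones_like(t)                  # unit rectangular pulse, width 1

def F(w):
    # Eq (14): F(w) = integral of f(t) e^{-jwt} dt, done numerically
    return np.sum(f * np.exp(-1j * w * t)) * dt

# Analytic spectrum: F(w) = 2 sin(w/2)/w, so F(0)=1, F(pi)=2/pi,
# and the first zero is at w = 2*pi.
```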
## 2.6. Laplace Transforms
You already know the formulas (2a, 2b), and how to use them. So here we discuss them in terms of Fig. 8 and superposition. Again the inverse transform is
$$f(t) = \int_{S} F(s)\, e^{st}\, ds \tag{15}$$
Until now we have dealt only with sine waves, e^{jωt}. Put another way, we have restricted s to jω, so that e^{st} was restricted to e^{jωt}. But this is unnecessary; we can let s enjoy being fully complex, s = σ + jω. This greatly expands the kinds of functions that e^{st} can represent.
Figure 10
Fig. 10 is a view of the s-plane with its real axis (σ) and imaginary axis (jω). At point 1, ω = 0 and σ is negative, so e^{st} = e^{σt} is a simple decaying exponential, as shown. At points 2 and 3 (we must always consider pairs of complex points; recall from (10) that it took an e^{jωt} and an e^{−jωt} to get a real sin ωt or cos ωt) we have σ < 0 and ω ≠ 0, so e^{σt}e^{jωt} is a damped sine wave, as shown. At points 4 and 5, σ = 0, so we are back to simple sine waves. At points 6 and 7, σ > 0, so the exponential is a rising oscillation. At point 8, σ > 0 and ω = 0, so we have a plain rising exponential. So Fig. 10 shows the variety of waveforms represented by e^{st}.

So (15) says that f(t) is made up by summing an infinite number of infinitesimal wavelets of the forms shown in Fig. 10. F(s) tells you how much of each wavelet e^{st} is needed at each point on the s-plane. That weighting factor is given by the transform
$$F(s) = \int_0^{\infty} f(t)\, e^{-st}\, dt \tag{16}$$
In terms of Fig. 8, f(t) is decomposed into an infinite number of wavelets as shown in Fig. 10, each weighted by the complex number F(s). They are then passed through the transfer function G, which now is no longer G(jω) (defined only for sine waves) but G(s), defined for e^{σt}e^{jωt}. The result is F(s)G(s), which tells you the amount of e^{st}, at each point on the s-plane, contained in the output. Using (15) on F(s)G(s) takes you back to the time domain and gives you the output. If, for example, the output is h(t), then
$$h(t) = \int_{S} F(s)G(s)\, e^{st}\, ds \tag{16a}$$
In summary, I have tried to show a logical progression from the Fourier series to the Fourier transform to the Laplace transform, each being able to deal with more complicated waveforms. Each method transforms the input time signal f(t) into an infinite sum of infinitesimal wavelets in the frequency domain (defining s as a "frequency"). The transfer function of the system under study is expressed in that domain as G(jω) or G(s). The frequency signals are passed through G, using superposition, and the outputs are all added up by the inverse transform to get back to h(t) in the time domain. This is the end of the historical review, and we now go back to the cook-book method (which you already know; see p. 6).
## 2.7. Cook-Book Example
For a first order lag in Laplace, (5) becomes G(s),
$$G(s) = \frac{1}{sT+1} \tag{17}$$
Figure 11
Let's find its step response. A unit step, u(t) has a Laplace transform of X(s)= 1/s. So the Laplace transform of the output Y(s) is
$$Y(s) = \frac{1}{s(sT+1)} \tag{18}$$
So far this is easy, but how do we get from Y(s) (the frequency domain) to y(t) (the time domain)? We already know these transform pairs (19):

| f(t) | F(s) |
|------|------|
| δ(t) | 1 |
| u(t) | 1/s |
| t | 1/s² |
| e^{−at} | 1/(s+a) |
| sin(ωt) | ω/(s² + ω²) |
| etc. | etc. |
So here is the cook-book: if we can rearrange Y(s) so that it contains forms found in the F(s) column, it is simple to invert those forms. And that's easy; it's called partial fraction expansion. Y(s) can be rewritten
$$Y(s) = \frac{1}{s(sT+1)} = \frac{1}{s} - \frac{T}{sT+1}$$

Well, that's close, and if we rewrite it so,

$$Y(s) = \frac{1}{s} - \frac{1}{s + \frac{1}{T}} \tag{20}$$

then, from the table (19),

$$y(t) = u(t) - e^{-t/T} = 1 - e^{-t/T} \tag{21}$$

which looks like a rising exponential approaching 1.
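Result (21) can be cross-checked numerically with scipy (the value T = 2 is an arbitrary choice): the simulated step response of G(s) = 1/(sT+1) should match 1 − e^{−t/T}.

```python
import numpy as np
from scipy import signal

tau = 2.0                                          # time constant T (arbitrary)
sys = signal.TransferFunction([1.0], [tau, 1.0])   # G(s) = 1/(s*tau + 1)
t, y = signal.step(sys, T=np.linspace(0.0, 10.0, 500))
y_exact = 1.0 - np.exp(-t / tau)                   # eq (21)
err = np.max(np.abs(y - y_exact))                  # should be tiny
```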
We were not just lucky here. Equation (18) is the ratio of two polynomials: 1 in the numerator and s²T + s in the denominator. This is the usual case. Suppose we consider another system described by the differential equation:
Figure 12
$$a_2 \frac{d^2 y}{dt^2} + a_1 \frac{dy}{dt} + a_0 y = b_1 \frac{dx}{dt} + b_0 x$$

Take the Laplace transform,

$$a_2 s^2 Y(s) + a_1 s Y(s) + a_0 Y(s) = b_1 s X(s) + b_0 X(s)$$

(I assume you recall that if F(s) is the transform of f(t), then sF(s) is the transform of df/dt.) Or

$$(a_2 s^2 + a_1 s + a_0)\, Y(s) = (b_1 s + b_0)\, X(s)$$

or

$$\frac{Y(s)}{X(s)} = G(s) = \frac{b_1 s + b_0}{a_2 s^2 + a_1 s + a_0}$$
This is also a ratio of polynomials. Notice that the transforms of common inputs (19) are ratios of polynomials, so Y(s) = X(s)G(s) will be another ratio of polynomials,
$$Y(s) = \frac{b_n s^n + b_{n-1} s^{n-1} + \cdots + b_1 s + b_0}{a_m s^m + a_{m-1} s^{m-1} + \cdots + a_1 s + a_0} \tag{22}$$
Both polynomials can be characterized by their roots: those values of s that make them zero:
$$Y(s) = \frac{B(s+z_1)(s+z_2)\cdots(s+z_n)}{A(s+p_1)(s+p_2)\cdots(s+p_m)} \tag{23}$$
The roots of the numerator, the values s = −zₖ at which Y(s) is zero, are called the zeros of the system. The roots of the denominator, the points s = −pₖ in the s-plane where Y(s) goes to infinity, are the poles. Thus any transfer function, or its output signal, can be characterized by poles and zeros, and the inverse transform can be effected by partial fraction expansion.

But how did we make the sudden jump from the inverse transform of (16a) to the cook-book recipe of partial fraction expansion? Equation (16a) says to integrate F(s)G(s)e^{st} ds over the entire s-plane. It turns out that this integral can be evaluated by integrating over a contour that encloses all the poles of F(s)G(s). Since those poles lie in the left-hand plane (e^{+σt} blows up), they can be enclosed in a contour such as C, as shown. But even better, the value of this integral is equal to the sum of the "residues" evaluated at each pole. This is a consequence of contour integration in the complex plane and is, we confess, a branch of mathematics that we just don't have time to go into. The residue at each pole is the coefficient evaluated by partial fraction expansion, since the expansion expresses F(s)G(s) as a sum of pole terms:
Figure 12a
$$\frac{A_1}{s+p_1} + \frac{A_2}{s+p_2} + \cdots$$
Thus the integration over the s-plane in (16a) turns out to be just the same as evaluating the coefficients of the partial fraction expansion. Again, it will be assumed that you know the cook-book method of Laplace transforms and Bode diagrams. These tools are an absolute minimum if you are to understand systems and do systems analysis, and this includes biological systems. Before we get back to feedback, we must first show that the methods of analysis you learned do not apply only to electric circuits.
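Partial-fraction coefficients (residues) can also be obtained mechanically. A small sketch with scipy, taking T = 2 as an arbitrary choice: expanding Y(s) = 1/(s(sT+1)) = 1/(2s² + s) should reproduce eq (20), i.e. a residue of +1 at the pole s = 0 and −1 at s = −1/T.

```python
import numpy as np
from scipy import signal

# Y(s) = 1 / (2s^2 + s), i.e. eq (18) with T = 2
r, p, k = signal.residue([1.0], [2.0, 1.0, 0.0])
# r: residues, p: poles (ordering may vary), k: direct polynomial part
```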
## 2.8. Mechanical Systems
In these systems one is concerned with force, displacement and its rate of change, velocity. Consider a simple mechanical element, a spring. Symbolically it appears as in Fig. 13: F is the force, L is the length. Hooke's law states
Figure 13
$$F = kL \tag{24}$$

where k is the spring constant.
Another basic mechanical element is a viscous element, typical of the shock absorbers in a car's suspension system or of a hypodermic syringe. Its symbol is shown in Fig. 14 as a plunger in a cylinder. The relationship is
$$F = r\frac{dL}{dt} = rsL \tag{25}$$
Figure 14
That is, a constant force causes the element to change its length at a constant velocity. r is the viscosity. The element is called a dashpot.
Let's put them together.
In Fig. 15 the force F is divided between the two elements: F = F_k + F_r = kL + rsL (in Laplace notation), or
$$F(s) = (rs + k)\, L(s)$$
Figure 15
If we take F to be the input and L the output,
$$\frac{L(s)}{F(s)} = G(s) = \frac{1}{sr+k} = \frac{1/k}{s\frac{r}{k}+1} = \frac{1/k}{sT+1}$$
where T = r/k is the system time constant. This is, of course, a first-order lag, just like the circuit in Fig. 3 governed by eq (3), with jω replaced by s, plus the constant 1/k. Fig. 15 is a simplified model of a muscle. If you think of F as analogous to voltage V, length L as the analog of electric charge Q, and velocity dL/dt as the analog of current I = dQ/dt, then Figures 3 and 15 are interchangeable, with the spring being the analog of a capacitor (an energy-storage element) and the dashpot the analog of a resistor (an energy dissipator). So it's no wonder that you end up with the same mathematics and transfer functions. Only the names change.
## 2.9. Membrane Transport
Figure 16
Two liquid compartments containing a salt solution at two different concentrations C1 and C2 are separated by a permeable membrane. The salt will diffuse through the membrane at a rate proportional to the concentration difference. That is:
$$\frac{dC_2}{dt} = k(C_1 - C_2)$$

or

$$\frac{dC_2}{dt} + kC_2 = kC_1$$

or

$$(s+k)\, C_2(s) = k\, C_1(s)$$

or

$$\frac{C_2(s)}{C_1(s)} = G(s) = \frac{k}{s+k}$$

another first-order lag. These examples are just to remind you that what you learned about Laplace transforms does not apply just to electric circuits. It applies to any sort of system that can be described by linear differential equations, put into a block in a block diagram, and described as a transfer function.
## 3. Feedback and Dynamics
In deriving equation (1) to explain the major reasons that feedback is used, dynamics were ignored. But feedback greatly alters a system's dynamics and it is important to understand and anticipate these effects. In some technological (but not biological) situations, feedback is used solely to achieve a dynamic effect.
## 3.1. Static Feedback
In Fig. 2 we assumed unity feedback (H(s) = 1) for simplicity, but now we must be more general. A little algebra will show that in this case the ratio of the response R(s) to the command C(s) is
Figure 17
$$\frac{R}{C} = \frac{G(s)}{1 + G(s)H(s)} \tag{26}$$

H(s) is the gain of the feedback path. G(s)H(s) is called the loop gain. Let's take a specific example: let G(s) be the first-order lag A/(sT+1), and let H(s) = k, a pure gain with no dynamics. From (26),
Figure 18
$$\frac{R}{C} = \frac{\dfrac{A}{sT+1}}{1 + \dfrac{Ak}{sT+1}} = \frac{A}{sT+1+Ak} = \frac{\dfrac{A}{1+Ak}}{\dfrac{T}{1+Ak}\,s + 1} \tag{27}$$
In this form, A/(1+Ak) is the gain at dc (zero frequency, s = 0) and T/(1+Ak) is the new time constant. As k increases from zero, the closed-loop dc gain decreases. This is a universal property of negative feedback: the more the feedback, the lower the closed-loop gain. The new time constant T/(1+Ak) also decreases, so the system gets faster. This can clearly be seen in the step response. Using partial fraction expansion (see eq 20), it is easy to show that if C(s) = 1/s, then

$$R(t) = \frac{A}{1+kA}\left(1 - e^{-t/\frac{T}{1+kA}}\right) \tag{28}$$
which will appear, for different values of k (the amount of feedback), as in Fig. 19:
Figure 19
When k = 0, the steady-state (dc) gain is A and the time constant is T. (When t = T, the term in parentheses in (28) is 1 − e⁻¹ ≈ 0.63, so t = T is when the response is 63% of its way to steady state.) The initial slope from (28) is A/T and is independent of k. So the response starts off at the same rate, but as k increases, the feedback kicks in and lowers the steady-state level, which is consequently reached sooner. This phenomenon can also be clearly seen in the Bode diagram (Fig. 20). The high-frequency behavior can be found by letting s = jω in (27) and then letting ω become very large. Then

$$\frac{R}{C} \to \frac{A}{j\omega T}$$

which is independent of k.
Figure 20
Thus, as the low-frequency gain is pressed down by increasing k, the intersection with the high-frequency line A/(jωT) occurs at higher and higher frequencies (or smaller and smaller time constants). Thus the bandwidth (the break frequency (1+Ak)/T) increases as k increases.

Figure 21

This can also be visualized on the s-plane. Eq. (27) has one pole, at s = −(1+Ak)/T. As k increases, this pole moves to the left on the negative real axis (−σ), showing in yet another way that the system is getting faster, with a smaller time constant and a wider bandwidth.
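These closed-loop predictions (dc gain A/(1+Ak), time constant T/(1+Ak)) can be verified numerically; A, T and k below are arbitrary illustration values:

```python
import numpy as np
from scipy import signal

A, T, k = 10.0, 1.0, 0.5
# Closed loop from eq (27): A/(sT + 1 + Ak)
cl = signal.TransferFunction([A], [T, 1.0 + A * k])
t, y = signal.step(cl, T=np.linspace(0.0, 5.0, 1000))

dc = A / (1.0 + A * k)                    # predicted steady-state gain
tau = T / (1.0 + A * k)                   # predicted time constant
y_exact = dc * (1.0 - np.exp(-t / tau))   # eq (28)
```

The simulated step response should sit on top of the analytic curve, reaching the reduced steady-state level dc with the shortened time constant tau.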
## 3.2. Feedback with Dynamics
From Fig. 17 and (26) recall that
$$\frac{R}{C} = \frac{G(s)}{1 + G(s)H(s)} \tag{29}$$

Notice that if the loop gain G(s)H(s) is much greater than 1, then R/C ≈ 1/H(s). This is a general and useful property of feedback. If you put a differentiator in the feedback loop, you get an integrator. If you put an integrator in the feedback loop, you get a differentiator. If you put a high gain in the feedback loop, you get a low gain for R/C, as shown in Figs. 19 and 20. If you put a lead element in the feedback loop, you get a lag. And so on. Let's illustrate the last point as an exercise. Here H(s) = as + 1, a lead element:
$$|H(j\omega)| = \sqrt{(a\omega)^2 + 1}, \qquad \angle H(j\omega) = \tan^{-1}(a\omega)$$
Figure 22
The Bode diagram appears:
For ω < 1/a, the gain is close to 1.0 (0 db) and the phase is near zero. For ω > 1/a, the gain rises linearly with ω, so it is described by a straight line rising at a slope of 20 db/decade. The phase is leading and approaches +90°, which, of course, is why it's called a lead. Compare with the lag shown in Fig. 6.
Figure 23
From Fig. 22, the closed-loop response is:
$$\frac{R}{C} = \frac{G}{1+HG} = \frac{A}{1 + A(sa+1)} = \frac{A}{saA + 1 + A} = \frac{\dfrac{A}{1+A}}{\dfrac{aA}{1+A}\,s + 1} \tag{30}$$
This is a first-order lag, illustrating the main point that a lead in the feedback path turns the whole system into a lag. The steady-state gain (when s = jω = 0) is, as expected, A/(1+A), which, if A is large, will be close to 1.0. The new time constant is aA/(1+A); again, if A is large, this will be close to a. Increasing a increases the lead action by shifting the curves in Fig. 23 to the left, so the phase lead occurs over a wider frequency range. It also increases the time constant of the lag in (30), so the lag also covers a wider frequency range. The Bode diagram of (30) will look just like Fig. 23 but with everything upside down, again reflecting that R/C ≈ 1/H(s).
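The algebra in (30) can be spot-checked over a range of frequencies (A and a below are arbitrary illustration values): the closed loop A/(1 + A(as+1)) and the equivalent first-order lag should agree everywhere on the jω axis.

```python
import numpy as np

A, a = 50.0, 0.2
w = np.logspace(-2, 3, 200)
s = 1j * w
closed = A / (1.0 + A * (a * s + 1.0))                     # left side of (30)
lag = (A / (1.0 + A)) / ((a * A / (1.0 + A)) * s + 1.0)    # right side of (30)
err = np.max(np.abs(closed - lag))                         # should be ~0
```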
## 3.3. Oscillations
While negative feedback is, in general, good and useful, one must be aware that a feedback system can become unstable and oscillate if it is incorrectly designed or, in biology, if a change occurs due to, say, a disease. In engineering, it is always wise, in designing a system, to make sure it is stable before you build it. In biology, if you see oscillations occurring, it is nice to know what conditions could lead to such behavior.
Figure 24
Consider the system in Fig. 24. Let switch S2 be open and S1 closed, so that a sine wave put in at C flows through G(jω) and then back around through H(jω). This output is shown as C(GH). Suppose that the gain |GH| at some frequency ω₀ is exactly 1.0 and the phase is exactly −180°, as shown. Now instantaneously open S1 and close S2. After going through the −1 at the summing junction, the signal C(GH) will look exactly like C and can substitute for it. Thus the output of H can serve as the input to G, which then supplies the input to H, which then... etc., etc.; the sine wave goes round and round: the system is oscillating.

The key is that there must exist a frequency ω₀ at which the phase of the loop gain G(jω)H(jω) is −180°. At this frequency, evaluate the gain |G(jω₀)H(jω₀)|. If this gain is >1, oscillations will start spontaneously and grow without limit: the system is unstable. If the gain is <1, the system is stable. (There are some complicated systems where this is an oversimplification, but it works for most practical purposes.) Thus, the Bode diagram can be used to test for stability: one needs to plot the Bode diagram of the loop gain. As an example, consider the system in Fig. 18, where G(s) is a first-order lag and H(s) = k. The loop gain is
kA/(sT+1), and its Bode diagram will resemble Fig. 6. This figure shows that the phase lag never exceeds −90°; it never gets to −180°. Therefore this loop can never become unstable, no matter how much the loop gain kA is increased. Now consider a double lag (Fig. 25).
Figure 25
Its loop Bode diagram will look like this:
Figure 26
Each pole, at ω = 1/T₁ and ω = 1/T₂, causes the gain to add a decrease of −20 db/decade and adds a phase lag of −90°. Thus the phase approaches −180° but never gets there. So, technically, this system too can never be made unstable. However, when the phase gets near −180° and |kG| is >1, the system "rings"; its step response would look like Fig. 27. Such a system would be pretty useless even though it is technically stable.
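The phase claim for the double lag is easy to confirm numerically (kA, T₁, T₂ below are arbitrary illustration values): over a very wide frequency sweep, the phase of kA/((jωT₁+1)(jωT₂+1)) approaches but never reaches −180°.

```python
import numpy as np

kA, T1, T2 = 20.0, 1.0, 0.1
w = np.logspace(-3, 6, 2000)
loop = kA / ((1j * w * T1 + 1.0) * (1j * w * T2 + 1.0))
phase_deg = np.degrees(np.angle(loop))
worst = phase_deg.min()   # most negative phase over the sweep
```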
Obviously, if we add another pole (another 1/(sT₃+1) term in Fig. 25), the phase shift will approach 3 × (−90°), or −270°. Such a system can become unstable if the gain is too high. Most biological control systems, such as those mentioned on page 1, are dominated by a single pole, have modest gains and are very stable. But there is one element that presents a real danger for stability: delays.
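Before turning to delays, the three-pole instability claim can be quantified. For three identical lags 1/(sT+1)³ (T is an arbitrary choice here, though the critical gain is not), the phase reaches −180° where 3·arctan(ωT) = 180°, i.e. ωT = √3, and the loop-gain magnitude there is 1/8; so a loop gain above 8 makes such a system unstable.

```python
import numpy as np

T = 1.0                      # arbitrary; the critical gain does not depend on T
w180 = np.sqrt(3.0) / T      # frequency where the phase lag is 180 degrees
g = abs(1.0 / (1j * w180 * T + 1.0)) ** 3   # |1/(jwT+1)|^3 at that frequency
critical_gain = 1.0 / g      # loop gain that makes |GH| = 1 at -180 degrees
```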
Figure 27
## 3.4. Delays: A Threat to Stability
The output of a pure delay is the same as the input, just translated in time by the delay τ. This operator has the Laplace transfer function e^{−sτ}.
Figure 28
Delays are not uncommon in biology. For example, an endocrine gland dumps a hormone into the blood stream, which carries it to a distant receptor. This could take several seconds. This is called a transport delay. In engineering, for another example, if you wanted to incorporate earth into the control loop of a lunar rover (not a good idea), you would have to cope with the few seconds it takes for radio signals to get to the moon and back. To see why delays are bad, look at the delay's Bode diagram:
$$|e^{-j\omega\tau}| = 1, \qquad \angle e^{-j\omega\tau} = -\omega\tau$$

Its gain is 1 at all frequencies. Its phase lag is directly proportional to frequency. (Think of τ as a time window: the higher the frequency, the more cycles fit in this window, so the greater the phase lag.)
Figure 29
Fig. 29 shows the gain to be independent of ω at 0 db (gain = 1). The phase lag is a linear function of ω, but on a linear-log plot it appears as an exponential increase in lag. This means that for any system with a delay in the loop, there will always exist a frequency ω₀ where the phase lag is 180°, so instability is always a possibility. Consider a lag with a delay (Fig. 30), and look at the Bode diagram of the loop gain (Fig. 31). The phase lag is the sum of that due to the lag A/(jωT+1) and that due to the delay e^{−jωτ}, as shown by the two dashed curves. As illustrated, at ω₀ each contributes about −90°, so the total is −180°. At ω₀ the gain is about 1 (0 db), so this system, if not unstable, is close to it.
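Finding ω₀ for a lag plus a delay is a small root-finding problem. A sketch (A, T, τ are arbitrary illustration values): solve arctan(ωT) + ωτ = 180° for ω₀, then check whether the loop gain A/√((ω₀T)²+1) there is below 1.

```python
import numpy as np
from scipy.optimize import brentq

A, T, tau = 2.0, 1.0, 0.5

def phase_defect(w):
    # Total phase lag (radians) of A e^{-jw tau}/(jwT+1), minus pi
    return np.arctan(w * T) + w * tau - np.pi

w0 = brentq(phase_defect, 1e-6, 100.0)          # frequency of -180 deg lag
gain_at_w0 = A / np.sqrt((w0 * T) ** 2 + 1.0)   # loop gain magnitude there
stable = gain_at_w0 < 1.0                       # gain < 1 at -180 deg: stable
```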
Figure 30
To make it stable we could decrease A or increase T, which would cause the gain at ω₀ to become <1. This would be the practical thing to do, although either move would decrease the closed-loop bandwidth, which is where |G(jω)| = 1. Note that if τ << T, the curve of the delay's phase, −ωτ, moves to the right, and the point where the net lag reaches −180° moves to high frequencies where the gain is much less than 1.0. But as τ approaches T, we rapidly get into trouble.
Figure 31
In the regulating systems mentioned on page 2, T is usually much larger than τ, so stability is seldom a problem. In neuromuscular control, however, where τ is due to synaptic delays and axonal conduction delays, T can become small (the time constant of a rapid arm, finger or eye movement) and problems can arise. To deal with this situation, Mother Nature has evolved another type of feedback, called parametric adaptive feedback, but that is another story.
## 4. Summary
1. To start to analyze a system, you draw its block diagram, which shows the processes and variables of interest, and make explicit the feedback pathway and error signal.

2. The input-output relationship of each block is described by a linear differential equation. Taking its Laplace transform gives the block's transfer function G(jω), or more generally G(s), which tells you what the block will do to any signal of the form e^{st}.

3. Collecting all these blocks together to form a net forward gain G(s) and feedback gain H(s), you know that the closed-loop transfer function is G/(1+GH).

4. A transform takes C(t) from the time domain to the frequency domain, C(s). This is passed through the system, simply by multiplication, to create the output R(s). The inverse transform brings one back to the time domain and gives the answer R(t). In practice, this is done by partial fraction expansion.

5. Most important, from equation (1), you know why negative feedback is used, and you also know how to check for its major potential problem: instability.
## DA Robinson lecture on Physiological Foundations for Biomedical Engineering
## Literature
G. F. Franklin, J. D. Powell, and A. Emami-Naeini. Feedback Control of Dynamic Systems. Addison-Wesley, 1995.

B. C. Kuo. Automatic Control Systems. John Wiley & Sons, 1995. ("my favourite")

J. W. Nilsson and S. Riedel. Electric Circuits, Revised, and PSpice Supplement Package. Prentice Hall, 6th edition, 2001.

C. M. Harris. The Fourier analysis of biological transients. J Neurosci Methods 83(1):15-34, 1998. ("read this if you want to really work with FFTs")
https://www.cleverblog.cz/mary-berry-hetxho/kkb6om.php?5c5763=time-series-analysis-r | How To Organize Design Research, Japanese Pet Names For Boyfriend, Bayesian Essentials With R Springer, History Story Books, Tea Tree Powder, Tears For Fears The Hurting Review, What Is The Country Of Bas Form, Amazon Bash Scripting, Ge Cafe Matte White Refrigerator, " /> How To Organize Design Research, Japanese Pet Names For Boyfriend, Bayesian Essentials With R Springer, History Story Books, Tea Tree Powder, Tears For Fears The Hurting Review, What Is The Country Of Bas Form, Amazon Bash Scripting, Ge Cafe Matte White Refrigerator, " />
Example: Taking data of total positive cases and total deaths from COVID-19 weekly from 22 January 2020 to 15 April 2020 in data vector. FEB08. A Little Book of R For Time Series, Release 0.2 ByAvril Coghlan, Parasite Genomics Group, Wellcome Trust Sanger Institute, Cambridge, U.K. Email: alc@sanger.ac.uk This is a simple introduction to time series analysis using the R statistics software. 1. Hence, it is particularly well-suited for annual, monthly, quarterly data, etc. Please use ide.geeksforgeeks.org, generate link and share the link here. Time Series Analysis using R Learn Time Series Analysis with R along with using a package in R for forecasting to fit the real-time series to match the optimal model. R language uses many functions to create, manipulate and plot the time series data. Creating a time series. frequency represents number of observations per unit time. Different assumptions lead to different combinations of additive and multiplicative models as. Introduction. Time-Series forecasting is used to predict future values based on previously observed values. Search in title. Except the parameter "data" all other parameters are optional. Time series analysis provides such a unification and allows us to discuss separate models within a statistical setting. This tutorial uses ggplot2 to create customized plots of time series data. Time Series Analysis With Applications in R, Second Edition, presents an accessible approach to understanding time series models and their applications. Preface. Time Series with R Time series are all around us, from server logs to high-frequency financial data. In this course, you will be introduced to some core time series analysis concepts and techniques. A basic introduction to Time Series for beginners and a brief guide to Time Series Analysis with code examples implementation in R. 
Time series data are observations recorded at a fixed interval — daily, monthly, annually, and so on. Time series analysis is the set of statistical techniques used to analyze such a series, to determine how a sequence of numerical data points varies over a specific period of time, and to extract meaningful information and hidden patterns from it. Conducting exploratory analysis and extracting meaningful insights from data are core components of research and data science work.

In R, the data for a time series is stored in a time-series object, created with the ts() function. For example, we can create an R time series object for a period of 12 months and plot it; a frequency of 24*6 pegs the data points at every 10 minutes of a day. The goal is to understand the essential theory for time series analysis and to build each of the major model types (autoregressive, moving average, ARMA, ARIMA, and decomposition) on a real-world data set to forecast the future.

A typical ARIMA workflow has three steps: check that the data are stationary (the properties of the series should not depend on the time at which it is captured) and apply transformations if needed, fit the model, and forecast. Auto-regression is regression on the series' own past values. Time series analysis has a lot of applications, especially in finance and weather forecasting, and its skills are important for a wide range of careers in business, science, journalism, and many other fields. One practical caveat: the format of the dates associated with reporting data can vary wildly, so the first step of any analysis should be to double-check that R read your data correctly.

Two programming languages are commonly used for time series work. While R allows for more specific statistical computing, Python extends a more general approach to data science; the R statistical software, however, offers a large ecosystem of built-in data analysis techniques. Time Series Analysis With Applications in R, Second Edition (Cryer and Chan), presents an accessible approach to understanding time series models and their applications; although the emphasis is on time-domain ARIMA models and their analysis, the new edition devotes two chapters to the frequency domain and three to time series regression models, models for heteroscedasticity, and threshold models. Johan Larsson's companion volume contains solutions to the problems in that book. The forecasting examples below use the forecast library, so it must be installed first.
R has extensive facilities for analyzing time series data.
For a long period of time, the ability of individuals and organizations to analyze geospatial data was limited to those who could afford expensive software such as TerrSet, ERDAS, ENVI, or ArcGIS. Time series in R is used to see how an object behaves over a period of time: sales analysis of a company, inventory analysis, price analysis of a particular stock or market, population analysis, or the amount of rainfall in a region in different months of the year. A non-seasonal time series consists of a trend component and an irregular component. Unlike classification and regression, time series data add a time dimension that imposes an ordering on the observations, turning rows into a sequence that requires careful and specific handling. Time series forecasting is the use of a model to predict future values based on previously observed values.

The basic syntax for ts() in time series analysis is

timeseries.object.name <- ts(data, start, end, frequency)

where data is a vector or matrix containing the values used in the time series (the only required parameter), start specifies the time of the first observation, end the time of the last observation, and frequency the number of observations per unit time. Setting frequency = 12 pegs the data points at every month of a year, frequency = 4 at every quarter, and frequency = 24*6 at every 10 minutes of a day. To learn about the remaining optional parameters, consult the help for ts() in the R console. The fundamental class is "ts", which can represent regularly spaced time series using numeric time stamps; the requirement of regular spacing is quite limiting, though — epidemic data, for example, are frequently irregular. A white noise series, or a series with cyclic behavior but no trend or seasonality, can also be considered stationary.

Assuming the data sources for the analysis are finalized and cleansing of the data is done, the workflow is: identify the patterns in the data, apply transformations based on any seasonality or trends that appear, and then begin forecasting. In the example below, forecasting is done with the forecast library's automated ARIMA model. As a case study, take the COVID-19 pandemic: the graph plots estimated forecast values of COVID-19 cases if the disease continues to spread for the next 5 weeks. If you want more on time series graphics, particularly using ggplot2, see the Graphics Quick Fix, which is meant to expose you to basic R time series capabilities and is rated fun for people ages 8 to 80.

Another useful feature of a series is its strength of seasonality, measured by $$1-\text{Var}(R_t)/\text{Var}(S_t+R_t)$$, where $$S_t$$ is the seasonal component and $$R_t$$ is the remainder component in an STL decomposition. Values close to 1 indicate a highly seasonal time series, while values close to 0 indicate a series with little seasonality. RStudio can make using R much easier, especially for the novice.
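To make the ts() syntax concrete, here is a minimal sketch in R (the rainfall figures are invented for illustration):

```r
# Hypothetical monthly rainfall values (mm) -- illustrative numbers only
rainfall <- c(799, 1174, 865, 1334, 635, 646, 222,
              345, 562, 1100, 1210, 985)

# start = c(2012, 1) marks January 2012; frequency = 12 marks monthly data
rainfall.timeseries <- ts(rainfall, start = c(2012, 1), frequency = 12)

print(rainfall.timeseries)  # printed as one row per year, one column per month
plot(rainfall.timeseries, xlab = "Year", ylab = "Rainfall (mm)")
```

Note that start and frequency together fix the calendar interpretation; end is inferred from the length of the data when omitted.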
For the COVID-19 example, take the total number of positive cases worldwide, weekly, from 22 January 2020 to 15 April 2020, as the data vector; each data point is associated with a timestamp value given by the user. After executing the forecasting code, the forecast results for the following weeks are produced.

Useful references for going further include: Time Series Analysis and Its Applications: With R Examples by Shumway and Stoffer (4th Edition; the accompanying astsa R package can be obtained in various ways, and you might also be interested in the introductory text Time Series: A Data Analysis Approach Using R); Multivariate Time Series Analysis: With R and Financial Applications by Tsay (Wiley Series in Probability and Statistics, ISBN 978-1-118-61790-8, 2014; the accompanying MTS package is available from R); and Vito Ricci's reference card of R functions for time series analysis, which covers, among others, seqplot.ts() for plotting two time series in the same plot frame, tsdiag() for time-series diagnostics, and ts.plot() for plotting several time series on a common plot (unlike plot.ts, the series can have different time bases). Base R contains substantial infrastructure for representing and analyzing time series data, while packages such as xts provide other APIs for manipulating irregular series. R itself is open (free), runs on many platforms, and can be downloaded from CRAN (the Comprehensive R Archive Network).
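The fitting-and-forecasting step can be sketched with base R alone; the counts below are placeholder values standing in for the weekly COVID-19 series, and the ARIMA order is fixed by hand where forecast::auto.arima() would choose it automatically:

```r
# Placeholder weekly case counts (not the real COVID-19 data)
cases <- ts(c(555, 1200, 2100, 3400, 5200, 7600,
              10700, 14500, 19200, 24800, 31300, 38900),
            frequency = 52)

# Fit an ARIMA(1, 2, 0): one AR term on a twice-differenced series.
# method = "ML" keeps the AR estimate in the stationary region.
fit <- arima(cases, order = c(1, 2, 0), method = "ML")

# Point forecasts (and standard errors) for the next 5 weeks
fc <- predict(fit, n.ahead = 5)
print(fc$pred)
```

With the forecast package installed, forecast(auto.arima(cases), h = 5) would pick the model order and produce prediction intervals in one step.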
https://zbmath.org/?q=an%3A0551.03027
# zbMATH — the first resource for mathematics
Undecidability of rational function fields in nonzero characteristic. (English) Zbl 0551.03027
Logic colloquium ’82, Proc. Colloq., Florence 1982, Stud. Logic Found. Math. 112, 85-95 (1984).
[For the entire collection see Zbl 0538.00003.]
Let F be an infinite perfect field of characteristic $$p>0$$. The author proves the undecidability of F(t), the rational function field, in the pure language of fields. In his proof the author first shows that certain weak monadic theories of such F are undecidable. He then shows how these theories can be interpreted in F(t). An undecidability result is also proven for F((t)), the formal power series field, in the language of valued fields with an additional predicate for F and the cross section function. The paper ends with a discussion of open problems.
Reviewer: J.M.Plotkin
##### MSC:
03D35 Undecidability and degrees of sets of sentences
03B25 Decidability of theories and sets of sentences
https://worldbuilding.stackexchange.com/questions/232303/any-fluorine-alternatives-to-the-phosphate-chain-of-atp

# Any fluorine alternatives to the phosphate chain of ATP
From what I know, the energy in ATP comes from the phosphate chain, which can be cleaved for energy using water (more specifically, the energy comes from the formation of new molecules upon cleaving the chain, as the cleavage itself takes energy), and then the cleaved phosphates can be added back into the chain, releasing water and taking up energy. I have spent a little time trying to find a replacement for the phosphates, but I keep coming up empty. So is there some replacement for the phosphate chain (one that ideally won't react with F2 gas)?
## Some clarity on the question:
I am looking for some molecule that can be put into a chain and cleaved by the addition of a simple molecule [one that's not so reactive in a fluorine environment that it'll explode] to release energy [the energy will probably come from the formation of new bonds after the chain is cleaved], and re-formed (un-cleaved) to release the same simple molecule and take up energy.
Edit: The cells' solvent is liquid hydrogen fluoride (HF) at about -30 to -50 °C.
• I'll put my thinking cap on. In the meantime, the google search phrase you want is 'reversible fluorination'. I can tell you now, there are bound to be some metal ions that bind poorly with fluorine and can be displaced by oxygen, at least in enzymes if not in bulk. Jul 4, 2022 at 4:20
• Possibly acetates? See this: link.springer.com/article/10.1007/s00253-021-11608-0 Jul 4, 2022 at 4:24
• I am not sure I understand your request: the title seems to be asking for fluorine based alternatives, while the body for non fluorine based ones?
– L.Dutch
Jul 4, 2022 at 7:38
• Well, I meant an alternative that won’t like, explode or react quickly in a fluorine environment. Jul 4, 2022 at 13:55
• What's your solvent? A big selling point of ATP is that it's essentially a fuel that burns in water (rather than needing O2 to react with). Your chemical needs to be able to take up or release a molecule of the solvent in a reaction with a large free energy difference but which does not occur rapidly without a catalyst. Jul 4, 2022 at 20:23
Nobody knows, but that won't keep me from making a wild guess!
The obvious first thing to try is short fluorophosphine chains, like $$P_3F_5$$. You could turn that into a $$-P_3F_4$$ group ($$R-PF-PF-PF_2$$) attached to an appropriate carrier molecule. That's stable enough not to immediately dissociate like polyfluoronitrogens do, and maybe you could get energy out of it by reacting with $$HF$$ to produce $$R-PF-PFH + PF_3$$, or $$R-PF-PF_2 + F_2HP$$. You'd get better solubility and more consistent chemical behavior by allowing a hydrogen substitution, so you end up with $$R-PF-PF-PFH + HF \Leftrightarrow R-PF-PFH + F_2HP$$.
I am disinclined to bother trying to figure out the energetics of such a reaction, though, because it has some significant problems; phosphate is just a generally useful ion to have around. It's charged, participates in a bunch of catalytic reactions, and can form multiple bonds to build bridges between other molecular groups--like nucleic acid backbones. Phosphines are polar but uncharged, and don't have the same kind of functional diversity. It all comes down to fluorine just not being a good substitute for oxygen. However, if there is oxygen around (which there should be, because it's more abundant than fluorine, and fluorine will displace it from water and silicates), then fluorophosphines will react with oxygen to form difluorophosphates ($$F_2PO_2^-$$), which are hydrolytically unstable in water (they want to form regular phosphates and more hydrofluoric acid) but stable in hydrofluoric acid. These could probably be chained to produce something much more similar to terrestrial phosphate chains. Rather than
$$R-O-PO(OH)-O-PO(OH)_2 + H_2PO_3 \Leftrightarrow R-O-PO(OH)-O-PO(OH)-O-PO(OH)_2 + H_2O$$
you'd get
$$R-O-POF-O-POF_2 + F_2PO(OH) \Leftrightarrow R-O-POF-O-POF-O-POF_2 + HF$$
In which we replace, not the oxygens, but specifically the hydroxyl groups with fluorines. Synthesis occurs by removing a fluorine from the chain and a hydrogen from the fluorophosphoric acid, using them to form new HF, and forming a new ester bond between the oxygen from which the hydrogen was removed and the phosphorus from which the fluorine was removed. Hydrolysis is the reverse.
I have no idea how to look up the actual energetics of such a molecule--I would be surprised if anyone has ever actually synthesized and studied it--but on general principle it should be slightly more rigid and have comparable if not slightly higher bond energy than Earthling phosphate chains.
And just to throw it out there, since I'm looking at it now, maybe you could do something useful with the nitrogen difluoride radical ('cause unlike polyfluorophosphines, polyfluoronitrogens like to dissociate).
• I am certain that the reaction would be more energetic than regular ATP, which is perfect in this case as most if not all the bonds that fluorine makes are stronger than their oxygen counterparts, so the extra energy is good in breaking these strong bonds. When you say that "difluorophosphates are stable in hydrofluoric acid" did you mean the liquid state of HF, or the solution of HF in water? because water burns in the atmosphere of this fluorine planet. Jul 10, 2022 at 19:03
• @KaffeeByte Should be stable in both situations, precisely because water burns in fluorine--HF won't displace oxygen from difluorophosphates, because if it did, it would produce water, which wants to react with fluorophosphines to make more HF! Jul 11, 2022 at 1:55
• This is probably a stupid question and I am probably overlooking something obvious, but does the oxygen radical actually participate in the reaction(bond with something), or is it there just to make the reaction happen, and not so that it bonds with anything? Jul 11, 2022 at 14:13
• @KaffeeByte Yes, it bonds with phosphorus. Jul 11, 2022 at 19:23
• I'm sorry, I am not that good at chemistry, but the thing I am confused about is that on the left side of the reaction, (R−O−POF−O−POF2+HF2PO2) there are 6 oxygens, but on the right side of the reaction, (R−O−POF−O−POF−O−POF2+HF+O) there are 7 oxygens, why is that? Where is the missing oxygen on the left side of the reaction? Jul 11, 2022 at 23:29
Nitrogen trifluoride.
https://en.wikipedia.org/wiki/Nitrogen_trifluoride
It proved to be far less reactive than the other nitrogen trihalides nitrogen trichloride, nitrogen tribromide and nitrogen triiodide, all of which are explosive. Alone among the nitrogen trihalides it has a negative enthalpy of formation.
Enthalpy of formation is −31 kJ/mol. The formation (starting with ammonia) gives off a little energy when fluorinated. Wikipedia says "It is a rare example of a binary fluoride that can be prepared directly from the elements only at very uncommon conditions, such as an electric discharge.". That is good because it means under ordinary conditions NH3 is safe from getting fluorinated. I propose that in your biology, an electric discharge or catalysts which can produce something similar are used to produce your NF3 from NH3 and harvest energy, then regenerate the NH3.
Really your energy storage molecule is not the NF3 but good old ammonia. A mellow way to get a little and give a little would be for one of the fluorines to reversibly trade places with a hydrogen. Ammonia has 3 hydrogens, NF3 has 3 fluorines, and so you can have intermediates mirroring ATP, ADP and AMP. Replacing one hydrogen with one fluorine should give you about 10 kJ/mol. Thus ammonia would be the most energetic molecule (probably ammonium fluoride in an HF solution) and you would replace H with F as you need energy. Instead of making sugar, your autotrophs make ammonia.
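The "about 10 kJ/mol" figure is just the quoted NF3 formation enthalpy split evenly across the three H→F swaps. A quick sanity check of that back-of-envelope arithmetic — the even split is an assumption; real stepwise substitution enthalpies differ:

```python
# Spread NF3's formation enthalpy (quoted above as -31 kJ/mol) evenly
# over the three H -> F substitutions that turn NH3 into NF3.
dHf_NF3 = -31.0                    # kJ/mol
per_substitution = dHf_NF3 / 3     # kJ/mol released per H -> F swap
print(round(per_substitution, 1))  # -10.3
```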
• But NH3 reacts with HF to make (NH4)F I think, then (NH4)F reacts with HF again to make (NH4)(H2F), how could this ammonia be transported to other parts of the cell? Jan 24 at 15:50
• @KaffeeByte NH4 F is ammonium fluoride - an ionic solution. Not sure about anything invoking H2F because I think H2F is not real. For your NH2F you will be altering covalent bonds - peeling an H off the ammonia and replacing it with one or more F. Jan 24 at 18:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6443169116973877, "perplexity": 1628.9611177668544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500664.85/warc/CC-MAIN-20230207233330-20230208023330-00792.warc.gz"} |
https://edoc.unibas.ch/32498/ | # Trace zero varieties in cryptography : optimal representation and index calculus
Massierer, Maike. Trace zero varieties in cryptography : optimal representation and index calculus. 2014, Doctoral Thesis, University of Basel, Faculty of Science.
Official URL: http://edoc.unibas.ch/diss/DissB_10782
We also investigate the hardness of the discrete logarithm problem in trace zero subgroups. For this purpose, we propose an index calculus algorithm to compute discrete logarithms in these groups, following the approach of Gaudry for index calculus in abelian varieties of small dimension. We make the algorithm explicit for small values of $n$ and study its complexity as well as its practical performance with the help of our own Magma implementation. Finally, we compare this approach with other possible attacks on the discrete logarithm problem in trace zero subgroups and draw some general conclusions on the suitability of these groups for cryptographic systems.} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6589726805686951, "perplexity": 664.0900664294667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178351134.11/warc/CC-MAIN-20210225124124-20210225154124-00326.warc.gz"} |
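As context for the discrete-logarithm problem the abstract discusses: the naive baseline that index calculus and similar attacks must beat is exhaustive search over exponents. A toy sketch in plain modular arithmetic (the function name is mine, not from the thesis):

```python
def dlog_bruteforce(g, h, p):
    """Return the smallest x with g**x ≡ h (mod p), or None if none exists."""
    acc = 1
    for x in range(p):
        if acc == h % p:
            return x
        acc = (acc * g) % p
    return None

print(dlog_bruteforce(3, 13, 17))  # 4, since 3**4 = 81 ≡ 13 (mod 17)
```

Index calculus replaces this O(p) scan with relation collection over a factor base plus linear algebra, which is why its feasibility in trace zero subgroups is the question the thesis studies.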
https://math.stackexchange.com/questions/1330013/clarification-of-a-proof-of-eisensteins-lemma | # Clarification of a proof of Eisenstein's lemma
I'm working on a proof of quadratic reciprocity following Wikipedia's proof via Eisenstein, and one line in the proof seems unjustified:
On the other hand, by the definition of $r(u)$ and the floor function, $$\frac{qu}p = \left \lfloor \frac{qu}p\right \rfloor + \frac{r(u)}p,$$ and so since $p$ is odd and $u$ is even, we see that $\left \lfloor qu/p \right \rfloor$ and $r(u)$ are congruent modulo 2.
$p$ and $q$ here are distinct odd primes, $u$ is an even number $1\le u\le p-1$, and $r(u)=({qu\bmod p})$. A simple question, but I don't see how to derive the claim that $r(u)\equiv\left \lfloor qu/p \right \rfloor\pmod 2$ here.
Cross multiply by $p$:
$$qu = p \left \lfloor { qu \over p} \right \rfloor + r(u)$$
$u$ is even, so the left hand side of the equality is even, so congruent to $0$ modulo $2$; $p$ is odd, so congruent to $1$ modulo $2$ - so the equation modulo $2$ is:
$$0 \equiv \left \lfloor { qu \over p} \right \rfloor + r(u) \pmod 2$$
$-1$ is equivalent to $1$ modulo $2$, so after rearrangement:
$$\left \lfloor { qu \over p} \right \rfloor \equiv r(u) \pmod 2$$
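A quick numeric spot-check of the parity claim (not part of the proof itself):

```python
# Verify r(u) ≡ floor(qu/p) (mod 2) for some distinct odd primes p, q
# and every even u with 1 <= u <= p - 1.
for p, q in [(7, 5), (11, 3), (13, 11), (19, 7)]:
    for u in range(2, p, 2):
        floor_term = (q * u) // p
        r = (q * u) % p
        assert floor_term % 2 == r % 2, (p, q, u)
print("parity matches in every sampled case")
```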
• That's nicer than I thought it was going to be (no case analysis needed). +1 – Mario Carneiro Jun 18 '15 at 11:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9619789719581604, "perplexity": 115.57815240129374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998440.47/warc/CC-MAIN-20190617063049-20190617085049-00100.warc.gz"} |
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=1950 | ## Features & Development
### Using multiple source files for a single problem
by Danny Glin -
Number of replies: 2
Has anyone experimented with having webwork randomly select from a list of pg files for a single problem? What we would like to do is create an assignment in which the source for each question is a list of pg files, where for a given seed (or student) webwork selects one of the files to display. Any thoughts?
### Re: Using multiple source files for a single problem
by Davide Cervone -
The Union macro library has a file called unionInclude.pl that implements what I think you are looking for. If you create a PG file containing
DOCUMENT();
loadMacros("unionInclude.pl");
includeRandomProblem(
  "file1.pg",
  "file2.pg",
  ...
  "filen.pg",
);
ENDDOCUMENT();
and then create a problem set that contains one or more instances of this file, each copy of the file will include a different random problem from the list of files given, and each student will see the problems in a different order.
Some time ago I put together some macros for allowing a single problem to load several different problem files (at the student's request), so that a single problem could be used to practice more problems of the same type until the student was satisfied. This is slightly different from what you are talking about, but might be useful anyway. You can see the discussion at
(near the end of a string of comments on this -- search for Davide -- [ed.])
and a link to the macro files is
http://cvs.webwork.rochester.edu/viewcvs.cgi/union_problib/examples/multiProblem/?cvsroot=Union+College
Hope that helps.
Davide
(Edited by Michael Gage - original submission Friday, 2 February 2007, 10:20 PM)
group:setname
where setname is the name of a problem set that lists all of the problems from which to select. It need not (arguably, should not) be assigned to anyone, as it serves just as a container for the group of problems that are being selected. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39240798354148865, "perplexity": 894.7491919002682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780061350.42/warc/CC-MAIN-20210929004757-20210929034757-00358.warc.gz"} |
http://www.ck12.org/book/CK-12-Concept-Middle-School-Math---Grade-6/r1/section/3.24/ | <meta http-equiv="refresh" content="1; url=/nojavascript/"> Problem Solving Plan, Estimation with Decimals | CK-12 Foundation
You are reading an older version of this FlexBook® textbook: CK-12 Middle School Math Concepts - Grade 6 Go to the latest version.
# 3.24: Problem Solving Plan, Estimation with Decimals
Jose has enjoyed working all summer. He loved helping Mr. Harris, and his recycling idea ended up being very profitable. Jose began the summer with an estimate of how much money he thought he would make. He earned $7.00 per hour and he worked ten 30-hour weeks. Jose ended up earning $2100.00 for the summer, and he is very pleased with his accomplishment. Now that the summer is over, Jose wishes to spend part of his money on new clothes for school. He has selected the following items:

$19.95
$32.95
$46.75

Jose brought $100.00 with him to purchase the items.
If he estimates the total cost, what would it be?
#### Example B
Jesse ran 16.5 miles one day and 22.8 miles the next. What is his estimated total mileage?
Solution: 40 miles
#### Example C
Kara biked 25.75 miles one day and 16.2 miles the next. What is her estimated total mileage?
Solution: 42 miles
Remember Jose? Jose brought $100.00 with him to purchase the items. If he estimates the total cost, what would it be? How much change will Jose receive from the $100.00?
We could use a couple of different strategies to estimate the total of Jose’s purchases.
We could use rounding or front–end estimation.
Let’s use rounding first.
$19.95 rounds to $20.00
$32.95 rounds to $33.00
$46.75 rounds to $47.00
Our estimate is $100.00. Hmmm. Ordinarily, rounding would give us an excellent estimate, but in this case our estimate is exactly the amount of money Jose wishes to pay with. Because of this, let's try another strategy. Let's use front–end estimation and see if we can get a more accurate estimate.

$19 + $32 + $46 = $97

$1 + $1 + $0.80 = $2.80

Our estimate is $99.80.

With front–end estimation, we can estimate that Jose will receive $0.20 in change from his $100.00. While he isn't going to get a lot of change back, he is going to receive some change, so he does have enough money to make his purchases.

### Vocabulary

Here are the vocabulary words in this Concept.

Round
to move a number up or down to a given place value

Estimate
an approximate solution to a problem

### Guided Practice

Here is one for you to try on your own.

Tina is working to buy presents for her family for the holidays. She has picked out a CD for her brother for $14.69, a vase for her mother at $32.25, and a picture frame for her father at $23.12. Use rounding to estimate the sum of Tina's purchases.
To use rounding, we first round each item that Tina bought to the nearest whole dollar.

$14.69 rounds to $15.
$32.25 rounds to $32.
$23.12 rounds to $23.

$15 + $32 + $23 = $70

The estimated cost of Tina's purchases is $70.00.

### Video Review

Here is a video for review.

### Practice

Directions: Look at each problem and use what you have learned about estimation to solve each problem.

1. Susan is shopping. She has purchased two hats at $5.95 each and two sets of gloves at $2.25 each. If she rounds each purchase price, how much can she estimate spending?
2. If she uses front–end estimation, how does this change her answer?
3. Which method of estimation gives us a more precise estimate of Susan's spending?
4. If she brings $20.00 with her to the store, about how much change can she expect to receive?
5. If she decided to purchase one more pair of gloves, would she have enough money to make this purchase?
6. Would she receive any change back? If yes, about how much?
7. Mario is working at a fruit stand for the summer. If a customer buys 3 oranges at $.99 apiece and two apples at $.75 apiece, about how much money will the customer spend at the fruit stand? Use rounding to find your answer.
8. What is the estimate if you use front–end estimation?
9. Why do you think you get the same answer with both methods?
10. If the customer gives Mario a $10.00 bill, about how much change should the customer receive back?
11. Christina is keeping track of the number of students that have graduated from her middle school over the past five years. Here are her results.
2004 – 334
2005 – 367
2006 – 429
2007 – 430
2008 – 450
Estimate the number of students who graduated in the past five years.
12. Did you use rounding or front–end estimation?
13. Why couldn’t you use front–end estimation for this problem?
14. Carlos has been collecting change for the past few weeks. He has 5 nickels, 10 dimes, 6 quarters and four dollar bills. Write out each money amount.
15. Use rounding to estimate the sum of Carlos’ money.
16. Use front–end estimation to estimate the sum of Carlos’ money.
17. Which method gives you a more accurate estimate? Why?
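The two strategies this Concept uses — rounding to the nearest dollar, and front-end estimation that keeps the dollars exact and rounds only the cents — can be sketched in a few lines. The helper names are mine:

```python
from decimal import Decimal, ROUND_HALF_UP

def rounding_estimate(prices):
    """Round each price to the nearest whole dollar, then add."""
    return sum(p.quantize(Decimal("1"), rounding=ROUND_HALF_UP) for p in prices)

def front_end_estimate(prices):
    """Add the dollar parts exactly, then add the cents rounded to the nearest dime."""
    dollars = sum(int(p) for p in prices)
    cents = sum((p - int(p)).quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)
                for p in prices)
    return dollars + cents

jose = [Decimal("19.95"), Decimal("32.95"), Decimal("46.75")]
print(rounding_estimate(jose))   # 100
print(front_end_estimate(jose))  # 99.8
```

This reproduces the worked example: rounding lands exactly on Jose's $100.00, while front-end estimation gives the finer $99.80.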
Today, 11:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 2, "texerror": 0, "math_score": 0.35675179958343506, "perplexity": 3292.418196039786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447548622.138/warc/CC-MAIN-20141224185908-00085-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/revolving-regions.741273/ | # Revolving regions
1. Mar 3, 2014
### Blonde1551
1. The problem statement, all variables and given/known data
Find volume of the solid generated by revolving the region bounded by x = y^2, x=4
about the line x = 5
2. Relevant equations
the washer method: V = π ∫ from c to d of [R(y)^2 - r(y)^2] dy
3. The attempt at a solution
I set r(y) = 1 and R(y)= y^2
and got the integral π ∫ from 0 to 2 of [(y^2)^2 - (1)^2] dy
I got an answer of 16.96, but I know this is wrong because the back of the book gives a different answer. Please tell me where I went wrong.
Last edited: Mar 3, 2014
2. Mar 3, 2014
### SammyS
Staff Emeritus
Think about what the washer method involves -- why it's called the "washer" method.
What do r(y) and R(y) represent? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8905962109565735, "perplexity": 966.838739758489}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257649961.11/warc/CC-MAIN-20180324073738-20180324093738-00149.warc.gz"} |
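For reference once SammyS's hint is worked through: both washer radii are distances from the axis of revolution x = 5, so R(y) = 5 − y² and r(y) = 1, with y running from −2 to 2. A sketch of the resulting computation (my setup, not the book's printed solution):

```python
from fractions import Fraction
import math

# V = π ∫_{-2}^{2} [(5 - y^2)^2 - 1^2] dy; the integrand expands to
# 24 - 10y^2 + y^4, with antiderivative 24y - (10/3)y^3 + (1/5)y^5.
def antiderivative(y):
    y = Fraction(y)
    return 24 * y - Fraction(10, 3) * y**3 + Fraction(1, 5) * y**5

coeff = antiderivative(2) - antiderivative(-2)
print(coeff)                   # 832/15
print(float(coeff) * math.pi)  # the volume, 832π/15 ≈ 174.25
```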
http://stevelosh.com/blog/2016/09/iterate-averaging/ | # Customizing Common Lisp's Iterate: Averaging
Posted on September 20, 2016.
When I first started learning Common Lisp, one of the things I learned was the loop macro. loop is powerful, but it’s not extensible and some people find it ugly. The iterate library was made to solve both of these problems.
Unfortunately I haven’t found many guides or resources on how to extend iterate. The iterate manual describes the macros you need to use, but only gives a few sparse examples. Sometimes it’s helpful to see things in action.
I’ve made a few handy extensions myself in the past couple of months, so I figured I’d post about them in case someone else is looking for examples of how to write their own iterate clauses and drivers.
This entry is the first in a series:
This first post will show how to make a averaging clause that keeps a running average of a given expression during the loop. I’ve found it handy in a couple of places.
## End Result
Before we look at the code, let’s look at what we’re aiming for:
(iterate (for i :in (list 20 10 10 20))
(averaging i))
; =>
15
(iterate (for l :in '((10 :foo) (20 :bar) (0 :baz)))
(averaging (car l) :into running-average)
(collect running-average))
; =>
(10 15 10)
Simple enough. The averaging clause takes an expression (and optionally a variable name) and averages its value over each iteration of the loop.
## Code
There’s not much code to averaging, but it does contain a few ideas that crop up often when writing iterate extensions:
(defmacro-clause (AVERAGING expr &optional INTO var)
  "Maintain a running average of expr in var.

If var is omitted the final average will be returned from the loop.

Examples:

  (iterate (for x :in '(0 10 0 10))
           (averaging x))
  =>
  5

  (iterate (for x :in '(1.0 1 2 3 4))
           (averaging (/ x 10) :into avg)
           (collect avg))
  =>
  (0.1 0.1 0.13333334 0.17500001 0.22)
"
  (with-gensyms (count total)
    (let ((average (or var iterate::*result-var*)))
      `(progn
         (for ,count :from 1)
         (sum ,expr :into ,total)
         (for ,average = (/ ,total ,count))))))
We use defmacro-clause to define the clause. Check the iterate manual to learn more about the basics of that.
The first thing to note is the big docstring, which describes how to use the clause and gives some examples. I prefer to err on the side of providing more information in documentation rather than less. People who don’t need the hand-holding can quickly skim over it, but if you omit information it can leave people confused. Your monitor isn’t going to run out of ink and you type fast (right?) so be nice and just write the damn docs.
Next up is selecting the name of the variable for the average. The (or var iterate::*result-var*) pattern is one I use often when writing iterate clauses. It’s kind of weird that *result-var* isn’t external in the iterate package, but this idiom is explicitly mentioned in the manual so I suppose it’s fine to use.
Finally, we could have written a simpler version of averaging that just returned the result from the loop:
(defmacro-clause (AVERAGING expr)
  (with-gensyms (count total)
    `(progn
       (for ,count :from 1)
       (sum ,expr :into ,total)
       (finally (return (/ ,total ,count))))))
This would work, but doesn’t let us see the running average during the course of the loop. iterate’s built-in clauses like collect and sum usually allow you to access the “in-progress” value, so it’s good for our extensions to support it too.
## Debugging
This clause is pretty simple, but more complicated ones can get a bit tricky. When writing vanilla Lisp macros I usually end up writing the macro and then macroexpand-1‘ing a sample of it to make sure it’s expanding to what I think it should.
As far as I can tell there’s no simple way to macroexpand an iterate clause on its own. This is really a pain in the ass when you’re trying to debug them, so I hacked together a macroexpand-iterate function for my own sanity. It’s not pretty, but it gets the job done:
(macroexpand-iterate '(averaging (* 2 x)))
; =>
(PROGN
(FOR #:COUNT518 :FROM 1)
(SUM (* 2 X) :INTO #:TOTAL519)
(FOR ITERATE::*RESULT-VAR* = (/ #:TOTAL519 #:COUNT518)))
(macroexpand-iterate '(averaging (* 2 x) :into foo))
; =>
(PROGN
(FOR #:COUNT520 :FROM 1)
(SUM (* 2 X) :INTO #:TOTAL521)
(FOR FOO = (/ #:TOTAL521 #:COUNT520))) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4549725651741028, "perplexity": 3220.9762049721385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00273-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://worldwidescience.org/topicpages/a/astigmatism.html | #### Sample records for astigmatism
1. Optics of astigmatism and retinal image quality
OpenAIRE
Vilaseca, M.; Díaz-Doutón, F.; Luque, S. O.; Aldaba, M.; Arjona, M.; Pujol, J.
2012-01-01
In the first part of this chapter, the optical condition of astigmatism is defined. The main causes and available classifications of ocular astigmatism are briefly described. The most relevant optical properties of image formation in an astigmatic eye are analysed and compared to that of an emmetropic eye and an eye with spherical ametropia. The spectacle prescription and axis notation for astigmatism are introduced, and the correction of astigmatism by means of lenses is briefly described. ...
2. Anterior and Posterior Corneal Astigmatism after Refractive Lenticule Extraction for Myopic Astigmatism
Directory of Open Access Journals (Sweden)
Kazutaka Kamiya
2015-01-01
Full Text Available Purpose. To assess the amount and the axis orientation of anterior and posterior corneal astigmatism after refractive lenticule extraction (ReLEx) for myopic astigmatism. Methods. We retrospectively examined 53 eyes of 53 consecutive patients (mean age ± standard deviation, 33.2 ± 6.5 years) undergoing ReLEx to correct myopic astigmatism (manifest cylinder ≥ 0.5 diopters (D)). Power vector analysis was performed with anterior and posterior corneal astigmatism measured with a rotating Scheimpflug system (Pentacam HR, Oculus) and refractive astigmatism preoperatively and 3 months postoperatively. Results. Anterior corneal astigmatism was significantly decreased, measuring 1.42 ± 0.73 diopters (D) preoperatively and 1.11 ± 0.53 D postoperatively (p<0.001, Wilcoxon signed-rank test). Posterior corneal astigmatism showed no significant change, falling from 0.44 ± 0.12 D preoperatively to 0.42 ± 0.13 D postoperatively (p=0.18). Refractive astigmatism decreased significantly, from 0.92 ± 0.51 D preoperatively to 0.27 ± 0.44 D postoperatively (p<0.001). The anterior surface showed with-the-rule astigmatism in 51 eyes (96%) preoperatively and 48 eyes (91%) postoperatively. By contrast, the posterior surface showed against-the-rule astigmatism in all eyes preoperatively and postoperatively. Conclusions. The surgical effects were largely attributed to the astigmatic correction of the anterior corneal surface. Posterior corneal astigmatism remained unchanged even after ReLEx for myopic astigmatism.
3. Distribution of posterior corneal astigmatism according to axis orientation of anterior corneal astigmatism.
Directory of Open Access Journals (Sweden)
Toshiyuki Miyake
Full Text Available To investigate the distribution of posterior corneal astigmatism in eyes with with-the-rule (WTR) and against-the-rule (ATR) anterior corneal astigmatism. We retrospectively examined six hundred eight eyes of 608 healthy subjects (275 men and 333 women; mean age ± standard deviation, 55.3 ± 20.2 years). The magnitude and axis orientation of anterior and posterior corneal astigmatism were determined with a rotating Scheimpflug system (Pentacam HR, Oculus) when we divided the subjects into WTR and ATR anterior corneal astigmatism groups. The mean magnitudes of anterior and posterior corneal astigmatism were 1.14 ± 0.76 diopters (D) and 0.37 ± 0.19 D, respectively. We found a significant correlation between the magnitudes of anterior and posterior corneal astigmatism (Pearson correlation coefficient r = 0.4739, P<0.001). In the WTR anterior astigmatism group, we found ATR astigmatism of the posterior corneal surface in 402 eyes (96.6%). In the ATR anterior astigmatism group, we found ATR posterior corneal astigmatism in 82 eyes (73.9%). Especially in eyes with ATR anterior corneal astigmatism of 1 D or more and 1.5 D or more, ATR posterior corneal astigmatism was found in 28 eyes (59.6%) and 9 eyes (42.9%), respectively. WTR anterior astigmatism and ATR posterior astigmatism were found in approximately 68% and 91% of eyes, respectively. The magnitude and the axis orientation of posterior corneal astigmatism were not constant, especially in eyes having high ATR anterior corneal astigmatism, as is often the case in patients who have undergone toric IOL implantation.
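The "power vector analysis" used in the two studies above is the standard decomposition of a spherocylinder into a spherical equivalent M and two Jackson-cross-cylinder components: J0 (with/against-the-rule) and J45 (oblique). A minimal sketch of those formulas — the example values are illustrative, not taken from the studies:

```python
import math

def power_vector(sphere, cylinder, axis_deg):
    """Return (M, J0, J45) for a spherocylindrical power; positive J0 is
    with-the-rule, negative J0 against-the-rule, J45 the oblique part."""
    a = math.radians(axis_deg)
    M = sphere + cylinder / 2.0
    J0 = -(cylinder / 2.0) * math.cos(2.0 * a)
    J45 = -(cylinder / 2.0) * math.sin(2.0 * a)
    return M, J0, J45

# Example: -1.00 D of cylinder at axis 180 (a with-the-rule cornea)
M, J0, J45 = power_vector(0.0, -1.0, 180.0)
# M = -0.50, J0 = +0.50 (with-the-rule), J45 ≈ 0
```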
4. A design of PAL with astigmatism
Science.gov (United States)
Wei, Yefei; Xiang, Huazhong; Zhu, Tianfeng; Chen, Jiabi
2015-08-01
Progressive addition lens (PAL) is designed for those who suffer from myopia and presbyopia to have a clear vision from a far distance to a nearby distance. Additionally there are many people that also suffer from astigmatism and need to be corrected. The cylinder power can't be simply added to the diopter of the PAL directly, because the diopter of the PAL needs to be changed smoothly. A methods has been proposed in this article to solve the problem, the freeform surface height of a PAL without astigmatism and the cylindrical lens surface height for the correction of astigmatism are calculated separately. The both two surface heights were added together, then the final surface is produced and shown with the both properties of PALs and cylindrical lenses used to correct the astigmatism.
5. Tubular astigmatism-tunable fluidic lens.
Science.gov (United States)
Kopp, Daniel; Zappe, Hans
2016-06-15
We demonstrate a new means to fabricate three-dimensional liquid lenses which may be tuned in focal length and astigmatism. Using actuation by electrowetting-on-dielectrics, astigmatism in arbitrary directions may be tuned independently, with almost no cross talk between orthogonal orientations. The lens is based on electrodes structured on planar polyimide foils and subsequently rolled, enabling high-resolution patterning of complex electrodes along the azimuthal and radial directions of the lens. Based on a design established through fluidic and optical simulations, the astigmatism tuning is experimentally verified by a change of the corresponding Zernike coefficients measured using a Shack-Hartmann wavefront sensor. It was seen that the back focal length can be tuned by 5 mm and 0° and 45° astigmatism by 3 μm through application of voltages in the range of 50 Vrms. It was observed that the cross talk with other aberrations is very low, suggesting a novel means for astigmatism control in imaging systems.
6. Tubular astigmatism-tunable fluidic lens.
Science.gov (United States)
Kopp, Daniel; Zappe, Hans
2016-06-15
We demonstrate a new means to fabricate three-dimensional liquid lenses which may be tuned in focal length and astigmatism. Using actuation by electrowetting-on-dielectrics, astigmatism in arbitrary directions may be tuned independently, with almost no cross talk between orthogonal orientations. The lens is based on electrodes structured on planar polyimide foils and subsequently rolled, enabling high-resolution patterning of complex electrodes along the azimuthal and radial directions of the lens. Based on a design established through fluidic and optical simulations, the astigmatism tuning is experimentally verified by a change of the corresponding Zernike coefficients measured using a Shack-Hartmann wavefront sensor. It was seen that the back focal length can be tuned by 5 mm and 0° and 45° astigmatism by 3 μm through application of voltages in the range of 50 Vrms. It was observed that the cross talk with other aberrations is very low, suggesting a novel means for astigmatism control in imaging systems. PMID:27304276
7. Selective suture cutting for control of astigmatism following cataract surgery
Directory of Open Access Journals (Sweden)
Bansal R
1992-01-01
Full Text Available Use of 10-0 monofilament nylon in ECCE cataract surgery leads to high with the rule astigmatism. Many intraoperative and post operative methods have been used to minimise post operative astigmatism. We did selective suture cutting in 38 consecutive patients. Mean keratometric astigmatism at three and six weeks post operative was 5.76 and 5.42 dioptres (D), respectively. 77.5% of eyes had astigmatism above 2 D. Selective suture cutting along the axis of the plus high cylinder was done after six weeks of surgery. Mean post suture cutting keratometric astigmatism was 3.3 D and 70% of the eyes had astigmatism below 2 D. After 3 months of surgery mean keratometric astigmatism was reduced to 1.84 D. Axis of the astigmatism also changed following suture cutting. 40% of the eyes showed improvement in their Snellen acuity following reduction in the cylindrical power.
8. Three methods for correction of astigmatism during phacoemulsification
Directory of Open Access Journals (Sweden)
2016-01-01
Conclusion: There was no significant difference in astigmatism reduction among the three methods of astigmatism correction during phacoemulsification. Each of these methods can be used at the discretion of the surgeon.
9. Rectangular Laser Resonators with Astigmatic Compensation
DEFF Research Database (Denmark)
Skettrup, Torben
2005-01-01
An investigation of rectangular resonators with a view to the compensation of astigmatism has been performed. In order to have beam waists placed at the same positions in the tangential and sagittal planes, pairs of equal mirrors were considered. It was found that at least two concave mirrors are necessary to obtain compensation. Four-concave-mirror systems are most stable close to the quadratic geometry, although the symmetric quadratic resonator itself cannot be compensated for astigmatism. Using four equal concave mirrors, compensation of astigmatism can be obtained in two arms at the same time. Usually several stability ranges are found for four-mirror resonators with pair-wise equal mirrors, and it is possible with these systems to obtain small compensated beam waist radii suitable for frequency conversion. Relevant formulae are given and several relevant examples are shown using simulation…
10. Astigmatism in relation to length and site of corneal lacerations
Directory of Open Access Journals (Sweden)
Srihari Atti
2016-01-01
Conclusions: Corneal astigmatism depends upon the length and site of the corneal laceration. The severity of astigmatism was directly proportional to the length of the corneal laceration, and the nearer the wound was to the centre of the cornea, the greater the astigmatism. [Int J Res Med Sci 2016; 4(1): 165-168]
11. Optical advantages of astigmatic aberration corrected heliostats
Science.gov (United States)
van Rooyen, De Wet; Schöttl, Peter; Bern, Gregor; Heimsath, Anna; Nitz, Peter
2016-05-01
Astigmatic aberration corrected heliostats adapt their shape depending on the incidence angle of the sun on the heliostat. Simulations show that this optical correction leads to a higher concentration ratio at the target and thus to a decrease in the required receiver aperture, in particular for smaller heliostat fields.
12. Surgical management of astigmatism with toric intraocular lenses
Directory of Open Access Journals (Sweden)
Bruna V. Ventura
2014-04-01
Correction of corneal astigmatism is a key element of cataract surgery, since post-surgical residual astigmatism can compromise the patient's uncorrected visual acuity. Toric intraocular lenses (IOLs) compensate for corneal astigmatism at the time of surgery, correcting ocular astigmatism, and are a predictable treatment. However, accurate measurement of corneal astigmatism is mandatory for choosing the correct toric IOL power and for planning optimal alignment. When calculating the power of toric IOLs, it is important to consider anterior and posterior corneal astigmatism, along with the surgically induced astigmatism. Accurate toric lens alignment along the calculated meridian is also crucial to achieve effective astigmatism correction. There are several techniques to guide IOL alignment, including the traditional manual marking technique and automated systems based on anatomic and topographic landmarks. The aim of this review is to provide an overview of astigmatism management with toric IOLs, including relevant patient selection criteria, corneal astigmatism measurement, toric IOL power calculation, toric IOL alignment, clinical outcomes and complications.
13. Effects of posterior corneal astigmatism on the accuracy of AcrySof toric intraocular lens astigmatism correction
Science.gov (United States)
Zhang, Bin; Ma, Jing-Xue; Liu, Dan-Yan; Guo, Cong-Rong; Du, Ying-Hua; Guo, Xiu-Jin; Cui, Yue-Xian
2016-01-01
AIM To evaluate the effects of posterior corneal surface measurements on the accuracy of total estimated corneal astigmatism. METHODS Fifty-seven patients with toric intraocular lens (IOL) implantation and posterior corneal astigmatism exceeding 0.5 dioptre were enrolled in this retrospective study. The keratometric astigmatism (KA) and total corneal astigmatism (TA) were measured using a Pentacam rotating Scheimpflug camera to assess the outcomes of AcrySof IOL implantation. Toric IOLs were evaluated in 26 eyes using KA measurements and in 31 eyes using TA measurements. Preoperative corneal astigmatism and postoperative refractive astigmatism were recorded for statistical analysis. The cylindrical power of the toric IOLs was estimated in all eyes. RESULTS In all cases, the difference in toric IOL astigmatism magnitude between KA and TA measurements for the estimation of preoperative corneal astigmatism was statistically significant. Of the 57 cases, 50.88% decreased from Tn to Tn-1 and 10.53% from Tn to Tn-2, while 5.26% increased from Tn to Tn+1. The mean postoperative astigmatism in the TA group was significantly lower than that in the KA group. CONCLUSION The accuracy of total corneal astigmatism calculations and the efficacy of toric IOL correction can be enhanced by measuring both the anterior and posterior corneal surfaces using a Pentacam rotating Scheimpflug camera. PMID:27672591
14. Kerr-lens Mode Locking Without Nonlinear Astigmatism
CERN Document Server
Yefet, Shi; Pe'er, Avi
2013-01-01
We demonstrate a Kerr-lens mode-locked folded cavity using a planar (non-Brewster) Ti:sapphire crystal as a gain and Kerr medium, thus cancelling the nonlinear astigmatism caused by a Brewster-cut Kerr medium. Our method uses a novel cavity folding in which the intra-cavity laser beam propagates in two perpendicular planes such that the astigmatism of one mirror is compensated by the other mirror, enabling the introduction of an astigmatism-free, planar-cut gain medium. We demonstrate that this configuration is inherently free of nonlinear astigmatism, which in standard cavity folding requires a special, power-specific compensation.
15. Effect of astigmatism on spectral switches of partially coherent beams
Institute of Scientific and Technical Information of China (English)
Zhao Guang-Pu; Xiao Xi; Lü Bai-Da
2004-01-01
A detailed study of the spectrum of partially coherent beams diffracted at an astigmatic aperture lens is performed. Considerable attention is paid to the effect of astigmatism on spectral switches of polychromatic Gaussian Schell-model beams. It is shown that the spectral switch can also take place in the vicinity of the intensity minimum in the geometrical focal plane for the astigmatic case, but the astigmatism of the lens and the spatial correlation of the beam affect the critical position u_c, spectral minimum S_min, and transition height Δ of the spectral switches.
16. The effects of lateral head tilt on ocular astigmatic axis
Directory of Open Access Journals (Sweden)
Hamid Fesharaki
2014-01-01
Conclusion: Any minimal angle of head tilt may cause erroneous measurement of the astigmatic axis and should be avoided during refraction. One cannot rely on the compensatory function of ocular counter-torsion during refraction.
17. Treatment of corneal astigmatism with the new small-incision lenticule extraction (SMILE) laser technique: Is treatment of high degree astigmatism equally accurate, stable and safe as treatment of low degree astigmatism?
DEFF Research Database (Denmark)
Hansen, Rasmus Søgaard; Grauslund, Jakob; Lyhne, Niels;
Field: Ophthalmology. Introduction: SMILE has proven effective in treatment of myopia and low degrees of astigmatism (less than 2 dioptres (D)), but there are no studies on treatment of high degrees of astigmatism (2 or more D). The aim of this study was to compare results after SMILE treatment for low or high degrees of astigmatism concerning accuracy, stability, and safety. Methods: Retrospective study of 1017 eyes treated with SMILE for myopia with low astigmatism or myopia with high astigmatism from 2011-2013 at the Department of Ophthalmology, Odense University Hospital, Denmark. Inclusion criteria were: best spectacle-corrected visual acuity (BSCVA) of 20/25 or better on Snellen chart, and no other ocular condition than myopia with or without astigmatism. Results: In total 660 eyes completed the 3 months follow-up examination, in which 536 eyes had pre-operatively low astigmatism (mean…
18. Quasi-Bessel beams from asymmetric and astigmatic illumination sources.
Science.gov (United States)
Müller, Angelina; Wapler, Matthias C; Schwarz, Ulrich T; Reisacher, Markus; Holc, Katarzyna; Ambacher, Oliver; Wallrabe, Ulrike
2016-07-25
We study the spatial intensity distribution and the self-reconstruction of quasi-Bessel beams produced from refractive axicon lenses with edge emitting laser diodes as asymmetric and astigmatic illumination sources. Comparing these to a symmetric mono-mode fiber source, we find that the asymmetry results in a transition of a quasi-Bessel beam into a bow-tie shaped pattern and eventually to a line shaped profile at a larger distance along the optical axis. Furthermore, we analytically estimate and discuss the effects of astigmatism, substrate modes and non-perfect axicons. We find a good agreement between experiment, simulation and analytic considerations. Results include the derivation of a maximal axicon angle related to astigmatism of the illuminating beam, impact of laser diode beam profile imperfections like substrate modes and a longitudinal oscillation of the core intensity and radius caused by a rounded axicon tip. PMID:27464190
19. Interaction of axial and oblique astigmatism in theoretical and physical eye models.
Science.gov (United States)
Liu, Tao; Thibos, Larry N
2016-09-01
The interaction between oblique and axial astigmatism was investigated analytically (generalized Coddington's equations) and numerically (ray tracing) for a theoretical eye model with a single refracting surface. A linear vector-summation rule for power vector descriptions of axial and oblique astigmatism was found to account for their interaction over the central 90° diameter of the visual field. This linear summation rule was further validated experimentally using a physical eye model measured with a laboratory scanning aberrometer. We then used the linear summation rule to evaluate the relative contributions of axial and oblique astigmatism to the total astigmatism measured across the central visual field. In the central visual field, axial astigmatism dominates because the oblique astigmatism is negligible near the optical axis. At intermediate eccentricities, axial and oblique astigmatism may have equal magnitude but orthogonal axes, which nullifies total astigmatism at two locations in the visual field. At more peripheral locations, oblique astigmatism dominates axial astigmatism, and the axes of total astigmatism become radially oriented, which is a trait of oblique astigmatism. When eccentricity is specified relative to a foveal line-of-sight that is displaced from the eye's optical axis, asymmetries in the visual field map of total astigmatism can be used to locate the optical axis empirically and to estimate the relative contributions of axial and oblique astigmatism at any retinal location, including the fovea. We anticipate the linear summation rule will benefit many topics in vision science (e.g., peripheral correction, emmetropization, meridional amblyopia) by providing improved understanding of how axial and oblique astigmatism interact to produce net astigmatism. PMID:27607493
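The linear vector-summation rule described above operates on Thibos power vectors. As a minimal sketch (using the standard sphero-cylinder to power-vector conversions; the numeric values are hypothetical and this is not the authors' code), two astigmatisms of equal magnitude but orthogonal axes cancel when summed component-wise:

```python
import math

def to_power_vector(sphere, cyl, axis_deg):
    """Convert sphero-cylinder notation to a Thibos power vector (M, J0, J45)."""
    a = math.radians(axis_deg)
    M = sphere + cyl / 2.0
    J0 = -(cyl / 2.0) * math.cos(2.0 * a)
    J45 = -(cyl / 2.0) * math.sin(2.0 * a)
    return (M, J0, J45)

def from_power_vector(M, J0, J45):
    """Convert a power vector back to sphere/cylinder/axis (negative-cylinder form)."""
    cyl = -2.0 * math.hypot(J0, J45)
    sphere = M - cyl / 2.0
    axis = (math.degrees(math.atan2(J45, J0)) / 2.0) % 180.0
    return (sphere, cyl, axis)

# Hypothetical example: axial astigmatism at 90 deg, oblique at 180 deg.
axial = to_power_vector(0.0, -1.0, 90.0)
oblique = to_power_vector(0.0, -1.0, 180.0)
# Linear summation rule: total astigmatism is the component-wise vector sum;
# equal magnitudes with orthogonal axes nullify (J0 and J45 cancel).
total = tuple(a + o for a, o in zip(axial, oblique))
```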
20. Astigmatically compensated cw dye laser resonators using lenses
Energy Technology Data Exchange (ETDEWEB)
Hueffer, W.; Schieder, R.; Brinkmann, U. (Koeln Univ. (Germany, F.R.). 1. Physikalisches Inst.)
1978-01-01
The compensation of the astigmatism of Brewster-angled dye cells can be performed with lenses as well as with spherical mirrors. The advantages of cw dye laser resonators using lenses lie in the linear configuration of the optics and the convenient adjustability of the astigmatism compensation. An output efficiency of 30% was observed at a pumping power of only 1.6 W. In addition, ring laser resonators with lenses have been investigated; they delivered up to 400 mW of single-mode power in travelling-wave operation at less than 2 W pumping power.
1. An astigmatic corrected target-aligned heliostat for high concentration
Energy Technology Data Exchange (ETDEWEB)
Zaibel, R.; Dagan, E.; Karni, J. [Solar Research Facility, The Weizmann Institute of Science, Rehovot (Israel); Ries, Harald [Ludwig-Maximilians-Universitaet, Sektion Physik, Munich (Germany)
1995-05-28
Conventional heliostats suffer from astigmatism at non-normal incidence: for tangential rays the focal length is shortened, while for sagittal rays it is longer than the nominal focal length. Due to this astigmatism it is impossible to produce a sharp image of the sun, and the rays are spread over a larger area. In order to correct this, the heliostat should have different curvature radii along the sagittal and tangential directions in the heliostat plane, just like a non-axial part of a paraboloid. In conventional heliostats, where the first axis, fixed with respect to the ground, is vertical while the second, fixed with respect to the reflector surface, is horizontal, such an astigmatism correction is not practical because the sagittal and tangential directions rotate with respect to the reflector. We suggest an alternative mount where the first axis is oriented towards the target. The second axis, perpendicular to the first and tangent to the reflector, coincides with the tangential direction. With this mounting, the sagittal and tangential directions are fixed with respect to the reflector during operation, so a partial astigmatism compensation is possible. We calculate the optimum correction and show the performance of the heliostat. We also show the predicted yearly average concentrations.
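The tangential/sagittal splitting named here follows the standard off-axis formulas for a spherical mirror, f_t = (R/2)·cos θ and f_s = (R/2)/cos θ. A minimal sketch with hypothetical numbers (not taken from this record):

```python
import math

def off_axis_focal_lengths(R, theta_deg):
    """Tangential and sagittal focal lengths of a spherical mirror of
    curvature radius R at incidence angle theta (standard off-axis formulas:
    f_t = (R/2)*cos(theta), f_s = (R/2)/cos(theta))."""
    f = R / 2.0                               # nominal paraxial focal length
    c = math.cos(math.radians(theta_deg))
    return f * c, f / c

# Hypothetical example: R = 17 m mirror at 30 degrees incidence.
f_t, f_s = off_axis_focal_lengths(17.0, 30.0)
# f_t < R/2 < f_s: tangential focus shortened, sagittal lengthened,
# exactly the asymmetry the astigmatic correction must balance.
```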
2. Defocus and twofold astigmatism correction in HAADF-STEM
International Nuclear Information System (INIS)
A new simultaneous autofocus and twofold astigmatism correction method is proposed for High Angle Annular Dark Field Scanning Transmission Electron Microscopy (HAADF-STEM). The method makes use of a modification of image variance, which has already been used before as an image quality measure for different types of microscopy, but whose use is often justified on heuristic grounds. In this paper we show numerically that the variance reaches its maximum at Scherzer defocus and zero astigmatism. In order to find this maximum, a simultaneous optimization of three parameters (focus, x- and y-stigmators) is necessary. This is implemented and tested on a FEI Tecnai F20. It successfully finds the optimal defocus and astigmatism, with time and accuracy comparable to a human operator. -- Research highlights: → A new simultaneous defocus and astigmatism correction method is proposed. → The method does not depend on the image Fourier transform. → The method does not require an amorphous area of the sample. → The method is tested numerically as well as in a real-world application.
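The idea of maximizing an image-variance measure over three settings can be sketched as follows. The `acquire` model and the exhaustive grid search are hypothetical stand-ins for the microscope interface and whatever optimizer the paper actually uses; only the variance criterion itself comes from the abstract:

```python
import numpy as np

def image_variance(img):
    """Normalized image variance: a sharpness measure that (per the abstract)
    peaks at Scherzer defocus and zero twofold astigmatism."""
    img = np.asarray(img, dtype=float)
    return img.var() / (img.mean() ** 2 + 1e-12)

def autotune(acquire, grid):
    """Exhaustive search over (defocus, stig_x, stig_y) settings, keeping
    the one whose acquired image maximizes the variance measure."""
    best, best_v = None, -np.inf
    for d in grid:
        for sx in grid:
            for sy in grid:
                v = image_variance(acquire(d, sx, sy))
                if v > best_v:
                    best, best_v = (d, sx, sy), v
    return best

def acquire(d, sx, sy):
    """Hypothetical stand-in for the microscope: a Gaussian spot that blurs
    (widens) as the settings move away from the optimum at (0, 0, 0)."""
    x = np.linspace(-5.0, 5.0, 64)
    X, Y = np.meshgrid(x, x)
    w = 1.0 + abs(d) + abs(sx) + abs(sy)
    return np.exp(-(X ** 2 + Y ** 2) / (2.0 * w ** 2))

best = autotune(acquire, [-1, 0, 1])
```

A narrower spot yields a larger variance-to-mean-squared ratio, so the search recovers the sharpest setting.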
3. Success rates in the correction of astigmatism with toric and spherical soft contact lens fittings
Directory of Open Access Journals (Sweden)
Sevda Aydin Kurna
2010-08-01
Sevda Aydin Kurna, Tomris Şengör, Murat Ün, Suat Aki; Fatih Sultan Mehmet Education and Research Hospital, Ophthalmology Clinics, Istanbul, Turkey. Objectives: To evaluate success rates in the correction of astigmatism with toric and spherical soft contact lens fitting. Methods: 30 patients with soft toric lenses having more than 1.25 D of corneal astigmatism (25 eyes; Group A) or 0.75–1.25 D of corneal astigmatism (22 eyes; Group B), and 30 patients with soft spheric lenses having 0.75–1.25 D of corneal astigmatism (28 eyes; Group C) or less than 0.75 D of corneal astigmatism (23 eyes; Group D), were included in the study. Corrected and uncorrected monocular visual acuity measurement with logMAR, biomicroscopic properties, autorefractometry and corneal topography were performed for all patients immediately before and at least 20 minutes after the application of contact lenses. Success of contact lens fitting was evaluated by three parameters: astigmatic neutralization, visual success, and retinal deviation. Results: After soft toric lens application, spheric dioptres, cylindric and keratometric astigmatism, and retinal deviation decreased significantly in Groups A and B (P < 0.05). In Group C, spheric dioptres and retinal deviation decreased (P < 0.05), while cylindric and keratometric astigmatism did not change significantly (P > 0.05). In Group D, spheric dioptres, retinal deviation, and cylindric astigmatism decreased (P < 0.05); keratometric astigmatism did not change significantly (P > 0.05) and astigmatic neutralization even increased. Conclusions: Visual acuity and residual spherical equivalent refraction remained within tolerable limits with the use of toric and spheric contact lenses. Spherical lenses failed to mask corneal toricity during topography, while toric lenses caused central neutralization and a decrease in corneal cylinder in low and moderate astigmatic eyes. Keywords: astigmatism, soft toric lenses, soft spheric lenses
4. [Results of refractive surgery in hyperopic and combined astigmatism].
Science.gov (United States)
Vlaicu, Valeria
2013-01-01
Refractive surgery includes a number of procedures for changing the refraction of the eye to obtain better visual acuity without glasses or contact lenses. LASIK is the most commonly performed laser refractive surgery today. The goal is to present the postoperative evolution of refraction and visual acuity after LASIK for mixed and hyperopic astigmatism. The results show that LASIK is safe and predictable with well-performed interventions and well-selected patients.
5. Focusing properties of Gaussian Schell-model beams by an astigmatic aperture lens
Institute of Scientific and Technical Information of China (English)
Pan Liu-Zhan; Ding Chao-Liang
2007-01-01
This paper studies the focusing properties of Gaussian Schell-model (GSM) beams by an astigmatic aperture lens. It is shown that the axial irradiance distribution, the maximum axial irradiance and its position for focused GSM beams depend upon the astigmatism of the lens, the coherence of the partially coherent light, the truncation parameter of the aperture and the Fresnel number. Numerical calculation results are given to illustrate how these parameters affect the focusing properties.
6. Correction of high amounts of astigmatism through orthokeratology. A case report
OpenAIRE
Baertschi, Michael; Wyss, Michael
2010-01-01
The purpose of this case report is to introduce a method for the successful treatment of high astigmatism with a new orthokeratology design, called FOKX (Falco Kontaktlinsen, Switzerland). This novel toric orthokeratology contact lens design, the fitting approach and the performance of FOKX lenses are illustrated in the form of a case report. Correcting astigmatism with orthokeratology offers a new perspective for all patients suffering from astigmatism.
7. Toric Intraocular Lenses in the Correction of Astigmatism During Cataract Surgery
DEFF Research Database (Denmark)
Kessel, Line; Andresen, Jens; Tendal, Britta;
2016-01-01
TOPIC: We performed a systematic review and meta-analysis to evaluate the benefit and harms associated with implantation of toric intraocular lenses (IOLs) during cataract surgery. Outcomes were postoperative uncorrected distance visual acuity (UCDVA) and distance spectacle independence. Harms were evaluated as surgical complications and residual astigmatism. CLINICAL RELEVANCE: Postoperative astigmatism is an important cause of suboptimal UCDVA and need for distance spectacles. Toric IOLs may correct for preexisting corneal astigmatism at the time of surgery. METHODS: We performed a systematic…
8. Finite element simulation of arcuates for astigmatism correction.
Science.gov (United States)
Lanchares, Elena; Calvo, Begoña; Cristóbal, José A; Doblaré, Manuel
2008-01-01
In order to simulate the corneal incisions used to correct astigmatism, a three-dimensional finite element model was generated from a simplified geometry of the anterior half of the ocular globe. A hyperelastic constitutive behavior was assumed for cornea, limbus and sclera, which are collagenous materials with a fiber structure. Due to the preferred orientations of the collagen fibrils, corneal and limbal tissues were considered anisotropic, whereas the sclera was simplified to an isotropic one, assuming that its fibrils are randomly disposed. The reference configuration, which includes the initial strain distribution that balances the intraocular pressure, is obtained by an iterative process. Then the incisions are simulated. The final positions of the nodes belonging to the incised meridian and to the perpendicular one are fitted by their radii of curvature, which are used to calculate the optical power. The simulated incisions were those specified by Lindstrom's nomogram [Chu, Y., Hardten, D., Lindquist, T., Lindstrom, R., 2005. Astigmatic keratotomy. Duane's Ophthalmology. Lippincott Williams and Wilkins, Philadelphia] to achieve 1.5, 2.25, 3.0, 4.5 and 6.0 D of astigmatic change, using the following values for the parameters: lengths of 45, 60 and 90 degrees, an optical zone of 6 mm, and single or paired incisions. The model gives results similar to those in Lindstrom's nomogram [Chu et al., 2005] and can be considered a useful tool to plan and simulate refractive surgery by predicting the outcomes of different sorts of incisions and to optimize the values for the parameters involved: depth, length, position. PMID:18177656
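The final step named here, turning fitted radii of curvature into optical power, can be sketched with the standard keratometric convention (index 1.3375). The radii below are hypothetical, not values from the paper:

```python
def keratometric_power(r_mm, n_k=1.3375):
    """Corneal power in dioptres from a radius of curvature in millimetres,
    using the standard keratometric index n_k = 1.3375."""
    return (n_k - 1.0) * 1000.0 / r_mm

# Hypothetical radii for the incised and the perpendicular meridian:
K_incised = keratometric_power(7.8)        # flatter meridian, ~43.3 D
K_perpendicular = keratometric_power(7.5)  # steeper meridian, 45.0 D
# The simulated astigmatic change is the power difference between meridians.
astigmatic_change = abs(K_incised - K_perpendicular)
```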
9. Quasi two-dimensional astigmatic solitons in soft chiral metastructures.
Science.gov (United States)
Laudyn, Urszula A; Jung, Paweł S; Karpierz, Mirosław A; Assanto, Gaetano
2016-01-01
We investigate a non-homogeneous layered structure encompassing dual spatial dispersion: continuous diffraction in one transverse dimension and discrete diffraction in the orthogonal one. Such dual diffraction can be balanced out by one and the same nonlinear response, giving rise to light self-confinement into astigmatic spatial solitons: self-focusing can compensate for the spreading of a bell-shaped beam, leading to quasi-2D solitary wavepackets which result from 1D transverse self-localization combined with a discrete soliton. We demonstrate such intensity-dependent beam trapping in chiral soft matter, exhibiting one-dimensional discrete diffraction along the helical axis and one-dimensional continuous diffraction in the orthogonal plane. In nematic liquid crystals with suitable birefringence and chiral arrangement, the reorientational nonlinearity is shown to support bell-shaped solitary waves with simple astigmatism dependent on the medium birefringence as well as on the dual diffraction of the input wavepacket. The observations are in agreement with a nonlinear nonlocal model for the all-optical response. PMID:26975651
10. An astigmatic corrected target-aligned solar concentrator
Science.gov (United States)
Lando, Mordechai; Kagan, Jacob; Linyekin, Boris; Sverdalov, Ludmila; Pecheny, Grigory; Achiam, Yaakov
2000-06-01
Highly concentrated solar energy is required for solar pumping of solid state lasers, and for other applications. High concentration may be obtained by a combination of a primary concentrator with f/ D>2 in addition to a non-imaging concentrator. We have designed and constructed a novel tower primary concentrator. A 3.4 m diameter primary mirror, composed of 61 segments, was mounted on a commercial two-axis positioner. Unlike the common zenith mounting, the positioner fixed axis is directed southwards, pointing at 32° above the horizon. With this novel mounting, the concentrator is the first implementation of the astigmatic corrected target aligned (ACTA) design which flattens the irradiation density variation during the day. The primary mirror segments are each mounted on a separate two-axis mount, and aligned to compensate for astigmatism. The segments are spherically curved with R=17 m radius of curvature, while their vertexes are placed on an R/2=8.5 m radius spherical cap. A four-segment plane mirror reflects the light towards a horizontal focal plane. We have measured the absorbed solar power into a 89×91 mm 2 rectangular aperture and found good agreement with optical design calculations. Peak solar concentration in the focal plane exceeded 400 suns.
11. Alpins and thibos vectorial astigmatism analyses: proposal of a linear regression model between methods
Directory of Open Access Journals (Sweden)
Giuliano de Oliveira Freitas
2013-10-01
PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 dioptres in both eyes. Patients were randomly assorted between two phacoemulsification groups: one assigned to receive an AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to have an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the ratio of post- and preoperative Thibos APV (APVratio) and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) × 100. CONCLUSION: The linear regression we found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.
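The reported regression can be applied directly. In this sketch, `apv_magnitude` uses the standard Thibos blur-strength definition, and the values in the usage line are hypothetical, not the study's data:

```python
import math

def apv_magnitude(M, J0, J45):
    """Blur strength (overall magnitude) of a Thibos power vector."""
    return math.sqrt(M * M + J0 * J0 + J45 * J45)

def percent_success(apv_pre, apv_post):
    """Alpins %Success predicted from the Thibos APV ratio via the
    reported regression: %Success = (-APVratio + 1.00) * 100."""
    apv_ratio = apv_post / apv_pre
    return (-apv_ratio + 1.00) * 100.0

# Hypothetical case: preoperative APV 2.0 D, postoperative APV 0.5 D.
predicted = percent_success(2.0, 0.5)  # 75.0
```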
12. Optical analysis for simplified astigmatic correction of non-imaging focusing heliostat
Energy Technology Data Exchange (ETDEWEB)
Chong, K.K. [Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Off Jalan Genting Kelang, Setapak, 53300 Kuala Lumpur (Malaysia)
2010-08-15
In previous work, a non-imaging focusing heliostat consisting of m × n facet mirrors could carry out continuous astigmatic correction during sun-tracking with the use of only (m + n - 2) controllers. In this paper, a simplified astigmatic correction of the non-imaging focusing heliostat is proposed to reduce the number of controllers from (m + n - 2) to only two. Furthermore, a detailed optical analysis of the new proposal has been carried out, and the simulated results show that the two-controller system can perform comparably well in astigmatic correction with a much simpler and more cost-effective design. (author)
13. The Comprehensive Control of Astigmatism during and Following Intraocular Lens Implantation.
Institute of Scientific and Technical Information of China (English)
1994-01-01
The operating corneoloscope and Terry operative keratometer were used respectively in 29 and 34 eyes during intraocular lens implantation to measure the corneal astigmatism qualitatively or quantitatively, so that the tension of incision closure could be adjusted. The surgically induced astigmatism in the qualitative group two weeks after the operation was 3.5 ± 1.70 D and that in the quantitative group was 2.56 ± 1.60 D. There were 55.17% and 38.24% of the eyes with over 2.00 D corneal astigmatism in qu…
14. Self-Compensation of Astigmatism in Mode-Cleaners for Advanced Interferometers
Energy Technology Data Exchange (ETDEWEB)
Barriga, P; Zhao Chunnong; Ju Li; Blair, David G [School of Physics, University of Western Australia, Crawley, WA6009 (Australia)
2006-03-02
Using a conventional mode-cleaner with the output beam taken through a diagonal mirror it is impossible to achieve a non-astigmatic output. The geometrical astigmatism of triangular mode-cleaners for gravitational wave detectors can be self-compensated by thermally induced astigmatism in the mirror substrates. We present results from finite element modelling of the temperature distribution of the suspended mode-cleaner mirrors and the associated beam profiles. We use these results to demonstrate and present a self-compensated mode-cleaner design. We show that the total astigmatism of the output beam can be reduced to 5×10⁻³ for ±10% variation of input power about a nominal value when using the end mirror of the cavity as output coupler.
15. SURGICALLY INDUCED ASTIGMATISM AFTER 20G VS 23G PARS PLANA VITRECTOMY
Directory of Open Access Journals (Sweden)
Lokabhi Reddy
2015-05-01
Pars plana vitrectomy is done to clear the vitreous cavity of the eye. Transconjunctival sutureless vitrectomy with 23G and 25G has become more popular than conventional 20G vitrectomy in recent times, and it has many advantages. A lesser amount of surgically induced astigmatism is one of the advantages of sutureless vitrectomy, allowing early visual rehabilitation with better vision. An interventional comparative study was done between 20G and 23G pars plana vitrectomy in 2 groups of 30 patients each to assess the amount of post-operative astigmatism. The cases were followed up for 6 months to assess the long-term effects. There was a significant difference in immediate post-operative astigmatism, but over time the difference became much less, showing that the main advantage on astigmatism with transconjunctival sutureless vitrectomy is noted mainly during the first few weeks after the surgery.
16. A novel color-LED corneal topographer to assess astigmatism in pseudophakic eyes
Science.gov (United States)
Ferreira, Tiago B; Ribeiro, Filomena J
2016-01-01
Purpose To assess the accuracy of corneal astigmatism evaluation measured by four techniques, Orbscan IIz®, Lenstar LS900®, Cassini®, and Total Cassini (anterior + posterior surface), in pseudophakic eyes. Patients and methods A total of 30 patients (46 eyes) who had undergone cataract surgery with the implantation of a monofocal intraocular lens (AcrySof IQ) were assessed after surgery. For each eye, subjective assessment of astigmatism and its axis was performed. Minimum, maximum, and mean keratometry and astigmatism and its axis were evaluated using the four measurement techniques. All measurements were compared with the subjective measurements. Agreement between each measurement technique and subjective assessment was evaluated using Bland–Altman plots. Linear regressions were performed and compared. Results Linear regression analysis of astigmatism axis showed very high R2 for all models, with Total Cassini showing the least difference to the unit slope (0.052) and the least difference to a null constant (3.790), although not statistically different from the other models. Regarding astigmatism value, the Cassini and Total Cassini models were similar and statistically better than the Lenstar model. Cassini and Total Cassini showed better J0 compared with Orbscan. Conclusion On linear regression models, Cassini and Total Cassini showed the best performance regarding astigmatism value. Cassini and Total Cassini also showed the least J0 deviation from the Cartesian origin compared with Orbscan, which had the lowest performance. Total corneal measurement with the color LED topographer seems to be a better technique for astigmatism assessment. PMID:27574391
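The agreement analysis named here (Bland–Altman plots against the subjective assessment) can be sketched with a standard Bland–Altman computation; the arrays in the usage line are hypothetical, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for paired measurement series:
    bias (mean difference) and the 95% limits of agreement."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)           # sample standard deviation of differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical astigmatism values (D): device measurement vs. subjective.
bias, loa_lo, loa_hi = bland_altman([1.0, 1.5, 0.8, 1.2],
                                    [0.9, 1.4, 1.0, 1.1])
```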
17. CORNEAL ASTIGMATISM AFTER ECCE: A COMPARATIVE STUDY BETWEEN SILK VERSUS NYLON SUTURE
Directory of Open Access Journals (Sweden)
Sunita
2013-11-01
Full Text Available ABSTRACT: INTRODUCTION: Cataract as a potent cause of loss of vision in old age persons is probably known since the dawn of human civilization. Post operative astigmatism after cataract extraction remains a big problem for cataract surgeons since the Jacques Daviel era. Astigmatism is that type of refractive anomaly in which no point focus is formed owing to the unequal refraction of the incident light by the dioptric system of the eye in different meridians. The goal of modern cataract surgery is to produce a pseudophakic eye with the quality of vision of a normal phakic eye. Various studies to find out any effect of IOL on post operative astigmatism were carried out but results are controversial. MATERIAL AND METHODS: 60 patients suffering from cataract and fit for extraction were enlisted during the months of August 2008 to February 2009. The general, physical and local examination including preoperative keratometry, vision and tension were recorded. RESULTS: In the present study, male patients were 38 (63%) and female patients were 22 (37%). Out of the total 60 cases studied, corneo-scleral sections of 28 cases (47%) were sutured with 10-0 nylon suture (Group A) while sections of 32 cases were sutured with 8-0 black virgin silk suture (Group B). Out of 28 cases of Group A, interrupted sutures were applied in 14 cases (50%) (Group A1). Cross interrupted sutures were applied in 9 cases (32%) (Group A2), while bootlace continuous sutures were applied in 5 cases (18%) (Group A3). Out of 32 cases of Group B, interrupted sutures were applied in 26 cases (80%) (Group B1), cross interrupted sutures were applied in 3 cases (10%) (Group B2), while bootlace continuous sutures were applied in 3 cases (Group B3). In the present series, 19 cases (31%) showed with-the-rule astigmatism, 21 cases (36%) showed astigmatism against the rule and 20 cases (33%) showed no astigmatism preoperatively, 16 cases were in the range of 0.50D to 1.0D and 12 cases were in the range of 1
18. Preoperative corneal astigmatism among adult patients with cataract in Northern Nigeria
Directory of Open Access Journals (Sweden)
Mohammed Isyaku
2014-01-01
Full Text Available The prevalence and nature of corneal astigmatism among patients with cataract has not been well-documented in the resident African population. This retrospective study was undertaken to investigate preexisting corneal astigmatism in adult patients with cataract. We analyzed keratometric readings acquired by manual Javal-Schiotz keratometry before surgery between January 1, 2011 and December 31, 2011. There were 3,169 patients (3,286 eyes) aged between 16 and 110 years, with a male-to-female ratio of 1.4:1. Mean keratometry in diopters was K1 = 43.99 and K2 = 43.80. Mean corneal astigmatism was 1.16 diopters, and a majority (45.92%) of eyes had astigmatism between 1.00 and 1.99 diopters. Two-thirds of the eyes (66.9%) in this study had preoperative corneal astigmatism equal to or above 1.00 diopter. Findings will help local cataract surgeons to estimate the potential demand for toric intraocular lenses.
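The keratometric quantities in this record reduce to simple arithmetic on the two principal meridian powers. A minimal sketch (helper names and the one-diopter bucketing are mine, chosen to match how the abstract reports its results):

```python
def keratometric_astigmatism(k1, k2):
    """Corneal astigmatism as the absolute difference of the two principal powers, in diopters."""
    return abs(k1 - k2)

def astigmatism_band(cyl):
    """Bucket an astigmatism value into the ranges the abstract reports (assumed 1 D steps)."""
    if cyl < 1.0:
        return "<1.00 D"
    if cyl < 2.0:
        return "1.00-1.99 D"
    return ">=2.00 D"
```

Note that for the mean readings quoted (K1 = 43.99 D, K2 = 43.80 D) the difference is only 0.19 D; the 1.16 D mean astigmatism is the average of per-eye differences, which is not the same as the difference of the averaged K values.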
19. Convergence Insufficiency, Accommodative Insufficiency, Visual Symptoms, and Astigmatism in Tohono O'odham Students
Science.gov (United States)
Twelker, J. Daniel; Miller, Joseph M.; Campus, Irene
2016-01-01
Purpose. To determine the rates of convergence insufficiency (CI) and accommodative insufficiency (AI) and assess the relation between CI, AI, visual symptoms, and astigmatism in school-age children. Methods. 3rd–8th-grade students completed the Convergence Insufficiency Symptom Survey (CISS) and binocular vision testing with correction if prescribed. Students were categorized by astigmatism magnitude (no/low, moderate, high), CI, AI, and presence of symptoms. Analyses determined the rates of clinical CI and AI and of symptomatic CI and AI and assessed the relation between CI, AI, visual symptoms, and astigmatism. Results. In the sample of 484 students (11.67 ± 1.81 years of age), the rate of symptomatic CI was 6.2% and of symptomatic AI 18.2%. AI was more common in students with CI than without CI. Students with AI only (p = 0.02) and with CI and AI (p = 0.001) had higher symptom scores than students with neither CI nor AI. Moderate and high astigmats were not at increased risk for CI or AI. Conclusions. With-the-rule astigmats are not at increased risk for CI or AI. High comorbidity rates of CI and AI and higher symptom scores with AI suggest that research is needed to determine symptomatology specific to CI. PMID:27525112
20. Toric intraocular lenses for correction of astigmatism in keratoconus and after corneal surgery
Science.gov (United States)
Mol, Ilse EMA; Van Dooren, Bart TH
2016-01-01
Purpose To describe the results of cataract extraction with toric intraocular lens (IOL) implantation in patients with preexisting astigmatism from three corneal conditions (keratoconus, postkeratoplasty, and postpterygium surgery). Methods Cataract patients with topographically stable, fairly regular (although sometimes very high) corneal astigmatism underwent phacoemulsification with implantation of a toric IOL (Zeiss AT TORBI 709, Alcon Acrysof IQ toric SN6AT, AMO Tecnis ZCT). Postoperative astigmatism and refractive outcomes, as well as visual acuities, vector reduction, and complications were recorded for all eyes. Results This study evaluated 17 eyes of 16 patients with a mean age of 60 years at the time of surgery. Mean follow-up in this study was 12 months. The corrected distance Snellen visual acuity (with spectacles or contact lenses) 12 months postoperatively was 20/32 or better in 82% of eyes. The mean corneal astigmatism was 6.7 diopters (D) preoperatively, and 1.5 D of refractive cylinder at 1-year follow-up. No vision-compromising intra- or postoperative complications occurred and decentration or off-axis alignment of toric IOLs were not observed. Conclusion Phacoemulsification with toric IOL implantation was a safe and effective procedure in the three mentioned corneal conditions. Patient selection, counseling, and IOL placement with optimal astigmatism correction are crucial. PMID:27382249
1. Toric intraocular lenses for correction of astigmatism in keratoconus and after corneal surgery
Directory of Open Access Journals (Sweden)
Mol IEMA
2016-06-01
Full Text Available Ilse EMA Mol,1,2 Bart TH Van Dooren1,2 1Department of Ophthalmology, Amphia Hospital, Breda, 2Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands Purpose: To describe the results of cataract extraction with toric intraocular lens (IOL) implantation in patients with preexisting astigmatism from three corneal conditions (keratoconus, postkeratoplasty, and postpterygium surgery). Methods: Cataract patients with topographically stable, fairly regular (although sometimes very high) corneal astigmatism underwent phacoemulsification with implantation of a toric IOL (Zeiss AT TORBI 709, Alcon Acrysof IQ toric SN6AT, AMO Tecnis ZCT). Postoperative astigmatism and refractive outcomes, as well as visual acuities, vector reduction, and complications were recorded for all eyes. Results: This study evaluated 17 eyes of 16 patients with a mean age of 60 years at the time of surgery. Mean follow-up in this study was 12 months. The corrected distance Snellen visual acuity (with spectacles or contact lenses) 12 months postoperatively was 20/32 or better in 82% of eyes. The mean corneal astigmatism was 6.7 diopters (D) preoperatively, and 1.5 D of refractive cylinder at 1-year follow-up. No vision-compromising intra- or postoperative complications occurred and decentration or off-axis alignment of toric IOLs were not observed. Conclusion: Phacoemulsification with toric IOL implantation was a safe and effective procedure in the three mentioned corneal conditions. Patient selection, counseling, and IOL placement with optimal astigmatism correction are crucial. Keywords: toric intraocular lens, phacoemulsification, corneal astigmatism, keratoconus, postkeratoplasty, postpterygium surgery
2. Astigmatism compensation in mode-cleaner cavities for the next generation of gravitational wave interferometric detectors
Energy Technology Data Exchange (ETDEWEB)
Barriga, Pablo J. [School of Physics, University of Western Australia, Crawley, WA 6009 (Australia)]. E-mail: pbarriga@cyllene.uwa.edu.au; Zhao Chunnong [School of Physics, University of Western Australia, Crawley, WA 6009 (Australia); Blair, David G. [School of Physics, University of Western Australia, Crawley, WA 6009 (Australia)
2005-06-06
Interferometric gravitational wave detectors use triangular ring cavities to filter spatial and frequency instabilities from the input laser beam. The next generation of interferometric detectors will use high laser power and greatly increased circulating power inside the cavities. The increased power inside the cavities increases thermal effects in their mirrors. The triangular configuration of conventional mode-cleaners creates an intrinsic astigmatism that can be corrected by using the thermal effects to advantage. In this Letter we show that an astigmatism free output beam can be created if the design parameters are correctly chosen.
3. Off-Axis Astigmatic Gaussian Beam Combination Beyond the Paraxial Approximation
Institute of Scientific and Technical Information of China (English)
GAO Zeng-Hui; LÜ Bai-Da
2007-01-01
Taking the off-axis astigmatic Gaussian beam combination as an example, the beam-combination concept is extended to the nonparaxial regime. The closed-form propagation expressions for coherent and incoherent combinations of nonparaxial off-axis astigmatic Gaussian beams with rectangular geometry are derived and illustrated with numerical examples. It is shown that the intensity distributions of the resulting beam depend on the combination scheme and beam parameters in general, and in the paraxial approximation (i.e., for the small f-parameter) our results reduce to the paraxial ones.
4. Toric intraocular lens orientation and residual refractive astigmatism: an analysis
Directory of Open Access Journals (Sweden)
Potvin R
2016-09-01
astigmatism as a result of misorientation. The Tecnis Toric IOL appears more likely to be misoriented in a counterclockwise direction; no such bias was observed with the AcrySof Toric, the Trulign® Toric, or the Staar Toric IOLs. Keywords: rotation, AcrySof, Tecnis, toric back-calculator, cylinder
5. Observation of lasing modes with exotic localized wave patterns from astigmatic large-Fresnel-number cavities.
Science.gov (United States)
Lu, T H; Lin, Y C; Liang, H C; Huang, Y J; Chen, Y F; Huang, K F
2010-02-01
We investigate the lasing modes in large-Fresnel-number laser systems with astigmatism effects. Experimental results reveal that numerous lasing modes are concentrated on exotic patterns corresponding to intriguing geometries. We theoretically use the quantum operator algebra to construct the wave representation for manifesting the origin of the localized wave patterns. PMID:20125716
6. Meridional lenticular astigmatism associated with bilateral concurrent uveal metastases in renal cell carcinoma
Directory of Open Access Journals (Sweden)
Priluck JC
2012-11-01
Full Text Available Joshua C Priluck, Sandeep Grover, KV Chalam, Department of Ophthalmology, University of Florida College of Medicine, Jacksonville, FL, USA Purpose: To demonstrate a case illustrating meridional lenticular astigmatism as a result of renal cell carcinoma uveal metastases. Methods: Case report with images. Results: Clinical findings and diagnostic testing of a patient with acquired meridional lenticular astigmatism are described. The refraction revealed best-corrected visual acuity of 20/20–1 OD (−2.50 + 0.25 × 090) and 20/50 OS (−8.25 + 3.25 × 075). Bilateral concurrent renal cell carcinoma metastases to the choroid and ciliary body are demonstrated by utilizing ultrasonography, ultrawidefield fluorescein angiography, and unique spectral-domain optical coherence tomography. Conclusions: Metastatic disease should be included in the differential of acquired astigmatism. Spectral-domain optical coherence tomography, ultrawidefield fluorescein angiography, and ultrasonography have roles in delineating choroidal metastases. Keywords: astigmatism, metastasis, optical coherence tomography, renal cell carcinoma
7. [Results of corneal and total astigmatism estimation by different methods in myopic patients wearing orthokeratology contact lenses].
Science.gov (United States)
Tarutta, E P; Aliaeva, O O; Verzhanskaia, T Iu; Milash, S V
2013-01-01
Reports have been made that corneal aberrations of all orders, including astigmatism, often increase significantly with the use of night orthokeratology lenses. In this study the dynamic changes of total and corneal astigmatism in myopes using orthokeratology lenses were evaluated by different methods. The study enrolled 38 patients (76 eyes) with low and medium myopia (28 and 48 eyes, correspondingly) and initial astigmatism less than 2 diopters. The assessment was made before and at different terms after the patients started to wear orthokeratology lenses. Induced astigmatism (≥1 diopter) was found in more than 50% of cases. The degree of astigmatism gradually increased from the centre to the periphery within the pupillary zone. The maximum values were found within a 4-mm zone ("uptake zone") and the minimum within an 8-mm zone ("equalization zone"). In all patients, despite the presence of induced astigmatism and residual myopia (0.83±0.09 diopters on average), distance visual acuity was high enough without additional correction (0.82±0.05 on average). Apparently, in these patients the aberrations (astigmatism in particular) do not exceed the focal depth. PMID:24137984
8. Extended depth of focus intra-ocular lens: a solution for presbyopia and astigmatism
Science.gov (United States)
Zlotnik, Alex; Raveh, Ido; Ben Yaish, Shai; Yehezkel, Oren; Belkin, Michael; Zalevsky, Zeev
2010-02-01
Purpose: Subjects after cataract removal and intra-ocular lens (IOL) implantation lose their accommodation capability and are left with a monofocal visual system. The IOL refraction and the precision of the surgery determine the focal distance and amount of astigmatic aberration. We present the design, simulations and experimental bench testing of a novel, non-diffractive, non-multifocal, extended depth of focus (EDOF) technology incorporated into an IOL that gives the subject continuous, astigmatism- and chromatic-aberration-free focusing ability from 35 cm to infinity as well as increased tolerance to IOL decentration. Methods: The EDOF element was engraved on a surface of a monofocal rigid IOL as a series of shallow (less than one micron deep) concentric grooves around the optical axis. These grooves create an interference pattern extending the focus from a point to a length of about one mm, providing a depth of focus of 3.00 D (D stands for diopters) with negligible loss of energy at any point of the focus, while significantly reducing the astigmatic aberration of the eye and that generated during IOL implantation. The EDOF IOL was tested on an optical bench simulating the eye model. In the experimental testing we explored the characteristics of the obtained EDOF capability, the tolerance to astigmatic aberrations, and decentration. Results: The performance of the proposed IOL was tested for pupil diameters of 2 to 5 mm and for various spectral illuminations. The MTF charts demonstrate uniform performance of the lens for up to 3.00 D at various illumination wavelengths and pupil diameters while preserving a continuous contrast of above 25% for spatial frequencies of up to 25 cycles/mm. Capability of correcting astigmatism of up to 1.00 D was measured. Conclusions: The proposed EDOF IOL technology was tested by numerical simulations as well as experimentally characterized on an optical bench. The new lens is capable of solving presbyopia and astigmatism
9. The approximate analysis of the electromagnetic characters of 3-D radome by complex astigmatic wave theory
Science.gov (United States)
Wang, Yueqing; Wu, Guisheng; Chen, Zhenyang
The complex astigmatic wave, which imitates the 3-D beam in high-frequency, is an effective method to analyze the electromagnetic characters of the 3-D arbitrarily curved radome. A number of calculations for the ellipsoidal sandwich radome are performed, and the stereoscopic graphics of the results are constructed. Comparing with the experiments, it is shown that this method can be used to simplify analysis and optimization design for many kinds of 3-D radome.
10. Clinical Efficacy of Toric Orthokeratology in Myopic Ado-lescent with Moderate to High Astigmatism
Institute of Scientific and Technical Information of China (English)
Ming Luo; Shengsheng Ma; Na Liang
2014-01-01
Purpose: To observe the efficacy of toric-design orthokeratology (ortho-k) for correcting myopia and astigmatism in myopic adolescents with moderate to high astigmatism. Methods: This was a self-controlled clinical study. Twenty-four subjects (42 eyes) aged 9 to 16 years with myopia of 2.50-6.00 D complicated with with-the-rule astigmatism of 1.50-3.50 D were fitted with Lucid Night Toric Ortho-k Lenses (LUCID, KOREA). The changes in uncorrected visual acuity (UCVA), spherical degree, refraction, axial length (AL), and corneal status were assessed at baseline, 1 night, 1 week, 1 month, 3 months, 6 months, and 1 year after the commencement of ortho-k lens wear. Results: The success rate of the first lens fit was 92.8%. The UCVA after ortho-k wearing was improved significantly compared to the baseline at each visit (all P<0.05). Grade 1 corneal staining was observed at 1 week (23.8%), 1 month (21.4%), and 1 year (16.7%) following lens wear, and was improved by lens cleaning, discontinuing lens wear, and moistening the cornea with eye drops. No severe adverse events were reported. Conclusion: The toric ortho-k lens was effective and safe for correction of low to moderate myopia in children with moderate to high astigmatism. The lens also effectively controlled axial length elongation during 1 year of observation. However, the long-term efficacy remains to be elucidated.
11. Toric intraocular lenses for correction of astigmatism in keratoconus and after corneal surgery
OpenAIRE
Mol, Ilse
2016-01-01
Ilse EMA Mol,1,2 Bart TH Van Dooren1,2 1Department of Ophthalmology, Amphia Hospital, Breda, 2Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands Purpose: To describe the results of cataract extraction with toric intraocular lens (IOL) implantation in patients with preexisting astigmatism from three corneal conditions (keratoconus, postkeratoplasty, and postpterygium surgery).Methods: Cataract patients with topographically stable, fairly regular (although sometim...
12. [Optimization of broad-band flat-field holographic concave grating without astigmatism].
Science.gov (United States)
Kong, Peng; Tang, Yu-guo; Bayanheshig; Li, Wen-hao; Cui, Jin-jiang
2012-02-01
The desirable imaging locations of flat-field holographic concave gratings should lie in a plane, and the object can be imaged perfectly by the grating when the tangential and sagittal focal curves both superpose on the intersection of the image plane and the dispersion plane. In practice, the defocus cannot be eliminated over the entire wavelength range, while the astigmatism vanishes when the grating parameters satisfy certain conditions. An optimization method for broad-band flat-field holographic concave gratings with absolute astigmatism correction was proposed. The ray-tracing software ZEMAX was used to investigate the imaging properties of the grating, and the spectral performance of gratings designed by this new method was compared with that of gratings designed by the conventional method. The results indicated that the spectral performance of gratings designed using the absolute astigmatism correction method can be as good as that of gratings designed with the conventional method, while the focusing performance in the sagittal direction is much better, so that the S/N ratio can be greatly improved.
13. Toric Intraocular Lens Implantation for Correction of Astigmatism in Cataract Patients with Corneal Ectasia
Directory of Open Access Journals (Sweden)
Efstratios A. Parikakis
2013-11-01
Full Text Available Our purpose was to examine the long-term efficacy of toric intraocular lens (IOL) implantation in cataract patients with high astigmatism due to corneal ectasia, who underwent phacoemulsification cataract surgery. Five eyes of 3 cataract patients with topographically stable keratoconus or pellucid marginal degeneration (PMD), in which phacoemulsification with toric IOL implantation was used to correct high astigmatism, are reported. Objective and subjective refraction, visual acuity measurement and corneal topography were performed in all cases before and after cataract surgery. In all cases, there was a significant improvement in visual acuity, as well as refraction, which remained stable over time. Specifically, in subjective refraction, all patients achieved visual acuity from 7/10 to 9/10 with up to -2.50 cyl. Corneal topography also remained stable. Postoperative follow-up was 18-28 months. Cataract surgery with toric IOL implantation seems to be safe and effective in correcting astigmatism and improving visual function in cataract patients with topographically stable keratoconus or PMD.
14. Customized toric intraocular lens implantation for correction of extreme corneal astigmatism due to corneal scarring
Directory of Open Access Journals (Sweden)
R Bassily
2010-03-01
Full Text Available R Bassily, J Luck, Ophthalmology Department, Royal United Hospital, Combe Park, Bath, UK Abstract: A 76-year-old woman presented with decreased visual function due to cataract formation. Twenty-five years prior she developed right-sided corneal ulceration that left her with 10.8 diopters (D) of irregular astigmatism at 71.8° (steep axis). Her uncorrected visual acuity was 6/24 and she could only ever wear a balanced lens due to the high cylindrical error. Cataract surgery was planned with a custom-designed toric intraocular lens (IOL) with +16.0 D sphere, inserted via a wound at the steep axis of corneal astigmatism. Postoperative refraction was -0.75/+1.50 × 177° with a visual acuity of 6/9 that has remained unchanged at six-week follow-up with no IOL rotation. This case demonstrates the value of high-power toric IOLs for the correction of pathological corneal astigmatism. Keywords: intraocular lens, corneal ulceration, visual acuity, scarring
15. Use of a Toric Intraocular Lens and a Limbal-Relaxing Incision for the Management of Astigmatism in Combined Glaucoma and Cataract Surgery
Science.gov (United States)
Gibbons, Allister
2016-01-01
Purpose We report the surgical management of a patient with glaucoma undergoing cataract surgery with high preexisting astigmatism. A combination of techniques was employed for her astigmatism management. Methods A 76-year-old female with 5.5 dpt of corneal astigmatism underwent surgery in her left eye consisting of one-site trabeculectomy, phacoemulsification, toric intraocular lens implantation and a single inferior limbal-relaxing incision. Results Intraocular pressure control was achieved with no medication at 11 mm Hg; before the filtering procedure, the pressure was 16 mm Hg on two topical drugs. Astigmatism was reduced to 0.75 dpt, and both corrected and uncorrected visual acuity improved. Conclusions Astigmatism management can have a good outcome in combined procedures. We encourage surgeons to address astigmatism in the preoperative planning of patients undergoing glaucoma surgery associated with phacoemulsification. PMID:27293408
16. Assessment of corneal astigmatism following frown and straight incision forms in sutureless manual small incision cataract surgery
Directory of Open Access Journals (Sweden)
Amedo AO
2016-04-01
Full Text Available Angela Ofeibea Amedo, Kwadwo Amoah, Nana Yaa Koomson, David Ben Kumah, Eugene Appenteng Osae, Department of Optometry and Visual Science, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana Abstract: To investigate which of two tunnel incision forms (frown versus straight) in sutureless manual small incision cataract surgery creates more corneal astigmatism. Sixty eyes of 60 patients who had consented to undergo cataract surgery and to partake in this study were followed from baseline through a >12-week postoperative period. Values of preoperative and postoperative corneal astigmatism for the 60 eyes, measured with a Bausch and Lomb keratometer, were extracted from the patients’ cataract surgery records. Residual astigmatism was computed as the difference between preoperative and postoperative keratometry readings. Visual acuity was assessed during the preoperative period and at each postoperative visit with a Snellen chart at 6 m. Fifty eyes of 50 patients were successfully followed up. Overall, the mean residual astigmatism was 0.75±0.12 diopters. The difference in mean residual astigmatism between the two incision groups was statistically significant (t[48]=6.33, P<0.05); the frown incision group recorded 1.00±0.12 diopters, whereas the straight incision group recorded 0.50±0.12 diopters. No significant difference was observed between male and female groups (t[48]=0.24, P>0.05). Residual corneal astigmatism in the frown incision group was significantly higher than in the straight incision group. Fisher’s exact test did not reveal a significant association between incision forms and visual acuity during the entire postoperative period (P>0.05). Keywords: cataract, residual corneal astigmatism, frown incision, straight incision
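The two computations this abstract relies on, residual astigmatism as a pre/post keratometry difference and a two-sample t statistic with 48 degrees of freedom, are easy to reproduce. A hedged sketch, assuming the equal-variance pooled form of the t test; the function names are mine:

```python
import math
from statistics import mean, stdev

def residual_astigmatism(pre, post):
    """Per-eye residual astigmatism: absolute difference of pre- and postoperative
    keratometric cylinder values, in diopters."""
    return [abs(a - b) for a, b in zip(pre, post)]

def pooled_t(a, b):
    """Pooled (equal-variance) two-sample t statistic, len(a)+len(b)-2 degrees of freedom."""
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * stdev(a) ** 2 + (n2 - 1) * stdev(b) ** 2) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
```

The abstract's t[48] is consistent with the 50 followed-up eyes splitting evenly into two groups of 25, since 25 + 25 − 2 = 48.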
17. A Michelson controlled-not gate with a single-lens astigmatic mode converter.
Science.gov (United States)
Souza, C E R; Khoury, A Z
2010-04-26
We propose and demonstrate experimentally a single lens design for an astigmatic mode converter that transforms the transverse mode of paraxial optical beams. As an application, we implement a controlled-not gate based on a Michelson interferometer in which the photon polarization is the control bit and the first order transverse mode is the target. As a further application, we also build a transverse mode parity sorter which can be useful for quantum information processing as a measurement device for the transverse mode qubit. PMID:20588767
18. The construction of sutureless cataract incision and the management of corneal astigmatism.
Science.gov (United States)
Hall, G W; Krischer, C; Mobasher, B; Rajan, S D
1993-02-01
Extrapolating information from the equations that govern fluid flow, a theoretical formula is developed for a sutureless cataract incision. This formula defines the resistance to aqueous outflow as a function of three variables (the length of the cataract incision, the length of the scleral tunnel, and the tortuosity of the outflow channel) and one constant friction factor. The nonlinear relationship of corneal incisions to length, depth, and distance from the visual axis is also examined with respect to their effect on central corneal curvature and control of astigmatism. Finite element analysis of the differential equations is discussed as the most plausible technique for predicting these incisional effects.
19. SURGICALLY INDUCED ASTIGMATISM AFTER IMPLANTATION OF FOLDABLE AND NON - FOLDABLE LENSES IN CATARACT SURGERY BY PHACOEMULSIFICATION
Directory of Open Access Journals (Sweden)
Vikas
2015-01-01
Full Text Available This prospective comparative study included 300 matched patients with different grades of senile cataract. All of them willingly underwent phacoemulsification at the hands of a single experienced surgeon performing a single, consistent technique (Woodcutter's technique [1]); half of them were implanted with a foldable intraocular lens and the other half with a non-foldable PMMA intraocular lens. All the patients undergoing phacoemulsification had an improvement in vision. There was no statistically significant difference in the surgically induced astigmatism after implanting a foldable or a non-foldable IOL.
20. Optimization of nonimaging focusing heliostat in dynamic correction of astigmatism for a wide range of incident angles.
Science.gov (United States)
Chong, Kok-Keong
2010-05-15
To overcome astigmatism has always been a great challenge in designing a heliostat capable of focusing sunlight onto a small receiver throughout the year. In this Letter, a nonimaging focusing heliostat with dynamic adjustment of facet mirrors in a group manner is analyzed to optimize the astigmatic correction over a wide range of incident angles. This heliostat, new to the author's knowledge, is designed not only to concentrate sunlight to several hundred suns, but also to significantly reduce the variation of the solar flux distribution with the incident angle.
1. FASTDEF: fast defocus and astigmatism estimation for high-throughput transmission electron microscopy.
Science.gov (United States)
Vargas, J; Otón, J; Marabini, R; Jonic, S; de la Rosa-Trevín, J M; Carazo, J M; Sorzano, C O S
2013-02-01
In this work we present a fast and automated algorithm for estimating the contrast transfer function (CTF) of a transmission electron microscope. The approach is very suitable for high-throughput work because: (a) it does not require any initial defocus estimation, (b) it is almost an order of magnitude faster than existing approaches, and (c) it opens the way to well-defined extensions to the estimation of higher-order aberrations, while providing defocus and astigmatism estimations comparable in accuracy to well-established methods such as the Xmipp and CTFFIND3 approaches. The new algorithm is based on obtaining the wrapped modulating phase of the power spectral density pattern by the use of a quadrature filter. This phase is then unwrapped to obtain the continuous and smooth absolute phase map; a Zernike polynomial fitting is performed and the defocus and astigmatism parameters are determined. While the method does not require an initial estimation of the defocus parameters or any non-linear optimization procedure, these approaches can be used if further refinement is desired. Results of the CTF estimation method are presented for standard negative-stained images, cryo-electron microscopy images in the absence of carbon support, as well as micrographs with only ice. Additionally, we have also tested the proposed method with micrographs acquired from tilted and untilted samples, obtaining good results. The algorithm is freely available as a part of the Xmipp package [http://xmipp.cnb.csic.es].
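The final fitting step described above can be illustrated in miniature. The sketch below is not the Xmipp implementation; it is a toy model with hypothetical names that builds a noise-free synthetic phase map from a defocus-like term and the two second-order astigmatism terms, then recovers the coefficients by linear least squares, which is why no initial defocus estimate or non-linear optimization is needed:

```python
import numpy as np

def fit_defocus_astigmatism(x, y, phase):
    """Least-squares fit of phase ~ a*(x^2+y^2) + b*(x^2-y^2) + c*(2xy).
    Here a plays the role of defocus and (b, c) the two astigmatism components."""
    design = np.column_stack([x**2 + y**2, x**2 - y**2, 2 * x * y])
    coeffs, *_ = np.linalg.lstsq(design, phase, rcond=None)
    return coeffs

# Synthetic, noise-free phase samples with known coefficients.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 500)
y = rng.uniform(-1.0, 1.0, 500)
phase = 1.5 * (x**2 + y**2) + 0.3 * (x**2 - y**2) - 0.2 * (2 * x * y)
a, b, c = fit_defocus_astigmatism(x, y, phase)
```

Because the model is linear in its coefficients, the fit is a single `lstsq` call; the hard work in the real algorithm happens earlier, in extracting and unwrapping the phase map with the quadrature filter.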
2. Astigmatism management in cataract surgery with Precizon® toric intraocular lens: a prospective study
Science.gov (United States)
Vale, Carolina; Menezes, Carlos; Firmino-Machado, J; Rodrigues, Pedro; Lume, Miguel; Tenedório, Paula; Menéres, Pedro; Brochado, Maria do Céu
2016-01-01
Purpose The purpose of this study was to evaluate the visual and refractive outcomes and rotational stability of the new aspheric Precizon® toric intraocular lens (IOL) for the correction of corneal astigmatism in cataract surgery. Setting Department of Ophthalmology, Hospital Geral de Santo António – Centro Hospitalar do Porto, EPE and Hospital de Pedro Hispano, Matosinhos, Portugal. Design This was a prospective clinical study. Patients and methods A total of 40 eyes of 27 patients with corneal astigmatism greater than 1.0 diopter (D) underwent cataract surgery with implantation of Precizon® toric IOL. IOL power calculation was performed using optical coherence biometry (IOLMaster®). Outcomes of uncorrected (UDVA) and best-spectacle corrected distance visual acuities (BCDVA), refraction, and IOL rotation were analyzed at the 1st week, 1st, 3rd, and 6th month’s evaluations. Results The median postoperative UDVA was better than preoperative best-spectacle corrected distance visual acuity (0.02 [0.06] logMAR vs 0.19 [0.20] logMAR, P<0.001). At 6 months, postoperative UDVA was 0.1 logMAR or better in 95% of the eyes. At last follow-up, the mean spherical equivalent was reduced from −3.35±3.10 D to −0.02±0.30 D (P<0.001) with 97.5% of the eyes within ±0.50 D of emmetropia. The mean preoperative keratometric cylinder was 2.34±0.95 D and the mean postoperative refractive cylinder was 0.24±0.27 D (P<0.001). The mean IOL rotation was 2.43°±1.55°. None of the IOLs required realignment. Conclusion Precizon® toric IOL revealed very good rotational stability and performance regarding predictability, efficacy, and safety in the correction of preexisting regular corneal astigmatism associated with cataract surgery. PMID:26855559
3. Visual performance in cataract patients with low levels of postoperative astigmatism: full correction versus spherical equivalent correction
Directory of Open Access Journals (Sweden)
Lehmann RP
2012-03-01
4. [Orthokeratology for the high myopia and high astigmatism is worth watching].
Science.gov (United States)
Xie, Peiying
2015-01-01
The prevalence of high myopia in teenagers is rising. Due to both genetic and environmental factors, most myopia begins at an early age and progresses rapidly, making prevention and control very difficult. Orthokeratology is considered one of the most effective ways of controlling myopia in children. Many years of clinical studies have proved it effective not only for low to moderate myopia, but also for high myopia with astigmatism. However, professional knowledge of it is still lacking domestically. This paper introduces recent domestic and overseas research results and discusses the necessity and feasibility of high-diopter orthokeratology correction, implementation methods and requirements, and safety and effectiveness evaluation standards for reference. PMID:25877704
5. Relaxation in Thin Polymer Films Mapped across the Film Thickness by Astigmatic Single-Molecule Imaging
KAUST Repository
Oba, Tatsuya
2012-06-19
We have studied relaxation processes in thin supported films of poly(methyl acrylate) at the temperature corresponding to 13 K above the glass transition by monitoring the reorientation of single perylenediimide molecules doped into the films. The axial position of the dye molecules across the thickness of the film was determined with a resolution of 12 nm by analyzing astigmatic fluorescence images. The average relaxation times of the rotating molecules do not depend on the overall thickness of the film between 20 and 110 nm. The relaxation times also do not show any dependence on the axial position within the films for the film thickness between 70 and 110 nm. In addition to the rotating molecules we observed a fraction of spatially diffusing molecules and completely immobile molecules. These molecules indicate the presence of thin (<5 nm) high-mobility surface layer and low-mobility layer at the interface with the substrate. (Figure presented) © 2012 American Chemical Society.
6. Extended wavelet transformation to digital holographic reconstruction: application to the elliptical, astigmatic Gaussian beams.
Science.gov (United States)
Remacha, Clément; Coëtmellec, Sébastien; Brunel, Marc; Lebrun, Denis
2013-02-01
Wavelet analysis provides an efficient tool in numerous signal processing problems and has been implemented in optical processing techniques, such as in-line holography. This paper proposes an improvement of this tool for the case of an elliptical, astigmatic Gaussian (AEG) beam. We show that this mathematical operator allows reconstructing an image of a spherical particle without compression of the reconstructed image, which increases the accuracy of the 3D location of particles and of their size measurement. To validate the performance of this operator we have studied the diffraction pattern produced by a particle illuminated by an AEG beam. This study used mutual intensity propagation, and the particle is defined as a chirped Gaussian sum. The proposed technique was applied and the experimental results are presented.
7. Focal plane internal energy flows of singular beams in astigmatically aberrated low numerical aperture systems.
Science.gov (United States)
Bahl, Monika; Senthilkumaran, P
2014-09-01
Singular beams have circulating energy components. When such beams are focused by low numerical aperture systems suffering from astigmatic aberration, these circulating energy components get modified. The phase gradient introduced by this type of aberration splits the higher charge vortices. The dependence of the charge, the aberration coefficient, and the size of the aperture on the nature of the splitting process are reported in this paper. The transverse components of the Poynting vector fields that can be derived from the phase gradient vector field distributions are further decomposed into solenoidal and irrotational components using the Helmholtz-Hodge decomposition method. The solenoidal components relate to the orbital angular momentum of the beams, and the irrotational components are useful in the transport of intensity equations for phase retrieval.
8. Analysis of familial aggregation in total, against-the-rule, with-the-rule, and oblique astigmatism by conditional and marginal models in the Tehran eye study
Directory of Open Access Journals (Sweden)
2012-01-01
Full Text Available Purpose: The purpose was to determine the familial aggregation of total, against-the-rule (ATR), with-the-rule (WTR), and oblique astigmatism by conditional and marginal models in the Tehran Eye Study. Materials and Methods: Total, ATR, WTR, and oblique astigmatism were studied in 3806 participants older than 5 years from August 2002 to December 2002 in the Tehran Eye Study. Astigmatism was defined as a cylinder worse than or equal to −0.5 D. WTR astigmatism was defined as 0 ± 19°, ATR astigmatism was defined as 90 ± 19°, and astigmatism was oblique when the axes were 20-70° and 110-160°. The familial aggregation was investigated with a conditional model (quadratic exponential) and a marginal model (alternating logistic regression) after controlling for confounders. Results: Using the conditional model, the conditional familial aggregation odds ratios (OR) (95% confidence interval) for total, WTR, ATR, and oblique astigmatism were 1.49 (1.43-1.72), 1.91 (1.65-2.20), 2.00 (1.70-2.30), and 1.86 (1.37-2.54), respectively. In the marginal model, the marginal ORs of the parent-offspring and sib-sib relationships for total astigmatism were 1.35 (1.13-1.63) and 1.54 (1.13-2.11), respectively; for WTR, 1.53 (1.06-2.20) and 1.94 (1.21-3.13); and for ATR, 2.13 (1.01-4.50) and 2.23 (1.52-3.30). The model was statistically significant in the sib-sib relationship only for oblique astigmatism, with an OR of 3.00 (1.25-7.20). Conclusion: The results indicate familial aggregation of astigmatism in the population in Tehran, adjusted for age, gender, cataract, duration of education, and body mass index, such that the addition of a new family member affected with astigmatism, as well as having a sibling or parent with astigmatism, significantly increases the odds of exposure to the disease for all four phenotypes. This aggregation can be due to genetic and/or environmental factors. Dividing astigmatism into three phenotypes increased the odds ratios.
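The axis bands above (WTR 0° ± 19°, ATR 90° ± 19°, oblique 20-70° and 110-160°) amount to a simple classification rule. A minimal sketch using the study's definitions (the function name is my own):

```python
def classify_astigmatism_axis(axis_deg: float) -> str:
    """Classify a cylinder axis using the Tehran Eye Study bands:
    WTR = 0 +/- 19 deg (wrapping through 180), ATR = 90 +/- 19 deg,
    oblique = 20-70 deg and 110-160 deg."""
    a = axis_deg % 180.0
    if a <= 19.0 or a >= 161.0:
        return "WTR"
    if 71.0 <= a <= 109.0:
        return "ATR"
    return "oblique"

print(classify_astigmatism_axis(10))   # WTR
print(classify_astigmatism_axis(95))   # ATR
print(classify_astigmatism_axis(45))   # oblique
```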
9. Orbital angular moment of a partially coherent beam propagating through an astigmatic ABCD optical system with loss or gain.
Science.gov (United States)
Cai, Yangjian; Zhu, Shijun
2014-04-01
We derive the general expression for the orbital angular momentum (OAM) flux of an astigmatic partially coherent beam carrying twist phase [i.e., twisted anisotropic Gaussian-Schell model (TAGSM) beam] propagating through an astigmatic ABCD optical system with loss or gain. The evolution properties of the OAM flux of a TAGSM beam in a Gaussian cavity or propagating through a cylindrical thin lens are illustrated numerically with the help of the derived formula. It is found that we can modulate the OAM of a partially coherent beam by varying the parameters of the cavity or the orientation angle of the cylindrical thin lens, which will be useful in some applications, such as free-space optical communications and particle trapping.
10. Toric Intraocular Lens vs. Peripheral Corneal Relaxing Incisions to Correct Astigmatism in Eyes Undergoing Cataract Surgery
Institute of Scientific and Technical Information of China (English)
Zhiping Liu; Xiangyin Sha; Xuanwei Liang; Zhonghao Wang; Jingbo Liu; Danping Huang
2014-01-01
Purpose: To compare toric intraocular lens implantation (Toric-IOL) with peripheral corneal relaxing incisions (PCRIs) for astigmatism correction in patients undergoing cataract surgery. Methods: 54 patients (54 eyes) with more than 0.75 diopter (D) of preexisting corneal astigmatism were classified as group A (0.75-1.50 D) or group B (1.75-2.50 D). The patients were randomized to undergo Toric-IOL or PCRIs in the steep axis with spherical IOL implantation. LogMAR uncorrected visual acuity (LogMAR UCVA), LogMAR best corrected visual acuity (LogMAR BCVA), error of vector (|EV|), surgery-induced refraction correction (|SIRC|), and correction rates (CR) were measured 1 month and 6 months postoperatively. Results: At 6 months postoperatively, all 54 eyes had LogMAR BCVA ≤0.2. Patients who underwent PCRIs and Toric-IOL with LogMAR BCVA ≤0.1 showed no significant differences in group A (P=1.00) or in group B (P=0.59). Group A showed no significant differences in LogMAR UCVA (P=0.70), |EV| (P=0.13), |SIRC| (P=0.71), or CR (P=0.56) between patients who underwent PCRIs and Toric-IOL. However, group B showed significant differences in LogMAR UCVA. Conclusion: The efficacy and stability of Toric-IOL and PCRIs were equal in patients with low astigmatism. Toric-IOL achieved an enhanced effect over PCRIs in patients with higher astigmatism. PCRIs showed more refractive regression than Toric-IOL at 6 months.
11. Surgically induced astigmatism after 3.0 mm temporal and nasal clear corneal incisions in bilateral cataract surgery
Directory of Open Access Journals (Sweden)
Je Hwan Yoon
2013-01-01
Full Text Available Aims: To compare the corneal refractive changes induced after 3.0 mm temporal and nasal corneal incisions in bilateral cataract surgery. Materials and Methods: This prospective study comprised a consecutive case series of 60 eyes from 30 patients with bilateral phacoemulsification that were implanted with a 6.0 mm foldable intraocular lens through a 3.0 mm horizontal clear corneal incision (temporal in the right eyes, nasal in the left eyes). The outcome measures were surgically induced astigmatism (SIA) and uncorrected visual acuity (UCVA) 1 and 3 months post-operatively. Results: At 1 month, the mean SIA was 0.81 diopter (D) for the temporal incisions and 0.92 D for the nasal incisions (P = 0.139). At 3 months, the mean SIAs were 0.53 D for temporal incisions and 0.62 D for nasal incisions (P = 0.309). The UCVA was similar in the 2 incision groups before surgery and at 1 and 3 months post-operatively. Conclusion: After bilateral cataract surgery using 3.0 mm temporal and nasal horizontal corneal incisions, the induced corneal astigmatic change was similar in both incision groups. Especially in Asian eyes, both temporal and nasal incisions (3.0 mm or less) would be favorable for astigmatism-neutral cataract surgery.
12. Comparison of surgically induced astigmatism in patients with horizontal rectus muscle recession
Institute of Scientific and Technical Information of China (English)
Harun Çakmak; Tolga Kocatürk; Sema Oruç Dündar
2014-01-01
AIM: To compare surgically induced astigmatism (SIA) following horizontal rectus muscle recession surgery between suspension recession with the "hang-back" technique and the conventional recession technique. METHODS: Totally, 48 eyes of 24 patients who had undergone horizontal rectus muscle recession surgery were reviewed retrospectively. The patients were divided into two groups. Twelve patients were operated on by the hang-back technique (Group 1), and 12 by the conventional recession technique (Group 2). SIA was calculated at the 1st week and the 1st and 3rd months after surgery using the SIA calculator. RESULTS: SIA was statistically higher in Group 1 at all postoperative follow-ups. SIA was highest in the 1st week and decreased gradually in both groups. CONCLUSION: The suspension recession technique induced much more SIA than the conventional recession technique. This difference also continued at the follow-up visits. Therefore, the refractive power should be checked postoperatively in order to avoid refractive amblyopia. Conventional recession surgery should be the preferred method so as to minimize the postoperative refractive changes in patients with amblyopia.
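The "SIA calculator" mentioned above is not specified further. As a hedged sketch of one common approach (Thibos power vectors; the function names are my own, not the calculator used in the study), surgically induced astigmatism can be computed as the vector difference between the pre- and postoperative cylinders:

```python
import math

def power_vectors(cyl_d: float, axis_deg: float):
    """Thibos power-vector components (J0, J45) of a cylinder (diopters, degrees)."""
    a = math.radians(axis_deg)
    return -cyl_d / 2.0 * math.cos(2 * a), -cyl_d / 2.0 * math.sin(2 * a)

def sia_magnitude(pre_cyl, pre_axis, post_cyl, post_axis):
    """Magnitude (diopters) of the surgically induced astigmatism vector."""
    j0_pre, j45_pre = power_vectors(pre_cyl, pre_axis)
    j0_post, j45_post = power_vectors(post_cyl, post_axis)
    return 2.0 * math.hypot(j0_post - j0_pre, j45_post - j45_pre)

# A 1.00 D cylinder rotated by 90 degrees is a 2.00 D astigmatic change:
print(sia_magnitude(-1.0, 0.0, -1.0, 90.0))  # 2.0
```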
13. Mild myopic astigmatism corrected by accidental flap complication: A case report
Directory of Open Access Journals (Sweden)
Fahed Daoud
2009-01-01
Full Text Available A 35-year-old female presented for laser in-situ keratomileusis (LASIK). Her preoperative eye exam was normal, with a preoperative refraction of OD -2.50 D Sph +1.25 D Cyl ×175 and OS -2.75 D Sph +1.50 D Cyl ×165 (cycloplegic and manifest), with 20/20 BCVA OU. The central pachymetry reading was 553 µm in the right eye. Preoperative topography was normal. At the start of the pendular microkeratome path, some resistance was felt, but the microkeratome continued along its path. Upon inspection of the flap, there was a central rectangle of intact epithelium with two mirror-image flaps on both sides. The flap was repositioned and LASIK was discontinued. The cornea healed with two faint thin linear vertical parallel scars at the edge of the pupil. Postoperative inspection of the blade revealed central blunting. One month postoperatively, the uncorrected visual acuity (UCVA) was 20/20. Manifest and cycloplegic refractions were plano. This is an interesting case of an accidental flap complication resulting in the correction of mild myopic astigmatism.
14. Surgical correction of astigmatism during cataract surgery
Directory of Open Access Journals (Sweden)
Edison Ferreira e Silva
2007-08-01
Full Text Available Purpose: To evaluate the effect of peripheral limbal relaxing incisions (PLRI) in reducing preexisting astigmatism during cataract surgery. Methods: We prospectively studied 103 eyes of 103 patients submitted to PLRI, using the Nichamin nomogram during cataract surgery by phacoemulsification. After the first and sixth months we analyzed the changes in topographic astigmatism, the induced astigmatism, and the success rate. The patients were divided into two groups according to the type of preoperative astigmatism (with-the-rule and against-the-rule) and studied separately. Results: There were statistically significant differences between the pre- and postoperative topographic astigmatism values in both groups. At the sixth month of follow-up, there was an induction of 1.10 ± 0.9 diopters with a 37% success rate in the with-the-rule group, and 1.70 ± 0.80 diopters with a 51% success rate in the against-the-rule group. Conclusion: Peripheral limbal relaxing incisions are effective in reducing preexisting astigmatism during cataract surgery. The procedure proved to be safe and easy to perform. In our experience, the Nichamin nomogram undercorrects the planned astigmatism in both groups studied.
15. Clinical analysis of myopic astigmatism in children aged 6-15 years
Institute of Scientific and Technical Information of China (English)
段思琦; 李静姣; 钟华; 周华; 田琨; 魏嘉
2015-01-01
Objective: To study the occurrence, type, power, and axis distribution of astigmatism among children aged 6-15 years, and to analyze its patterns. Methods: From 2012 to the first half of 2014, 710 children aged 6-15 years fitted with glasses in the ophthalmic outpatient clinic were enrolled in this study, and the clinical data of the 483 cases (749 eyes) with myopic astigmatism were statistically analyzed. Results: There were 483 cases (749 eyes) of myopic astigmatism, a detection rate of 68.03%; 247 cases were male and 236 female, accounting for 51.14% and 48.86%, respectively. Astigmatism type showed no significant difference between age groups (χ2=3.418, P>0.05). Among with-the-rule astigmatism, the 6-12 age group accounted for 221 cases (63.14%) and the 13-15 age group for 129 cases (36.86%). Astigmatism type differed significantly across myopic astigmatism power groups (χ2=28.878, P<0.001): in the mild astigmatism group there were 315 cases (66.60%) of with-the-rule, 99 cases (20.93%) of against-the-rule, and 59 cases (12.47%) of oblique astigmatism, and with-the-rule astigmatism comprised 315 cases (57.48%) in the mild group, 160 cases (29.20%) in the moderate group, and 73 cases (13.32%) in the high group. Astigmatism type also differed significantly across refractive-status groups (χ2=612.598, P<0.05), with 303 cases (86.57%) of with-the-rule astigmatism in the compound myopic astigmatism group and 47 cases (13.43%) in the simple myopic astigmatism group. Conclusion: Myopic astigmatism is common in children aged 6-15 years; binocular involvement is more frequent than monocular, low astigmatism accounts for the majority, and compound myopic astigmatism is more common than simple myopic astigmatism.
16. Transmissive liquid-crystal device correcting primary coma aberration and astigmatism in laser scanning microscopy
Science.gov (United States)
Tanabe, Ayano; Hibi, Terumasa; Ipponjima, Sari; Matsumoto, Kenji; Yokoyama, Masafumi; Kurihara, Makoto; Hashimoto, Nobuyuki; Nemoto, Tomomi
2016-03-01
Laser scanning microscopy allows 3D cross-sectional imaging inside biospecimens. However, certain aberrations produced can degrade the quality of the resulting images. We previously reported a transmissive liquid-crystal device that could compensate for the predominant spherical aberrations during the observations, particularly in deep regions of the samples. The device, inserted between the objective lens and the microscope revolver, improved the image quality of fixed-mouse-brain slices that were observed using two-photon excitation laser scanning microscopy, which was originally degraded by spherical aberration. In this study, we developed a transmissive device that corrects primary coma aberration and astigmatism, motivated by the fact that these asymmetric aberrations can also often considerably deteriorate image quality, even near the sample surface. The device's performance was evaluated by observing fluorescent beads using single-photon excitation laser scanning microscopy. The fluorescence intensity in the image of the bead under a cover slip tilted in the y-direction was increased by 1.5 times after correction by the device. Furthermore, the y- and z-widths of the imaged bead were reduced to 66% and 65%, respectively. On the other hand, for the imaged bead sucked into a glass capillary in the longitudinal x-direction, correction with the device increased the fluorescence intensity by 2.2 times compared to that of the aberrated image. In addition, the x-, y-, and z-widths of the bead image were reduced to 75%, 53%, and 40%, respectively. Our device successfully corrected several asymmetric aberrations to improve the fluorescent signal and spatial resolution, and might be useful for observing various biospecimens.
17. Intraocular lens implantation for the treatment of astigmatism
Institute of Scientific and Technical Information of China (English)
施俊廷; 徐雯
2013-01-01
Refractive intraocular lens surgery can be used for the treatment of moderate and high regular astigmatism. It includes aphakic intraocular lens surgery and phakic intraocular lens surgery. In phakic intraocular lens surgery, two types of intraocular lenses are used: the iris-claw intraocular lens and the posterior chamber intraocular lens. The purpose of this article is to review current knowledge of the surgical treatment of astigmatism, with a particular focus on toric implantable collamer lenses.
18. Surgically induced astigmatism after 2.8 mm temporal and nasal clear corneal incisions in phacoemulsification cataract surgery of the same patient
Directory of Open Access Journals (Sweden)
Preeti
2015-04-01
Full Text Available Purpose: To evaluate and compare the surgically induced astigmatism after 2.8 mm temporal and nasal clear corneal incisions in phacoemulsification cataract surgery of the same patient. Material and Methods: This prospective study comprised a consecutive case series of 60 eyes from 30 patients undergoing phacoemulsification who were implanted with a 6.00 mm foldable intraocular lens through a 2.8 mm horizontal clear corneal incision (temporal in the right eye, nasal in the left eye). Results: The outcome measures were surgically induced astigmatism (SIA) and uncorrected visual acuity (UCVA) at 1 and 3 months post-operatively. At 1 month the mean SIA was 0.81 D for the temporal incision and 0.92 D for the nasal incision (P = 0.139); at 3 months the mean SIA was 0.53 D for the temporal incision and 0.62 D for the nasal incision (P = 0.309). The pre-operative parameters, i.e., UCVA, mean keratometry, and keratometric cylinder, were comparable between the groups, with no statistically significant pre-operative difference. Conclusion: After cataract surgery using 2.8 mm temporal and nasal horizontal corneal incisions, the induced corneal astigmatic changes were similar in both incision groups. Especially in Asian eyes, both temporal and nasal incisions (2.8 mm or less) would be equally favourable for astigmatism-neutral cataract surgery.
19. Surgical induced astigmatism correlated with corneal pachymetry and intraocular pressure: transconjunctival sutureless 23-gauge versus 20-gauge sutured vitrectomy in diabetes mellitus
Institute of Scientific and Technical Information of China (English)
Yan; Shao; Li-Jie; Dong; Yan; Zhang; Hui; Liu; Bo-Jie; Hu; Ju-Ping; Liu; Xiao-Rong; Li
2015-01-01
AIM: To determine the difference in surgically induced astigmatism between conventional 20-gauge sutured vitrectomy and 23-gauge transconjunctival sutureless vitrectomy, and the influence of corneal pachymetry and intraocular pressure (IOP) on surgically induced astigmatism in diabetic patients. METHODS: This retrospective, consecutive case series consisted of 40 eyes of 38 diabetic subjects who underwent either 20-gauge or 23-gauge vitrectomy. The corneal curvature and thickness were measured with Scheimpflug imaging before surgery and at 1 week and 1 and 3 months after surgery. We compared the surgically induced astigmatism (SIA) on the true net power in the 23-gauge group with that in the 20-gauge group, and determined the correlation between the corneal thickness change ratio, IOP, and SIA measured by Pentacam. RESULTS: The mean SIAs were 1.082±0.085 D (mean±SEM), 0.689±0.070 D, and 0.459±0.063 D at 1 week and 1 and 3 months postoperatively in diabetic subjects. The vitrectomy-induced astigmatism declined significantly with time postoperatively (F2,36=33.629, P=0.000). The 23-gauge surgery group had significantly less induced astigmatism than the 20-gauge surgery group (F1,37=11.046, P=0.020). Corneal thickness in diabetes was elevated after surgery (F3,78=10.532, P=0.000). The linear regression analysis at 1 week postoperatively was: SIA = -4.519 + 4.931 × change ratio (Port3) + 0.026 × IOP (R2=0.46, P=0.000), whereas the rate of corneal thickness change and IOP showed no correlation with the change in astigmatism at 1 and 3 months postoperatively. CONCLUSION: There are significant serial changes in both the 20-gauge and 23-gauge groups in diabetic subjects. 23-gauge vitrectomy induces less astigmatism than 20-gauge and becomes stable more rapidly. The elevation of corneal thickness and IOP was associated with increased astigmatism at the early postoperative stage in both the 23-gauge and 20-gauge surgery groups.
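The 1-week regression reported above can be written directly as a function. A sketch using the abstract's coefficients (the variable names are my own; "change ratio (Port3)" refers to the study's corneal-thickness change ratio):

```python
def predicted_sia_1wk(thickness_change_ratio: float, iop_mmhg: float) -> float:
    """Study-reported 1-week linear model:
    SIA = -4.519 + 4.931 * change ratio + 0.026 * IOP (R^2 = 0.46)."""
    return -4.519 + 4.931 * thickness_change_ratio + 0.026 * iop_mmhg

# Illustrative inputs only, not from the study's data:
print(round(predicted_sia_1wk(1.0, 15.0), 3))  # 0.802
```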
20. Methods for calculating the vergence of an astigmatic ray bundle in an optical system that contains a freeform surface
Science.gov (United States)
Shirayanagi, Moriyasu
2016-07-01
A method using the generalized Coddington equations enables calculating the vergence of an astigmatic ray bundle in the vicinity of a skew ray in an optical system containing a freeform surface. Because this method requires time-consuming calculations, however, there is still room for increasing the calculation speed. In addition, this method cannot be applied to optical systems containing a medium with a gradient index. Therefore, we propose two new calculation methods in this paper. The first method, using differential ray tracing, enables us to shorten computation time by using simpler algorithms than those used by conventional methods. The second method, using proximate rays, employs only the ray data obtained from the rays exiting an optical system. Therefore, this method can be applied to an optical system that contains a medium with a gradient index. We show some sample applications of these methods in the field of ophthalmic optics.
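The abstract above builds on the generalized Coddington equations. For reference, a standard form of the classical Coddington equations for the tangential and sagittal foci of a narrow astigmatic pencil refracted at a surface of radius R (the symbols are the conventional textbook ones, not taken from the paper):

```latex
% n, n': refractive indices before/after the surface
% I, I': angles of incidence and refraction; R: surface radius of curvature
% t, t': tangential object/image distances; s, s': sagittal object/image distances
\frac{n'\cos^2 I'}{t'} - \frac{n\cos^2 I}{t} = \frac{n'\cos I' - n\cos I}{R}
\qquad
\frac{n'}{s'} - \frac{n}{s} = \frac{n'\cos I' - n\cos I}{R}
```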
1. Successful toric intraocular lens implantation in a patient with induced cataract and astigmatism after posterior chamber toric phakic intraocular lens implantation: a case report
Directory of Open Access Journals (Sweden)
Kamiya Kazutaka
2012-04-01
Full Text Available Abstract Introduction We report the case of a patient in whom simultaneous toric phakic intraocular lens removal and phacoemulsification with toric intraocular lens implantation were beneficial for reducing pre-existing astigmatism and acquiring good visual outcomes in eyes with implantable collamer lens-induced cataract and astigmatism. Case presentation A 53-year-old woman had undergone toric implantable collamer lens implantation three years earlier. After informed consent was obtained, we performed simultaneous toric implantable collamer lens removal and phacoemulsification with toric intraocular lens implantation. Preoperatively, the manifest refraction was 0, -0.5 × 15, with an uncorrected visual acuity of 0.7 and a best spectacle-corrected visual acuity of 0.8. Postoperatively, the manifest refraction was improved to 0, -0.5 × 180, with an uncorrected visual acuity of 1.2 and a best spectacle-corrected visual acuity of 1.5. No vision-threatening complications were observed. Conclusion Toric intraocular lens implantation may be a good surgical option for the correction of spherical and cylindrical errors in eyes with implantable collamer lens-induced cataract and astigmatism.
2. Surgically induced astigmatism after phacoemulsification with and without correction for posture-related ocular cyclotorsion: randomized controlled study.
LENUS (Irish Health Repository)
Dooley, Ian
2012-02-01
PURPOSE: To report the impact of posture-related ocular cyclotorsion on one surgeon's surgically induced astigmatism (SIA) results and the variance in SIA. SETTING: Institute of Eye Surgery, Whitfield Clinic, Waterford, Ireland. METHODS: This prospective randomized controlled study included eyes that had phacoemulsification with intraocular lens implantation. Eyes were randomly assigned to have (intervention group) or not have (control group) correction for posture-related ocular cyclotorsion. In the intervention group, the clear corneal incision was placed precisely at the 120-degree meridian with instruments designed to correct posture-related ocular cyclotorsion. In the control group, the surgeon endeavored to place the incision at the 120-degree meridian, but without markings. RESULTS: The intervention group comprised 41 eyes and the control group, 61 eyes. The mean absolute SIA was 0.74 diopters (D) in the intervention group and 0.78 D in the control group; the difference between groups was not statistically significant (P>.5, unpaired 2-tailed Student t test). The variance in SIA was 0.29 D(2) and 0.31 D(2), respectively; the difference between groups was not statistically significant (P>.5, unpaired F test). CONCLUSIONS: Attempts to correct for posture-related ocular cyclotorsion did not influence SIA or its variance in a single-surgeon series. These results should be interpreted with full appreciation of the limitations of currently available techniques to correct for posture-related ocular cyclotorsion in the clinical setting.
3. Topography-guided hyperopic and hyperopic astigmatism femtosecond laser-assisted LASIK: long-term experience with the 400 Hz eye-Q excimer platform
Directory of Open Access Journals (Sweden)
Kanellopoulos AJ
2012-06-01
Full Text Available Anastasios John Kanellopoulos, Department of Ophthalmology, New York University Medical School, New York, NY, and LaserVision.gr Eye Institute, Athens, Greece. Background: The purpose of this study was to evaluate the safety and efficacy of topography-guided ablation using the WaveLight 400 Hz excimer laser in laser-assisted in situ keratomileusis (LASIK) for hyperopia and/or hyperopic astigmatism. Methods: We prospectively evaluated 208 consecutive LASIK cases for hyperopia with or without astigmatism using the topography-guided platform of the 400 Hz Eye-Q excimer system. The mean preoperative sphere value was +3.04 ± 1.75 diopters (D) (range 0.75–7.25 D) and the mean cylinder value was –1.24 ± 1.41 D (range –4.75 to 0 D). Flaps were created with either Intralase FS60 (AMO, Irvine, CA) or FS200 (Alcon, Fort Worth, TX) femtosecond lasers. Parameters evaluated included age, preoperative and postoperative refractive error, uncorrected distance visual acuity, corrected distance visual acuity, flap diameter and thickness, topographic changes, higher-order aberration changes, and low contrast sensitivity. These measurements were repeated postoperatively at regular intervals for at least 24 months. Results: Two hundred and two eyes were available for follow-up at 24 months. Uncorrected distance visual acuity improved from 5.5/10 to 9.2/10. At 24 (range 8–37) months, 75.5% of the eyes were within ±0.50 D and 94.4% within ±1.00 D of the refractive goal. Postoperatively, the mean sphere value was –0.39 ± 0.3 D and the cylinder value was –0.35 ± 0.25 D. Topographic evidence showed that ablation was made on the visual axis and not at the center of the cornea, thus correlating with the angle kappa. No significant complications were encountered in this small group of patients. Conclusion: Hyperopic LASIK utilizing the topography-guided platform of the 400 Hz Eye-Q Allegretto excimer laser and a femtosecond laser flap appears to be safe and effective.
4. Comparison of the curative effects of visual perceptual learning and traditional treatment for children under 8 years with with-the-rule and against-the-rule astigmatic amblyopia
Institute of Scientific and Technical Information of China (English)
孔旻; 刘伟民; 林泉; 赵武校
2011-01-01
Objective: To compare the curative effects of visual perceptual learning and traditional treatment for children under 8 years of age with with-the-rule and against-the-rule astigmatic amblyopia. Methods: 252 children (504 eyes) under 8 years with with-the-rule or against-the-rule astigmatic amblyopia were divided into a visual perceptual learning group (154 children, 308 eyes) and a traditional treatment group (98 children, 196 eyes), and the results were analyzed statistically after two years. Results: The total effective rate in the visual perceptual learning group was significantly higher than that in the traditional treatment group (P < 0.05). Conclusion: For children under 8 years with with-the-rule or against-the-rule astigmatic amblyopia, the total effective rate of visual perceptual learning is significantly higher than that of traditional treatment.
5. Effect of corneal biomechanical parameters in astigmatic keratotomy
Institute of Scientific and Technical Information of China (English)
邹湖涌; 王勤美; 俞阿勇; 郑志利; 芦群
2012-01-01
Objective: To study the changes in corneal biomechanical parameters and the correlations between surgically induced corneal astigmatism and corneal biomechanical parameters in astigmatic keratotomy (AK). Methods: Patients with corneal astigmatism ≥1.50 D underwent AK during refractive intraocular lens exchange. Corneal resistance factor (CRF), corneal hysteresis (CH), and Goldmann-correlated intraocular pressure (IOP) measurements were obtained with the Ocular Response Analyzer (ORA; Reichert, Depew, NY) before surgery and at 1 week and 1, 3, and 6 months after surgery. Corneal astigmatism was also measured with Scheimpflug imaging (Pentacam ver. 1.11; Oculus, Germany). Repeated-measures analysis of variance and Pearson correlation analysis were used for statistical analysis. Results: Thirty-two eyes of 23 patients were included in this study. Both CRF and CH were lower than their preoperative levels at 1 week and 1 month after surgery (P<0.05), with no significant difference from preoperative values at 3 and 6 months (P>0.05). IOP did not differ significantly from the preoperative value at any time point (P>0.05). After vector decomposition of the astigmatism, the postoperative change in J0 was negatively correlated with CRF and CH (P<0.05) and not correlated with IOP (P>0.05); the change in J45 was not correlated with CRF, CH, or IOP (P>0.05). Conclusion: Refractive intraocular lens exchange combined with astigmatic keratotomy does not cause long-term changes in corneal biomechanical properties; CRF and CH have some influence on the outcome of astigmatic keratotomy.
6. The influence of corneal astigmatism on retinal nerve fiber layer thickness and optic nerve head parameter measurements by spectral-domain optical coherence tomography
OpenAIRE
Liu Lin; Zou Jun; Huang Hui; Yang Jian-guo; Chen Shao-rong
2012-01-01
Abstract Background To evaluate the influence of corneal astigmatism (CA) on retinal nerve fiber layer (RNFL) thickness and optic nerve head (ONH) parameters measured with spectral-domain optical coherence tomography (OCT) in highly myopic patients before refractive surgery. Methods Seventy eyes of 35 consecutive refractive surgery candidates were included in this study. The mean age of the subjects was 26.42 ± 6.95 years; the average CA was −1.17 diopters (D; SD 0.64; range −0.2 to −3.3 D). All s...
7. Combined special capsular tension ring and toric IOL implantation for management of post-DALK high regular astigmatism with subluxated traumatic cataract
Directory of Open Access Journals (Sweden)
Asim Kumar Kandar
2014-01-01
We report a case of an 18-year-old male who underwent phacoemulsification with implantation of a toric IOL (AcrySof IQ SN6AT9) after fixation of the lens capsule with a Cionni's capsular tension ring (CTR) for subluxated traumatic cataract with high astigmatism after deep anterior lamellar keratoplasty (DALK). He had undergone right-eye DALK for advanced keratoconus four years earlier. One year later he suffered trauma, with the clear crystalline lens displaced into the anterior chamber and graft dehiscence, which was repaired successfully. The graft survived, but the patient developed a cataract with a subluxated lens, for which phacoemulsification with toric IOL implantation was performed. Serial topography showed regular corneal astigmatism of −5.50 diopters (K1 42.75 D @ 130°, K2 48.25 D @ 40°). At the 10-month follow-up, the patient had BCVA of 20/30 with +0.75 DS/−1.75 DC @ 110°. The capsular bag is quite stable with a well-centered IOL. The combination of a Cionni's ring with a toric IOL can be a good option for managing such complex cases.
8. Comparison of keratometry and astigmatism measured by Pentacam and IOL Master
Institute of Scientific and Technical Information of China (English)
黄旺斌; 陈子林
2016-01-01
Objective To compare keratometry and astigmatism measured by Pentacam and IOL Master in age-related cataract patients. Methods All examinations were performed by the same operator; keratometry and corneal astigmatism values were the means of three measurements. Paired t tests, simple linear correlation analysis and the Bland-Altman method were used to evaluate the difference, correlation and agreement of keratometry, respectively; the Wilcoxon signed-rank test and simple statistical analysis were used to compare corneal astigmatism. Results Mean keratometry was (44.14±1.49) D in men and (44.73±1.55) D in women with Pentacam, and (44.27±1.50) D in men and (44.86±1.56) D in women with IOL Master. The differences in keratometry were statistically significant both by instrument and by sex, and the correlation between instruments was high (r=0.986). The largest absolute difference within the 95% limits of agreement on the Bland-Altman plot was 1.19 D. Astigmatism magnitude and axis were (0.77±0.52) D and (85.38±53.36)° with Pentacam, and (0.90±0.61) D and (85.38±53.36)° with IOL Master; astigmatism magnitude differed significantly between the two instruments, while the axis did not. The axis differed by more than 10° in 44.4% of eyes. Conclusions Both Pentacam and IOL Master can measure the corneal curvature and astigmatism of cataract patients accurately; the two instruments correlate well but agree poorly.
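The agreement analysis described above can be sketched in a few lines. A minimal Bland-Altman example, assuming paired keratometry readings from the two instruments (the values below are made up for illustration, not the study's data):

```python
# Minimal Bland-Altman agreement sketch for paired keratometry readings.
from statistics import mean, stdev

def bland_altman(a, b):
    """Return (mean difference, lower LoA, upper LoA) for paired readings."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)           # sample SD of the paired differences
    # 95% limits of agreement: bias +/- 1.96 * SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

pentacam   = [44.10, 44.50, 43.80, 45.00, 44.30]  # hypothetical K values (D)
iol_master = [44.25, 44.60, 43.95, 45.10, 44.50]

bias, lo, hi = bland_altman(pentacam, iol_master)
print(f"bias = {bias:.3f} D, 95% LoA = [{lo:.3f}, {hi:.3f}] D")
```

Good correlation with poor agreement, as the abstract concludes, shows up here as limits of agreement that are wide relative to clinical tolerance even when the two series track each other closely.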
9. Academic and Workplace-related Visual Stresses Induce Detectable Deterioration Of Performance, Measured By Basketball Trajectories and Astigmatism Impacting Athletes Or Students In Military Pilot Training.
Science.gov (United States)
Mc Leod, Roger D.
2004-03-01
Separate military establishments across the globe can confirm that a high percentage of their prospective pilots-in-training are no longer visually fit to continue the flight training portion of their programs once their academic coursework is completed. I maintain that the visual stress induced by those intensive protocols can damage the visual feedback mechanism of any healthy and dynamic system beyond its usual and ordinary ability to self-correct minor visual loss of acuity. This deficiency seems to be detectable among collegiate and university athletes by direct observation of the height of the trajectory arc of a basketball's flight. As a particular athlete becomes increasingly stressed by academic constraints requiring long periods of concentrated reading under highly static angular convergence of the eyes, along with unfavorable illumination and viewing conditions, eyesight does deteriorate. I maintain that induced astigmatism is a primary culprit because of the evidence of that basketball's trajectory! See the next papers!
10. Research on the epidemiology of astigmatism in 905 cases of 17-19-year-old youths
Institute of Scientific and Technical Information of China (English)
刘保松; 袁媛; 彭华琮
2012-01-01
11. Wavefront-Guided Photorefractive Keratectomy with the Use of a New Hartmann-Shack Aberrometer in Patients with Myopia and Compound Myopic Astigmatism
Directory of Open Access Journals (Sweden)
Steven C. Schallhorn
2015-01-01
Purpose. To assess refractive and visual outcomes and patient satisfaction of wavefront-guided photorefractive keratectomy (PRK) in eyes with myopia and compound myopic astigmatism, with the ablation profile derived from a new Hartmann-Shack aberrometer. Methods. In this retrospective study, 662 eyes that underwent wavefront-guided PRK with a treatment profile derived from a new-generation Hartmann-Shack aberrometer (iDesign aberrometer, Abbott Medical Optics, Inc., Santa Ana, CA) were analyzed. The preoperative manifest sphere ranged from −0.25 to −10.75 D, and preoperative manifest cylinder was between 0.00 and −5.25 D. Refractive and visual outcomes, vector analysis of the change in refractive cylinder, and patient satisfaction were evaluated. Results. At 3 months, 91.1% of eyes had manifest spherical equivalent within 0.50 D. The percentage of eyes achieving uncorrected distance visual acuity of 20/20 or better was 89.4% monocularly and 96.5% binocularly. The mean correction ratio of refractive cylinder was 1.02 ± 0.43, and the mean error of angle was 0.00 ± 14.86° at 3 months postoperatively. Self-reported scores for optical side effects, such as starburst, glare, halo, ghosting, and double vision, were low. Conclusion. The use of a new Hartmann-Shack aberrometer for wavefront-guided photorefractive keratectomy resulted in high predictability, efficacy, and patient satisfaction.
12. Observation of corneal astigmatism induced by 2.2 mm micro-incision coaxial phacoemulsification
Institute of Scientific and Technical Information of China (English)
林英杰; 梁先军; 何锦贤; 赵抒羽; 杨雪艳; 曾胜
2013-01-01
AIM: To evaluate the effect of 2.2 mm micro-incision coaxial phacoemulsification on corneal astigmatism and surgically induced astigmatism (SIA). METHODS: Fifty-six senile cataract patients (78 eyes) were randomized into two groups: 38 eyes received a 2.2 mm micro-incision and 40 eyes a conventional 3.0 mm incision, each followed by torsional phacoemulsification with intraocular lens (IOL) implantation. Corneal astigmatism, SIA and uncorrected visual acuity (UCVA) were assessed 1 and 3 months after cataract surgery. RESULTS: One month postoperatively, corneal astigmatism was 0.85±0.42 D in the 2.2 mm group and 1.18±0.37 D in the 3.0 mm group (P < 0.05). Three months postoperatively, it was 0.74±0.40 D in the 2.2 mm group and 1.00±0.30 D in the 3.0 mm group. At both 1 and 3 months, SIA in the 3.0 mm group was greater than in the 2.2 mm group (P < 0.05). In the 3.0 mm group, mean SIA at 1 month was greater than at 3 months (P < 0.05), whereas in the 2.2 mm group SIA was similar at the two time points, with no statistically significant difference between 1 and 3 months. Postoperative UCVA was better in the 2.2 mm group at both 1 and 3 months. CONCLUSION: 2.2 mm micro-incision coaxial phacoemulsification produced smaller SIA and better UCVA.
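The abstract reports SIA without spelling out how it is computed. A minimal sketch of one common approach, representing each astigmatism as a vector on the double-angle plane and subtracting the preoperative from the postoperative vector (the magnitudes and axes below are illustrative, not the study's data):

```python
# Sketch of surgically induced astigmatism (SIA) via double-angle vectors.
import math

def to_xy(magnitude, axis_deg):
    """Astigmatism (magnitude in D, axis in degrees) -> (x, y) on the double-angle plane."""
    a = math.radians(2 * axis_deg)
    return magnitude * math.cos(a), magnitude * math.sin(a)

def sia(pre_mag, pre_axis, post_mag, post_axis):
    """Magnitude of the induced astigmatism vector (postoperative minus preoperative)."""
    x1, y1 = to_xy(pre_mag, pre_axis)
    x2, y2 = to_xy(post_mag, post_axis)
    return math.hypot(x2 - x1, y2 - y1)

# Example: 1.00 D @ 90 before surgery, 0.50 D @ 90 after -> 0.50 D induced.
print(round(sia(1.00, 90, 0.50, 90), 2))
```

Doubling the axis angle is what makes 0° and 180° coincide and lets astigmatisms at perpendicular axes cancel, which a naive subtraction of magnitudes would get wrong.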
13. The influence of corneal astigmatism on retinal nerve fiber layer thickness and optic nerve head parameter measurements by spectral-domain optical coherence tomography
Directory of Open Access Journals (Sweden)
Liu Lin
2012-05-01
Abstract Background To evaluate the influence of corneal astigmatism (CA) on retinal nerve fiber layer (RNFL) thickness and optic nerve head (ONH) parameters measured with spectral-domain optical coherence tomography (OCT) in highly myopic patients before refractive surgery. Methods Seventy eyes of 35 consecutive refractive surgery candidates were included in this study. The mean age of the subjects was 26.42 ± 6.95 years, and the average CA was −1.17 diopters (D; SD 0.64; range −0.2 to −3.3 D). All eyes had with-the-rule (WTR) CA: 34 eyes formed the normal CA group, with a mean CA of −0.67 ± 0.28 D, and 36 eyes formed the high CA group, with an average CA of −1.65 ± 0.49 D. All subjects underwent ophthalmic examination and imaging with the Cirrus HD OCT. Results No significant difference was noted in the average cup-to-disk ratio, vertical cup-to-disk ratio or cup volume (all P values > 0.05). Compared with the normal CA group, the high CA group had a larger disc area and rim area, thinner RNFL in the temporal quadrant, and superotemporal and inferotemporal peaks farther from the temporal horizon (all P values < 0.05); the remaining comparisons showed no significant differences (P values > 0.05). Conclusions The degree of with-the-rule CA should be considered when interpreting ONH parameters and peripapillary RNFL thickness measured by the Cirrus HD OCT. Virtual slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1148475676881895
14. Optical system design of a broadband astigmatism-free Czerny-Turner spectrometer
Institute of Scientific and Technical Information of China (English)
赵意意; 杨建峰; 薛彬; 闫兴涛
2014-01-01
To meet the trend toward miniaturized, high-resolution spectrometers, a compact, broadband, astigmatism-corrected spectrometer with a simple structure was designed. The principles and correction methods for the aberrations of a crossed-beam (folded) Czerny-Turner spectrometer were analyzed in detail, and the equations for broadband astigmatism correction with a cylindrical lens were derived. As an example, a compact spectrometer operating over 300-900 nm with an object-space numerical aperture of 0.08 was designed. It adopts a crossed-beam layout to reduce its size and uses a cylindrical lens to remove astigmatism over the full bandwidth. The results show that the spectrometer is simple, compact and small, corrects astigmatism across the wide spectral region, and achieves a spectral resolution better than 0.5 nm over the whole range.
15. Comparative study of Epi-LASIK and LASIK for myopic astigmatism
Institute of Scientific and Technical Information of China (English)
罗栋强; 王华; 何书喜; 陈蛟
2013-01-01
AIM: To analyze the effects of epipolis laser in situ keratomileusis (Epi-LASIK) and laser in situ keratomileusis (LASIK) for the treatment of myopic astigmatism. METHODS: 32 patients (64 eyes) were treated with Epi-LASIK and 63 patients (126 eyes) with LASIK. By degree of astigmatism, the eyes were divided into Group I (−0.25 to −2.75 DC) and Group II (−3.00 to −5.00 DC). During the 6-month follow-up, the early effects of the two operations were observed and compared in terms of uncorrected visual acuity (UCVA), best corrected visual acuity (BCVA), residual astigmatism, corneal healing, intraocular pressure (IOP) and corneal topography. RESULTS: In Group II, UCVA of 20/20 or better was achieved in 87.5% of the eyes subjected to Epi-LASIK and 63.3% of the eyes subjected to LASIK, a significant difference (χ² = 4.055, P < 0.05); residual astigmatism was −0.41±0.30 D for the Epi-LASIK eyes and −0.74±0.36 D for the LASIK eyes (t = 2.672, P < 0.05); postoperative corneal astigmatism was 0.63±0.34 D for the Epi-LASIK eyes and 0.81±0.52 D for the LASIK eyes (t = 2.234, P < 0.05). CONCLUSION: For the treatment of high astigmatism (≥ −3.00 D), Epi-LASIK is more effective and predictable than LASIK.
16. Study on the accuracy of excimer laser correction of myopic astigmatism with Fourier analysis
Institute of Scientific and Technical Information of China (English)
胡亮; 徐鹏; 崔贺; 谢文加; 王勤美
2013-01-01
Objective: To evaluate the accuracy of the Cool excimer laser system for myopic astigmatism in One Use-Plus sub-Bowman's keratomileusis (OUP-SBK) and femtosecond laser in situ keratomileusis (FS-LASIK) using Fourier analysis, and to develop a customized nomogram for astigmatic patients. Methods: The charts of 542 eyes (OUP-SBK: 318 eyes; FS-LASIK: 224 eyes) treated at the Eye Hospital of Wenzhou Medical College between January and March 2012 were reviewed retrospectively, with inclusion criteria of with-the-rule astigmatism ≥ −0.5 D and myopia > −3 D. Eyes were divided into moderate and high myopia groups according to baseline myopia. Preoperative and 3-to-6-month postoperative examinations included refraction, corneal topography, pachymetry and best spectacle-corrected visual acuity (BSCVA). Fourier analysis was used to transform preoperative refractive astigmatism into TJ0 and TJ45, preoperative corneal astigmatism into CJ0 and CJ45, and postoperative residual astigmatism into RJ0 and RJ45. Group t tests and Pearson tests were used for statistical analysis. Results: In the OUP-SBK moderate myopia group, RJ0 = (0.012 ± 0.161) D and RJ45 = (−0.012 ± 0.128) D; in the OUP-SBK high myopia group, RJ0 = (0.026 ± 0.239) D and RJ45 = (−0.029 ± 0.194) D; the differences between the moderate and high myopia groups were not statistically significant (P = 0.697, 0.402). In the FS-LASIK moderate myopia group, RJ0 = (0.053 ± 0.248) D and RJ45 = (−0.039 ± 0.186) D; in the FS-LASIK high myopia group, RJ0 = (0.042 ± 0.267) D and RJ45 = (0.044 ± 0.261) D. In the FS-LASIK moderate myopia group, RJ0 correlated significantly with CJ0 (r = 0.393, P < 0.01) and with TJ0 (r = 0.596, P < 0.01), and RJ45 with TJ0 (r = 0.396, P < 0.01). Conclusions: (1) OUP-SBK and FS-LASIK with the Cool excimer laser correct with-the-rule astigmatism in moderately and highly myopic patients with high accuracy; (2) correction of the vertical/horizontal astigmatic component is more accurate than correction of the oblique component; (3) the accuracy of correcting with-the-rule astigmatism is similar in moderate and high myopia.
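The Fourier (power-vector) decomposition used in analyses like this one maps a sphere/cylinder/axis refraction to a spherical equivalent M plus two astigmatic components, J0 and J45. A minimal sketch using the standard formulas (the sample refraction below is made up):

```python
# Power-vector (Fourier) decomposition of a sphero-cylindrical refraction.
import math

def power_vectors(sphere, cylinder, axis_deg):
    a = math.radians(axis_deg)
    m = sphere + cylinder / 2.0                 # spherical equivalent
    j0 = -(cylinder / 2.0) * math.cos(2 * a)    # with-/against-the-rule component
    j45 = -(cylinder / 2.0) * math.sin(2 * a)   # oblique component
    return m, j0, j45

m, j0, j45 = power_vectors(-3.00, -1.00, 180)
print(m, j0, j45)   # M = -3.5 D, J0 = +0.5 D (with-the-rule), J45 ~ 0
```

Because J0 and J45 are orthogonal components on the double-angle plane, they can be averaged and compared with ordinary statistics, which is what lets the study correlate residual (RJ0, RJ45) with preoperative (TJ0, CJ0, ...) astigmatism component by component.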
17. Phakic posterior chamber intraocular lens implantation in high myopia with astigmatism
Institute of Scientific and Technical Information of China (English)
刘莉; 陈自新; 陈茂盛; 马金花; 陈荥培
2011-01-01
Objective: To evaluate the efficacy, safety, stability and predictability of implanting a posterior chamber phakic intraocular lens to correct high myopia with astigmatism. Methods: Thirty-nine eyes of 22 patients with high myopia were treated with ICL implantation. Preoperative myopia ranged from −7.0 D to −24.0 D (mean −14.50 ± 3.50 D) and astigmatism from −0.50 D to −4.50 D (mean −2.25 ± 1.32 D). All 39 eyes were implanted successfully and were followed up for 3 to 18 months; follow-up examinations included visual acuity, refraction, tonometry, slit-lamp examination, chamber depth, chamber angle and the space between the crystalline lens and the IOL. Results: After the operation, the uncorrected visual acuity of 38 eyes was the same as or better than the preoperative best corrected visual acuity (BCVA). Refraction (spherical equivalent) ranged from −0.50 to −2.25 D (mean −0.75 ± 0.38 D) and astigmatism from 0.25 to 2.75 D (mean 1.03 ± 0.23 D). Seven eyes showed early postoperative intraocular pressure elevation, which returned to normal within 24 hours after pressure-lowering treatment. No glaucoma, cataract, IOL decentration or retinal detachment occurred. Conclusion: Posterior chamber phakic IOL implantation for high myopia with astigmatism preserves physiological accommodation, has few complications, and is a safe and effective method of correcting high myopia with astigmatism.
18. Influence of different intraocular lenses on postoperative benefit of cataract patients with astigmatism
Institute of Scientific and Technical Information of China (English)
巨朝娟; 楚妙; 张骞; 林伟
2015-01-01
BACKGROUND: Monofocal and multifocal toric intraocular lenses, which have been widely used in clinic, exhibit excellent biological and optical characteristics and have good safety and stability after implantation. OBJECTIVE: To compare the outcomes and rotational stability in patients with cataract and astigmatism after implantation of monofocal and multifocal intraocular lenses. METHODS: A total of 210 patients with cataract and astigmatism who received phacoemulsification and intraocular lens implantation were included in this study. Of them, 105 patients were assigned to monofocal intraocular lens implantation and the other 105 to multifocal intraocular lens implantation. Uncorrected visual acuity, best corrected visual acuity and residual astigmatism were reexamined at 1 and 3 weeks and 1 month after surgery. The rotation of the toric intraocular lens was determined, and the incidence of complications and the spectacle-independence rate were recorded. RESULTS AND CONCLUSION: Visual acuity and residual astigmatism in each group were significantly improved 1 week after intraocular lens implantation (P < 0.05), and both indicators continued to improve over time. Improvement in visual acuity and residual astigmatism was more obvious in the multifocal group than in the monofocal group. Postoperative intraocular lens rotation of < 5° occurred in both groups; the rotation was greater in the multifocal group than in the monofocal group at all time points (P < 0.05). There were no significant differences in the incidence of complications or the spectacle-independence rates between the two groups at 1 month after surgery. These results demonstrate that the multifocal toric intraocular lens provides better improvement in visual acuity and residual astigmatism, while the monofocal toric intraocular lens provides better rotational stability.
19. Clinical observation of adolescent compound myopic astigmatism treated by toric design orthokeratology
Institute of Scientific and Technical Information of China (English)
郭雷; 张悦; 陆新; 周爽; 刘岩
2016-01-01
Objective To investigate the clinical effects and safety of toric design orthokeratology for compound myopic astigmatism in adolescents. Methods Toric design orthokeratology lenses were fitted to correct 68 eyes of 36 patients aged 9-14 years with compound myopic astigmatism (myopia −1.50 to −5.00 D, astigmatism −1.50 to −3.50 D). Uncorrected visual acuity, refraction, corneal topography, anterior segment health and lens fit were measured and recorded at baseline and at 1 day, 1 week, 1 month, 3 months, 6 months and 12 months after treatment. Results From 1 week to 12 months, the proportion of eyes with uncorrected visual acuity ≥ 0.8 was significantly greater than at baseline (P < 0.05), and uncorrected refraction decreased significantly at 1 week, 1 month, 3 months and 12 months compared with baseline (P < 0.05). The flat and steep K values of the anterior 3 mm corneal surface changed significantly (P < 0.05), but those of the posterior surface did not; central 3-5 mm corneal thickness and posterior surface elevation on topography likewise showed no significant change (P > 0.05). Subjective visual quality was satisfactory, lens fit remained good, and no anterior segment infection or corneal thinning occurred after treatment. Conclusions Toric design orthokeratology can correct compound myopic astigmatism in adolescents effectively and safely to some degree, despite the limited range of astigmatism correction.
20. Application of improved orthokeratology for myopic adolescents with moderate-to-high astigmatism
Institute of Scientific and Technical Information of China (English)
韦伟; 申笛; 薛亚林; 张长宁
2015-01-01
Objective To evaluate an improved fitting method that allows conventional (spherical-design) orthokeratology to be used in myopic patients with moderate-to-high corneal astigmatism, and to analyze the factors behind successful fitting. Methods This was a retrospective analysis of 33 patients (60 eyes) ranging in age from 6 to 18 years with 0.50 to 6.25 diopters (D) of myopia and with-the-rule corneal astigmatism of 1.76 to 3.02 D who were fitted with orthokeratology lenses of spherical design. Treatment outcomes were evaluated by comparing eyes before lens wear and after wearing the lenses for 1 day, 1 week, 1 month, 3 months, 6 months and 12 months, graded on corneal topography and visual acuity; central-fitting factors were analyzed with logistic regression. Results Uncorrected visual acuity (5-point logMAR) was 4.15±0.23 before lens wear and 4.59±0.23, 4.90±0.11, 4.96±0.07, 4.86±0.25, 4.93±0.10 and 4.93±0.11 after 1 day, 1 week, 1 month, 3 months, 6 months and 1 year of wear, respectively, significantly better than baseline (F = 148.08, P < 0.01). One month after fitting, the correction effect was grade I in 35 eyes (58%), grade II in 15 eyes (25%), grade III in 8 eyes (13%) and grade IV in 2 eyes (3%). Refraction, corneal astigmatism, the inferior-superior corneal power difference (I-S value), steep K (SK) and corneal e value had no significant influence on the outcome. Conclusions With an improved fitting method, conventional spherical orthokeratology lenses can be used selectively in suitable myopic patients with more than 1.50 D of corneal astigmatism, with good treatment results.
1. Clinical research of the implantation of phakic posterior chamber intraocular lens (ICL) for high myopia with astigmatism
Institute of Scientific and Technical Information of China (English)
廖荣丰; 汪永; 周艳峰; 刘伦; 刘兴华; 封利霞
2009-01-01
Objective To observe and investigate the efficacy and safety of the implantation of a phakic posterior chamber intraocular lens (ICL) for high myopia with or without astigmatism. Methods 56 eyes of 28 patients with high myopia were treated with ICL implantation. Preoperative myopia ranged from −7.0 to −22.5 D (mean −12.42 ± 3.50 D) and astigmatism from 0.37 to 6.5 D (mean 1.92 ± 1.32 D). All 56 eyes were implanted successfully and have been followed up for 3 to 24 months; follow-up examinations included visual acuity, refraction, tonometry, slit-lamp examination and the space between the crystalline lens and the IOL. Results Three months after the operation, the uncorrected visual acuity of every eye was the same as or better than the preoperative best corrected visual acuity (BCVA). Refraction ranged from 0.25 to 1.50 D (mean 0.534 ± 0.408 D) and astigmatism from 0.25 to 1.50 D (mean 0.564 ± 0.289 D). The main complication was early postoperative intraocular pressure elevation: 7 eyes of 4 patients had increased intraocular pressure shortly after surgery, which returned to normal in 3 to 4 days after treatment. Conclusion ICL implantation for high myopia with astigmatism is effective, predictable and safe. High myopes, especially those with extreme myopia or corneas too thin for excimer laser correction, are ideal candidates.
2. Efficacy of phacoemulsification combined with intraocular lens implantation for senile cataract with corneal astigmatism
Institute of Scientific and Technical Information of China (English)
董永孝; 黄立; 关小荣; 马艳; 韩文涛; 赵金; 吕菊迎
2015-01-01
Objective: To evaluate the clinical efficacy of phacoemulsification combined with toric intraocular lens (IOL) implantation in senile cataract patients with corneal astigmatism. Methods: Using a random number table, 64 patients (84 eyes) with senile cataract and corneal astigmatism treated at our eye center were divided into a toric IOL group (33 patients, 42 eyes), treated with phacoemulsification plus toric IOL implantation, and a spherical IOL group (31 patients, 42 eyes), treated with conventional temporal clear corneal incision phacoemulsification plus spherical IOL implantation combined with a pair of limbal relaxing incisions on the steep axis. Visual acuity distribution, corneal astigmatism, and spherical and cylindrical indices (keratometry, axis, non-mydriatic refraction sphere and cylinder, and astigmatism axis) were compared before and 3 months after surgery. Results: At the 3-month review, uncorrected visual acuity had improved in both the toric IOL group and the spherical IOL group compared with baseline (P < 0.05). Non-mydriatic refraction sphere and cylinder values in the toric IOL group were significantly lower than in the spherical IOL group at 3 months postoperation (P < 0.05). CONCLUSION: Phacoemulsification combined with intraocular lens implantation has a good clinical effect for senile cataract with corneal astigmatism.
3. Hybrid material contact lens in keratoconus and myopic astigmatism patients
Directory of Open Access Journals (Sweden)
Fernando Leal
2007-03-01
PURPOSE: To evaluate comfort and visual performance with two different contact lens types, hybrid material (HM) and rigid gas-permeable (RGP), in patients with regular myopic astigmatism and with keratoconus. METHODS: A randomized, double-masked, prospective study of 22 patients with the diagnosis of myopic astigmatism (8 with myopic astigmatism and 14 with keratoconus) was conducted. Fifteen patients were female and 7 male; mean age was 32.13 ± 8.12 years. In one eye a rigid gas-permeable contact lens was fitted (DK 30), and in the other a hybrid material contact lens (DK 23). All patients were submitted to the following tests: measurement of comfort level by the visual analog scale, tear break-up time, best corrected visual acuity with the Bailey-Lovie scale adapted for 4 meters, the functional acuity contrast test (FACT) and wavefront analysis. RESULTS: Comfort showed no association with lens type (p=0.350), although comfort level varied during the first 7 days. Visual acuity increased between the 7th and 15th day of adaptation and stabilized thereafter, with no difference between the lens types. Tear break-up time did not differ between the lens types (p=0.989). Contrast sensitivity did not differ from spectacles except at frequency B (3 cpd), where it was higher with spectacles; the analyzed higher-order aberrations decreased significantly with contact lens wear, except for spherical aberration and coma. CONCLUSION: When used by patients with keratoconus or compound myopic astigmatism, the hybrid material contact lens provided satisfactory visual performance and comfort, at levels that did not differ from those of rigid gas-permeable lenses in either patient group.
4. Abnormal head position caused by incorrect prescription for astigmatism: case report
Directory of Open Access Journals (Sweden)
Flávia Augusta Attié de Castro
2005-10-01
Abnormal head position is a compensatory condition which improves patients' vision. It can be caused by ophthalmological problems such as oculomotor imbalances (strabismus, nystagmus) and high astigmatism. However, it results in esthetic impairment and, in the long term, orthopedic trouble (cervical spine) and facial asymmetries. We describe the case of a girl, JL, 8 years old, with her head tilted to the left for several years, wearing glasses prescribed elsewhere for mixed astigmatism: right eye = +2.00 sph −5.50 cyl × 180° and left eye = +2.25 sph −5.75 cyl × 180°. On examination, the patient kept her head tilted to the left and had corrected visual acuity of 0.5 in the right eye and 0.7 in the left. Simple and alternate cover tests showed no ocular deviation; ocular rotations, biomicroscopy and fundoscopy were likewise unremarkable. Cycloplegic refraction and lens testing found: right eye = +3.50 sph −6.00 cyl × 10° and left eye = +3.50 sph −6.00 cyl × 170°, with visual acuity of 1.0 in both eyes. The lenses found on examination were prescribed, and the patient returned with the new correction and without the head tilt. Poorly corrected refractive errors can also cause torticollis and often go unnoticed; cycloplegic refraction and lens testing are essential for a precise diagnosis.
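The sphero-cylindrical prescriptions quoted in this case can be rewritten between plus- and minus-cylinder form with the usual transposition arithmetic: add the cylinder to the sphere, flip the cylinder's sign, and rotate the axis by 90°. A minimal sketch, using the report's right-eye correction as input:

```python
# Sphero-cylindrical prescription transposition (plus-cyl <-> minus-cyl form).
def transpose(sphere, cylinder, axis):
    """Rewrite a prescription in the opposite cylinder form."""
    new_sphere = sphere + cylinder
    new_cylinder = -cylinder
    new_axis = axis - 90 if axis > 90 else axis + 90  # keep axis in 1..180
    return new_sphere, new_cylinder, new_axis

# +2.00 sph -5.50 cyl x 180 rewritten in plus-cylinder form:
print(transpose(2.00, -5.50, 180))  # (-3.5, 5.5, 90)
```

The plus-cylinder form makes the "mixed astigmatism" label visible at a glance: one principal meridian is hyperopic (+2.00 D) and the other myopic (−3.50 D).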
5. The expression of the correction of corneal astigmatism in the point spread function analysis system of human eyes
Institute of Scientific and Technical Information of China (English)
姜珺; 毛欣杰; 金成鹏; 吕帆
2008-01-01
Objective: To compare retinal optical imaging quality among corneal astigmatism subjects under different corrections using a point spread function (PSF) analysis system. Methods: A PSF 1000 analyzer was used to measure the retinal image quality of the eyes of 26 subjects with corneal astigmatism (sphere −3.00 to −6.00 DS, cylinder −0.75 to −3.00 DC), each fully corrected with three different methods: spectacles (SPE), rigid gas-permeable contact lenses (RGPCL) and toric soft contact lenses (TSCL). The modulation transfer function (MTF) curve was recorded and evaluated; 12 points of the MTF curve (equivalent to the 12 lines of the logMAR VA chart) were chosen for analysis. Twenty-six subjects with equivalent moderate myopia and no astigmatism (< −0.75 D) served as the control group. Results: The MTF curve of each eye was mirror-symmetric. Compared with the control group, with a 3.0 mm pupil there was no statistical difference at low spatial frequencies, while the differences at middle and high frequencies were significant (P < 0.05); with a 6.0 mm pupil, there were statistical differences at all frequencies (P < 0.01). With simulated 3.0 mm and 6.0 mm pupils, the MTF values of the three correction methods differed statistically. With a 3.0 mm pupil, the MTF value of eyes fitted with RGPCLs was higher than that of eyes fitted with TSCLs except at frequencies 3.00, 3.78 and 4.78, with significant differences (P < 0.05). For all three methods, MTF values with a simulated 3.0 mm pupil were significantly higher than with a simulated 6.0 mm pupil (P < 0.01). Conclusions: The PSF analytical method offers objective data on retinal imaging quality. RGPCLs and the tear-film lens they induce not only correct corneal astigmatism but also enhance ocular optical quality by reducing diffraction, dispersion and other higher-order aberrations.
6. Application study of rigid gas permeable contact lens in patients with traumatic corneal irregular astigmatism
Institute of Scientific and Technical Information of China (English)
郑斌; 陈岩; 周静; 徐朝霞; 沈丽君
2012-01-01
Objective To evaluate the clinical value of rigid gas permeable contact lenses (RGPCL) in patients with traumatic corneal irregular astigmatism. Methods Eighteen consecutive patients (18 eyes) with traumatic corneal irregular astigmatism were fitted with RGPCL. All patients were followed up for at least 6 months. Preoperative data included: age, sex, eye, interval between RGPCL fitting and complete suture removal, status of lens, uncorrected visual acuity (UCVA), spectacle visual acuity (SVA), location and size of the corneal scar, and corneal astigmatism. Post-contact-lens-fitting data included: RGPCL visual acuity (RGPCLVA), duration of contact lens wear, the reason for dropping out of contact lens wear, contact-lens-related complications, and whether or not the patient was successful in wearing the contact lens. RGPCL fitting was considered successful if the patient judged the wearing of RGPCL to be satisfactory over at least 8 hours of daily wear throughout the follow-up period. Decimal acuity was converted to the 5-logMAR value. Data analysis used SPSS 16.0 for the paired-samples t-test, two-independent-sample t-test and analysis of covariance. Results The average age was 20.94±13.35 years (range 5-45 years); five eyes were pseudophakic, one was aphakic and the other twelve were phakic. According to the location of the corneal scar, nine eyes were in zone 1 and the other nine in zone 2. The average length of the scar was 4.04±2.23 mm (range 1.50-8.33 mm). The difference in scar length between zone 1 and zone 2 was not statistically significant (t=-0.967, P=0.348). The interval from complete suture removal to contact lens fitting averaged 5.67±5.52 months (range 3-22 months). Mean UCVA was 4.2±0.5 (range 3.0-4.9). Mean SVA was 4.6±0.3 (range 4.0-4.9). Mean RGPCLVA was 4.9±0.1 (range 4.4-5.1). The visual acuities with contact lenses were significantly better than with spectacles (t=4.143, P<0.000). RGPCLVA was 1 line better in 7 eyes, 2-4 lines in 6 eyes, more than 5 lines in 4
7. Clinical study on the stability of Toric intraocular lens implantation in high myopia cataract with corneal astigmatism
Institute of Scientific and Technical Information of China (English)
张晓城; 陈茂盛; 李嘉文
2012-01-01
Objective To evaluate the clinical effect and rotational stability of the AcrySof Toric toric-surface intraocular lens in cataract patients with high myopia associated with regular corneal astigmatism. Methods Cataract patients with corneal astigmatism treated between June 2009 and August 2011 were randomly selected and underwent phacoemulsification with implantation of an AcrySof Toric IOL. The experimental group comprised 40 patients (43 eyes) with cataract and high myopia (axial length ≥26 mm, IOL power ≤15 D); refraction sphere (-5.50 to -10.25) D, average (-6.25±0.25) D; cylinder (-1.25 to -4.25) D, average (-2.75±0.25) D. The control group comprised 39 cases (40 eyes) of simple astigmatism in cataract patients (axial length ≥22 mm and ≤24 mm); refraction sphere (-0.25 to -1.25) D, average (-0.75±0.25) D; cylinder (-1.50 to -4.25) D, average (-2.50±0.25) D. Three months postoperatively, slit-lamp photography after full mydriasis was performed and the intraocular lens axis was analyzed with Adobe Photoshop software; uncorrected visual acuity (UCVA), best corrected visual acuity (BCVA), postoperative corneal and whole-eye astigmatism, expected and actual residual astigmatism, and degree of IOL rotation were recorded preoperatively and postoperatively. Results At 3 months, the proportion of eyes with UCVA>0.5 showed no significant difference between the two groups (P>0.05), nor did the proportion with BCVA>0.8 (P>0.05). Residual astigmatism at 3 months was (0.56±0.33) D in the experimental group and (0.54±0.32) D in the control group; the difference was not statistically significant (P>0.05). IOL rotation at 3 months was 3.79°±2.33° (range -6.25° to +7.78°) in the experimental group and 2.75°±1.38° (range -4.62° to +6.15°) in the control group; the difference in rotation between the two groups was statistically significant (P<0.05). Conclusion Observation at 3 months indicated that the AcrySof Toric IOL can be implanted efficiently and stably
8. Comparison of curative effects between real-time iris recognition guided LASIK and traditional LASIK for myopic astigmatism
Institute of Scientific and Technical Information of China (English)
揭黎明; 王骞; 郑林
2012-01-01
Objective To compare the clinical effects of real-time iris recognition-guided LASIK and traditional LASIK for myopic astigmatism. Methods In this prospective randomized controlled study, 209 patients (391 eyes) with myopic astigmatism were divided into two groups: the experimental group (105 cases, 196 eyes) received real-time iris recognition-guided LASIK, and the control group (104 cases, 195 eyes) received traditional LASIK. The uncorrected visual acuity, best corrected visual acuity, and the degree and axis of astigmatism were compared between the two groups at 1, 3 and 6 months postoperatively. Results Static iris recognition detected a cyclotorsional misalignment of 2.74°±2.05°; dynamic iris recognition detected intraoperative cyclotorsional rotation ranging from 0° to 6.5°. At 6 months, uncorrected visual acuity was ≥0.5 in both groups and no eye lost best corrected visual acuity. At 6 months, more eyes in the experimental group (181 eyes, 92.3%) than in the control group (167 eyes, 85.6%) achieved uncorrected visual acuity at or above the preoperative best corrected visual acuity, and the mean astigmatism of the experimental group (-0.22±0.20) D was lower than that of the control group (-0.34±0.35) D (both P<0.01). At 6 months, more eyes in the experimental group (79 eyes, 40.3%) than in the control group (55 eyes, 28.2%) were free of astigmatism (P<0.05); oblique astigmatism increased markedly in the control group. Conclusion Real-time iris recognition-guided LASIK can effectively correct cyclotorsional misalignment before and during LASIK, making the correction of astigmatism power and axis more precise.
9. Toric intraocular lens implantation for cataract and irregular astigmatism related to pellucid marginal degeneration: case report
Directory of Open Access Journals (Sweden)
Ana Luiza Biancardi
2012-12-01
Pellucid marginal degeneration (PMD) is a rare corneal ectasia whose progression leads to irregular astigmatism and low vision that cannot be corrected with spectacles or contact lenses. This report describes a patient with low vision due to cataract and PMD who was treated with phacoemulsification and implantation of a toric intraocular lens, with a satisfactory visual acuity outcome in both eyes.
10. Electrophysiological research on the effects of optic-induced astigmatism on transmission time and response intensity of visual signals in the visual cortex
Institute of Scientific and Technical Information of China (English)
解来青; 徐国旭; 赵堪兴
2012-01-01
Objective To evaluate the contribution of different degrees of astigmatism to the latency and amplitude of pattern visual evoked potentials (PVEPs), and thereby the effect of astigmatism on the transmission and response intensity of visual signals in the visual cortex. Methods This was a randomized study. PVEPs were measured in subjects with normal or normally corrected visual acuity using a checkerboard pattern stimulus under varying conditions, applying different astigmatic trial lens powers in succession (0-5 D). Paired-samples t tests, analysis of variance and Pearson correlation were performed. Results When a lower spatial frequency (60' checkerboard stimulus) was used, there was little change in the latency of P100 (F=0.290, P>0.05). However, when a higher spatial frequency (15' checkerboard stimulus) was used, VEP latency increased with a greater degree of astigmatism (F=10.850, P<0.01; r=0.647, P<0.01). There was a gradual reduction of the amplitude of P100 as convex cylindrical lens power increased (with 60' checkerboards, F=3.947, P<0.01; r=-0.470, P<0.01; with 15' checkerboards, F=14.280, P<0.01; r=-0.699, P<0.01). Conclusion The transmission of visual signals depends on the quality of the visual image formed on the retina. Visual signal transmission time and response intensity in the visual cortex are affected not only by the defocus of the retinal image but also by the spatial frequency of the pattern stimulus. With a high spatial frequency, the transmission of visual signals is faster and the response intensity of the visual cortex is greater if the visual image formed on the retina is clear.
11. Preliminary observation of the effect of posterior chamber phakic intraocular lens for high myopia with or without astigmatism
Institute of Scientific and Technical Information of China (English)
唐晓蕾; 王晓莉; 赵媛; 曾涛; 夏敏; 邱丹
2015-01-01
Objective To investigate the efficacy and safety of implantation of a posterior chamber phakic intraocular lens (ICL) for high myopia with or without astigmatism. Methods From March 2010 to January 2013, 14 patients (25 eyes) with high myopia, with or without astigmatism, including 6 males and 8 females, were admitted to our department. The average age was 26.7 years (range 18-42). All were treated by implantation of a posterior chamber phakic ICL and followed up for 3 months; uncorrected visual acuity, best corrected visual acuity, refraction, astigmatism, and slit-lamp findings (vault and axis rotation) were used to evaluate efficacy. Results All patients were implanted successfully and adhered to follow-up. The uncorrected visual acuity of every eye improved after surgery and reached or exceeded the preoperative best corrected visual acuity by 3 months. The spherical equivalent was -7.00 to -22.0 D (mean -(12.52±2.50) D) preoperatively and -0.25 to -0.75 D (mean -(0.54±0.11) D) postoperatively, a significant reduction (P<0.05). Astigmatism decreased from -(1.83±1.12) D preoperatively to -(0.55±0.21) D postoperatively (P<0.05). Intraocular pressure and corneal endothelial cell density did not change significantly (P>0.05). No eye had an excessive vault; one eye had a low vault. Postoperatively, 93.75% (15/16) of TICL eyes showed axis rotation <10°. Conclusion Implantation of a posterior chamber phakic intraocular lens for high myopia with or without astigmatism can achieve good uncorrected visual acuity and offers a new option for patients with high myopia, but its long-term stability and late complications require further observation.
12. Iris-Registration in Wavefront-Guided LASIK versus Conventional LASIK for Correction of Myopia and Myopic Astigmatism: A Meta-Analysis
Institute of Scientific and Technical Information of China (English)
李岩; 成拾明; 周霞; 许玲
2013-01-01
Objective To systematically evaluate the efficacy and safety of iris-registration in wavefront-guided LASIK (IR+WG LASIK) versus conventional LASIK for correction of myopia accompanied with astigmatism. Methods Such databases as PubMed, EMbase, The Cochrane library (Issue 2, 2012), CBM, CNKI, VIP, and WangFang Data were searched to collect the randomized controlled trials (RCTs) and quasi-RCTs about IR+WG LASIK versus conventional LASIK for correction of myopia accompanied with astigmatism. The retrieval time was from inception to February 2012, and the language was in both Chinese and English. Two reviewers independently screened the literature, extracted the data and assessed the quality of the included studies. Then the meta-analysis was performed by using RevMan 5.1 software. Results A total of 9 studies involving 3 903 eyes were included. The results of meta-analysis showed that, compared with the conventional LASIK group, the IR+WG LASIK group had a higher ratio in patients with postoperative un-corrected visual acuity no less than 1.0 (RR=1.03, 95%CI 1.01 to 1.05, P=0.002), as well as in patients with best-corrected visual acuity gained over 1 line (RR=1.75, 95%CI 1.49 to 2.16, P<0.000 01); it was smaller in the postoperative high order aberration RMS (WMD=-0.16, 95%CI -0.21 to -0.11, P<0.000 01), coma-like RMS (WMD=-0.05, 95%CI -0.11 to 0.00, P=0.07), spherical-like RMS (WMD=-0.15, 95%CI -0.23 to -0.07, P=0.000 2), and residual astigmatism (WMD=0.14, 95%CI 0.10 to 0.18, P<0.000 01); moreover, it was lower in the incidence of postoperative glare (RR=0.27, 95%CI 0.15 to 0.50, P<0.000 1), and it was higher in the subjective satisfaction of patients (RR=1.08, 95%CI 1.04 to 1.13, P=0.000 3). Conclusion Compared with conventional LASIK, IR+WG LASIK can more effectively reduce astigmatism, postoperative high order aberration RMS and spherical-like RMS. It can also get visual function including uncorrected visual acuity and best-corrected visual acuity
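The pooled risk ratios (RR) with 95% confidence intervals reported in this meta-analysis were computed with RevMan. The core idea of pooling can be sketched with textbook inverse-variance fixed-effect weighting on the log scale (a minimal illustration with made-up 2x2 counts — not the review's data and not RevMan's exact Mantel-Haenszel implementation):

```python
import math

def pooled_rr(studies):
    """Inverse-variance fixed-effect pooling of risk ratios.

    `studies` is a list of tuples (events_treat, total_treat,
    events_ctrl, total_ctrl). Returns (pooled RR, 95% CI low, high).
    """
    num = den = 0.0
    for a, n1, c, n2 in studies:
        log_rr = math.log((a / n1) / (c / n2))
        # Delta-method variance of log RR for a 2x2 table
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2
        w = 1 / var                      # inverse-variance weight
        num += w * log_rr
        den += w
    log_pooled = num / den
    se = math.sqrt(1 / den)
    return tuple(math.exp(v) for v in (log_pooled,
                                       log_pooled - 1.96 * se,
                                       log_pooled + 1.96 * se))

# Two hypothetical studies, each with RR = 1.125
rr, lo, hi = pooled_rr([(90, 100, 80, 100), (45, 50, 40, 50)])
```

Larger studies get larger weights because their log-RR variance is smaller; a pooled CI that excludes 1 corresponds to the significant RRs quoted above.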
13. An evaluation of visual performance after trans-epithelial photorefractive keratectomy for correcting irregular corneal astigmatism
Institute of Scientific and Technical Information of China (English)
刘静; 陈世豪; 王一博; 张佳; 汪凌; 许琛琛; 王勤美
2014-01-01
Objective To evaluate visual performance pre- and postoperatively in patients with irregular corneal astigmatism who were treated with topography-guided trans-epithelial photorefractive keratectomy (TPRK). Methods This non-randomized prospective clinical study comprised 15 eyes of 12 patients with irregular corneal astigmatism who were treated with topography-guided TPRK. The data included UCVA, BCVA, pre- and postoperative refractive data, and contrast sensitivity before surgery and at 1 and 3 months after surgery; the corneal epithelial healing timeline; pain scores at 3 and 7 days after surgery; the classification of haze when it appeared; and the safety and efficacy indexes. Repeated-measures analysis of variance was used to compare the changes over time. Results Mean UCVA increased from 4.11±0.28 preoperatively to 4.88±0.16 at 3 months postoperatively (F=36.706, P<0.05). Mean BSCVA increased from 4.86±0.08 to 4.98±0.09 (F=5.075, P<0.05), with no visual acuity lines lost. Safety and efficacy indexes were 1.025 and 1.004, respectively. Mean spherical equivalent (SE) was reduced from -3.73±4.62 D to -0.03±0.09 D (F=-4.034, P<0.05), and the mean cylinder was reduced from -1.71±1.43 D to +0.38±1.14 D (F=-9.192, P<0.05). There were no significant differences in contrast sensitivity at the 3, 6 and 12 c/d spatial frequencies between before surgery and 1 month after surgery (P>0.05), but at 3 months after surgery patients showed better contrast sensitivity than before surgery (P<0.05). Haze appeared in 2 eyes at 1 month postoperatively but had resolved by 3 months postoperatively. Conclusion Topography-guided TPRK appears to be an effective treatment for irregular corneal astigmatism. The operation improves contrast sensitivity and visual performance in patients with irregular corneal astigmatism.
14. The use of vector analysis to evaluate the changes in corneal astigmatism with general and toric orthokeratology lenses
Institute of Scientific and Technical Information of China (English)
常枫; 沈政伟; 陈云辉; 魏润菁; 李梅; 周萍; 周和政
2016-01-01
Objective To use vector analysis to evaluate the effectiveness of general and toric orthokeratology lenses for changes in corneal astigmatism in myopic children with moderate-to-high astigmatism. Methods This was a prospective study. Fourteen patients, 27 eyes (Spherical group), were fitted with spherical orthokeratology lenses and 10 patients, 20 eyes (Toric group), were fitted with toric orthokeratology lenses. Data collection was performed at 1 day, 1 week, 1 month and 3 months after the orthokeratology lenses were fitted and included visual acuity, corneal topography, axial length and biomicroscopy examinations. Changes in corneal toricity were evaluated using vector analysis. Data were compared between the two groups using the independent t test. Results The mean subjective myopia of the Spherical group and the Toric group at baseline was -3.80±1.43 D and -3.98±1.53 D (P>0.05). The corneal J0 vector values were -1.11±0.23 D and -1.18±0.29 D (P>0.05) and the J45 vector values were 0.11±0.21 D and 0.05±0.51 D (P>0.05), respectively. After wearing orthokeratology lenses for 1 day, 1 week, 1 month and 3 months, UCVA improved steadily in both groups; the differences between the two groups were insignificant at 1 day, 1 week and 1 month, but significant at 3 months (4.93±0.05 vs. 5.05±0.06). The differences in corneal J0 vector values between the two groups were significant at all time points (t=-4.83, -1.56, -2.38, -1.03; P<0.05). The differences in corneal J45 vector values between the two groups were insignificant. Four patients in the Spherical group and none in the Toric group reported visual disturbance symptoms. Conclusion Both spherical and toric orthokeratology lenses can improve UCVA in myopic children with moderate-to-high astigmatism. However, the toric design can be effective for improving contact lens fitting and enhancing the effect of corneal reshaping.
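The J0 and J45 values reported in this record follow the power-vector decomposition of astigmatism; assuming the standard Thibos convention (which the paper does not spell out), the conversion from a sphero-cylindrical refraction can be sketched as:

```python
import math

def power_vector(sphere: float, cyl: float, axis_deg: float):
    """Convert a sphero-cylindrical refraction (S / C x axis) to
    Thibos power-vector components (M, J0, J45)."""
    a = math.radians(axis_deg)
    m = sphere + cyl / 2.0                # M: spherical equivalent
    j0 = -(cyl / 2.0) * math.cos(2 * a)   # J0: with-/against-the-rule component
    j45 = -(cyl / 2.0) * math.sin(2 * a)  # J45: oblique component
    return m, j0, j45

# Example: -3.00 DS / -2.00 DC x 180 gives M = -4.00 D, J0 = +1.00 D, J45 = 0 D
m, j0, j45 = power_vector(-3.00, -2.00, 180)
```

Because J0 and J45 are orthogonal components, changes in astigmatism between visits can be compared with ordinary vector arithmetic, which is what makes this representation convenient for the group comparisons above.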
15. Comparison of different combinations of intraocular lens implantation in cataract patients with unilateral astigmatism
Institute of Scientific and Technical Information of China (English)
田芳; 张红; 胡尊霞
2011-01-01
Objective To assess binocular visual function after combined implantation of toric and multifocal or monofocal intraocular lenses in unilateral astigmatism cataract patients. Methods This was a prospective case-control study. A total of thirty unilateral astigmatism patients undergoing phacoemulsification were recruited. An AcrySof Toric IOL was implanted in the astigmatic eye of each patient, with a ReSTOR (15 eyes) or AcrySof IQ (15 eyes) IOL in the contralateral eye. Six months postoperatively, patients were assessed for visual acuity (5.0 m, 60.0 cm, 40.0 cm), contrast sensitivity, amplitude of accommodation, and stereoacuity. Patients were surveyed for visual disturbances and lifestyle visual quality. Data were analyzed with a paired t test, an independent-samples t test, or a chi-square test. Results At 6 months postoperatively, for Toric-ReSTOR patients, uncorrected binocular logMAR visual acuity at 5.0 m, 60.0 cm and 40.0 cm was 0.05±0.05, 0.24±0.10, and 0.14±0.06, respectively. For Toric-AcrySof IQ patients, uncorrected binocular logMAR visual acuity was 0.06±0.07, 0.26±0.08, and 0.37±0.10, respectively. These values did not differ significantly between the two groups except for near visual acuity (t=5.476, P=0.000). The contrast sensitivity of ReSTOR eyes was lower at 18 cpd under photopic and photopic-glare conditions than that of AcrySof IQ eyes (0.30±0.37 versus 0.94±0.58, t=3.476, P=0.001; 0.34±0.44 versus 0.88±0.52, t=2.975, P=0.006), and lower at 12 cpd under scotopic and scotopic-glare conditions (0.05±0.22 versus 0.50±0.61, t=3.057, P=0.005; 0.05±0.22 versus 0.59±0.75, t=3.154, P=0.004). The accommodation amplitude curve of Toric-ReSTOR patients had two peaks (at 0 and -2.5 D), but only one (at 0 D) in Toric-AcrySof IQ patients. The stereopsis of Toric-ReSTOR eyes decreased slightly (53% versus 73%, χ2=1.262, P=0.263). Patient satisfaction for mean near vision was significantly different: 80
16. Visual Performance in Moderate to Severe Astigmatism: Rigid Gas-permeable Contact Lenses versus Spectacles
Institute of Scientific and Technical Information of China (English)
王雪; 马薇; 杨必; 刘陇黔
2012-01-01
Objective To explore whether spectacles or rigid gas-permeable (RGP) contact lenses provide better visual performance for moderate to severe astigmatism. Methods Between June 2008 and May 2011, 20 individuals (40 eyes) were fitted with both RGP lenses and spectacles. They first underwent corneal topography and refractometry, were then fitted with RGP trial lenses, and lastly were fitted with RGP lenses. At regular follow-up visits, the corrected visual acuity, wearing condition and eye health were evaluated. For each type of lens, contrast visual acuity was evaluated, and each subject was asked to select the lens type of choice and to rate quality of vision in day-to-day activities through a questionnaire. Results The corrected visual acuity with RGP lenses was better than that with spectacles, but there was no difference in contrast visual acuity at any spatial frequency. Subjectively, there was no difference in vision, but most of the subjects preferred the computer and reading visual acuity corrected by spectacles. About 40% of patients chose RGP contact lenses as the main correction method, 45% preferred using RGP lenses and spectacles alternately, and about 10% wore RGP lenses only when necessary. Two patients dropped out. Conclusions Both RGP lenses and spectacles lead to good results in correcting moderate to severe astigmatism. Though spectacles scored higher for near vision, because of the better visual performance and appearance offered by RGP contact lenses, a majority of patients can persist in wearing them.
17. An Astigmatic Detection System for Polymeric Cantilever-based Sensors
DEFF Research Database (Denmark)
Hwu, En-Te; Liao, Hsien-Shun; Bosco, Filippo;
2012-01-01
The astigmatic detection system (ADS) enables fluctuation measurements on cantilever beams with a subnanometer resolution. Furthermore, an external excitation can intensify the resonance amplitude, enhancing the signal-to-noise ratio. The full width at half maximum (FWHM) of the laser spot is 568 nm, which facilitates read-out on potentially submicrometer-sized cantilevers. The resonant frequency of SU-8 microcantilevers is measured by both thermal fluctuation and excited vibration measurement modes of the ADS.
18. Clinical effect of orthokeratology for juveniles with myopic astigmatism and its effects on corneal endothelial cells
Institute of Scientific and Technical Information of China (English)
周籽秀; 徐珊珊; 易省平
2016-01-01
AIM: To investigate the clinical effect of orthokeratology for 400 juveniles with myopic astigmatism and its effects on corneal endothelial cells. METHODS: Four hundred patients (800 eyes), of whom the average age was 11.5±2.3 years, 239 male and 161 female, were divided into two groups: an orthokeratology group and a spectacles group. Parameters including efficacy data (uncorrected visual acuity, corneal curvature, axial length and diopter) and corneal endothelial cell data (endothelial cell count, endothelial cell density, fluorescein staining and central corneal thickness) were observed at 1 day and at 1, 6, 12 and 24 months after wearing. RESULTS: The visual acuity of the spectacles group recovered to normal after wearing; that of the orthokeratology group recovered to normal at 1 month after wearing. At 2 years after wearing, the corneal curvature and diopter of the orthokeratology group had decreased significantly (by 40.09±0.31 D and 0.23±0.06 D respectively), while those of the spectacles group had increased; the differences between the two groups were significant (P<0.05) compared to those before wearing. At 2 years after wearing, the axial lengths of the two groups were 23.96±0.38 mm and 26.49±0.88 mm respectively (P<0.05). The endothelial cell count and endothelial cell density both decreased after wearing, without significant differences (P>0.05). CONCLUSION: Orthokeratology has little effect on the corneal endothelial cells, causes no obvious adverse reactions and can control the progression of myopia.
19. Comparative clinical study of wavefront-guided laser in situ keratomileusis with versus without iris recognition for myopia or myopic astigmatism
Institute of Scientific and Technical Information of China (English)
王卫群; 张金嵩; 赵晓金
2011-01-01
Objective To explore the postoperative visual acuity results of wavefront-guided LASIK with iris recognition for myopia or myopic astigmatism and the changes in higher-order aberrations and contrast sensitivity function (CSF). Methods In a series of prospective case studies, 158 eyes (85 cases) with myopia or myopic astigmatism were divided into two groups: one group underwent wavefront-guided LASIK with iris recognition (iris recognition group); the other underwent wavefront-guided LASIK without iris recognition, aligned through limbal mating points (non-iris recognition group). The postoperative visual acuity, residual refraction, root-mean-square (RMS) values of higher-order aberrations and CSF of the two groups were comparatively analyzed. Results There was no statistically significant difference between the two groups in average uncorrected visual acuity (t=0.039, 0.058, 0.898; P=0.844, 0.810, 0.343), best corrected visual acuity (t=0.320, 0.440, 1.515; P=0.572, 0.507, 0.218), or residual refraction [spherical equivalent (t=0.027, 0.215, 0.238; P=0.869, 0.643, 0.626), sphere (t=0.145, 0.117, 0.038; P=0.704, 0.732, 0.845) and cylinder (t=1.676, 1.936, 0.334; P=0.195, 0.164, 0.563)] at 10 days, 1 month and 3 months postoperatively. At 3 months the safety index was 1.06 in the iris recognition group and 1.03 in the non-iris recognition group; the efficacy index was 1.01 and 1.00, respectively. At 3 months, 93.83% of eyes in the iris recognition group and 90.91% in the non-iris recognition group had a spherical equivalent within ±0.50 D (χ2=0.479, P=0.489); 98.77% and 97.40% of eyes, respectively, were within ±1.00 D (Fisher test, P=0.613). There was no significant difference between the two groups in safety, efficacy or predictability. At 1 month and 3 months postoperatively, the third-order aberration root-mean-square (RMS) values of the non-iris recognition group were higher than those of the iris
20. Quantifying Assessments of Vision Improvements for Myopes, Hypermetropes, Presbyopes, and Astigmats, in Brazil and Elsewhere.
Science.gov (United States)
Lopes, Demetriou; D. M., D.; Niemi, Paul N.; D., O.; Mc Leod, Roger D.
2007-10-01
Vision can safely, rapidly, and significantly be improved among nearsighted, farsighted, presbyopic, and astigmatic individuals, using methods developed for Mc Leod's patent-pending Naturoptics. We hope to calibrate and apply the method in South America, particularly Brazil, using metric standards accessible from ordinary vision assessment charts as used there. This precursor for extension into Hispanic-speaking areas, especially Chile, Guatemala, Honduras, Mexico, Paraguay, Peru, and Puerto Rico, is to establish property-rights-protected licensed teaching agreements. Initially, visually impaired potential students of to-be-established not-for-profit Naturopathic medical, surgical, dental, law, science, and arts schools, perhaps named Metocantins or Metaquaratinga University, if in Brazil, will learn to correct their vision; training and licensing them can provide earnings for the self-funding of all associated activities and expenses. We will publish established results that refute claims relating to vision. Mc Leod's spatial Fourier transform model for retinal focal-surface electric field amplitude vision explains all phenomena and Land's two-wavelength interval color vision results.
1. CLINICAL ANALYSIS OF EXCIMER LASER PHOTOREFRACTIVE KERATECTOMY FOR TREATMENT OF MYOPIA AND MYOPIC ASTIGMATISM
Institute of Scientific and Technical Information of China (English)
张华; 郭绒霞; 孙乃学; 王峰; 张道过; 王彤; 孙健; 杨振国
1999-01-01
In 1983, Trokel et al [1] first reported the laboratory research about the excimer laser. Clinical use of excimer laser photorefractive keratecto...
2. Visual outcomes of conductive keratoplasty to treat hyperopia and astigmatism after laser in situ keratomileusis and photorefractive keratectomy
Directory of Open Access Journals (Sweden)
Alireza Habibollahi
2011-01-01
Conclusions: CK is a predictable and reliable method to correct hyperopia after LASIK and PRK; however, cylinder correction may induce irregular and unpredictable outcomes, and a modified nomogram is required for further studies.
3. Three-dimensional super-resolution imaging of the midplane protein FtsZ in live Caulobacter crescentus cells using astigmatism.
Science.gov (United States)
Biteen, Julie S; Goley, Erin D; Shapiro, Lucy; Moerner, W E
2012-03-01
Single-molecule super-resolution imaging provides a non-invasive method for nanometer-scale imaging and is ideally suited to investigations of quasi-static structures within live cells. Here, we extend the ability to image subcellular features within bacteria cells to three dimensions based on the introduction of a cylindrical lens in the imaging pathway. We investigate the midplane protein FtsZ in Caulobacter crescentus with super-resolution imaging based on fluorescent-protein photoswitching and the natural polymerization/depolymerization dynamics of FtsZ associated with the Z-ring. We quantify these dynamics and determine the FtsZ depolymerization time to be divisional stage.
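The cylindrical-lens trick described in this record encodes the axial (z) position of an emitter in the ellipticity of its image: the PSF is elongated along one axis below focus and along the other above it. A toy sketch of the z look-up step against a calibration curve (the defocus model and its parameters below are invented for illustration, not measured data):

```python
import numpy as np

# Illustrative calibration of an astigmatic imaging path: the x and y
# focal planes are offset, so the PSF widths wx(z) and wy(z) cross at focus.
z_cal = np.linspace(-400, 400, 801)                  # axial positions, nm
w0, d = 150.0, 400.0                                 # focal width (nm), focal depth (nm)
wx_cal = w0 * np.sqrt(1 + ((z_cal - 100) / d) ** 2)  # x focus shifted to +100 nm
wy_cal = w0 * np.sqrt(1 + ((z_cal + 100) / d) ** 2)  # y focus shifted to -100 nm

def z_from_widths(wx: float, wy: float) -> float:
    """Axial position whose calibrated (wx, wy) pair best matches the
    measured PSF widths, by least squares over the calibration curve."""
    err = (wx_cal - wx) ** 2 + (wy_cal - wy) ** 2
    return float(z_cal[np.argmin(err)])

# A molecule whose fitted widths match the calibration at z = 200 nm
z_est = z_from_widths(wx_cal[600], wy_cal[600])
```

In a real analysis the widths wx and wy come from fitting an elliptical Gaussian to each single-molecule image, and the calibration curves are measured by stepping a bright bead through focus.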
4. Astigmatic mites from nests of birds of prey in the U.S.A. : IV. Description of the life-cycle of Acotyledon paradoxa Oudemans, 1903
NARCIS (Netherlands)
Fain, A.; Philips, J.R.
1978-01-01
INTRODUCTION In this paper we describe for the first time the life-cycle of Acotyledon paradoxa Oudemans, 1903. This species was known, so far, only from the hypopial stage and a protonymph. The discovery of the adults allows us to define precisely the systematic position of the genus Acotyledon and to throw
5. Laser resonators with several mirrors and lenses, with the bow-tie laser resonator with compensation for astigmatism and thermal lens effects as an example
DEFF Research Database (Denmark)
Abitan, Haim; Skettrup, Torben
2005-01-01
Laser resonators with several mirrors (lenses) have been investigated in a systematic fashion. They have been grouped into classes according to their number n of mirrors/lenses. Stability polynomials, beam waist radii and locations have been obtained for each group up to n = 4. The bow-tie laser...
6. Phakic posterior chamber intraocular lens combined with astigmatic keratotomy for high myopia with astigmatism
Institute of Scientific and Technical Information of China (English)
朱双倩; 俞阿勇; 薛安全; 王树林; 包芳军; 王勤美
2009-01-01
Objective To study the safety, efficacy and predictability of implantation of a phakic posterior chamber intraocular lens (ICL) combined with astigmatic keratotomy (AK) for high myopia with astigmatism. Methods Phakic ICL implantation was performed in 21 patients (34 eyes) with high myopia; the 20 eyes with astigmatism ≥1.5 D also underwent AK. Uncorrected visual acuity, best corrected visual acuity, refraction, intraocular pressure and endothelial cell density were observed preoperatively and at 1, 3 and 6 months postoperatively. Results ICLs were implanted successfully in all patients. The spherical equivalent was (-13.5±2.2) D preoperatively and (-0.30±0.84), (-0.28±0.86) and (-0.28±0.84) D postoperatively; postoperative uncorrected visual acuity was 4.80±0.16, 4.81±0.17 and 4.82±0.17. Best corrected visual acuity was 4.74±0.26 preoperatively and 4.86±0.17, 4.87±0.17 and 4.9±0.18 postoperatively, a statistically significant improvement (P<0.01). Endothelial cell density was (2871±256) cells/mm2 preoperatively and (2773±267) cells/mm2 postoperatively (P<0.01). In the eyes that underwent AK, astigmatism was (-2.31±0.64) D preoperatively and (-1.22±0.57), (-1.02±0.40) and (-0.93±0.39) D postoperatively (P<0.01). Conclusion Phakic ICL implantation combined with AK for high myopia with astigmatism has good safety, efficacy and predictability; long-term results await further observation.
7. Clinical study on the effect of a new type of intraocular lens unfolder on postoperative astigmatism
Institute of Scientific and Technical Information of China (English)
汤欣; 杨瑞波; 孙慧敏
2004-01-01
Objective To evaluate the effect of a new 2.6 mm intraocular lens unfolder (the Emerald T unfolder) on corneal astigmatism after phacoemulsification with intraocular lens implantation. Methods 160 patients (160 eyes) with senile cataract were randomly divided into two groups: group A, the 2.6 mm Emerald T unfolder group, and group B, the 3.2 mm Sapphire unfolder group. All patients underwent clear-corneal phacoemulsification with intraocular lens implantation, and corneal astigmatism was compared between the two groups by corneal topography at 1 week, 1 month and 3 months postoperatively. Results The mean surgically induced corneal astigmatism (D) at 1 week, 1 month and 3 months was 0.36±0.20, 0.33±0.23 and 0.31±0.22 in group A and 0.78±0.61, 0.69±0.58 and 0.58±0.44 in group B; the differences between the groups were statistically significant (P<0.05). At 1 week and 1 month, uncorrected visual acuity was ≥0.5 in 74 eyes (92.50%) and 77 eyes (96.25%) of group A versus 62 eyes (77.50%) and 70 eyes (87.50%) of group B (P<0.05); at 3 months it was ≥0.5 in 77 eyes (96.25%) and 73 eyes (91.25%) respectively (P>0.05). Conclusion The 2.6 mm Emerald T unfolder requires a smaller incision and gives less postoperative astigmatism and faster, stable visual recovery; it is at present a fairly ideal small-incision intraocular lens delivery device.
8. The experimental investigation of orbital angular momentum of complex astigmatic elliptical beams
Institute of Scientific and Technical Information of China (English)
董一鸣; 徐云飞; 张璋; 林强
2006-01-01
A method for directly measuring the orbital angular momentum of a light beam is proposed: a beam carrying orbital angular momentum illuminates a metal target, which rotates under the action of the beam's angular momentum. The angular momentum of the beam is calculated from the measured rotation angle of the target. The orbital angular momentum of beams with several different parameters was measured, yielding the relation between orbital angular momentum and beam parameters; the experimental results agree well with theoretical analysis.
9. Long-Term Comparison of Posterior Chamber Phakic Intraocular Lens With and Without a Central Hole (Hole ICL and Conventional ICL) Implantation for Moderate to High Myopia and Myopic Astigmatism
Science.gov (United States)
Shimizu, Kimiya; Kamiya, Kazutaka; Igarashi, Akihito; Kobashi, Hidenaga
2016-01-01
The study shows a promising next-generation surgical option for the correction of moderate to high ametropia. The Hole implantable collamer lens (ICL), STAAR Surgical, is a posterior chamber phakic intraocular lens with a central artificial hole. However, no long-term comparison of the clinical results of the implantation of ICLs with and without such a hole has hitherto been conducted. A prospective, randomized, controlled trial was carried out in order to compare the long-term clinical outcomes of the implantation, in such eyes, of ICLs with and without a central artificial hole. Examinations were conducted of the 64 eyes of 32 consecutive patients with spherical equivalents of −7.53 ± 2.39 diopters (D) (mean ± standard deviation) in whom implantation of a Hole ICL was performed in 1 eye, and that of a conventional ICL was carried out in the other, by randomized assignment. Before surgery, and at 1, 3, and 6 months and 1, 3, and 5 years after surgery, the safety, efficacy, predictability, stability, intraocular pressure, endothelial cell density, and adverse events of the 2 surgical techniques were assessed and compared over time. The measurements of LogMAR uncorrected and corrected distance visual acuity 5 years postoperatively were −0.17 ± 0.14 and −0.24 ± 0.08 in the Hole ICL group, and −0.16 ± 0.10 and −0.25 ± 0.08 in the conventional ICL group. In these 2 groups, 96% and 100% of eyes, respectively, were within 1.0 D of the targeted correction 5 years postoperatively. Manifest refraction changes of −0.17 ± 0.41 D and −0.10 ± 0.26 D occurred from 1 month to 5 years in the Hole and conventional ICL groups, respectively. Only 1 eye (3.1%), which was in the conventional ICL group, developed an asymptomatic anterior subcapsular cataract. Both Hole and conventional ICLs corrected ametropia successfully throughout the 5-year observation period.
It appears likely that the presence of the central hole does not significantly affect these visual and refractive outcomes. Trial Registration: UMIN000018771. PMID:27057883
10. Artisan toric phakic intraocular lens for the correction of astigmatism
Institute of Scientific and Technical Information of China (English)
涂云海; 俞阿勇; 高潮
2007-01-01
The correction of astigmatism has long been a focus of refractive surgery. The Artisan toric phakic intraocular lens (TPIOL) corrects astigmatism over a wide range, safely and effectively, and has attracted increasing clinical attention. This article reviews the physical characteristics of the Artisan TPIOL, patient selection, preoperative preparation, and surgical technique, as well as the evaluation of its postoperative safety, efficacy, accuracy, stability, and minimal tissue damage.
11. Corneal relaxing incision combined with phacoemulsification and IOL implantation
Institute of Scientific and Technical Information of China (English)
沈晔; 童剑萍; 李毓敏
2004-01-01
Objective: To analyze the effectiveness and safety of corneal relaxing incisions (CRI) in correcting keratometric astigmatism during cataract surgery. Methods: A prospective study of two groups: control group and treatment group. A treatment group included 25 eyes of 25 patients who had combined clear corneal phacoemulsification, IOL implantation and CRI. A control group included 25 eyes of 25 patients who had clear corneal phacoemulsification and IOL implantation.Postoperative keratometric astigmatism was measured at 1 week, 1 month, 3 months and 6 months. Results: CRI significantly decreased keratometric astigmatism in patients with preexisting astigmatism compared with astigmatic changes in the control group. In eyes with CRI, the mean keratometric astigmatism was 0.29±0.17 D (range 0 to 0.5 D) at 1 week, 0.41±0.21 D (range 0 to 0.82 D) at 1 month, respectively reduced by 2.42 D and 2.30 D at 1 week and 1 month postoperatively (P=0.000, P=0.000), and postoperative astigmatism was stable until 6 months follow-up. The keratometric astigmatism of all patients decreased to less than 1.00 D postoperatively. Conclusions: CRI is a practical, simple, safe and effective method to reduce preexisting astigmatism during cataract surgery. A modified nomogram is proposed. The long-term effect of CRI should be investigated.
13. Small-incision lenticule extraction (SMILE)
DEFF Research Database (Denmark)
Hansen, Rasmus Søgaard; Lyhne, Niels; Grauslund, Jakob;
2016-01-01
PURPOSE: To study the outcomes of small-incision lenticule extraction (SMILE) for treatment of myopia and myopic astigmatism. METHODS: Retrospective study of patients treated for myopia or myopic astigmatism with SMILE, using a VisuMax(®) femtosecond laser (Carl Zeiss Meditec, Jena, Germany......), at the Department of Ophthalmology, Odense University Hospital, Odense, Denmark. Inclusion criteria were corrected distance visual acuity (CDVA) of 20/25 or better before surgery and no ocular conditions other than myopia up to -10.00 diopters (D) with astigmatism up to 3.00 D. RESULTS: Of the 729 treatments, 722...... predictable, efficient, and safe for treatment of myopia and myopic astigmatism....
14. Clinical observation of implantation of posterior chamber intraocular lens/astigmatism posterior chamber intraocular lens for patients with high myopia
Institute of Scientific and Technical Information of China (English)
杨吟; 刘治容; 陈斌; 陈波; 杨萍; 吴峥峥
2014-01-01
AIM: To investigate the efficacy and safety of implantable contact lens (ICL)/toric implantable contact lens (TICL) implantation for the correction of high myopia. METHODS: 77 patients (153 eyes) who underwent ICL/TICL implantation at our hospital were examined by visual acuity testing, slit lamp microscopy, funduscopy, and intraocular pressure measurement at 1 day, 1 week, 1 month, 3 months, and 6 months postoperatively, with objective retinoscopy added at 1 month. Preoperative best corrected visual acuity was compared with postoperative uncorrected visual acuity. RESULTS: The ICL/TICL was implanted successfully in all 153 eyes, and 144 eyes (94.12%) showed no obvious postoperative complications. At 1 week, the uncorrected visual acuity of the 153 eyes was 0.83±0.15, with no statistically significant difference from the preoperative corrected visual acuity (P>0.05). CONCLUSION: ICL/TICL implantation is an effective and safe treatment for high and extremely high myopia at present.
15. Comparison of the curative effect between IR-LASIK and LASIK in high astigmatism treatment
Institute of Scientific and Technical Information of China (English)
李耀宇; 翟国光; 邱岩; 邸玉兰; 屈哲; 黄耀辉
2008-01-01
AIM: To compare the clinical outcomes of iris-registration wavefront-guided LASIK (IR-LASIK) and conventional LASIK in patients with high astigmatism. METHODS: Patients with astigmatism ≥2.0 DC underwent either IR-LASIK (204 patients, 338 eyes) or conventional LASIK (180 patients, 335 eyes) with the VISX S4-IR excimer laser. Postoperative uncorrected visual acuity and residual astigmatism were compared, and the angle of eye rotation was measured in the IR group. RESULTS: At 2 days postoperatively, uncorrected visual acuity in the IR-LASIK group (≥1.0 in 89.1%) was significantly better than in the conventional LASIK group (≥1.0 in 83.6%, P<0.05), and residual astigmatism was also significantly lower (0.56 DC vs 1.15 DC, P<0.01). In the IR-LASIK group, the astigmatism magnitude and axis measured by wavefront aberrometry differed somewhat from the manifest refraction, and eye rotation occurred in all eyes during surgery; this may be an important reason for the better outcomes after IR surgery. CONCLUSION: IR-LASIK significantly improves postoperative uncorrected visual acuity and reduces residual astigmatism in patients with high astigmatism.
16. Comparison of designs of off-axis Gregorian telescopes for millimeter-wave large focal-plane arrays.
Science.gov (United States)
Hanany, Shaul; Marrone, Daniel P
2002-08-01
We compare the diffraction-limited field of view (FOV) provided by four types of off-axis Gregorian telescopes: the classical Gregorian, the aplanatic Gregorian, and the designs that cancel astigmatism and both astigmatism and coma. The analysis is carried out with telescope parameters that are appropriate for satellite and balloonborne millimeter- and submillimeter-wave astrophysics. We find that the design that cancels both coma and astigmatism provides the largest flat FOV, approximately 21 square deg. We also find that the FOV can be increased by approximately 15% by means of optimizing the shape and location of the focal surface.
17. A comparison of designs of off-axis Gregorian telescopes for mm-wave large focal plane arrays
CERN Document Server
Hanany, Shaul; Marrone, Daniel P.
2002-01-01
We compare the diffraction-limited field of view (FOV) provided by four types of off-axis Gregorian telescopes: the classical Gregorian, the aplanatic Gregorian, and designs that cancel astigmatism and both astigmatism and coma. The analysis is carried out using telescope parameters that are appropriate for satellite and balloon-borne millimeter and sub-millimeter wave astrophysics. We find that the design that cancels both coma and astigmatism provides the largest flat FOV, about 21 square degrees. We also find that the FOV can be increased by about 15% by optimizing the shape and location of the focal surface.
18. Comparison of designs of off-axis Gregorian telescopes for millimeter-wave large focal-plane arrays.
Science.gov (United States)
Hanany, Shaul; Marrone, Daniel P
2002-08-01
We compare the diffraction-limited field of view (FOV) provided by four types of off-axis Gregorian telescopes: the classical Gregorian, the aplanatic Gregorian, and the designs that cancel astigmatism and both astigmatism and coma. The analysis is carried out with telescope parameters that are appropriate for satellite and balloonborne millimeter- and submillimeter-wave astrophysics. We find that the design that cancels both coma and astigmatism provides the largest flat FOV, approximately 21 square deg. We also find that the FOV can be increased by approximately 15% by means of optimizing the shape and location of the focal surface. PMID:12153101
19. Eyelid Problems
Science.gov (United States)
... irregular shape (astigmatism), it will threaten normal vision development and must be corrected as early as possible. ...
20. Light with a twist : ray aspects in singular wave and quantum optics
NARCIS (Netherlands)
Habraken, Steven Johannes Martinus
2010-01-01
Light may have a very rich spatial and spectral structure. We theoretically study the structure and physical properties of coherent optical modes and quantum states of light, focusing on optical vortices, general astigmatism, orbital angular momentum and rotating light.
1. Contact Lens Visual Rehabilitation in Keratoconus and Corneal Keratoplasty
Directory of Open Access Journals (Sweden)
Yelda Ozkurt
2012-01-01
Keratoconus is the most common corneal dystrophy. It’s a noninflammatory progressive thinning process that leads to conical ectasia of the cornea, causing high myopia and astigmatism. Treatment choices include spectacle correction and contact lens wear, collagen cross linking, intracorneal ring segment implantation and finally keratoplasty. Contact lenses are commonly used to reduce astigmatism and increase vision. Various types of lenses are available. We reviewed soft contact lenses, rigid gas permeable contact lenses, piggyback contact lenses, hybrid contact lenses and scleral-semiscleral contact lenses in keratoconus management. The surgical option is keratoplasty, but even after suture removal, high astigmatism may still exist. Therefore, contact lenses are an adequate treatment option to correct astigmatism after keratoplasty.
2. Modified Scleral Flap Incision to Reduce Corneal Astigmatism after Intraocular Lens Implantation
Institute of Scientific and Technical Information of China (English)
Yizhi Liu; Shaozhen Li
1995-01-01
Purpose: To investigate a simple method used during extracapsular cataract extraction with posterior chamber intraocular lens implantation in order to reduce surgically induced corneal astigmatism. Methods: A modified scleral flap incision was used in extracapsular cataract extraction with intraocular lens implantation and the postoperative changes in corneal astigmatism were observed. Results: The peak value of postoperative corneal astigmatism was 3.60 D, and the corneal astigmatism regression was 2.11 D; surgically induced astigmatism was less significant in the modified scleral flap incision group than in the conventional limbal incision group (P<0.05). Conclusions: The modified scleral flap incision is an ideal incision for cataract extraction with intraocular lens implantation when a phacoemulsifier is not available. Eye Science 1995;11:136-139.
3. Functional outcomes and patient satisfaction after laser in situ keratomileusis for correction of myopia.
NARCIS (Netherlands)
Tahzib, N.G.; Bootsma, S.J.; Eggink, F.A.G.J.; Nabar, V.A.; Nuijts, R.M.M.A.
2005-01-01
PURPOSE: To determine subjective patient satisfaction and self-perceived quality of vision after laser in situ keratomileusis (LASIK) to correct myopia and myopic astigmatism. SETTING: Department of Ophthalmology, Academic Hospital Maastricht, Maastricht, The Netherlands. METHODS: A validated questi
4. Contact lens visual rehabilitation in keratoconus and corneal keratoplasty.
Science.gov (United States)
Ozkurt, Yelda; Atakan, Mehmet; Gencaga, Tugba; Akkaya, Sezen
2012-01-01
Keratoconus is the most common corneal dystrophy. It's a noninflammatory progressive thinning process that leads to conical ectasia of the cornea, causing high myopia and astigmatism. Treatment choices include spectacle correction and contact lens wear, collagen cross linking, intracorneal ring segment implantation and finally keratoplasty. Contact lenses are commonly used to reduce astigmatism and increase vision. Various types of lenses are available. We reviewed soft contact lenses, rigid gas permeable contact lenses, piggyback contact lenses, hybrid contact lenses and scleral-semiscleral contact lenses in keratoconus management. The surgical option is keratoplasty, but even after suture removal, high astigmatism may still exist. Therefore, contact lenses are an adequate treatment option to correct astigmatism after keratoplasty. PMID:22292112
5. Line focusing for soft x-ray laser-plasma lasing
OpenAIRE
Bleiner, Davide; Balmer, Jürg; Staub, Felix
2011-01-01
A computational study of line-focus generation was done using a self-written ray-tracing code and compared to experimental data. Two line-focusing geometries were compared, i.e., either exploiting the sagittal astigmatism of a tilted spherical mirror or using the spherical aberration of an off-axis- illuminated spherical mirror. Line focusing by means of astigmatism or spherical aberration showed identical results as expected for the equivalence of the two frames of reference. The variation o...
6. Contact Lens Visual Rehabilitation in Keratoconus and Corneal Keratoplasty
OpenAIRE
Yelda Ozkurt; Mehmet Atakan; Tugba Gencaga; Sezen Akkaya
2012-01-01
Keratoconus is the most common corneal dystrophy. It's a noninflammatory progressive thinning process that leads to conical ectasia of the cornea, causing high myopia and astigmatism. Treatment choices include spectacle correction and contact lens wear, collagen cross linking, intracorneal ring segment implantation and finally keratoplasty. Contact lenses are commonly used to reduce astigmatism and increase vision. Various types of lenses are available. We reviewed soft contac...
7. Prospective study of toric IOL outcomes based on the Lenstar LS 900® dual zone automated keratometer
Directory of Open Access Journals (Sweden)
Gundersen Kjell
2012-07-01
Background: To establish clinical expectations when using the Lenstar LS 900® dual-zone automated keratometer for surgery planning of toric intraocular lenses. Methods: Fifty eyes were measured with the Lenstar LS 900® dual-zone automated keratometer. Surgical planning was performed with the data from this device and the known surgically induced astigmatism of the surgeon. Post-operative refractions and visual acuity were measured at 1 month and 3 months. Results: Clinical outcomes from 43 uncomplicated surgeries showed an average post-operative refractive astigmatism of 0.44 D ±0.25 D. Over 70% of eyes had 0.50 D or less of refractive astigmatism and no eye had more than 1.0 D of refractive astigmatism. Uncorrected visual acuity was 20/32 or better in all eyes at 3 months, with 70% of eyes 20/20 or better. A significantly higher number of eyes had 0.75 D or more of post-operative refractive astigmatism when the standard deviation of the pre-operative calculated corneal astigmatism angle, reported by the keratometer, was >5 degrees. Conclusions: In this single-site study investigating the use of keratometry from the Lenstar LS 900® for toric IOL surgical planning, clinical outcomes appear equivalent to those reported in the literature for manual keratometry and somewhat better than has been reported for some previous automated instruments. A high standard deviation in the pre-operative calculated astigmatism angle, as reported by the keratometer, appears to increase the likelihood of higher post-operative refractive astigmatism.
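Record 7 above folds the surgeon's known surgically induced astigmatism (SIA) into the toric IOL plan. The abstract does not give the arithmetic; a common convention (assumed here, not taken from the study) is to add cylinders as double-angle vectors, since astigmatism at axis θ behaves like a vector at angle 2θ. A minimal Python sketch:

```python
import math

def to_double_angle(mag, axis_deg):
    """Convert a cylinder (magnitude in D, axis in degrees) to a
    double-angle vector (x, y) so that astigmatisms add linearly."""
    t = math.radians(2 * axis_deg)
    return (mag * math.cos(t), mag * math.sin(t))

def from_double_angle(x, y):
    """Back-convert a double-angle vector to (magnitude, axis in [0, 180))."""
    mag = math.hypot(x, y)
    axis = math.degrees(math.atan2(y, x)) / 2 % 180
    return mag, axis

def predicted_residual(corneal, sia):
    """Predicted residual astigmatism: vector sum of the pre-op corneal
    astigmatism and the surgically induced astigmatism, both (mag, axis)."""
    cx, cy = to_double_angle(*corneal)
    sx, sy = to_double_angle(*sia)
    return from_double_angle(cx + sx, cy + sy)

# Two equal cylinders with axes 90 degrees apart cancel exactly:
mag, _ = predicted_residual((1.0, 0.0), (1.0, 90.0))
```

In double-angle space an incision's flattening effect simply subtracts from the corneal cylinder along its meridian, which is why planners can work with the surgeon-specific SIA as a single vector.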
8. Factors Influencing Efficacy of Peripheral Corneal Relaxing Incisions during Cataract Surgery
Directory of Open Access Journals (Sweden)
Nino Hirnschall
2015-01-01
Purpose. To evaluate influencing factors on the residual astigmatism after performing peripheral corneal relaxing incisions (PCRIs) during cataract surgery. Methods. This prospective study included patients who were scheduled for cataract surgery with PCRIs. Optical biometry (IOLMaster 500, Carl Zeiss Meditec AG, Germany) was taken preoperatively and at 1 week, 4 months, and 1 year postoperatively. Additionally, corneal topography (Atlas model 9000, Carl Zeiss Meditec AG, Germany), ORA (Ocular Response Analyzer, Reichert Ophthalmic Instruments, USA), and autorefraction (Autorefractometer RM 8800, Topcon) were performed postoperatively. Results. Mean age of the study population (n=74) was 73.5 years (±9.3; range: 53 to 90) and mean corneal astigmatism preoperatively was −1.82 D (±0.59; 1.00 to 4.50). Mean corneal astigmatism was reduced to 1.14 D (±0.67; 0.11 to 3.89) 4 months postoperatively. A partial least squares regression showed that a high eccentricity of the cornea, a large deviation between keratometry and topography, and a high preoperative astigmatism resulted in a larger postoperative error concerning astigmatism. Conclusions. PCRIs reduce preoperative astigmatism; the prediction is difficult, however, and several factors were found to be relevant sources of error.
9. Keratometry device for surgical support
Directory of Open Access Journals (Sweden)
Saia Paula
2009-12-01
Background: High astigmatisms are usually induced during corneal suturing subsequent to tissue transplantation or any other surgery which involves corneal suturing. One of the reasons is that the procedure is intimately dependent on the surgeon's skill in suturing identical stitches. In order to evaluate the influence of suturing irregularity on the residual astigmatism, a prototype for ophthalmic surgical support has been developed. The final intention of this prototype is to be an evaluation tool for guided suturing and, as an outcome, to diminish the postoperative astigmatism. Methods: The system consists of a hand-held ring with 36 infrared LEDs that is projected onto the lachrymal film of the cornea. The image is reflected back through the optics of the ocular microscope and its distortion from the original circular shape is evaluated by the developed software. It provides keratometric and circularity measurements during surgery in order to guide the surgeon toward uniformity in suturing. Results: The system is able to measure up to 23 D of astigmatism (32 D - 55 D range) and is ±0.25 D accurate. It has been tested intraoperatively in 14 volunteer patients and has been compared to a commercial keratometer (Nidek Oculus hand-held corneal topographer). The correlation factors are 0.92 for the astigmatism and 0.97 for the associated axis. Conclusion: The system is potentially efficient for guiding the surgeon toward uniformity of suturing, with preliminary data indicating an important decrease in the residual astigmatism after the first 24 hours post-surgery and in the subsequent weeks, from an average of 8 D for patients not guided by the prototype to 1.4 D for patients guided by the prototype. It also indicates that the surgeon should achieve circularity greater than or equal to 98% in order to avoid postoperative astigmatisms over 1 D.
Trial registration number: CAAE - 0212.0.004.000-09.
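Record 9 evaluates the distortion of a reflected 36-LED ring to obtain keratometric and circularity readings. The paper's actual algorithm is not given; the sketch below shows one standard way to extract an astigmatism-like 2θ component from mire radii by least squares (the simulated radii, amplitudes, and the `circularity` definition are illustrative assumptions, not the prototype's internals):

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 36, endpoint=False)  # 36 LED positions

# Simulated reflected-ring radii: mean radius plus a 2-theta distortion
# with amplitude 0.12 and steep meridian at 20 degrees (toy numbers).
r = 3.0 + 0.12 * np.cos(2 * (theta - np.radians(20)))

# Least-squares fit of r(theta) = a + b*cos(2 theta) + c*sin(2 theta)
A = np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])
a, b, c = np.linalg.lstsq(A, r, rcond=None)[0]

amplitude = np.hypot(b, c)                     # 2-theta distortion amplitude
axis = np.degrees(np.arctan2(c, b)) / 2 % 180  # steep-meridian direction
circularity = 1 - amplitude / a                # 1.0 for a perfect circle
```

With 36 evenly spaced samples the three Fourier-like coefficients are recovered exactly, which is why a fixed LED ring is a convenient geometry for intraoperative feedback.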
10. Observation of lens aberrations for high resolution electron microscopy II: Simple expressions for optimal estimates
Energy Technology Data Exchange (ETDEWEB)
Saxton, W. Owen, E-mail: wos1@cam.ac.uk
2015-04-15
This paper lists simple closed-form expressions estimating aberration coefficients (defocus, astigmatism, three-fold astigmatism, coma / misalignment, spherical aberration) on the basis of image shift or diffractogram shape measurements as a function of injected beam tilt. Simple estimators are given for a large number of injected tilt configurations, optimal in the sense of least-squares fitting of all the measurements, and so better than most reported previously. Standard errors are given for most, allowing different approaches to be compared. Special attention is given to the measurement of the spherical aberration, for which several simple procedures are given, and the effect of foreknowledge of this on other aberration estimates is noted. Details and optimal expressions are also given for a new and simple method of analysis, requiring measurements of the diffractogram mirror axis direction only, which are simpler to make than the focus and astigmatism measurements otherwise required. - Highlights: • Optimal estimators for CTEM lens aberrations are more accurate and/or use fewer observations. • Estimators have been found for defocus, astigmatism, three-fold astigmatism, coma and spherical aberration. • Estimators have been found relying on diffractogram shape, image shift and diffractogram orientation only, for a variety of beam tilts. • The standard error for each estimator has been found.
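The closed-form estimators themselves are in the paper; purely as an illustration of the underlying idea (least-squares fitting of aberration coefficients to beam-tilt-induced image shifts), here is a hedged sketch using a simple linear model with defocus C1 and two-fold astigmatism (A1a, A1b). The model form, units, and the eight-tilt configuration are textbook-style assumptions, not the paper's optimal expressions:

```python
import numpy as np

# Assumed linear model: the image shift induced by a small beam tilt
# (tx, ty) from defocus C1 and two-fold astigmatism (A1a, A1b).
def shift(t, C1, A1a, A1b):
    tx, ty = t
    return np.array([C1 * tx + A1a * tx + A1b * ty,
                     C1 * ty + A1b * tx - A1a * ty])

true = (500.0, 80.0, -30.0)  # hypothetical coefficients (e.g. nm)
tilts = [(np.cos(k * np.pi / 4), np.sin(k * np.pi / 4)) for k in range(8)]

# Stack one pair of equations per tilt and least-squares fit the 3 unknowns
rows, rhs = [], []
for tx, ty in tilts:
    rows.append([tx, tx, ty])   # x-shift equation coefficients
    rows.append([ty, -ty, tx])  # y-shift equation coefficients
    rhs.extend(shift((tx, ty), *true))
est, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
```

In practice the measured shifts are noisy, and the paper's contribution is choosing tilt configurations and estimators that minimize the resulting standard errors; the least-squares machinery itself is as above.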
11. Effects of different types of refractive errors on bilateral amblyopia
Directory of Open Access Journals (Sweden)
Mücella Arıkan Yorgun
2012-12-01
Objectives: To identify the effects of different types of refractive errors on final visual acuity and stereopsis levels in patients with bilateral amblyopia. Materials and methods: Patients with bilateral amblyopia and anisometropia lower than 1.5 D were included. The patients were classified according to the level of spherical equivalent (0-4 D and >4 D of hypermetropia), the level of astigmatism (below and above 2 D in positive cylinder) and the type of composed refractive error [... 4 D of hypermetropia and 2 D of astigmatism (group III)]. Initial and final binocular best corrected visual acuities (BCVA) were compared between groups. Results: The initial binocular BCVA levels were significantly lower in patients with >4 D of hypermetropia (p=0.028), without correction after treatment (p=0.235). The initial binocular BCVA was not different between astigmatism groups, but final BCVA levels were significantly lower with 4-6 D of astigmatism compared with 2-4 D of astigmatism (p=0.001). In the comparison of composed refractive errors, only the initial binocular BCVA was significantly lower in group I compared to group II (p=0.015). The final binocular BCVA levels were not different between groups I and III (p>0.05). Conclusions: Although the initial BCVA is lower in patients with higher levels of hypermetropia, the response of these patients to treatment with glasses is good. The response of patients with high levels of astigmatism seems to be limited. J Clin Exp Invest 2012; 3(4): 467-471. Key words: Amblyopia, isoametropic amblyopia, hypermetropia, refractive amblyopia, visual acuity
12. In vivo and in vitro analysis of topographic changes secondary to DSAEK venting incisions
Directory of Open Access Journals (Sweden)
Church D
2011-08-01
Majid Moshirfar, Monette T Lependu, Dane Church, Marcus C Neuffer; John A Moran Eye Center, University of Utah, Salt Lake City, UT, USA. Introduction: Descemet’s stripping automated endothelial keratoplasty (DSAEK) venting incisions may induce irregular corneal astigmatism. The study examines in vivo and in vitro astigmatic effects of venting incisions. Patients and methods: In vivo analysis examined eleven eyes of eleven patients who had received DSAEK with venting incisions. A chart review of the eleven eyes, including assessment of pre- and postoperative refraction and topography, was performed. In vitro analysis examined three cadaver eyes which received topographic imaging followed by venting incisions at 4 mm, 6 mm, and 7 mm optical zones. Topographic imaging was then performed again after the incisions. Results: Postoperative topographies of the eleven eyes demonstrated localized flattening at incision sites and cloverleaf pattern astigmatism. There was a significant difference in corneal irregularity measurement (P = 0.03), but no significant difference in shape factor or change of topographic cylinder. The cloverleaf pattern was found in cadaver eyes with incisions placed at 4 mm and 6 mm optical zones but not at the 7 mm zone. Conclusion: DSAEK venting incisions can cause irregular corneal astigmatism that may affect visual outcomes. The authors recommend placement of venting incisions near the 7 mm optical zone. Keywords: DSAEK, venting incisions, endothelial keratoplasty, astigmatism, endothelium, endothelial transplant
13. Resolution of electro-holographic image
Science.gov (United States)
Son, Jung-Young; Chernyshov, Oleksii; Lee, Hyoung; Lee, Beom-Ryeol; Park, Min-Chul
2016-06-01
The resolution of the reconstructed image from a hologram displayed on a DMD is measured with light field images taken along the propagation direction of the reconstructed image. The light field images reveal that a point or line image suffers strong astigmatism, and that the focusing distance differs for lines with different directions, which is also a form of astigmatism. The focusing distance of the reconstructed image is shorter than that of the object. Two lines in the transverse direction are resolved when the gap between them is around 16 pixels of the DMD in use. However, the depth-direction resolution is difficult to estimate due to the depth of focus of each line. Due to the astigmatism, the reconstructed image of a square appears as a rectangle or a rhombus.
14. Cyclopentolate as a cycloplegic drug in determination of refractive error
Directory of Open Access Journals (Sweden)
Bolinovska Sofija
2008-01-01
Cycloplegia is loss of the power of accommodation through inhibition of the ciliary muscle. In this way we obtain the smallest refraction of the lens, making it possible to determine the presence and size of a refractive error under cycloplegia by using cyclopentolate. Cyclopentolate is a synthetic anticholinergic drug and antagonist of the muscarinic receptors. When applied in the eye, it blocks the effect of cholinergic stimulation on the sphincter pupillae muscle and ciliary muscle. It provokes severe mydriasis (dilation of the pupil) and cycloplegia (paralysis of accommodation). Cyclopentolate is used occasionally for diagnostic purposes: determining ocular refraction and in ophthalmoscopy. This is a prospective study which included 200 children (400 eyes) aged 3-18 years, carried out in one ambulatory ophthalmological examination. The results were analysed using standard statistical methods. The most frequent refractive error in the examined group of children was hyperopia with hyperopic astigmatism, followed by myopia with myopic astigmatism; mixed astigmatism was most frequent in the oldest group of children. The mean value of corneal astigmatism was 1.24 D in the right eye and 1.23 D in the left eye. Anisometropia was found in 40% of children. The presence of myopia, myopic astigmatism and mixed astigmatism tended to increase, and hyperopia and hyperopic astigmatism tended to decrease, toward the older groups of children. Refractive error can result in poor development of visual acuity, causing amblyopia and strabismus, and therefore represents an important public health problem. As one of the amblyogenic risk factors in children, it can be prevented with a screening program and appropriate treatment, thus providing prevention of amblyopia as one form of blindness.
15. Unveiling orbital angular momentum and acceleration of light beams and electron beams
Science.gov (United States)
Special beams, such as the vortex beams that carry orbital angular momentum (OAM) and the Airy beam that preserves its shape while propagating along a parabolic trajectory, have drawn significant attention recently both in light optics and in electron optics experiments. In order to utilize these beams, simple methods are needed that make it easy to quantify their defining properties, namely the OAM for the vortex beams and the nodal trajectory acceleration coefficient for the Airy beam. Here we demonstrate a straightforward method to determine these quantities by an astigmatic Fourier transform of the beam. For electron beams in a transmission electron microscope, this transformation is easily realized using the condenser and objective stigmators, whereas for light beams it can be achieved using a cylindrical lens. In the case of Laguerre-Gauss vortex beams, it is already well known that applying the astigmatic Fourier transformation converts them to Hermite-Gauss beams. The topological charge (and hence the OAM) can be determined by simply counting the number of dark stripes of the Hermite-Gauss beam. We generated a series of electron vortex beams and managed to determine the topological charge up to a value of 10. The same concept of astigmatic transformation was then used to unveil the acceleration of an electron Airy beam. The shape of the astigmatic-transformed beam depends only on the astigmatic measure and on the acceleration coefficient. This method was experimentally verified by generating electron Airy beams with different known acceleration parameters, enabling direct comparison with the values deduced from the astigmatic transformation measurements. The method can be extended to other types of waves. Specifically, we have recently used it to determine the acceleration of optical Airy beams and the topological charge of a so-called Airy-vortex light beam, i.e. an Airy light beam with an embedded vortex. This work was supported by DIP and the Israel Science
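The stripe-counting argument in record 15 rests on the fact that a vortex beam of topological charge l carries an OAM expectation of l (in units of ħ) per photon. That baseline can be checked numerically; the sketch below (grid size, envelope, and the simple (x+iy)^l mode are arbitrary choices, not the experiment's beams) evaluates ⟨Lz⟩ = ⟨ψ|−i(x∂y − y∂x)|ψ⟩ / ⟨ψ|ψ⟩ by finite differences:

```python
import numpy as np

l = 3                       # topological charge to verify
n = 256
x = np.linspace(-4, 4, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)

# Simplest vortex mode: (x + iy)^l with a Gaussian envelope;
# its phase winds l times around the axis.
psi = (X + 1j * Y) ** l * np.exp(-(X**2 + Y**2))

# Lz = -i (x d/dy - y d/dx), evaluated with central finite differences
dpsi_dy, dpsi_dx = np.gradient(psi, dx, dx)
lz_psi = -1j * (X * dpsi_dy - Y * dpsi_dx)
l_measured = (np.conj(psi) * lz_psi).sum().real / (np.abs(psi) ** 2).sum()
```

The expectation comes out at l up to discretization error, mirroring the count one would read off the dark stripes after the astigmatic transformation.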
16. Unique reflector arrangement within very wide field of view for multibeam antennas
Science.gov (United States)
Dragone, C.
1983-12-01
It is pointed out that Cassegrainian and Gregorian reflector arrangements are needed for multibeam ground station and satellite antennas. A Cassegrainian arrangement is considered, taking into account aberrations. Dragone (1982) has presented a requirement for the minimization of astigmatism. In the present investigation a formula is presented for describing the deformation coefficients needed to eliminate coma on the basis of a slight deformation of the reflectors. The importance of residual astigmatism due to a derived equation is examined, and attention is given to a compact reflector arrangement which is the result of three optimizations with respect to aberration minimization.
17. Symbolic algebra approach to the calculation of intraocular lens power following cataract surgery
Science.gov (United States)
Hjelmstad, David P.; Sayegh, Samir I.
2013-03-01
We present a symbolic approach based on matrix methods that allows for the analysis and computation of intraocular lens power following cataract surgery. We extend the basic matrix approach corresponding to paraxial optics to include astigmatism and other aberrations. The symbolic approach allows for a refined analysis of the potential sources of errors ("refractive surprises"). We demonstrate the computation of lens powers including toric lenses that correct for both defocus (myopia, hyperopia) and astigmatism. A specific implementation in Mathematica allows an elegant and powerful method for the design and analysis of these intraocular lenses.
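The matrix methods mentioned above can be illustrated with a minimal paraxial (non-astigmatic) sketch using 2x2 ray-transfer (ABCD) matrices; the full symbolic treatment with astigmatism would use larger matrices. All numbers below (43 D cornea, 20 D IOL, 4 mm effective lens position, aqueous index 1.336) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def thin_lens(power_diopters):
    # Ray-transfer (ABCD) matrix of a thin lens of the given power (D)
    return np.array([[1.0, 0.0], [-power_diopters, 1.0]])

def translation(distance_m, n=1.336):
    # Propagation over the reduced distance d/n (aqueous index ~1.336)
    return np.array([[1.0, distance_m / n], [0.0, 1.0]])

# Hypothetical example: 43 D cornea, 20 D IOL, 4 mm effective lens position
system = thin_lens(20.0) @ translation(0.004) @ thin_lens(43.0)

# The equivalent power of the system is -C, the negated lower-left element
equivalent_power = -system[1, 0]
```

The matrix product reproduces Gullstrand's two-lens equation P1 + P2 - (d/n) P1 P2, which is one way to see why a refractive surprise follows from a mis-estimated effective lens position.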
18. Wound construction in manual small incision cataract surgery
Directory of Open Access Journals (Sweden)
Haldipurkar S
2009-01-01
Full Text Available The basis of manual small incision cataract surgery is the tunnel construction for entry to the anterior chamber. The parameters important for the structural integrity of the tunnel are the self-sealing property of the tunnel, the location of the wound on the sclera with respect to the limbus, and the shape of the wound. Cataract surgery has gone beyond just being a means to get the lens out of the eye. Postoperative astigmatism plays an important role in the evaluation of final outcome of surgery. Astigmatic consideration, hence, forms an integral part of incisional considerations prior to surgery.
19. The refractive outcome of Toric Lentis Mplus implant in cataract surgery
Science.gov (United States)
Chiam, Patrick J; Quah, Say A
2016-01-01
AIM To evaluate the refractive outcome of the Toric Lentis Mplus intraocular lens (IOL) implant. METHODS This is a retrospective case series. Consecutive patients with corneal astigmatism of at least 1.5 D had a Toric Lentis Mplus IOL implanted during cataract surgery. The exclusion criteria included irregular astigmatism on corneal topography, large scotopic pupil diameter (>6 mm), poor visual potential and significant ocular comorbidity. Postoperative manifest refraction, uncorrected distance visual acuity (UDVA), best-corrected distance visual acuity (BCVA), uncorrected intermediate visual acuity (UIVA) at 3/4 m and uncorrected near visual acuity (UNVA) were obtained. RESULTS There were 70 eyes from 49 patients in this study. Patients were refracted at a median of 8.9wk (range 4.0 to 15.5) from the operation date. Sixty-five percent of eyes had postoperative UDVA of 6/7.5 (0.10 logMAR) or better, and 99% 6/12 (0.30 logMAR) or better. Eighty-nine percent could read Jaeger (J) 3 (0.28 logMAR) and 95% J5 (0.37 logMAR) at 40 cm. The median magnitude of astigmatism decreased from 1.91 D to 0.49 D (Wilcoxon test). CONCLUSION The Toric Lentis Mplus IOL has good predictability in reducing preexisting corneal astigmatism. PMID:27275424
20. Effect of iris registration on outcomes of LASIK for myopia with the VISX CustomVue platform
DEFF Research Database (Denmark)
Moshirfar, Majid; Chen, Michael C; Espandar, Ladan;
2009-01-01
PURPOSE: To compare visual outcomes after LASIK using the VISX STAR S4 CustomVue, with and without Iris Registration technology. METHODS: In this retrospective study, LASIK was performed on 239 myopic eyes, with or without astigmatism, of 142 patients. Iris registration LASIK was performed on 121...
1. Ultrasound-induced acoustophoretic motion of microparticles in three dimensions
DEFF Research Database (Denmark)
Muller, Peter Barkholt; Rossi, M.; Marín, Á. G.;
2013-01-01
3D particle motion was recorded using astigmatism particle tracking velocimetry under controlled thermal and acoustic conditions in a long, straight, rectangular microchannel actuated in one of its transverse standing ultrasound-wave resonance modes with one or two half-wavelengths. The acoustic...
2. Creation, doubling, and splitting, of vortices in intracavity second harmonic generation
CERN Document Server
Lim, O K; Saffman, M; Królikowski, W
2003-01-01
We demonstrate generation and frequency doubling of unit charge vortices in a linear astigmatic resonator. Topological instability of the double charge harmonic vortices leads to well separated vortex cores that are shown to rotate, and become anisotropic, as the resonator is tuned across resonance.
3. Ptosis - infants and children
Science.gov (United States)
Blepharoptosis-children; Congenital ptosis; Eyelid drooping-children; Eyelid drooping-amblyopia; Eyelid drooping-astigmatism ... Ptosis in infants and children is often due to a problem with the muscle that raises the eyelid. A nerve problem in the eyelid can ...
4. Multifocal Toric Intraocular Lens for Traumatic Cataract in a Child
Directory of Open Access Journals (Sweden)
Yanfeng Zeng
2016-10-01
Full Text Available A child suffering from traumatic cataract and corneal astigmatism of 2.14 D had a phacoemulsification operation and implantation of a ReSTOR Toric intraocular lens (IOL) to correct the astigmatism. The primary outcome measurements were the uncorrected distance visual acuity (UDVA), uncorrected near vision at 40 cm, intraocular pressure, spherical equivalent refraction, residual astigmatism, corneal astigmatism, presence of unusual optical phenomena, and use of spectacles. At 7 months postoperatively, UDVA was maintained between 16/20 and 24/20, near vision was between J1 and J3, residual spherical refraction was 0–0.37 D, and residual refractive cylinder was between 0 and 0.67 D. A multifocal toric IOL can provide the possibility of satisfactory vision for both distant and near conditions without the use of spectacles to meet children’s needs when studying and doing sports. Additionally, binocular vision can be reconstructed. This intervention, therefore, seems to be a satisfactory alternative.
5. Change in corneal curvature induced by surgery
NARCIS (Netherlands)
G. van Rij (Gabriel)
1987-01-01
textabstractThe first section deals with the mechanisms by which sutures, incisions and intracorneal contact lenses produce a change in corneal curvature. To clarify the mechanisms by which incisions and sutures produce astigmatism, we made incisions and placed sutures in the corneoscleral limbus of
6. Meta-analysis to compare the safety and efficacy of manual small incision cataract surgery and phacoemulsification
Directory of Open Access Journals (Sweden)
Parikshit Gogate
2015-01-01
Conclusion: The outcome of this meta-analysis indicated there is no difference between phacoemulsification and SICS for BCVA and UCVA of 6/18 and 6/60. Endothelial cell loss and intraoperative and postoperative complications were similar between procedures. SICS resulted in statistically greater astigmatism and UCVA of 6/9 or worse; however, near UCVA was better.
7. Visual outcomes and optical quality after implantation of a diffractive multifocal toric intraocular lens
Science.gov (United States)
Chen, Xiangfei; Zhao, Ming; Shi, Yuhua; Yang, Liping; Lu, Yan; Huang, Zhenping
2016-01-01
Background: This study evaluated visual function after implantation of a multifocal toric intraocular lens (IOL). Materials and Methods: This study involved 10 eyes from eight cataract patients with corneal astigmatism of 1.0 diopter (D) or higher who had received phacoemulsification with implantation of an AcrySof IQ ReSTOR Toric IOL. Six-month evaluations included visual acuity, spherical equivalent (SE), defocus curve, residual astigmatism, IOL rotation, contrast sensitivity (CS), wavefront aberrations, modulation transfer function (MTF), and patient satisfaction assessments. Results: At 6 months postoperatively, uncorrected distance visual acuity (logarithm of the minimum angle of resolution) was 0.09 ± 0.04, corrected distance visual acuity was 0.02 ± 0.11, and uncorrected near visual acuity was 0.12 ± 0.07. The mean SE was −0.095 ± 0.394 D (±0.50 D in 90%). Refractive astigmatism at the 6-month follow-up visit was significantly reduced to 0.35 ± 0.32 D from 1.50 ± 0.41 D presurgery (P < 0.05). There was an increase in MTF results between preoperative and postoperative evaluations at all spatial frequencies. Conclusions: The diffractive multifocal toric IOL is able to provide a predictable astigmatic correction with apparently outstanding levels of optical quality after implantation. PMID:27221680
8. Design of high power solid-state pulsed laser resonators
International Nuclear Information System (INIS)
Methods and configurations for the design of high-power solid-state pulsed laser resonators operating in free-running mode are presented. For fundamental-mode high-power resonators, a method is proposed for the design of a resonator with joined stability zones. In the case of multimode resonators, two configurations are introduced for maximizing the laser's overall efficiency through compensation of the astigmatism induced by the excitation. The first configuration consists of a triangular ring resonator. The results for this configuration are discussed theoretically, showing that it is possible to compensate virtually 100% of the astigmatism of the thermal lens; however, this is only possible for a specific pumping power. The second configuration proposes a dual-active-medium resonator, with the media rotated 90 degrees from each other around the optical axis, where each active medium acts as an astigmatic lens of the same dioptric power. The reliability of this configuration is corroborated experimentally using a Nd:YAG dual-active-medium resonator. It is found that in the pumping power range where astigmatism compensation is possible, the overall efficiency is constant, even when the excitation power, and with it the dioptric power of the thermal lens, increases. (Author)
9. Expressions for third-order aberration theory for holographic images
S K Tripathy; S Ananda Rao
2003-01-01
Expressions for the third-order aberrations in the reconstructed wavefront of point objects were established by Meier. However, Smith, Neil Mohon, and Sweatt independently reported results that differ from Meier's. We find that the coefficients for spherical aberration and astigmatism tally with Meier's, while the coefficients for distortion and coma differ.
10. Anterior segment study with the pentacam scheimpflug camera in refractive surgery candidates
Directory of Open Access Journals (Sweden)
Masih Hashemi
2013-01-01
Conclusions: Myopic eyes had steeper corneas than hyperopic eyes and anterior chamber measurements were significantly higher in the myopic eyes. In myopic eyes, AE max and PE max and K max measurements were higher, and ACD measurements were lower in the astigmatic groups.
11. The correlation between variation of visual acuity and the anterior chamber depth in the early period after phacoemulsification
Directory of Open Access Journals (Sweden)
Kai-jian CHEN
2011-04-01
Full Text Available Objective: To investigate the correlation between visual acuity variation and anterior chamber depth in the early period after phacoemulsification. Methods: Thirty-six eyes of 32 patients with age-related cataract underwent 3.2 mm clear corneal incision phacoemulsification and intraocular lens (IOL) implantation. Visual acuity was examined, and horizontal curvature (K1), vertical curvature (K2), corneal astigmatism, and anterior chamber depth were measured with IOL-Master preoperatively and on postoperative days 1, 3, 7 and 15. The changes in the parameters were compared, and the correlations among visual acuity, corneal astigmatism and anterior chamber depth were analyzed. Results: Before operation and on days 1, 3, 7 and 15 after operation, the corneal astigmatism was -0.87±0.40 D, -1.92±1.38 D, -1.69±1.13 D, -1.45±0.79 D and -1.36±0.74 D; the anterior chamber depth was 3.08±0.35 mm, 4.04±0.38 mm, 4.28±0.29 mm, 4.22±0.17 mm and 4.22±0.16 mm; the visual acuity was 0.18±0.10, 0.44±0.14, 0.59±0.12, 0.61±0.11 and 0.62±0.14. Significant differences were found between preoperative and postoperative visual acuity, corneal astigmatism and anterior chamber depth; in corneal astigmatism between days 1 and 15 post operation (P < 0.05); and in anterior chamber depth and visual acuity between days 1 and 3 post operation (P < 0.05). A positive correlation was found between visual acuity and corneal astigmatism on day 1 (r=0.42, P < 0.05), day 3 (r=0.35, P < 0.05) and day 7 (r=0.35, P < 0.05) post operation, and a negative correlation was found between visual acuity and anterior chamber depth on day 3 (r=-0.29, P < 0.05), day 7 (r=-0.43, P < 0.01) and day 15 (r=-0.37, P < 0.05) post operation. Conclusion: Both corneal astigmatism and anterior chamber depth are correlated with the visual acuity variation in the early period after phacoemulsification.
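The r values reported in the abstract above are Pearson correlation coefficients. As a minimal sketch with entirely made-up paired data (not the study's measurements):

```python
import numpy as np

# Illustrative (made-up) paired measurements for a handful of eyes:
# decimal visual acuity and corneal astigmatism magnitude (D)
acuity      = np.array([0.40, 0.45, 0.50, 0.55, 0.60, 0.65])
astigmatism = np.array([1.2, 1.5, 1.6, 1.9, 2.1, 2.3])

# Pearson correlation coefficient between the two series
r = np.corrcoef(acuity, astigmatism)[0, 1]
```

A value of |r| near 1 indicates a strong linear association, as in the day-1 to day-15 correlations the study reports.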
12. Refractive cylinder outcomes after calculating toric intraocular lens cylinder power using total corneal refractive power
Directory of Open Access Journals (Sweden)
Davison JA
2015-08-01
Full Text Available James A Davison,1 Richard Potvin2; 1Wolfe Eye Clinic, Marshalltown, IA, USA; 2Science in Vision, Akron, NY, USA. Purpose: To determine whether the total corneal refractive power (TCRP) value, which is based on measurement of both anterior and posterior corneal astigmatism, is effective for toric intraocular lens (IOL) calculation with AcrySof® Toric IOLs. Patients and methods: A consecutive series of cataract surgery cases with AcrySof toric IOL implantation was studied retrospectively. The IOLMaster® was used for calculation of IOL sphere, the Pentacam® TCRP 3.0 mm apex/ring value was used as the keratometry input to the AcrySof Toric IOL Calculator, and the VERION™ Digital Marker was used for surgical orientation. The keratometry readings from the VERION reference unit were recorded but not used in the actual calculation. Vector differences between expected and actual residual refractive cylinder were calculated and compared to simulated vector errors using the collected VERION keratometry data. Results: In total, 83 eyes of 56 patients were analyzed. Residual refractive cylinder was 0.25 D or lower in 58% of eyes and 0.5 D or lower in 80% of eyes. The TCRP-based calculation resulted in a statistically significantly lower vector error (P<0.01) and significantly more eyes with a vector error ≤0.5 D relative to the VERION-based calculation (P=0.02). The TCRP and VERION keratometry readings suggested a different IOL toric power in 53 of 83 eyes. In these 53 eyes, the TCRP vector error was lower in 28 cases, the VERION error was lower in five cases, and the errors were equal in 20 cases. When the anterior cornea had with-the-rule astigmatism, the VERION was more likely to suggest a higher toric power; when the anterior cornea had against-the-rule astigmatism, it was less likely to do so. Conclusion: Using the TCRP keratometry measurement in the AcrySof toric calculator may improve overall postoperative refractive results
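The study above compares vector differences between expected and actual residual refractive cylinder. One standard way to compute such a difference (assumed here for illustration; the authors' exact convention is not given in the abstract) is the double-angle representation, in which a cylinder C at axis θ maps to the vector (C·cos 2θ, C·sin 2θ):

```python
import math

def cyl_to_vector(cyl, axis_deg):
    # Double-angle representation: cylinder C at axis θ → (C·cos 2θ, C·sin 2θ).
    # Doubling the angle makes axes 180° apart map to the same point.
    a = math.radians(2 * axis_deg)
    return (cyl * math.cos(a), cyl * math.sin(a))

def vector_error(expected, actual):
    # Magnitude of the astigmatic vector difference in diopters;
    # expected and actual are (cylinder, axis) pairs.
    ex, ey = cyl_to_vector(*expected)
    ax, ay = cyl_to_vector(*actual)
    return math.hypot(ex - ax, ey - ay)

# Illustrative values only: 0.50 D expected at 90° vs 0.75 D actual at 180°
err = vector_error((0.50, 90), (0.75, 180))
```

Because the angle is doubled, two cylinders of similar magnitude at perpendicular axes produce a large vector error, which matches clinical intuition about a mis-oriented toric correction.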
13. SURGICAL AND VISUAL OUTCOME OF PHACOEMULSIFICATION SURGERY (ROUTINE AND MICRO - PHACO (BIMANUAL PHACO: A COMPARATIVE STUDY
Directory of Open Access Journals (Sweden)
Sudha
2015-03-01
Full Text Available Cataract surgery has evolved over the past few decades with a progressive decrease in the size of the incision, from the original 12 mm intracapsular incision down to bimanual phacoemulsification (Micro-Phaco), which has an incision size of just 700 microns. In the present comparative prospective study, postoperative best corrected visual acuity (BCVA) and surgically induced astigmatism were compared between the routine phacoemulsification technique and bimanual phaco (Micro-Phaco); 60 eyes were studied. There was no statistically significant difference in postoperative BCVA between patients operated with Micro-Phaco and those operated with routine phacoemulsification. There was a difference in surgically induced astigmatism (SIA); the average SIA with Micro-Phaco was 0.5972, as against 0.8328 with routine phacoemulsification.
14. Line focusing for soft x-ray laser-plasma lasing.
Science.gov (United States)
Bleiner, Davide; Balmer, Jürg E; Staub, Felix
2011-12-20
A computational study of line-focus generation was performed using a self-written ray-tracing code and compared to experimental data. Two line-focusing geometries were compared: exploiting the sagittal astigmatism of a tilted spherical mirror, or using the spherical aberration of an off-axis-illuminated spherical mirror. Line focusing by means of astigmatism or spherical aberration showed identical results, as expected from the equivalence of the two frames of reference. The variation of the incidence angle on the target affects the line-focus length, which in turn affects the amplification length; as long as the irradiance remains above the amplification threshold, a longer line focus is advantageous. The amplification threshold depends on operating parameters and plasma-column conditions; the present study addresses four possible cases. PMID:22193201
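The authors' ray-tracing code is not shown, but the textbook relations behind the astigmatism of a tilted spherical mirror (tangential focus at (R/2)·cos θ, sagittal focus at R/(2·cos θ)) can be sketched as follows; the mirror radius and tilt angle below are illustrative assumptions only:

```python
import math

def focal_lengths(radius_mm, theta_deg):
    # Tangential and sagittal focal lengths of a spherical mirror of
    # radius R illuminated at angle theta from normal incidence:
    #   f_t = (R/2)·cos θ   (tangential / meridional plane)
    #   f_s = R/(2·cos θ)   (sagittal plane)
    th = math.radians(theta_deg)
    f_t = 0.5 * radius_mm * math.cos(th)
    f_s = 0.5 * radius_mm / math.cos(th)
    return f_t, f_s

# Illustrative numbers: R = 1000 mm mirror tilted by 20 degrees
f_t, f_s = focal_lengths(1000.0, 20.0)

# The two line foci lie at these planes; their separation grows with tilt,
# which is how tilting controls the line-focus geometry on target.
astigmatic_separation = f_s - f_t
```

Increasing θ lengthens the line focus but spreads the irradiance, which is the trade-off the abstract describes against the amplification threshold.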
15. Topography-guided custom ablation treatment for treatment of keratoconus
Directory of Open Access Journals (Sweden)
Rohit Shetty
2013-01-01
Full Text Available Keratoconus is a progressive ectatic disorder of the cornea which often presents with fluctuating refraction and high irregular astigmatism. Correcting the vision of these patients is often a challenge because glasses are unable to correct the irregular astigmatism and regular contact lenses may not fit them very well. Topography-guided custom ablation treatment (T-CAT) is a procedure of limited ablation of the cornea using excimer laser with the aim of regularizing the cornea, improving the quality of vision and possibly contact lens fit. The aim of the procedure is not to give a complete refractive correction. It has been tried with a lot of success by various groups of refractive surgeons around the world but a meticulous and methodical planning of the procedure is essential to ensure optimum results. In this paper, we attempt to elucidate the planning for a T-CAT procedure for various types of cones and asphericities.
16. Coma-free alignment of high resolution electron microscopes with the aid of optical diffractograms
International Nuclear Information System (INIS)
Alignment by means of current commutation and defocusing of the objective does not yield the desired rotational symmetry of the imaging pencils. This was found while aligning a transmission electron microscope with a single-field condenser objective. A series of optical diffractograms of micrographs, taken under the same tilted illumination but at various azimuths, has been arranged in a tableau, which exhibits strong asymmetry. Quantitative evaluation shows the most important asymmetric aberration to be the axial coma, which becomes critical when a resolution better than 5 Å is sought. The tableau also allows an assessment of the three-fold astigmatism. A procedure has been developed which aligns the microscope onto the coma-free and dispersion-free pencil axis and does not rely on current commutation. The procedure demands equal appearance of astigmatic carbon-film images produced under the same tilt but diametrically opposite azimuths. (Auth.)
17. Optical design of Kirkpatrick-Baez microscope for ICF
International Nuclear Information System (INIS)
A new flux-resolution optical design method for the Kirkpatrick-Baez microscope (KB microscope) is proposed. In X-ray imaging diagnostics of inertial confinement fusion (ICF), spatial resolution and flux are always the key parameters. Whereas the traditional optical design of the KB microscope corrects on-axis spherical aberration and astigmatic aberration, the flux-resolution method is based on the lateral aberration over the full field together with the astigmatic aberration. The spatial resolution, as related to field dimension and light flux, can thus be estimated. From the expressions for spatial resolution and the practical limits in ICF, rules for setting the initial structure and the optical design flow are summarized. An example is presented, showing that the design meets the original targets and overcomes the shortcomings in imaging the compressed core that attend traditional spherical-aberration correction. (authors)
18. Imaging with Spherically Bent Crystals or Reflectors
Energy Technology Data Exchange (ETDEWEB)
Bitter, M; Hill, K W; Scott, S; Ince-Cushman, A; Reinke, M; Podpaly, Y; Rice, J E; Beiersdorfer, P
2010-06-01
This paper consists of two parts: Part I describes the working principle of a recently developed x-ray imaging crystal spectrometer, where the astigmatism of spherically bent crystals is used to advantage to record spatially resolved spectra of highly charged ions for Doppler measurements of the ion-temperature and toroidal plasma-rotation-velocity profiles in tokamak plasmas. This type of spectrometer was thoroughly tested on NSTX and Alcator C-Mod, and its concept was recently adopted for the design of the ITER crystal spectrometers. Part II describes imaging schemes where the astigmatism has been eliminated by the use of matched pairs of spherically bent crystals or reflectors. These imaging schemes are applicable over a wide range of the electromagnetic spectrum, including microwaves, visible light, EUV radiation, and x-rays. Potential applications with EUV radiation and x-rays are the diagnosis of laser-produced plasmas, imaging of biological samples with synchrotron radiation, and lithography.
19. Transport of a high brightness proton beam through the Munich tandem accelerator
Energy Technology Data Exchange (ETDEWEB)
Moser, M., E-mail: marcus.moser@unibw.de [Universität der Bundeswehr München, Institut für Angewandte Physik und Messtechnik LRT2, Department für Luft- und Raumfahrttechnik, 85577 Neubiberg (Germany); Greubel, C. [Universität der Bundeswehr München, Institut für Angewandte Physik und Messtechnik LRT2, Department für Luft- und Raumfahrttechnik, 85577 Neubiberg (Germany); Carli, W. [Beschleunigerlabor MLL, 85478 Garching (Germany); Peeper, K.; Reichart, P.; Urban, B.; Vallentin, T. [Universität der Bundeswehr München, Institut für Angewandte Physik und Messtechnik LRT2, Department für Luft- und Raumfahrttechnik, 85577 Neubiberg (Germany); Dollinger, G., E-mail: guenther.dollinger@unibw.de [Universität der Bundeswehr München, Institut für Angewandte Physik und Messtechnik LRT2, Department für Luft- und Raumfahrttechnik, 85577 Neubiberg (Germany)
2015-04-01
A basic requirement for ion microprobes with sub-μm beam focus is a high-brightness beam, so that the small phase space usually accepted by the ion microprobe is filled with enough ion current for the desired application. We performed beam transport simulations to optimize the beam brightness transported through the Munich tandem accelerator. This was done under the constraint of a maximum injected ion current of 10 μA, imposed by radiation safety regulations and beam power limits. The main influence on beam brightness of the stripper foil, in conjunction with intrinsic astigmatism in the beam transport, is discussed. The calculations show possibilities for brightness enhancement by using astigmatism corrections and asymmetric filling of the phase-space volume in the x- and y-directions.
20. [Keratoconus, the most common corneal dystrophy. Can keratoplasty be avoided?].
Science.gov (United States)
Arne, Jean Louis; Fournié, Pierre
2011-01-01
Keratoconus is the most common form of corneal dystrophy. It consists of a non-inflammatory progressive thinning process that leads to conical ectasia of the cornea, causing high myopia and astigmatism. In more advanced cases, opacities can be seen at the apex of the cone. Traditional conservative management of keratoconus begins with spectacle correction and contact lenses. Surgery is recommended when a stable contact lens fit fails to provide adequate vision. Keratoplasty was long the only surgical treatment, but recent years have seen the introduction of new surgical options: collagen cross-linking stiffens the cornea and can halt disease progression; intrastromal corneal rings can reduce astigmatism and improve visual acuity; intraocular lenses are valuable additional options for the correction of refractive errors. Currently, keratoplasty is mainly restricted to patients with opacities of the central cornea. PMID:22039707
1. Topography-guided custom ablation treatment for treatment of keratoconus.
Science.gov (United States)
Shetty, Rohit; D'Souza, Sharon; Srivastava, Samaresh; Ashwini, R
2013-08-01
Keratoconus is a progressive ectatic disorder of the cornea which often presents with fluctuating refraction and high irregular astigmatism. Correcting the vision of these patients is often a challenge because glasses are unable to correct the irregular astigmatism and regular contact lenses may not fit them very well. Topography-guided custom ablation treatment (T-CAT) is a procedure of limited ablation of the cornea using excimer laser with the aim of regularizing the cornea, improving the quality of vision and possibly contact lens fit. The aim of the procedure is not to give a complete refractive correction. It has been tried with a lot of success by various groups of refractive surgeons around the world but a meticulous and methodical planning of the procedure is essential to ensure optimum results. In this paper, we attempt to elucidate the planning for a T-CAT procedure for various types of cones and asphericities. PMID:23925335
2. Optical path length and trajectory stability in rotationally asymmetric multipass cells.
Science.gov (United States)
Harden, Galen H; Cortes-Herrera, Luis E; Hoffman, Anthony J
2016-08-22
We describe the behavior of optical trajectories in multipass rotationally asymmetric cavities (RACs) using a phase-space motivated approach. Emphasis is placed on generating long optical paths. A trajectory with an optical path length of 18 m is generated within a 68 cm³ volume. This path-length-to-volume ratio (26.6 cm⁻²) is large compared to current state-of-the-art multipass cells such as the cylindrical multipass cell (6.6 cm⁻²) and the astigmatic Herriott cell (9 cm⁻²). Additionally, the effect of small changes to the input conditions on the path length is studied and compared to the astigmatic Herriott cell. This work simplifies the process of designing RACs with long optical path lengths and could lead to broader implementation of these multipass cells.
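The quoted figure of merit is simple arithmetic: 18 m of path in 68 cm³ of volume works out to roughly 26.5 cm⁻², consistent with the abstract's 26.6 cm⁻² once rounding of the input figures is allowed for:

```python
# Path-length-to-volume ratio quoted in the abstract, in CGS units
path_length_cm = 18 * 100        # 18 m optical path, converted to cm
volume_cm3 = 68                  # 68 cm^3 cell volume

# Ratio has units of cm / cm^3 = cm^-2; ~26.5, near the quoted 26.6
# (the small discrepancy suggests the quoted inputs were rounded)
ratio = path_length_cm / volume_cm3
```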
3. Plenoptic microscope based on laser optical feedback imaging (LOFI)
CERN Document Server
Glastre, W; Jacquin, O; de Chatellus, H Guillet; Lacot, E
2015-01-01
We present an overview of the performance of a plenoptic microscope which combines the high sensitivity of a laser optical feedback imaging setup, the high resolution of optical synthetic aperture, and a shot-noise-limited signal-to-noise ratio obtained by acoustic photon tagging. By using adapted phase filtering, this microscope allows phase-drift correction and numerical aberration compensation (defocusing, coma, astigmatism, ...). This new kind of microscope appears well suited to deep imaging through scattering and heterogeneous media.
4. Asymmetric wavelet reconstruction of particle hologram with an elliptical Gaussian beam illumination.
Science.gov (United States)
Wu, Xuecheng; Wu, Yingchun; Zhou, Binwu; Wang, Zhihua; Gao, Xiang; Gréhan, Gérard; Cen, Kefa
2013-07-20
We propose an asymmetric wavelet method to reconstruct a particle from a hologram illuminated by an elliptical, astigmatic Gaussian beam. The particle can be reconstructed by a convolution of the asymmetric wavelet and hologram. The reconstructed images have the same size and resolution as the recorded hologram; therefore, the reconstructed 3D field is convenient for automatic particle locating and sizing. The asymmetric wavelet method is validated by both simulated holograms of spherical particles and experimental holograms of opaque, nonspherical coal particles.
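The convolution-based reconstruction described above can be sketched in simplified form. The kernel below is an ordinary Fresnel chirp with separate effective distances along the two transverse axes, which is one way to model an elliptical, astigmatic illuminating beam; it is an illustrative assumption, not the paper's exact asymmetric wavelet, and all parameter names are hypothetical:

```python
import numpy as np

def asymmetric_kernel(nx, ny, dx, dy, zx, zy, wavelength):
    # Fresnel-type chirp with distinct effective reconstruction
    # distances zx, zy along x and y (astigmatic illumination).
    x = (np.arange(nx) - nx / 2) * dx
    y = (np.arange(ny) - ny / 2) * dy
    X, Y = np.meshgrid(x, y)
    k = 2 * np.pi / wavelength
    return np.exp(1j * k * (X**2 / (2 * zx) + Y**2 / (2 * zy)))

def reconstruct(hologram, kernel):
    # Convolution of hologram and kernel via FFT (circular convolution);
    # the output has the same size and resolution as the hologram.
    H = np.fft.fft2(hologram)
    K = np.fft.fft2(np.fft.ifftshift(kernel))
    return np.fft.ifft2(H * K)

# Illustrative parameters: 64x64 hologram, 10 µm pixels, 633 nm light
kern = asymmetric_kernel(64, 64, 1e-5, 1e-5, zx=0.10, zy=0.12, wavelength=633e-9)
```

With zx == zy the kernel reduces to the usual symmetric Fresnel case; the asymmetry is what corrects for the elliptical Gaussian reference beam.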
5. Fibrin-glue assisted multilayered amniotic membrane transplantation in surgical management of pediatric corneal limbal dermoid: a novel approach
OpenAIRE
Pirouzian, Amir; Ly, Hang; Holz, Huck; Sudesh, Rattehalli S.; Chuck, Roy S.
2010-01-01
Purpose: To report a new surgical technique for excising pediatric corneal limbal dermoid and the post-resection ocular surface reconstruction. Methods: We describe a method of deep lamellar excision followed by sutureless multilayered amniotic membrane transplantation in the surgical management of corneal limbal dermoid. Result: This technique achieves a rapid corneal re-epithelization, reduces post-operative pain, and will diminish post-operative scarring. Preoperative corneal astigmatism will per...
6. Intraocular correction of high-degree ametropia using individual multifocal LentisMPlus IOL
OpenAIRE
Fedorova, I. S.; S.Y. Kopayev; T.S. Kuznetsova,; D.G. Uzunyan
2013-01-01
ABSTRACT Background. For surgical correction of high-degree ametropia aggravated by astigmatism, the following options are available: excimer laser correction; phakic lens implantation; bioptics – a combination of ablating the transparent crystalline lens (ATL) with implantation of a multifocal toric IOL of a standard series and LASIK for the correction of a residual refractive error; ATL using 2 IOLs according to the «Piggy Back» technology; additional meniscus IOL implantation «Ad...
7. Changes in Corneal Topography after 25-Gauge Transconjunctival Sutureless Vitrectomy versus after 20-Gauge Standard Vitrectomy
OpenAIRE
Okamoto, F; Okamoto, C.; Sakata, N.; Hiratsuka, K; Yamane, N; Hiraoka, T.; Kaji, Y; Oshika, T.
2007-01-01
Purpose: To evaluate the changes in regular and irregular corneal astigmatism after 25-gauge transconjunctival sutureless vitrectomy and 20-gauge standard vitrectomy. Design: Prospective observational comparative case series. Participants: Thirty-two eyes of 32 patients undergoing 25-gauge transconjunctival sutureless vitrectomy and 25 eyes of 24 patients undergoing 20-gauge standard vitrectomy. Methods: Corneal topography was obtained preoperatively and at 2 weeks and 1 month postoperatively. Main Outco...
8. Velocity map imaging of a slow beam of ammonia molecules inside a quadrupole guide
OpenAIRE
Quintero-Pérez, Marina; Jansen, Paul; Bethlem, Hendrick L.
2012-01-01
Velocity map imaging inside an electrostatic quadrupole guide is demonstrated. By switching the voltages that are applied to the rods, the quadrupole can be used for guiding Stark decelerated molecules and for extracting the ions. The extraction field is homogeneous along the axis of the quadrupole while it defocuses the ions in the direction perpendicular to both the axis of the quadrupole and the axis of the ion optics. To compensate for this astigmatism, a series of planar electrodes with ...
9. Comparison of Anterior Segment Measurements with Scheimpflug/Placido Photography-Based Topography System and IOLMaster Partial Coherence Interferometry in Patients with Cataracts
OpenAIRE
Jinhai Huang; Na Liao; Giacomo Savini; Fangjun Bao; Ye Yu; Weicong Lu; Qingjie Hu; Qinmei Wang
2014-01-01
Purpose. To assess the consistency of anterior segment measurements obtained using a Sirius Scheimpflug/Placido photography-based topography system (CSO, Italy) and IOLMaster partial coherence interferometry (Carl Zeiss Meditec, Germany) in eyes with cataracts. Methods. A total of 90 eyes of 90 patients were included in this prospective study. The anterior chamber depth (ACD), keratometry (K), corneal astigmatism axis, and white to white (WTW) values were randomly measured three times with Si...
10. Compact adaptive optic-optical coherence tomography system
Energy Technology Data Exchange (ETDEWEB)
Olivier, Scot S. (Livermore, CA); Chen, Diana C. (Fremont, CA); Jones, Steven M. (Danville, CA); McNary, Sean M. (Stockton, CA)
2012-02-28
A Badal optometer and rotating cylinders are inserted in the AO-OCT to correct large spectacle aberrations such as myopia, hyperopia and astigmatism, for ease of clinical use and reduction. Spherical mirrors in the sets of the telescope are rotated orthogonally to reduce aberrations and beam displacement caused by the scanners. This produces greatly reduced AO registration errors and improved AO performance, enabling high-order aberration correction in patients' eyes.
11. Aberration influenced generation of rotating two-lobe light fields
Science.gov (United States)
Kotova, S. P.; Losevsky, N. N.; Prokopova, D. V.; Samagin, S. A.; Volostnikov, V. G.; Vorontsov, E. N.
2016-08-01
The influence of aberrations on light fields with a rotating intensity distribution is considered. Light fields were generated with the phase masks developed using the theory of spiral beam optics. The effects of basic aberrations, such as spherical aberration, astigmatism and coma are studied. The experimental implementation of the fields was achieved with the assistance of a liquid crystal spatial light modulator HOLOEYE HEO-1080P, operating in reflection mode. The results of mathematical modelling and experiments have been qualitatively compared.
12. Management of visual disturbances in albinism: a case report
OpenAIRE
Omar Rokiah; Idris Siti; Meng Chung; Knight Victor
2012-01-01
Introduction: A number of vision defects have been reported in association with albinism, such as photophobia, nystagmus and astigmatism. In many cases only prescription sunglasses are prescribed. In this report, the effectiveness of low-vision rehabilitation in albinism, which included prescription of multiple visual aids, is discussed. Case presentation: We present the case of a 21-year-old Asian woman with albinism and associated vision defects. Her problems were blurring of distant...
13. [Heterotopic fundus (author's transl)].
Science.gov (United States)
Denden, A
1976-07-01
Fundus heterotopicus is the term used to describe a rare, non-hereditary curvature anomaly of the fundus in the non-myopic eye, which is characterized: 1. functionally, by a slowly increasing myopic-astigmatic refractive error; 2. by correctable bitemporal or binasal refraction scotomata; and 3. ophthalmoscopically, by a posterior out-pouching of the nasal or temporal fundus portions, including the optic disc and macula in the obliquely descending wall of the ectasia.
14. High Prevalence of Refractive Errors in 7 Year Old Children in Iran
Directory of Open Access Journals (Sweden)
Hassan HASHEMI
2016-02-01
Full Text Available Background: The latest WHO report indicates that refractive errors are the leading cause of visual impairment throughout the world. The aim of this study was to determine the prevalence of myopia, hyperopia, and astigmatism in 7 yr old children in Iran. Methods: In a cross-sectional study in 2013 with multistage cluster sampling, first graders were randomly selected from 8 cities in Iran. All children were tested by an optometrist for uncorrected and corrected vision, and non-cycloplegic and cycloplegic refraction. Refractive errors in this study were determined based on spherical equivalent (SE) cycloplegic refraction. Results: From 4614 selected children, 89.0% participated in the study, and 4072 were eligible. The prevalence rates of myopia, hyperopia and astigmatism were 3.04% (95% CI: 2.30-3.78), 6.20% (95% CI: 5.27-7.14), and 17.43% (95% CI: 15.39-19.46), respectively. Prevalence of myopia (P=0.925) and astigmatism (P=0.056) were not statistically significantly different between the two genders, but the odds of hyperopia were 1.11 (95% CI: 1.01-2.05) times higher in girls (P=0.011). The prevalence of with-the-rule astigmatism was 12.59%, against-the-rule was 2.07%, and oblique 2.65%. Overall, 22.8% (95% CI: 19.7-24.9) of the schoolchildren in this study had at least one type of refractive error. Conclusion: One out of every 5 schoolchildren had some refractive error. Conducting multicenter studies throughout the Middle East can be very helpful in understanding the current distribution patterns and etiology of refractive errors compared to the previous decade. Keywords: Refractive errors, Cross-sectional study, Iran
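As background on how interval estimates like the ones above are formed, here is a generic normal-approximation (Wald) sketch — not the study's actual computation; the function name and the counts are illustrative only:

```python
import math

def prevalence_ci(cases: int, n: int, z: float = 1.96):
    """Normal-approximation (Wald) 95% confidence interval for a prevalence."""
    p = cases / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Illustrative counts only (roughly the scale of the myopia figure above)
lo, hi = prevalence_ci(cases=124, n=4072)
print(f"{100 * lo:.2f}%-{100 * hi:.2f}%")
```

The paper's exact bounds may differ, since the abstract does not state the method or the per-outcome denominators used there.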
15. Diode-pumped dye laser
Science.gov (United States)
Burdukova, O. A.; Gorbunkov, M. V.; Petukhov, V. A.; Semenov, M. A.
2016-10-01
This letter reports diode pumping for dye lasers. We offer a pulsed dye laser with an astigmatism-compensated three-mirror cavity and side pumping by blue laser diodes with 200 ns pulse duration. Eight dyes were tested. Four dyes provided a slope efficiency of more than 10% and the highest slope efficiency (18%) was obtained for laser dye Coumarin 540A in benzyl alcohol.
16. Topical timolol maleate 0.5% solution for the management of deep periocular infantile hemangiomas.
Science.gov (United States)
Painter, Sally L; Hildebrand, Göran Darius
2016-04-01
This retrospective, consecutive, clinical case series examined the use of topical timolol in the treatment of 5 children with deep, periocular infantile hemangiomas that caused astigmatism, change in head posture, or ptosis. All patients were treated with timolol maleate solution 0.5% twice daily until the lesions had regressed. All 5 children showed regression of the lesion and improvement in amblyogenic risk factors within 2 weeks.
17. Toric and toric multifocal IOLs in meridional amblyopia
Institute of Scientific and Technical Information of China (English)
Kirithika Muthusamy; Charles Claoué
2015-01-01
Dear Sir, Meridional amblyopia is sometimes put forward as a reason for not implanting Toric intraocular lenses (IOLs). It has been noted that patients with high levels of childhood astigmatism (>3 DC) can develop persistent orientation-dependent visual deficits despite optical correction. Studies by Mitchell et al [1] demonstrated that meridional visual deprivation during the critical period of visual development results in permanently reduced response to stimuli in those orientations. This phenomenon was termed
18. Compact adaptive optic-optical coherence tomography system
Energy Technology Data Exchange (ETDEWEB)
Olivier, Scot S. (Livermore, CA); Chen, Diana C. (Fremont, CA); Jones, Steven M. (Danville, CA); McNary, Sean M. (Stockton, CA)
2011-05-17
Badal Optometer and rotating cylinders are inserted in the AO-OCT to correct large spectacle aberrations such as myopia, hyperopia and astigmatism for ease of clinical use and reduction. Spherical mirrors in the sets of the telescope are rotated orthogonally to reduce aberrations and beam displacement caused by the scanners. This produces greatly reduced AO registration errors and improved AO performance to enable high-order aberration correction in patients' eyes.
19. Exploded representation of a refracting surface
Directory of Open Access Journals (Sweden)
W.H. Heath
2005-01-01
Full Text Available The concept of the exploded refracting surface is useful in the optics of contact lenses and vision underwater. The purpose of this paper is to show how to represent a refracting surface as an exploded pair of surfaces separated by a gap of zero width. The analysis is in terms of linear optics and allows for astigmatic and non-coaxial cases.
20. Foci in ray pencils of general divergency
OpenAIRE
W. F. Harris; R. D. van Gool
2009-01-01
In generalized optical systems, that is, in systems which may contain thin refracting elements of asymmetric dioptric power, pencils of rays may exhibit phenomena that cannot occur in conventional optical systems. In conventional optical systems astigmatic pencils have two principal meridians that are necessarily orthogonal; in generalized systems the principal meridians can be at any angle. In fact in generalized systems a pencil may have only one principal meridian or even none at all. I...
1. Spatial shaping for generating arbitrary optical dipoles traps for ultracold degenerate gases
OpenAIRE
Jeffrey G. Lee; Hill III, W. T.
2014-01-01
We present two spatial-shaping approaches -- phase and amplitude -- for creating two-dimensional optical dipole potentials for ultracold neutral atoms. When combined with an attractive or repulsive Gaussian sheet formed by an astigmatically focused beam, atoms are trapped in three dimensions resulting in planar confinement with an arbitrary network of potentials -- a free-space atom chip. The first approach utilizes an adaptation of the generalized phase-contrast technique to convert a phase ...
2. Geographical prevalence and risk factors for pterygium: a systematic review and meta-analysis
OpenAIRE
Liu, Lei; Wu, Jingyang; Geng, Jin; Yuan, Zhe; Huang, Desheng
2013-01-01
Objective: Pterygium is considered to be a proliferative overgrowth of bulbar conjunctiva that can induce significant astigmatism and cause visual impairment; this is the first meta-analysis to investigate the pooled prevalence and risk factors for pterygium worldwide. Design: A systematic review and meta-analysis of population-based studies. Setting: International. Participants: A total of 20 studies with 900,545 samples were included. Primary outcome measure: The pooled prevalence and ...
3. Adaptation to interocular differences in blur
OpenAIRE
Kompaniez, E.; Sawides, L.; Marcos, S.; Webster, M A
2013-01-01
Adaptation to a blurred image causes a physically focused image to appear too sharp, and shifts the point of subjective focus toward the adapting blur, consistent with a renormalization of perceived focus. We examined whether and how this adaptation normalizes to differences in blur between the two eyes, which can routinely arise from differences in refractive errors. Observers adapted to images filtered to simulate optical defocus or different axes of astigmatism, as well as to images that w...
4. Orbital angular momentum exchange in an optical parametric oscillator
OpenAIRE
Martinelli, M.; Huguenin, J. A. O.; Nussenzveig, P.; Khoury, A. Z.
2004-01-01
We present a study of orbital angular momentum transfer from pump to down-converted beams in a type-II Optical Parametric Oscillator. Cavity and anisotropy effects are investigated and demonstrated to play a central role in the transverse mode dynamics. While the idler beam can oscillate in a Laguerre-Gauss mode, the crystal birefringence induces an astigmatic effect in the signal beam that prevents the resonance of such a mode.
5. Non-paraxial Elliptical Gaussian Beam
Institute of Scientific and Technical Information of China (English)
WANG Zhaoying; LIN Qiang; NI Jie
2001-01-01
By using the methods of Hertz vector and angular spectrum transformation, the exact solution of a non-paraxial elliptical Gaussian beam with general astigmatism based on Maxwell's equations is obtained. We discuss its propagation characteristics. The results show that the orientation of the elliptical beam spot changes continuously as the beam propagates through isotropic media. Splitting or coupling of beam spots may occur for different initial spot sizes. This is very different from the paraxial elliptical Gaussian beam.
6. Visual outcomes after implantation of a novel refractive toric multifocal intraocular lens
Directory of Open Access Journals (Sweden)
Talita Shimoda
2014-04-01
Full Text Available Purpose: To assess the postoperative outcomes of a novel toric multifocal intraocular lens (IOL) in patients with cataract and corneal astigmatism. Methods: This prospective nonrandomized study included patients with cataract, corneal astigmatism, and a motivation for spectacle independence. In all patients, a Rayner M-flex® T toric IOL was implanted in the capsular bag. Three months after surgery, the distance, intermediate, and near visual acuities; spherical equivalent; residual refractive astigmatism; defocus curve; and contrast sensitivity were evaluated. A patient satisfaction and visual phenomena questionnaire was administered to all patients. Results: Thirty-four eyes of 18 patients were included in this study. Three months after surgery, the mean corrected distance visual acuity (logMAR) was 0.00 ± 0.08 at 6 m, 0.20 ± 0.09 at 70 cm, and 0.08 ± 0.11 at 40 cm. Uncorrected distance visual acuity was 20/40 or better in 100% of eyes. The preoperative mean refractive cylinder (RC) was -2.19 D (SD: ±0.53). After a 3-month follow-up, the average RC was -0.44 D (SD: ±0.27; p<0.001). Contrast sensitivity levels were high. At the last follow-up, 87.5% of patients were spectacle-independent for near, intermediate, and distance vision, and approximately 44% of patients reported halos and glare. Conclusion: Toric multifocal IOL implantation in patients with cataract and corneal astigmatism using the Rayner M-flex® T toric IOL was a simple, safe, and accurate option. This technology provides surgeons with a feasible option for meeting patient expectations of an enhanced lifestyle resulting from decreased spectacle dependence.
7. Is Noncycloplegic Photorefraction Applicable for Screening Refractive Amblyopia Risk Factors?
Directory of Open Access Journals (Sweden)
Zhale Rajavi
2012-01-01
Full Text Available Purpose: To compare the accuracy of noncycloplegic photorefraction (NCP) with that of cycloplegic refraction (CR) for detecting refractive amblyopia risk factors (RARFs) and to determine cutoff points. Methods: In this diagnostic test study, right eyes of 185 children (aged 1 to 14 years) first underwent NCP using the PlusoptiX SO4 photoscreener, followed by CR. Based on CR results, hyperopia (≥ +3.5 D), myopia (≥ -3 D), astigmatism (≥ 1.5 D), and anisometropia (≥ 1.5 D) were set as diagnostic criteria based on AAPOS guidelines. The difference in the detection of RARFs by the two methods was the main outcome measure. Results: RARFs were present in 57 (30.8%) and 52 (28.1%) of cases by CR and NCP, respectively, with an 89.7% agreement. In contrast to myopia and astigmatism, mean spherical power in hyperopic eyes was significantly different between the two methods (P < 0.001), being higher with CR (+5.96 ± 2.13 D) as compared to NCP (+2.37 ± 1.36 D). Considering CR as the gold standard, specificities for NCP exceeded 93% and sensitivities were also acceptable (≥ 83%) for myopia and astigmatism. Nevertheless, the sensitivity of NCP for detecting hyperopia was only 45.4%. Using a cutoff point of +1.87 D, instead of +3.5 D, for hyperopia, the sensitivity of NCP increased to 81.8% with a specificity of 84%. Conclusion: NCP is a relatively accurate method for detecting RARFs in myopia and astigmatism. Using an alternative cutoff point in this study, NCP may be considered an acceptable device for detecting hyperopia as well.
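Sensitivity and specificity figures like those above come from a 2×2 comparison of the screening test against the gold standard (CR). A minimal sketch of that calculation — the function name and counts are hypothetical, not the study's data:

```python
def screening_metrics(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity and specificity of a screening test vs. a gold standard."""
    sensitivity = tp / (tp + fn)  # fraction of true positives detected
    specificity = tn / (tn + fp)  # fraction of true negatives ruled out
    return sensitivity, specificity

# Hypothetical counts for illustration only
sens, spec = screening_metrics(tp=45, fp=10, fn=10, tn=120)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```

Lowering a cutoff (as the study does for hyperopia, +3.5 D to +1.87 D) moves cases from `fn` to `tp` and from `tn` to `fp`, which is exactly the sensitivity-for-specificity trade the abstract reports.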
8. Comparison of Anterior Segment Measurements with Scheimpflug/Placido Photography-Based Topography System and IOLMaster Partial Coherence Interferometry in Patients with Cataracts.
Science.gov (United States)
Huang, Jinhai; Liao, Na; Savini, Giacomo; Bao, Fangjun; Yu, Ye; Lu, Weicong; Hu, Qingjie; Wang, Qinmei
2014-01-01
Purpose. To assess the consistency of anterior segment measurements obtained using a Sirius Scheimpflug/Placido photography-based topography system (CSO, Italy) and IOLMaster partial coherence interferometry (Carl Zeiss Meditec, Germany) in eyes with cataracts. Methods. A total of 90 eyes of 90 patients were included in this prospective study. The anterior chamber depth (ACD), keratometry (K), corneal astigmatism axis, and white to white (WTW) values were randomly measured three times with Sirius and IOLMaster. Concordance between them was assessed by calculating 95% limits of agreement (LoA). Results. The ACD and K taken with the Sirius were statistically significantly higher than that taken with the IOLMaster; however, the Sirius significantly underestimated the WTW values compared with the IOLMaster. Good agreement was found for Km and ACD measurements, with 95% LoA of -0.20 to 0.54 mm and -0.16 to 0.34 mm, respectively. Poor agreement was observed for astigmatism axis and WTW measurements, as the 95% LoA was -23.96 to 23.36° and -1.15 to 0.37 mm, respectively. Conclusion. With the exception of astigmatism axis and WTW, anterior segment measurements taken by Sirius and IOLMaster devices showed good agreement and may be used interchangeably in patients with cataracts. PMID:25400939
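The 95% limits of agreement (LoA) reported above are Bland-Altman bounds: the mean difference between paired measurements ± 1.96 times the SD of those differences. A minimal sketch with synthetic readings (the ACD values below are made up, not study data):

```python
import statistics

def limits_of_agreement(a, b, z=1.96):
    """Bland-Altman 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation
    return mean_d - z * sd_d, mean_d + z * sd_d

# Synthetic paired ACD readings in mm (device A vs. device B)
sirius = [3.10, 3.25, 2.98, 3.40, 3.15]
iolmaster = [3.05, 3.20, 3.02, 3.31, 3.10]
print(limits_of_agreement(sirius, iolmaster))
```

A narrow LoA interval (as for Km and ACD above) supports interchangeable use; a wide one (as for the astigmatism axis) argues against it.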
9. Comparison of Anterior Segment Measurements with Scheimpflug/Placido Photography-Based Topography System and IOLMaster Partial Coherence Interferometry in Patients with Cataracts
Directory of Open Access Journals (Sweden)
Jinhai Huang
2014-01-01
Full Text Available Purpose. To assess the consistency of anterior segment measurements obtained using a Sirius Scheimpflug/Placido photography-based topography system (CSO, Italy) and IOLMaster partial coherence interferometry (Carl Zeiss Meditec, Germany) in eyes with cataracts. Methods. A total of 90 eyes of 90 patients were included in this prospective study. The anterior chamber depth (ACD), keratometry (K), corneal astigmatism axis, and white to white (WTW) values were randomly measured three times with Sirius and IOLMaster. Concordance between them was assessed by calculating 95% limits of agreement (LoA). Results. The ACD and K taken with the Sirius were statistically significantly higher than those taken with the IOLMaster; however, the Sirius significantly underestimated the WTW values compared with the IOLMaster. Good agreement was found for Km and ACD measurements, with 95% LoA of −0.20 to 0.54 mm and −0.16 to 0.34 mm, respectively. Poor agreement was observed for astigmatism axis and WTW measurements, as the 95% LoA was −23.96 to 23.36° and −1.15 to 0.37 mm, respectively. Conclusion. With the exception of astigmatism axis and WTW, anterior segment measurements taken by Sirius and IOLMaster devices showed good agreement and may be used interchangeably in patients with cataracts.
10. Chirped microlens arrays for diode laser circularization and beam expansion
Science.gov (United States)
Schreiber, Peter; Dannberg, Peter; Hoefer, Bernd; Beckert, Erik
2005-08-01
Single-mode diode lasers are well-established light sources for a huge number of applications but suffer from astigmatism, beam ellipticity and large manufacturing tolerances of beam parameters. To compensate for these shortcomings, various approaches like anamorphic prism pairs and cylindrical telescopes for circularization as well as variable beam expanders based on zoomed telescopes for precise adjustment of output beam parameters have been employed in the past. The presented new approach for both beam circularization and expansion is based on the use of microlens arrays with chirped focal length: Selection of lenslets of crossed cylindrical microlens arrays as part of an anamorphic telescope enables circularization, astigmatism correction and divergence tolerance compensation of diode lasers simultaneously. Another promising application of chirped spherical lens array telescopes is stepwise variable beam expansion for circular laser beams of fiber or solid-state lasers. In this article we describe design and manufacturing of beam shaping systems with chirped microlens arrays fabricated by polymer-on-glass replication of reflow lenses. A miniaturized diode laser module with beam circularization and astigmatism correction assembled on a structured ceramics motherboard and a modulated RGB laser-source for photofinishing applications equipped with both cylindrical and spherical chirped lens arrays demonstrate the feasibility of the proposed system design approach.
11. Prevalence of refractive errors and ocular disorders in preschool and school children of Ibiporã - PR, Brazil (1989 to 1996
Directory of Open Access Journals (Sweden)
Schimiti Rui Barroso
2001-01-01
Full Text Available Purpose: To establish the prevalence of refractive errors and ocular disorders in preschool and schoolchildren of Ibiporã, Brazil. Methods: A survey of 6 to 12-year-old children from public and private elementary schools was carried out in Ibiporã between 1989 and 1996. Visual acuity measurements were performed by trained teachers using Snellen's chart. Children with visual acuity <0.7 in at least one eye were referred for a complete ophthalmologic examination. Results: 35,936 visual acuity measurements were performed in 13,471 children. 1,966 children (14.59%) were referred for an ophthalmologic examination. Amblyopia was diagnosed in 237 children (1.76%), whereas strabismus was observed in 114 cases (0.84%). Cataract (n=17; 0.12%), chorioretinitis (n=38; 0.28%) and eyelid ptosis (n=6; 0.04%) were also diagnosed. Among the 614 (4.55%) children who were found to have refractive errors, 284 (46.25%) had hyperopia (hyperopia or hyperopic astigmatism), 206 (33.55%) had myopia (myopia or myopic astigmatism) and 124 (20.19%) showed mixed astigmatism. Conclusions: The study determined the local prevalence of amblyopia, refractive errors and eye disorders among preschool and schoolchildren.
12. First-order design of off-axis reflective ophthalmic adaptive optics systems using afocal telescopes
Science.gov (United States)
Gómez-Vieyra, Armando; Dubra, Alfredo; Williams, David R.; Malacara-Hernández, Daniel
2009-09-01
Scanning laser ophthalmoscopes (SLOs) and optical coherence tomographs are the state-of-the-art retinal imaging instruments, and are essential for early and reliable diagnosis of eye disease. Recently, with the incorporation of adaptive optics (AO), these instruments have started to deliver near diffraction-limited performance in both humans and animal models, enabling the resolution of the retinal ganglion cell bodies, their processes, and the cone photoreceptor and retinal pigment epithelial cell mosaics. Unfortunately, these novel instruments have not delivered consistent performance across human subjects and animal models. One of the limitations of current instruments is the astigmatism in the pupil and imaging planes, which degrades image quality by preventing the wavefront sensor from measuring aberrations with high spatial content. This astigmatism is introduced by the sequence of off-axis reflective elements, typically spherical mirrors, used for relaying pupil and imaging planes. Expressions for minimal astigmatism on the image and pupil planes in off-axis reflective afocal telescopes formed by pairs of spherical mirrors are presented. The formulas, derived from the marginal ray fans equation, are valid for small angles of incidence. An example related to this last application is discussed.
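For background (standard first-order optics, not a result of this paper): a spherical mirror of radius of curvature R used at off-axis incidence angle θ focuses the tangential and sagittal ray fans at different distances, and this difference is the astigmatism such afocal relay designs must balance:

```latex
f_t = \frac{R}{2}\cos\theta, \qquad
f_s = \frac{R}{2\cos\theta}, \qquad
f_s - f_t = \frac{R}{2}\,\frac{\sin^2\theta}{\cos\theta}
```

The astigmatic difference vanishes as θ → 0, which is why the small-angle-of-incidence condition in the abstract matters.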
13. Refractive errors in Cameroonians diagnosed with complete oculocutaneous albinism
Directory of Open Access Journals (Sweden)
Eballé AO
2013-07-01
Full Text Available André Omgbwa Eballé,1,3 Côme Ebana Mvogo,2 Christelle Noche,4 Marie Evodie Akono Zoua,2 Andin Viola Dohvoma2; 1Faculty of Medicine and Pharmaceutical Sciences, University of Douala, Douala, Cameroon; 2Faculty of Medicine and Biomedical Sciences, University of Yaoundé I, Yaoundé, Cameroon; 3Yaoundé Gynaeco-Obstetric and Paediatric Hospital, Yaoundé, Cameroon; 4Faculty of Medicine, Université des Montagnes, Bangangté, Cameroon. Background: Albinism causes significant eye morbidity and amblyopia in children. The aim of this study was to determine the refractive state in patients with complete oculocutaneous albinism who were treated at the Gynaeco-Obstetric and Paediatric Hospital, Yaoundé, Cameroon, and to evaluate its effect on vision. Methods: We carried out this retrospective study at the ophthalmology unit of our hospital. All oculocutaneous albino patients who were treated between March 1, 2003 and December 31, 2011 were included. Results: Thirty-five patients (70 eyes) diagnosed with complete oculocutaneous albinism were enrolled. Myopic astigmatism was the most common refractive error (40%). Compared with myopic patients, those with myopic astigmatism and hypermetropic astigmatism were four and ten times less likely, respectively, to demonstrate significant improvement in distance visual acuity following optical correction. Conclusion: Managing refractive errors is an important way to reduce eye morbidity-associated low vision in oculocutaneous albino patients. Keywords: albinism, visual acuity, refraction, Cameroon
14. Visual and Refractive Outcomes after Cataract Surgery with Implantation of a New Toric Intraocular Lens
Directory of Open Access Journals (Sweden)
Cinzia Mazzini
2013-06-01
Full Text Available Purpose: The aim of this study was to evaluate and report the visual, refractive and aberrometric outcomes of cataract surgery with implantation of the new aspheric Tecnis ZCT toric intraocular lens (IOL) in eyes with low to moderate corneal astigmatism. Methods: We conducted a prospective study of 19 consecutive eyes of 17 patients (mean age: 78 years) with a visually significant cataract and moderate corneal astigmatism [higher than 1 diopter (D)] undergoing cataract surgery with implantation of the aspheric Tecnis ZCT toric IOL (Abbott Medical Optics). Visual, refractive and aberrometric changes were evaluated during a 6-month follow-up. Ocular aberrations as well as IOL rotation were evaluated by means of the OPD-Station II (Nidek). Results: The six-month postoperative spherical equivalent and power vector components of the refractive cylinder were within ±0.50 D in all eyes (100%). Postoperative logMAR uncorrected and corrected distance visual acuities (UDVA/CDVA) were 0.1 (about 20/25) or better in almost all eyes (94.74%). The mean logMAR CDVA improved significantly from 0.41 ± 0.23 to 0.02 ± 0.05 (p Conclusion: Cataract surgery with implantation of the aspheric Tecnis ZCT IOL is a predictable and effective procedure for visual rehabilitation in eyes with cataract and low to moderate corneal astigmatism, providing an excellent postoperative ocular optical quality.
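A note on the acuity notation used in these abstracts: logMAR and Snellen values are related by logMAR = log10(Snellen denominator / 20) in 20-foot notation (a standard conversion, not taken from the paper), which is why 0.1 logMAR corresponds to roughly 20/25. The function name below is illustrative:

```python
import math

def snellen_to_logmar(denominator: float, numerator: float = 20.0) -> float:
    """Convert a Snellen fraction (e.g. 20/25) to logMAR."""
    return math.log10(denominator / numerator)

print(round(snellen_to_logmar(25), 2))  # 0.1, i.e. about 20/25
```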
15. A STUDY ON PREVALENCE OF REFRACTIVE ERRORS AMONG 5-16 YEARS RURAL CHILDREN IN CHANDRAGIRI, CHITTOOR DISTRICT, ANDHRA PRADESH
Directory of Open Access Journals (Sweden)
Krishna Murthy
2014-09-01
Full Text Available BACKGROUND: Uncorrected refractive errors are a major cause of low vision and even blindness. Refractive errors can be easily diagnosed and corrected by effective screening programmes. Uncorrected refractive errors in children have a definite adverse impact on learning capacity and scholastic performance. MATERIAL & METHODS: This cross-sectional study was conducted from July to December 2013 among 1412 children aged 5-16 years residing in the Chandragiri rural area, Chittoor District, Andhra Pradesh. Visual acuity was assessed using Snellen's chart under standard illumination, while detailed eye examination of suspected cases was done by an ophthalmologist, including detailed anterior segment evaluation, ocular motility, retinoscopy and autorefraction under 2% homatropine cycloplegic refraction. RESULTS: The prevalence of refractive errors was found to be 7.4% among the study children (of which 6.1% were undiagnosed). Simple myopia was found in 2.4% of children, while astigmatism (both simple and compound combined) was found in around 2.7% of children. The proportion of myopia increased with age, being lowest in the 5-7 years group (0.0%) and highest in the 14-16 years group (4.0%). The proportion of astigmatism was also lowest in the 5-7 years group (0.0%) and higher in the 11-13 and 14-16 years groups (4.0% each). However, the differences were not statistically significant (P=0.32; NS). The prevalence of myopia was slightly higher in males (2.7%) than in females (2.1%), while that of astigmatism was higher in females (3.1%) than in males (2.3%). However, the differences were not statistically significant (P=0.43; NS). A similar prevalence of refractive errors was found in Bangalore and New Delhi, while lower and higher prevalences were reported elsewhere. Myopia and astigmatism are the common disorders in several Indian studies including the present study, while African studies found myopia to be less common
16. Small Incision Cataract Surgery (SICS) with Clear Corneal Incision and SICS with Scleral Incision – A Comparative Study
Directory of Open Access Journals (Sweden)
Md Shafiqul Alam
2014-01-01
Full Text Available Background: Age-related cataract is the leading cause of blindness and visual impairment throughout the world. With the advent of microsurgical facilities, simple cataract extraction surgery has been replaced by small incision cataract surgery (SICS) with posterior chamber intraocular lens implant, which can be done either with a clear corneal incision or a scleral incision. Objective: To compare the postoperative visual outcome in these two procedures of cataract surgery. Materials and method: This comparative study was carried out in the department of Ophthalmology, Delta Medical College & Hospital, Dhaka, Bangladesh, during the period of January 2010 to December 2012. A total of 60 subjects indicated for age-related cataract surgery, irrespective of sex, with an age range of 40-80 years and predefined inclusion and exclusion criteria, were enrolled in the study. Subjects were randomly and equally distributed in 2 groups: Group A for SICS with clear corneal incision and Group B for SICS with scleral incision. Postoperative visual outcome was evaluated by determining visual acuity and astigmatism on different occasions and was compared between groups. Statistical analysis was done with SPSS for Windows version 12. Results: The highest age incidence (43.3%) was found between 61 to 70 years of age. Among study subjects, 40 were male and 20 were female. Preoperative visual acuity and astigmatism were evenly distributed between groups. Regarding postoperative unaided visual outcome, 6/12 or better visual acuity was found in 19.98% of cases in Group A and 39.6% of cases in Group B at the 1st week. At the 6th week, 6/6 vision was found in 36.3% in Group A and 56.1% in Group B without correction, and in 46.2% in Group A and 66% in Group B with correction. With refractive correction, 6/6 vision was attained in 60% of subjects in Group A and 86.67% of Group B at the 8th week. Postoperative visual acuity was statistically significant on all occasions. Postoperative astigmatism of
17. Refractive status of primary school children in Mopani district, Limpopo Province, South Africa
Directory of Open Access Journals (Sweden)
R.G. Mabaso
2006-01-01
Full Text Available This article reports part of the findings of a study carried out to determine the causes, prevalence, and distribution of ocular disorders among rural primary school children in the Mopani district of Limpopo Province, South Africa. Three hundred and eighty-eight children aged 8 to 15 years were randomly selected from five randomly selected schools. Non-cycloplegic retinoscopy and auto-refraction were performed on each child. The prevalence of hyperopia, myopia, and astigmatism was 73.1%, 2.5% and 31.3% respectively. Hyperopia (nearest spherical equivalent power, FNSE) ranged from +0.75 to +3.50 D for the right and left eyes with means of +1.05 ± 0.35 D and +1.08 ± 0.34 D respectively. Myopia (FNSE) ranged from –0.50 to –1.75 D for the right eye and –0.50 to –2.25 D for the left eye with means of –0.75 ± 0.55 D and –0.93 ± 0.55 D respectively. A regression model for myopia shows that age had an odds ratio of 1.94 (1.15 to 3.26), indicating a significantly increased risk of myopia with increasing age. Correcting cylinders for the right eyes ranged from –0.25 to –4.50 D (mean = −0.67 ± 0.47 D) and for the left eyes from –0.25 to –2.50 D (mean = −0.60 ± 0.30 D). With-the-rule (WTR) astigmatism (66.5%) was most common, followed by against-the-rule (ATR) astigmatism (28.1%) and oblique (OBL) astigmatism (5.4%). With-the-rule astigmatism was more common in females than males; ATR astigmatism and OBL astigmatism were more common in males. Regular vision screening programmes, appropriate referral and vision correction in primary schools in the Mopani district are recommended in order to eliminate refractive errors among the children.
18. Design of modified Czerny-Turner spectral imaging system with wide spectral region
Institute of Scientific and Technical Information of China (English)
薛庆生; 陈伟
2012-01-01
A modified Czerny-Turner spectral imaging system was developed based on aberration theory to minimize the large astigmatism of classical Czerny-Turner spectrometers. The astigmatism from a plane grating placed in the divergent light beam is used to compensate the astigmatism from an objective lens. Conditions for simultaneous broadband astigmatism correction were deduced, and the astigmatism was corrected over a wide spectral region. The principle and method of astigmatism correction were analyzed in detail, and a program for computing the initial parameters was written. As an example, a Czerny-Turner imaging spectral system operating in 540-780 nm was designed. Ray tracing and optimization of the spectral imaging system were performed with ZEMAX-EE software. The results demonstrate that the full-field modulation transfer function is higher than 0.52 over the whole working spectrum. The system shows good imaging quality because the astigmatism is corrected over the wide spectral region simultaneously, and the results prove the feasibility of the modified method.
19. Single-segment and double-segment INTACS for post-LASIK ectasia.
Directory of Open Access Journals (Sweden)
Hassan Hashemi
2014-09-01
Full Text Available The objective of the present study was to compare single-segment and double-segment INTACS rings in the treatment of post-LASIK ectasia. In this interventional study, 26 eyes with post-LASIK ectasia were assessed. Ectasia was defined as progressive myopia regardless of astigmatism, along with topographic evidence of inferior steepening of the cornea after LASIK. We excluded those with a history of intraocular surgery, certain eye conditions, and immune disorders, as well as monocular, pregnant and lactating patients. A total of 11 eyes had double ring and 15 eyes had single ring implantation. Visual and refractive outcomes were compared with preoperative values based on the number of implanted INTACS rings. Pre- and postoperative spherical equivalents were -3.92 and -2.29 diopter (P=0.007). The spherical equivalent decreased by 1 ± 3.2 diopter in the single-segment group and 2.56 ± 1.58 diopter in the double-segment group (P=0.165). Mean preoperative astigmatism was 2.38 ± 1.93 diopter, which decreased to 2.14 ± 1.1 diopter after surgery (P=0.508); there was a 0.87 ± 1.98 diopter decrease in the single-segment group and a 0.67 ± 1.2 diopter increase in the double-segment group (P=0.025). Nineteen patients (75%) gained one or two lines, and only three, who were all in the double-segment group, lost one or two lines of best corrected visual acuity. The spherical equivalent and vision significantly decreased in all patients. In these post-LASIK ectasia patients, the spherical equivalent was corrected better with two segments compared to single-segment implantation; nonetheless, the level of astigmatism in the single-segment group was significantly better than that in the double-segment group.
20. Ring-field TMA for PRISMA: theory, optical design, and performance measurements
Science.gov (United States)
Calamai, Luciano; Barsotti, Stefano; Fossati, Enrico; Formaro, Roberto; Thompson, Kevin P.
2015-09-01
PRISMA (PRecursore IperSpettrale della Missione Applicativa) Hyperspectral Payload is an Electro-Optical instrument developed in Selex ES for the dedicated ASI (Italian Space Agency) mission for Earth observation. The performance requirements for this mission are stringent and have led to an instrument design that is based on a Ring-Field Three Mirror Anastigmat (Ring-Field TMA), a two channel prism dispersion based spectrometer (VNIR and SWIR), and a Panchromatic Camera. The Ring-Field TMA contains three mirrors (two conics and one conic with some higher order correction). Exceptional performance has been achieved by not only introducing 3rd order astigmatism to balance the 5th astigmatism at the ring field zone as is traditional in an Offner-type design but, additionally, 3rd order coma has been controlled to align the balance of the linear and field cubic coma terms at the same ring field zone. The predicted wavefront performance of the design over the field of view will be highlighted. An assembly and alignment procedure for the Ring-Field TMA has been developed from the results of the sensitivity and tolerances analysis. The tilt and decenter sensitivity of the design form is nearly exclusively determined by 3rd order binodal astigmatism. The nodal position is linear with perturbation, which greatly simplifies the decisions on alignment compensators. The manufactured mirrors of the Ring-Field TMA have been aligned at Selex ES and as will be reported the preliminary results in terms of optical quality are in good agreement with the predicted as-built performance, both on-axis and in the field.
15. The effect of treatment on amblyopic children with different refractive states
Institute of Scientific and Technical Information of China (English)
孙鹏飞; 肖瑛; 陈国玲; 侯静; 马广凤
2014-01-01
Objective: To evaluate the efficacy of treatment of amblyopic children with different refractive states. Methods: A total of 86 amblyopic children (150 eyes) were treated: 77 eyes with hyperopic amblyopia, 38 eyes with myopic amblyopia and 35 eyes with mixed-astigmatism amblyopia; the outcomes of the three groups were compared. Results: 68 eyes (88.3%) with hyperopic amblyopia were cured, 24 eyes (63.2%) with myopic amblyopia were cured, and 11 eyes (31.5%) with mixed-astigmatism amblyopia were cured. The cure rates differed significantly among the three groups, P<0.05. Conclusions: Treatment of hyperopic amblyopia is markedly more effective than treatment of myopic and mixed-astigmatism amblyopia.
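The three-group comparison in this abstract can be checked from the reported counts alone. A minimal sketch, assuming a standard Pearson chi-square test on the 3×2 cured/not-cured table (the abstract does not state which test the authors used):

```python
# Pearson chi-square test on the cured / not-cured counts reported above.
# Rows: hyperopic, myopic, mixed-astigmatism amblyopia; columns: cured, not cured.
table = [[68, 77 - 68], [24, 38 - 24], [11, 35 - 11]]

row_totals = [sum(r) for r in table]
col_totals = [sum(c) for c in zip(*table)]
n = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (observed - expected) ** 2 / expected

# Critical value of chi-square with df = (3-1)*(2-1) = 2 at alpha = 0.05 is 5.991,
# so chi2 above this threshold supports the reported P < 0.05.
print(f"chi2 = {chi2:.1f}, significant at 0.05: {chi2 > 5.991}")
```

The statistic is far above the critical value, consistent with the large spread in cure rates (88.3% vs. 63.2% vs. 31.5%).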
2. Congenital glaucoma in siblings
OpenAIRE
Cont, M; Fontijn, J; Michels, R; Arlettaz, R.
2011-01-01
We report on a full term female neonate delivered by cesarean section at 37 2/7 weeks of gestation. The mother, a 28-year-old G2/P2, had a history of autoimmune thrombocytopenia and diabetes mellitus type I. She also has astigmatism and the father is myopic. There is no history of consanguinity. The first child, a son, was born at term and discharged home considered to be healthy. However, the parents noticed corneal clouding, and brought their baby to the pediatrician who confirmed cornea...
3. General analysis of aplanatic cassegrain, gregorian, and schwarzschild telescopes.
Science.gov (United States)
Wetherell, W B; Rimmer, M P
1972-12-01
The properties of two-conic reflecting aplanats are analyzed and discussed on the basis of third order aberration theory. Techniques for designing infinite conjugate two mirror aplanats and computing their image properties are developed. The secondary mirror alignment characteristics of Ritchey-Chrétien and aplanatic Gregorian telescopes are examined and neutral point locations defined. Design configurations corrected for a third Seidel aberration (astigmatism, image curvature, or distortion) are identified and their properties discussed. The properties of Ritchey-Chrétien and aplanatic Gregorian telescopes are compared.
4. ENGLISH ABSTRACTS OF CURRENT CHINESE MEDICAL LITERATURE
Institute of Scientific and Technical Information of China (English)
2001-01-01
Excimer laser in situ keratomileusis for severe ametropia after penetrating keratoplasty LU Wenxiu, LI Zhihui, QI Ying, et al. Department of Ophthalmology, Beijing Tongren Hospital, Beijing 100730, China. Chin J Ophthalmol 2001; 37: 94-97. Objective To evaluate the effects of excimer laser in situ keratomileusis in correcting severe myopia and astigmatism after penetrating keratoplasty. Methods Excimer laser in situ keratomileusis was performed on ten eyes of ten patients to correct high ametropia in cases having previously undergone penetrating
5. Optical modeling and physical performances evaluations for the JT-60SA ECRF antenna
Energy Technology Data Exchange (ETDEWEB)
Platania, P., E-mail: platania@ifp.cnr.it; Figini, L.; Farina, D.; Micheletti, D.; Moro, A.; Sozzi, C. [Istituto di Fisica del Plasma “P. Caldirola”, Consiglio Nazionale delle Ricerche, Via R. Cozzi 53, 20125, Milano (Italy); Isayama, A.; Kobayashi, T.; Moriyama, S. [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193 (Japan)
2015-12-10
The purpose of this work is the optical modeling and physical performance evaluation of the JT-60SA ECRF launcher system. The beams have been simulated with the electromagnetic code GRASP® and used as input for ECCD calculations performed with the beam-tracing code GRAY, which is capable of modeling propagation, absorption and current drive of an EC Gaussian beam with general astigmatism. Full details of the optical analysis have been taken into account to model the launched beams. Inductive and advanced reference scenarios have been analysed for the physical evaluations over the full poloidal and toroidal steering ranges for two slightly different layouts of the launcher system.
6. Inflammatory Biomarkers Profile as Microenvironmental Expression in Keratoconus
Science.gov (United States)
Jonescu-Cuypers, Christian; Nicula, Cristina; Voinea, Liliana-Mary
2016-01-01
Keratoconus is a degenerative disorder with progressive stromal thinning and transformation of the normal corneal architecture towards ectasia that results in decreased vision due to irregular astigmatism and irreversible tissue scarring. The pathogenesis of keratoconus still remains unclear. Hypotheses that this condition has an inflammatory etiopathogenetic component apart from the genetic and environmental factors are beginning to escalate in the research domain. This paper covers the most relevant and recent published papers regarding the biomarkers of inflammation, their signaling pathway, and the potentially new therapeutic options in keratoconus. PMID:27563164
7. [Wave front aberrations -- practical conclusions in eye with Restor 3+ difractive multifocal lens].
Science.gov (United States)
Staicu, Corina; Moraru, Ozana; Moraru, Cristian
2014-01-01
Implantation of multifocal intraocular lenses has become routine nowadays, but achieving good visual results requires a perfect intraoperative technique as well as adequate preoperative selection of patients. We analysed the wavefront aberrations (spherical aberration, coma and astigmatism) in eyes implanted with the ReStor +3 IOL, and made clinical correlations of these aberrations with the pupil diameter under scotopic and photopic conditions, the kappa angle, IOL centration, and residual postoperative refractive errors. Taking into account the causes of postoperative higher-order aberrations will allow the surgeon to make a good selection of patients and achieve a higher degree of satisfaction on both sides.
8. Erros de refração como causas de baixa visual em crianças da rede de escolas públicas da regional de Botucatu - SP Refractive errors as causes of visual impairment in children from public schools of the Botucatu region - SP
Directory of Open Access Journals (Sweden)
Claudia Akemi Shiratori de Oliveira
2009-04-01
9. Wavefront aberration statistics in normal eye populations: are they well described by the Kolmogorov model?
Science.gov (United States)
2014-06-01
This Letter studies the statistics of wavefront aberrations in a sample of eyes with normal vision. Methods relying on the statistics of the measured wavefront slopes are used, not including the aberration estimation stage. Power-law aberration models, an extension of the Kolmogorov one, are rejected by χ2-tests performed on fits to the slope structure function data. This is due to the large weight of defocus and astigmatism variations in normal eyes. Models of only second-order changes are not ruled out. The results are compared with previous works in the area.
10. Screening for significant refractive error using a combination of distance visual acuity and near visual acuity.
Directory of Open Access Journals (Sweden)
Peiyao Jin
Full Text Available To explore the effectiveness of using a series of tests combining near visual acuity (NVA) and distance visual acuity (DVA) for large-scale screening for significant refractive error (SRE) in primary school children. Each participant underwent DVA, NVA and cycloplegic autorefraction measurements. SREs, including high myopia, high hyperopia and high astigmatism, were analyzed. Cycloplegic refraction results were considered the gold standard for the comparison of different screening measurements. Receiver-operating characteristic (ROC) curves were constructed to compare the area under the curve (AUC) and the Youden index among DVA, NVA and the series combined tests of DVA and NVA. The efficacies (including sensitivity, specificity, positive predictive value, and negative predictive value) of each test were evaluated. Only the right-eye data of each participant were analysed, for statistical purposes. A total of 4416 children aged 6 to 12 years completed the study, among which 486 students had right-eye SRE (SRE prevalence rate = 11.01%). There was no difference in the prevalence of high hyperopia and high astigmatism among different age groups. However, the prevalence of high myopia significantly increased with age (χ² = 381.81, p<0.01). High hyperopia was the SRE most strongly associated with amblyopia (p<0.01, OR = 167.40, 95% CI: 75.14-372.94). The DVA test was better than the NVA test for detecting high myopia (Z = 2.71, p<0.01), but the NVA test was better for detecting high hyperopia (Z = 2.35, p = 0.02) and high astigmatism (Z = 4.45, p<0.01). The series combined DVA and NVA test had the biggest AUC and the highest Youden index for detecting high hyperopia, myopia, astigmatism, as well as all of the SREs (all p<0.01). The series combined DVA and NVA test was more accurate for detecting SREs than either of the two tests alone. This new method could be applied to large-scale SRE screening of children, aged 6 to 12, in areas that are less
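The Youden index used to compare the screening tests in this abstract follows directly from the confusion-matrix definitions. A minimal sketch with illustrative counts (the abstract does not report the raw confusion matrices, so the numbers below are made up):

```python
# Youden index J = sensitivity + specificity - 1 for a screening test.
# The counts below are illustrative only, not the study's data.
def youden_index(tp: int, fn: int, tn: int, fp: int) -> float:
    sensitivity = tp / (tp + fn)   # true-positive rate among children with SRE
    specificity = tn / (tn + fp)   # true-negative rate among children without SRE
    return sensitivity + specificity - 1.0

j = youden_index(tp=420, fn=66, tn=3700, fp=230)
print(f"J = {j:.3f}")
```

A J of 0 means the test is no better than chance; a J of 1 means perfect discrimination, so the test with the highest J (here, the series combined DVA and NVA test) offers the best screening cut-off.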
11. Double-phase-conjugate mirror in CdTe:V with elimination of conical diffraction at 1.54 microm.
Science.gov (United States)
Martel, G; Wolffer, N; Moisan, J Y; Gravey, P
1995-04-15
We have fabricated a double-phase-conjugate mirror (DPCM) in a single crystal of vanadium-doped cadmium telluride. Because of the high gain in the near-infrared region, a DPCM is possible at a telecommunication wavelength of 1.54 microm in this material. Experimental and theoretical thresholds for the DPCM are compared, and an experimental diffraction efficiency of 7.4% is reported. Conical diffraction has been eliminated by the method of cylindrical lenses. We propose to use this astigmatic configuration to enhance the capacity of interconnections between fibers with a single crystal in a vector-matrix architecture. PMID:19859380
12. New Gapless COS G140L Mode Proposed for Background-Limited Far-UV Observations
CERN Document Server
Redwine, Keith; Fleming, Brian; France, Kevin; Zheng, Wei; Osterman, Steven; Howk, J Christopher; Anderson, Scott F; Gaensicke, Boris T
2016-01-01
Here we describe the observation and calibration procedure for a new G140L observing mode for the Cosmic Origins Spectrograph (COS) aboard the Hubble Space Telescope (HST). This mode, CENWAV = 800, is designed to move the far-UV band fully onto the Segment A detector, allowing for more efficient observation and analysis by simplifying calibration management between the two channels, and reducing the astigmatism in this wavelength region. We also describe some of the areas of scientific interest for which this new mode will be especially suited.
13. Using aberration test patterns to optimize the performance of EUV aerial imaging microscopes
Energy Technology Data Exchange (ETDEWEB)
Mochi, Iacopo; Goldberg, Kenneth A.; Miyakawa, Ryan; Naulleau, Patrick; Han, Hak-Seung; Huh, Sungmin
2009-06-16
The SEMATECH Berkeley Actinic Inspection Tool (AIT) is a prototype EUV-wavelength zoneplate microscope that provides high quality aerial image measurements of EUV reticles. To simplify and improve the alignment procedure we have created and tested arrays of aberration-sensitive patterns on EUV reticles and we have compared their images collected with the AIT to the expected shapes obtained by simulating the theoretical wavefront of the system. We obtained a consistent measure of coma and astigmatism in the center of the field of view using two different patterns, revealing a misalignment condition in the optics.
14. Adaptive compensation of lower order thermal aberrations in concave-convex power oscillators under variable pump conditions
Science.gov (United States)
Jackel, Steven M.; Moshe, Inon
2000-09-01
A Nd:Cr:GSGG concave-convex power oscillator was developed that utilized both an adaptive mirror comprised of spherical and cylindrical optical elements together with a Faraday rotator to dynamically eliminate lower order aberrations (thermal focusing, astigmatism, and bipolar focusing). An adaptively controlled collimating lens corrected for shifts in the mode-waist position. The addition of a polarizer and a reentrant mirror totally eliminated thermal birefringence losses. The techniques developed are attractive in any solid state laser that must work under changing pump power conditions.
15. GANAS: A HYBRID ANASTIGMATIC ASPHERICAL PRIME-FOCUS CORRECTOR
Directory of Open Access Journals (Sweden)
F. Della Prugna
2009-01-01
Full Text Available The Cassegrain-Coudé 1 meter Zeiss telescope at the Venezuelan National Astronomical Observatory uses six optical elements. Removal of the secondary convex mirror gives access to the focal plane of the primary f/5 spheroidal mirror, but spherical aberration, coma, astigmatism and field curvature severely hamper its imaging capabilities. In order to carry out prime-focus imaging, we designed and manufactured a corrector group, called GAnAs, to minimize these aberrations over a circular field of 300. The corrector group is a hybrid configuration with two thin aspherical 4th-order plates and a meniscus lens.
16. Orbital angular momentum of the laser beam and the second order intensity moments
Institute of Scientific and Technical Information of China (English)
高春清; 魏光辉; Horst Weber
2000-01-01
From the wave equation of a generalized beam the orbital angular momentum is studied. It is shown that the orbital angular momentum exists not only in the Laguerre-Gaussian beam, but in any beam with an angular-dependent structure. By calculating the second order intensity moments of the beam the relation between the orbital angular momentum and the second order moments 〈xθy〉, 〈yθx〉 is given. As an example the orbital angular momentum of the general astigmatic Gaussian beam is studied.
18. Optical Aberrations and Wavefront
Directory of Open Access Journals (Sweden)
Nihat Polat
2014-08-01
Full Text Available The deviation of light from forming a normal retinal image in the optical system is called aberration. Aberrations are divided into two subgroups: low-order aberrations (defocus: spherical and cylindrical refractive errors) and high-order aberrations (coma, spherical, trefoil, tetrafoil, quadrifoil, pentafoil, secondary astigmatism). Aberrations increase with aging. Spherical aberrations are compensated by positive corneal and negative lenticular spherical aberrations in youth. Total aberrations are elevated by positive corneal and positive lenticular spherical aberrations in the elderly. In this study, we aimed to analyze the basic terms regarding optical aberrations, which have gained significance recently. (Turk J Ophthalmol 2014; 44: 306-11)
20. A first-order treatment of aberrations in Cassegrainian and Gregorian antennas
Science.gov (United States)
Dragone, C.
1982-05-01
The decrease in aperture efficiency caused by small aberrations in a reflector antenna is determined. The important case of a Cassegrainian (or Gregorian) antenna with a feed placed in the vicinity of the focal point is treated in detail. For this case the various aberration components due to astigmatism, coma, etc., are derived explicitly, their effect on aperture efficiency is shown, and the conditions that optimize performance are given. The results are useful for the design of multibeam antennas in ground stations and satellites.
1. A high performance laser diode transmitter for optical free space communication
Science.gov (United States)
Hildebrand, U.; Ohm, G.; Wiesmann, Th.; Hildebrand, K.; Voit, E.
1990-07-01
For the ESA Semiconductor Intersatellite Link Experiment (SILEX), elements of the communication chain have been breadboarded. The electrooptical converter, called the laser diode transmitter package (LDTP), is described here. The requirements on the LDTP optical quality are deduced from the overall system requirements. The tolerable wavefront errors (WFE) and the stability of beam direction are most critical. Four breadboards have been assembled and tested. The very stringent requirements on WFE were surpassed, with a resulting rms value of 1/40 waves. In order to achieve this wavefront quality, the typical astigmatism of index-guided laser diodes (1-10 microns) had to be compensated by adjustable cylindrical lenses.
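The quoted rms wavefront error of 1/40 wave can be related to image quality through the Maréchal approximation, Strehl ≈ exp(−(2πσ)²) with σ the rms WFE in waves. A quick check of the approximation itself (this is a standard optics rule of thumb, not part of the SILEX analysis):

```python
import math

def marechal_strehl(rms_waves: float) -> float:
    """Marechal approximation of the Strehl ratio from rms wavefront error (in waves)."""
    return math.exp(-((2.0 * math.pi * rms_waves) ** 2))

# 1/40 wave rms, as reported for the LDTP breadboards.
s = marechal_strehl(1.0 / 40.0)
print(f"Strehl = {s:.3f}")  # well above the 0.8 diffraction-limited criterion
```

A Strehl ratio near 0.98 confirms why a 1/40-wave transmitter is considered essentially diffraction limited.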
2. Investigation of communication laser diodes for the SILEX project
Science.gov (United States)
Menke, Bodo; Loeffler, Roland
1989-10-01
The Semiconductor Intersatellite Laser Experiment (SILEX) will construct an optical communications link over a range of 45,000 km, using 0.8-micron AlGaAs laser diodes capable of transmitting 65 Mbit/s. Numerous single-stripe diode types were furnished by manufacturers and subjected to measurements to establish conformity with the requirements on far-field pattern, spectral spread under QPPM modulation, mode hopping, astigmatism, and rms wavefront error (WFE); the WFE is demonstrated to be strongly affected by the strong spherical aberration introduced by the laser window. Three laser types have been chosen for breadboarding and accelerated life tests.
3. Selection strategy and reliability assessment for SILEX-communication laser diodes
Science.gov (United States)
Loeffler, Roland; Menke, Bodo
1991-03-01
Steps involved in the search for suitable laser diodes for the Semiconductor laser Intersatellite Link EXperiment (SILEX) and evaluation of their capabilities in meeting qualification requirements are discussed. A baseline of the laser diode functional specifications is identified by synthesizing the SILEX system requirements and thereby predicting the desired diode characteristics. Samples of approximately 20 different laser diode types were submitted to comprehensive measurements of their characteristics: spectral widths, mode-hopping behavior, far-field patterns, wavefront errors and astigmatism under modulation. An evaluation program consisting of a conventional three-temperature aging test together with sensitivity and environmental tests is defined.
4. Transferences of Purkinje systems
Directory of Open Access Journals (Sweden)
W. F. Harris
2011-12-01
Full Text Available The transferences of heterocentric astigmatic Purkinje systems are special: submatrices B and C, that is, the disjugacy and the divergence of the system, are symmetric, and submatrix D (the divarication) is the transpose of submatrix A (the dilation). It is the primary purpose of this paper to provide a proof. The paper also derives other relationships among the fundamental properties and compact expressions for the transference and optical axis locator of a Purkinje system. (S Afr Optom 2011; 70(2): 57-60)
5. Optical Design of Spaceborne Broadband Limb Sounder for Detecting Atmospheric Trace Gas
Institute of Scientific and Technical Information of China (English)
薛庆生
2012-01-01
In order to meet the urgent requirements of detecting atmospheric trace gases in limb observation geometry, an optical system for a spaceborne broadband limb sounder is designed. The system is an imaging spectrometer with a working wavelength band from 0.3 μm to 0.7 μm, a full field of view of 2.4°, a focal length of 120 mm, and a relative aperture of 1:6. To avoid the problems of the classical Czerny-Turner spectrometer, such as low spatial resolution caused by large astigmatism, a modified Czerny-Turner spectrometer is designed in which astigmatism can be corrected simultaneously over a wide band. By matching the modified Czerny-Turner spectrometer with an off-axis parabolic telescope, an example limb-sounder optical system is designed. Ray tracing, optimization and analysis are performed with ZEMAX software. The results demonstrate that the astigmatism is substantially corrected, and the MTF for every spectral band is more than 0.69, which satisfies the design requirements and proves the feasibility of the astigmatism-correction method.
6. Intraocular correction of high-degree ametropia using individual multifocal LentisMPlus IOL
Directory of Open Access Journals (Sweden)
I.S. Fedorova
2013-03-01
Full Text Available ABSTRACT Background. For surgical correction of high-degree ametropia aggravated with astigmatism the following options are available: excimer laser correction; phakic lens implantation; bioptics - a combination of ablating the transparent crystalline lens (ATL) with implantation of a multifocal toric IOL of a standard series and LASIK for the correction of a residual refractive error; ATL using two IOLs according to the «Piggy Back» technology; additional meniscus IOL implantation «Add-On»; ATL with implantation of an individual multifocal toric IOL. Purpose. To show the possibility of intraocular correction of high ametropia aggravated with astigmatism using custom toric multifocal IOLs. Material and methods. We observed two patients: the first, a 39-year-old woman with a diagnosis of OU: high myopia, compound myopic astigmatism, initial complicated cataract, moderate amblyopia, peripheral chorioretinal degeneration (PCRD). On admission the distance visual acuity was vis OD = 0.01 sph (-15.5 D) cyl (-2.5 D) ax 0° = 0.6; vis OS = 0.01 sph (-18.0 D) cyl (-2.5 D) ax 0° = 0.5. The second patient was a 35-year-old woman with a diagnosis of OU: high hyperopia, compound hyperopic astigmatism, moderate amblyopia. Distance visual acuity on admission was OD = 0.03 sph (+8.0 D) cyl (+1.5 D) ax 95° = 0.5; OS = 0.03 sph (+8.0 D) cyl (+0.75 D) ax 75° = 0.6. Individual multifocal toric IOLs were implanted in both patients after phacoemulsification of the lens. All standard ophthalmic examinations were used, as well as ultrasound biomicroscopy (UBM). Results. In the follow-up, 6 months after surgery the first patient had an uncorrected visual acuity (UCVA) at distance of vis OD = 0.6, vis OS = 0.5; at near, vis OD = 0.4, vis OS = 0.5; at middle distance, vis OD = 0.4, vis OS = 0.4. The second patient, 3 months after surgery, had a UCVA at distance of vis OD = 0.5, vis OS = 0.6; at near, vis OD = 0.4, vis OS = 0.5; at middle distance, vis OD = 0.2, vis OS = 0.3. The maximum possible distance visual acuity
7. Fibrinous anterior uveitis following laser in situ keratomileusis
Directory of Open Access Journals (Sweden)
Parmar Pragya
2009-01-01
Full Text Available A 29-year-old woman who underwent laser in situ keratomileusis (LASIK for myopic astigmatism in both eyes presented with severe pain, photophobia and decreased visual acuity in the left eye eight days after surgery. Examination revealed severe anterior uveitis with fibrinous exudates in the anterior chamber, flap edema and epithelial bullae. Laboratory investigations for uveitis were negative and the patient required systemic and intensive topical steroids with cycloplegics to control the inflammation. This case demonstrates that severe anterior uveitis may develop after LASIK and needs prompt and vigorous management for resolution.
8. Management of advanced corneal ectasias.
Science.gov (United States)
Maharana, Prafulla K; Dubey, Aditi; Jhanji, Vishal; Sharma, Namrata; Das, Sujata; Vajpayee, Rasik B
2016-01-01
Corneal ectasias include a group of disorders characterised by progressive thinning, bulging and distortion of the cornea. Keratoconus is the most common disease in this group. Other manifestations include pellucid marginal degeneration, Terrien's marginal degeneration, keratoglobus and ectasias following surgery. Advanced ectasias usually present with loss of vision due to high irregular astigmatism. Management of these disorders is difficult due to the peripheral location of ectasia and associated severe corneal thinning. Newer contact lenses such as scleral lenses are helpful in a selected group of patients. A majority of these cases requires surgical intervention. This review provides an update on the current treatment modalities available for management of advanced corneal ectasias. PMID:26294106
9. Measuring the Orbital Angular Momentum of Electron Beams
CERN Document Server
Guzzinati, Giulio; Béché, Armand; Verbeeck, Jo
2014-01-01
The recent demonstration of electron vortex beams has opened up the new possibility of studying orbital angular momentum (OAM) in the interaction between electron beams and matter. To this aim, methods to analyze the OAM of an electron beam are fundamentally important and a necessary next step. We demonstrate the measurement of electron beam OAM through a variety of techniques. The use of forked holographic masks, diffraction from geometric apertures, diffraction from a knife-edge and the application of an astigmatic lens are all experimentally demonstrated. The viability and limitations of each are discussed with supporting numerical simulations.
10. Generalised Hermite-Gaussian beams and mode transformations
CERN Document Server
Wang, Yi; Zhang, Yanfeng; Chen, Hui; Yu, Siyuan
2016-01-01
Generalised Hermite-Gaussian modes (gHG modes), an extended notion of Hermite-Gaussian modes (HG modes), are formed by the summation of normal HG modes with a characteristic function α, which can be used to unite conventional HG modes and Laguerre-Gaussian modes (LG modes). An infinite number of normalised orthogonal modes can thus be obtained by modulation of the function α. The gHG mode notion provides a useful tool in analysis of the deformation and transformation phenomena occurring in propagation of HG and LG modes with astigmatic perturbation.
11. White light interferometry in amblyopic children--a pilot study.
Science.gov (United States)
Vernon, S A; Hardman-Lea, S; Rubinstein, M P; Snead, M P
1990-01-01
Interferometric acuity using the IRAS white light interferometer was compared with Snellen acuity in nine amblyopic children between the ages of five and nine years, and nine age-matched controls. All of the amblyopic eyes achieved better grating acuities than Snellen acuities. Fifty-seven per cent of the amblyopes with a best corrected Snellen acuity of 6/18 or less in their amblyopic eye achieved grating acuities indistinguishable from normal. The hand-held white light interferometer may have a role in the assessment of meridional amblyopia and in children with high astigmatic errors.
12. Development of a universal toric intraocular lens calculator
Science.gov (United States)
2014-02-01
We present a method for calculating the ideal toric lens to implant in astigmatic patients following cataract surgery. We show that the online calculators provided by major toric IOL manufacturers are insufficient for both theoretical and practical reasons. We reveal important theoretical shortcomings in their approach, illustrated by a number of cases which demonstrate how the approach can lead to errors in lens selection. Our approach combines the spherical and cylindrical power calculations into one, and allows for lens data from any manufacturer to be used, eliminating the reliance on multiple programs.
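Combining spherical and cylindrical power calculations into one, as this abstract advocates, is naturally expressed with power vectors: a sphere/cylinder/axis prescription maps to (M, J0, J45) components that add linearly. A minimal sketch using the standard Thibos power-vector convention (this is the textbook decomposition, not the authors' actual calculator):

```python
import math

def to_power_vector(sphere, cyl, axis_deg):
    """Convert a sphere/cylinder/axis prescription to (M, J0, J45)."""
    theta = math.radians(axis_deg)
    M = sphere + cyl / 2.0                  # spherical equivalent
    J0 = -(cyl / 2.0) * math.cos(2 * theta)  # with/against-the-rule component
    J45 = -(cyl / 2.0) * math.sin(2 * theta)  # oblique component
    return (M, J0, J45)

def from_power_vector(M, J0, J45):
    """Convert (M, J0, J45) back to sphere/cylinder/axis in minus-cylinder form."""
    cyl = -2.0 * math.hypot(J0, J45)
    sphere = M - cyl / 2.0
    axis = math.degrees(0.5 * math.atan2(J45, J0)) % 180.0
    return (sphere, cyl, axis)

# Two crossed +1.00 D cylinders (axes 90 degrees apart) combine to a +1.00 D sphere:
a = to_power_vector(0.0, 1.0, 90.0)
b = to_power_vector(0.0, 1.0, 180.0)
total = tuple(x + y for x, y in zip(a, b))
sphere, cyl, axis = from_power_vector(*total)
print(f"sphere={sphere:+.2f} D, cyl={cyl:.2f} D")
```

Because the components add vectorially, the same arithmetic handles any residual astigmatism case, which is exactly what a single-step toric calculation requires.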
13. Phase and group velocity of focused, pulsed Gaussian beams in the presence and absence of primary aberrations
International Nuclear Information System (INIS)
This work presents a study on the phase- and group-velocity variations of focused, pulsed Gaussian beams during the propagation through the focal region along the optical axis. In the aberration-free case, it is discussed how the wavelength dependence of beam properties alters the group velocity, and how a chromatic aberration-like effect can arise even when focusing is performed with an element that does not have chromatic aberration. It is also examined what effects primary spherical aberration, astigmatism, coma, curvature of field and distortion, along with chromatic aberration, have on the phase- and group-velocity changes occurring during propagation through focus. (paper)
14. Nano-Structuring of Solid Surface by EUV Ar8+ Laser
International Nuclear Information System (INIS)
The paper demonstrates our first attempt at “direct” (i.e. ablation) patterning of PMMA by a pulsed, high-current, capillary-discharge-pumped Ar8+ ion laser (λ = 46.9 nm). For focusing, a long-focal-length spherical mirror (R = 2100 mm) coated with 14 Sc-Si double layers was used. The ablated focal spots demonstrate not only that the energy of our laser is sufficient for such experiments, but also that the design of the focusing optics must be more sophisticated: severe aberrations were revealed, namely an irregular spot shape and strong astigmatism with an astigmatic difference as large as 16 mm. In some cases, a laser-induced periodic surface structure (LIPSS) appeared on the bottom of the ablated spots. Finally, illumination of the sample through a square 7.5x7.5 μm hole in contact with the PMMA substrate ablated a strongly developed 2D diffraction pattern into the surface (period in the centre ~125 nm). (author)
15. A Handheld Open-Field Infant Keratometer (An American Ophthalmological Society Thesis)
Science.gov (United States)
Miller, Joseph M.
2010-01-01
Purpose: To design and evaluate a new infant keratometer that incorporates an unobstructed view of the infant with both eyes (open-field design). Methods: The design of the open-field infant keratometer is presented, and details of its construction are given. The design incorporates a single-ring keratoscope for measurement of corneal astigmatism over a 4-mm region of the cornea and includes a rectangular grid target concentric within the ring to allow for the study of higher-order aberrations of the eye. In order to calibrate the lens and imaging system, a novel telecentric test object was constructed and used. The system was bench calibrated against steel ball bearings of known dimensions and evaluated for accuracy while being used in handheld mode in a group of 16 adult cooperative subjects. It was then evaluated for testability in a group of 10 infants and toddlers. Results: Results indicate that while the device achieved the goal of creating an open-field instrument containing a single-ring keratoscope with a concentric grid array for the study of higher-order aberrations, additional work is required to establish better control of the vertex distance. Conclusion: The handheld open-field infant keratometer demonstrates testability suitable for the study of infant corneal astigmatism. Use of collimated light sources in future iterations of the design must be incorporated in order to achieve the accuracy required for clinical investigation. PMID:21212850
16. Refractive status of medical students of Mymensingh Medical College.
Science.gov (United States)
Akhanda, A H; Quayum, M A; Siddiqui, N I; Hossain, M M
2010-10-01
This study was done to determine the refractive status of medical students of Mymensingh Medical College (MMC), Mymensingh, Bangladesh, aged 17-19 years. This was a non-random, purposive, cross-sectional study conducted in late November 2008. Visual acuity estimation, automated refraction, streak retinoscopy, and fundoscopy using a +78D Volk lens were performed according to the needs of the cases. Of 175 students, 53.14% were emmetropic and 46.86% were ametropic; ametropia was nearly equal in both sexes (male 51.22%, female 48.78%). Almost all students were of the highest academic attainment (GPA 5). About one quarter of the ametropic students (21.61%) were not using spectacles. Simple myopia (81.70%) and myopic astigmatism (18.30%) were the types of ametropia observed. Of the 67 students with simple myopia, 56 had bilateral and 11 unilateral involvement. The distribution of sex and refractive status was similar between the general population and medical students of Bangladesh. Myopia and myopic astigmatism are prevalent among medical students. PMID:20956887
17. Femtosecond laser-assisted deep anterior lamellar keratoplasty for keratoconus and keratectasia
Institute of Scientific and Technical Information of China (English)
Lu, Yan; Shi, Yu-Hua; Yang, Li-Ping; Ge, Yi-Rui; Chen, Xiang-Fei; Wu, Yan; Huang, Zhen-Ping
2014-01-01
AIM: To describe the initial outcomes and safety of femtosecond laser-assisted deep anterior lamellar keratoplasty (DALK) for keratoconus and post-LASIK keratectasia. METHODS: In this non-comparative case series, 10 eyes of 9 patients underwent DALK procedures with a femtosecond laser (Carl Zeiss Meditec AG, Jena, Germany). Of the 9 patients, 7 had keratoconus and 2 had post-LASIK keratectasia. A 500 kHz VisuMax femtosecond laser was used to perform corneal cuts on both donor and recipient corneas. The outcome measures were the uncorrected visual acuity (UCVA), best-corrected visual acuity (BCVA), corneal thickness, astigmatism, endothelial density count (EDC), and corneal power. RESULTS: All eyes were successfully treated. Early postoperative evaluation showed a clear graft in all cases. Intraoperative complications included one case of a small Descemet's membrane perforation. Postoperatively, there was one case of stromal rejection, one of loosened sutures, and one of wound dehiscence. A normal corneal topography pattern and transparency were restored, UCVA and BCVA improved significantly, and astigmatism improved slightly. There was no statistically significant decrease in EDC. CONCLUSION: Our early results indicate that femtosecond laser-assisted deep anterior lamellar keratoplasty can improve UCVA and BCVA in patients with anterior corneal pathology. This approach shows promise as a safe and effective surgical choice in the treatment of keratoconus and post-LASIK keratectasia.
18. Z(eff) profile measurement system with an optimized Czerny-Turner visible spectrometer in large helical device.
Science.gov (United States)
Zhou, H Y; Morita, S; Goto, M; Chowdhuri, M B
2008-10-01
A Z(eff) measurement system using a visible spectrometer has been newly designed and constructed to replace an old interference-filter system, in order to eliminate line emissions from the signal and to measure the Z(eff) value in low-density plasmas. The system consists of a Czerny-Turner-type spectrometer with a charge-coupled device camera and a vertical array of 44 optical fibers. The 30 cm focal length spectrometer is equipped with an additional toroidal mirror for further reduction of astigmatism, in addition to one flat and two spherical mirrors and three gratings (110, 120, and 1200 grooves/mm). The images from the 44 optical fibers can be detected without astigmatism over a wavelength range of 200-900 nm. The combination of the optical fiber (core diameter: 100 μm) with the lens (focal length: 30 mm) provides a spatial resolution of 30 mm at the plasma center. The results clearly show a very good focused image of each fiber and indicate the absence of cross-talk between adjacent fiber images. Absolute intensity calibration has been done using a standard tungsten lamp to analyze the Z(eff) value. The bremsstrahlung profile and the resultant Z(eff) profile have been obtained after Abel inversion of the signals observed in large helical device plasmas with an elliptical poloidal cross section. PMID:19044678
19. Atomic force microscopy system based on a digital optical reading unit and a buzzer-scanner
International Nuclear Information System (INIS)
An astigmatic detection system (ADS) based on a compact disc/digital versatile disc (CD/DVD) astigmatic optical pickup unit is presented. It achieves a resolution better than 0.3 nm in detecting vertical displacement and can also detect the two-dimensional angular tilt of the object surface. Furthermore, a novel scanner design actuated by piezoelectric disk buzzers is presented. The scanner is composed of a quad-rod actuation structure and several piezoelectric disks. It can be driven directly by low-voltage, low-current sources, such as the analogue outputs of a data acquisition card, and provides a sufficient scanning range of up to μm. In addition, an economical, high-performance, streamlined atomic force microscope (AFM) was constructed, using the buzzer-scanner to move the sample relative to the probe and a CD/DVD optical pickup unit to detect the mechanical resonance of a microfabricated cantilever. The performance of the AFM is evaluated. The high sensitivity and high bandwidth of the detection system make the equipment suitable for characterizing nanoscale elements. An AFM using our detection system to detect the deflection of microfabricated cantilevers can resolve individual atomic steps on graphite surfaces. (Author)
20. Scleral Fixation of Posteriorly Dislocated Intraocular Lenses by 23-Gauge Vitrectomy without Anterior Segment Approach.
Science.gov (United States)
Nadal, Jeroni; Kudsieh, Bachar; Casaroli-Marano, Ricardo P
2015-01-01
Background. To evaluate visual outcomes, corneal changes, intraocular lens (IOL) stability, and complications after repositioning posteriorly dislocated IOLs and sulcus fixation with polyester sutures. Design. Prospective consecutive case series. Setting. Institut Universitari Barraquer. Participants. 25 eyes of 25 patients with posteriorly dislocated IOL. Methods. The patients underwent 23-gauge vitrectomy via the sulcus to rescue dislocated IOLs and fix them to the scleral wall with a previously looped nonabsorbable polyester suture. Main Outcome Measures. Best corrected visual acuity (BCVA) LogMAR, corneal astigmatism, endothelial cell count, IOL stability, and postoperative complications. Results. Mean follow-up time was 18.8 ± 10.9 months. Mean surgery time was 33 ± 2 minutes. Mean BCVA improved from 0.30 ± 0.48 before surgery to 0.18 ± 0.60 (p = 0.015) at 1 month, which persisted to 12 months (0.18 ± 0.60). Neither corneal astigmatism nor endothelial cell count showed alterations 1 year after surgery. Complications included IOL subluxation in 1 eye (4%), vitreous hemorrhage in 2 eyes (8%), transient hypotony in 2 eyes (8%), and cystic macular edema in 1 eye (4%). No patients presented retinal detachment. Conclusion. This surgical technique proved successful in the management of dislocated IOL. Functional results were good and the complications were easily resolved. PMID:26294964
1. Prevalence of amblyopia and refractive errors among primary school children
Directory of Open Access Journals (Sweden)
Zhale Rajavi
2015-01-01
Results: Amblyopia was present in 2.3% (95% CI: 1.8% to 2.9%) of participants, with no difference between the genders. Amblyopic subjects were significantly younger than non-amblyopic children (P=0.004). Overall, 15.9% of hyperopic and 5.9% of myopic cases had amblyopia. The prevalence of hyperopia ≥+2.00D, myopia ≤-0.50D, astigmatism ≥0.75D, and anisometropia (≥1.00D) was 3.5%, 4.9%, 22.6%, and 3.9%, respectively. With increasing age, the prevalence of myopia increased (P<0.001) and that of hyperopia decreased (P=0.007), but astigmatism showed no change. Strabismus was found in 2.3% of cases. Strabismus (OR=17.9) and refractive errors, especially anisometropia (OR=12.87) and hyperopia (OR=11.87), were important amblyogenic risk factors. Conclusion: The high prevalence of amblyopia in our subjects in comparison with developed countries reveals the necessity of timely and sensitive screening methods. Given the high prevalence of amblyopia among children with refractive errors, particularly high hyperopia and anisometropia, the provision of glasses should receive specific attention from parents and be supported by the Ministry of Health and insurance organizations.
2. Repeatability and Comparison of Keratometry Values Measured with Potec PRK-6000 Autorefractometer, IOLMaster, and Pentacam
Directory of Open Access Journals (Sweden)
2014-05-01
Objectives: To investigate the repeatability and intercompatibility of keratometry values measured with the Potec PRK-6000 autorefractometer, IOLMaster, and Pentacam. Materials and Methods: In this prospective study, consecutive measurements were performed in two different sessions with the three devices on 110 eyes of 55 subjects who had no ocular pathology other than refractive error. The consistency of flat keratometry, steep keratometry, average keratometry, and corneal astigmatism values obtained in the two sessions was assessed using the intraclass correlation coefficient (ICC). The measurement differences between the devices were also compared statistically. Results: The mean age of the study subjects was 23.05±3.01 (range 18-30) years. The ICC values of the average keratometry measurements obtained in the sessions were 0.996 for the Potec PRK-6000 autorefractometer, 0.997 for the IOLMaster, and 0.999 for the Pentacam. Bland-Altman analysis showed high compatibility between the three devices in terms of average keratometry values. However, there were statistically significant differences between the devices for parameters other than corneal astigmatism. Conclusion: The repeatability of the three devices in keratometry measurements was considerably high. However, these devices should not be substituted for one another in keratometry measurements. (Turk J Ophthalmol 2014; 44: 179-83)
3. Incisões relaxantes limbares durante a cirurgia de catarata: resultados após seguimento de um ano Limbal relaxing incisions during cataract surgery: one-year follow-up
Directory of Open Access Journals (Sweden)
João Carlos Arraes
2006-06-01
PURPOSE: To evaluate astigmatism variation between the preoperative, 1st, and 12th postoperative months in patients who underwent cataract surgery with limbal relaxing incisions (LRI) aiming to reduce preoperative astigmatism. METHODS: Sixteen patients underwent cataract surgery by the phacoemulsification technique with a 5.5 mm scleral incision at the Altino Ventura Foundation between April and July of 2002. The limbal relaxing incisions were performed according to Gills' modified nomogram (1D - 1 LRI of 6 mm; 1-2D - 2 LRI of 6 mm; 2-3D - 2 LRI of 8 mm). They were made in the most curved meridians, determined by preoperative corneal topography. RESULTS: A significant reduction in preoperative astigmatism was observed in the 1st postoperative month in the group with 2 limbal relaxing incisions of 6 mm (57% topographic and 87% refractional astigmatism) and in the group with 2 limbal relaxing incisions of 8 mm (50% topographic and 65% refractional astigmatism), and the reduction was maintained without significant alteration until the 12th postoperative month. The group with 1 limbal relaxing incision of 6 mm did not show a significant astigmatism reduction, but there was no significant alteration until the 12th postoperative month. There were also no complications such as postoperative discomfort, glare, aniseikonia, diplopia, incision infection, corneal thinning, or ectasia. CONCLUSION: Two limbal relaxing incisions of 8 and 6 mm aiming to correct preoperative astigmatism of 2 to 3D and 1 to 2D, respectively, were safe and effective, with a stable effect in the first postoperative year. One limbal relaxing incision of 6 mm aiming to reduce 1 diopter of preoperative astigmatism was not effective, but it did not induce any significant postoperative complications.
4. Determination of injection molding process windows for optical lenses using response surface methodology.
Science.gov (United States)
Tsai, Kuo-Ming; Wang, He-Yi
2014-08-20
This study focuses on injection molding process window determination for obtaining optimal imaging optical properties, astigmatism, coma, and spherical aberration using plastic lenses. The Taguchi experimental method was first used to identify the optimized combination of parameters and significant factors affecting the imaging optical properties of the lens. Full factorial experiments were then implemented based on the significant factors to build the response surface models. The injection molding process windows for lenses with optimized optical properties were determined based on the surface models, and confirmation experiments were performed to verify their validity. The results indicated that the significant factors affecting the optical properties of lenses are mold temperature, melt temperature, and cooling time. According to experimental data for the significant factors, the oblique ovals for different optical properties on the injection molding process windows based on melt temperature and cooling time can be obtained using the curve fitting approach. The confirmation experiments revealed that the average errors for astigmatism, coma, and spherical aberration are 3.44%, 5.62%, and 5.69%, respectively. The results indicated that the process windows proposed are highly reliable. PMID:25321095
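The response-surface step described above can be illustrated with a small sketch (the factor names and data here are hypothetical, not taken from the paper): a full quadratic surface in two significant factors, fitted by ordinary least squares.

```python
import numpy as np

def fit_quadratic_surface(x1, x2, y):
    """Least-squares fit of y ≈ b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1^2 + b5*x2^2."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

# Illustrative design points: melt temperature (°C) and cooling time (s),
# with a synthetic aberration response that the model can represent exactly.
melt = np.array([220., 220., 240., 240., 230., 230., 220., 240., 230.])
cool = np.array([10., 20., 10., 20., 15., 15., 15., 15., 10.])
y = 0.5 + 0.01 * melt - 0.02 * cool + 1e-4 * melt * cool
b = fit_quadratic_surface(melt, cool, y)
```

Contours of the fitted surface at an acceptable aberration level then trace out the kind of oblique ovals in the melt temperature/cooling time plane that the abstract describes as process windows.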
5. Atomic force microscopy system based on a digital optical reading unit and a buzzer-scanner; Sistema de microscopia de fuerza atomica basada en una unidad de lectura optica digital y un escaner-zumbador
Energy Technology Data Exchange (ETDEWEB)
Dabirian, R.; Loza M, D. [Universidad de las Fuerzas Armadas-ESPE, Departamento de Ciencias de la Energia y Mecanica, Sangolqui (Ecuador); Wang, W. M.; Hwu, E. T., E-mail: whoand@gmail.com [Academia Sinica, Institute of Physics, Taipei, 11529 Taiwan (China)
2015-07-01
An astigmatic detection system (ADS) based on a compact disc/digital versatile disc (CD/DVD) astigmatic optical pickup unit is presented. It achieves a resolution better than 0.3 nm in detecting vertical displacement and can also detect the two-dimensional angular tilt of the object surface. Furthermore, a novel scanner design actuated by piezoelectric disk buzzers is presented. The scanner is composed of a quad-rod actuation structure and several piezoelectric disks. It can be driven directly by low-voltage, low-current sources, such as the analogue outputs of a data acquisition card, and provides a sufficient scanning range of up to μm. In addition, an economical, high-performance, streamlined atomic force microscope (AFM) was constructed, using the buzzer-scanner to move the sample relative to the probe and a CD/DVD optical pickup unit to detect the mechanical resonance of a microfabricated cantilever. The performance of the AFM is evaluated. The high sensitivity and high bandwidth of the detection system make the equipment suitable for characterizing nanoscale elements. An AFM using our detection system to detect the deflection of microfabricated cantilevers can resolve individual atomic steps on graphite surfaces. (Author)
6. High-performance oscillators employing adaptive optics comprised of discrete elements
Science.gov (United States)
Jackel, Steven M.; Moshe, Inon; Lavi, Raphael
1999-05-01
Flashlamp-pumped oscillators utilizing Nd:Cr:GSGG or Nd:YAG rods were stabilized against varying levels of thermal focusing by use of a variable radius mirror (VRM). In its simplest form, the VRM consisted of a lens followed by a concave mirror; the separation of the two elements controlled the radius of curvature of the reflected phase front. The addition of a concave-convex variable-separation cylindrical lens pair allowed astigmatism to be corrected. These distributed optical elements, together with a computer-controlled servo system, formed an adaptive optic capable of correcting the varying thermal focusing and astigmatism encountered in an Nd:YAG confocal unstable resonator (0-30 W) and in Nd:Cr:GSGG stable (hemispherical or concave-convex) resonators, so that high beam quality could be maintained over the entire operating range. By utilizing resonators designed to eliminate birefringence losses, high efficiency could also be maintained. The ability to eliminate thermally induced losses in GSGG allows operating power to be increased into the range where thermal fracture is a factor. We present some results on the effect of surface finish (fine grind, grooves, chemical etch strengthening) on the fracture limit and on high-gain operation.
7. Analysis of the robustness of the lens GRIN profile in a schematic model eye
Science.gov (United States)
Díaz, J. A.; Blazejewski, N.; Fernández-Dorado, J.; Arasa, J.; Sorroche, F.; Pizarro, C.
2011-11-01
In this work, an improvement of a previously published human eye model with aging is presented. The previous eye model overestimated the performance of the averaged MTF at low spatial frequencies at all ages. Since that model had rotationally symmetric corneal surfaces, the corneal surfaces have now been modeled to resemble an astigmatic element, in accordance with recently published experimental data, to produce a more accurate eye model. The gradient refractive index (GRIN) profile proposed for the crystalline lens has not been modified, in order to test its robustness. Further, a tilt and decentration of the lens, and a decentration of the iris, have been permitted in order to fit the average performance of the new eye model with aging. The results demonstrate that the GRIN profile for the crystalline lens fits the model well, since the decentration and/or tilt of the lens and the iris provide sufficient free parameters to simulate the retinal image quality of an emmetropic human eye with aging and an average astigmatic cornea.
8. Scleral Fixation of Posteriorly Dislocated Intraocular Lenses by 23-Gauge Vitrectomy without Anterior Segment Approach
Directory of Open Access Journals (Sweden)
2015-01-01
Background. To evaluate visual outcomes, corneal changes, intraocular lens (IOL) stability, and complications after repositioning posteriorly dislocated IOLs and sulcus fixation with polyester sutures. Design. Prospective consecutive case series. Setting. Institut Universitari Barraquer. Participants. 25 eyes of 25 patients with posteriorly dislocated IOL. Methods. The patients underwent 23-gauge vitrectomy via the sulcus to rescue dislocated IOLs and fix them to the scleral wall with a previously looped nonabsorbable polyester suture. Main Outcome Measures. Best corrected visual acuity (BCVA) LogMAR, corneal astigmatism, endothelial cell count, IOL stability, and postoperative complications. Results. Mean follow-up time was 18.8 ± 10.9 months. Mean surgery time was 33 ± 2 minutes. Mean BCVA improved from 0.30 ± 0.48 before surgery to 0.18 ± 0.60 (p=0.015) at 1 month, which persisted to 12 months (0.18 ± 0.60). Neither corneal astigmatism nor endothelial cell count showed alterations 1 year after surgery. Complications included IOL subluxation in 1 eye (4%), vitreous hemorrhage in 2 eyes (8%), transient hypotony in 2 eyes (8%), and cystic macular edema in 1 eye (4%). No patients presented retinal detachment. Conclusion. This surgical technique proved successful in the management of dislocated IOL. Functional results were good and the complications were easily resolved.
9. Analysis of nodal aberration properties in off-axis freeform system design.
Science.gov (United States)
Shi, Haodong; Jiang, Huilin; Zhang, Xin; Wang, Chao; Liu, Tao
2016-08-20
Freeform surfaces have the advantage of balancing off-axis aberration. In this paper, based on the framework of nodal aberration theory (NAT) applied to the coaxial system, the third-order astigmatism and coma wave aberration expressions of an off-axis system with Zernike polynomial surfaces are derived. The relationship between the off-axis and surface shape acting on the nodal distributions is revealed. The nodal aberration properties of the off-axis freeform system are analyzed and validated by using full-field displays (FFDs). It has been demonstrated that adding Zernike terms, up to nine, to the off-axis system modifies the nodal locations, but the field dependence of the third-order aberration does not change. On this basis, an off-axis two-mirror freeform system with 500 mm effective focal length (EFL) and 300 mm entrance pupil diameter (EPD) working in long-wave infrared is designed. The field constant aberrations induced by surface tilting are corrected by selecting specific Zernike terms. The design results show that the nodes of third-order astigmatism and coma move back into the field of view (FOV). The modulation transfer function (MTF) curves are above 0.4 at 20 line pairs per millimeter (lp/mm) which meets the infrared reconnaissance requirement. This work provides essential insight and guidance for aberration correction in off-axis freeform system design. PMID:27557003
10. Aperture referral in dioptric systems with stigmatic elements
Directory of Open Access Journals (Sweden)
W. F. Harris
2012-12-01
A previous paper develops the general theory of aperture referral in linear optics and shows how several ostensibly distinct concepts, including the blur patch on the retina, the effective corneal patch, the projective field, and the field of view, are unified as particular applications of the general theory. The theory allows for astigmatism and heterocentricity. Symplecticity and the generality of the approach, however, make it difficult to gain insight and mean that the material is not accessible to readers unfamiliar with matrices and linear algebra. The purpose of this paper is to examine what is, perhaps, the most important special case: that in which astigmatism is ignored. Symplecticity and, hence, the mathematics become greatly simplified. The mathematics reduces largely to elementary vector algebra and, in some places, simple scalar algebra, yet retains the mathematical form of the general approach. As a result the paper allows insight into, and provides a stepping stone to, the general theory. Under referral, an aperture undergoes simple scalar magnification and transverse translation. The paper pays particular attention to referral to transverse planes in the neighbourhood of a focal point, where the magnification may be positive, zero, or negative. Circular apertures are treated as special cases of elliptical apertures, and the meaning of referred apertures of negative radius is explained briefly. (S Afr Optom 2012 71(1) 3-11)
11. Effects of Silicone Hydrogel Contact Lens Application on Corneal High-Order Aberration and Visual Quality in Patients with Corneal Opacities
Directory of Open Access Journals (Sweden)
Sevda Aydın Kurna
2012-03-01
Purpose: To evaluate corneal high-order aberrations and visual quality changes after application of silicone hydrogel contact lenses in patients with corneal opacities of various etiologies. Material and Method: Fifteen eyes of 13 patients with corneal opacities were included in the study. During the ophthalmologic examination before and after contact lens application, visual acuity was measured with a Snellen acuity chart and contrast sensitivity with Bailey-Lovie charts, in letters. Aberrations were measured with a corneal aberrometer (NIDEK Magellan Mapper) under a naturally dilated pupil. Spherical aberration, coma, trefoil, irregular astigmatism, and total high-order root mean square (RMS) values were recorded. Measurements were repeated with balafilcon A lenses (PureVision 2 HD, B&L) on all patients. Results: Patient age varied between 23 and 50 years. Two eyes had subepithelial infiltrates due to adenoviral keratitis, 1 had nebulae due to previous infections or trauma, and 2 had Salzmann's nodular degeneration. We observed a mean increase of 1 line in visual acuity and of 5 letters in contrast sensitivity with contact lenses versus glasses. Mean RMS values of spherical aberration, irregular astigmatism, and total high-order aberrations decreased significantly with contact lenses. Discussion: Silicone hydrogel soft contact lenses may improve visual quality by decreasing corneal aberrations in patients with corneal opacities. (Turk J Ophthalmol 2012; 42: 97-102)
12. Fiber Grating Coupled Light Source Capable of Tunable, Single Frequency Operation
Science.gov (United States)
Krainak, Michael A. (Inventor); Duerksen, Gary L. (Inventor)
2001-01-01
Fiber Bragg grating coupled light sources can achieve tunable single-frequency (single axial and lateral spatial mode) operation by correcting for a quadratic phase variation in the lateral dimension using an aperture stop. The output of a quasi-monochromatic light source such as a Fabry-Perot laser diode is astigmatic. As a consequence of the astigmatism, coupling geometries that accommodate the transverse numerical aperture of the laser are defocused in the lateral dimension, even for aspherical optics. The mismatch produces a quadratic phase variation in the feedback along the lateral axis at the facet of the laser that excites lateral modes of higher order than the TM00 mode. Because the instability entails excitation of higher-order lateral submodes, single-frequency operation is also accomplished by using fiber Bragg gratings whose bandwidth is narrower than the submode spacing. This technique is particularly pertinent to the use of lensed fiber gratings in lieu of discrete coupling optics. Stable device operation requires overall phase match between the fed-back signal and the laser output. The fiber Bragg grating acts as a phase-preserving mirror when the Bragg condition is met precisely. The phase-match condition is maintained throughout the fiber tuning range by matching the Fabry-Perot axial mode wavelength to the passband center wavelength of the Bragg grating.
13. Progress of Toric intraocular lenses
Institute of Scientific and Technical Information of China (English)
张劲松
2010-01-01
Astigmatism can affect visual quality and visual acuity, and it can be corrected by different methods. The Toric intraocular lens (Toric IOL), which combines astigmatism correction with the spherical correction of an intraocular lens, is one of the newest kinds of functional intraocular lens. Using a Toric IOL is a reliable, predictable, and stable refractive correction choice. As the Toric IOL has become more widely used in clinical practice, more technical experience has accumulated, and patient selection, implantation techniques, and complication prevention have improved greatly.
14. Generalized magnification in visual optics. Part 2: Magnification as affine transformation
Directory of Open Access Journals (Sweden)
W. F. Harris
2010-12-01
In astigmatic systems magnification may be different in different directions. It may also be accompanied by rotation or reflection. These changes from object to image are examples of generalized magnification. They are represented by 2×2 matrices. Because they are linear transformations, they can be called linear magnifications. Linear magnifications account for a change in appearance without regard to position. Mathematical structure suggests a natural further generalization to a magnification that is complete in the sense that it accounts for change in both appearance and position. It is represented by a 3×3 matrix with a dummy third row. The transformation is called affine in linear algebra, which suggests that these generalized magnifications be called affine magnifications. The purpose of the paper is to define affine magnification in the context of astigmatic optics. Several examples are presented and illustrated graphically. (S Afr Optom 2010 69(4) 166-172)
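The 3×3 matrix with a dummy third row described in this abstract is the standard homogeneous-coordinate embedding of a linear map plus a translation. A minimal sketch, with purely illustrative numbers:

```python
import numpy as np

def affine_magnification(linear, translation):
    """Embed a 2x2 linear magnification and a 2-vector translation in a
    3x3 affine matrix whose dummy third row is [0, 0, 1]."""
    A = np.eye(3)
    A[:2, :2] = np.asarray(linear, dtype=float)
    A[:2, 2] = np.asarray(translation, dtype=float)
    return A

# A meridionally unequal (astigmatic) magnification: x2 along one meridian,
# x0.5 along the other, followed by a translation of (1, -1).
A = affine_magnification([[2.0, 0.0], [0.0, 0.5]], [1.0, -1.0])

# Apply to the object point (3, 4) written in homogeneous coordinates.
image = A @ np.array([3.0, 4.0, 1.0])   # maps (3, 4) to (7, 1)
```

Composing two such matrices by multiplication chains their magnifications and translations in one step, which is what makes the affine form "complete" in the sense the abstract describes.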
15. Advances in very lightweight composite mirror technology
Science.gov (United States)
Chen, Peter C.; Bowers, Charles W.; Content, David A.; Marzouk, Marzouk; Romeo, Robert C.
2000-09-01
We report progress in the development of very lightweight (roll off and several waves (rms optical) of astigmatism, coma, and third-order spherical aberration. These are indications of thermal contraction in an inhomogeneous medium. This inhomogeneity is due to a systematic radial variation in density and fiber/resin ratio induced in composite plies when draped around a small and highly curved mandrel. The figure accuracy is expected to improve with larger size optics and in mirrors with longer radii of curvature. Nevertheless, the present accuracy figure is sufficient for using postfiguring techniques such as ion milling to achieve diffraction-limited performances at optical and UV wavelengths. We demonstrate active figure control using a simple apparatus of low-mass, low-force actuators to correct astigmatism. The optimized replication technique is applied to the fabrication of a 0.6-m-diam mirror with an areal density of 3.2 kg/m². Our result demonstrates that the very lightweight, large-aperture construction used in radio telescopes can now be applied to optical telescopes.
16. An omnidirectional 3D sensor with line laser scanning
Science.gov (United States)
Xu, Jing; Gao, Bingtuan; Liu, Chuande; Wang, Peng; Gao, Shuanglei
2016-09-01
An active omnidirectional vision sensor offers the advantage of wide field-of-view (FOV) imaging, yielding an entire 3D environment scene, which is promising for robot navigation. However, existing omnidirectional vision sensors based on a line laser can measure only points located on the optical plane of the line laser beam, resulting in low-resolution reconstruction. To improve resolution, other omnidirectional vision sensors project a 2D encoded pattern from a projector via a curved mirror; however, the astigmatism of the curved mirror leads to low-accuracy reconstruction. To solve these problems, a rotating polygon scanning mirror is used to scan the object in the vertical direction, so that an entire profile of the observed scene can be obtained at high accuracy, free of astigmatism. The proposed sensor is calibrated with a conventional 2D checkerboard plate. The experimental results show that the measurement error of the 3D omnidirectional sensor is approximately 1 mm. Moreover, the reconstruction of objects with different shapes based on the developed sensor is also verified.
17. Clinical applications of Toric intraocular lens
Institute of Scientific and Technical Information of China (English)
肖显文; 张红; 田芳
2014-01-01
The Toric intraocular lens (IOL), also known as a composite-surface IOL, is a new refractive IOL that corrects astigmatism by combining a cylindrical lens with a spherical IOL. Since its first clinical application in 1994, its materials and design have been continuously improved, and the Toric IOL has become a reasonable, effective and stable refractive method to correct the preoperative corneal astigmatism of cataract patients. The new multifocal Toric IOL provides good distance, intermediate and near functional vision and enables complete spectacle independence for cataract patients.
18. Zeff profile measurement system with an optimized Czerny-Turner visible spectrometer in large helical device
International Nuclear Information System (INIS)
A Zeff measurement system using a visible spectrometer has been newly designed and constructed, replacing an old interference-filter system, to eliminate line emissions from the signal and to measure the Zeff value in low-density plasmas. The system consists of a Czerny-Turner-type spectrometer of 30 cm focal length with a charge-coupled device camera and a vertical array of 44 optical fibers. The spectrometer is equipped with an additional toroidal mirror for further reduction of the astigmatism, in addition to one flat and two spherical mirrors and three gratings (110, 120, and 1200 grooves/mm). The images from the 44 optical fibers can be detected without astigmatism over a wavelength range of 200-900 nm. Combination of the optical fiber (core diameter: 100 μm) with the lens (focal length: 30 mm) provides a spatial resolution of 30 mm at the plasma center. Results clearly indicate a very good focused image of each fiber and suggest the absence of cross-talk between adjacent fiber images. Absolute intensity calibration has been done using a standard tungsten lamp to analyze the Zeff value. The bremsstrahlung profile and the resultant Zeff profile have been obtained after Abel inversion of the signals observed in large helical device plasmas with elliptical poloidal cross section.
19. DEVICE FOR MEASURING OF THERMAL LENS PARAMETERS IN LASER ACTIVE ELEMENTS WITH A PROBE BEAM METHOD
Directory of Open Access Journals (Sweden)
A. N. Zakharova
2015-01-01
We have developed a device for measuring the parameters of the thermal lens (TL) in laser active elements under longitudinal diode pumping. The measurements are based on the probe beam method. The device determines the sign and optical power of the lens in the principal meridional planes, its sensitivity factor with respect to the absorbed pump power, its astigmatism degree, and the fractional heat loading, which together make it possible to estimate the integral impact of the photoelastic effect on the formation of the TL in the laser element. The measurements are performed in linearly polarized light at the wavelength of 532 nm. Pumping of the laser element is performed at 960 nm, which makes it possible to study laser materials doped with Yb3+ and (Er3+, Yb3+) ions. The precision of the measurements is 0.1 m⁻¹/W for the TL sensitivity factor, 0.2 m⁻¹/W for the astigmatism degree, 5% for the fractional heat loading, and 0.5×10⁻⁶ K⁻¹ for the impact of the photoelastic effect. The device has been used to characterize the thermal lens in a laser active element made of an yttrium vanadate crystal, Er3+,Yb3+:YVO4.
20. Toric design orthokeratology contact lenses and visual quality
Institute of Scientific and Technical Information of China (English)
杨丽娜; 周建兰; 谢培英
2013-01-01
Objective: To observe the changes in corneal astigmatism after patients are fitted with different ortho-k contact lens (CL) designs and the influence of these lenses on visual quality. Methods: In a case-control study, corneal astigmatism, corneal topography, wavefront aberrations (Pentacam), visual acuity and visual disturbance symptoms were observed in three groups (groups A, B, C) before and after CL wear. Group A (30 eyes) had lower corneal astigmatism and wore a general ortho-k contact lens design that fit quite well; group B (30 eyes) had lower corneal astigmatism and wore a general ortho-k contact lens design that did not fit well and was obviously decentered; group C (31 eyes) had higher corneal astigmatism and wore toric ortho-k contact lenses with an acceptable fit. SPSS 16.0 statistical software was used to analyze the data. Results: Changes in corneal astigmatism after fitting with the ortho-k CL: astigmatism increased in group B but was lower in groups A and C. Fourier analysis from corneal topography: increases in asymmetry for all three groups at 3 mm were (-0.393±0.329)D, (-4.050±2.084)D, and (-0.494±0.522)D, respectively, all at P<0.001. Higher-order aberrations in the three groups increased at 3 mm and were (-0.011±0.055)D (P>0.05), (-0.635±0.441)D (P<0.001) and (-0.055±0.082)D (P<0.01). The three groups at 3 mm regular: differences in the comparison of astigmatism, asymmetry and higher-order aberrations were statistically significant, F=79.862, F=83.882, F=54.265, respectively, all at P<0.01. After fitting with ortho-k CLs, the total aberrations and total higher-order aberrations for the three groups increased in varying degrees, with group B as the most significant. A comparison of the difference in aberrations: only the anterior surface spherical aberration had a statistically significant difference (F=18.048, P<0.01). After the CLs were removed: in group A 36.7% achieved a UVA of 1.2, 50.0% achieved 1.0 and 13.3% achieved 0.8; in group B 36.7
1. Long-term outcomes of limbal relaxing incisions during cataract surgery: aberrometric analysis
Directory of Open Access Journals (Sweden)
Monaco G
2015-08-01
Gaspare Monaco, Antonio Scialdone; Department of Ophthalmology, Ospedale Fatebenefratelli e Oftalmico, Milan, Italy. Purpose: To compare the final changes in corneal wavefront aberration induced by limbal relaxing incisions (LRIs) after cataract surgery. Methods: This prospective cumulative interventional nonrandomized case study included cataract patients with astigmatism undergoing LRIs and phaco with intraocular lens implantation. LRIs were planned using the Donnenfeld nomogram. The root mean square of corneal wave aberration for total Z(n,i) (1≤n≤8), astigmatism Z(2,±1), coma Z(3-5-7,±1), trefoil Z(3-5-7,±2), spherical Z(4-6-8,0), and higher-order aberration (HOA) Z(3≤n≤8) was examined before and 3 years after surgery (optical path difference scan II [OPD-Scan II]). Uncorrected distance visual acuity, best-corrected distance visual acuity (CDVA), keratometric cylinder, and variations in average corneal power were also analyzed. Results: Sixty-four eyes of 48 patients were included in the study. Age ranged from 42 to 92 years (70.6±8.4 years). After LRIs, uncorrected and best-corrected distance visual acuity improved significantly (P<0.01). The keratometric cylinder value decreased by 40.1%, but analysis of KP90 and KP135 polar values did not show any statistically confirmed decrease (P=0.22 and P=0.24). No significant changes were detected in the root mean square of total (P=0.61) and HOA (P=0.13) aberrations. LRIs did not induce alteration in central corneal power, confirming a 1:1 coupling ratio. Conclusion: LRIs caused a nonsignificant alteration of corneal HOA. Therefore, LRIs can still be considered a qualitatively viable means in those cases where toric intraocular lenses are contraindicated or not available. Yet the authors raise the question of nonpersonalized nomograms, as in the present study LRIs did not reach the preset target cylinder. Keywords: astigmatism, ocular wavefront, intraocular
2. Comparison between the postoperative results of triple procedure and combined penetrating keratoplasty/transsclerally sutured posterior chamber lens implantation
Directory of Open Access Journals (Sweden)
Daniela Maggioni Pereira Leão
2006-08-01
PURPOSE: To compare the outcomes of two penetrating keratoplasty techniques with different surgical timing regarding the crystalline lens and the intraocular lens. METHODS: This retrospective study included 37 eyes divided into 2 groups: extracapsular cataract extraction, posterior chamber intraocular lens implantation and penetrating keratoplasty in the same surgical session (Group 1, G1), and cataract extraction without lens implantation in a first session followed by transscleral fixation of a posterior chamber lens combined with penetrating keratoplasty in a second session (Group 2, G2). The following parameters were recorded: visual acuity, intraocular pressure, refractive astigmatism, keratometric astigmatism and complications. RESULTS: Visual acuity improved in the two groups (G1 p<0.001 and G2 p=0.008). In G2 a significantly worse intraocular pressure outcome was observed when compared with the other group (p=0.014). Regarding refractive and keratometric astigmatism, no significant differences between the groups were found. The follow-up was 11 months. CONCLUSION: The most important negative prognostic factor affecting visual acuity was the postkeratoplasty
3. Optimization Design for Progressive Addition Lenses by Pointwise Directional Curvature Compensation
Institute of Scientific and Technical Information of China (English)
秦琳玲; 钱霖; 余景池
2012-01-01
A global optimization method using pointwise directional curvature compensation is proposed to reduce the undesirable peripheral astigmatism (the so-called blur region) of an initial progressive addition lens design. The formula for the normal curvature of a freeform surface in an arbitrary direction is derived, and a method is given to determine the principal curvatures and principal directions. The principal curvature difference and the directions of maximum and minimum curvature at each point of the initial lens are calculated. The method reduces the astigmatism by superimposing a freeform surface consisting of micro-cylinders with different curvatures and different axis directions, so that the curvature difference at each point is appropriately reduced. The optimization steps and a design example are given, and the lens is manufactured and tested. Comparing the power and astigmatism distribution maps before and after optimization shows that pointwise directional curvature compensation effectively reduces the maximum astigmatism of the initial design and clearly enlarges the region of clear vision in the distance zone.
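The quantity being minimized here, the pointwise difference of the two principal curvatures, can be computed for a lens surface given as a height field z(x, y) from the first and second fundamental forms. A sketch assuming a square sampling grid (the function name is illustrative):

```python
import numpy as np

def surface_astigmatism(z, dx):
    """Principal curvatures k1 >= k2 of a height field z(x, y) sampled on a
    square grid of spacing dx, and their difference (surface astigmatism).
    Central-difference sketch; boundary rows/columns are less accurate."""
    zy, zx = np.gradient(z, dx)          # np.gradient: axis 0 (y), then axis 1 (x)
    zyy, _ = np.gradient(zy, dx)
    zxy, zxx = np.gradient(zx, dx)
    W = np.sqrt(1 + zx**2 + zy**2)
    E, F, G = 1 + zx**2, zx * zy, 1 + zy**2        # first fundamental form
    L, M, N = zxx / W, zxy / W, zyy / W            # second fundamental form
    denom = E * G - F**2
    H = (E * N - 2 * F * M + G * L) / (2 * denom)  # mean curvature
    K = (L * N - M**2) / denom                     # Gaussian curvature
    disc = np.sqrt(np.maximum(H**2 - K, 0.0))
    return H + disc, H - disc, 2 * disc            # k1, k2, k1 - k2

# A cylindrical patch z = x^2/(2R) has astigmatism 1/R at its apex,
# while a spherical (paraboloidal) patch has none.
x = np.linspace(-0.1, 0.1, 101)
X, Y = np.meshgrid(x, x)
R = 8.0
_, _, ast_cyl = surface_astigmatism(X**2 / (2 * R), x[1] - x[0])
_, _, ast_sph = surface_astigmatism((X**2 + Y**2) / (2 * R), x[1] - x[0])
```

A pointwise map like `ast_cyl` is exactly the kind of quantity a curvature-compensation step would drive toward zero.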
4. Optimizing Design for Progressive Addition Lenses by Mean Curvature Flow
Institute of Scientific and Technical Information of China (English)
唐运海; 吴泉英; 钱霖; 刘琳
2011-01-01
The principle of mean curvature flow is illustrated, and an optimization method using mean curvature flow is proposed which can reduce the undesirable astigmatism in selected regions of the progressive addition lens surface while retaining the desirable optical features of the progressive lens. The closer the lens surface is driven toward a sphere by the mean curvature flow, the smaller the astigmatism becomes. The optimization algorithm and an example are given. An initial progressive addition lens and a lens optimized by this method were manufactured and tested. Compared with the original lens, the optimized lens has a smaller maximum astigmatism value and a larger clear region in the intermediate and distance-vision zones.
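For a surface given as a height field z(x, y), mean curvature flow obeys the standard nonparametric flow equation; a minimal explicit-Euler sketch (illustrative only: boundary values held fixed, step size bounded by the usual stability limit tau < dx²/4):

```python
import numpy as np

def mcf_step(z, dx, tau):
    """One explicit Euler step of nonparametric mean curvature flow:
    z_t = ((1+zy^2) zxx - 2 zx zy zxy + (1+zx^2) zyy) / (1+zx^2+zy^2)."""
    zy, zx = np.gradient(z, dx)
    zyy, _ = np.gradient(zy, dx)
    zxy, zxx = np.gradient(zx, dx)
    num = (1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy
    zn = z + tau * num / (1 + zx**2 + zy**2)
    zn[0, :] = z[0, :]; zn[-1, :] = z[-1, :]   # keep the rim fixed
    zn[:, 0] = z[:, 0]; zn[:, -1] = z[:, -1]
    return zn

# Smoothing an anisotropic bump drives the surface toward a locally
# isotropic (sphere-like) shape, which is what reduces astigmatism.
n = 64
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
z = np.exp(-10 * (X**2 + 2 * Y**2))
zs = z.copy()
for _ in range(100):
    zs = mcf_step(zs, x[1] - x[0], 1e-4)
```

The lens-design method restricts this flow to selected regions; the sketch above applies it everywhere except the fixed rim.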
5. Topographical Evaluation of the Decentration of Orthokeratology Lenses
Institute of Scientific and Technical Information of China (English)
Xiao Yang; Xingwu Zhong; Xiangming Gong; Junwen Zeng
2005-01-01
Purpose: To evaluate the amount of lens decentration and the various factors affecting decentration after orthokeratology lens wear, and to observe the effect of decentration on visual functions. Methods: Two kinds of orthokeratology lenses were fitted to 270 eyes of 135 patients [initial mean refractive error: (-3.98±1.51)D]. The Humphrey Instruments ATLAS 990 was used for computer-assisted analysis of corneal topographical maps. Corneal topography was performed on patients before and after 6 months of wearing orthokeratology lenses. The amount of decentration of the orthokeratology lenses was measured as the distance between the center of the optic zone and the pupil center. The factors influencing the amount of decentration were analyzed, including the initial refractive error, astigmatism, keratometry values, corneal eccentricity, and the diameter of the lens. Visual symptoms including monocular diplopia and glare around lights were recorded to evaluate the effects of decentration on visual functions. Results: The mean amount of decentration was (0.49±0.34) mm after one night's wear. The mean amount of decentration after 1 month, 3 months and 6 months was (0.57±0.41) mm, (0.55±0.48) mm and (0.59±0.39) mm, respectively. After one month, the amount of decentration was less than 0.50 mm in 51.1% of eyes, 0.50~1.00 mm in 35.6% of eyes and more than 1.00 mm in 13.3% of eyes. Decentration of more than 0.50 mm was mainly in the temporal quadrant (48.5%). Patients with greater initial astigmatism and smaller lenses showed greater decentration (P<0.05). There was no statistically significant difference in decentration between the two groups with different corneal eccentricities and keratometry values (P>0.05). The amount of decentration was greater in patients who complained of monocular diplopia and glare. Conclusions: The amount of decentration of orthokeratology depends on the initial refractive error, astigmatism and the design of orthokeratology
6. Efficacy and safety of deep anterior lamellar keratoplasty vs. penetrating keratoplasty for keratoconus: a meta-analysis.
Directory of Open Access Journals (Sweden)
Hao Liu
To evaluate differences in therapeutic outcomes between deep anterior lamellar keratoplasty (DALK) and penetrating keratoplasty (PKP) for the clinical treatment of keratoconus, a comprehensive search was conducted in PubMed, EMBASE, the Cochrane Library, and Web of Science. Eligible studies had to report at least one of the following: best corrected visual acuity (BCVA), postoperative spherical equivalent (SE), postoperative astigmatism, endothelial cell count (ECC), central corneal thickness (CCT), graft rejection and graft failure, of which BCVA, graft rejection and graft failure were used as the primary outcome measures, and postoperative SE, astigmatism, CCT and ECC as the secondary outcome measures. Given the lack of randomized clinical trials (RCTs), cohort studies and prospective studies were considered eligible. Sixteen clinical trials involving 6625 eyes were included in this review: 1185 eyes in the DALK group and 5440 eyes in the PKP group. The outcomes were analyzed using Cochrane Review Manager (RevMan version 5.0) software. The postoperative BCVA in the DALK group was significantly better than that in the PKP group (OR = 0.48; 95% CI 0.39 to 0.60; p<0.001). There were fewer cases of graft rejection in the DALK group than in the PKP group (OR = 0.28; 95% CI 0.15 to 0.50; p<0.001). Nevertheless, the rate of graft failure was similar between the DALK and PKP groups (OR = 1.05; 95% CI 0.81 to 1.36; p = 0.73). There were no significant differences in the secondary outcomes of SE (p = 0.70), astigmatism (p = 0.14) and CCT (p = 0.58) between the DALK and PKP groups, and ECC in the DALK group was significantly higher than in the PKP group (p<0.001). Among postoperative complications, fewer cases of high intraocular pressure (high-IOP) and cataract occurred in the DALK group than in the PKP group (high-IOP: OR 0.22, 95% CI 0.11-0.44, P<0.001; cataract: OR 0.22, 95% CI 0.08-0.61, P = 0.004). And no cases of expulsive hemorrhage and endophthalmitis were
7. Incidence of the refractive errors in children 3 to 9 years of age, in the city of Tetovo, Macedonia
Institute of Scientific and Technical Information of China (English)
Ejup Mahmudi; Vilma Mema; Nora Burda; Brikena Selimi; Sulejman Zhugli
2013-01-01
Objective: To determine the incidence of refractive errors in children 3 to 9 years of age in the area of Tetovo, Macedonia, in rural and urban populations. Methods: Population-based cross-sectional samples of children 3 to 9 years of age in rural and urban populations underwent full ophthalmologic examination, including slit-lamp examination, ocular motility and refraction. Uncorrected and best-corrected visual acuity were recorded, along with refractive error under topical cycloplegia. Children 3 to 6 years of age with a visual acuity of 20/40 or worse and those 6 to 9 years of age with a visual acuity of 20/30 or worse underwent a complete ophthalmic examination to determine the cause of visual impairment. A spherical equivalent of -0.5 diopter (D) or worse was defined as myopia, +2.50 D or more was defined as hyperopia, and a cylinder refraction greater than 0.75 D, plus or minus, was considered astigmatism. Results: The uncorrected visual acuity was 20/45 or worse in the better eye of 119 children, 59 male and 60 female (5.1% of participants). According to the results of cycloplegic refraction, 1.6% of the children were myopic, 7.3% were hyperopic, and the incidence rate of astigmatism was approximately 0.7%. In the multivariate logistic regression, myopia and hyperopia were correlated with age (P = 0.040 and P < 0.002, respectively). Conclusions: The study showed considerable prevalence rates of the refractive errors myopia, hyperopia and astigmatism, and of amblyopia, in children 3-9 years of age in Tetovo. There was no correlation between the children's sex and the refractive errors found. There was a correlation with the need for corrective spectacles and the refractive errors they represent. Refractive errors were registered at a higher percentage in rural areas than in urban areas. Although with best-corrected vision the prevalence of impairment was less in urban than in rural populations, blindness remained nearly twice as high in the rural
8. The status of refractive errors in elementary school children in South Jeolla Province, South Korea
Directory of Open Access Journals (Sweden)
Jang JU
2015-07-01
Jung Un Jang,1 Inn-Jee Park2 1Department of Optometry, Eulji University, Seongnam, 2Department of Optometry, Kaya University, Gimhae, South Korea. Purpose: To assess the prevalence of refractive errors among elementary school children in South Jeolla Province of South Korea. Methods: The subjects were aged 8-13 years; a total of 1,079 elementary school children from Mokpo, South Jeolla Province, were included. In all participants, uncorrected visual acuity and objective and subjective refractions were determined using an auto ref-keratometer and phoropter. A spherical equivalent of -0.50 diopter (D) or worse was defined as myopia, +0.50 D or more was defined as hyperopia, and a cylinder refraction greater than 0.75 D was defined as astigmatism. Results: Out of 1,079 elementary school children, the prevalence of uncorrected, best-corrected, and corrected visual acuity (with the children's own spectacles) of 20/40 or worse in the better eye was 26.1%, 0.4%, and 20.2%, respectively. The uncorrected visual acuity was 20/200 or worse in the better eye in 5.7% of school children, and 5.2% of them already wore corrective spectacles. The prevalence of myopia, hyperopia, and astigmatism was 46.5% (95% confidence interval [CI]: 43.56-49.5), 6.2% (95% CI: 4.92-7.81), and 9.4% (95% CI: 7.76-11.25), respectively. Conclusion: The present study reveals a considerably high prevalence of refractive error among elementary school children in South Jeolla Province of South Korea, exceeding 50% of subjects. The prevalence of myopia in school children in Korea is similar to that in many other countries including the People's Republic of China, Malaysia, and Hong Kong. This may indicate that genetics and educational influences, such as studying and learning, may play a role in the progression of myopia in Korean elementary school children. Keywords: refractive error, elementary school children, visual acuity, myopia, astigmatism
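The screening thresholds quoted in this abstract (spherical equivalent of -0.50 D or worse for myopia, +0.50 D or more for hyperopia, cylinder greater than 0.75 D for astigmatism) translate directly into code; a small sketch with an illustrative function name:

```python
def classify_refraction(sphere, cylinder):
    """Label a refraction using the study's thresholds: spherical
    equivalent (SE) <= -0.50 D -> myopia, SE >= +0.50 D -> hyperopia,
    |cylinder| > 0.75 D -> astigmatism. A record can carry two labels
    (e.g. myopic astigmatism)."""
    se = sphere + cylinder / 2.0          # spherical equivalent
    labels = []
    if se <= -0.50:
        labels.append("myopia")
    elif se >= 0.50:
        labels.append("hyperopia")
    if abs(cylinder) > 0.75:
        labels.append("astigmatism")
    return labels or ["emmetropia"]
```

Note that different studies in this listing use different hyperopia cut-offs (+2.50 D in the Tetovo study above, +0.50 D here), so the thresholds belong in the function arguments of any real implementation.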
9. Fast and precise 3D fluorophore localization by gradient fitting
Science.gov (United States)
Ma, Hongqiang; Xu, Jianquan; Jin, Jingyi; Gao, Ying; Lan, Li; Liu, Yang
2016-02-01
Astigmatism imaging is widely used to encode the 3D position of a fluorophore in single-particle tracking and super-resolution localization microscopy. Here, we present a fast and precise localization algorithm based on gradient fitting to decode the 3D subpixel position of the fluorophore. This algorithm determines the center of the emitter by finding the position whose gradient direction distribution best fits the measured point spread function (PSF), and can retrieve the 3D subpixel position of the emitter in a single iteration. Through numerical simulation and experiments with mammalian cells, we demonstrate that our algorithm yields localization precision comparable to the traditional iterative Gaussian function fitting (GF) based method, while exhibiting over two orders of magnitude faster execution speed. Our algorithm is a promising online reconstruction method for 3D super-resolution microscopy.
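The core idea behind gradient-based center finding can be sketched in 2D: for a radially symmetric spot, every intensity-gradient vector lies on a line through the center, so the center is the least-squares intersection of those lines. This is a simplified illustration of the general idea (related to radial-symmetry localization), not the paper's exact 3D algorithm:

```python
import numpy as np

def gradient_center(img):
    """Subpixel (x, y) center of a radially symmetric spot as the
    weighted least-squares intersection of image-gradient lines."""
    gy, gx = np.gradient(img.astype(float))
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    g2 = (gx**2 + gy**2).ravel()
    w = g2                                      # weight by |gradient|^2
    norm = np.sqrt(np.maximum(g2, 1e-30))
    ux, uy = gx.ravel() / norm, gy.ravel() / norm   # unit gradient directions
    px, py = xs.ravel().astype(float), ys.ravel().astype(float)
    # Solve sum_i w_i (I - u_i u_i^T)(c - p_i) = 0 for c = (cx, cy):
    A11 = np.sum(w * (1 - ux * ux))
    A12 = np.sum(w * (-ux * uy))
    A22 = np.sum(w * (1 - uy * uy))
    b1 = np.sum(w * ((1 - ux * ux) * px - ux * uy * py))
    b2 = np.sum(w * (-ux * uy * px + (1 - uy * uy) * py))
    cx, cy = np.linalg.solve([[A11, A12], [A12, A22]], [b1, b2])
    return cx, cy

# Illustration: an off-grid Gaussian spot in a 32x32 frame.
yy, xx = np.mgrid[0:32, 0:32]
img = np.exp(-((xx - 15.3)**2 + (yy - 12.7)**2) / (2 * 2.0**2))
cx, cy = gradient_center(img)
```

Like the paper's method, this is non-iterative: one linear solve replaces the iterative Gaussian fit, which is where the speed advantage comes from.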
10. A polycarbonate ophthalmic-prescription lens series.
Science.gov (United States)
Davis, J K
1978-08-01
Improvements in polycarbonate material, production techniques, and scratch-resistant coatings, combined with a process-oriented design, have resulted in a precision lens series. Surface quality is comparable to that of untreated glass ophthalmic lenses. The repeatability of the process results in closely controlled axial power and off-axis performance. For most lens prescriptions, the ANSI Z80.1 optical-center specifications for prescription accuracy are maintained through a total field of view of 40 deg for an 8-mm range of center-of-rotation distances. Off-axis astigmatism is controlled for near-point seeing. The lenses are both lighter and thinner than those of crown glass. A scratch-resistant coating reduces the reflections normally associated with high-index (1.586) materials. Impact resistance exceeds that required by ANSI Z80.7 and is many times that required by ANSI Z80.1.
11. Correction factor for individuals with accommodative capacity based on automated refractor
Directory of Open Access Journals (Sweden)
Rodrigo Ueno Takahagi
2009-12-01
values with and without cycloplegia according to age. RESULTS: The correlation between the astigmatism diopter values with and without cycloplegia ranged from 81.52% to 92.27%. For the spherical diopter values the correlation was lower (53.57% to 87.78%). The astigmatism axis also showed low correlation values (28.86% to 58.80%). The multiple regression model according to age showed high multiple determination coefficients for myopia (86.38%) and astigmatism (79.79%). The lowest multiple determination coefficient was observed for the astigmatism axis (17.70%). CONCLUSION: A high correlation between refractive errors with and without cycloplegia was demonstrated for the cylindrical ametropias. Mathematical formulas for the cylindrical and spherical ametropias are presented as a correction factor for the refraction of patients not submitted to cycloplegia.
12. Alternative optical concept for electron cyclotron emission imaging
Energy Technology Data Exchange (ETDEWEB)
Liu, J. X., E-mail: jsliu9@berkeley.edu [Department of Physics, University of California Berkeley, Berkeley, California 94720 (United States); Milbourne, T. [Department of Physics, College of William and Mary, Williamsburg, Virginia 23185 (United States); Bitter, M.; Delgado-Aparicio, L.; Dominguez, A.; Efthimion, P. C.; Hill, K. W.; Kramer, G. J.; Kung, C.; Pablant, N. A.; Tobias, B. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08540 (United States); Kubota, S. [Department of Physics, University of California Los Angeles, Los Angeles, California 90095 (United States); Kasparek, W. [Department of Electrical Engineering, University of Stuttgart, Stuttgart (Germany); Lu, J. [Department of Physics, Chongqing University, Chongqing 400044 (China); Park, H. [Ulsan National Institute of Science and Technology, Ulsan 689-798 (Korea, Republic of)
2014-11-15
The implementation of advanced electron cyclotron emission imaging (ECEI) systems on tokamak experiments has revolutionized the diagnosis of magnetohydrodynamic (MHD) activity and improved our understanding of the instabilities which lead to disruptions. It is therefore desirable to have an ECEI system on the ITER tokamak. However, the large size of the optical components in presently used ECEI systems has, up to now, precluded the implementation of an ECEI system on ITER. This paper describes a new optical ECEI concept that employs a single spherical mirror as the only optical component and exploits the astigmatism of such a mirror to produce an image with one-dimensional spatial resolution on the detector. Since this alternative approach would only require a thin slit as the viewing port to the plasma, it would make the implementation of an ECEI system on ITER feasible. The results obtained from proof-of-principle experiments with a 125 GHz microwave system are presented.
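The astigmatism being exploited here is the classic off-axis behavior of a spherical mirror: the tangential and sagittal fans focus at different distances, f_t = (R/2)·cos(θ) and f_s = (R/2)/cos(θ). A one-line sketch of these textbook formulas (function name illustrative):

```python
import math

def mirror_foci(R, theta_deg):
    """Tangential and sagittal focal lengths of a spherical mirror of
    radius of curvature R used at off-axis incidence angle theta.
    The gap between them grows with theta -- the astigmatism that can
    give focusing (imaging) in one direction only."""
    t = math.radians(theta_deg)
    f_t = (R / 2.0) * math.cos(t)   # tangential (in-plane) focus
    f_s = (R / 2.0) / math.cos(t)   # sagittal (out-of-plane) focus
    return f_t, f_s

f_t, f_s = mirror_foci(1.0, 30.0)   # 1 m radius, 30 degrees off-axis
```

At normal incidence (θ = 0) the two foci coincide at R/2; placing the detector at one of the two separated foci yields sharp imaging along one dimension only, which is the basis of the single-mirror ECEI concept.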
13. Modified convolution method to reconstruct particle hologram with an elliptical Gaussian beam illumination.
Science.gov (United States)
Wu, Xuecheng; Wu, Yingchun; Yang, Jing; Wang, Zhihua; Zhou, Binwu; Gréhan, Gérard; Cen, Kefa
2013-05-20
Application of the modified convolution method to reconstruct digital inline holograms of particles illuminated by an elliptical Gaussian beam is investigated. Based on an analysis of the formation of the particle hologram using the Collins formula, the convolution method is modified to compensate the astigmatism by adding two scaling factors. Both simulated and experimental holograms of transparent droplets and opaque particles are used to test the algorithm, and the reconstructed images are compared with those from FRFT reconstruction. Results show that the modified convolution method can accurately reconstruct the particle image. This method has the advantage that the reconstructed images at different depth positions have the same size and resolution as the hologram. This work shows that digital inline holography has great potential for particle diagnostics in curved containers.
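The unmodified baseline that this paper extends is convolution-based reconstruction: the hologram is propagated numerically by multiplying its spectrum with a transfer function, which preserves pixel size at every depth. A sketch of the standard angular-spectrum (convolution) propagator for a plane-wave reference; the paper's two astigmatism-compensating scaling factors are not reproduced here:

```python
import numpy as np

def angular_spectrum(u, z, wavelength, dx):
    """Propagate a sampled complex field u by distance z via the
    angular-spectrum convolution: U_z = IFFT( FFT(u) * H ).
    dx is the pixel pitch; evanescent components are suppressed."""
    n, m = u.shape
    fy = np.fft.fftfreq(n, d=dx)[:, None]     # spatial frequencies (1/m)
    fx = np.fft.fftfreq(m, d=dx)[None, :]
    arg = 1.0 - (wavelength * fx)**2 - (wavelength * fy)**2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * z * kz) * (arg > 0)       # transfer function
    return np.fft.ifft2(np.fft.fft2(u) * H)

# Propagating forward then backward recovers the original field,
# since H(z) * H(-z) = 1 for the propagating components.
n, dx, wl = 64, 2e-6, 532e-9
y, x = np.mgrid[0:n, 0:n] * dx
u0 = np.exp(-((x - n * dx / 2)**2 + (y - n * dx / 2)**2)
            / (2 * (8 * dx)**2)).astype(complex)
u1 = angular_spectrum(u0, 1e-3, wl, dx)
u2 = angular_spectrum(u1, -1e-3, wl, dx)
```

Because the propagation is a convolution, the reconstruction at every z shares the hologram's sampling grid, which is the "same size and resolution" property the abstract highlights.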
14. A study on refractive errors among school children in Kolkata.
Science.gov (United States)
Das, Angshuman; Dutta, Himadri; Bhaduri, Gautam; De Sarkar, Ajay; Sarkar, Krishnendu; Bannerjee, Manas
2007-04-01
Childhood visual impairment due to refractive errors is a significant problem in school children and has a considerable impact on public health. To assess the magnitude of the problem, the present study was undertaken among school children aged 5 to 10 years in Kolkata. Detailed ophthalmological examination was carried out in the schools as well as in the Regional Institute of Ophthalmology, Kolkata. Among the 2317 students examined, 582 (25.11%) were suffering from refractive errors, myopia being the commonest (n = 325; 14.02%). Astigmatism affected 91 children (3.93%). The prevalence of refractive errors increases with age, but not statistically significantly (p > 0.05). There is also no significant difference in refractive errors between boys and girls. PMID:17822183
15. The Risk of Contact Lens Wear and the Avoidance of Complications
Directory of Open Access Journals (Sweden)
Farihah Tariq
2013-11-01
Contact lenses are lenses placed on the surface of the cornea to correct refractive errors such as myopia (short-sightedness), hypermetropia (far-sightedness) and astigmatism. Lens-related complications are becoming a greater health concern as an increasing number of individuals use them as an alternative to spectacles. Contact lenses alter the natural ocular environment and reduce the efficacy of the innate defences. Although many complications are minor, microbial keratitis is potentially blinding, and suspected cases should be rapidly diagnosed and referred to an ophthalmologist for treatment. Several risk factors have been identified, with extended wear, poor hand hygiene, and inadequate lens and lens-case care being the most significant. Promotion of good contact lens hygiene and practices is essential to reduce the adverse effects of contact lens wear.
16. Distinguishing nonlinear processes in atomic media via orbital angular momentum transfer
CERN Document Server
Akulshin, Alexander M; Mikhailov, Eugeniy E; Novikova, Irina
2014-01-01
We suggest a technique based on the transfer of topological charge from applied laser radiation to directional and coherent optical fields generated in ladder-type excited atomic media to identify the major processes responsible for their appearance. As an illustration, in Rb vapours we analyse transverse intensity and phase profiles of the forward-directed collimated blue and near-IR light using self-interference and astigmatic transformation techniques when either or both of two resonant laser beams carry orbital angular momentum. Our observations unambiguously demonstrate that emission at 1.37 μm is the result of a parametric four-wave mixing process involving only one of the two applied laser fields.
17. Spatial shaping for generating arbitrary optical dipoles traps for ultracold degenerate gases
CERN Document Server
Lee, Jeffrey G
2014-01-01
We present two spatial-shaping approaches -- phase and amplitude -- for creating two-dimensional optical dipole potentials for ultracold neutral atoms. When combined with an attractive or repulsive Gaussian sheet formed by an astigmatically focused beam, atoms are trapped in three dimensions resulting in planar confinement with an arbitrary network of potentials -- a free-space atom chip. The first approach utilizes an adaptation of the generalized phase-contrast technique to convert a phase structure embedded in a beam after traversing a phase mask, to an identical intensity profile in the image plane. Phase masks, and a requisite phase-contrast filter, can be chemically etched into optical material (e.g., fused silica) or implemented with spatial light modulators; etching provides the highest quality while spatial light modulators enable prototyping and realtime structure modification. This approach was demonstrated on an ensemble of thermal atoms. Amplitude shaping is possible when the potential structure ...
18. Contact lens fitting in a patient with Alport syndrome and posterior polymorphous corneal dystrophy: a case report
Directory of Open Access Journals (Sweden)
Juliana Maria da Silva Rosa
2016-02-01
Full Text Available ABSTRACT Alport Syndrome is a hereditary disease that is caused by a gene mutation and affects the production of collagen in basement membranes; this condition causes hemorrhagic nephritis associated with deafness and ocular changes. The X-linked form of this disease is the most common and mainly affects males. Typical ocular findings are dot-and-fleck retinopathy, anterior lenticonus, and posterior polymorphous corneal dystrophy. Some cases involving polymorphous corneal dystrophy and corneal ectasia have been previously described. Here we present a case report of a 33-year-old female with Alport syndrome, posterior polymorphous corneal dystrophy, and irregular astigmatism, whose visual acuity improved with a rigid gas permeable contact lens.
19. Corneal topographic changes after 20-gauge pars plana vitrectomy associated with scleral buckling for the treatment of rhegmatogenous retinal detachment
Directory of Open Access Journals (Sweden)
Alexandre Achille Grandinetti
2013-04-01
Full Text Available PURPOSE: To evaluate the changes in corneal topography after 20-gauge pars plana vitrectomy associated with scleral buckling for the repair of rhegmatogenous retinal detachment. METHODS: Twenty-five eyes of 25 patients with rhegmatogenous retinal detachment were included in this study. 20-gauge pars plana vitrectomy associated with scleral buckling was performed in all patients. The corneal topography of each was measured before surgery and one week, one month, and three months after surgery by computer-assisted videokeratoscopy. RESULTS: A statistically significant central corneal steepening (average 0.9 D, p<0.001) was noted one week after surgery. The total corneal astigmatism showed a significant increase in the first postoperative month (p=0.007). All these topographic changes persisted for the first month but returned to preoperative values three months after the surgery. CONCLUSION: Pars plana vitrectomy with scleral buckling was found to induce transient changes in corneal topography.
20. KERATOCONUS AND EPI-OFF CORNEAL CROSS-LINKING BY RIBOFLAVIN-ULTRAVIOLET TYPE A: INDICATIONS AND RATIONAL OF EMPLOYMENT
Directory of Open Access Journals (Sweden)
A. Caporossi
2010-11-01
Full Text Available Keratoconus is the most common dystrophic corneal ectasia, characterized by the presence of irregular astigmatism associated with a reduction of corneal thickness. It is the leading cause of corneal transplant in Italy and Europe. Recently a new therapeutic opportunity has been offered by riboflavin + UV-A corneal cross-linking, first introduced in Italy in 2004 by Professor Aldo Caporossi at the Department of Ophthalmology of Siena. This treatment requires early diagnosis to prevent corneal ectatic modifications related to the pathology. The modern treatment of keratoconus is directed along three lines: (1) prevention of its progression; (2) reduction of the related refractive defect and induced corneal aberrations; (3) replacement of the ectatic cornea in advanced stages not amenable to a conservative approach and in cases of HRGP lens intolerance. Riboflavin + UV-A collagen cross-linking is mostly indicated in patients between 10 and 26 years old with progressive keratoconus (stages 1 and 2), with strict adherence to the recommended inclusion thickness (thinnest point > 400 microns).
1. Velocity map imaging of a slow beam of ammonia molecules inside a quadrupole guide
CERN Document Server
Pérez, Marina Quintero; Bethlem, Hendrick L
2012-01-01
Velocity map imaging inside an electrostatic quadrupole guide is demonstrated. By switching the voltages that are applied to the rods, the quadrupole can be used for guiding Stark decelerated molecules and for extracting the ions. The extraction field is homogeneous along the axis of the quadrupole while it defocuses the ions in the direction perpendicular to both the axis of the quadrupole and the axis of the ion optics. To compensate for this astigmatism, a series of planar electrodes with horizontal and vertical slits is used. A velocity resolution of 35 m/s is obtained. It is shown that signal due to thermal background can be eliminated, resulting in the detection of slow molecules with an increased signal-to-noise ratio. As an illustration of the resolving power, we have used the velocity map imaging system to characterize the phase-space distribution of a Stark decelerated ammonia beam.
2. Velocity map imaging of a slow beam of ammonia molecules inside a quadrupole guide.
Science.gov (United States)
Quintero-Pérez, Marina; Jansen, Paul; Bethlem, Hendrick L
2012-07-21
Velocity map imaging inside an electrostatic quadrupole guide is demonstrated. By switching the voltages that are applied to the rods, the quadrupole can be used for guiding Stark decelerated molecules and for extracting the ions. The extraction field is homogeneous along the axis of the quadrupole, while it defocuses the ions in the direction perpendicular to both the axis of the quadrupole and the axis of the ion optics. To compensate for this astigmatism, a series of planar electrodes with horizontal and vertical slits is used. A velocity resolution of 35 m/s is obtained. It is shown that signal due to thermal background can be eliminated, resulting in the detection of slow molecules with an increased signal-to-noise ratio. As an illustration of the resolving power we have used the velocity map imaging system to characterize the phase-space distribution of a Stark decelerated ammonia beam. PMID:22652864
3. Polyplanar optic display
Energy Technology Data Exchange (ETDEWEB)
Veligdan, J.; Biscardi, C.; Brewster, C.; DeSanto, L. [Brookhaven National Lab., Upton, NY (United States). Dept. of Advanced Technology; Beiser, L. [Leo Beiser Inc., Flushing, NY (United States)
1997-07-01
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 100 milliwatt green solid state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, the authors discuss the electronic interfacing to the DLP™ chip, the opto-mechanical design and viewing angle characteristics.
4. Laser-driven polyplanar optic display
Energy Technology Data Exchange (ETDEWEB)
Veligdan, J.T.; Biscardi, C.; Brewster, C.; DeSanto, L. [Brookhaven National Lab., Upton, NY (United States). Dept. of Advanced Technology; Beiser, L. [Leo Beiser Inc., Flushing, NY (United States)
1998-01-01
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte-black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 200 milliwatt green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, the authors discuss the DLP chip, the optomechanical design and viewing angle characteristics.
5. Kabuki make-up (Niikawa-Kuroki) syndrome in five Spanish children
Energy Technology Data Exchange (ETDEWEB)
1995-11-20
We describe 5 Spanish children with Kabuki make-up syndrome (KMS) - 3 females and 2 males - identified in Badajoz, Spain, between 1988 and 1990. All had the characteristic clinical and radiological manifestations of the syndrome. Psychomotor/mental retardation, postnatal growth deficiency, distinctive facial appearance, sagittal vertebral clefts, and dermatoglyphic abnormalities were present in all 5. Congenital heart defects were present in 4 patients. In addition, one had myopia, astigmatism, and bilateral paralysis of the VI cranial nerve. Another had apparent fusion of the hamate and capitate. An additional patient, as well as his mother, had an apparently balanced 15/17 translocation. The fact that these patients were ascertained in a catchment area of approximately 250,000 inhabitants and in a relatively limited period of time suggests that the prevalence of the KMS may be higher than previously recognized. 30 refs., 6 figs., 2 tabs.
6. Impact of primary aberrations on coherent lidar performance
DEFF Research Database (Denmark)
Hu, Qi; Rodrigo, Peter John; Iversen, Theis Faber Quist;
2014-01-01
In this work we investigate the performance of a monostatic coherent lidar system in which the transmit beam is under the influence of primary phase aberrations: spherical aberration (SA) and astigmatism. The experimental investigation is realized by probing the spatial weighting function...... of the lidar system using different optical transceiver configurations. A rotating belt is used as a hard target. Our study shows that the lidar weighting function suffers from both spatial broadening and shift in peak position in the presence of aberration. It is to our knowledge the first experimental...... efficiency, the optimum truncation of the transmit beam and the spatial sensitivity of a CW coherent lidar system. Under a strong degree of aberration, the spatial confinement is significantly degraded. However, for SA, the degradation of the spatial confinement can be reduced by tuning the truncation...
7. The Possible Role of Peripheral Refraction in Development of Myopia.
Science.gov (United States)
Atchison, David A; Rosén, Robert
2016-09-01
Recent longitudinal studies do not support the current theory of relative peripheral hyperopia causing myopia. The theory is based on misunderstanding of the Hoogerheide et al. article of 1971, which actually found relative peripheral hyperopia to be present after, rather than before, myopia development. The authors present two alternative theories of the role of peripheral refraction in the development and progression of myopia. The one for which most detail is given is based on cessation of ocular growth when the periphery is at an emmetropic stage as determined by equivalent blur of the two line foci caused by oblique astigmatism. This paper is based on an invited commentary on the role of lens treatments in myopia from the 15th International Myopia Conference in Wenzhou, China in September 2015. PMID:27560691
8. Compact MEMS-based Adaptive Optics Optical Coherence Tomography for Clinical Use
Energy Technology Data Exchange (ETDEWEB)
Chen, D; Olivier, S; Jones, S; Zawadzki, R; Evans, J; Choi, S; Werner, J
2008-02-04
We describe a compact MEMS-based adaptive optics (AO) optical coherence tomography system with improved AO performance and ease of clinical use. A typical AO system consists of a Shack-Hartmann wavefront sensor and a deformable mirror that measure and correct the ocular and system aberrations. Because of limitations of current deformable-mirror technologies, the range of real-time ocular-aberration compensation in previous AO-OCT instruments was restricted. In this instrument, we proposed to add an optical apparatus to correct the patients' spectacle aberrations such as myopia, hyperopia and astigmatism. This eliminated the tedious process of trial lenses in clinical imaging. Different amounts of spectacle aberration compensation were achieved by motorized stages and automated with the AO computer for ease of clinical use. In addition, the compact AO-OCT was optimized to have minimum system aberrations to reduce AO registration errors and improve AO performance.
9. Laser in ophthalmology. Laser i oftalmologien
Energy Technology Data Exchange (ETDEWEB)
Syrdalen, P. (Rikshospitalet, Oslo (Norway))
1991-09-01
The article presents a brief history of the use of lasers in ophthalmology in Norway, from the introduction of the first argon photocoagulator in 1972 to the excimer laser in 1990. The argon photocoagulator is in daily use in all eye departments in Norway, and the main group of patients treated are those with diabetic retinopathy. Glaucoma has been treated with the argon laser with good results for the last ten years. The YAG laser, introduced in Norway in 1985, is used to treat secondary cataracts which occur after extracapsular cataract extractions and implantation of artificial lenses. In 1990, the excimer laser was introduced for refractive surgery (myopia, astigmatism). 4 refs., 6 figs., 1 tab.
10. Phototherapeutic Keratectomy Combined with Photorefractive Keratectomy for Treatment of Myopia with Corneal Scars
Institute of Scientific and Technical Information of China (English)
2000-01-01
To evaluate the effect of phototherapeutic keratectomy (PTK) combined with photorefractive keratectomy (PRK) in the treatment of myopia with corneal scars, corneal epithelium was removed with laser plus scraping. Corneal scars were removed with PTK, followed by PRK for myopia. Healon was used to make the corneal surface smoother during the operation. 30 eyes of 24 cases of myopia with corneal scars were followed up for one year. Mean corrected vision was 0.51 and myopic degree was -6.42D ± 4.26D before operation. After operation, corneal scars of 21 eyes (70.0%) were removed in the operative zone. The vision of 27 operated eyes (90.0%) was equal to or better than the best corrected vision. Mean postoperative visual acuity was 0.72. The corneal surface was smoother and astigmatism was reduced after the surgery. Our study showed that PTK combined with PRK is a safe and effective treatment for myopia with corneal scars.
11. Paediatric nasal polyps in cystic fibrosis.
Science.gov (United States)
Mohd Slim, Mohd Afiq; Dick, David; Trimble, Keith; McKee, Gary
2016-01-01
Patients with cystic fibrosis (CF) are at increased risk of nasal polyps. We present the case of a 17-month-old Caucasian patient with CF who presented with hypertelorism causing cycloplegic astigmatism, right-sided mucoid discharge, snoring and noisy breathing. Imaging suggested bilateral mucoceles in the ethmoid sinuses. Intraoperatively, bilateral soft tissue masses were noted, and both posterior choanae were patent. Polypectomy and bilateral mega-antrostomies were performed. Histological examination revealed inflammatory nasal polyposis typical of CF. The role of early functional endoscopic sinus surgery (FESS) in children with CF nasal polyposis remains questionable as the recurrence rate is higher, and no improvement in pulmonary function has been shown. Our case, however, clearly demonstrates the beneficial upper airway symptom relief and normalisation of facial appearance following FESS in a child with this condition. PMID:27329094
12. Comparative Study of Refractive Errors, Strabismus, Microsaccades, and Visual Perception Between Preterm and Full-Term Children With Infantile Cerebral Palsy.
Science.gov (United States)
Kozeis, Nikolaos; Panos, Georgios D; Zafeiriou, Dimitrios I; de Gottrau, Philippe; Gatzioufas, Zisis
2015-07-01
The purpose of this study was to examine the refractive status, orthoptic status and visual perception in a group of preterm and another of full-term children with cerebral palsy, in order to investigate whether prematurity has an effect on the development of refractive errors and binocular disorders. A hundred school-aged children, 70 preterm and 30 full-term, with congenital cerebral palsy were examined. Differences for hypermetropia, myopia, and emmetropia were not statistically significant between the 2 groups. Astigmatism was significantly increased in the preterm group. The orthoptic status was similar for both groups. Visual perception was markedly reduced in both groups, but the differences were not significant. In conclusion, children with cerebral palsy have impaired visual skills, leading to reading difficulties. The presence of prematurity does not appear to represent an additional risk factor for the development of refractive errors and binocular disorders. PMID:25296927
13. Genetic and environmental factors in orientation anisotropy: a field study in the British Isles.
Science.gov (United States)
Ross, H E; Woodhouse, J M
1979-01-01
Visual acuity for the detection of gratings at four orientations was measured for groups of ten boys and ten girls aged five to seven years, from the following four populations: Scots in Glasgow, Pakistanis in Glasgow, Gaels in Stornoway (Outer Hebrides) and East Anglians in Littleport (Cambridgeshire fenlands). The Glaswegians, both Scottish and Pakistani, showed the normal pattern of anisotropy, with poorest acuity for oblique orientations; the East Anglians showed no significant anisotropy; while the Gaels were unusual in showing poorest horizontal acuity. A group of fourteen Pakistani children in Stornoway differed slightly from a matched group of Gaels. The group differences bore little relation to the visual environments, and were probably due to genetic or cultural factors. The relatively poor horizontal acuity of the Gaels was not correlated with astigmatism. Sex differences were also found, with the boys showing higher mean acuity and a higher ratio between vertical and oblique acuity. PMID:503780
14. Toric implantable collamer lens for keratoconus
Directory of Open Access Journals (Sweden)
Mathew Kurian Kummelil
2013-01-01
Full Text Available Keratoconus is a progressive non-inflammatory thinning of the cornea that induces myopia and irregular astigmatism and decreases the quality of vision due to monocular diplopia, halos, or ghost images. Keratoconus patients unfit for corneal procedures and intolerant of refractive correction by spectacles or contact lenses have been implanted with toric posterior chamber phakic intraocular lenses (PC pIOLs), alone or combined with other surgical procedures, to correct the refractive errors associated with keratoconus, as an off-label procedure with special informed consent from the patients. Several reports attest to the safety and efficacy of the procedure, though the associated corneal higher-order aberrations would have an impact on the final visual quality.
15. Femtosecond laser enabled keratoplasty for advanced keratoconus
Directory of Open Access Journals (Sweden)
Yathish Shivanna
2013-01-01
Full Text Available Purpose: To assess the efficacy and advantages of femtosecond laser enabled keratoplasty (FLEK) over conventional penetrating keratoplasty (PKP) in advanced keratoconus. Materials and Methods: Detailed review of the literature of published randomized controlled trials of operative techniques in PKP and FLEK. Results: Fifteen studies were identified, analyzed, and compared with our outcome. FLEK was found to have a better outcome in view of earlier stabilization of uncorrected visual acuity (UCVA) and best corrected visual acuity (BCVA), and better refractive outcomes with low astigmatism, as compared with conventional PKP. Wound healing was also noted to be earlier, enabling early suture removal in FLEK. Conclusions: Studies relating to FLEK have shown better results than conventional PKP; however, further studies are needed to assess the safety and intraoperative complications of the procedure.
16. Evaluation of stereopsis in different types of ametropic amblyopia in children
Institute of Scientific and Technical Information of China (English)
李珊珊; 黄馨慧; 邱斌; 叶晗; 戴锦晖
2010-01-01
Objective: To evaluate stereopsis in children with different types of ametropic amblyopia. Methods: A total of 205 ametropic amblyopic children aged 4-8 years (mean 5.2 ± 1.8 years), comprising 65 with astigmatic, 30 with myopic and 110 with hypermetropic amblyopia, were examined. Using Yan's random-dot stereograms and a synoptophore, the distance fusion range, distance and near stereoacuity, and stereoperception at near-zero, crossed and uncrossed disparity were measured. Results: In mild and moderate amblyopia there were significant differences in near-zero disparity among the three types, with hypermetropic amblyopia least affected, myopic intermediate and astigmatic worst (all P<0.05); distance stereopsis also differed significantly, being better in hypermetropic and worse in astigmatic amblyopia (all P<0.05). In severe amblyopia, neither near-zero disparity nor distance stereopsis differed significantly among the three types (P>0.05), and the distance fusion range showed no significant difference among the three types (P>0.05). The severity of amblyopia had a marked effect on all three grades of visual function (P<0.05): the more severe the amblyopia, the greater the effect. Conclusion: Amblyopia impairs the establishment of stereopsis in childhood, and astigmatic amblyopia affects stereopsis more than myopic or hypermetropic amblyopia.
17. Working Beyond the Static Limits of Laser Stability by Use of Adaptive and Polarization-Conjugation Optics
Science.gov (United States)
Moshe, Inon; Jackel, Steven; Lallouz, Raphael
1998-09-01
Strong thermo-optical aberrations in flashlamp-pumped Nd:Cr:GSGG rods were corrected to yield TEM00 output at twice the efficiency of Nd:YAG. A hemispherical resonator operating at the limit of stability was employed. As much as 3 W of average power in a Gaussian beam (M² ≈ 1) was generated. Unique features were zero warm-up time and the ability to vary the repetition rate without varying energy, near- and far-field profiles, or polarization purity. Thermal focusing and astigmatism were corrected with a microprocessor-controlled adaptive-optics back mirror composed of discrete elements (variable-radius mirror). A reentrant resonator coupled polarizer losses back into the laser rod and corrected depolarization.
18. Dynamic compensation of thermal lensing and birefringence in a high-brightness Nd:Cr:GSGG oscillator
Science.gov (United States)
Moshe, Inon; Jackel, Steven M.; Lallouz, Raphael; Tzuk, I.
1997-09-01
In this work, five fundamental concepts were combined to allow development of high-efficiency, low-divergence, narrow-bandwidth, flashlamp-pumped oscillators capable of operation over a broad operating range. These concepts were: flashlamp-pumped Nd:Cr:GSGG to achieve high efficiency; a 'reentrant cavity' to eliminate birefringence losses; a variable-radius back mirror in a hemispherical cavity to achieve maximum Gaussian beam fill factor; a very high damage threshold, spectrum-narrowing output coupler fabricated using a stack of uncoated etalons to form a resonant reflector; and a cylindrical zoom lens to completely eliminate astigmatism. The results were successful and yielded an oscillator that produced 10 mJ, TEM00, 300 MHz bandwidth, 75 ns pulses, over a repetition rate of 1-20 Hz, and at a slope efficiency of 2 percent. These techniques were also successfully applied to a YLF oscillator. They may, in part, be adapted for use in unstable resonators.
19. Comparison of Adaptive Optics and Phase-Conjugate Mirrors for Correction of Aberrations in Double-Pass Amplifiers
Science.gov (United States)
Jackel, Steven; Moshe, Inon; Lavi, Raphy
2003-02-01
Correction of birefringence-induced effects (depolarization and bipolar focusing) were achieved in double-pass amplifiers by use of a Faraday rotator between the laser rod and the retroreflecting optic. A necessary condition was ray retrace. Retrace was limited by imperfect conjugate-beam fidelity and by nonreciprocal refractive indices. We compared various retroreflectors: stimulated-Brillouin-scatter phase-conjugate mirrors (PCMs), PCMs with rod-to-PCM relay imaging (IPCM), IPCMs with astigmatism-correcting adaptive optics, and all-adaptive-optics imaging variable-radius mirrors. Results with flash-lamp-pumped, Nd:Cr:GSGG double-pass amplifiers showed the superiority of adaptive optics over nonlinear optics retroreflectors in terms of maximum average power, improved beam quality, and broader oscillator pulse duration /bandwidth operating range. Hybrid PCM-adaptive optics retroreflectors yielded intermediate power /beam-quality results.
20. THERMAL LENSING MEASUREMENTS IN THE ANISOTROPIC LASER CRYSTALS UNDER DIODE PUMPING
Directory of Open Access Journals (Sweden)
P. A. Loiko
2012-01-01
Full Text Available An experimental setup was developed for thermal lensing measurements in anisotropic diode-pumped laser crystals. The studied crystal is placed into a stable two-mirror laser cavity operating at the fundamental transverse mode. The output beam radius is measured with respect to the pump intensity for different meridional planes (all these planes contain the light propagation direction). These dependencies are fitted using the ABCD matrix method in order to obtain the sensitivity factors showing the change of the optical power of the thermal lens due to variation of the pump intensity. The difference of the sensitivity factors for two mutually orthogonal principal meridional planes describes the degree of thermal lens astigmatism. By means of this approach, thermal lensing was characterized in the diode-pumped monoclinic Np-cut Nd:KGd(WO4)2 laser crystal at the wavelength of 1.067 μm for light polarization E || Nm.
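The ABCD fitting procedure described in the abstract can be sketched numerically. A minimal sketch, assuming a simple plano-concave cavity with a thin thermal lens inside (all geometry values below are hypothetical, not from the paper): the round-trip ABCD matrix yields the self-consistent Gaussian q-parameter, from which the mode radius on the output mirror, and hence a sensitivity factor per dioptre of thermal lens, follows.

```python
import cmath
import math

# All geometry below is hypothetical; the paper does not give cavity dimensions.
WAVELENGTH = 1.067e-6   # m, the laser line quoted in the abstract
L1, L2 = 0.05, 0.25     # m, flat-mirror-to-crystal and crystal-to-curved-mirror
R_CONCAVE = 0.5         # m, radius of curvature of the back mirror

def mul(m, n):
    """Product of two 2x2 ray matrices stored as tuples (a, b, c, d)."""
    (a, b, c, d), (e, f, g, h) = m, n
    return (a * e + b * g, a * f + b * h, c * e + d * g, c * f + d * h)

def free(d):
    return (1.0, d, 0.0, 1.0)

def lens(p):            # p = 1/f in dioptres (thin thermal lens)
    return (1.0, 0.0, -p, 1.0)

def mirror(r):          # concave mirror, r > 0
    return (1.0, 0.0, -2.0 / r, 1.0)

def mode_radius(p_thermal):
    """Fundamental-mode radius on the flat output mirror, from the
    self-consistent Gaussian q-parameter of the round-trip ABCD matrix."""
    elems = [free(L1), lens(p_thermal), free(L2), mirror(R_CONCAVE),
             free(L2), lens(p_thermal), free(L1)]
    m = (1.0, 0.0, 0.0, 1.0)
    for e in elems:                       # traversal order: m_total = e_n ... e_1
        m = mul(e, m)
    a, b, c, d = m
    assert abs(a + d) < 2.0, "cavity unstable at this lens power"
    # q solves c*q**2 + (d - a)*q - b = 0; keep the root with Im(q) > 0
    q = ((a - d) + cmath.sqrt((a - d) ** 2 + 4 * b * c)) / (2 * c)
    if q.imag < 0:
        q = q.conjugate()
    return math.sqrt(-WAVELENGTH / (math.pi * (1 / q).imag))

# Sweeping the thermal-lens power gives the kind of sensitivity factor the paper fits.
w0, w2 = mode_radius(0.0), mode_radius(2.0)
sensitivity = (w2 - w0) / 2.0             # metres of mode radius per dioptre
print(w0, w2, sensitivity)
```

In the experiment the dependence runs the other way, with measured radii fitted to recover the lens power, but the forward model is the same.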
1. Assessment of the diagnostic value of age for meridional amblyopia with logistic regression and receiver operating characteristic curve
Institute of Scientific and Technical Information of China (English)
李辉; 许江涛; 蒋晓明; 周莹
2013-01-01
Objective: To investigate, using multivariate logistic regression and ROC curves, whether age influences the diagnosis of meridional amblyopia in children. Methods: The subjects were 1005 children (1910 eyes) aged 4-8 years attending our ophthalmology clinic between 2008 and 2011, whose main refractive abnormality was astigmatism; anisometropia and strabismus were excluded. Cylinder of at least 1.00D and sphere of at most 3.00D were present in one or both eyes; the interocular difference was less than 1.50D for sphere and less than 1.00D for astigmatism, with all astigmatism calculated by absolute value. The influence of age, sex, astigmatism type, absolute cylinder power and absolute sphere power on the diagnosis of meridional amblyopia was analysed by logistic regression, and the influence of age was further examined with the area under the ROC curve (AUC). Results: Model 1 included four parameters (sex, astigmatism type, cylinder power and sphere power); model 2 added age. In both models, logistic regression indicated that cylinder power influenced the diagnosis of meridional amblyopia; in model 2, age was an additional influencing factor. The AUC was 0.64 for model 1 and 0.74 for model 2, a statistically significant difference. Conclusion: Age is an influencing factor in the diagnosis of meridional amblyopia in children.
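The model comparison in this abstract rests on the area under the ROC curve. A toy reconstruction with synthetic data (the risk model, coefficients and variable ranges below are invented for illustration, not taken from the study): AUC is computed in its rank-based Mann-Whitney form, and a cylinder-only score is compared against a score that also uses age.

```python
import math
import random

def roc_auc(labels, scores):
    """Rank-based ROC AUC (Mann-Whitney form): the probability that a
    randomly chosen positive case scores higher than a negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic cohort: amblyopia odds rise with cylinder power and fall with age.
random.seed(7)
cases = []
for _ in range(1000):
    cyl = random.uniform(1.0, 4.0)        # |cylinder| in dioptres
    age = random.uniform(4.0, 8.0)        # years
    logit = 1.2 * (cyl - 2.5) - 0.8 * (age - 6.0)
    y = 1 if random.random() < 1.0 / (1.0 + math.exp(-logit)) else 0
    cases.append((y, cyl, age))

labels = [y for y, _, _ in cases]
score1 = [cyl for _, cyl, _ in cases]                       # "model 1": no age
score2 = [1.2 * cyl - 0.8 * age for _, cyl, age in cases]   # "model 2": adds age

auc1, auc2 = roc_auc(labels, score1), roc_auc(labels, score2)
print(round(auc1, 2), round(auc2, 2))
```

With these synthetic data the age-augmented score separates cases better, mirroring the direction of the reported AUC gain (0.64 to 0.74).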
2. Research on the Human Visual Indicators of Computer-Controlled Intelligence Acquisition System%人体视力指标计算机智能采集系统研究
Institute of Scientific and Technical Information of China (English)
邢紫阳; 李明东; 彭鼎
2010-01-01
Based on national standards for visual acuity measurement and the characteristics of automatic computer control, the feasibility and measurement method of a computer-controlled intelligent acquisition system for human visual indicators were studied and analysed. The system can acquire realistic and reliable near visual acuity values, can identify colour blindness, colour weakness and astigmatism, and has very high acquisition efficiency.
3. Influence of wave-front aberrations on bit error rate in inter-satellite laser communications
Science.gov (United States)
Yang, Yuqiang; Han, Qiqi; Tan, Liying; Ma, Jing; Yu, Siyuan; Yan, Zhibin; Yu, Jianjie; Zhao, Sheng
2011-06-01
We derive the bit error rate (BER) of inter-satellite laser communication (lasercom) links with on-off-keying systems in the presence of both wave-front aberrations and pointing error, but without considering the noise of the detector. Wave-front aberrations induced by the receiver terminal have no influence on the BER, while wave-front aberrations induced by the transmitter terminal will increase the BER. The BER depends on the area S which is truncated out by the threshold intensity of the detector (such as an APD) on the intensity function in the receiver plane, and changes with the root mean square (RMS) of the wave-front aberrations. Numerical results show that the BER rises with increasing RMS value. The influences of astigmatism, coma, curvature and spherical aberration on the BER are compared. This work can benefit the design of lasercom systems.
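The qualitative claim that BER rises with wavefront RMS can be illustrated with a toy link-budget model (not the paper's derivation): the Maréchal approximation converts wavefront RMS into a Strehl ratio, and the reduced on-axis intensity is assumed to scale down the detection Q-factor of a Gaussian-noise OOK receiver. All numbers are hypothetical.

```python
import math

def strehl(rms_waves):
    """Marechal approximation: on-axis intensity ratio for a small
    wavefront RMS error expressed in waves."""
    return math.exp(-(2.0 * math.pi * rms_waves) ** 2)

def ook_ber(q_factor):
    """On-off-keying bit error rate for a Gaussian-noise Q-factor."""
    return 0.5 * math.erfc(q_factor / math.sqrt(2.0))

Q0 = 6.0   # hypothetical Q-factor of the aberration-free link (BER ~ 1e-9)
rms_values = (0.0, 0.05, 0.10, 0.15)
# Assumed scaling: received signal amplitude falls with sqrt(Strehl ratio).
bers = [ook_ber(Q0 * math.sqrt(strehl(r))) for r in rms_values]
for r, ber in zip(rms_values, bers):
    print(f"RMS = {r:.2f} waves -> BER = {ber:.2e}")
```

Even this crude model reproduces the monotonic rise of BER with aberration RMS reported in the abstract.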
4. Effects of Turbulent Aberrations on Probability Distribution of Orbital Angular Momentum for Optical Communication
Institute of Scientific and Technical Information of China (English)
ZHANG Yi-Xin; CANG Ji
2009-01-01
Effects of atmospheric turbulence tilt, defocus, astigmatism and coma aberrations on the orbital angular momentum measurement probability of photons propagating in the weak turbulence regime are modeled with the Rytov approximation. By considering the resulting wave as a superposition of angular momentum eigenstates, the orbital angular momentum measurement probabilities of the transmitted digit are presented. Our results show that the effect of turbulent tilt aberration on the orbital angular momentum measurement probabilities of photons is the largest among these four kinds of aberrations. As the aberration order increases, the effects of turbulence aberrations on the measurement probabilities of orbital angular momentum generally decrease, whereas the effect of turbulence defocus can be ignored. For tilt aberration, as the difference between the measured orbital angular momentum and the original orbital angular momentum increases, the orbital angular momentum measurement probability decreases.
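The dominance of tilt among the aberrations can be illustrated with a toy one-dimensional model (not the paper's Rytov calculation): on a thin ring, a field exp(ilφ) with an added tilt phase a·cos φ decomposes into OAM eigenstates with weights |J_{l-m}(a)|², so even a modest tilt leaks power into neighbouring m values, falling off as |l-m| grows. A numerical sketch with an arbitrary tilt strength:

```python
import cmath
import math

def oam_spectrum(l, tilt, n=2048, mmax=5):
    """OAM components of a ring field exp(i*l*phi) perturbed by a tilt
    phase tilt*cos(phi). Returns {m: probability} for m near l.
    Analytically the weights are |J_{l-m}(tilt)|**2 (Bessel functions)."""
    probs = {}
    for m in range(l - mmax, l + mmax + 1):
        s = 0j
        for k in range(n):  # discretized overlap integral over the ring
            phi = 2.0 * math.pi * k / n
            s += cmath.exp(1j * ((l - m) * phi + tilt * math.cos(phi)))
        probs[m] = abs(s / n) ** 2
    return probs

p = oam_spectrum(l=1, tilt=0.8)   # tilt strength is an arbitrary example value
print({m: round(v, 3) for m, v in sorted(p.items())})
```

The spectrum is peaked at the launched m = l, symmetric about it, and decays rapidly with |l - m|, matching the qualitative behaviour described for tilt.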
5. Corneal Topographical Changes Following Strabismus Surgery
Institute of Scientific and Technical Information of China (English)
Mai GH; Wang Z
1999-01-01
Purpose: To study corneal topographical changes after strabismus surgery. Methods: Computer-aided corneal topography was used in 43 strabismus patients (45 eyes) one or two days prior to and six or seven days after strabismus surgery. The spherical and cylindrical equivalents were calculated based on the simulated keratometry. Results: After the surgery, only the changes at 3 mm in the inferior quadrant were statistically significant. The changes at 3 mm in the remaining quadrants and the changes at 7 mm were not significant. Significant changes in spherical equivalent were found post-operatively. Neither the horizontal nor the vertical meridional equivalent showed significant changes after surgery. Conclusions: The corneal topographical changes following strabismus surgery in our preliminary study indicate that strabismus surgery has little effect on corneal curvature and corneal astigmatism.
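For reference, the "spherical and cylindrical equivalents" of a pair of keratometry readings reduce to simple arithmetic on the two principal corneal powers. A minimal sketch follows; this is the standard reduction (mean power and power difference), assumed here rather than quoted from the paper, and the example readings are illustrative:

```python
def keratometric_equivalents(k_steep_d, k_flat_d):
    """Reduce two principal corneal powers (in diopters) to the spherical
    equivalent (mean power) and cylindrical equivalent (astigmatism)."""
    spherical_eq = (k_steep_d + k_flat_d) / 2.0   # mean corneal power
    cylindrical_eq = k_steep_d - k_flat_d         # corneal astigmatism
    return spherical_eq, cylindrical_eq

# Example: hypothetical SimK readings of 44.50 D (steep) and 43.50 D (flat)
se, ce = keratometric_equivalents(44.50, 43.50)
print(se, ce)  # 44.0 1.0
```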
6. Study of X-ray Kirkpatrick-Baez imaging with single layer
Institute of Scientific and Technical Information of China (English)
Baozhong Mu; Zhanshan Wang; Shengzhen Yi; Xin Wang; Shengling Huang; Jingtao Zhu; Chengchao Huang
2009-01-01
The X-ray Kirkpatrick-Baez (KB) imaging experiment with single layers is implemented. Based on the astigmatism aberration and residual geometric aberration of a single mirror, a KB system with 16x mean magnification and an approximately 0.45° grazing incidence angle is designed. The mirrors are deposited with an Ir layer of 20-nm thickness. Au grids backlit by an 8-keV X-ray tube are imaged via the KB system on a scintillator charge-coupled device (CCD). In the ±80 μm field, resolutions of less than 5 μm are measured. The result is in good agreement with the simulated imaging.
7. Retinal optical coherence tomography study on children with anisometropic monocular amblyopia
Institute of Scientific and Technical Information of China (English)
初翠英; 代春华; 宋修芬; 纪芳; 蒋广伟; 姜善好
2014-01-01
Objective: To assess the retinal nerve fiber layer (RNFL) and the fovea in children with anisometropic amblyopia by optical coherence tomography (OCT). Methods: OCT was performed on 38 children with anisometropic monocular amblyopia; 18 had hypermetropic-astigmatic amblyopia and 20 had purely hypermetropic amblyopia, with the contralateral healthy eyes serving as normal controls. The thicknesses of the peripapillary retinal nerve fiber layer and of the fovea were recorded and compared between amblyopic and normal eyes. Results: The mean peripapillary RNFL thickness was 115.77±13.42 μm, 111.34±10.30 μm and 103.05±11.10 μm, and the foveal thickness was 198.86±28.30 μm, 191.98±27.81 μm and 181.18±29.06 μm in the astigmatic amblyopia, hypermetropic amblyopia and normal eyes, respectively. The differences between each amblyopia group and the controls, and between the two amblyopia groups, were statistically significant (P<0.05). Conclusions: The peripapillary RNFL and the fovea in anisometropic amblyopic eyes are thicker than in the contralateral normal eyes, and thicker in astigmatic than in purely hypermetropic amblyopic eyes.
8. A Case of Congenital Retinal Macrovessel Crossing the Foveola
Directory of Open Access Journals (Sweden)
Cem Özgönül
2014-03-01
Congenital retinal macrovessel is generally the presence of a unilateral aberrant vessel crossing over the horizontal raphe through the macula. Typically, visual acuity is unaffected, although in rare cases macular hemorrhage, foveolar cysts, foveal contour impairment, and the presence of an anomalous vessel in the foveola can affect vision. In our case, visual acuity of the right eye was counting fingers at 3 meters. The patient had four diopters of oblique astigmatism, esotropia, and dissociated vertical deviation. Fundoscopy revealed an aberrant vein crossing the foveola. Spectral OCT examination showed hyperreflectivity of the vessel, and fluorescein angiography showed no leakage from the vessel. Although the literature specifies that an aberrant vein crossing the fovea can lower visual acuity, in our case we believe the low visual acuity is due to deep amblyopia. (Turk J Ophthalmol 2014; 44: 154-5)
9. Cost-effective solar furnace system using fixed geometry Non-Imaging Focusing Heliostat and secondary parabolic concentrator
Energy Technology Data Exchange (ETDEWEB)
Chong, K.K.; Lim, C.Y.; Hiew, C.W. [Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Off Jalan Genting Kelang, Setapak, Kuala Lumpur 53300 (Malaysia)
2011-05-15
A novel cost-effective solar furnace system is proposed, consisting of a Non-Imaging Focusing Heliostat (NIFH) and a much smaller parabolic concentrator. To simplify the design, and hence reduce cost, a fixed geometry of the NIFH heliostat is adopted, omitting the requirement for continuous astigmatic correction throughout the year by local controllers. The performance of this novel solar furnace configuration can be optimized when the heliostat's spinning axis is oriented such that the annual variations of the incident angle, and therefore of the aberrant image size, are least. To verify the new configuration, a prototype solar furnace has been constructed at Universiti Tunku Abdul Rahman. (author)
10. AO-OCT for in vivo mouse retinal imaging: Application of adaptive lens in wavefront sensorless aberration correction
Science.gov (United States)
Bonora, Stefano; Jian, Yifan; Pugh, Edward N.; Sarunic, Marinko V.; Zawadzki, Robert J.
2014-03-01
We demonstrate adaptive optics optical coherence tomography (AO-OCT) with modal sensorless adaptive optics correction using a novel Adaptive Lens (AL), applied to in-vivo imaging of mouse retinas. The AL can generate the low-order aberrations defocus, astigmatism, coma and spherical aberration, which were used in an adaptive search algorithm. Accelerated processing of the OCT data with a graphics processing unit (GPU) permitted real-time extraction of the total image-projection intensity for an arbitrarily selected retinal depth plane to be optimized. Wavefront sensorless control is a viable option for imaging biological structures for which AO-OCT cannot establish a reliable wavefront to be corrected by a wavefront corrector. The image quality improvement offered by the adaptive lens with sensorless AO-OCT was evaluated on in vitro samples, followed by mouse retina data acquired in vivo.
11. Intraocular lens design for treating high myopia based on individual eye model
Science.gov (United States)
Wang, Yang; Wang, Zhaoqi; Wang, Yan; Zuo, Tong
2007-02-01
In this research, we first design a phakic intraocular lens (PIOL) based on an individual eye model with the optical design software ZEMAX. The individual PIOL is designed to correct defocus and astigmatism; we then compare the PIOL power calculated from the individual eye model with that from the empirical formula. Close values of PIOL power are obtained between the individual eye model and the formula, but the suggested method is more accurate and offers more functions. The impact of PIOL decentration on the human eye is evaluated, including rotational decentration, flat-axis decentration, steep-axis decentration and axial movement of the PIOL, which is impossible with the traditional method. To control PIOL decentration errors, we give the limit values of PIOL decentration for the specific eye in this study.
12. High-resolution, flat-field, plane-grating, f/10 spectrograph with off-axis parabolic mirrors.
Science.gov (United States)
Schieffer, Stephanie L; Rimington, Nathan W; Nayyar, Ved P; Schroeder, W Andreas; Longworth, James W
2007-06-01
A high-resolution, flat-field, plane-grating, f/10 spectrometer based on the novel design proposed by Gil and Simon [Appl. Opt. 22, 152 (1983)] is demonstrated. The spectrometer design employs off-axis parabolic collimation and camera mirrors in a configuration that eliminates spherical aberrations and minimizes astigmatism, coma, and field curvature in the image plane. In accordance with theoretical analysis, the performance of this spectrometer achieves a high spatial resolution over the large detection area, which is shown to be limited only by the quality of its optics and their proper alignment within the spatial resolution of a 13 μm × 13 μm pixelated CCD detector. With a 1500 lines/mm grating in first order, the measured spectral resolving power of λ/Δλ = 2.5(±0.5) × 10⁴ allows the clear resolution of the violet Ar(I) doublet at 419.07 and 419.10 nm.
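The quoted resolving power can be sanity-checked with one line of arithmetic: at λ ≈ 419 nm, λ/Δλ = 2.5 × 10⁴ gives Δλ ≈ 0.017 nm, comfortably below the 0.03 nm separation of the Ar(I) doublet. In Python, using only the numbers from the abstract:

```python
# Resolving-power check for the f/10 spectrograph (values from the abstract).
wavelength_nm = 419.07          # lower line of the Ar(I) doublet
resolving_power = 2.5e4         # measured lambda / delta-lambda

delta_lambda_nm = wavelength_nm / resolving_power
doublet_separation_nm = 419.10 - 419.07   # ~0.03 nm

print(f"smallest resolvable interval: {delta_lambda_nm:.4f} nm")
print(f"doublet separation:           {doublet_separation_nm:.2f} nm")
assert delta_lambda_nm < doublet_separation_nm  # the doublet is resolved
```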
13. Microchip laser operation of Tm,Ho:KLu(WO₄)₂ crystal.
Science.gov (United States)
Loiko, Pavel; Serres, Josep Maria; Mateos, Xavier; Yumashev, Konstantin; Kuleshov, Nikolai; Petrov, Valentin; Griebner, Uwe; Aguiló, Magdalena; Díaz, Francesc
2014-11-17
A microchip laser is realized on the basis of a monoclinic Tm,Ho-codoped KLu(WO₄)₂ crystal cut for light propagation along the Ng optical indicatrix axis. This crystal cut provides a positive thermal lens with extremely weak astigmatism, S/M = 4%. High sensitivity factors, M = dD/dP(abs), of 24.9 and 24.1 m⁻¹/W for the mg- and pg-tangential planes are calculated with respect to the absorbed pump power. Such thermo-optic behavior is responsible for mode stabilization in the plano-plano microchip laser cavity, as well as for the demonstrated circular beam profile (M² value truncated in the source record); laser performance losses are attributed to the increased up-conversion losses. PMID:25402038
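The sensitivity factors quoted above define a linear thermal-lens model, D = M · P_abs. A hedged sketch of what they imply for focal length and astigmatism follows; the absorbed power and the S/M formula below are my assumptions, not the paper's, and the ratio they give (≈3%) only approximates the quoted 4%:

```python
# Linear thermal-lens model implied by the quoted sensitivity factors:
# optical power D (in m^-1) grows with absorbed pump power as D = M * P_abs.
M_plane_1 = 24.9   # m^-1/W, first tangential plane (plane labels garbled in record)
M_plane_2 = 24.1   # m^-1/W, second tangential plane
P_abs = 1.0        # W, hypothetical absorbed pump power (not from the abstract)

D1 = M_plane_1 * P_abs
D2 = M_plane_2 * P_abs
focal_mm = 1000.0 / D1                     # focal length of the stronger plane, mm
astig_ratio = (D1 - D2) / ((D1 + D2) / 2)  # one plausible reading of S/M

print(f"f = {focal_mm:.1f} mm, astigmatism ratio = {astig_ratio:.1%}")
```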
14. Comparative life test of 0.8-micron-laser diodes for SILEX under NRZ and QPPM modulation
Science.gov (United States)
Menke, Bodo; Loeffler, Roland
1991-06-01
The procedures and preliminary results of accelerated life tests performed within the framework of an evaluation program under the ESA contract are described. In order to calculate the activation energy and median lifetime and to investigate the drift behavior of optical parameters, a conventional three-temperature aging test at 30, 50, and 70 C is performed on 80 laser diodes in total, split into two subgroups operating under quaternary pulse position modulation (QPPM) and nonreturn-to-zero (NRZ) modulation at 16 Mbit/s with a PN-code length of 2⁷−1. Measurements before and upon completion of the aging tests consist of P/I curves, V/I characteristics, photo diode tracking ratios, spectra, mode hopping behaviors, far-field patterns, wave-front errors and astigmatisms, and linear polarization ratios.
15. Vision improvement by correcting higher-order aberrations with customized soft contact lenses in keratoconic eyes
Science.gov (United States)
Sabesan, Ramkumar; Jeong, Tae Moon; Carvalho, Luis; Cox, Ian G.; Williams, David R.; Yoon, Geunyoung
2007-04-01
Higher-order aberration correction in abnormal eyes can result in significant vision improvement, especially in eyes with abnormal corneas. Customized optics such as phase plates and customized contact lenses are one of the most practical, nonsurgical ways to correct these ocular higher-order aberrations. We demonstrate the feasibility of correcting higher-order aberrations and improving visual performance with customized soft contact lenses in keratoconic eyes while compensating for the static decentration and rotation of the lens. A reduction of higher-order aberrations by a factor of 3 on average was obtained in these eyes. The higher-order aberration correction resulted in an average improvement of 2.1 lines in visual acuity over the conventional correction of defocus and astigmatism alone.
16. Three-dimensional phenomena in microbubble acoustic streaming
CERN Document Server
Marin, Alvaro; Rallabandi, Bhargav; Wang, Cheng; Hilgenfeldt, Sascha; Kähler, Christian J
2015-01-01
Ultrasound-driven oscillating microbubbles have been used as active actuators in microfluidic devices to perform manifold tasks such as mixing, sorting and manipulation of microparticles. A common configuration consists of side bubbles, created by trapping air pockets in blind channels perpendicular to the main channel direction. This configuration produces acoustically excited bubbles with a semi-cylindrical shape that generate significant streaming flow. Due to the geometry of the channels, such flows have generally been considered quasi-two-dimensional. Similar assumptions are often made in many other microfluidic systems based on flat micro-channels. However, in this paper we show that microparticle trajectories actually present a much richer behavior, with particularly strong out-of-plane dynamics in regions close to the microbubble interface. Using Astigmatism Particle Tracking Velocimetry, we reveal that the apparent planar streamlines are actually projections of a streamsurface wi...
17. Quality descriptors of optical beams based on centred reduced moments I spot analysis
CERN Document Server
Castañeda, R; García-Sucerquia, J
2003-01-01
A method for analyzing beam spots is discussed. It is based on the central reduced moments of the spot and its associated density functions. These functions allow us to separately analyze specific spot fractions, in such a way that specific combinations of higher-order moments can be interpreted as the coordinates of their centre of mass and the lengths and orientations of their principal axes. Thus, the descriptors of the associated density functions provide quantitative estimation of spot features, such as coma-like and astigmatism-like distortions. To assure high accuracy, background noise suppression and an optimal match of the spot support onto the region [-1,1]×[-1,1] are performed prior to the calculation of the moments. Simulations were performed to illustrate the method.
18. Aberration measurement from specific photolithographic images: a different approach.
Science.gov (United States)
Nomura, H; Tawarayama, K; Kohno, T
2000-03-01
Techniques for the measurement of higher-order aberrations of a projection optical system in photolithographic exposure tools have been established. Even-type and odd-type aberrations are independently obtained from printed grating patterns on a wafer by three-beam interference under highly coherent illumination. Even-type aberrations, i.e., spherical aberration and astigmatism, are derived from the best focus positions of vertical, horizontal, and oblique grating patterns measured with an optical microscope. Odd-type aberrations, i.e., coma and trefoil, are obtained by detecting relative shifts of a fine grating pattern against a large pattern with an overlay inspection tool. Quantitative diagnosis of lens aberrations with a krypton fluoride (KrF) excimer laser scanner is demonstrated.
19. Corneal injection track: an unusual complication of intraocular lens implantation and review
Institute of Scientific and Technical Information of China (English)
Julie Y.C. Lok; Alvin L. Young
2015-01-01
Phacoemulsification with foldable intraocular lens (IOL) implantation by injection is the gold standard for cataract surgery in the developed world, allowing for stable wound construction and less postoperative astigmatism. It is a safe procedure with a high success rate, owing to advances in machines, improvement of IOL injection systems and further maturation of surgeons' techniques. Despite the large number of operations performed every day, foldable IOL injection leading to an intra-stromal corneal track is a very rare complication. We report this unusual finding in a 70-year-old gentleman who underwent cataract surgery in November 2011 in our hospital, and review the complications related to foldable IOL injection.
20. Schwarzschild-Couder two-mirror telescope for ground-based gamma-ray astronomy
CERN Document Server
Vasilev, V V
2007-01-01
Schwarzschild-type aplanatic telescopes with two aspheric mirrors, configured to correct spherical and coma aberrations, are considered for application in gamma-ray astronomy utilizing the ground-based atmospheric Cherenkov technique. We use analytical descriptions for the figures of the primary and secondary mirrors and, by means of numerical ray-tracing, we find telescope configurations which minimize astigmatism and maximize effective light collecting area. It is shown that, unlike the traditional prime-focus Davies-Cotton design, such telescopes provide a solution for wide field of view gamma-ray observations. The designs are isochronous, can be optimized to have no vignetting across the field, and allow for significant reduction of the plate scale, making them compatible with finely pixelated cameras, which can be constructed from modern, cost-effective image sensors such as multi-anode PMTs, SiPMs, or image intensifiers.
1. [Keratoconus].
Science.gov (United States)
Fournié, P; Touboul, D; Arné, J-L; Colin, J; Malecaze, F
2013-09-01
Keratoconus is a slowly progressive, non-inflammatory disorder of the eye characterized by thinning and protrusion of the cornea. Typically diagnosed in the patient's adolescent years, keratoconus may lead to substantial distortion of vision, primarily from irregular astigmatism and myopia, and secondarily from corneal scarring. The classic histopathologic features include breaks in Bowman's layer and thinning of the corneal stroma. The etiology of keratoconus remains unclear. Forme fruste keratoconus shows little progression and has become recognized through videotopographic analysis; it is very important to rule it out in refractive surgery candidates. Treatment begins first and foremost with contact lenses, progressing to surgery as contact lens intolerance develops, with the goal of stabilization, including: cross-linking, intrastromal corneal ring segments and corneal transplantation. PMID:23911067
2. [Riboflavin UVA cross-linking for keratoconus].
Science.gov (United States)
Maier, P; Reinhard, T
2013-09-01
Keratoconus is a progressive, ectatic disease of the cornea leading to thinning and highly irregular astigmatism. Until recently all treatment options, such as the prescription of glasses or contact lenses, were symptomatic, and neither keratoplasty nor the implantation of intracorneal rings can heal the disease. Riboflavin ultraviolet A (UVA) collagen cross-linking (CXL) cannot heal keratoconus either, but promises to halt its progression. The therapeutic principle is a photochemical reaction of riboflavin and UVA light producing free oxygen radicals in the corneal stroma that induce covalent linking of the collagen fibrils. This stiffening effect should stop the progression. After the first reports at the end of the 1990s the treatment came into wide use, and many case series show that CXL can be effective in stopping disease progression in some patients. However, randomized, controlled multicenter trials providing high-level evidence of treatment effectiveness are rare. This report includes a review of the literature regarding treatment effectiveness, indications and new developments. PMID:23760423
3. [Iatrogenic Keratectasia: A Review].
Science.gov (United States)
Kohlhaas, M
2015-06-01
Iatrogenic corneal ectasia is a rare complication, but also one of the most feared, after uneventful corneal laser surgery. Ectatic changes can occur as early as 1 week or can be delayed up to several years after LASIK. The actual incidence of ectasia is undetermined; incidence rates of 0.04 to almost 2.8 % have been reported. Ectasia is most common following LASIK; however, cases have been reported following PRK and other corneal refractive procedures. Keratectasia presents with progressive myopia, irregular astigmatism, ghosting, fluctuating vision and problems with scotopic vision. The progression leads to severe loss of corrected visual acuity. Risk factors are thin corneas, steep keratometry and young (female) age; less severe cases can be fitted with contact lenses. In severe cases a penetrating or a deep anterior lamellar graft is necessary. PMID:25853948
4. Toric implantable collamer lens for keratoconus.
Science.gov (United States)
Kummelil, Mathew Kurian; Hemamalini, M S; Bhagali, Ridhima; Sargod, Koushik; Nagappa, Somshekar; Shetty, Rohit; Shetty, Bhujang K
2013-08-01
Keratoconus is a progressive non-inflammatory thinning of the cornea that induces myopia and irregular astigmatism and decreases the quality of vision due to monocular diplopia, halos, or ghost images. Keratoconus patients unfit for corneal procedures and intolerant of refractive correction by spectacles or contact lenses have been implanted with toric posterior chamber phakic intraocular lenses (PC pIOLs), alone or combined with other surgical procedures, to correct the refractive errors associated with keratoconus, as an off-label procedure with special informed consent from the patients. Several reports attest to the safety and efficacy of the procedure, though the associated corneal higher-order aberrations have an impact on the final visual quality. PMID:23925337
5. Keratoconus: current perspectives.
Science.gov (United States)
Vazirani, Jayesh; Basu, Sayan
2013-01-01
Keratoconus is characterized by progressive corneal protrusion and thinning, leading to irregular astigmatism and impairment in visual function. The etiology and pathogenesis of the condition are not fully understood. However, significant strides have been made in early clinical detection of the disease, as well as towards providing optimal optical and surgical correction for improving the quality of vision in affected patients. The past two decades, in particular, have seen exciting new developments promising to alter the natural history of keratoconus in a favorable way for the first time. This comprehensive review focuses on analyzing the role of advanced imaging techniques in the diagnosis and treatment of keratoconus and evaluating the evidence supporting or refuting the efficacy of therapeutic advances for keratoconus, such as newer contact lens designs, collagen crosslinking, deep anterior lamellar keratoplasty, intracorneal ring segments, photorefractive keratectomy, and phakic intraocular lenses. PMID:24143069
6. Main factors influencing postoperative visual function after refractive cataract surgery
Institute of Scientific and Technical Information of China (English)
龚敏; 刘谊
2014-01-01
Factors including intraocular lens (IOL) power calculation error, corneal astigmatism, anterior chamber depth and IOL position can change the refractive status of the operated eye and thereby influence the overall postoperative visual quality. This article provides a comprehensive review of the main factors affecting postoperative visual function after uneventful refractive cataract surgery.
7. Wavefront aberration function in terms of R. V. Shack's vector product and Zernike polynomial vectors.
Science.gov (United States)
Gray, Robert W; Rolland, Jannick P
2015-10-01
Previous papers have shown how, for rotationally symmetric optical imaging systems, nodes in the field dependence of the wavefront aberration function develop when a rotationally symmetric optical surface within an imaging optical system is decentered and/or tilted. In this paper, we show how Shack's vector product (SVP) can be used to express the wavefront aberration function and to define vectors in terms of the Zernike polynomials. The wavefront aberration function is then expressed in terms of the Zernike vectors. It is further shown that SVP fits within the framework of two-dimensional geometric algebra (GA). Within the GA framework, an equation for the third-order node locations for the binodal astigmatism term that emerge in the presence of tilts and decenters is then demonstrated. A computer model of a three-mirror telescope system is used to demonstrate the validity of the mathematical development. PMID:26479937
8. Optical tweezers absolute calibration
CERN Document Server
Dutra, R S; Neto, P A Maia; Nussenzveig, H M
2014-01-01
Optical tweezers are highly versatile laser traps for neutral microparticles, with fundamental applications in physics and in single molecule cell biology. Force measurements are performed by converting the stiffness response to displacement of trapped transparent microspheres, employed as force transducers. Usually, calibration is indirect, by comparison with fluid drag forces. This can lead to discrepancies by sizable factors. Progress achieved in a program aiming at absolute calibration, conducted over the past fifteen years, is briefly reviewed. Here we overcome its last major obstacle, a theoretical overestimation of the peak stiffness, within the most employed range for applications, and we perform experimental validation. The discrepancy is traced to the effect of primary aberrations of the optical system, which are now included in the theory. All required experimental parameters are readily accessible. Astigmatism, the dominant effect, is measured by analyzing reflected images of the focused laser spo...
9. The Genetic and Environmental Factors for Keratoconus
Directory of Open Access Journals (Sweden)
Ariela Gordon-Shaag
2015-01-01
Keratoconus (KC) is the most common corneal ectatic disorder. It is characterized by a cone-shaped, thin cornea leading to myopia, irregular astigmatism, and vision impairment. It affects all ethnic groups and both genders. Both environmental and genetic factors may contribute to its pathogenesis. This review summarizes current research developments in KC epidemiology and genetic etiology. Environmental factors include, but are not limited to, eye rubbing, atopy, sun exposure, and geography. Genetic discoveries are reviewed with evidence from family-based linkage analysis and fine mapping in linkage regions, genome-wide association studies, and candidate gene analyses. A number of genes have been discovered at a relatively rapid pace. Elucidating the detailed molecular mechanism underlying KC pathogenesis will significantly advance our understanding of KC and promote the development of potential therapies.
10. Endothelial keratoplasty: evolution and horizons
Directory of Open Access Journals (Sweden)
Gustavo Teixeira Grottone
2012-12-01
Endothelial keratoplasty has been adopted by corneal surgeons worldwide as an alternative to penetrating keratoplasty (PK) in the treatment of corneal endothelial disorders. Since the first surgeries in 1998, different surgical techniques have been used to replace the diseased endothelium. Compared with penetrating keratoplasty, all these techniques may provide faster and better visual rehabilitation with minimal change in the refractive power of the transplanted cornea, minimal induced astigmatism, elimination of suture-induced complications and late wound dehiscence, and a reduced demand for postoperative care. Translational research involving cell-based therapy is the next step in work on endothelial keratoplasty. The present review updates information comparing the different techniques and predicts the direction of future treatment.
11. Collagen cross-linking in the treatment of pellucid marginal degeneration
Directory of Open Access Journals (Sweden)
2014-01-01
Pellucid marginal degeneration (PMD) is an uncommon inferior peripheral corneal thinning disorder, characterized by irregular astigmatism. We analyzed a case of a patient with bilateral PMD and treated one eye with corneal collagen cross-linking (CXL) therapy. Corneal topography was characteristic of PMD. Visual acuity, slit-lamp examination, tonometry, and corneal thickness were monitored. Simulated keratometric and topographic index values were obtained with corneal topography. Uncorrected logMAR visual acuity improved from +0.8 to +0.55 at 6 months and to +0.3 at 8 months of follow-up after CXL. Pachymetry values and intraocular pressure showed no changes. Keratometric values and topographic indexes disclosed no progression of the disease. CXL may postpone or eliminate the need for corneal transplantation in cases of PMD.
12. Effects of Different Zernike Terms on Optical Quality and Vision of Human Eyes
Institute of Scientific and Technical Information of China (English)
ZHAO Hao-Xin; XU Bing; LI Jing; DAI Yun; YU Xiang; ZHANG Yu-Dong; JIANG Wen-Han
2009-01-01
The visual quality of human eyes is much restricted by high-order aberrations as well as low-order aberrations (defocus and astigmatism), but each high-order term contributes differently. The visual acuity and the contrast of the image on the retina can be measured by inducing aberrations in each high-order term. Based on an adaptive optics system, the visual acuity of four subjects is tested by inducing aberrations in each Zernike term after correcting all the aberrations of the subjects. Zernike terms near the center of the Zernike tree affect visual quality more than those near the edge, both theoretically and experimentally, and 0.1 μm of aberration in these terms can clearly degrade optical quality and vision. The results suggest that correcting the terms near the center of the Zernike tree can effectively improve visual quality in practice.
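The "0.1 μm of aberration" figure is easiest to interpret with unit-variance (Noll-normalized) Zernike terms, where each coefficient equals the RMS wavefront error it contributes. A small numerical check of that property for two low-order terms; the polynomial forms are the standard textbook ones, assumed here rather than taken from the paper:

```python
import numpy as np

# Noll-normalized Zernike terms on the unit pupil. With this normalization a
# 0.1-um coefficient on any single term contributes exactly 0.1 um of RMS
# wavefront error, which makes per-term comparisons meaningful.
def defocus(rho, theta):      # Z(2,0) = sqrt(3) (2 rho^2 - 1)
    return np.sqrt(3.0) * (2.0 * rho**2 - 1.0)

def astigmatism(rho, theta):  # Z(2,2) = sqrt(6) rho^2 cos(2 theta)
    return np.sqrt(6.0) * rho**2 * np.cos(2.0 * theta)

# Monte Carlo check that each term has unit RMS over a uniformly sampled pupil.
rng = np.random.default_rng(0)
rho = np.sqrt(rng.uniform(0.0, 1.0, 200_000))   # sqrt gives uniform area density
theta = rng.uniform(0.0, 2.0 * np.pi, 200_000)
rms_values = {f.__name__: float(np.sqrt(np.mean(f(rho, theta) ** 2)))
              for f in (defocus, astigmatism)}
print(rms_values)  # both values close to 1.0
```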
13. Infantile nystagmus and visual deprivation
DEFF Research Database (Denmark)
Fledelius, Hans C; Jensen, Hanne
2014-01-01
PURPOSE: To evaluate whether the effects of early foveal motor instability due to infantile nystagmus might compare to those of experimental visual deprivation on refraction in a childhood series. METHODS: This was a retrospective analysis of data from the Danish Register of Blind and Weaksighted Children, with infantile nystagmus recorded as the prime diagnosis. We perused 90 records of children now aged 10-17 years, some of whom eventually exceeded the register borderline of 0.3 as best-corrected visual acuity. Spherical equivalent refraction was the primary outcome parameter, but visual acuity, astigmatism, and age were further considered. The series comprised 48 children with nystagmus as the single diagnosis, whereas 42 had clinical colabels (Down syndrome [13], dysmaturity [9], and mental retardation or encephalopathy [20]). RESULTS: Median binocular visual acuity was 0.3 in the full series, and median…
14. PREVALENCE OF REFRACTIVE ERRORS AMONG CHILDREN IN RURAL AREAS OF CHITTOOR DISTRICT, A.P.
Directory of Open Access Journals (Sweden)
2015-08-01
BACKGROUND: Uncorrected refractive errors are a main cause of low vision, which hampers performance at school, reduces productivity and impairs quality of life. Their correction is considered one of the most important priorities in the global initiative for the elimination of avoidable blindness. Refractive errors are especially common among children, as they do not complain and adjust to their circumstances. School children constitute an ideal group for the study of refractive errors because most of them go to school, are easily accessible, and offer an excellent opportunity for services and health education. MATERIAL & METHODS: This is a cross-sectional study conducted among 2,568 children attending various government schools in the rural areas of Chittoor district. The study was carried out during January to June 2015. A preliminary examination of visual acuity was performed with Snellen's chart, and those with defective vision were subjected to detailed eye examination by a specialist. The results were analyzed in MS Excel and Epi Info 7 using percentages and the Chi-square test. RESULTS: The overall prevalence of refractive errors among children was found to be 11.3% (astigmatism - 5.8%; myopia - 4.2%; hypermetropia - 1.4%). The prevalence of refractive errors increased steadily from 5.7% in the 5-7 years age group to 14.7% in the 14-16 years group. The prevalence was similar in male and female children. The prevalence of myopia and astigmatism increased steadily with age, while hypermetropia showed an inverse trend. CONCLUSIONS: Examination of school children for refractive errors is a useful strategy for early diagnosis and intervention.
15. Visual acuity and refraction by age for children of three different ethnic groups in Paraguay
Directory of Open Access Journals (Sweden)
Marissa Janine Carter
2013-04-01
Full Text Available PURPOSE: To characterize refractive errors in Paraguayan children aged 5-16 years and investigate the effect of age, gender, and ethnicity. METHODS: The study was conducted at 3 schools that catered to Mennonite, indigenous, and mixed race children. Children were examined for presenting visual acuity, autorefraction with and without cycloplegia, and retinoscopy. Data were analyzed for myopia and hyperopia (SE ≤-1 D or -0.5 D, and ≥2 D or ≥3 D) and astigmatism (cylinder ≥1 D). Spherical equivalent (SE) values were calculated from right eye cycloplegic autorefraction data and analyzed using general linear modelling. RESULTS: There were 190, 118, and 168 children of Mennonite, indigenous and mixed race ethnicity, respectively. Differences in SE values between right/left eyes were nonsignificant. Mean visual acuity (VA) without correction was better for Mennonites compared to indigenous or mixed race children (right eyes: 0.031, 0.090, and 0.102 logMAR units, respectively; P<0.000001). There were 2 cases of myopia in the Mennonite group (1.2%) and 2 cases in the mixed race group (1.4%) (SE ≤-0.5 D). The prevalence of hyperopia (SE ≥2 D) was 40.6%, 34.2%, and 46.3% for Mennonite, indigenous and mixed race children. Corresponding astigmatism rates were 3.2%, 9.5%, and 12.7%. Females were slightly more hyperopic than males, and the 9-11 years age group was the most hyperopic. Mennonite and mixed race children were more hyperopic than indigenous children. CONCLUSIONS: Paraguayan children were remarkably hyperopic and relatively free of myopia. Differences with regard to gender, age, and ethnicity were small.
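The refractive definitions used in studies like the one above reduce to a simple calculation: spherical equivalent SE = sphere + cylinder/2, with threshold cutoffs. A minimal sketch, using the SE ≤ -0.5 D (myopia), SE ≥ 2 D (hyperopia) and |cylinder| ≥ 1 D (astigmatism) cutoffs; the example refractions are hypothetical:

```python
# Spherical equivalent and threshold-based classification (hypothetical cases).

def spherical_equivalent(sphere, cylinder):
    """SE = sphere + half the cylinder, in diopters."""
    return sphere + cylinder / 2.0

def classify(sphere, cylinder):
    se = spherical_equivalent(sphere, cylinder)
    labels = []
    if se <= -0.5:
        labels.append("myopia")
    elif se >= 2.0:
        labels.append("hyperopia")
    if abs(cylinder) >= 1.0:
        labels.append("astigmatism")
    return labels or ["emmetropia/low error"]

print(classify(+2.50, -0.50))  # SE = +2.25 D
print(classify(-1.00, -1.25))  # SE = -1.625 D
```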
16. Prevalence of refractive errors in Villa Maria, Córdoba, Argentina
Institute of Scientific and Technical Information of China (English)
Victoria M Sánchez; Claudio P Juarez; Rafael Iribarren; Santiago G Latino; Victor E Torres; Ana L Gramajo; María N Artal; María B Yadarola; Patricia R Garay; José D Luna
2016-01-01
Background: Refractive errors are among the most frequent reasons for demand for eye-care services. Publications on refractive error prevalence in our country are few. The purpose of this study was to assess the prevalence of refractive errors in an adult population of Villa Maria, Córdoba, Argentina. Methods: The Villa Maria Eye Study is a population-based cross-sectional study conducted in the city of Villa Maria, Córdoba, Argentina from May 2008 to November 2009. Subjects aged 40+ received a demographic interview and a complete ophthalmological exam. Visual acuity was obtained with an ETDRS chart. Cycloplegic autorefraction was performed. The spherical equivalent was highly correlated between right and left eyes, so only data for right eyes are presented. Myopia and hyperopia were defined with a ±0.50 diopter (D) criterion, and astigmatism as >1 D. Results: This study included 646 subjects, aged 40 to 90 (mean age: 59.6±10.3 years). Four hundred and sixty-two (71.5%) were females. The mean spherical equivalent was +0.714±2.41 D (range, −22.00 to +8.25 D) and the power of the cylinder was, on average, −0.869±0.91 D (range, 0 to −6.50 D). In this sample, 61.6% of subjects were hyperopic and 13.5% were myopic. Myopia prevalence was lower in men (9.8% versus 14.9%), but this difference between genders was not statistically significant. There were 141 subjects (21.8%) with anisometropia greater than 1 D, and 168 subjects (26.0%) with astigmatism greater than 1 D. Conclusions: The present study shows the prevalence of cycloplegic refractive errors in an adult population of Argentina. The prevalence of hyperopia was high, while myopia prevalence was very low.
17. Visual and refractive outcome of one-site phacotrabeculectomy compared with temporal approach phacoemulsification
Directory of Open Access Journals (Sweden)
Daniela Vaideanu
2008-10-01
Full Text Available Daniela Vaideanu, Kaveri Mandal, Anthony Hildreth, Scott G Fraser, Peter S Phelan. Glaucoma Unit, Sunderland Eye Infirmary, Sunderland, UK. Background: We aimed to compare visual and refractive outcome following phacoemulsification with intraocular lens implant (IOL) and combined one-site phacotrabeculectomy. Method: We performed a retrospective study of case records of patients who had temporal incision phacoemulsification with IOL or one-site phacotrabeculectomy between June 1997 and June 2001. The patients were matched for age group, operating list and IOL type. All patients were operated on under local anesthesia by the same surgeon. Each arm of the study had 90 patients, age range 60 to 75 years. We collected pre- and postoperative visual acuity, pre- and postoperative refraction within six months after surgery, and intended refraction. Intraocular pressure control was not recorded, as it was not the aim of our study. Results: In the phacotrabeculectomy group, 76.6% of patients achieved the aimed spherical equivalent, 15.5% of patients had against-the-rule (ATR) astigmatism induced by the surgery, and 90% of the patients had best corrected visual acuity (BCVA) more than 6/12. In the temporal incision phacoemulsification group, 81.1% of patients achieved the aimed spherical equivalent, 10% of the patients had ATR astigmatism induced by the surgery, and 95.55% of patients achieved BCVA more than 6/12. Conclusion: In this study the visual outcome of the phacotrabeculectomy group did not differ significantly from the visual outcome of temporal approach phacoemulsification. Keywords: refractive outcome, phacoemulsification, phacotrabeculectomy, astigmatism
18. Outcome of Corneal Collagen Crosslinking for Progressive Keratoconus in Paediatric Patients
Directory of Open Access Journals (Sweden)
Deepa Viswanathan
2014-01-01
Full Text Available Purpose. To evaluate the efficacy of corneal collagen crosslinking for progressive keratoconus in paediatric patients. Methods. This prospective study included 25 eyes of 18 patients (aged 18 years or younger) who underwent collagen crosslinking performed using riboflavin and ultraviolet-A irradiation (370 nm, 3 mW/cm2, 30 min). Results. The mean patient age was 14.3 ± 2.4 years (range 8–17) and mean followup duration was 20.1 ± 14.25 months (range 6–48). Crosslinked eyes demonstrated a significant reduction of keratometry values. The mean baseline simulated keratometry values were 46.34 dioptres (D) in the flattest meridian and 50.06 D in the steepest meridian. At 20 months after crosslinking, the values were 45.67 D (P=0.03) and 49.34 D (P=0.005), respectively. The best spectacle corrected visual acuity (BSCVA) and topometric astigmatism improved after crosslinking. Mean logarithm of the minimum angle of resolution (logMAR) BSCVA decreased from 0.24 to 0.21 (P=0.89) and topometric astigmatism reduced from a mean of 3.50 D to 3.25 D (P=0.51). Conclusions. Collagen crosslinking using riboflavin and ultraviolet-A is an effective treatment option for progressive keratoconus in paediatric patients. Crosslinking stabilises the condition and, thus, reduces the need for corneal grafting in these young patients.
19. Refractive Errors in Northern China Between the Residents with Drinking Water Containing Excessive Fluorine and Normal Drinking Water.
Science.gov (United States)
Bin, Ge; Liu, Haifeng; Zhao, Chunyuan; Zhou, Guangkai; Ding, Xuchen; Zhang, Na; Xu, Yongfang; Qi, Yanhua
2016-10-01
The purpose of this study was to evaluate refractive errors and their demographic associations between drinking water with excessive fluoride and normal drinking water among residents of Northern China. Of the 1843 residents, 1415 (aged ≥40 years) were divided into a drinking-water-excessive-fluoride (DWEF) group (>1.20 mg/L) and a control group (≤1.20 mg/L) on the basis of the fluoride concentration in drinking water. Of the 221 subjects in the DWEF group (fluoride concentration in drinking water, 1.47 ± 0.25 mg/L), the prevalence rates of myopia, hyperopia, and astigmatism were 38.5 % (95 % confidence interval [CI] = 32.1-45.3), 19.9 % (95 % CI = 15-26), and 41.6 % (95 % CI = 35.1-48.4), respectively. Of the 1194 subjects in the control group (0.20 ± 0.18 mg/L), the prevalence rates of myopia, hyperopia, and astigmatism were 31.5 % (95 % CI = 28.9-34.2), 27.6 % (95 % CI = 25.1-30.3), and 45.6 % (95 % CI = 42.8-48.5), respectively. No statistically significant association was observed between spherical equivalent and fluoride concentration in drinking water (P = 0.84 > 0.05). This report provides data on the refractive state of residents consuming drinking water with excess fluoride in northern China. The refractive errors did not result from ingestion of mildly excessive amounts of fluoride in the drinking water.
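The prevalence figures in this abstract come with 95% confidence intervals. A minimal sketch of one common way to compute such an interval, the Wilson score interval (the abstract does not state which method the authors used), with counts chosen so that k/n roughly matches the DWEF myopia figure (85/221 ≈ 38.5%):

```python
import math

# Wilson score 95% CI for a binomial proportion (illustrative counts).

def wilson_ci(k, n, z=1.96):
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

lo, hi = wilson_ci(85, 221)
print(round(85 / 221 * 100, 1), round(lo * 100, 1), round(hi * 100, 1))
```

The resulting interval is close to, but not identical with, the published 32.1-45.3, as expected when the exact method and counts differ.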
20. Aladin transmit-receive optics (TRO): the optical interface between laser, telescope and spectrometers
Science.gov (United States)
Mosebach, Herbert; Erhard, Markus; Camus, Fabrice
2005-09-01
1. Measuring higher order optical aberrations of the human eye: techniques and applications
Directory of Open Access Journals (Sweden)
L. Alberto V. Carvalho
2002-11-01
Full Text Available In the present paper we discuss the development of "wave-front", an instrument for determining the lower and higher order optical aberrations of the human eye. We also discuss the advantages that such instrumentation and techniques might bring to the ophthalmology professional of the 21st century. By shining a small light spot on the retina of subjects and observing the light that is reflected back from within the eye, we are able to quantitatively determine the amount of lower order aberrations (astigmatism, myopia, hyperopia) and higher order aberrations (coma, spherical aberration, etc.). We have measured artificial eyes with calibrated ametropia ranging from +5 to -5 D, with and without 2 D astigmatism with axis at 45º and 90º. We used a device known as the Hartmann-Shack (HS) sensor, originally developed for measuring the optical aberrations of optical instruments and general refracting surfaces in astronomical telescopes. The HS sensor sends information to computer software for decomposition of wave-front aberrations into a set of Zernike polynomials. These polynomials have special mathematical properties and are more suitable in this case than the traditional Seidel polynomials. We have demonstrated that this technique is more precise than conventional autorefraction, with a root mean square error (RMSE) of less than 0.1 µm for a 4-mm diameter pupil. In terms of dioptric power this represents an RMSE of less than 0.04 D and 5º for the axis. This precision is sufficient for customized corneal ablations, among other applications.
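One property that makes Zernike polynomials convenient for wavefront work, as described above, is that when each term is normalized to unit RMS over the pupil, the wavefront RMS is simply the root-sum-square of the coefficients. A minimal numerical sketch with two low-order terms and hypothetical coefficients:

```python
import math

# Zernike terms with unit-RMS normalization over the unit pupil.

def zernike_defocus(rho, theta):   # Z(2,0); theta unused (rotationally symmetric)
    return math.sqrt(3) * (2 * rho ** 2 - 1)

def zernike_astig(rho, theta):     # Z(2,2), astigmatism at 0/90 degrees
    return math.sqrt(6) * rho ** 2 * math.cos(2 * theta)

coeffs = {zernike_defocus: 0.08, zernike_astig: 0.05}  # hypothetical, in um

# Numerical RMS over the unit pupil (polar midpoint grid, area weight = rho)
n_r, n_t = 200, 200
total, weight = 0.0, 0.0
for i in range(n_r):
    rho = (i + 0.5) / n_r
    for j in range(n_t):
        theta = 2 * math.pi * (j + 0.5) / n_t
        w = sum(c * z(rho, theta) for z, c in coeffs.items())
        total += rho * w * w
        weight += rho
rms_numeric = math.sqrt(total / weight)
rms_analytic = math.sqrt(sum(c * c for c in coeffs.values()))
print(round(rms_numeric, 4), round(rms_analytic, 4))
```

The two values agree closely, confirming the orthonormality shortcut.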
2. Anterior iris-claw lens implantation with single paracentesis
Directory of Open Access Journals (Sweden)
Ahmet Özer
2011-11-01
Full Text Available In this study, the technique and results of iris-claw intraocular lens (IOL) implantation with corneal incision and single paracentesis are presented. Eighteen eyes of 18 patients who underwent iris-claw implantation surgery with a single paracentesis were included in this prospective study. The iris-claw lens was grasped by its forceps and placed into the anterior chamber through a superior corneal opening. While the IOL was held by the forceps, a blunt enclavation spatula was introduced through the inferior paracentesis. The spatula was then directed underneath the iris through the pupil and toward the sides, where the iris was entrapped into the claw by a gentle push of iris tissue through the slotted center of the lens haptics. Mean age of patients was 54.28±25.21 years (7-76 years). Mean anterior chamber depth was 4.07±0.32 mm and mean keratometric power was 43.01±2.73 D. Preoperative BCVA was 20/63 or better in 8 (44.4%) patients. At the first postoperative month, BCVA was 20/63 or better in 14 (77.8%) patients. Preoperative mean spherical refraction was +11.05±2.62 D and preoperative astigmatism was 2.15±0.85 D. Postoperative mean spherical refraction was -0.58±0.25 D and mean astigmatism was -1.92±0.67 D. The most frequent postoperative complication was mild corneal edema, seen in three patients, which resolved completely during the first week with medical treatment. Iris-claw IOL implantation can be performed easily with corneal incision and single paracentesis. A single paracentesis does not increase surgical time or cause inconvenience during the procedure.
3. Efficacy and safety of iris-supported phakic lenses (Verisyse) for treating moderately high myopia
Directory of Open Access Journals (Sweden)
Melisa Ahmedbegović Pjano
2016-02-01
Full Text Available Aim To evaluate the efficacy and safety of iris-supported phakic lenses (Verisyse) for treating moderately high myopia. Methods This prospective clinical study included 40 eyes from 29 patients, who underwent implantation of Verisyse lenses for correction of myopia from -6.00 to -14.50 diopters (D) in the Eye Clinic ‘’Svjetlost’’, Sarajevo, from January 2011 to January 2014. Uncorrected distance visual acuity (UDVA), manifest residual spherical equivalent (MRSE), postoperative astigmatism, intraocular pressure (IOP), and endothelial cell (EC) density were evaluated at one, three, six and twelve months. Corrected distance visual acuity (CDVA) and the indices of safety and efficacy were evaluated after 12 months. Results Out of 29 patients, 15 were males and 14 females, with a mean age of 27.9 ± 5.0 years. After 12 months, 77.5% of eyes had UDVA ≥ 0.5 and 32.5% had UDVA ≥ 0.8. Mean MRSE was 0.55 ± 0.57 D and mean postoperative astigmatism -0.86 ± 0.47 D. The efficacy index was 1.09 ± 0.19 and the safety index 1.18 ± 0.21. One eye (2.5%) lost two Snellen lines and three eyes (7.5%) one line; 11 eyes (27.5%) gained one line, and five eyes (15.5%) gained two lines. EC loss after 12 months was 7.59 ± 3.05%. There was no significant change of IOP after one year of follow-up. Conclusion Implantation of iris-supported phakic lenses (Verisyse) for treating moderately high myopia is an efficient and safe procedure.
4. Nano-Structuring of Solid Surface by EUV Ar8+ Laser
International Nuclear Information System (INIS)
The requirement of increased resolution is at present most loudly pronounced in microelectronics, which, following Moore's law (doubling every two years the transistor count that can be placed inexpensively on an integrated circuit), commands a continuous upgrade of micro-/nano-lithography. Such an upgrade indirectly influences processing speed, memory capacity, number and size of pixels in sensors, etc. The submitted paper demonstrates our first attempt at "direct (i.e., ablation) patterning" of PMMA by a pulsed, high-current, capillary-discharge-pumped Ar8+ ion laser (λ = 46.9 nm). For focusing, a long-focal-length spherical mirror (R = 2100 mm) covered by a 14 double-layer Sc-Si coating was used. The ablated focal spots demonstrate not only that the energy of our laser is sufficient for such experiments, but also that the design of the focusing optics must be more sophisticated: severe aberrations have been revealed – an irregular spot shape and strong astigmatism, with an astigmatic difference as large as 16 mm. Moreover, on the bottom of the ablated spots a laser-induced periodic surface structure (LIPSS) has appeared. Finally, direct patterning of a quadratic hole 7.5x7.5 µm, standing in contact with the PMMA substrate, has shown a strongly developed 2D diffraction pattern (period in the centre ∼125 nm). In conclusion, the design of new (grazing incidence) focusing optics and of a new "nano-patterning" tool – a grazing-incidence interferometer – will be shown, which enable ablation of a regular, predefined pattern. It is believed that this is the first step toward application of this technique not only to nanolithography, but also, e.g., to the study of electron dynamics in superlattices. (author)
5. Comparison of Keratometric Values Using Javal Keratometer, Oculus Pentacam, and Orbscan II
Directory of Open Access Journals (Sweden)
Hassan Hashemi
2014-05-01
Full Text Available Purpose: To compare Orbscan II and Pentacam keratometry readings in terms of their agreement with a manual Javal-type keratometer. Methods: In this retrospective study, records of patients who had refractive surgery were reviewed. We extracted data for 765 eyes which had keratometry with the Javal keratometer; of these, 577 had Orbscan II and 200 eyes had Pentacam acquisitions. Minimum (min-K) and maximum (max-K) keratometry readings and keratometric astigmatism with the latter two devices were compared with the Javal. Results: Correlation coefficients for Javal and Orbscan II in measuring min-K and max-K were r=0.916 and r=0.913, respectively (p<0.001). The 95% limits of agreement (LoA) between Javal and Orbscan II were 1.17-1.20 D for min-K and 1.22-1.24 D for max-K. The coefficients for Pentacam and Javal min-K and max-K readings were very high (r=0.943 and r=0.962). The 95% LoA between Pentacam and Javal in measuring min-K and max-K were 0.51-0.99 D and 0.72-0.99 D, respectively. The correlation between Pentacam and Javal measurements of keratometric astigmatism was stronger than that for Orbscan II and Javal (r=0.973 and r=0.800); the 95% LoA were 0.55-0.76 D for Pentacam and Javal, and 1.14-1.19 D for Orbscan II and Javal. Conclusion: According to this research, Orbscan II and Pentacam had high correlation and agreement with the Javal keratometer in determining keratometric values. Nevertheless, the results obtained from Pentacam showed better agreement and stronger correlation with the Javal as compared with Orbscan II. It seems that the Pentacam is a suitable substitute for the Javal keratometer in normal eyes.
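The 95% limits of agreement reported above are a Bland-Altman statistic: the mean of the paired differences ± 1.96 × their standard deviation. A minimal sketch with hypothetical paired keratometry readings (not the study's data):

```python
import math

# Bland-Altman 95% limits of agreement for two devices (hypothetical pairs).

javal    = [43.00, 44.25, 42.50, 45.00, 43.75, 44.50, 42.75, 43.50]
pentacam = [43.10, 44.00, 42.60, 45.20, 43.60, 44.60, 42.70, 43.65]

diffs = [a - b for a, b in zip(javal, pentacam)]
n = len(diffs)
mean_d = sum(diffs) / n                                  # bias
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
loa = (mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d)       # 95% LoA
print(round(mean_d, 3), [round(x, 3) for x in loa])
```

Narrower limits (as the Pentacam shows in the study) mean the two devices disagree less for a typical eye.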
6. The Evaluation of the Effects of Differently-Designed Toric Soft Contact Lenses on Visual Quality
Directory of Open Access Journals (Sweden)
Sevda Aydın Kurna
2013-08-01
Full Text Available Purpose: The purpose of this study was to evaluate the effects of aspheric Balafilcon A and spherical Senofilcon A toric soft contact lenses, which have two different stabilization systems, on visual quality. Material and Method: Forty eyes of 20 patients who were followed up in our contact lens section were included in this study. Refractive errors of the patients were between -0.50 and -6.0 diopters of myopia and >0.75 diopter of astigmatism. The patients were randomly assigned to wear Balafilcon A (PureVision Toric, Bausch & Lomb) with a prism-ballast toric system or spherically designed Senofilcon A (Acuvue Oasys for Astigmatism, Johnson & Johnson) with an accelerated stabilization toric system. We recorded and compared visual acuity with the Snellen chart, contrast sensitivity with the Bailey-Lovie chart in letters, and mean root mean square (RMS) of corneal aberration with the Nidek Magellan Mapper for all eyes with and without glasses and while wearing Balafilcon A or Senofilcon A toric soft contact lenses. Results: We did not observe any difference in visual acuity between contact lenses. Contrast sensitivity was increased by approximately 4.8-5.4 letters with contact lenses. Total higher order aberration mean RMS values were 0.42±0.14 µm without a contact lens, 0.37±0.23 µm with the Balafilcon A lens, and 0.43±0.15 µm with Senofilcon A (p=0.507). Trefoil values were significantly higher with Senofilcon A lenses when compared to Balafilcon A lenses. There was no statistically significant difference for other measured higher order aberrations. Discussion: High and low contrast vision values were adequate in both contact lens groups. There was a non-significant decrease in total higher order aberration values with aspheric design contact lenses when compared to spherical designs. Although there were differences in aberrations related to lens designs, they did not have a significant effect on visual quality. (Turk J Ophthalmol 2013; 43: 253-7)
7. Results of endocapsular phacofracture debulking of hard cataracts
Directory of Open Access Journals (Sweden)
Davison JA
2015-07-01
Full Text Available James A Davison, Wolfe Eye Clinic, Marshalltown, IA, USA. Purpose/aim of the study: To present a phacoemulsification technique for hard cataracts and compare postoperative results using two different ultrasonic tip motions during quadrant removal. Materials and methods: A phacoemulsification technique which employs in situ fracture and endocapsular debulking for hard cataracts is presented. The prospective study included 56 consecutive cases of hard cataract (LOCS III NC [Lens Opacification Classification System III, nuclear color], average 4.26), which were operated using the Infiniti machine and the Partial Kelman tip. Longitudinal tip movement was used for sculpting in all cases, which were randomized to receive longitudinal or torsional/interjected longitudinal (Intelligent Phaco [IP]) strategies for quadrant removal. Measurements included cumulative dissipated energy (CDE), 3-month postoperative surgically induced astigmatism (SIA), and corneal endothelial cell density (ECD) losses. Results: No complications were recorded in any of the cases. Respective overall and longitudinal vs IP means were as follows: CDE, 51.6±15.6 and 55.7±15.5 vs 48.6±15.1; SIA, 0.36±0.2 D and 0.4±0.2 D vs 0.3±0.2 D; and mean ECD loss, 4.1%±10.8% and 5.9%±13.4% vs 2.7%±7.8%. The differences between longitudinal and IP were not significant for any of the three categories. Conclusion: The endocapsular phacofracture debulking technique is safe and effective for phacoemulsification of hard cataracts using longitudinal or torsional IP strategies for quadrant removal with the Infiniti machine and Partial Kelman tip. Keywords: astigmatism, cataract, corneal endothelium, phacoemulsification, viscoelastic
8. Prevalence and profile of ophthalmic disorders in oculocutaneous albinism: a field report from South-Eastern Nigeria.
Science.gov (United States)
Udeh, N N; Eze, B I; Onwubiko, S N; Arinze, O C; Onwasigwe, E N; Umeh, R E
2014-12-01
To assess the burden and spectrum of refractive and non-refractive ophthalmic disorders in south-eastern Nigerians with oculocutaneous albinism. In a population-based survey in Enugu state between August 2011 and January 2012, albinos were identified using the database of the Enugu state Albino Foundation and mass media-based mobilisation. The participants were enrolled at the Eye Clinics of the University of Nigeria Teaching Hospital and Enugu State University of Science and Technology Teaching Hospital using a defined protocol. Relevant socio-demographic and clinical data were obtained from each participant. Descriptive and comparative statistics were performed. Statistical significance was indicated by p < 0.05. The participants (n = 153; males, 70) were aged 23.5 ± 10.4 (SD) years (range 6-60 years). Both refractive and non-refractive disorders were present in all participants. Non-refractive disorders comprised nystagmus, foveal hypoplasia, hypopigmented fundi and prominent choroidal vessels in 100.0% of participants, and strabismus in 16.3% of participants. Refractive disorders comprised astigmatism (73.2% of eyes), myopia (23.9%) and hypermetropia (2.9%). Spherical refractive errors ranged from -14.00 DS to +8.00 DS, while astigmatic errors ranged from -6.00 DC to +6 DC. Mixed refractive and non-refractive disorder, i.e. presenting visual impairment, was present in 100.0% of participants. Overall, refractive error was associated with non-possession of tertiary education (OR 0.61; 95% CI 0.38-0.96; p = 0.0374). There is a high prevalence of refractive, non-refractive and mixed ophthalmic disorders among albinos in south-eastern Nigeria. This underscores the need for tailored provision of resources to address their eye care needs, and creation of needs awareness amongst them.
9. Common complications of deep lamellar keratoplasty in the early phase of the learning curve
Directory of Open Access Journals (Sweden)
Hosny M
2011-06-01
Full Text Available Mohamed Hosny, Ophthalmology Department, Faculty of Medicine, Cairo University, Cairo, Egypt. Purpose: To evaluate and record the common complications that face surgeons when they perform their first few series of deep lamellar keratoplasty, and measures to avoid these. Setting: Dar El Oyoun Hospital, Cairo, Egypt. Methods: Retrospective study of the first 40 eyes of 40 patients carried out by two corneal surgeons working in the same center. All patients were planned to undergo deep anterior lamellar keratoplasty using the big bubble technique. Twelve patients suffered from keratoconus while 28 patients had anterior corneal pathologies. Recorded complications were classified as either intraoperative or postoperative. Results: Perforation of Descemet's membrane was the most common intraoperative complication. It occurred in nine eyes (22.5%): five eyes (12.5%) had microperforations while four eyes (10%) had macroperforations; three eyes (7.5%) had central perforations, and six eyes (15%) had peripheral perforations. Other complications included incomplete separation of Descemet's membrane and remnants of peripheral stromal tissue. Postoperative complications included double anterior chamber, which occurred in four eyes (10%), and Descemet's membrane corrugations. Postoperative astigmatism ranged from 1.25 to 4.5 diopters with a mean of 2.86 diopters in the whole series, but in the six cases with identified residual stroma in the periphery of the host bed, the astigmatism ranged from 2.75 to 4.5 diopters with a mean of 3.62 diopters. Conclusion: Deep lamellar keratoplasty is sensitive to procedural details. Learning the common complications and how to avoid them helps novice surgeons to learn the procedure faster. Keywords: deep lamellar keratoplasty, complications, big bubble technique
10. simEye: Computer-based simulation of visual perception under various eye defects using Zernike polynomials.
Science.gov (United States)
Fink, Wolfgang; Micol, Daniel
2006-01-01
We describe a computer eye model that allows for aspheric surfaces and a three-dimensional computer-based ray-tracing technique to simulate optical properties of the human eye and visual perception under various eye defects. Eye surfaces, such as the cornea, eye lens, and retina, are modeled or approximated by a set of Zernike polynomials that are fitted to input data for the respective surfaces. A ray-tracing procedure propagates light rays using Snell's law of refraction from an input object (e.g., digital image) through the eye under investigation (i.e., eye with defects to be modeled) to form a retinal image that is upside down and left-right inverted. To obtain a first-order realistic visual perception without having to model or simulate the retina and the visual cortex, this retinal image is then back-propagated through an emmetropic eye (e.g., Gullstrand exact schematic eye model with no additional eye defects) to an output screen of the same dimensions and at the same distance from the eye as the input object. Visual perception under instances of emmetropia, regular astigmatism, irregular astigmatism, and (central symmetric) keratoconus is simulated and depicted. In addition to still images, the computer ray-tracing tool presented here (simEye) permits the production of animated movies. These developments may have scientific and educational value. This tool may facilitate the education and training of both the public, for example, patients before undergoing eye surgery, and those in the medical field, such as students and professionals. Moreover, simEye may be used as a scientific research tool to investigate optical lens systems in general and the visual perception under a variety of eye conditions and surgical procedures such as cataract surgery and laser assisted in situ keratomileusis (LASIK) in particular. PMID:17092160
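The core step of a ray tracer like the one described above is refracting each ray at a surface using Snell's law in vector form. A minimal sketch (the corneal-like index 1.376 is a standard schematic-eye value; the geometry is illustrative, not simEye's actual interface):

```python
import math

# Vector form of Snell's law: refract unit direction d at a surface with
# unit normal n (pointing toward the incoming ray), from index n1 to n2.

def refract(d, n, n1, n2):
    """Returns the refracted unit direction, or None on total internal reflection."""
    eta = n1 / n2
    cos_i = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2])
    k = 1 - eta * eta * (1 - cos_i * cos_i)
    if k < 0:
        return None  # total internal reflection
    f = eta * cos_i - math.sqrt(k)
    return tuple(eta * d[i] + f * n[i] for i in range(3))

# Ray hitting a flat interface at 30 degrees, entering a medium of index 1.376
d_in = (math.sin(math.radians(30)), 0.0, math.cos(math.radians(30)))
t = refract(d_in, (0.0, 0.0, -1.0), 1.0, 1.376)
angle_out = math.degrees(math.asin(t[0]))  # refraction angle, ~21.3 degrees
print(round(angle_out, 2))
```

Repeating this refraction at each modeled surface (cornea, lens faces) and intersecting the final ray with the retina yields the retinal image the abstract describes.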
11. Clinical Outcomes after Uncomplicated Cataract Surgery with Implantation of the Tecnis Toric Intraocular Lens
Science.gov (United States)
Lubiński, Wojciech; Kaźmierczak, Beata; Gronkowska-Serafin, Jolanta; Podborączyńska-Jodko, Karolina
2016-01-01
Purpose. To evaluate the clinical outcomes after uncomplicated cataract surgery with implantation of an aspheric toric intraocular lens (IOL) during a 6-month follow-up. Methods. Prospective study including 27 consecutive eyes of 18 patients (mean age: 66.1 ± 11.4 years) with a visually significant cataract and corneal astigmatism ≥ 0.75 D and undergoing uncomplicated cataract surgery with implantation of the Tecnis ZCT toric IOL (Abbott Medical Optics). Visual, refractive, and keratometric outcomes as well as IOL rotation were evaluated during a 6-month follow-up. At the end of the follow-up, patient satisfaction and perception of optical/visual disturbances were also evaluated using a subjective questionnaire. Results. At 6 months after surgery, mean LogMAR uncorrected (UDVA) and corrected distance visual acuity (CDVA) were 0.19 ± 0.12 and 0.14 ± 0.10, respectively. Postoperative UDVA of 20/40 or better was achieved in 92.6% of eyes. Mean refractive cylinder decreased significantly from −3.73 ± 1.96 to −1.42 ± 0.88 D (p < 0.001), while keratometric cylinder did not change significantly (p = 0.44). Mean absolute IOL rotation was 1.1 ± 2.4°, with values of more than 5° in only 2 eyes (6.9%). Mean patient satisfaction score was 9.70 ± 0.46, using a scale from 0 (not at all satisfied) to 10 (very satisfied). No postoperative optical/visual disturbances were reported. Conclusion. Cataract surgery with implantation of the Tecnis toric IOL is an effective method of refractive correction in eyes with corneal astigmatism due to the good IOL positional stability, providing high levels of patient's satisfaction. PMID:27022478
12. Retreatments after multifocal intraocular lens implantation: an analysis
Science.gov (United States)
Gundersen, Kjell Gunnar; Makari, Sarah; Ostenstad, Steffen; Potvin, Rick
2016-01-01
Purpose To determine the incidence and etiology of required retreatment after multifocal intraocular lens (IOL) implantation and to evaluate the methods and clinical outcomes of retreatment. Patients and methods A retrospective chart review of 416 eyes of 209 patients from one site that underwent uncomplicated cataract surgery with multifocal IOL implantation. Biometry, the IOL, and refractive data were recorded after the original implantation, with the same data recorded after retreatment. Comments related to vision were obtained both before and after retreatment for retreated patients. Results The multifocal retreatment rate was 10.8% (45/416 eyes). The eyes that required retreatment had significantly higher residual refractive astigmatism compared with those who did not require retreatment (1.21±0.51 D vs 0.51±0.39 D, P<0.01). The retreatment rate for the two most commonly implanted primary IOLs, blended bifocal (10.5%, 16/152) and bilateral trifocal (6.9%, 14/202) IOLs, was not statistically significantly different (P=0.12). In those requiring retreatment, refractive-related complaints were most common. Retreatment with refractive corneal surgery, in 11% of the eyes, and piggyback IOLs, in 89% of the eyes, was similarly successful, improving patient complaints 78% of the time. Conclusion Complaints related to ametropia were the main reasons for retreatment. Residual astigmatism appears to be an important determinant of retreatment rate after multifocal IOL implantation. Retreatment can improve symptoms for a high percentage of patients; a piggyback IOL is a viable retreatment option. PMID:27041983
13. Clinical analysis of rigid gas permeable contact lens for keratoconus
Institute of Scientific and Technical Information of China (English)
张福生; 田晓丹; 徐艳春; 范春雷; 秦洁; 李艳; 巴秀凤
2011-01-01
Objective: To evaluate the clinical efficacy and safety of RGP lenses for keratoconus. Methods: 63 cases of keratoconus corrected with RGP lenses in our outpatient clinic from 2004 to 2010 were included in this study: 41 males and 22 females, aged 16-35 years (mean 24.6±7.81), binocular in 57 cases and monocular in 6 cases. Patients who had significantly increased astigmatism and decreased visual acuity with glasses were examined with a computerized refractometer, corneal topographer and corneal endothelial microscopy to screen for keratoconus. For diagnosed or suspected keratoconus, a specially designed or general RGP lens was fitted based on the degree of corneal curvature. Corrected visual acuity was measured with glasses and with RGP lenses. All measured results were analyzed with correlation analysis using SPSS 13.0 software; P<0.05 indicated a significant difference. Results: (1) Mean visual acuity with glasses: 0.56±0.29; mean visual acuity with RGP: 0.93±0.20 (t=-14.627, P=0.000). Corrected visual acuity with RGP was significantly better than with glasses. (2) Mean astigmatism before wearing RGP: (-4.16±2.19) DC; mean astigmatism with RGP: (-0.77±1.2) DC (t=-14.585, P=0.000). There was an obvious decrease of astigmatism with RGP. (3) Over an average 3.5 years' observation of 22 eyes, only 1 eye developed increased corneal opacity and underwent lamellar corneal transplantation. There was significantly decreased astigmatism in the other 21 patients. Corneal thickness and corneal curvature showed no significant difference. Conclusions: RGP lenses for the irregular astigmatism of keratoconus can significantly improve visual acuity and, to a certain extent, slow down the progression of keratoconus.
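The glasses-vs-RGP acuity comparison above relies on a paired t test. A minimal sketch of the statistic, with hypothetical acuity pairs rather than the study's data:

```python
import math

# Paired t statistic for before/after measurements on the same eyes.

def paired_t(xs, ys):
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d / (sd / math.sqrt(n)), n - 1  # (t, degrees of freedom)

# Hypothetical decimal visual acuities for six eyes
va_glasses = [0.5, 0.6, 0.4, 0.7, 0.5, 0.6]
va_rgp     = [0.9, 1.0, 0.8, 1.0, 0.9, 1.0]
t, df = paired_t(va_glasses, va_rgp)
print(round(t, 2), df)
```

A large negative t (glasses minus RGP) corresponds to the significant improvement the study reports.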
14. Analysis of the reasons for decentration after orthokeratology%配戴角膜塑形镜后光学区偏中心原因分析
Institute of Scientific and Technical Information of China (English)
付心怡; 张晓峰; 夏静; 杨牧
2016-01-01
ten myopes (213 eyes; age range, 6.5-17.0 years) were fitted with overnight orthokeratology lenses. Patients were divided among 3 groups based on age: 6.5-10.0 years, group A; 11.0-13.0 years, group B; and 14.0-17.0 years, group C. Patients were divided into 2 groups based on their initial spherical diopter: a <-3.00 D group and a -3.00 to -6.00 D group. Patients were divided into 3 groups based on initial astigmatism: a <0.50 D group, a 0.50-1.00 D group, and a ≥1.00 D group. Based on the distance of decentration at 3 months after orthokeratology, patients were divided into 3 groups: <0.5 mm, 0.5-1.0 mm, and ≥1.0 mm decentration. Corneal topography was measured before and 3 months after orthokeratology. The decentration (distance and angle) of the optical zone center after orthokeratology was calculated relative to the pupil center. Data were collected on the relevant factors affecting decentration after orthokeratology: initial age, initial spherical diopter, initial astigmatism and initial corneal parameters. The data were analyzed with a t test, ANOVA and Pearson correlation test. Results: The mean decentration distance 3 months after orthokeratology was 0.53±0.33 mm. The decentration was mainly located in the inferior temporal quadrant. After 3 months, a decentration distance of less than 0.5 mm was observed in 111 eyes (52.1%), 0.5-1.0 mm in 81 eyes (38.0%) and more than 1.0 mm in 21 eyes (9.9%). There was no statistically significant difference in decentration distance between the three age groups. Patients with a higher initial spherical diopter (t=1.76, P<0.05) and astigmatism (F=9.254, P<0.05) showed a greater decentration distance. The initial corneal keratometry evaluated with the topographic map showed that the nasal side of the cornea was flatter than the temporal side. The initial corneal keratometry value was greater in patients with severe decentration (P<0.05). The correlation
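The decentration metric used in this record — the distance and direction of the post-orthokeratology optical-zone center relative to the pupil center — reduces to plane geometry on topography difference-map coordinates. A minimal sketch (the function name and the (x, y) coordinate convention are assumptions, not taken from the paper):

```python
import math

def decentration(optical_zone_center, pupil_center):
    """Distance (mm) and direction (degrees, counter-clockwise from the
    positive x-axis) of the treated optical-zone center relative to the
    pupil center. Both inputs are (x, y) positions in mm read from a
    corneal topography difference map."""
    dx = optical_zone_center[0] - pupil_center[0]
    dy = optical_zone_center[1] - pupil_center[1]
    dist = math.hypot(dx, dy)                       # Euclidean distance
    angle = math.degrees(math.atan2(dy, dx)) % 360  # normalize to [0, 360)
    return dist, angle

# Illustrative values: optical zone shifted 0.3 mm along +x and 0.4 mm
# along -y (e.g. toward the inferior temporal quadrant)
d, a = decentration((0.3, -0.4), (0.0, 0.0))
```

With these sample offsets the eye would fall in the study's 0.5-1.0 mm decentration group (d = 0.5 mm).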
15. 脑瘫患儿伴发视觉障碍的临床研究%Clinical research on visual impairment of children with cerebral palsy
Institute of Scientific and Technical Information of China (English)
罗瑜琳; 唐璟; 谭艺兰; 邓姿峰; 肖志刚; 陶利娟
2014-01-01
AIM: To understand the common forms of visual impairment in children with cerebral palsy, to provide a basis for early eye screening, early diagnosis and treatment, and to promote visual rehabilitation for children with cerebral palsy. METHODS: Two hundred and twenty-three children with cerebral palsy underwent routine ophthalmologic examination, including eye position and eyeball movement, indirect ophthalmoscopy or RetCam II fundus examination, mydriatic optometry and flash visual evoked potential (F-VEP) examination; the results were recorded and analyzed. RESULTS: Strabismus, ametropia and F-VEP changes were the main impairments in the 223 children with cerebral palsy, and some children also had ocular fundus disease. There were 174 children with different types of strabismus, including 121 with esotropia, 36 with exotropia, 15 with vertical strabismus, and 2 with nystagmus. There were 129 children (247 eyes) with refractive errors, including 118 eyes with compound hyperopic astigmatism, 51 eyes with simple hyperopia, 33 eyes with mixed astigmatism, 19 eyes with compound myopic astigmatism, 21 eyes with simple hyperopic astigmatism, 4 eyes with simple myopic astigmatism, and only 1 eye with simple myopia. The F-VEP of 194 children (381 eyes) was abnormal, presenting as delayed latency and reduced amplitude of the P2 wave. In addition, there were 51 children with different types of ocular fundus changes, of which optic nerve atrophy and retinal hemorrhage were the most common. CONCLUSION: Children with cerebral palsy often have different types of visual dysfunction, which seriously affect visual quality and systemic rehabilitation. Attention should be paid to routine eye examination and visual training, which play an important role in the normal development of the visual system and the comprehensive rehabilitation of children with cerebral palsy.
16. Color light-emitting diode reflection topography: validation of keratometric repeatability in a large sample of wide cylindrical-range corneas
Directory of Open Access Journals (Sweden)
Kanellopoulos AJ
2015-02-01
Anastasios John Kanellopoulos,1,2 George Asimellis1 1LaserVision.gr Clinical and Research Eye Institute, Athens, Greece; 2New York University Medical School, New York, NY, USA. Purpose: To investigate the repeatability of steep and flat keratometry measurements, as well as astigmatism axis, in cohorts with normal-range and regular astigmatism, such as eyes following laser-assisted in situ keratomileusis (LASIK) and a normal population, as well as cohorts with high and irregular astigmatism, such as keratoconic eyes and keratoconic eyes following corneal collagen cross-linking, employing a novel corneal reflection topography device. Methods: Steep and flat keratometry and astigmatism axis measurement repeatability were investigated employing a novel multicolored-spot reflection topographer (Cassini) in four study groups, namely a post-myopic-LASIK Group A, a keratoconus Group B, a post-CXL keratoconus Group C, and a control Group D of routine healthy patients. Three separate maps were obtained with the Cassini, enabling investigation of the intra-individual repeatability by standard deviation. Additionally, we investigated in all groups the Klyce surface irregularity indices for keratoconus: the SAI (surface asymmetry index) and the SRI (surface regularity index). Results: Flat keratometry repeatability was 0.74±0.89 diopters (D) (0.03 to 5.26 D) in the LASIK Group A, 0.88±1.45 D (range, 0.00 to 7.84 D) in the keratoconic Group B, and 0.71±0.94 D (0.02 to 6.23 D) in the cross-linked Group C. The control Group D had flat keratometry repeatability of 0.36±0.46 D (0.00 to 2.71 D). Steep keratometry repeatability was 0.64±0.82 D (0.01 to 4.81 D) in the LASIK Group A, 0.89±1.22 D (0.02 to 7.85 D) in the keratoconic Group B, and 0.93±1.12 D (0.04 to 5.93 D) in the cross-linked Group C. The control Group D had steep keratometry repeatability of 0.41±0.50 D (0.00 to 3.51 D). Axis repeatability was 3.45±1.62° (0.38 to 7.78°) for the LASIK Group A, 4.12±3.17
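The repeatability figures reported in this record are intra-individual standard deviations over three repeated Cassini maps per eye. A minimal sketch of that computation (the choice of the sample standard deviation, and the example readings, are assumptions):

```python
import statistics

def repeatability(measurements):
    """Intra-individual repeatability of one parameter for one eye:
    the standard deviation of repeated acquisitions (here, three
    consecutive topography maps). Uses the sample standard deviation."""
    return statistics.stdev(measurements)

# Three illustrative flat-K readings (D) from three maps of one eye
flat_k = [43.10, 43.25, 43.40]
sd = repeatability(flat_k)
```

For these sample readings the repeatability is 0.15 D, i.e. well inside the 0.36±0.46 D reported for the control group's flat keratometry.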
17. Análise comparativa da refração automática objetiva e refração clínica Automatic objective refraction and clinical refraction - a comparative analysis
Directory of Open Access Journals (Sweden)
Ricardo Uras
2001-02-01
automated objective refraction was performed using the automatic keratorefractor TOPCON 3000. Results: 1,001 eyes of 504 patients were studied; 45.2% were male patients and the mean age was 36.6 years. There was overall concordance between clinical refraction and automated objective refraction in 66.7% of the patients. The concordance of the spherical value, not considering variations of -0.50 to +0.50 SD, was approximately 90%. In simple hyperopic/myopic astigmatic eyes the concordance was 27.6%; in eyes with compound hyperopic/myopic astigmatism the concordance was 97.7%. Cycloplegia did not significantly affect this concordance. There was no significant difference regarding the axis of astigmatic eyes between the two techniques. Conclusion: Automated objective refraction is a useful tool in clinical refraction, but clinical data should also be considered and the final lens prescription should never be based solely on the automated examination.
18. Prevalência das ametropias e oftalmopatias em crianças pré-escolares e escolares em favelas do Alto da Boa Vista, Rio de Janeiro, Brasil Prevalence of the ametropias and eye diseases in preschool and school children of Alto da Boa Vista favelas, Rio de Janeiro, Brazil
Directory of Open Access Journals (Sweden)
Abelardo de Souza Couto Júnior
2007-10-01
prevalence was 3.50% (hyperopia and hyperopic astigmatism, 1.78%; myopia and myopic astigmatism, 1.06%; mixed astigmatism, 0.67%). The eye disease prevalence was 3.50% (amblyopia, 2.00%; manifest strabismus, 1.72%; other causes, 1.11%). CONCLUSION: The prevalence of the main pediatric ophthalmologic disorders was shown, pointing out the need for ocular health campaigns to support the development of children's visual acuity.
19. Relationship between Best Corrected Visual Acuity and Refraction Parameters in Myopia%近视者最佳矫正视力与屈光参数间的相关性
Institute of Scientific and Technical Information of China (English)
吕雅平; 夏文涛; 褚仁远; 周行涛; 戴锦晖; 周浩
2011-01-01
Objective: To explore the relationship between best corrected visual acuity (BCVA) and refraction parameters in myopia. Methods: 2274 patients (4245 eyes) with different degrees of myopia were enrolled. BCVA, diopter of sphere (DS), diopter of cylinder (DC), astigmatism axis, axial length (AL) and corneal thickness were measured; the influence of these parameters on BCVA was studied, and a mathematical model of the relationship between BCVA and these parameters, together with age and gender, was established. Results: Logistic regression analysis showed that BCVA (y) correlated with DS (x1), DC (x2), gender (x3), AL (x4), corneal thickness (x5), astigmatism axis (x6) and age (x7) (P<0.05): y = 0.5806 - 0.0340x1 - 0.0468x2 + 0.0565x3 + 0.0165x4 + 0.0007x5 + 0.0002x6 - 0.0058x7. Conclusion: Age, gender and corneal thickness have a weak influence on BCVA, while DS, DC, AL and astigmatism axis have a significant influence. BCVA shows a clear downward trend as DS and DC increase and AL elongates. Comprehensive analysis of refraction parameters can help evaluate the visual status of myopic patients.
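The fitted model in this record is a linear predictor of BCVA from seven covariates. A minimal sketch evaluating it (the coefficients are taken from the abstract; the covariate units and the 0/1 gender coding are assumptions, since the abstract does not specify them):

```python
def predict_bcva(ds, dc, gender, al, cct, axis, age):
    """Predicted BCVA from the model reported in the abstract:
    y = 0.5806 - 0.0340*DS - 0.0468*DC + 0.0565*gender
        + 0.0165*AL + 0.0007*CCT + 0.0002*axis - 0.0058*age
    Units/coding assumed: DS, DC in diopters; gender 0/1; AL in mm;
    CCT (corneal thickness) in micrometers; axis in degrees; age in years."""
    return (0.5806 - 0.0340 * ds - 0.0468 * dc + 0.0565 * gender
            + 0.0165 * al + 0.0007 * cct + 0.0002 * axis - 0.0058 * age)

# Hypothetical eye: -6.00 DS / -1.00 DC, AL 26 mm, CCT 520 um, axis 180, age 30
y = predict_bcva(-6.00, -1.00, 1, 26.0, 520.0, 180.0, 30.0)
```

Plugging zeros into every covariate returns the intercept, 0.5806; the model is purely linear, so each coefficient is the change in predicted BCVA per unit of its covariate.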
20. Exploration on the indexes of taking off glasses after recovery in children with amblyopia%儿童弱视愈后脱镜指标的探讨
Institute of Scientific and Technical Information of China (English)
王洪峰; 王恩荣; 廖美婷; 邢玉琴
2011-01-01
Objective: To explore the indexes for taking off glasses after recovery in children with amblyopia. Methods: Children cured of amblyopia could take off their glasses only when they met four indexes, and all were followed up for three years after taking off glasses. Results: Of 368 children (678 eyes) cured of amblyopia after 3-7 years of treatment, 205 children (364 eyes) took off glasses, accounting for 53.69% of cured eyes. These included 244 eyes with mild amblyopia, a higher proportion than middle (114 eyes) and severe (6 eyes) amblyopia; 305 eyes had ametropic amblyopia, 29 anisometropic and 30 strabismic; 316 eyes had simple hypermetropic amblyopia, more than simple hypermetropic astigmatism (11 eyes) and compound hypermetropic astigmatism (37 eyes), while no eyes with simple myopic amblyopia, simple myopic astigmatism or compound myopic astigmatism took off glasses. At the first spectacle fitting, 289 eyes had low diopters, 58 middle and 17 high. The rate of taking off glasses was highest in children first seen at 3-8 years of age. Conclusion: Taking off glasses according to the four indexes after cure of childhood amblyopia is feasible. Treatment and follow-up visits must continue before taking off glasses, and follow-up observation should continue afterwards, preferably beyond age 12, past the sensitive period of visual development.
1. Estudo prospectivo comparativo de duas técnicas cirúrgicas de extração extra-capsular planejada de catarata com implante de lente intra-ocular: incisão limbar e incisão escleral tunelizada Prospective comparative study of two techniques of planned extracapsular cataract extraction: limbal incision and scleral tunnel incision
Directory of Open Access Journals (Sweden)
Lincoln Lemes Freitas
2001-06-01
extraction (ECCE) with posterior chamber intraocular lens implantation. This study aims to compare limbal incision and scleral tunnel incision in planned ECCE. Methods: Fifty-four consecutive patients (59 eyes) with a follow-up of 6 months were studied prospectively. ECCE with limbal incision was performed in 30 patients (Group I), and with scleral tunnel incision in 29 patients (Group II). Corrected visual acuity, intraocular inflammation (cells and flare), surgical time, specular microscopy, induced astigmatism and pachymetry were assessed. Results: Surgical time, endothelial cell loss and induced astigmatism were statistically greater in Group I than in Group II. No significant differences were found between groups when comparing corrected visual acuity, intraocular inflammation and pachymetry. Conclusions: ECCE with the scleral tunnel incision technique offers advantages in surgical time, endothelial cell loss and induced astigmatism compared with the limbal incision technique. The surgical steps used in this technique ease the transition to phacoemulsification at low cost and in a safer way.
2. Accurate test of optical wave front for optical imaging system%光学成像系统光学波前的高精度测试
Institute of Scientific and Technical Information of China (English)
邵晶; 马冬梅; 聂真威
2011-01-01
Based on the Extended Nijboer-Zernike theory, the effect of different exit-pupil amplitude distributions on the image intensity in the focal plane was analyzed. A novel approach was applied to testing the wavefront according to the actual amplitude distribution in the exit pupil, which helps eliminate the errors caused by the nonuniformly illuminated pupil and by the Fast Fourier Transform in the original phase-retrieval algorithms. A testing experiment was performed on an imaging optical system; the obtained results show that the tested wavefront in the exit pupil of a camera lens is 0.1965λ in PV and 0.0224λ in RMS (the test wavelength λ is 632.8 nm). The aberrations in the wavefront are mainly astigmatism, coma and high-order astigmatism. Furthermore, the approach can also be used to analyze the amplitude in the exit pupil of a camera lens and to calculate the light intensity distribution on other focal planes. The experiment proves this approach effective.
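The PV and RMS figures quoted above (0.1965λ and 0.0224λ at λ = 632.8 nm) are standard summary statistics of a sampled wavefront. A minimal sketch of how they are computed and converted to nanometres (the sample values below are illustrative, not the paper's data):

```python
import math

def pv_and_rms(wavefront_waves):
    """Peak-to-valley and root-mean-square of a sampled wavefront,
    both in units of the test wavelength (waves). RMS is taken about
    the mean (piston removed)."""
    pv = max(wavefront_waves) - min(wavefront_waves)
    mean = sum(wavefront_waves) / len(wavefront_waves)
    rms = math.sqrt(sum((w - mean) ** 2 for w in wavefront_waves)
                    / len(wavefront_waves))
    return pv, rms

wf = [0.00, 0.02, -0.01, 0.03, 0.01]   # wavefront samples in waves, illustrative
pv, rms = pv_and_rms(wf)
pv_nm = pv * 632.8                      # convert waves to nm at the HeNe line
```

Real wavefront software computes the same quantities over a 2-D pupil grid, usually after fitting Zernike polynomials; the one-dimensional list here only illustrates the definitions.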
3. 小切口非超声乳化白内障联合青光眼的围手术期护理%Perioperative nursing of small-incision non-phacoemulsification surgery for combined cataract and glaucoma
Institute of Scientific and Technical Information of China (English)
张建红
2014-01-01
Objective: To explore perioperative nursing for small-incision non-phacoemulsification surgery in patients with combined cataract and glaucoma. Method: The clinical data of 48 patients (60 eyes) with combined cataract and glaucoma treated in our ophthalmology department from October 2010 to August 2013 were analyzed; by nursing measures, patients were divided into a routine nursing group (24 cases, 30 eyes) and a special nursing group (24 cases, 30 eyes). Result: Postoperative visual acuity, intraocular pressure and degree of astigmatism in the special nursing group were better than in the routine nursing group (P<0.05); the differences were statistically significant. Conclusion: Special nursing in the perioperative period of small-incision non-phacoemulsification surgery for combined cataract and glaucoma clearly improves postoperative visual acuity, intraocular pressure and astigmatism and reduces postoperative complications; it is worth clinical application.
4. Visual performance and aberration associated with contact lens wear in patients with keratoconus: a pilot study
Directory of Open Access Journals (Sweden)
Abdu M
2014-08-01
Mustafa Abdu, Norhani Mohidin, Bariah Mohd-Ali. Optometry and Vision Science Program, School of Healthcare Sciences, Faculty of Health Science, Universiti Kebangsaan Malaysia, Jalan Raja Muda Abdul Aziz, Kuala Lumpur, Malaysia. Background: Rigid gas permeable (RGP) and silicone hydrogel (SH) contact lenses with specific designs are currently being used to improve visual function in patients with keratoconus. However, there are minimal data available comparing the effects of these lenses on visual function in patients with keratoconus. The objectives of this study were to compare visual acuity and contrast sensitivity using spectacles, RGP lenses, and SH lenses, and to evaluate the effects of RGP and SH lenses on higher-order aberrations and visual quality in eyes with keratoconus. The relationships between visual outcomes, aberration, and visual quality were also examined. Methods: This was a pilot study involving 13 eyes from nine subjects with keratoconus. Subjects were fitted with RGP and SH contact lenses. Visual acuity and contrast sensitivity were measured using Snellen and Pelli-Robson charts, respectively. Ocular aberrations and visual quality were measured using an OPD-Scan II device. All measurements were conducted before and after contact lens wear. Results: Significantly better visual acuity was obtained with RGP lenses than with spectacles or SH lenses (P<0.001). No significant difference in contrast sensitivity values was detected between RGP and SH lenses (P=0.06). Both SH and RGP lenses significantly reduced total ocular and higher-order aberrations (P<0.001) when compared with spectacles, but RGP lenses reduced trefoil, coma, and spherical aberrations more than SH lenses. No significant difference in astigmatic aberrations was found between RGP and SH lenses (P=0.12). Negative correlations were found between visual acuity and coma aberration and contrast sensitivity with higher-order aberrations and coma, trefoil, and astigmatic
5. Microfocusing at the PG1 beamline at FLASH
Energy Technology Data Exchange (ETDEWEB)
Dziarzhytski, Siarhei, E-mail: siarhei.dziarzhytski@desy.de [DESY, Notkestrasse 85, 22067 Hamburg (Germany); Gerasimova, Natalia [European XFEL GmbH, Albert-Einstein-Ring 19, 22761 Hamburg (Germany); Goderich, Rene [University of South Florida (United States); Mey, Tobias [Laser Laboratorium Göttingen eV, Hans-Adolf-Krebs-Weg 1, 37077 Göttingen (Germany); Reininger, Ruben [Advanced Photon Source, Argonne National Laboratory, Argonne, IL 60439 (United States); Rübhausen, Michael [University of Hamburg and Center for Free-Electron Laser Science, Notkestrasse 85, 22607 Hamburg (Germany); Siewert, Frank [Institute for Nanometre Optics and Technology at Helmholtz Zentrum Berlin/BESSY II, Albert-Einstein-Strasse 15, 12489 Berlin (Germany); Weigelt, Holger; Brenner, Günter [DESY, Notkestrasse 85, 22067 Hamburg (Germany)
2016-01-01
The Kirkpatrick–Baez (KB) refocusing mirrors unit at the PG1 beamline at FLASH has been newly designed, developed and fully commissioned. The vertical focal size of the KB optics is measured to be 5.8 ± 1 µm FWHM and the horizontal 6 ± 2 µm FWHM; astigmatism has been minimized to below 1 mm between waist positions. Such a tight focus is essential for the VUV double Raman spectrometer as it serves as an entrance slit for the first monochromator and defines its resolution to a very large extent. The Raman spectrometer is a permanent end-station at the PG1 beamline, dedicated to inelastic soft X-ray scattering experiments. The Kirkpatrick–Baez (KB) refocusing mirror system installed at the PG1 branch of the plane-grating monochromator beamline at the soft X-ray/XUV free-electron laser in Hamburg (FLASH) is designed to provide tight aberration-free focusing down to 4 µm × 6 µm full width at half-maximum (FWHM) on the sample. Such a focal spot size is mandatory to achieve ultimate resolution and to guarantee best performance of the vacuum-ultraviolet (VUV) off-axis parabolic double-monochromator Raman spectrometer permanently installed at the PG1 beamline as an experimental end-station. The vertical beam size on the sample of the Raman spectrometer, which operates without entrance slit, defines and limits the energy resolution of the instrument which has an unprecedented design value of 2 meV for photon energies below 70 eV and about 15 meV for higher energies up to 200 eV. In order to reach the designed focal spot size of 4 µm FWHM (vertically) and to hold the highest spectrometer resolution, special fully motorized in-vacuum manipulators for the KB mirror holders have been developed and the optics have been aligned employing wavefront-sensing techniques as well as ablative imprints analysis. Aberrations like astigmatism were minimized. In this article the design and layout of the KB mirror manipulators, the alignment procedure as well as microfocus
6. Effect of sutureless small incision cataract surgery plus intraocular lens implantation on Africans with cataract: a report of 1 730 cases%小切口无缝线白内障摘除加人工晶体植入术1730例临床疗效观察
Institute of Scientific and Technical Information of China (English)
庞永明; 李辉
2011-01-01
Objective: To investigate the effect of sutureless small incision cataract surgery (SICS) plus intraocular lens (IOL) implantation for Africans with cataract. Methods: Sutureless SICS plus IOL implantation was conducted on 1,730 African patients with cataract, a total of 2,207 eyes, and the clinical effect was evaluated. Results: One week after surgery, 1,403 eyes had vision ≥0.5 (63.6%) and 112 had vision ≥1.0 (5.1%), with astigmatism of (1.96±0.72) D. Three months after surgery, 2,094 eyes had vision ≥0.5 (94.9%) and 136 had vision ≥1.0 (6.2%), with astigmatism of (0.87±0.54) D. Conclusion: Sutureless SICS plus IOL implantation causes less injury and provides ideal vision recovery, and is worth extending in Africa.
7. One-site versus two-site phacotrabeculectomy: a prospective randomized study
Directory of Open Access Journals (Sweden)
Moschos MM
2015-08-01
Marilita M Moschos,1 Irini P Chatziralli,2 Michael Tsatsos3 1First Department of Ophthalmology, University of Athens, 2Second Department of Ophthalmology, Ophthalmiatrion Athinon, Athens, Greece; 3Department of Ophthalmology, Cambridge University Hospital NHS, Cambridge, UK. Purpose: The purpose of this study is to compare the efficacy and safety of one-site and two-site combined phacotrabeculectomy with foldable posterior chamber intraocular lens implantation. Methods: Thirty-four patients (41 eyes) with glaucoma and cataract were randomly assigned to undergo either a one-site (22 eyes) or a two-site (19 eyes) combined procedure. The one-site approach consisted of a standard superior phacotrabeculectomy with a limbus-based conjunctival flap, while the two-site approach consisted of clear-cornea phacoemulsification and a separate superior trabeculectomy with a limbus-based conjunctival flap. Results: Mean follow-up period was 54 months (standard deviation [SD] 2.3). Mean preoperative intraocular pressure (IOP) was 21.3 mmHg (SD 2.8) in the one-site group and 21.8 mmHg (SD 3.0) in the two-site group (P>0.1). Mean postoperative IOP significantly decreased in both groups compared to the preoperative level and was 15.6 mmHg (SD 3.5) in the one-site group and 14.9 mmHg (SD 2.7) in the two-site group. Three months later, the difference between the two groups was not statistically significant (P=0.058). The one-site group required significantly more medications than the two-site group (P=0.03). Best-corrected visual acuity (BCVA) improved similarly in both groups, but there was less postoperative (induced) astigmatism in the two-site group at a marginally significant level (P=0.058). Intra- and postoperative complications were comparable in the two groups. Conclusion: Both techniques yielded similar results concerning final BCVA and IOP reduction. However, the two-site group had less induced astigmatism and better postoperative IOP control with less required
8. EARLY DETECTION OF LOW VISION IN PRE-SCHOOL CHILDREN THROUGH SYSTEMATIC CHECK-UPS AT THE AGE OF 3 AND 5 IN BITOLA
Directory of Open Access Journals (Sweden)
M. SOTIROVSKA-SIRVINI
1997-12-01
The early detection of low vision in pre-school children has been carried out through systematic check-ups by an ophthalmologist sent by the Department for Preventive Protection - Advisory Division at the Medical Center since 1977. The children are invited by the Advisory Center and during the systematic check-ups are sent to the Cabinet for Orthoptics and Pleoptics for the ophthalmologic check-up. The parents are motivated for the check-up and told how to prepare the child for cooperation during the examination. The findings are recorded in the child's record card in the Advisory Center together with the findings of the psychologist, the blood test, urine and faecal parasite tests and other necessary tests. The records of the examinations were obtained from the evidence on systematic check-ups at the Cabinet for Orthoptics and Pleoptics. From 1987 till August this year, 5,414 children (3,609 at the age of 3 and 1,805 at the age of 5) came for systematic check-ups at the Cabinet. Among the examined children, 5.7% were with echophobia, 2.3% with astigmatism, 1.6% with amblyopia, 1.4% with hypermetropia, 0.9% with strabismus and 0.2% with myopia. Among the children with sight problems, echophobia is present in 47.3%, astigmatism in 18.8%, amblyopia in 13.2%, hypermetropia in 11.9%, strabismus in 7.2% and myopia in 1.8%. Children whose parents, brothers or sisters wear glasses were sent for check-ups at the age of 18 months to 3 years. The earliest findings of sight difficulties are obtained with children who cooperate during the check-ups. As soon as low vision is discovered, the defectologist-orthoptic therapist starts the necessary exercises and the sight is corrected with glasses. At this age, children easily adapt to wearing glasses and do not oppose them as do children whose sight correction started at school age.
9. Optical quality of toric intraocular lens implantation in cataract surgery
Institute of Scientific and Technical Information of China (English)
Xian-Wen; Xiao; Jing; Hao; Hong; Zhang; Fang; Tian
2015-01-01
AIM: To analyze the optical quality after implantation of a toric intraocular lens with an optical quality analysis system. METHODS: Fifty-two eyes of forty-four patients with regular corneal astigmatism of at least 1.00 D underwent implantation of an AcrySof toric intraocular lens: T3 group, 19 eyes; T4 group, 18 eyes; T5 group, 10 eyes; T6 group, 5 eyes. Main outcomes, evaluated at 3 months of follow-up, included uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), residual refractive cylinder and intraocular lens (IOL) axis rotation. Objective optical quality was measured using an optical quality analysis system (OQAS II, Visiometrics, Spain), including the cutoff frequency of the modulation transfer function (MTFcutoff), objective scattering index (OSI), Strehl ratio, and OV 100%, OV 20% and OV 9% [the OQAS values at contrasts of 100%, 20%, and 9%]. RESULTS: At 3 months postoperatively, the mean UDVA and CDVA were 0.18±0.11 and 0.07±0.08 logMAR; the mean residual refractive cylinder was 0.50±0.29 D; the mean toric IOL axis rotation was 3.62±1.76 degrees; the mean MTFcutoff, OSI, Strehl ratio, OV 100%, OV 20% and OV 9% were 22.862±5.584, 1.80±0.84, 0.155±0.038, 0.76±0.18, 0.77±0.19 and 0.78±0.21. The values of UDVA, CDVA, IOL axis rotation, MTFcutoff, OSI, Strehl ratio, OV 100%, OV 20% and OV 9% did not differ significantly with the cylinder power of the implanted lens (P>0.05), except for the residual refractive cylinder (P<0.05). CONCLUSION: The optical quality analysis system was useful for characterizing the optical quality of AcrySof toric IOL implantation. Implantation of an AcrySof toric IOL is an effective and safe method to correct corneal astigmatism during cataract surgery.
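Acuities in this record are reported in logMAR (mean UDVA 0.18, CDVA 0.07). The standard conversion to decimal acuity is decimal = 10^(-logMAR); a quick sketch:

```python
def logmar_to_decimal(logmar):
    """Decimal visual acuity equivalent of a logMAR value
    (logMAR 0.0 = decimal 1.0, i.e. 6/6 or 20/20)."""
    return 10.0 ** (-logmar)

udva = logmar_to_decimal(0.18)   # ~0.66 decimal
cdva = logmar_to_decimal(0.07)   # ~0.85 decimal
```

So the cohort's mean corrected acuity of 0.07 logMAR corresponds to roughly 0.85 decimal, just below the 20/20 line.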
10. Gendered Disparities in Quality of Cataract Surgery in a Marginalised Population in Pakistan: The Karachi Marine Fishing Communities Eye and General Health Survey.
Directory of Open Access Journals (Sweden)
Marine fishing communities are among the most marginalised and hard-to-reach groups and have been largely neglected in health research. We examined the quality of cataract surgery and its determinants, with an emphasis on gender, in marine fishing communities in Karachi, Pakistan, using multiple indicators of performance. The Karachi Marine Fishing Communities Eye and General Health Survey was a door-to-door, cross-sectional study conducted between March 2009 and April 2010 in fishing communities living on 7 islands and in coastal areas in Keamari, Karachi, located on the Arabian Sea. A population-based sample of 638 adults, aged ≥50 years, was studied. A total of 145 eyes (of 97 persons) had undergone cataract surgery in this sample. Cataract surgical outcomes assessed included vision (presenting and best-corrected, with a reduced logMAR chart), satisfaction with surgery, astigmatism, and pupil shape. Overall, 65.5% of the operated eyes had some form of visual loss (presenting visual acuity [PVA] <6/12). 55.2%, 29.0%, and 15.9% of these had good, borderline, and poor visual outcomes based on presenting vision; with best correction, these values were 68.3%, 18.6%, and 13.1%, respectively. Of 7 covariates evaluated in the multivariable generalized estimating equation (GEE) analyses, gender was the only significant independent predictor of visual outcome. Women's eyes were nearly 4.38 times more likely to have a suboptimal visual outcome (PVA<6/18) compared with men's eyes (adjusted odds ratio 4.38, 95% CI 1.96-9.79; P<0.001) after adjusting for the effect of household financial status. A higher proportion of women's than men's eyes had an irregular pupil (26.5% vs. 14.8%) or severe/very severe astigmatism (27.5% vs. 18.2%). However, these differences did not reach statistical significance. Overall, more than one fourth (44/144) of cataract surgeries resulted in dissatisfaction. The only significant predictor of satisfaction was visual outcome (P <0
11. A method to design aspheric spectacles for correction of high-order aberrations of human eye
Institute of Scientific and Technical Information of China (English)
LI Rui; WANG ZhaoQi; LIU YongJi; MU GuoGuang
2012-01-01
Aiming at the correction of high-order aberrations of the human eye with spectacles, a design method for aspheric spectacles is proposed based on the eye's wavefront aberration data. Regarding the eyeball and the spectacles as a whole system (the lens-eye system), the surface profiles of the spectacles are obtained by the optimization procedure of lens design. Different from conventional optometry, in which the refraction prescription is acquired with a visual chart, the design takes into account two aspects of actual human viewing: eyeball rolling and a certain distinct viewing field. The rotation angle of the eyeball is set to ±20° when wearing spectacles, and the field of view is set to ±7°, which is especially important when watching a screen display. An individual eye model is constructed as the main part of the lens-eye system. The Liou eye model is modified by sticking a thin meniscus lens to the crystalline lens. The defocus of the individual eye is then transferred to the front surface of the meniscus lens, and the astigmatism and high-order aberrations are transferred to the front surface of the cornea. 50 eyes are involved in this research, among which 36 eyes already have good enough visual performance after sphero-cylindrical correction, 10 eyes have distinct improvement in vision and 4 eyes have no visual improvement with further aspheric correction. 6 typical subject eyes are selected for the aberration analysis and the spectacle design in this paper. It is shown that the validity of visual correction with an aspheric lens depends on the characteristics of the eye's wavefront aberrations, and it is effective for eyes with larger astigmatism or spherical aberration. Compared with sphero-cylindrical correction only, the superiority of the aspheric correction lies mainly in the improvement of MTF at a larger field of view. For the best aspheric correction, the MTF values increase by 18.87%, 38.34%, 44.36%, 51.29% and 57.32% at the spatial frequencies of 40
12. Descemet stripping automated endothelial keratoplasty (DSAEK) with thin grafts in patients suffering from bullous keratopathy with low preoperative visual acuity
Directory of Open Access Journals (Sweden)
S. V. Trufanov
2014-07-01
Purpose: To evaluate the results of DSAEK with thin grafts in patients suffering from bullous keratopathy with low preoperative visual acuity. Methods: DSAEK with thin grafts (thickness 150-70 μm) was performed in 47 patients (47 eyes) suffering from bullous keratopathy without visible leukomas in the corneal stroma. Preoperative visual acuity with maximum spectacle correction averaged 0.05±0.04. Tear film osmolarity was measured in 20 of the patients (20 eyes). Results: During the follow-up period the graft remained transparent in 39 patients. Three months after the operation, mean visual acuity was 0.38±0.16 without correction and 0.51±0.18 with maximum spectacle correction. The spherical component varied from 0 to 3.75 D, averaging 1.63±1.1 D. Corneal astigmatism ranged from 0.5 to 4.0 D, averaging 1.8±0.98 D. Preoperative osmolarity was within the normal reference range for both the operated and non-operated eyes (292.3±10.4 and 279.3±3.51). In the first postoperative week, osmolarity could not be measured on the operated eye, while on the non-operated eye it was 278.4±1.4. After 1, 3 and 6 months, osmolarity on both eyes was within the normal reference range. Conclusion: DSAEK with thin grafts is an effective modern method of surgical treatment of bullous keratopathy. In elderly patients with severe ocular pathology (concomitant eye diseases, repeated eye surgery, an advanced stage of the keratopathy) we have not noted an apparent correlation between graft thickness, visual acuity and the time of recovery of visual function after keratoplasty. Osmolarity in the early postoperative period is a non-informative diagnostic method. Restoration of osmolarity level to preoperative
14. Development of a solid state laser of Nd:YLF
International Nuclear Information System (INIS)
CW laser action was obtained at room temperature in a Nd:YLF crystal in an astigmatically compensated cavity pumped by an argon laser. This laser was entirely designed, constructed and characterized in our laboratories, thus having a high degree of nationalization. It initiates a broader project on laser development with several applications such as nuclear fusion, industry, medicine and telemetry. Through the study of the optical properties of the Nd:YLF crystal, laser operation was predicted using a small-volume gain medium in the mentioned cavity, pumped by the Ar 514.5 nm laser line. To obtain laser action at the σ (1.053 μm) and π (1.047 μm) polarizations, an active medium was prepared consisting of a crystalline plate with a convenient crystallographic orientation. The laser characterization is in reasonable agreement with the initial predictions. For a 3.5% output mirror transmission, the oscillation threshold is about 0.15 W incident on the crystal, depending on the sample used. For 1 W of incident pump light, the output power is estimated to be 12 mW, which corresponds to almost 1.5% slope efficiency. The versatile arrangement is applicable to almost all optically pumped solid-state laser materials. (Author)
15. Graphical user interfaces for teaching and design of GRIN lenses in optical interconnections
International Nuclear Information System (INIS)
The use of graphical user interfaces (GUIs) enables the implementation of practical teaching methodologies to make the comprehension of a given subject easier. GUIs have become common tools in science and engineering education, where very often, the practical implementation of experiences in a laboratory involves much equipment and many people; they are an efficient and inexpensive solution to the lack of resources. The aim of this work is to provide primarily physics and engineering students with a series of GUIs to teach some configurations in optical communications using gradient-index (GRIN) lenses. The reported GUIs are intended to perform a complementary role in education as part of a ‘virtual lab’ to supplement theoretical and practical sessions and to reinforce the knowledge acquired by the students. In this regard, a series of GUIs to teach and research the implementation of GRIN lenses in optical communications applications (including a GRIN light deflector and a beam-size controller, a GRIN fibre lens for fibre-coupling purposes, planar interconnectors, and an anamorphic self-focusing lens to correct astigmatism in laser diodes) was designed using the environment GUIDE developed by MATLAB. Numerical examples using available commercial GRIN lens parameter values are presented. (paper)
16. Impact of capillarity forces on the steady-state self-organization in the thin chromium film on glass under laser irradiation
Energy Technology Data Exchange (ETDEWEB)
Gedvilas, Mindaugas, E-mail: mgedvilas@ftmc.lt; Voisiat, Bogdan; Regelskis, Kęstutis; Račiukaitis, Gediminas
2014-11-28
17. Large binocular telescope interferometer adaptive optics: on-sky performance and lessons learned
Science.gov (United States)
Bailey, Vanessa P.; Hinz, Philip M.; Puglisi, Alfio T.; Esposito, Simone; Vaitheeswaran, Vidhya; Skemer, Andrew J.; Defrère, Denis; Vaz, Amali; Leisenring, Jarron M.
2014-07-01
The Large Binocular Telescope Interferometer is a high-contrast imager and interferometer that sits at the combined bent Gregorian focus of the LBT's dual 8.4 m apertures. The interferometric science drivers dictate 0.1" resolution with 10^3-10^4 contrast at 10 μm, while the 4 μm imaging science drivers require even greater contrasts, but at scales <0.2". In imaging mode, LBTI's Adaptive Optics system is already delivering 4 μm contrast of 10^4-10^5 at 0.3"-0.75" in good conditions. Even in poor seeing, it can deliver up to 90% Strehl ratio at this wavelength. However, the performance could be further improved by mitigating non-common path aberrations (NCPA). Any NCPA remedy must be feasible using only the current hardware: the science camera, the wavefront sensor, and the adaptive secondary mirror. In preliminary testing, we have implemented an "eye doctor" grid-search approach for astigmatism and trefoil, achieving a 5% improvement in Strehl ratio at 4 μm, with future plans to test at shorter wavelengths and with more modes. We find evidence of NCPA variability on short timescales and discuss possible upgrades to ameliorate time-variable effects.
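The Strehl ratios quoted in this abstract can be related to residual RMS wavefront error through the Maréchal approximation, S ≈ exp[−(2πσ/λ)²]. A minimal sketch (the 4 μm wavelength is from the abstract; the 200 nm residual error is an illustrative assumption, not a value from the text):

```python
import math

def strehl_marechal(rms_wfe_nm: float, wavelength_nm: float) -> float:
    """Marechal approximation: S = exp(-(2*pi*sigma/lambda)^2)."""
    phase_rms = 2.0 * math.pi * rms_wfe_nm / wavelength_nm
    return math.exp(-phase_rms ** 2)

# At 4 um (4000 nm), ~200 nm RMS residual error still gives ~0.91 Strehl,
# illustrating why high Strehl is easier to reach at longer wavelengths.
s = strehl_marechal(200.0, 4000.0)
```

The same 200 nm error at a visible wavelength would collapse the Strehl ratio, which is why the abstract's 90% figure is specific to 4 μm.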
18. Challenges and approaches in modern biometry and IOL calculation.
Science.gov (United States)
Haigis, Wolfgang
2012-01-01
The introduction of new intraocular lenses (IOLs), industry marketing to the public and patient expectations have warranted increased accuracy of IOL power calculations. Toric IOLs, multifocal IOLs, aspheric IOLs, phakic lenses, accommodative lenses, cases of refractive lens exchange and eyes that have undergone previous refractive surgery all require improved clinical measurements and IOL prediction formulas. Hence, measurement techniques and IOL calculation formulas are essential factors that affect the refractive outcome. Measurement with ultrasound has been the historic standard for measurement of ocular parameters for IOL calculation. However, the introduction of optical biometry using partial coherence interferometry (PCI) has steadily established itself as the new standard. Additionally, modern optical instruments such as Scheimpflug cameras and optical coherence tomographers are being used to determine corneal power, which was normally the purview of manual keratometry and topography. A number of methods are available to determine the IOL power, including empirical, analytical, numerical and combined methods. Ray tracing techniques, paraxial approximation by matrix methods and classical analytical 'IOL formulas' are actively used for the prediction of IOL power. There is no universal formula for all cases - phakic and pseudophakic cases require different approaches, as do short eyes, long eyes, astigmatic eyes and post-refractive surgery eyes. Invariably, IOLs are characterized by different methods and lens constants, which require individual optimization. This review describes the current methods for biometry and IOL calculation. PMID:23960962
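As a concrete example of the classical analytical 'IOL formulas' this review mentions, the original SRK regression computes IOL power for emmetropia as P = A − 2.5·L − 0.9·K, where A is the manufacturer's lens constant, L the axial length (mm) and K the mean keratometry (D). A minimal sketch (the numeric inputs are illustrative assumptions, not values from the text; modern formulas are considerably more elaborate):

```python
def srk_iol_power(a_constant: float, axial_length_mm: float, mean_k_diopters: float) -> float:
    """Original SRK regression formula for emmetropia: P = A - 2.5*L - 0.9*K."""
    return a_constant - 2.5 * axial_length_mm - 0.9 * mean_k_diopters

# Illustrative inputs: A = 118.4, axial length 23.5 mm, mean K 43.5 D.
p = srk_iol_power(118.4, 23.5, 43.5)
```

The heavy weighting of axial length (2.5 D per mm) is why the review stresses accurate biometry: a 0.1 mm measurement error alone shifts the predicted power by 0.25 D.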
19. Comparative investigation of nonparaxial mode propagation along the axis of uniaxial crystal
Science.gov (United States)
Khonina, S. N.; Kharitonov, S. I.
2015-01-01
We compare the nonparaxial propagation of Bessel and Laguerre-Gaussian modes along the axis of anisotropic media. It is analytically and numerically shown that nonparaxial laser modes propagating along the crystal axis oscillate periodically owing to polarization conversion. The oscillation period for Bessel beams is inversely proportional to the square of the spatial frequency of the laser mode and to the difference between the dielectric constants of the anisotropic crystal. So, for a higher spatial frequency of the Bessel beam, we get a shorter oscillation period. For linearly polarized light, there is a periodic redistribution of the energy between the two transverse components, and for a beam with circular polarization, the energy is transferred from the initial beam to a vortex beam and back. Similar periodic behavior is observed for Laguerre-Gaussian beams of high radial index. However, this is true only at short distances. As the distance increases, the frequency of the periodicity slows down and the beam is astigmatically distorted. We show that high-spatial-frequency nonparaxial beams can provide spin-orbit conversion efficiency close to 100% over small distances (tens of microns) of propagation along the axis of uniaxial crystals. This opens an opportunity for the miniaturization of mode optical converters.
20. NGS WFSs module for MAORY at E-ELT
Science.gov (United States)
Esposito, S.; Agapito, G.; Antichi, J.; Bonanno, A.; Carbonaro, L.; Giordano, C.; Spanò, P.
We report on the natural guide star (NGS) wavefront sensor (WFS) module for MAORY, the multi-conjugate adaptive optics (MCAO) system for the ESO E-ELT. Three low-order, near-infrared (H-band), Shack-Hartmann sensors provide fast acquisition of the first 5 modes (tip, tilt, focus, astigmatism) on 3 natural guide stars over a 160 arcsec field of view. Three moderate-order (20x20), visible (600-800 nm), pyramid WFSs provide the slow truth sensing to correct LGS wavefront estimates of low-order modes. These sensors are mounted on three R-theta stages to patrol the field of view. The module is also equipped with a retractable, on-axis, high-order (80x80), visible, pyramid WFS for the single-conjugate AO (SCAO) mode of MAORY and MICADO. The visible WFSs share the same 80x80 pyramid WFS design. This choice also enables an MCAO NGS capability. Simulations show that Strehl ratios (SR) over 40% are reached with MCAO and three 2x2-subaperture NIR low-order WFSs working with H-mag=20 reference stars. In SCAO mode, 90% SR for an 8-mag star with a contrast down to 10^-5, and 45% SR for a 16-mag star, are achieved.
1. Frequency of Color Vision Defect in Students of Mashhad Dental School and Evaluation of Related Factors
Directory of Open Access Journals (Sweden)
Introduction: In esthetic dentistry, color matching ability is one of the factors influencing treatment. To achieve this goal, matching the color of a restoration with the natural teeth is essential. The objective of this study was to determine the frequency of color vision defect in students of Mashhad Dental School and to evaluate related factors. Materials & Methods: In this descriptive analytical study, 356 students of Mashhad Dental School were evaluated. Demographic data including age, gender, color vision defect in relatives, use of glasses and contact lenses, and refractive errors (myopia, hypermetropia and astigmatism) were documented in the designed questionnaire. The Ishihara diagnostic test was used to detect impaired color vision. Statistical analysis was performed in SPSS version 19 using Chi-square and logistic regression tests at the 0.05 significance level. Results: Color vision defect was found in 6% (12 persons) of male students, while none of the females were affected. All affected persons were red-green color blind and strong deutan. There was a significant relationship between color vision deficiency and a history of color vision defect in relatives (P = 0.03): 25% (3 persons) of the affected persons had a positive family history of color vision defect. Conclusion: Considering the frequency of color vision defect in the present study, the importance of color matching in dental treatments, and the fact that most affected persons are unaware of this defect, color vision tests seem necessary.
2. Photo-elastic effect, thermal lensing and depolarization in a-cut tetragonal laser crystals
Science.gov (United States)
Yumashev, K. V.; Zakharova, A. N.; Loiko, P. A.
2016-06-01
We report an analytical description of the thermal lensing effect in tetragonal crystals cut along the [1 0 0] crystallographic axis, for the two principal light polarizations, E ┴ c and E || c, under diode pumping (plane-stress approximation). Within this approach, we take into account the anisotropy of the elastic, photo-elastic, thermal and optical properties of the material. Expressions for the ‘generalized’ thermo-optic coefficient χ are presented. It is shown that the astigmatism of the thermal lens is determined by both the photo-elastic and end-bulging effects. The sign of the photo-elastic term χ″ can be either positive or negative, significantly affecting the sign of the thermal lens. Depolarization loss in a-cut tetragonal crystals is a few orders of magnitude lower than that in cubic crystals. Calculations are performed for the a-cut tetragonal molybdates Nd:CaMoO4, Nd:PbMoO4 and Nd:NaBi(MoO4)2.
3. Adaptive optics system for fast automatic control of laser beam jitters in air
Science.gov (United States)
Grasso, Salvatore; Acernese, Fausto; Romano, Rocco; Barone, Fabrizio
2010-04-01
Adaptive optics (AO) systems can perform fast automatic control of laser beam jitter for several applications in basic research as well as for the improvement of industrial and medical devices. We present theoretical and experimental research showing the possibility of suppressing laser beam geometrical fluctuations of higher-order Hermite-Gauss modes in interferometric gravitational wave (GW) antennas. This in turn allows a significant reduction of the noise that originates from the coupling of the laser source oscillations with the interferometer asymmetries, and introduces the concrete possibility of overcoming the sensitivity limit of the GW antennas, currently set at the 10^-23 1/√Hz level. We have carried out a feasibility study of a novel AO system which performs effective laser jitter suppression over a 200 Hz bandwidth. It extracts the wavefront error signals in terms of Hermite-Gauss (HG) coefficients and performs the wavefront correction using Zernike polynomials. An experimental prototype of the AO system has been implemented and tested in our laboratory at the University of Salerno, and the results we have achieved fully confirm the effectiveness and robustness of the control of first- and second-order laser beam geometrical fluctuations, in good accordance with GW antenna requirements. Above all, we have measured a 60 dB reduction of the astigmatism and defocus modes at low frequency (below 1 Hz) and a 20 dB reduction over the 200 Hz bandwidth.
4. [Health status of journalists and organization of work in a national daily newspaper].
Science.gov (United States)
Boscolo, P; Tulli, F; Fattorini, E; De Stefano, A; Rapinese, M; Carlesi, G; Castagnoli, A; Messineo, A
1988-05-01
A multidisciplinary investigation was performed on 173 reporters (53 men and 20 women) of a newspaper. The microclimate and illumination conditions of the main office, in which the use of VDTs was beginning, were satisfactory, although not all the instruments were correctly adjusted. A very low percentage of reporters working in the main center suffered from arterial hypertension, indicating the presence of the "healthy worker effect". The plasma cortisol and arterial blood pressure values of 10 reporters of the main office changed normally during the evening hours, except in two cases. It is worth pointing out that among the reporters there was a significant correlation between spondylosis and astigmatism. The psychological investigation showed that the reporters were aggressive, eager for success and constantly attentive. EMG biofeedback demonstrated nervous tension and difficulty relaxing in the reporters with a longer period of employment. In particular, the State-Trait Anxiety Inventory was more altered in the reporters of the main center than in those of the peripheral offices. PMID:3154751
5. Formation of plasma channels in air under filamentation of focused ultrashort laser pulses
Science.gov (United States)
Ionin, A. A.; Seleznev, L. V.; Sunchugasheva, E. S.
2015-03-01
The formation of plasma channels in air under filamentation of focused ultrashort laser pulses was experimentally and theoretically studied together with theoreticians of the Moscow State University and the Institute of Atmospheric Optics. The influence of various characteristics of ultrashort laser pulses on these plasma channels is discussed. Plasma channels formed under filamentation of focused laser beams with a wavefront distorted by spherical aberration (introduced by adaptive optics) and by astigmatism, with cross-section spatially formed by various diaphragms and with different UV and IR wavelengths, were experimentally and numerically studied. The influence of plasma channels created by a filament of a focused UV or IR femtosecond laser pulse (λ = 248 nm or 740 nm) on characteristics of other plasma channels formed by a femtosecond pulse at the same wavelength following the first one with varied nanosecond time delay was also experimentally studied. An application of plasma channels formed due to the filamentation of focused UV ultrashort laser pulses including a train of such pulses and a combination of ultrashort and long (~100 ns) laser pulses for triggering and guiding long (~1 m) electric discharges is discussed.
6. X-ray tests of a two-dimensional stigmatic imaging scheme with variable magnifications
Science.gov (United States)
Lu, J.; Bitter, M.; Hill, K. W.; Delgado-Aparicio, L. F.; Efthimion, P. C.; Pablant, N. A.; Beiersdorfer, P.; Caughey, T. A.; Brunner, J.
2014-11-01
A two-dimensional stigmatic x-ray imaging scheme, consisting of two spherically bent crystals, one concave and one convex, was recently proposed [M. Bitter et al., Rev. Sci. Instrum. 83, 10E527 (2012)]. The Bragg angles and the radii of curvature of the two crystals of this imaging scheme are matched to eliminate the astigmatism and to satisfy the Bragg condition across both crystal surfaces for a given x-ray energy. In this paper, we consider more general configurations of this imaging scheme, which allow us to vary the magnification for a given pair of crystals and x-ray energy. The stigmatic imaging scheme has been validated for the first time by imaging x-rays generated by a micro-focus x-ray source with a source size of 8.4 μm, as validated by knife-edge measurements. Results are presented from imaging the tungsten Lα1 emission at 8.3976 keV, using a convex Si-422 crystal and a concave Si-533 crystal with 2d spacings of 2.21707 Å and 1.65635 Å and radii of curvature of 500 ± 1 mm and 823 ± 1 mm, respectively, showing a spatial resolution of 54.9 μm. This imaging scheme is expected to be of interest for the two-dimensional imaging of laser-produced plasmas.
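The Bragg angles behind this crystal pairing can be checked from the quoted 2d spacings and photon energy via λ = hc/E and sin θ = λ/2d. A quick sketch using the numbers from the abstract (the hc constant is the standard 12.398 keV·Å conversion; the computed angles are our own check, not values stated in the text):

```python
import math

HC_KEV_ANGSTROM = 12.398419  # hc in keV*Angstrom

def bragg_angle_deg(energy_kev: float, two_d_angstrom: float) -> float:
    """First-order Bragg angle in degrees: sin(theta) = lambda / 2d."""
    wavelength = HC_KEV_ANGSTROM / energy_kev  # Angstrom
    return math.degrees(math.asin(wavelength / two_d_angstrom))

# Tungsten L-alpha1 at 8.3976 keV on the two crystals from the abstract:
theta_si422 = bragg_angle_deg(8.3976, 2.21707)  # convex Si-422
theta_si533 = bragg_angle_deg(8.3976, 1.65635)  # concave Si-533
```

The two crystals thus work at distinctly different Bragg angles for the same photon energy, which is what the matched-radii geometry of the scheme has to accommodate.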
7. Optical design of a stigmatic extreme-ultraviolet spectroscopic system for emission and absorption studies of laser-produced plasmas
International Nuclear Information System (INIS)
The design of a stigmatic spectroscopic system for diagnostics of laser-produced plasmas in the 2.5-40 nm region is presented. The system consists of a grazing-incidence toroidal mirror that focuses the radiation emitted by a laser-produced plasma onto the entrance slit of a spectrograph. The latter has a grazing-incidence spherical variable-line-spaced grating with flat-field properties, coupled to a spherical focusing mirror that compensates for the astigmatism. The mirror is crossed with respect to the grating; i.e., it is mounted with its tangential plane coincident with the equatorial plane of the grating. The spectrum is acquired by an extreme-ultraviolet (EUV) enhanced CCD detector with high quantum efficiency. This stigmatic design also has spectral and spatial resolution capability for extended sources: the spectral resolution is preserved for off-plane points, whereas the spatial resolution decreases for points far from the optical axis. The expected performance is presented and compared with that of a stigmatic design with a plane variable-line-spaced grating illuminated in converging light
8. Refractive Errors Affect the Vividness of Visual Mental Images
Science.gov (United States)
Palermo, Liana; Nori, Raffaella; Piccardi, Laura; Zeri, Fabrizio; Babino, Antonio; Giusberti, Fiorella; Guariglia, Cecilia
2013-01-01
The hypothesis that visual perception and mental imagery are equivalent has never been explored in individuals with vision defects that do not prevent visual perception of the world, such as refractive errors. A refractive error (i.e., myopia, hyperopia or astigmatism) is a condition in which the refracting system of the eye fails to focus objects sharply on the retina. As a consequence, refractive errors cause blurred vision. We subdivided 84 individuals according to their spherical equivalent refraction into Emmetropes (control individuals without refractive errors) and Ametropes (individuals with refractive errors). Participants performed a vividness task and completed a questionnaire that explored their cognitive style of thinking before their vision was checked by an ophthalmologist. Although results showed that Ametropes had less vivid mental images than Emmetropes, this did not affect the development of their cognitive style of thinking; in fact, Ametropes were able to use both verbal and visual strategies to acquire and retrieve information. The present data are consistent with the hypothesis of equivalence between imagery and perception. PMID:23755186
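The spherical equivalent refraction used to subdivide the participants is the sphere plus half the cylinder of the refractive prescription. A minimal sketch (the ±0.50 D emmetropia cutoff is a common clinical convention assumed here, not a value taken from the paper):

```python
def spherical_equivalent(sphere_d: float, cylinder_d: float) -> float:
    """Spherical equivalent in diopters: SE = sphere + cylinder / 2."""
    return sphere_d + cylinder_d / 2.0

def is_emmetrope(se_d: float, cutoff_d: float = 0.50) -> bool:
    # Assumed convention: |SE| <= 0.50 D counts as emmetropic.
    return abs(se_d) <= cutoff_d

# A myopic astigmat: sphere -2.00 D, cylinder -1.00 D -> SE = -2.50 D
se = spherical_equivalent(-2.00, -1.00)
```

Collapsing sphere and cylinder into a single number like this is what makes a simple two-group (Emmetrope/Ametrope) split possible.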
9. E-beam column monitoring for improved CD SEM stability and tool matching
Science.gov (United States)
Hayes, Timothy S.; Henninger, Randall S.
2000-06-01
Tool matching is an important metric for in-line semiconductor metrology systems. The ability to obtain the same measurement results on two or more systems allows a semiconductor fabrication facility (fab) to deploy product in an efficient manner improving overall equipment efficiency (OEE). Many parameters on the critical dimension scanning electron microscopes (CDSEMs) can affect the long-term precision component to the tool-matching metric. One such class of parameters is related to the electron beam column stability. The alignment and condition of the gun and apertures, as well as astigmatism correction, have all been found to affect the overall measurements of the CDSEM. These effects are now becoming dominant factors in sub-3nm tool-matching criteria. This paper discusses the methodologies of column parameter monitoring and actions and controls for improving overall stability. Results have shown that column instabilities caused by contamination, gun fluctuations, component failures, detector efficiency, and external issues can be identified through parameter monitoring. The Applied Materials (AMAT) 7830 Series CDSEMs evaluated at IBM's Burlington, Vermont manufacturing facility have demonstrated 5 nm tool matching across 11 systems, which has resulted in non-dedicated product deployment and has significantly reduced cost of ownership.
10. Development of an imaging VUV monochromator in normal incidence region
International Nuclear Information System (INIS)
This paper describes the development of a two-dimensional imaging monochromator system. A commercial normal-incidence monochromator working on an off-Rowland-circle mounting is used for this purpose. The imaging is achieved by utilizing the pinhole-camera effect created by an entrance slit of limited height. The astigmatism in the normal-incidence mounting is small compared with that of a grazing-incidence mount, but has a finite value. The point is that, for near-normal incidence, the vertical focus produced by the concave grating lies beyond the exit slit. Therefore, by placing a 2-D detector at a position away from the exit slit (∼30 cm), a one-to-one correspondence between the position of a point on the detector and the point in the source where it originated is accomplished. This paper consists of 1) the principle and development of the imaging monochromator using the off-Rowland mounting, including the 2-D detector system, 2) a computer simulation by ray tracing to investigate the imaging properties of the system and the aberration of the spherical concave grating at the exit slit, 3) the plasma light source (TPD-S) for the test experiments, 4) the performance of the imaging monochromator system in terms of spatial resolution and sensitivity, and 5) the use of this system for diagnostic studies on the JIPP T-IIU tokamak. (J.P.N.)
11. Cuba on our minds
Directory of Open Access Journals (Sweden)
Charles Rutheiser
2002-07-01
[First paragraph] Conversations with Cuba. C. PETER RIPLEY. Athens: University of Georgia Press, 1999. xxvi + 243 pp. (Cloth US$24.95) Real Life in Castro's Cuba. CATHERINE MOSES. Wilmington DE: Scholarly Resources, 2000. xi + 184 pp. (Paper US$18.95) The Cuban Way: Capitalism, Communism, and Confrontation. ANA JULIA JATAR-HAUSMANN. West Hartford CT: Kumarian Press, 1999. xvii + 161 pp. (Paper US$21.95) Castro and the Cuban Revolution. THOMAS M. LEONARD. Westport CT: Greenwood Press, 1999. xxv + 188 pp. (Cloth US$45.00) Cuba has attracted a great deal of attention from both scholarly and popular authors since 1959. The literature they have produced has generated much heat, but has shed a considerably smaller amount of light. Most accounts have been situated at the polar extremes of ideology, either condemning or celebrating the island's revolutionary experiment and its maximum leader (for the former, the island is often virtually collapsed into the personage of Fidel Castro), with the same degree of vociferous, simplistic certitude. However, neither the fulminating diatribes of the anti-Castro Right nor the fulsome paeans of the Euro-American Left have done much justice to making sense of the complex, confounding, and contradictory realities of Cuban society before, during, and after the Revolution. Indeed, contemporary developments have only magnified the distortions rendered by the astigmatic lenses of cold-war intellectualism.
12. Characterization and Operation of Liquid Crystal Adaptive Optics Phoropter
Energy Technology Data Exchange (ETDEWEB)
Awwal, A; Bauman, B; Gavel, D; Olivier, S; Jones, S; Hardy, J L; Barnes, T; Werner, J S
2003-02-05
Adaptive optics (AO), a mature technology developed for astronomy to compensate for the effects of atmospheric turbulence, can also be used to correct the aberrations of the eye. The classic phoropter is used by ophthalmologists and optometrists to estimate and correct the lower-order aberrations of the eye, defocus and astigmatism, in order to derive a vision correction prescription for their patients. An adaptive optics phoropter measures and corrects the aberrations in the human eye using adaptive optics techniques, which are capable of dealing with both the standard low-order aberrations and higher-order aberrations, including coma and spherical aberration. High-order aberrations have been shown to degrade visual performance for clinical subjects in initial investigations. An adaptive optics phoropter has been designed and constructed based on a Shack-Hartmann sensor to measure the aberrations of the eye, and a liquid crystal spatial light modulator to compensate for them. This system should produce near diffraction-limited optical image quality at the retina, which will enable investigation of the psychophysical limits of human vision. This paper describes the characterization and operation of the AO phoropter with results from human subject testing.
13. Optical analysis of solar facility heliostats
Energy Technology Data Exchange (ETDEWEB)
Igel, E.; Hughes, R.L.
1977-05-01
An experimentally verified, simple analytical model based on classical optical aberrations is derived and predicts the power reception of a central-receiver solar facility. A laboratory simulation was made of a typical heliostat, and its images were photographed and measured at several angles of incidence. The analytically predicted image size agrees with experiment to within 10% over an incident-angle range of 60 degrees. Image sizes for several of the heliostats in the Sandia-ERDA Solar Thermal Test Facility array were calculated throughout a day and compared with ideal images and the size of the receiver. The optical parameters of the system and the motion of the sun were found to severely affect the design and optimization of any solar thermal facility. This analysis shows that it is the astigmatism aberration which governs the solar image size at the receiver. Image growth is minimal when heliostats are used at small angles of incidence, which usually corresponds to a limited operating time of two to three hours. However, image size is markedly increased at large angles of incidence. The principal result is that the predominant sources of image enlargement are identified and measures for minimizing them are presented. This analysis considers only the idealized optical problem and does not address the pragmatic errors associated with implementation and operation of a heliostat array.
14. Late onset post-LASIK keratectasia with reversal and stabilization after use of latanoprost and corneal collagen cross-linking
Directory of Open Access Journals (Sweden)
2012-01-01
Full Text Available We report a case of late onset keratectasia after laser in situ keratomileusis (LASIK) and its quick reversal and stabilization after use of latanoprost and riboflavin/ultraviolet-A corneal collagen cross-linking (CXL). A 39-year-old man with normal intraocular pressure developed a rapid deterioration of vision in his left eye 6 years after LASIK retreatment for high myopic astigmatism. Keratectasia was diagnosed by corneal topography and ultrasound pachymetry. After two months of treatment with latanoprost and a minor intraocular pressure reduction, uncorrected distance visual acuity improved from 20/100 to 20/20 and corneal topography showed reversal of keratectasia. CXL was performed after the reversal to achieve long-term stabilization. At the 1, 3, 6, 13 and 39 month follow-up exams after CXL, stable vision, refraction, and topography were registered. This case shows that keratectasia may rapidly occur several years after LASIK and that a quick reversal and stabilization may be achieved by use of latanoprost followed by CXL.
15. [Excessive medical problems in the treatment of common eye diseases in children].
Science.gov (United States)
Wang, L H
2016-08-01
In this paper, some typical excessive medical problems in the treatment of common eye diseases in children are listed as follows: unnecessary examinations carried out for children with little or no corresponding complaints; prescription of spectacles for physiological hyperopia or astigmatism in children; over-diagnosis and over- or nonstandard treatment of amblyopia; strabismus surgeries performed in children with esotropia but without full optical correction of hyperopic refractive error, in children with monocular strabismus and amblyopia but without standard cover therapy, and in children with intermittent exotropia but without optical correction of myopic refractive errors and myopic anisometropia or evaluation of their fusional control ability; exaggeration of the harm of myopia and of the curative effect of orthokeratology contact lenses without considering the patient's compliance; and cataract surgery performed in infants with partial opacity of the lens that has little effect on vision. Every ophthalmologist should work based on evidence-based preferred practice patterns, professional standards and expert consensus to promote the standardization of the diagnosis and treatment of children's common eye diseases in China. (Chin J Ophthalmol, 2016, 52: 561-564). PMID:27562274
16. Large Binocular Telescope Interferometer Adaptive Optics: On-sky performance and lessons learned
CERN Document Server
Bailey, Vanessa P; Puglisi, Alfio T; Esposito, Simone; Vaitheeswaran, Vidhya; Skemer, Andrew J; Defrere, Denis; Vaz, Amali; Leisenring, Jarron M
2014-01-01
The Large Binocular Telescope Interferometer is a high contrast imager and interferometer that sits at the combined bent Gregorian focus of the LBT's dual 8.4~m apertures. The interferometric science drivers dictate 0.1'' resolution with $10^3-10^4$ contrast at $10~\mu m$, while the $4~\mu m$ imaging science drivers require even greater contrasts, but at scales $>$0.2''. In imaging mode, LBTI's Adaptive Optics system is already delivering $4~\mu m$ contrast of $10^4-10^5$ at $0.3''-0.75''$ in good conditions. Even in poor seeing, it can deliver up to 90% Strehl Ratio at this wavelength. However, the performance could be further improved by mitigating Non-Common Path Aberrations (NCPA). Any NCPA remedy must be feasible using only the current hardware: the science camera, the wavefront sensor, and the adaptive secondary mirror. In preliminary testing, we have implemented an "eye doctor" grid search approach for astigmatism and trefoil, achieving 5% improvement in Strehl Ratio at $4~\mu m$, with future plans to tes...
17. [Design of Dual-Beam Spectrometer in Spectrophotometer for Colorimetry].
Science.gov (United States)
Liu, Yi-xuan; Yan, Chang-xiang
2015-07-01
Spectrophotometers for colorimetry are usually composed of two independent and identical spectrometers. In order to reduce the volume of a spectrophotometer for colorimetry, a design method for a double-beam spectrometer is put forward. A traditional spectrometer is modified so that a single spectrometer can realize the function of two, which is especially suitable for portable instruments. One slit is replaced by a double slit, so that two beams of spectra can be detected. The working principle and design requirements of the double-beam spectrometer are described. A spectrometer for a portable spectrophotometer is designed by this method. A toroidal imaging mirror is used in the Czerny-Turner double-beam spectrometer in this paper, which better corrects astigmatism and prevents dual-beam spectral crosstalk. The results demonstrate that the double-beam spectrometer designed by this method meets the design specifications, with a spectral resolution better than 10 nm, a spectral length of 9.12 mm, and a volume of 57 mm x 54 mm x 23 mm, without dual-beam spectral overlap at the detector. Compared with a traditional spectrophotometer, the modified spectrophotometer uses one double-beam spectrometer instead of two spectrometers, which greatly reduces the volume. This design method is specially applicable to portable spectrophotometers and can also be widely applied in other double-beam spectrophotometers, offering a new idea for the design of dual-beam spectrophotometers. PMID:26717779
18. Multi-pass gas cell designed for VOCs analysis by infrared spectroscopy system
Science.gov (United States)
Wang, Junbo; Wang, Xin; Wei, Haoyun
2015-10-01
Volatile Organic Compounds (VOCs) emitted from chemical, petrochemical, and other industries are among the most common air pollutants, leading to various environmental hazards. Regulations to control VOC emissions have become more and more important in China, which requires dedicated VOC measurement systems. A multi-component analysis system, with an infrared spectrometer, a gas handling module and a multi-pass gas cell, is one of the most effective air pollution monitoring facilities. In a VOC analysis system, the optical multi-pass cell must be heated above 150 degrees Celsius to prevent condensation of the component gases. Besides that, the gas cell needs to be designed with an optical path length that matches the detection sensitivity requirement within a compact geometry. In this article, a multi-pass White cell was designed for high temperature absorption measurements within a specified geometry requirement. Aberration theory is used to establish a model that accurately calculates the astigmatism of the reflector system. To obtain the optimum output energy, the dimensions of the cell geometry, object mirrors and field mirror are optimized by visible ray-tracing simulation. Finite element analysis was then used to perform a thermal analysis of the external and internal structural elements for high stability. According to the simulation, the cell designed in this paper has an optical path length of 10 meters with an internal volume of 3 liters, and has good stability between room temperature and 227 degrees Celsius.
19. Generation and application of the soft X-ray laser beam based on capillary discharge
International Nuclear Information System (INIS)
In this work we report on the generation and characterization of a focused soft X-ray laser beam with intensity and energy density that exceed the threshold for the ablation of PMMA. We demonstrate the feasibility of direct ablation of holes using a focused soft X-ray laser beam. Ablated craters in PMMA/gold-covered-PMMA samples were obtained by focusing the soft X-ray Ar8+ laser pulses generated by a 46.9 nm tabletop capillary-discharge-pumped driver with a spherical Si/Sc multilayer mirror. It was found that the focused beam is capable of ablating PMMA in a single shot, even if the focus is significantly influenced by astigmatism. Analysis of the laser beam footprints by atomic force microscope shows that the ablated holes have a periodic surface structure (similar to a Laser-Induced Periodic Surface Structure) with period ∼2.8 μm and peak-to-peak depth ∼5-10 nm.
20. Small Incision Lenticule Extraction (SMILE) vs. Femtosecond Laser in Situ Keratomileusis (FS-LASIK) for treatment of myopia
DEFF Research Database (Denmark)
Hansen, Rasmus Søgaard; Justesen, Birgitte; Lyhne, Niels;
), and safety at 1 day, 1 week and 3 months after SMILE and FS-LASIK for all degrees of myopia, but in particular high myopia. Setting: Department of Ophthalmology, Odense University Hospital, Odense, Denmark Methods: Retrospective study of results after SMILE and FS-LASIK for all degrees of myopia. All...... treatments were performed at the Department of Ophthalmology, Odense University Hospital from April 2011 to December 2013. Inclusion criteria: CDVA ≤ 0.10 (logMAR) before surgery and no other ocular conditions than myopia with or without astigmatism of maximum 3 D. Exclusion criteria: Eyes having undergone...... diameter ranged from 6.00 to 6.60 mm, whereas the FS-LASIK optical zone ranged from 6.00 to 6.25 mm. Maximum attempted spherical correction was -10.00 D in both procedures. Clinical examinations were performed pre-operatively and at 1 day, 1 week and 3 months post-operatively. For analysis, high myopia...
1. Small Incision Lenticule Extraction (SMILE) vs. Femtosecond Laser in Situ Keratomileusis (FS-LASIK) for treatment of myopia
DEFF Research Database (Denmark)
Hansen, Rasmus Søgaard; Lyhne, Niels; Justesen, Birgitte;
and CDVA), and safety at 1 day, 1 week and 3 months after SMILE and FS-LASIK for all degrees of myopia, but in particular high myopia. Setting: Department of Ophthalmology, Odense University Hospital, Odense, Denmark Methods: Retrospective study of results after SMILE and FS-LASIK for all degrees...... of myopia. All treatments were performed at the Department of Ophthalmology, Odense University Hospital from April 2011 to December 2013. Inclusion criteria: CDVA ≤ 0.10 (logMAR) before surgery and no other ocular conditions than myopia with or without astigmatism of maximum 3 D. Exclusion criteria: Eyes...... myopia was defined as a spherical equivalent (SE) refraction of -6.00 D or worse. Results: In total, 612 SMILE eyes and 306 FS-LASIK eyes were included and analyzed. Before surgery, 88% of SMILE eyes and 85% of FS-LASIK eyes were highly myopic and SE refraction averaged -7.26±1.75 D (range: -0...
2. Factors Influencing Intraocular Pressure Changes after Laser In Situ Keratomileusis with Flaps Created by Femtosecond Laser or Mechanical Microkeratome.
Directory of Open Access Journals (Sweden)
Meng-Yin Lin
Full Text Available The aim of this study is to describe factors that influence the measured intraocular pressure (IOP) change and to develop a predictive model after myopic laser in situ keratomileusis (LASIK) with a femtosecond (FS) laser or a microkeratome (MK). We retrospectively reviewed preoperative, intraoperative, and 12-month postoperative medical records in 2485 eyes of 1309 patients who underwent LASIK with an FS laser or an MK for myopia and myopic astigmatism. Data were extracted, such as preoperative age, sex, IOP, manifest spherical equivalent (MSE), central corneal keratometry (CCK), central corneal thickness (CCT), intended flap thickness, and postoperative IOP (postIOP) at 1, 6 and 12 months. Linear mixed model (LMM) and multivariate linear regression (MLR) methods were used for data analysis. In both models, the preoperative CCT and ablation depth had significant effects on predicting IOP changes in the FS and MK groups. The intended flap thickness was a significant predictor only in the FS laser group (P < .0001 in both models). In the FS group, LMM and MLR could respectively explain 47.00% and 18.91% of the variation of postoperative IOP underestimation (R² = 0.47 and R² = 0.1891). In the MK group, LMM and MLR could explain 37.79% and 19.13% of the variation of IOP underestimation (R² = 0.3779 and 0.1913, respectively). The best-fit model for prediction of IOP changes was the LMM in LASIK with an FS laser.
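The multivariate-linear-regression step described in the abstract above can be sketched as follows. This is a hedged illustration only: the synthetic data, variable names, coefficients and noise level are assumptions standing in for the study's 2485-eye dataset, not its actual values.

```python
import numpy as np

# Synthetic stand-in for the study's preoperative covariates.
rng = np.random.default_rng(0)
n = 200
cct = rng.normal(540, 30, n)        # central corneal thickness (um), assumed
ablation = rng.normal(80, 25, n)    # ablation depth (um), assumed
flap = rng.normal(110, 10, n)       # intended flap thickness (um), assumed

# Synthetic "true" model for post-LASIK IOP change plus measurement noise.
d_iop = -2.0 + 0.004 * cct - 0.015 * ablation - 0.01 * flap + rng.normal(0, 0.8, n)

# Ordinary least squares fit of the multivariate linear model.
X = np.column_stack([np.ones(n), cct, ablation, flap])
beta, *_ = np.linalg.lstsq(X, d_iop, rcond=None)

# Coefficient of determination, analogous to the R^2 values quoted above.
pred = X @ beta
ss_res = np.sum((d_iop - pred) ** 2)
ss_tot = np.sum((d_iop - d_iop.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"coefficients: {np.round(beta, 4)}")
print(f"R^2 = {r2:.3f}")
```

The modest R² such a fit yields on noisy data mirrors the study's observation that a plain MLR explains far less variance than the linear mixed model.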
3. The effect of retinal defocus on golf putting.
Science.gov (United States)
Bulson, Ryan C; Ciuffreda, Kenneth J; Hung, George K
2008-07-01
The purpose of this experiment was to determine the effect of type and magnitude of retinal defocus on golf putting accuracy, and on the related eye, head, and putter movements. Eye, head, and putter movements were assessed objectively along with putting accuracy in 16 young adult, visually normal inexperienced golfers during a fixed 9-foot golf putt. Convex spherical (+0.50 D, +1.00 D, +1.50 D, +2.00 D, +10.00 D) and cylindrical (+1.00 D x 90, +2.00 D x 90) lenses were added binocularly to create various types and magnitudes of retinal defocus. Putting accuracy was significantly reduced only under the highest spherical blur lens condition (+10.00 D). No significant differences were found between any other lens conditions for eye, head or putter movements. Small amounts of spherical and astigmatic retinal defocus had a minimal impact on overall golf putting performance, except for putting accuracy under the highest blur condition. This is consistent with the findings of related studies. For a fixed putting distance, factors other than quality of the retinal image, such as blur adaptation and motor learning, appeared to be sufficient to maintain a high level of motor performance. PMID:18565089
4. Sparse aperture mask wavefront sensor testbed results
Science.gov (United States)
Subedi, Hari; Zimmerman, Neil T.; Kasdin, N. Jeremy; Riggs, A. J. E.
2016-07-01
Coronagraphic exoplanet detection at very high contrast requires the estimation and control of low-order wavefront aberrations. At the Princeton High Contrast Imaging Lab (PHCIL), we are working on a new technique that integrates a sparse-aperture mask (SAM) with a shaped pupil coronagraph (SPC) to make precise estimates of these low-order aberrations. We collect the starlight rejected from the coronagraphic image plane and interfere it using the SAM at the relay pupil to estimate the low-order aberrations. In our previous work we numerically demonstrated the efficacy of the technique and proposed a method to sense and control these differential aberrations in broadband light. We also presented early testbed results in which the SAM was used to sense pointing errors. In this paper, we briefly overview the SAM wavefront sensor technique, explain the design of the completed testbed, and report the experimental estimation results for the dominant low-order aberrations such as tip/tilt, astigmatism and focus.
5. Why are freeform telescopes less alignment sensitive than a traditional unobscured TMA?
Science.gov (United States)
Thompson, Kevin P.; Schiesser, Eric; Rolland, Jannick P.
2015-10-01
As freeform optical systems emerge as interesting and innovative solutions for imaging in 3D packages, there is an assumption that they will be more alignment sensitive, particularly at assembly. While it is true that the clocking of a component becomes a relatively weak new tolerance, for the most effective new class of freeform systems the alignment sensitivity is actually lower in most cases than for a comparable traditional unobscured three mirror anastigmat (TMA) telescope. Traditional unobscured TMA telescopes, whose designs emerged in the mid-70s and which began to appear as hardware in the literature in the early 90s, are based on using increasingly offset apertures with otherwise coaxial, rotationally symmetric mirrors. The mirrors (typically 3, to correct spherical aberration, coma, and astigmatism) have evolved to contain more high order terms as the designs are pushed to more compact and wider field packages - the NIRCAM camera for the JWST is an excellent example of this [1]. As the higher order terms are added, the mirrors become increasingly sensitive to decenters and tilts. An emerging class of freeform telescopes that provide wider field of view and/or faster f/numbers than the traditional TMA are based on a strategy where the surface shape remains a low order Zernike-type surface even in compact, unobscured packages. This optical design strategy results in an optical form that is not only higher performance but simultaneously less sensitive to alignment.
6. [Soft contact lenses in general practice (author's transl)].
Science.gov (United States)
Miller, B
1975-07-01
In contrast to hard lenses, the soft lens has sufficient permeability for oxygen and water-soluble substances, whereas high molecular weight substances, bacteria and viruses cannot penetrate soft lenses as long as their surfaces are intact. The two principal production methods, the spin-cast method and the lathe-turned method, are compared. The wearing life of a soft lens depends on the deposits of tear proteins on the surface of the lens and on the disinfection method. Daily boiling of the lenses shortens their useful life, while chemical disinfection causes, besides bacteriolysis, damage to the corneal cell protein. New cleaners based on proteolytic plant enzymes promise good results. For the optical correction of astigmatism of more than 1 cyl, soft lenses with a conic outer surface are used, or combinations of a soft and a hard lens (Duosystem). The therapeutic use of soft lenses aims at: protection of the cornea against mechanical irritation, relief of pain, and protracted delivery of medications. Further indications for use: aseptic corneal inflammation and corneal defects.
7. Beam shaping characteristics of an unstable-waveguide hybrid resonator.
Science.gov (United States)
Xiao, Longsheng; Qin, Yingxiong; Tang, Xiahui; Wan, Chenhao; Li, Gen; Zhong, Lijing
2014-04-01
The unstable-waveguide hybrid resonator emits a rectangular, simple astigmatic beam with a large number of high-spatial-frequency oscillations in the unstable direction. To equalize the beam quality, a beam shaping system with a spatial filter for the hybrid resonator was investigated in this paper by numerical simulation and experiment. The high-frequency components and the fundamental mode of the output beam of the hybrid resonator in the unstable direction are separated by a focusing lens. The high-frequency components of the beam are then eliminated by a spatial filter. A nearly Gaussian-shaped beam with approximately equal beam propagation factor M² in the two orthogonal directions was obtained. The effects of the width of the spatial filter on the beam quality, power loss, and intensity distribution of the shaped beam were investigated. The M² factor in the unstable direction is reduced from 1.6 to 1.1 by optimum design, with a power loss of only 9.5%. The simulation results are in good agreement with the experimental results.
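Beam-propagation factors like the M² values quoted above are conventionally extracted from a caustic measurement by fitting the squared beam radius to a quadratic in z (the ISO 11146 approach). A minimal sketch, where the measurement values and the CO2-laser wavelength are synthetic assumptions, not data from the paper:

```python
import numpy as np

# Synthetic caustic: beam radius w(z) for a beam with M^2 = 1.6 (the
# pre-filtering value quoted above), waist radius 0.5 mm, assumed CO2 laser.
wavelength = 10.6e-6                             # m, assumption
w0, m2_true, z0 = 0.5e-3, 1.6, 0.0               # waist radius (m), M^2, waist position
z = np.linspace(-0.2, 0.2, 21)                   # measurement planes (m)
zr = np.pi * w0**2 / (m2_true * wavelength)      # effective Rayleigh range
w = w0 * np.sqrt(1.0 + ((z - z0) / zr) ** 2)     # "measured" beam radii

# ISO 11146 style retrieval: fit w^2 = A + B z + C z^2, then read off
# the waist and M^2 from the quadratic coefficients.
C, B, A = np.polyfit(z, w**2, 2)
w0_fit = np.sqrt(A - B**2 / (4 * C))             # fitted waist radius
m2_fit = (np.pi / wavelength) * np.sqrt(A * C - B**2 / 4)
print(f"fitted w0 = {w0_fit * 1e3:.3f} mm, M^2 = {m2_fit:.2f}")
```

On noiseless synthetic data the fit recovers the input values exactly; with real caustic measurements the same quadratic fit averages over measurement noise.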
8. Method and apparatus for ophthalmological surgery
International Nuclear Information System (INIS)
The invention contemplates use of a scanning laser characterized by ultraviolet radiation to achieve controlled ablative photodecomposition of one or more selected regions of a cornea. Irradiated flux density and exposure time are so controlled as to achieve desired depth of the ablation, which is a local sculpturing step, and the scanning action is coordinated to achieve desired ultimate surface change in the cornea. The scanning may be so controlled as to change the front surface of the cornea from a greater to a lesser spherical curvature, or from a lesser to a greater spherical curvature, thus effecting reduction in a myopic or in a hyperopic condition, without resort to a contact or other corrective auxiliary lens technique, in that the cornea becomes the corrective lens. The scanning may also be so controlled as to reduce astigmatism and to perform the precise incisions of a keratotomy. Still further, the scanning may be so controlled as to excise corneal tissue uniformly over a precisely controlled area of the cornea for precision accommodation of a corneal transplant
9. A practical guide to handling laser diode beams
CERN Document Server
Sun, Haiyin
2015-01-01
This book offers the reader a practical guide to the control and characterization of laser diode beams. Laser diodes are the most widely used lasers, accounting for 50% of the global laser market. Correct handling of laser diode beams is the key to the successful use of laser diodes, and this requires an in-depth understanding of their unique properties. Following a short introduction to the working principles of laser diodes, the book describes the basics of laser diode beams and beam propagation, including Zemax modeling of a Gaussian beam propagating through a lens. The core of the book is concerned with laser diode beam manipulations: collimating and focusing, circularization and astigmatism correction, coupling into a single mode optical fiber, diffractive optics and beam shaping, and manipulation of multi transverse mode beams. The final chapter of the book covers beam characterization methods, describing the measurement of spatial and spectral properties, including wavelength and linewidth meas...
10. NO and NO2 emission ratios measured from in-use commercial aircraft during taxi and takeoff.
Science.gov (United States)
Herndon, Scott C; Shorter, Joanne H; Zahniser, Mark S; Nelson, David D; Jayne, John; Brown, Robert C; Miake-Lye, Richard C; Waitz, Ian; Silva, Phillip; Lanni, Thomas; Demerjian, Ken; Kolb, Charles E
2004-11-15
In August 2001, the Aerodyne Mobile Laboratory simultaneously measured NO, NO2, and CO2 within 350 m of a taxiway and 550 m of a runway at John F. Kennedy Airport. The meteorological conditions were such that taxi and takeoff plumes from individual aircraft were clearly resolved against background levels. NO and NO2 concentrations were measured with 1 s time resolution using a dual tunable infrared laser differential absorption spectroscopy instrument, utilizing an astigmatic multipass Herriott cell. The CO2 measurements were also obtained at 1 s time resolution using a commercial non-dispersive infrared absorption instrument. Plumes were measured from over 30 individual planes, ranging from turbo props to jumbo jets. NOx emission indices were determined by examining the correlation between NOx (NO + NO2) and CO2 during the plume measurements. Several aircraft tail numbers were unambiguously identified, allowing those specific airframe/engine combinations to be determined. The resulting NOx emission indices from positively identified in-service operating airplanes are compared with the published International Civil Aviation Organization engine certification test database collected on new engines in certification test cells.
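The emission-index determination described above reduces to regressing excess NOx against excess CO2 across a plume and scaling the molar slope by the fuel's CO2 emission index. A hedged sketch with synthetic plume data: the slope, noise level, and the kerosene CO2 emission index of roughly 3160 g/kg are illustrative assumptions, not the study's measured values.

```python
import numpy as np

# Synthetic plume transect: excess mixing ratios above background.
rng = np.random.default_rng(1)
co2_ppm = np.linspace(0.0, 30.0, 50)                  # excess CO2 (ppm)
true_slope = 5e-4                                     # mol NOx per mol CO2, assumed
nox_ppm = true_slope * co2_ppm + rng.normal(0, 1e-4, co2_ppm.size)

# Correlation slope of NOx against CO2 (both in ppm, so the slope is molar).
slope, intercept = np.polyfit(co2_ppm, nox_ppm, 1)

# Scale the molar slope to grams of NOx (reported as NO2) per kg of fuel.
MW_NO2, MW_CO2 = 46.006, 44.009                       # g/mol
EI_CO2 = 3160.0                                       # g CO2 per kg kerosene, assumed
ei_nox = slope * (MW_NO2 / MW_CO2) * EI_CO2
print(f"fitted slope = {slope:.2e} mol/mol, EI(NOx) = {ei_nox:.2f} g/kg")
```

Using the correlation slope rather than raw concentrations makes the result insensitive to plume dilution, which is why individual taxi and takeoff plumes can be analyzed against varying backgrounds.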
11. Design of a Compact, Bimorph Deformable Mirror-Based Adaptive Optics Scanning Laser Ophthalmoscope.
Science.gov (United States)
He, Yi; Deng, Guohua; Wei, Ling; Li, Xiqi; Yang, Jinsheng; Shi, Guohua; Zhang, Yudong
2016-01-01
We have designed, constructed and tested an adaptive optics scanning laser ophthalmoscope (AOSLO) using a bimorph mirror. The simulated AOSLO system achieves diffraction-limited criterion through all the raster scanning fields (6.4 mm pupil, 3° × 3° on pupil). The bimorph mirror-based AOSLO corrected ocular aberrations in model eyes to less than 0.1 μm RMS wavefront error with a closed-loop bandwidth of a few Hz. Facilitated with a bimorph mirror at a stroke of ±15 μm with 35 elements and an aperture of 20 mm, the new AOSLO system has a size only half that of the first-generation AOSLO system. The significant increase in stroke allows for large ocular aberrations such as defocus in the range of ±600° and astigmatism in the range of ±200°, thereby fully exploiting the AO correcting capabilities for diseased human eyes in the future.
12. Optical characterization of solar furnace system using fixed geometry nonimaging focusing heliostat and secondary parabolic concentrator
Science.gov (United States)
Chong, Kok-Keong; Lim, Chuan-Yang; Keh, Wee-Liang; Fan, Jian-Hau; Rahman, Faidz Abdul
2011-10-01
A novel solar furnace system has been proposed, consisting of a nonimaging focusing heliostat and a smaller parabolic concentrator. In this configuration, the primary heliostat consists of an 11×11 array of concave mirrors with a total reflective area of 121 m2, while the secondary parabolic concentrator has a focal length of 30 cm. To simplify the design and reduce cost, a fixed geometry for the primary heliostat is adopted, omitting the requirement of continuous astigmatic correction throughout the year. The overall performance of the novel solar furnace configuration can be optimized if the heliostat's spinning axis is fixed in an orientation dependent on the latitude angle so that the annual variation of the incidence angle, which ranges from 33° to 57°, is the least. A case study of the novel solar furnace system has been performed using the ray-tracing method to simulate the solar flux distribution profile for two different target distances, i.e. 50 m and 100 m. The simulated results reveal that the maximum solar concentration ratio ranges from 20,530 suns to 26,074 suns for the target distance of 50 m, and from 40,366 suns to 43,297 suns for the target distance of 100 m.
13. Optical characterization of nonimaging focusing heliostat
Science.gov (United States)
Chong, Kok-Keong
2011-10-01
A novel nonimaging focusing heliostat consisting of many small movable element mirrors that can be dynamically maneuvered in a line-tilting manner has been proposed for astigmatic correction over a wide range of incidence angles from 0° to 70°. In this article, a comprehensive optical characterization of the new heliostat, with a total reflective area of 25 m2 and a slant range of 25 m, has been carried out using the ray-tracing method to analyze performance measures including solar concentration ratio, ratio of aberrated-to-ideal image area, intercept efficiency and spillage loss. The optical characterization of the heliostat in the application of a solar power tower system has embraced the cases of 1×1, 9×9, 11×11, 13×13, 15×15, 17×17 and 19×19 arrays of concave mirrors, provided that the total reflective area remains the same. The simulated results show that the maximum solar concentration ratio at a high incidence angle of 65° can be improved from 1.76 suns (single mirror) to 104.99 suns (9×9 mirrors), 155.93 suns (11×11 mirrors), 210.44 suns (13×13 mirrors), 246.21 suns (15×15 mirrors), 259.80 suns (17×17 mirrors) and 264.73 suns (19×19 mirrors).
14. Preliminary results of a high-resolution refractometer using the Hartmann-Shack wave-front sensor: part I
Directory of Open Access Journals (Sweden)
Carvalho Luis Alberto
2003-01-01
Full Text Available In this project we are developing an instrument for measuring the wave-front aberrations of the human eye using the Hartmann-Shack sensor. A laser source is directed towards the eye and its diffuse reflection at the retina generates an approximately spherical wave-front inside the eye. This wave-front travels through the different components of the eye (vitreous humor, lens, aqueous humor, and cornea) and then leaves the eye carrying information about the aberrations caused by these components. Outside the eye there is an optical system composed of an array of microlenses and a CCD camera. The wave-front hits the microlens array and forms a pattern of spots at the CCD plane. Image processing algorithms detect the center of mass of each spot and this information is used to calculate the exact wave-front surface using a least-squares approximation by Zernike polynomials. We describe here the details of the first phase of this project, i.e., the construction of the first generation of prototype instruments, and preliminary results for an artificial eye calibrated with different ametropias, i.e., myopia, hyperopia and astigmatism.
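The least-squares Zernike reconstruction described in the abstract above can be sketched as follows. The sampling grid, the small mode set (piston, defocus, and the two astigmatism terms), and the "measured" wavefront are synthetic assumptions for illustration; a real Hartmann-Shack pipeline fits many more modes and starts from spot-centroid slopes.

```python
import numpy as np

# Random sampling points on the unit pupil (sqrt gives uniform disk density).
rng = np.random.default_rng(2)
r = np.sqrt(rng.uniform(0, 1, 400))
t = rng.uniform(0, 2 * np.pi, 400)
x, y = r * np.cos(t), r * np.sin(t)

def zernike_basis(x, y):
    """A few unnormalized low-order Zernike modes evaluated at (x, y)."""
    r2 = x**2 + y**2
    return np.column_stack([
        np.ones_like(x),      # piston
        2.0 * r2 - 1.0,       # defocus (n=2, m=0)
        x**2 - y**2,          # astigmatism 0/90 (n=2, m=2)
        2.0 * x * y,          # astigmatism 45 (n=2, m=-2)
    ])

# Synthetic "measured" wavefront: known coefficients plus sensor noise.
true_coeffs = np.array([0.0, 0.5, -0.2, 0.1])       # microns, assumed
Z = zernike_basis(x, y)
wavefront = Z @ true_coeffs + rng.normal(0, 0.01, x.size)

# Least-squares fit recovers the aberration coefficients.
fit, *_ = np.linalg.lstsq(Z, wavefront, rcond=None)
print("recovered coefficients:", np.round(fit, 3))
```

With hundreds of sample points and modest noise, the fitted coefficients land very close to the true ones, which is why least-squares Zernike fitting is the standard reconstruction step for Hartmann-Shack data.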
15. Adaptive optics ophthalmologic systems using dual deformable mirrors
Energy Technology Data Exchange (ETDEWEB)
Jones, S; Olivier, S; Chen, D; Sadda, S; Joeres, S; Zawadzki, R; Werner, J S; Miller, D
2007-02-01
Adaptive optics (AO) has been increasingly combined with a variety of ophthalmic instruments over the last decade to provide cellular-level, in-vivo images of the eye. The use of MEMS deformable mirrors in these instruments has recently been demonstrated to reduce system size and cost while improving performance. However, currently available MEMS mirrors lack the required range of motion for correcting large ocular aberrations, such as defocus and astigmatism. In order to address this problem, we have developed an AO system architecture that uses two deformable mirrors in a woofer/tweeter arrangement, with a bimorph mirror as the woofer and a MEMS mirror as the tweeter. This setup provides several advantages, including an extended aberration correction range, due to the large stroke of the bimorph mirror, high order aberration correction using the MEMS mirror, and additionally, the ability to "focus" through the retina. This AO system architecture is currently being used in four instruments, including an Optical Coherence Tomography (OCT) system and a retinal flood-illuminated imaging system at the UC Davis Medical Center, a Scanning Laser Ophthalmoscope (SLO) at the Doheny Eye Institute, and an OCT system at Indiana University. The design, operation and evaluation of this type of AO system architecture will be presented.
16. Effects of short-term wear of silicone hydrogel contact lenses on refractive behaviour
Directory of Open Access Journals (Sweden)
W. D. H. Gillan
2012-12-01
Full Text Available Contact lens wear is known to induce change in both the cornea and the refractive state; often a shift towards increased myopia is noted. Historically, investigations into the effects of contact lenses on refractive state have often been incomplete in terms of statistical analysis, whereby nearest equivalent sphere is used or the spherical, cylindrical and axis components are analyzed in isolation. The aim of this study was to investigate the short-term effects of silicone hydrogel contact lenses on refractive behaviour. Seven volunteers agreed to wear a silicone hydrogel lens on one eye for a period of thirty minutes. Prior to lens wear, after ten minutes of lens wear and after thirty minutes of lens wear, 50 autorefractor measurements of refractive state were taken from each subject. Data were analyzed using multivariate statistical methods. Scatter plots and other multivariate statistics are used to show how lens wear influences refractive behaviour. The results of this study show that silicone hydrogel contact lenses do influence refractive behaviour in both a spherical and an antistigmatic (astigmatic) fashion. (S Afr Optom 2012 71(2): 78-85)
17. Transconjunctival sutureless intrascleral intraocular lens fixation using intrascleral tunnels guided with catheter and 30-gauge needles.
Science.gov (United States)
Takayama, Kohei; Akimoto, Masayuki; Taguchi, Hogara; Nakagawa, Satoko; Hiroi, Kano
2015-11-01
We invented a new method for fixing an intraocular lens (IOL) in the scleral tunnel without using a wide conjunctival incision. Modified bent catheter needles were used to pass the IOL haptics through the sclerotomy sites. The IOL haptics were inserted into 30-gauge (G) scleral tunnels guided by double 30-G needles piercing the sclera. All procedures were performed through the conjunctiva without a wide incision. The procedure does not require special forceps, trocars or fibrin glue, only catheter and 30-G needles. The aid of an assistant was not required to support the IOL haptic. The procedures were easily learnt based on our previous method. As with other transconjunctival sutureless surgeries, patients feel less discomfort and the conjunctiva can be conserved for future glaucoma surgery. Complications included two cases of vitreous haemorrhage (16.7%), and one case each of postoperative hypotony and iris capture (8.3%). Astigmatism induced by intraocular aberration was the same as we reported previously. Our method for fixing the IOL into the scleral tunnel is innovative, less expensive, less invasive and quick. This modified method is a good alternative for fixing IOL haptics into the sclera.
18. Aberration correction in double-pass amplifiers through the use of phase-conjugate mirrors and/or adaptive optics
Science.gov (United States)
Jackel, Steven M.; Moshe, Inon; Lavi, Raphael
2001-04-01
Correction of birefringence-induced effects (depolarization and bipolar focusing) was achieved in double-pass amplifiers using a Faraday rotator placed between the laser rod and the retroreflecting optic. A necessary condition was that each ray in the beam retraced its path through the amplifying medium. Retrace was limited by imperfect conjugate-beam fidelity and by nonreciprocal double-pass indices of refraction. We compare various retroreflectors: stimulated-Brillouin-scattering phase-conjugate mirrors (PCMs), PCMs with relay lenses to image the rod principal plane onto the PCM entrance aperture (IPCMs), IPCMs with external, adaptively adjusted, astigmatism-correcting cylindrical doublets, and all-adaptive-optics imaging variable-radius mirrors (IVRMs). Results with flashlamp-pumped, Nd:Cr:GSGG double-pass amplifiers show that average output power increased fivefold with a Faraday rotator plus a complete nonlinear-optics retroreflector package (IPCM + cylindrical zoom), and that this represents an 80% increase over the power achieved using just a PCM. Far better results are, however, achieved with an IVRM.
19. Long term refractive and structural outcome following laser treatment for zone 1 aggressive posterior retinopathy of prematurity
Directory of Open Access Journals (Sweden)
Parag K Shah
2014-01-01
Full Text Available Aim: To report the long term refractive, visual and structural outcome post-laser for zone 1 aggressive posterior retinopathy of prematurity (AP-ROP). Materials and Methods: A retrospective analysis was performed of the refractive status of premature infants with zone 1 AP-ROP who underwent laser photocoagulation from 2002 to 2007 and were followed up till 2013. Once the disease regressed, children were followed up six monthly with detailed examination regarding fixation pattern, ocular motility, nystagmus, detailed anterior segment and posterior segment examination, and refractive status including best corrected visual acuity. Results: Forty-eight eyes of 25 infants were included in the study. Average follow-up was 6.91 years (range, 3.8-9.5 years) after laser treatment. Astigmatism was noted in 43 out of 48 eyes (89.6%). Two eyes had simple myopia whereas three eyes had no refractive error. Conclusion: After successful laser treatment for zone 1 retinopathy of prematurity (ROP), 94% of our cases developed refractive error. Although most had a favorable anatomical and visual outcome, long-term follow-up even after a successful laser treatment in ROP was necessary.
20. A diffraction-limited scanning system providing broad spectral range for laser scanning microscopy
Science.gov (United States)
Yu, Jiun-Yann; Liao, Chien-Sheng; Zhuo, Zong-Yan; Huang, Chen-Han; Chui, Hsiang-Chen; Chu, Shi-Wei
2009-11-01
Diversified research interests in scanning laser microscopy nowadays require broadband capability of the optical system. Although an all-mirror-based optical design with a suitable metallic coating is appropriate for broad-spectrum applications from ultraviolet to terahertz, most researchers prefer lens-based scanning systems despite the drawbacks of a limited spectral range, ghost reflection, and chromatic aberration. One of the main concerns is that the geometrical aberration induced by off-axis incidence on spherical mirrors significantly deteriorates image resolution. Here, we demonstrate a novel geometrical design of a spherical-mirror-based scanning system in which off-axis aberrations, both astigmatism and coma, are compensated to reach diffraction-limited performance. We have numerically simulated and experimentally verified that this scanning system meets the Maréchal condition and provides a high Strehl ratio within a 3°×3° scanning area. Moreover, we demonstrate second-harmonic-generation imaging from starch with our new design. A greatly improved resolution compared to the conventional mirror-based system is confirmed. This scanning system will be ideal for high-resolution linear/nonlinear laser scanning microscopy, ophthalmoscopic applications, and precision fabrication.
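The Maréchal condition and Strehl ratio mentioned in this abstract are linked by a standard optics approximation (general textbook material, not taken from the paper itself): diffraction-limited performance is conventionally defined as a Strehl ratio of at least 0.8, reached when the RMS wavefront error stays below roughly λ/14. A minimal sketch:

```python
import math

def strehl_ratio(rms_wavefront_error, wavelength):
    """Extended Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)^2)."""
    phase_rms = 2.0 * math.pi * rms_wavefront_error / wavelength
    return math.exp(-phase_rms ** 2)

# Marechal condition: S >= 0.8 is the usual diffraction-limited threshold,
# corresponding to an RMS wavefront error of about lambda/14.
wavelength = 633e-9            # HeNe wavelength, m (a common test wavelength)
sigma = wavelength / 14.0      # RMS wavefront error right at the threshold
print(f"{strehl_ratio(sigma, wavelength):.3f}")  # prints 0.818
```

The exponential approximation holds well only for small aberrations (S above roughly 0.1), which is exactly the regime a near-diffraction-limited scanner operates in.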
1. Thin-disk laser multi-pass amplifier
CERN Document Server
Schuhmann, K; Graf, T; Hänsch, T W; Kirch, K; Kottmann, F; Pohl, R; Taqqu, D; Voß, A; Weichelt, B; Antognini, A
2015-01-01
In the context of the Lamb shift measurement in muonic helium we developed a thin-disk laser composed of a Q-switched oscillator and a multi-pass amplifier delivering pulses of 150 mJ at a pulse duration of 100 ns. Its peculiar requirements are stochastic trigger and short delay time (< 500 ns) between trigger and optical output. The concept of the thin-disk laser allows for energy and power scaling with high efficiency. However, the single-pass gain is small (about 1.2). Hence a multi-pass scheme with precise mode matching for large beam waists (w = 2 mm) is required. Instead of using the standard 4f design, we have developed a multi-pass amplifier with a beam propagation insensitive to thermal lens effects and misalignments. The beam propagation is equivalent to multiple roundtrips in an optically stable resonator. To support the propagation we used an array of 2 x 8 individually adjustable plane mirrors. Astigmatism has been minimized by a compact mirror placement. Precise alignment of the kinematic arra...
2. OPHTHALMOLOGIC ABNORMALITIES IN CHILDREN WITH IMPAIRED HEARING
Directory of Open Access Journals (Sweden)
Inderjit
2014-02-01
Full Text Available AIM: To determine the nature of ophthalmologic abnormalities in children with severe and profound grades of hearing impairment and to treat any visual impairment at the earliest. MATERIAL AND METHODS: The study was conducted on 100 children in the age group of 5-14 years with severe and profound hearing loss visiting the outpatient department of Ram Lal Eye and ENT Hospital, Govt. Medical College Amritsar, who were subjected to detailed ophthalmological examination. RESULTS: Of the 100 children aged 5-14 years with hearing impairment enrolled for the study, 68 had profound and 32 had severe hearing loss. Visual disorders were found to be as high as 71%. The highest percentage was seen in children aged 7 years. The majority of them (50%) had refractive error. Of these 50 children, 28 (56%) had myopia, 10 (20%) hypermetropia and 12 (24%) astigmatism. The other ophthalmic abnormalities in our study were conjunctivitis 14 (19.71%), fundus abnormalities and squint 11 (15.49%), blepharitis 5 (7.04%), vitamin A deficiency 6 (8.04%), amblyopia 8 (11.26%), pupil disorder 3 (4.22%), cataract 3 (4.22%) and heterochromia iridis 7 (9.85%). CONCLUSION: The high prevalence of ophthalmic abnormalities in deaf children mandates screening them for possible ophthalmic abnormalities. Early diagnosis and correction of visual disturbances would go a long way in the social and professional performance of these children.
3. Higher order aberrations in amblyopic children and their role in refractory amblyopia
Directory of Open Access Journals (Sweden)
Arnaldo Dias-Santos
2014-12-01
Full Text Available Objective: Some studies have hypothesized that an unfavourable higher order aberrometric profile could act as an amblyogenic mechanism and may be responsible for some amblyopic cases that are refractory to conventional treatment or cases of "idiopathic" amblyopia. This study compared the aberrometric profile in amblyopic children to that of children with normal visual development and compared the aberrometric profile in corrected amblyopic eyes and refractory amblyopic eyes with that of healthy eyes. Methods: Cross-sectional study with three groups of children – the CA group (22 eyes of 11 children with unilateral corrected amblyopia), the RA group (24 eyes of 13 children with unilateral refractory amblyopia) and the C group (28 eyes of 14 children with normal visual development). Higher order aberrations were evaluated using an OPD-Scan III (NIDEK). Comparisons of the aberrometric profile were made between these groups as well as between the amblyopic and healthy eyes within the CA and RA groups. Results: Higher order aberrations with greater impact in visual quality were not significantly higher in the CA and RA groups when compared with the C group. Moreover, there were no statistically significant differences in the higher order aberrometric profile between the amblyopic and healthy eyes within the CA and RA groups. Conclusions: Contrary to lower order aberrations (e.g., myopia, hyperopia, primary astigmatism), higher order aberrations do not seem to be involved in the etiopathogenesis of amblyopia. Therefore, these are likely not the cause of most cases of refractory amblyopia.
4. Microfocusing at the PG1 beamline at FLASH
Energy Technology Data Exchange (ETDEWEB)
Dziarzhytski, Siarhei; Gerasimova, Natalia; Goderich, Rene; Mey, Tobias; Reininger, Ruben; Rübhausen, Michael; Siewert, Frank; Weigelt, Holger; Brenner, Günter
2016-01-01
The Kirkpatrick–Baez (KB) refocusing mirror system installed at the PG1 branch of the plane-grating monochromator beamline at the soft X-ray/XUV free-electron laser in Hamburg (FLASH) is designed to provide tight aberration-free focusing down to 4 µm × 6 µm full width at half-maximum (FWHM) on the sample. Such a focal spot size is mandatory to achieve ultimate resolution and to guarantee best performance of the vacuum-ultraviolet (VUV) off-axis parabolic double-monochromator Raman spectrometer permanently installed at the PG1 beamline as an experimental end-station. The vertical beam size on the sample of the Raman spectrometer, which operates without an entrance slit, defines and limits the energy resolution of the instrument, which has an unprecedented design value of 2 meV for photon energies below 70 eV and about 15 meV for higher energies up to 200 eV. In order to reach the designed focal spot size of 4 µm FWHM (vertically) and to hold the highest spectrometer resolution, special fully motorized in-vacuum manipulators for the KB mirror holders have been developed, and the optics have been aligned employing wavefront-sensing techniques as well as ablative imprint analysis. Aberrations like astigmatism were minimized. In this article the design and layout of the KB mirror manipulators, the alignment procedure, and microfocus optimization results are presented.
5. How surfactants influence evaporation-driven flows
Science.gov (United States)
Liepelt, Robert; Marin, Alvaro; Rossi, Massimiliano; Kähler, Christian J.
2014-11-01
Capillary flows appear spontaneously in sessile evaporating drops and give rise to particle accumulation around the contact lines, commonly known as coffee-stain effect (Deegan et al., Nature, 1997). On the other hand, out-of-equilibrium thermal effects may induce Marangoni flows in the droplet's surface that play an important role in the flow patterns and in the deposits left on the substrate. Some authors have argued that contamination or the presence of surfactants might reduce or eventually totally annul the Marangoni flow (Hu & Larson, J. Phys. Chem. B, 2006). On the contrary, others have shown an enhancement of the reverse surface flow (Sempels et al., Nat. Commun., 2012). In this work, we employ Astigmatic Particle Tracking Velocimetry (APTV) to obtain the 3D3C evaporation-driven flow in both bulk and droplet's surface, using surfactants of different ionic characters and solubility. Our conclusions lead to a complex scenario in which different surfactants and concentrations yield very different surface-flow patterns, which eventually might influence the colloidal deposition patterns.
6. James Webb Space Telescope Optical Simulation Testbed II. Design of a Three-Lens Anastigmat Telescope Simulator
CERN Document Server
Choquet, Élodie; N'Diaye, Mamadou; Perrin, Marshall D; Soummer, Rémi
2014-01-01
The James Webb Space Telescope (JWST) Optical Simulation Testbed (JOST) is a tabletop experiment designed to reproduce the main aspects of wavefront sensing and control (WFSC) for JWST. To replicate the key optical physics of JWST's three-mirror anastigmat (TMA) design at optical wavelengths we have developed a three-lens anastigmat optical system. This design uses custom lenses (plano-convex, plano-concave, and bi-convex) with fourth-order aspheric terms on powered surfaces to deliver the equivalent image quality and sampling of JWST NIRCam at the WFSC wavelength (633 nm, versus JWST's 2.12 µm). For active control, in addition to the segmented primary mirror simulator, JOST reproduces the secondary mirror alignment modes with five degrees of freedom. We present the testbed requirements and its optical and optomechanical design. We study the linearity of the main aberration modes (focus, astigmatism, coma) both as a function of field point and level of misalignments of the secondary mirror. We find that t...
7. New data postprocessing for e-beam projection lithography
Science.gov (United States)
Okamoto, Kazuya; Kamijo, Koichi; Kojima, Shinichi; Minami, Hideyuki; Okino, Teruaki
2001-08-01
In electron beam projection lithography (EPL), one of the most crucial tasks is to develop a data post-processing system, namely, a specific tool to expose a faithful pattern for every subfield on the wafer based on the pattern layout data. This system includes two basic flows. The 1st flow is common for reticle fabrication, and the 2nd flow is unique for EPL. During the 2nd flow, based on the LSI pattern data, electron-optics space-charge effect correction will be automatically and rapidly executed and output to the EPL system in order to adjust parameters such as focus, magnification, rotation and astigmatism. In addition, this system should perform such tasks as segmentation of subfields (including complementary division), arrangement of stripes and reticlets, and alignment mark insertion. For proximity effect correction, we will first use pattern shape modulation. Shape modification at stitching boundaries is also investigated. In summary, to achieve conformable EPL delivery to customers, a new data post-processing system is developed in collaboration with some suppliers.
8. Recurrent duplications of 17q12 associated with variable phenotypes.
Science.gov (United States)
Mitchell, Elyse; Douglas, Andrew; Kjaegaard, Susanne; Callewaert, Bert; Vanlander, Arnaud; Janssens, Sandra; Yuen, Amy Lawson; Skinner, Cindy; Failla, Pinella; Alberti, Antonino; Avola, Emanuela; Fichera, Marco; Kibaek, Maria; Digilio, Maria C; Hannibal, Mark C; den Hollander, Nicolette S; Bizzarri, Veronica; Renieri, Alessandra; Mencarelli, Maria Antonietta; Fitzgerald, Tomas; Piazzolla, Serena; van Oudenhove, Elke; Romano, Corrado; Schwartz, Charles; Eichler, Evan E; Slavotinek, Anne; Escobar, Luis; Rajan, Diana; Crolla, John; Carter, Nigel; Hodge, Jennelle C; Mefford, Heather C
2015-12-01
The ability to identify the clinical nature of the recurrent duplication of chromosome 17q12 has been limited by its rarity and the diverse range of phenotypes associated with this genomic change. In order to further define the clinical features of affected patients, detailed clinical information was collected in the largest series to date (30 patients and 2 of their siblings) through a multi-institutional collaborative effort. The majority of patients presented with developmental delays varying from mild to severe. Though dysmorphic features were commonly reported, patients do not have consistent and recognizable features. Cardiac, ophthalmologic, growth, behavioral, and other abnormalities were each present in a subset of patients. The newly associated features potentially resulting from 17q12 duplication include height and weight above the 95th percentile, cataracts, microphthalmia, coloboma, astigmatism, tracheomalacia, cutaneous mosaicism, pectus excavatum, scoliosis, hypermobility, hypospadias, diverticulum of Kommerell, pyloric stenosis, and pseudohypoparathyroidism. The majority of duplications were inherited, with some carrier parents reporting learning disabilities or microcephaly. We identified additional, potentially contributory copy number changes in a subset of patients, including one patient each with 16p11.2 deletion and 15q13.3 deletion. Our data further define and expand the clinical spectrum associated with duplications of 17q12 and provide support for the role of genomic modifiers contributing to phenotypic variability. PMID:26420380
9. Analytic PSF Correction for Gravitational Flexion Studies
CERN Document Server
Levinson, Rebecca Sobel
2013-01-01
Given a galaxy image, one cannot simply measure its flexion. An image's spin-one and spin-three shape properties, typically associated with F- and G-flexion, are actually complicated functions of the galaxy's intrinsic shape and the telescope's PSF, in addition to the lensing properties. The same is true for shear. In this work we create a completely analytic mapping from apparent measured galaxy flexions to gravitational flexions by (1) creating simple models for a lensed galaxy and for a PSF whose distortions are dominated by atmospheric smearing and optical aberrations, (2) convolving the two models, and (3) comparing the pre- and post-convolved flexion-like shape variations of the final image. For completeness, we do the same for shear. As expected, telescope astigmatism, coma, and trefoil can corrupt measurements of shear, F-flexion, and G-flexion, especially for small galaxies. We additionally find that PSF size dilutes the flexion signal more rapidly than the shear signal. Moreover, mixing between shears, ...
10. Optimized SESAMs for kilowatt-level ultrafast lasers.
Science.gov (United States)
Diebold, A; Zengerle, T; Alfieri, C G E; Schriber, C; Emaury, F; Mangold, M; Hoffmann, M; Saraceno, C J; Golling, M; Follman, D; Cole, G D; Aspelmeyer, M; Südmeyer, T; Keller, U
2016-05-16
We present a thorough investigation of surface deformation and thermal properties of high-damage-threshold large-area semiconductor saturable absorber mirrors (SESAMs) designed for kilowatt average power laser oscillators. We compare temperature rise, thermal lensing, and surface deformation of standard SESAM samples and substrate-removed SESAMs contacted using different techniques. We demonstrate that in all cases the thermal effects scale linearly with the absorbed power, but the contacting technique critically affects the strength of the temperature rise and the thermal lens of the SESAMs (i.e. the slope of the linear change). Our best SESAMs are fabricated using a novel substrate-transfer direct bonding technique and show excellent surface flatness (with non-measurable radii of curvature (ROC), compared to astigmatic ROCs of up to 10 m for standard SESAMs), order-of-magnitude improved heat removal, and negligible deformation with absorbed power. This is achieved without altering the saturation behavior or the recovery parameters of the samples. These SESAMs will be a key enabling component for the next generation of kilowatt-level ultrafast oscillators. PMID:27409874
11. Optical control of the Advanced Technology Solar Telescope.
Science.gov (United States)
Upton, Robert
2006-08-10
The Advanced Technology Solar Telescope (ATST) is an off-axis Gregorian astronomical telescope design. The ATST is expected to be subject to thermal and gravitational effects that result in misalignments of its mirrors and warping of its primary mirror. These effects require active, closed-loop correction to maintain its as-designed diffraction-limited optical performance. The simulation and modeling of the ATST with a closed-loop correction strategy are presented. The correction strategy is derived from the linear mathematical properties of two Jacobian, or influence, matrices that map the ATST rigid-body (RB) misalignments and primary mirror figure errors to wavefront sensor (WFS) measurements. The two Jacobian matrices also quantify the sensitivities of the ATST to RB and primary mirror figure perturbations. The modeled active correction strategy results in a decrease of the rms wavefront error averaged over the field of view (FOV) from 500 to 19 nm, subject to 10 nm rms WFS noise. This result is obtained utilizing nine WFSs distributed in the FOV with a 300 nm rms astigmatism figure error on the primary mirror. Correction of the ATST RB perturbations is demonstrated for an optimum subset of three WFSs with corrections improving the ATST rms wavefront error from 340 to 17.8 nm. In addition to the active correction of the ATST, an analytically robust sensitivity analysis that can be generally extended to a wider class of optical systems is presented. PMID:16926876
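The correction strategy this abstract describes, inverting an influence (Jacobian) matrix that maps rigid-body perturbations to wavefront-sensor readings, is at its core a linear least-squares problem. A minimal sketch with invented dimensions and values (illustrative only, not ATST data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical influence (Jacobian) matrix: maps 5 rigid-body
# perturbation modes to 20 wavefront-sensor (WFS) measurements.
J = rng.normal(size=(20, 5))

true_perturbation = np.array([0.3, -0.1, 0.05, 0.2, -0.4])
wfs = J @ true_perturbation + 0.001 * rng.normal(size=20)  # noisy WFS reading

# Least-squares estimate via the Moore-Penrose pseudoinverse; the
# negative of the estimate is applied as the correction command.
estimate = np.linalg.pinv(J) @ wfs
correction = -estimate

# After correction, the remaining WFS signal is driven by noise only.
residual = J @ (true_perturbation + correction)
print(np.abs(residual).max())  # small residual, set by the WFS noise floor
```

In practice the pseudoinverse is often regularized (e.g. by truncating small singular values) so that poorly sensed modes do not amplify WFS noise into spurious corrections.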
12. Impact of capillarity forces on the steady-state self-organization in the thin chromium film on glass under laser irradiation
International Nuclear Information System (INIS)
13. Canopy induced aberration correction in airborne electro-optical imaging systems
Science.gov (United States)
Harder, James A.; Sprague, Michaelene W.
2011-11-01
An increasing number of electro-optical systems are being used by pilots in tactical aircraft. These systems must therefore operate through the aircraft's canopy; unfortunately, the canopy functions as a less than ideal lens element in the electro-optical sensor's optical path. The canopy serves first and foremost as an aircraft structural component: considerations like minimizing the drag coefficient and the ability to survive bird strikes take precedence over achieving optimal optical characteristics. This paper describes how the authors characterized the optical characteristics of an aircraft canopy. Families of modulation transfer functions were generated for various viewing geometries through the canopy and for various electro-optical system entrance pupil diameters. These functions provided us with the means to significantly reduce the effect of the canopy "lens" on the performance of a representative electro-optical system, using an Astigmatic Corrector Lens. A comparison of the electro-optical system performance with and without correction is also presented.
14. Active Optics for high contrast imaging: Super smooth off-axis parabolas for ELTs XAO instruments
Science.gov (United States)
Hugot, Emmanuel; Laslandes, Marie; Ferrari, Marc; Dohlen, Kjetil; El hadi, Kacem
2011-09-01
In the context of direct imaging of exoplanets using XAO, the main limitations in images are due to residual quasi-static speckles induced by atmospheric phase residuals and instrumental static and quasi-static aberrations not corrected by AO: the post-coronagraphic image quality is directly linked to the power spectral density (PSD) of the optical train before the coronagraph. In this context, the potential of Stress Polishing has been demonstrated at LAM after the delivery of the three toric mirrors (TMs) for the VLT-SPHERE instrument. The extreme optical quality of such aspherical optics is obtained thanks to the spherical polishing of warped mirrors using full sized tools, avoiding the generation of high spatial frequency ripples due to classical sub-aperture tool marks. Furthermore, sub-nanometric roughnesses have been obtained thanks to a super smoothing method. Work is ongoing at LAM in order to improve this manufacturing method to cover a wide range of off-axis aspherics, with a reduction of the manufacturing time and cost. Smart warping structures are designed in order to bend the mirrors with a combination of focus, astigmatism and coma. This development will allow the stress polishing of supersmooth OAP for XAO optical relays improving the wavefront quality and in this way the high contrast level of future exoplanet imagers.
15. Features of propagation of the high-intensity femtosecond laser pulses in magnesium and sodium fluoride crystals
International Nuclear Information System (INIS)
The periodic filamentation patterns across and along laser channel tracks, induced by high-intensity femtosecond laser pulses in magnesium and sodium fluoride crystals, have been disclosed. The patterns are rationalized by a deterministic vectorial effect, the difference in propagation of linearly and circularly polarized laser pulses, and the appearance of orbital angular momentum of the light beams due to optical astigmatism. - Highlights: • The periodic pattern of multiple filamentation in the cross-section of the femtosecond laser channel in ionic crystals is shown and explained for the first time. • When the femtosecond laser beam is perpendicular to the optical axis of the anisotropic MgF2 crystal, the single filaments have a strictly periodic structure due to the change of pulse polarization induced by the crystal. • Lower efficiency of multiphoton ionization and CC luminescence takes place in the case of irradiation of MgF2 by circularly polarized femtosecond laser pulses. • Twisting of the femtosecond laser beam in an ionic crystal, which can have useful applications, is reported for the first time.
16. Visual acuity assessment in schoolchildren in the municipality of Herval d’Oeste, Santa Catarina state, Brazil
Directory of Open Access Journals (Sweden)
Rafaela Santini de Oliveira
2013-07-01
Full Text Available Objectives: To evaluate visual acuity through the application of a screening test; identify the prevalence of low vision; and provide proper management for it. Methods: A cross-sectional and quantitative study in which first-to-fifth grade students of two elementary schools in the municipality of Herval d’Oeste were evaluated in the second half of 2011, by means of a questionnaire with the following variables: gender, age, previous use of glasses, perception of their own vision, and application of the Snellen Test to assess visual acuity (VA). Students presenting VA<0.7 and signs and symptoms of ocular disorders were referred to an ophthalmologist. Results: The sample comprised 318 students: 158 (49.6%) males and 160 (50.3%) females, between 5 and 15 years old. Thirty of these students showed low visual acuity and were referred to eye care, and 24 children attended ophthalmic examinations - 19 (79.16%) needed optical correction. The most prevalent diagnoses were astigmatism, hyperopia, and myopia. Conclusion: The detection of low vision among schoolchildren through screening tests is an important task of health promotion and an effective strategy to prevent visual disorders, which can interfere with intellectual, psychological and social development. The effective implementation of programs and actions to promote health through the integration of health, education and community should be considered.
17. Outcomes of Sutureless Iris-Claw Lens Implantation
Science.gov (United States)
Nowomiejska, Katarzyna; Moneta-Wielgoś, Joanna; Jünemann, Anselm G. M.
2016-01-01
Purpose. To evaluate the indications, refraction, and visual and safety outcomes of iris-claw intraocular lens implanted retropupillary with a sutureless technique during primary or secondary operation. Methods. Retrospective study of a case series. The Haigis formula was used to calculate intraocular lens power. In all cases the wound was closed without suturing. Results. The study comprised 47 eyes. The mean follow-up time was 15.9 months (SD 12.2). The mean preoperative CDVA was 0.25 (SD 0.21). The final mean CDVA was 0.46 (SD 0.27). No hypotony or need for wound suturing was observed postoperatively. Mean postoperative refractive error was −0.27 Dsph (−3.87 Dsph to +2.85 Dsph; median 0.0, SD 1.28). The mean postoperative astigmatism was −1.82 Dcyl (min −0.25, max −5.5; median −1.25, SD 1.07). Postoperative complications were observed in 10 eyes. The most common complication was ovalization of the iris, which was observed in 8 eyes. The mean operation time was 35.9 min (min 11 min, max 79 min; median 34, SD 15.4). Conclusion. Retropupillary iris-claw intraocular lens (IOL) implantation with sutureless wound closing is an easy and fast method, ensuring good refractive outcome and a low risk of complication. The Haigis formula proved to be predictable in postoperative refraction. PMID:27642519
18. Scleral lens for keratoconus: technology update
Directory of Open Access Journals (Sweden)
Rathi VM
2015-10-01
Full Text Available Varsha M Rathi,1 Preeji S Mandathara,2 Mukesh Taneja,1 Srikanth Dumpati,1 Virender S Sangwan1 1L V Prasad Eye Institute, Hyderabad, India; 2School of Optometry and Vision Science, University of New South Wales, Kensington, NSW, Australia Abstract: Scleral lenses are large-diameter lenses which rest on the sclera, unlike conventional contact lenses, which rest on the cornea. These lenses are fitted so as not to touch the cornea, creating a space between the cornea and the lens, and are inserted into the eye after being filled with sterile isotonic fluid. Generally, scleral contact lenses are used for highly irregular astigmatism, as seen in various corneal ectatic diseases such as keratoconus and pellucid marginal degeneration, and/or as a liquid bandage in ocular surface disorders. In this article, we review the new developments that have taken place over the years in the field of scleral contact lenses with regard to new designs, materials, manufacturing technologies, and fitting strategies, particularly for keratoconus. Keywords: keratoconus, scleral lens, technology update, PROSE
19. Bifocal contact lenses: History, types, characteristics, and actual state and problems
Directory of Open Access Journals (Sweden)
Hiroshi Toshida
2008-07-01
Full Text Available Hiroshi Toshida, Kozo Takahashi, Kazushige Sado, Atsushi Kanai, Akira Murakami, Department of Ophthalmology, Juntendo University School of Medicine, Tokyo, Japan. Abstract: Since people who wear contact lenses (CL) often continue using them even when they develop presbyopia, there are growing expectations for bifocal CL. To clarify their current state and problems, the history, types, and characteristics of bifocal CL are summarized in this review. Bifocal CL have a long history of over 70 years and have recently achieved remarkable progress. However, there is still an impression that prescribing bifocal CL is not easy. It should also be remembered that bifocal CL have limits, including limited addition for near vision, as well as the effects of aging and eye diseases in the aged, such as dry eye, astigmatism, and cataract. Analysis of the long-term users of bifocal CL among our patients has revealed the disappearance of bifocal CL that gave unsatisfactory vision and poor contrast compared with those provided by other types of CL. Changing the prescription up to 3 times for lenses of the same brand may be appropriate. Lenses that provide poor contrast sensitivity, suffer from glare, or give unsatisfactory vision have been weeded out. Repeated replacement of products due to the emergence of improved or new products is to be expected. Keywords: bifocal contact lens, presbyopia, accommodation
20. Keratoconus: current perspectives
Directory of Open Access Journals (Sweden)
Vazirani J
2013-10-01
Full Text Available Jayesh Vazirani, Sayan BasuCornea and Anterior Segment Services, LV Prasad Eye Institute, Hyderabad, IndiaAbstract: Keratoconus is characterized by progressive corneal protrusion and thinning, leading to irregular astigmatism and impairment in visual function. The etiology and pathogenesis of the condition are not fully understood. However, significant strides have been made in early clinical detection of the disease, as well as towards providing optimal optical and surgical correction for improving the quality of vision in affected patients. The past two decades, in particular, have seen exciting new developments promising to alter the natural history of keratoconus in a favorable way for the first time. This comprehensive review focuses on analyzing the role of advanced imaging techniques in the diagnosis and treatment of keratoconus and evaluating the evidence supporting or refuting the efficacy of therapeutic advances for keratoconus, such as newer contact lens designs, collagen crosslinking, deep anterior lamellar keratoplasty, intracorneal ring segments, photorefractive keratectomy, and phakic intraocular lenses.Keywords: keratoconus, corneal topography, hydrops, collagen cross-linking, keratoplasty, contact lenses
1. Low-volume, fast response-time hollow silica waveguide gas cells for mid-IR spectroscopy.
Science.gov (United States)
Francis, Daniel; Hodgkinson, Jane; Livingstone, Beth; Black, Paul; Tatam, Ralph P
2016-09-01
Hollow silica waveguides (HSWs) are used to produce long path length, low-volume gas cells, and are demonstrated with quantum cascade laser spectroscopy. Absorption measurements are made using the intrapulse technique, which allows measurements to be made across a single laser pulse. Simultaneous laser light and gas coupling is achieved through the modification of commercially available gas fittings with low dead volume. Three HSW gas cell configurations with different path lengths and internal diameters are analyzed and compared with a 30 m path length astigmatic Herriott cell. Limit of detection measurements are made for the gas cells using methane at a wavelength of 7.82 μm. The lowest limit of detection was provided by the HSW cell with a bore diameter of 1000 μm and a path length of 5 m, and was measured to be 0.26 ppm, with a noise equivalent absorbance of 4.1×10^-4. The long-term stability of the HSW and Herriott cells is compared through analysis of the Allan-Werle variance of data collected over a 24 h period. The response times of the HSW and Herriott cells are measured to be 0.8 s and 36 s, respectively. PMID:27607251
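The Allan-Werle variance used above for the long-term stability comparison is straightforward to compute. A minimal sketch of the non-overlapping Allan variance for an evenly sampled concentration time series (the function name and sampling assumptions are mine, not from the paper):

```python
import numpy as np

def allan_variance(y, tau_samples):
    """Non-overlapping Allan variance of a time series for one
    averaging-window length, given in samples."""
    n_bins = len(y) // tau_samples
    # Average the series over consecutive, non-overlapping bins
    means = y[:n_bins * tau_samples].reshape(n_bins, tau_samples).mean(axis=1)
    # Half the mean squared difference of adjacent bin averages
    return 0.5 * np.mean(np.diff(means) ** 2)

# For white noise, the Allan variance falls as 1/tau, so longer
# averaging windows give smaller values
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 100_000)
av_short = allan_variance(noise, 10)
av_long = allan_variance(noise, 1000)
print(av_short > av_long)
```

Plotting this quantity over a range of window lengths (the Allan plot) is how the 24 h stability of the two cells would be compared.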
2. Spatial shaping for generating arbitrary optical dipole traps for ultracold degenerate gases
Energy Technology Data Exchange (ETDEWEB)
Lee, Jeffrey G., E-mail: jglee@umd.edu [Joint Quantum Institute, University of Maryland, College Park, Maryland 20742 (United States); Institute for Physical Science and Technology, University of Maryland, College Park, Maryland 20742 (United States); Hill, W. T., E-mail: wth@umd.edu [Joint Quantum Institute, University of Maryland, College Park, Maryland 20742 (United States); Institute for Physical Science and Technology, University of Maryland, College Park, Maryland 20742 (United States); Department of Physics, University of Maryland, College Park, Maryland 20742 (United States)
2014-10-15
We present two spatial-shaping approaches – phase and amplitude – for creating two-dimensional optical dipole potentials for ultracold neutral atoms. When combined with an attractive or repulsive Gaussian sheet formed by an astigmatically focused beam, atoms are trapped in three dimensions resulting in planar confinement with an arbitrary network of potentials – a free-space atom chip. The first approach utilizes an adaptation of the generalized phase-contrast technique to convert a phase structure, embedded in a beam after traversing a phase mask, into an identical intensity profile in the image plane. Phase masks, and a requisite phase-contrast filter, can be chemically etched into optical material (e.g., fused silica) or implemented with spatial light modulators; etching provides the highest quality while spatial light modulators enable prototyping and real-time structure modification. This approach was demonstrated on an ensemble of thermal atoms. Amplitude shaping is possible when the potential structure is made as an opaque mask in the path of a dipole trap beam, followed by imaging the shadow onto the plane of the atoms. While much more lossy, this very simple and inexpensive approach can produce dipole potentials suitable for containing degenerate gases. High-quality amplitude masks can be produced with standard photolithography techniques. Amplitude shaping was demonstrated on a Bose-Einstein condensate.
3. Spatial shaping for generating arbitrary optical dipole traps for ultracold degenerate gases
Science.gov (United States)
Lee, Jeffrey G.; Hill, W. T.
2014-10-01
We present two spatial-shaping approaches - phase and amplitude - for creating two-dimensional optical dipole potentials for ultracold neutral atoms. When combined with an attractive or repulsive Gaussian sheet formed by an astigmatically focused beam, atoms are trapped in three dimensions resulting in planar confinement with an arbitrary network of potentials - a free-space atom chip. The first approach utilizes an adaptation of the generalized phase-contrast technique to convert a phase structure, embedded in a beam after traversing a phase mask, into an identical intensity profile in the image plane. Phase masks, and a requisite phase-contrast filter, can be chemically etched into optical material (e.g., fused silica) or implemented with spatial light modulators; etching provides the highest quality while spatial light modulators enable prototyping and real-time structure modification. This approach was demonstrated on an ensemble of thermal atoms. Amplitude shaping is possible when the potential structure is made as an opaque mask in the path of a dipole trap beam, followed by imaging the shadow onto the plane of the atoms. While much more lossy, this very simple and inexpensive approach can produce dipole potentials suitable for containing degenerate gases. High-quality amplitude masks can be produced with standard photolithography techniques. Amplitude shaping was demonstrated on a Bose-Einstein condensate.
4. Prevalence of Vitamin-A deficiency AND refractive errors in primary school-going children
Directory of Open Access Journals (Sweden)
Rupali Darpan Maheshgauri
2016-03-01
To assess refractive errors in primary school-going children, and to critically analyze the need for Vitamin A supplementation for children of low socioeconomic strata. Methods: Students from 2 primary schools were examined. Visual acuity was tested using Snellen's chart, a pictogram chart, and the Landolt C chart, with detailed anterior and posterior segment examination using a binocular loupe, ophthalmoscope, and streak retinoscope. Results: A total of 560 children aged 3 to 13 years were screened from the 2 primary schools. A statistically significant difference was found between the age of the study subjects and the presence of refractive errors. Myopia (29.64%) was the major refractive error, followed by astigmatism (4.28%), hypermetropia (3.25%), and amblyopia (1.25%). Conclusion: It was observed that many children had high refractive error and were undiagnosed. The possible reason could be ignorance on the part of teachers and parents, even when the children have vision-related complaints; children in the younger age group also lack the acumen to judge whether they can see clearly or not. Prevalence of Vitamin A deficiency appears reduced in urban areas. [Natl J Med Res 2016; 6(1): 23-27]
5. Tearing of thin spherical shells adhered to equally curved rigid substrates
Science.gov (United States)
McMahan, Connor; Lee, Anna; Marthelot, Joel; Reis, Pedro
Lasik (Laser-Assisted in Situ Keratomileusis) eye surgery involves the tearing of the corneal epithelium to remodel the corneal stroma for corrections such as myopia, hyperopia and astigmatism. One issue with this procedure is that during the tearing of the corneal epithelium, if the two propagating cracks coalesce, a flap detaches which could cause significant complications in the recovery of the patient. We seek to gain a predictive physical understanding of this process by performing precision desktop experiments on an analogue model system. First, thin spherical shells of nearly uniform thickness are fabricated by the coating of hemispherical molds with a polymer solution, which upon curing yields an elastic and brittle structure. We then create two notches near the equator of the shell and tear a flap by pulling tangentially to the spherical substrate, towards its pole. The resulting fracture paths are characterized by high-resolution 3D digital scanning. Our primary focus is on establishing how the positive Gaussian curvature of the system affects the path of the crack tip. Our results are directly contrasted against previous studies on systems with zero Gaussian curvature, where films were torn from planar and cylindrical substrates.
6. Data processing for fabrication of GMT primary segments: raw data to final surface maps
Science.gov (United States)
Tuell, Michael T.; Hubler, William; Martin, Hubert M.; West, Steven C.; Zhou, Ping
2014-07-01
The Giant Magellan Telescope (GMT) primary mirror is a 25 meter f/0.7 surface composed of seven 8.4 meter circular segments, six of which are identical off-axis segments. The fabrication and testing challenges with these severely aspheric segments (about 14 mm of aspheric departure, mostly astigmatism) are well documented. Converting the raw phase data to useful surface maps involves many steps and compensations. They include large corrections for: image distortion from the off-axis null test; misalignment of the null test; departure from the ideal support forces; and temperature gradients in the mirror. The final correction simulates the active-optics correction that will be made at the telescope. Data are collected and phase maps are computed in 4D Technology's 4Sight™ software. The data are saved to a .h5 (HDF5) file and imported into MATLAB® for further analysis. A semi-automated data pipeline has been developed to reduce the analysis time as well as the potential for error. As each operation is performed, results and analysis parameters are appended to a data file, so in the end the history of data processing is embedded in the file. A report and a spreadsheet are automatically generated to display the final statistics as well as how each compensation term varied during data acquisition. This gives us valuable statistics and provides a quick starting point for investigating atypical results.
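The history-embedding idea described above, where each compensation appends its results and parameters to the data file, can be sketched in miniature. The step names and helper functions below are hypothetical stand-ins, and plain Python structures are used rather than the HDF5/MATLAB pipeline itself:

```python
import numpy as np

def apply_step(record, name, func, **params):
    """Apply one compensation step to the surface map and append the
    step name and its parameters to the embedded processing history."""
    record["surface"] = func(record["surface"], **params)
    record["history"].append({"step": name, "params": params})
    return record

def remove_piston(surface):
    # Subtract the mean: the simplest possible stand-in compensation
    return surface - surface.mean()

def scale_distortion(surface, factor):
    # Toy stand-in for a null-test distortion correction
    return surface * factor

record = {"surface": np.array([1.0, 2.0, 3.0]), "history": []}
record = apply_step(record, "piston", remove_piston)
record = apply_step(record, "distortion", scale_distortion, factor=0.5)
print([h["step"] for h in record["history"]])  # ['piston', 'distortion']
```

In the real pipeline the `record` would be an HDF5 file, so the full provenance of each surface map travels with the data.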
7. Evaluation of corneal biomechanical properties following penetrating keratoplasty using ocular response analyzer
Directory of Open Access Journals (Sweden)
Vanathi Murugesan
2014-01-01
Full Text Available Purpose: To evaluate corneal biomechanical properties in eyes that have undergone penetrating keratoplasty (PK). Materials and Methods: Retrospective observational study in a tertiary care centre. Data recorded included ocular response analyzer (ORA) values of normal and post-keratoplasty eyes [corneal hysteresis (CH), corneal resistance factor (CRF), Goldmann-correlated intraocular pressure (IOPg), and cornea-compensated intraocular pressure (IOPcc)], corneal topography, and central corneal thickness (CCT). Wilcoxon signed rank test was used to analyze the difference in ORA parameters between post-PK eyes and normal eyes. Correlation between parameters was evaluated with Spearman's rho correlation. Results: The ORA study of 100 eyes of 50 normal subjects and 54 post-keratoplasty eyes of 51 patients showed CH of 8.340 ± 1.85 and 9.923 ± 1.558, and CRF of 8.846 ± 2.39 and 9.577 ± 1.631, in post-PK eyes and normal eyes, respectively. CH and CRF did not correlate with post-keratoplasty astigmatism (P = 0.311 and 0.276, respectively), while a significant correlation was observed with IOPg (P = 0.004) and IOPcc (P < 0.001). Conclusion: Biomechanical parameters were significantly lower in post-keratoplasty eyes than in normal eyes, with a significant correlation with higher IOP.
8. Keratoconus: Overview and update on treatment
Directory of Open Access Journals (Sweden)
2010-01-01
Full Text Available Keratoconus is a non-inflammatory, progressive thinning process of the cornea. It is a relatively common disorder of unknown etiology that can involve each layer of the cornea and often leads to high myopia and astigmatism. Computer-assisted corneal topography devices are valuable diagnostic tools for the diagnosis of subclinical keratoconus and for tracking the progression of the disease. The traditional conservative management of keratoconus begins with spectacle correction and contact lenses. Several newer, more invasive, treatments are currently available, especially for contact lens-intolerant patients. Intrastromal corneal ring segments can be used to reshape the abnormal cornea to improve the topographic abnormalities and visual acuity. Phakic intraocular lenses such as iris-fixated, angle-supported, posterior chamber implantable collamer and toric lenses are additional valuable options for the correction of refractive error. Corneal cross-linking is a relatively new method of stiffening the cornea to halt the progression of the disease. The future management of keratoconus will most likely incorporate multiple treatment modalities, both simultaneous and sequential, for the prevention and treatment of this disease.
9. Theoretical and experimental investigation of design for multioptical-axis freeform progressive addition lenses
Science.gov (United States)
Xiang, HuaZhong; Chen, JiaBi; Zhu, TianFen; Wei, YeFei; Fu, DongXiang
2015-11-01
A freeform progressive addition lens (PAL) provides a good solution to correct presbyopia and prevent juvenile myopia by distributing optical power among the distance, intermediate, and near zones, and is increasingly adopted in optometric practice. However, the conventional single-optical-axis system remains a limitation in the design of a PAL. This paper focuses on an approach for designing a freeform PAL. A multioptical-axis system based on real viewing conditions of the eyes is employed for the representation of the freeform surface. We filled small pupils in the intermediate zone as a progressive corridor, and the distance- and near-vision portions were defined as standard spherical surfaces delimited by quadratic curves. Three freeform PALs with a spherical surface as the front side and a freeform surface as the backside were designed. We demonstrate the fabrication and measurement technologies for the PAL surface using computer numerical control machine tools from Schneider Smart and a Visionix VM-2000 Lens Power Mapper. Surface power and astigmatic values were obtained. Preliminary results showed that this approach to design and fabrication is helpful for advancing design-procedure optimization and mass production of PALs in optometry.
10. Deep anterior lamellar keratoplasty in keratoconus
Directory of Open Access Journals (Sweden)
Nikolić Ljubiša
2011-01-01
Full Text Available Introduction. Deep anterior lamellar keratoplasty (DALK) is intended for the surgical treatment of corneal pathology without involvement of the endothelium. Sparing the healthy host endothelium for a lifetime is of utmost importance in young patients; therefore, keratoconus is among the main indications for DALK. Outline of Cases. Two men, 22 and 28 years of age, underwent DALK for the treatment of progressive keratoconus with low visual acuity, impossible to correct with gas-permeable contact lenses due to the extreme conical protrusion of the cornea. Baring of Descemet's membrane was achieved by lamellar dissection and peeling off the stroma. An 8.5 mm graft without the endothelium was sutured into an 8.0 mm bed. Both grafts remained clear and attached, without either ocular surface pathology or problems arising from sutures. The best corrected visual acuity was 20/25 and 20/40, with astigmatism of 2.5 and 3.0 diopters, respectively. The follow-up was one year. Conclusion. This is the first presentation of DALK in our literature. The restoration of corneal transparency and stability, with sparing of the host endothelium, has put DALK among successful corneal transplantation procedures. Together with Descemet stripping endothelial keratoplasty, which already accounts for almost a half of all our keratoplasties, it offers an alternative to penetrating keratoplasty.
11. Influence of Misalignment on High-Order Aberration Correction for Normal Human Eyes
Institute of Scientific and Technical Information of China (English)
ZHAO Hao-Xin; XU Bing; XUE Li-Xia; DAI Yun; LIU Qian; RAO Xue-Jun
2008-01-01
Although a compensation device can correct aberrations of human eyes, the effect will be degraded by its misalignment, especially for high-order aberration correction. We calculate the positioning tolerance of the correction device for high-order aberrations, i.e., the degree of misalignment within which the correcting effect remains better than low-order aberration (defocus and astigmatism) correction. For a fixed misalignment within the positioning tolerance, we calculate the residual wavefront RMS aberration of the first-6 to first-35 terms along with the 3rd-5th terms of aberrations corrected, and the combined first-13 terms of aberrations are also studied under the same misalignment. However, the correction effect of high-order aberrations does not improve as more high-order terms are added under some misalignments; moreover, some simple combinations of terms can achieve results similar to complex combinations. These results suggest that it is unnecessary to correct too many high-order aberration terms, which are difficult to correct in practice, and they give confidence for correcting high-order aberrations outside the laboratory.
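The core effect, residual wavefront error growing with lateral misalignment of the corrector, can be illustrated numerically. This is a toy sketch only: the grid size, aberration coefficients, and shift-by-pixels misalignment model below are my assumptions, not the paper's method.

```python
import numpy as np

# Unit-pupil grid (toy resolution)
n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = x**2 + y**2
pupil = r2 <= 1.0

# Toy aberration: defocus plus astigmatism (unnormalized Zernike-like terms)
aberration = 0.5 * (2 * r2 - 1) + 0.3 * (x**2 - y**2)

def residual_rms(shift_pixels):
    """RMS wavefront error left inside the pupil after subtracting a
    laterally misaligned copy of the correction map."""
    correction = np.roll(aberration, shift_pixels, axis=1)
    res = (aberration - correction)[pupil]
    return np.sqrt(np.mean(res**2))

print(residual_rms(0))   # 0.0: a perfectly aligned corrector leaves no residual
print(residual_rms(8) < residual_rms(32))  # residual grows with misalignment
```

For smooth low-order terms the residual scales roughly with the shift times the wavefront slope, which is why the positioning tolerance shrinks as higher-order (steeper) terms are included.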
12. Management of visual disturbances in albinism: a case report
Directory of Open Access Journals (Sweden)
Omar Rokiah
2012-09-01
Full Text Available Abstract Introduction A number of vision defects have been reported in association with albinism, such as photophobia, nystagmus and astigmatism. In many cases only prescription sunglasses are prescribed. In this report, the effectiveness of low-vision rehabilitation in albinism, which included prescription of multiple visual aids, is discussed. Case presentation We present the case of a 21-year-old Asian woman with albinism and associated vision defects. Her problems were blurring of distant vision, glare, and dissatisfaction with her current auto-focus spectacle-mounted telescope device, which she reported as being heavy as well as cosmetically unacceptable. We describe how low-vision rehabilitation using multiple visual aids, namely spectacles, special iris-tinted contact lenses with clear pupils, and a bi-level telemicroscopic apparatus, improved her quality of life. Subsequent to rehabilitation our patient is happier and continues to use the visual aids. Conclusions Contact lenses with a special iris tint and clear pupil area are useful aids to reduce the glare experienced by albinos. The bi-level telemicroscopic apparatus fitted onto our patient's prescription spectacles was cosmetically acceptable and able to improve her distance vision. As a result, these low-vision rehabilitation approaches improved the quality of life of our albino patient.
13. Seven year follow-up after advanced surface ablation with excimer laser for treatment of myopia: Long-term outcomes of cooling PRK and LASEK
DEFF Research Database (Denmark)
Hansen, Rasmus Søgaard; Lyhne, Niels; Grauslund, Jakob;
Purpose: To evaluate and compare refractive predictability, uncorrected and corrected distance visual acuity (UDVA and CDVA), corneal haze, corneal densitometry and patient satisfaction up to 7 years after Photorefractive Keratectomy with cooling (cPRK) and Laser-Assisted Sub-epithelial Keratectomy......, no cPRK eyes, and 2 LASEK eyes (3.6%) had lost 2 or more lines of CDVA. At final follow-up, the mean corneal densitometry was 12.31±0.80 in cPRK eyes and 12.30±0.84 in LASEK eyes (P=0.74), and trace haze was found in 4 cPRK eyes (6%) and 6 LASEK eyes (11%) (P=0.51). Ninety-five percent of all patients...... were satisfied or very satisfied with the surgery 5 to 7 years after surgery. Conclusions: Both cPRK and LASEK seemed safe up to 7 years after surgery for treatment of myopia and low degrees of astigmatism. Results were comparable concerning refractive predictability, visual acuity, corneal haze...
14. Individual Differences in Scotopic Visual Acuity and Contrast Sensitivity: Genetic and Non-Genetic Influences.
Science.gov (United States)
Bartholomew, Alex J; Lad, Eleonora M; Cao, Dingcai; Bach, Michael; Cirulli, Elizabeth T
2016-01-01
Despite the large amount of variation found in the night (scotopic) vision capabilities of healthy volunteers, little effort has been made to characterize this variation and factors, genetic and non-genetic, that influence it. In the largest population of healthy observers measured for scotopic visual acuity (VA) and contrast sensitivity (CS) to date, we quantified the effect of a range of variables on visual performance. We found that young volunteers with excellent photopic vision exhibit great variation in their scotopic VA and CS, and this variation is reliable from one testing session to the next. We additionally identified that factors such as circadian preference, iris color, astigmatism, depression, sex and education have no significant impact on scotopic visual function. We confirmed previous work showing that the amount of time spent on the vision test influences performance and that laser eye surgery results in worse scotopic vision. We also showed a significant effect of intelligence and photopic visual performance on scotopic VA and CS, but all of these variables collectively explain <30% of the variation in scotopic vision. The wide variation seen in young healthy volunteers with excellent photopic vision, the high test-retest agreement, and the vast majority of the variation in scotopic vision remaining unexplained by obvious non-genetic factors suggest a strong genetic component. Our preliminary genome-wide association study (GWAS) of 106 participants ruled out any common genetic variants of very large effect and paves the way for future, larger genetic studies of scotopic vision. PMID:26886100
15. Individual Differences in Scotopic Visual Acuity and Contrast Sensitivity: Genetic and Non-Genetic Influences.
Directory of Open Access Journals (Sweden)
Alex J Bartholomew
Full Text Available Despite the large amount of variation found in the night (scotopic) vision capabilities of healthy volunteers, little effort has been made to characterize this variation and factors, genetic and non-genetic, that influence it. In the largest population of healthy observers measured for scotopic visual acuity (VA) and contrast sensitivity (CS) to date, we quantified the effect of a range of variables on visual performance. We found that young volunteers with excellent photopic vision exhibit great variation in their scotopic VA and CS, and this variation is reliable from one testing session to the next. We additionally identified that factors such as circadian preference, iris color, astigmatism, depression, sex and education have no significant impact on scotopic visual function. We confirmed previous work showing that the amount of time spent on the vision test influences performance and that laser eye surgery results in worse scotopic vision. We also showed a significant effect of intelligence and photopic visual performance on scotopic VA and CS, but all of these variables collectively explain <30% of the variation in scotopic vision. The wide variation seen in young healthy volunteers with excellent photopic vision, the high test-retest agreement, and the vast majority of the variation in scotopic vision remaining unexplained by obvious non-genetic factors suggests a strong genetic component. Our preliminary genome-wide association study (GWAS) of 106 participants ruled out any common genetic variants of very large effect and paves the way for future, larger genetic studies of scotopic vision.
16. Autokeratomileusis Laser
Science.gov (United States)
Kern, Seymour P.
1987-03-01
Refractive defects such as myopia, hyperopia, and astigmatism may be corrected by laser milling of the cornea. An apparatus combining automatic refraction/keratometry and an excimer type laser for precision reshaping of corneal surfaces has been developed for testing. When electronically linked to a refractometer or keratometer or holographic imaging device, the laser is capable of rapidly milling or ablating corneal surfaces to preselected dioptric power shapes without the surgical errors characteristic of radial keratotomy, cryokeratomileusis or epikeratophakia. The excimer laser simultaneously generates a synthetic Bowman's like layer or corneal condensate which appears to support re-epithelialization of the corneal surface. An electronic feedback arrangement between the measuring instrument and the laser enables real time control of the ablative milling process for precise refractive changes in the low to very high dioptric ranges. One of numerous options is the use of a rotating aperture wheel with reflective portions providing rapid alternate ablation/measurement interfaced to both laser and measurement instrumentation. The need for the eye to be fixated is eliminated or minimized. In addition to reshaping corneal surfaces, the laser milling apparatus may also be used in the process of milling both synthetic and natural corneal inlays for lamellar transplants.
17. Structured light-matter interactions in optical nanostructures (Presentation Recording)
Science.gov (United States)
Litchinitser, Natalia M.; Sun, Jingbo; Shalaev, Mikhail I.; Xu, Tianboyu; Xu, Yun; Pandey, Apra
2015-09-01
We show that unique optical properties of metamaterials open unlimited prospects to "engineer" light itself. For example, we demonstrate a novel way of complex light manipulation in few-mode optical fibers using metamaterials, highlighting how unique properties of metamaterials, namely the ability to manipulate both electric and magnetic field components, open new degrees of freedom in engineering complex polarization states of light. We discuss several approaches to ultra-compact structured light generation, including a nanoscale beam converter based on an ultra-compact array of nano-waveguides with a circular graded distribution of channel diameters that converts a conventional laser beam into a vortex with configurable orbital angular momentum, and a novel, miniaturized astigmatic optical element based on a single biaxial hyperbolic metamaterial that enables the conversion of Hermite-Gaussian beams into vortex beams carrying an orbital angular momentum and vice versa. Such beam converters are likely to enable a new generation of on-chip or all-fiber structured light applications. We also present our initial theoretical studies predicting that vortex-based nonlinear optical processes, such as second harmonic generation or parametric amplification that rely on phase matching, will also be strongly modified in negative index materials. These studies may find applications in multidimensional information encoding, secure communications, and quantum cryptography, as both spin and orbital angular momentum could be used to encode information; in dispersion engineering for spontaneous parametric down-conversion; and in on-chip optoelectronic signal processing.
18. Thermal stability test and analysis of a 20-actuator bimorph deformable mirror
Institute of Scientific and Technical Information of China (English)
Ning Yu; Zhou Hong; Yu Hao; Rao Chang-Hui; Jiang Wen-Han
2009-01-01
One of the important characteristics of adaptive mirrors is the thermal stability of surface flatness. In this paper, the thermal stability from 13℃ to 25℃ of a 20-actuator bimorph deformable mirror is tested by a Shack-Hartmann wavefront sensor. Experimental results show that the surface P-V of the bimorph increases nearly linearly with ambient temperature, at a ratio of 0.11 μm/℃, and the major component of the surface displacement is defocus; by comparison, astigmatism, coma and spherical aberration contribute very little. In addition, a finite element model is built to analyse the influence of the thickness, thermal expansion coefficient and Young's modulus of the materials on thermal stability. Calculated results show that the bimorph has the best thermal stability when the materials have the same thermal expansion coefficient, and that when the thickness ratio of glass to PZT is 3 and the Young's modulus ratio is approximately 0.4, the surface instability of the bimorph manifests itself most severely.
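The reported 0.11 μm/℃ ratio is simply the slope of a straight-line fit of surface P-V against ambient temperature. A minimal sketch on synthetic data (the measurement values below are made up to match the reported trend, not taken from the paper):

```python
import numpy as np

# Synthetic P-V measurements following the reported ~0.11 um/degC trend
temps = np.array([13.0, 16.0, 19.0, 22.0, 25.0])  # ambient temperature (degC)
pv = 0.11 * (temps - 13.0) + 0.35                 # surface P-V (um), made-up baseline

slope, intercept = np.polyfit(temps, pv, 1)       # least-squares straight-line fit
print(round(float(slope), 3))                     # recovers the 0.11 um/degC ratio
```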
19. Deep stroma investigation by confocal microscopy
Science.gov (United States)
Rossi, Francesca; Tatini, Francesca; Pini, Roberto; Valente, Paola; Ardia, Roberta; Buzzonetti, Luca; Canovetti, Annalisa; Malandrini, Alex; Lenzetti, Ivo; Menabuoni, Luca
2015-03-01
Laser assisted keratoplasty is nowadays largely used to perform minimally invasive surgery and partial thickness keratoplasty [1-3]. The use of the femtosecond laser enables a customized surgery, solving the specific problem of the single patient by designing new graft profiles and partial thickness keratoplasties (PTK). The common characteristics of PTKs that make them preferable to standard penetrating keratoplasty are: preservation of eyeball integrity, a reduced risk of graft rejection, and a controlled postoperative astigmatism. On the other hand, optimal surgical results after these PTKs depend on a correct comprehension of the morphology of the deep stromal layers, which can help in identifying the correct cleavage plane during surgery. In recent years, some studies have been published giving new insights into the posterior stroma morphology in adult subjects [4,5]. In this work we present a study performed on two groups of tissues: one group from 20 adult subjects aged 59 +/- 18 y.o., and the other from 15 young subjects aged 12 +/- 5 y.o. The samples were from tissues not suitable for transplantation. Confocal microscopy and Environmental Scanning Electron Microscopy (ESEM) were used for the analysis of the deep stroma. The preliminary results of this analysis show the main differences between young and adult tissues, improving knowledge of the morphology and biomechanical properties of the human cornea in order to improve surgical results in partial thickness keratoplasty.
20. Vision, eye disease, and art: 2015 Keeler Lecture.
Science.gov (United States)
Marmor, M F
2016-02-01
The purpose of this study was to examine normal vision and eye disease in relation to art. Ophthalmology cannot explain art, but vision is a tool for artists and its normal and abnormal characteristics may influence what an artist can do. The retina codes for contrast, and the impact of this is evident throughout art history from Asian brush painting, to Renaissance chiaroscuro, to Op Art. Art exists, and can portray day or night, only because of the way retina adjusts to light. Color processing is complex, but artists have exploited it to create shimmer (Seurat, Op Art), or to disconnect color from form (fauvists, expressionists, Andy Warhol). It is hazardous to diagnose eye disease from an artist's work, because artists have license to create as they wish. El Greco was not astigmatic; Monet was not myopic; Turner did not have cataracts. But when eye disease is documented, the effects can be analyzed. Color-blind artists limit their palette to ambers and blues, and avoid greens. Dense brown cataracts destroy color distinctions, and Monet's late canvases (before surgery) showed strange and intense uses of color. Degas had failing vision for 40 years, and his pastels grew coarser and coarser. He may have continued working because his blurred vision smoothed over the rough work. This paper can barely touch upon the complexity of either vision or art. However, it demonstrates some ways in which understanding vision and eye disease give insight into art, and thereby an appreciation of both art and ophthalmology. PMID:26563659
1. Refractive ocular conditions and reasons for spectacles renewal in a resource-limited economy
Directory of Open Access Journals (Sweden)
Folorunso Francisca N
2010-05-01
Full Text Available Abstract Background Although a leading cause of visual impairment and a treatable cause of blindness globally, the pattern of refractive errors in many populations is unknown. This study determined the pattern of refractive ocular conditions, reasons for spectacles renewal, and the effect of correction on refractive errors in a resource-limited community. Methods A retrospective review of case records of 1,413 consecutive patients seen in a private optometry practice in Nigeria between January 2006 and July 2007. Results A total of 1,216 (86.1%) patients, comprising 486 (40%) males and 730 (60%) females with a mean age of 41.02 years (SD 14.19), were analyzed. The age distribution peaked at the peri-adolescent and middle-age years. The main ocular complaints were spectacles loss and discomfort (412, 33.9%), blurred near vision (399, 32.8%) and asthenopia (255, 20.9%). The mean duration of ocular symptoms before consultation was 2.05 years (SD 1.92). The most common refractive errors were presbyopia (431, 35.3%), hyperopic astigmatism (240, 19.7%) and presbyopia with hyperopia (276, 22.7%); only 59 (4.9%) had myopia. Following correction, there were reductions in the magnitudes of the blind (VA. Conclusions Adequate correction of refractive errors reduces visual impairment and avoidable blindness, and to achieve optimal control of refractive errors in the community, services should be targeted at individuals in the peri-adolescent and middle-age years.
2. Soft x-ray backlighting of cryogenic implosions using a narrowband crystal imaging system (invited)
Energy Technology Data Exchange (ETDEWEB)
Stoeckl, C., E-mail: csto@lle.rochester.edu; Bedzyk, M.; Brent, G.; Epstein, R.; Fiksel, G.; Guy, D.; Goncharov, V. N.; Hu, S. X.; Ingraham, S.; Jacobs-Perkins, D. W.; Jungquist, R. K.; Marshall, F. J.; Mileham, C.; Nilson, P. M.; Sangster, T. C.; Shoup, M. J.; Theobald, W. [Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623 (United States)
2014-11-15
A high-performance cryogenic DT inertial confinement fusion implosion experiment is an especially challenging backlighting configuration because of the high self-emission of the core at stagnation and the low opacity of the DT shell. High-energy petawatt lasers such as OMEGA EP promise significantly improved backlighting capabilities by generating high x-ray intensities and short emission times. A narrowband x-ray imager with an astigmatism-corrected bent quartz crystal for the Si Heα line at ∼1.86 keV was developed to record backlit images of cryogenic direct-drive implosions. A time-gated recording system minimized the self-emission of the imploding target. A fast target-insertion system capable of moving the backlighter target ∼7 cm in ∼100 ms was developed to avoid interference with the cryogenic shroud system. With backlighter laser energies of ∼1.25 kJ at a 10-ps pulse duration, the radiographic images show a high signal-to-background ratio of >100:1 and a spatial resolution of the order of 10 μm. The backlit images can be used to assess the symmetry of the implosions close to stagnation and the mix of ablator material into the dense shell.
3. Effect of refractive error on temperament and character properties
Institute of Scientific and Technical Information of China (English)
Emine Kalkan Akcay; Fatih Canan; Huseyin Simavli; Derya Dal; Hacer Yalniz; Nagihan Ugurlu; Omer Gecici; Nurullah Cagil
2015-01-01
AIM: To determine the effect of refractive error on temperament and character properties using Cloninger’s psychobiological model of personality. METHODS: Using the Temperament and Character Inventory (TCI), the temperament and character profiles of 41 participants with refractive errors (17 with myopia, 12 with hyperopia, and 12 with myopic astigmatism) were compared to those of 30 healthy control participants. Here, temperament comprised the traits of novelty seeking, harm avoidance, and reward dependence, while character comprised the traits of self-directedness, cooperativeness, and self-transcendence. RESULTS: Participants with refractive error showed significantly lower scores on purposefulness, cooperativeness, empathy, helpfulness, and compassion (P<0.05, P<0.01, P<0.05, P<0.05, and P<0.01, respectively). CONCLUSION: Refractive error might have a negative influence on some character traits, and different types of refractive error might have different temperament and character properties. These personality traits may be implicated in the onset and/or perpetuation of refractive errors and may be a productive focus for psychotherapy.
4. A Comparison of Clinical Outcomes of Dislocated Intraocular Lens Fixation between In Situ Refixation and Conventional Exchange Technique Combined with Vitrectomy
Science.gov (United States)
Eum, Sun Jung; Kim, Myung Jun; Kim, Hong Kyun
2016-01-01
Purpose. To evaluate surgical efficacy of in situ refixation technique for dislocated posterior chamber intraocular lens (PCIOL). Methods. This was a single-center retrospective case series. 34 patients (34 eyes) who underwent sclera fixation for dislocated IOLs combined with vitrectomy were studied. Of 34 eyes, 17 eyes underwent IOL exchange and the other 17 eyes underwent in situ refixation. Results. Mean follow-up period was 6 months. Mean logMAR best corrected visual acuity (BCVA) was not significantly different between the groups 6 months after surgery (0.10 ± 0.03 in the IOL exchange group and 0.10 ± 0.05 in the refixation group; p = 0.065). Surgically induced astigmatism (SIA) was significantly lower in the refixation group (0.79 ± 0.41) than in the IOL exchange group (1.29 ± 0.46) (p = 0.004) at 3 months, which persisted to 6 months (1.13 ± 0.18 in the IOL exchange group and 0.74 ± 0.11 in the refixation group; p = 0.006). Postoperative complications occurred in 3 eyes in the IOL exchange group (17.6%) and 2 eyes in the refixation group (11.8%). However, all of the patients were well managed without additional surgery. Conclusion. The in situ refixation technique should be preferentially considered if surgery is indicated since it seemed to produce a sustained reduction in SIA compared to IOL exchange. PMID:27119019
5. A Comparison of Clinical Outcomes of Dislocated Intraocular Lens Fixation between In Situ Refixation and Conventional Exchange Technique Combined with Vitrectomy
Directory of Open Access Journals (Sweden)
Sun Jung Eum
2016-01-01
Full Text Available Purpose. To evaluate surgical efficacy of in situ refixation technique for dislocated posterior chamber intraocular lens (PCIOL). Methods. This was a single-center retrospective case series. 34 patients (34 eyes) who underwent sclera fixation for dislocated IOLs combined with vitrectomy were studied. Of 34 eyes, 17 eyes underwent IOL exchange and the other 17 eyes underwent in situ refixation. Results. Mean follow-up period was 6 months. Mean logMAR best corrected visual acuity (BCVA) was not significantly different between the groups 6 months after surgery (0.10±0.03 in the IOL exchange group and 0.10±0.05 in the refixation group; p=0.065). Surgically induced astigmatism (SIA) was significantly lower in the refixation group (0.79±0.41) than in the IOL exchange group (1.29±0.46) (p=0.004) at 3 months, which persisted to 6 months (1.13±0.18 in the IOL exchange group and 0.74±0.11 in the refixation group; p=0.006). Postoperative complications occurred in 3 eyes in the IOL exchange group (17.6%) and 2 eyes in the refixation group (11.8%). However, all of the patients were well managed without additional surgery. Conclusion. The in situ refixation technique should be preferentially considered if surgery is indicated since it seemed to produce a sustained reduction in SIA compared to IOL exchange.
6. Aspherical Lens Design Using Genetic Algorithm for Reducing Aberrations in Multifocal Artificial Intraocular Lens
Directory of Open Access Journals (Sweden)
Chih-Ta Yen
2015-09-01
Full Text Available A complex intraocular lens (IOL) design involving numerous uncertain variables is proposed. We integrated a genetic algorithm (GA) with the commercial optical design software CODE V to design a multifocal IOL for the human eye. We mainly used an aspherical lens in the initial state for the crystalline lens; therefore, we used the internal human eye model in the software. The proposed optimization employs a GA for optimally simulating the focusing function of the human eye; in this method, the thickness and curvature of the anterior lens and the posterior part of the IOL were varied. A comparison of the proposed GA-designed IOLs and those designed using the CODE V built-in optimization algorithm for 550 degrees of myopia and 175 degrees of astigmatism of the human eye showed that the proposed IOL design improved the root mean square (RMS) spot size, tangential coma (TCO), and modulation transfer function (MTF) at a spatial frequency of 30 with a pupil size of 6 mm by approximately 17%, 43%, and 35%, respectively. However, the worst performance of spherical aberration (SA) was lower than 46%, because the optical design involves a tradeoff between all aberrations. Compared with the traditional CODE V built-in optimization scheme, the proposed IOL design can efficiently improve the critical parameters, namely TCO, RMS, and MTF.
7. Intraocular lens iris fixation. Clinical and macular OCT outcomes
Directory of Open Access Journals (Sweden)
Garcia-Rojas Leonardo
2012-10-01
Full Text Available Abstract Background. To assess the efficacy, clinical outcomes, visual acuity (VA), incidence of adverse effects, and complications of peripheral iris fixation of 3-piece acrylic IOLs in eyes lacking capsular support. Thirteen patients who underwent implantation and peripheral iris fixation of a 3-piece foldable acrylic PC IOL for aphakia in the absence of capsular support were followed after surgery. Clinical outcomes and macular SD-OCT (Cirrus OCT; Carl Zeiss Meditec, Germany) were analyzed. Findings. The final CDVA was 20/40 or better in 8 eyes (62%), 20/60 or better in 12 eyes (92%), and 20/80 in one case due to corneal astigmatism and mild persistent edema. No intraoperative complications were reported. There were seven cases of medically controlled ocular hypertension after surgery due to the presence of viscoelastic in the AC. There were no cases of cystoid macular edema, chronic iridocyclitis, IOL subluxation, pigment dispersion, or glaucoma. Macular edema did not develop in any case by means of SD-OCT. Conclusions. We think that this technique for iris suture fixation provides safe and effective results. Patients had substantial improvements in UDVA and CDVA. This surgical strategy may be individualized; however, age, cornea status, angle structures, iris anatomy, and glaucoma are important considerations in selecting candidates for an appropriate IOL fixation method.
8. Visual and Refractive Outcomes of a Toric Presbyopia-Correcting Intraocular Lens
Science.gov (United States)
Epitropoulos, Alice T.
2016-01-01
Purpose. To evaluate outcomes in astigmatic patients implanted with the Trulign (Bausch + Lomb) toric presbyopia-correcting intraocular lens (IOL) during cataract surgery in a clinical practice setting. Methods. Retrospective study in 40 eyes (31 patients) that underwent cataract extraction and IOL implantation in a procedure using intraoperative wavefront aberrometry guidance (ORA system). Endpoints included uncorrected visual acuity (VA), reduction in refractive cylinder, accuracy to target, axis orientation, and safety. Results. At postoperative month 1, refractive cylinder was ≤0.50 D in 97.5% of eyes (≤1.00 D in 100%), uncorrected distance VA was 20/25 or better in 95%, uncorrected intermediate VA was 20/25 or better in 95%, and uncorrected near VA was 20/40 (J3 equivalent) or better in 92.5%. Manifest refraction spherical equivalent was within 1.00 D of target in 95% of eyes and within 0.50 D in 82.5%. Lens rotation was <5° and best-corrected VA was 20/25 or better in all eyes. Conclusion. The IOL effectively reduced refractive cylinder and provided excellent uncorrected distance and intermediate vision and functional near vision. Refractive predictability and rotational stability were exceptional. Implantation of this toric presbyopia-correcting IOL using ORA intraoperative aberrometry provides excellent refractive and visual outcomes in a standard of care setting. PMID:26885382
9. Clinical Trial of Manual Small Incision Surgery and Standard Extracapsular Surgery
Directory of Open Access Journals (Sweden)
Parikshit Gogate
2003-01-01
Full Text Available Introduction. Manual small incision cataract surgery (MSICS) is used increasingly for cataract extraction and intraocular lens implantation. It is thought that the small wound heals faster than a conventional incision, leading to less astigmatism and a better uncorrected visual acuity. This is important as many patients do not wear or cannot afford spectacles after surgery, which means that their uncorrected visual acuity is what they rely on to carry out their everyday functions. Often this is less than 6/18 on the Snellen chart, which would fall below the WHO good outcome category for post-operative visual impairment. A post-operative vision of 6/18 or better without spectacles is a goal which appears to be within the reach of small incision techniques for cataract surgery. However, there are concerns that the method used to remove the nucleus in MSICS may be more traumatic to the corneal endothelium than conventional ECCE surgery.
10. Excimer Laser Phototherapeutic Keratectomy for the Treatment of Clinically Presumed Fungal Keratitis
Directory of Open Access Journals (Sweden)
Liang-Mao Li
2014-01-01
Full Text Available This retrospective study evaluated treatment outcomes of excimer laser phototherapeutic keratectomy (PTK) for clinically presumed fungal keratitis. Forty-seven eyes of 47 consecutive patients underwent manual superficial debridement and PTK. All corneal lesions were located in the anterior stroma and were resistant to medication therapy for at least one week. Data were collected by a retrospective chart review with at least six months of follow-up data available. After PTK, infected corneal lesions were completely removed and the clinical symptoms resolved in 41 cases (87.2%). The mean ablation depth was 114.39±45.51 μm and the diameter of ablation was 4.06±1.07 mm. The mean time for healing of the epithelial defect was 8.8±5.6 days. Thirty-four eyes (82.9%) showed an improvement in best spectacle-corrected visual acuity of two or more lines. PTK complications included mild to moderate corneal haze, hyperopic shift, irregular astigmatism, and corneal thinning. Six eyes (12.8%) still showed progressed infection, and conjunctival flap covering, amniotic membrane transplantation, or penetrating keratoplasty was performed. PTK is a valuable therapeutic alternative for superficial infectious keratitis. It can effectively eradicate lesions, hasten reepithelialization, and restore and preserve useful visual function. However, the selection of surgery candidates should be conducted carefully.
11. Space and time resolving spectrograph for fusion plasma diagnostics
International Nuclear Information System (INIS)
This paper discusses construction of an EUV (60-350 angstrom) space- and time-resolving, grazing incidence spectrograph (STRS). The simultaneous spectral coverage of the instrument ranges from 20 to 60 angstrom, depending on the wavelength region. The spectral resolution is about 1 angstrom. The spatial resolution, accomplished by using the pinhole camera effect and the inherent astigmatism of a concave grating in grazing incidence, is about 2 cm, with a total field of view of 60 cm at a distance of 2 m from the plasma. The detector consists of a 75 mm MCP image intensifier optically coupled to three CCD area array detectors. Time resolution of up to 2 ms is achieved with high speed read-out electronics. A PDP 11/73 minicomputer controls the spectrograph and collects and reduces 3.0 MB of data per shot. The complete design of the STRS and the results of initial tests of the detector system, spectrograph, and data handling software are presented
12. Time-resolving multispatial grazing incidence spectrograph for plasma fusion diagnostics
International Nuclear Information System (INIS)
A grazing incidence spectrograph which operates in the EUV (40 to 350 A) that has multispectral, temporal, and spatial resolving capabilities is being constructed for plasma fusion diagnostics. The spectrograph achieves a simultaneous spectral coverage of 20 and 60 A when centered on 40 and 350 A, respectively, with ∼ 1-A resolution. The detector consists of an image intensifier fiber-optically coupled to 3 area array detectors (CCDs), which can be read out in 5 ms, thereby determining the time resolution of the instrument. The spatial resolution is accomplished by using the astigmatism inherent to a concave grating in grazing incidence, coupled with the pinhole camera effect produced by an entrance slit of limited height. The spectrograph can view ∼ 54 cm of plasma which is 2 m away from the entrance slit with 4- and 8-cm resolution at 350 and 40 A, respectively. The authors will present the results of a feasibility study, the spectrograph design, and the results of the data reduction and interpretation codes which are under development
13. Accounting for the phase, spatial frequency and orientation demands of the task improves metrics based on the visual Strehl ratio.
Science.gov (United States)
Young, Laura K; Love, Gordon D; Smithson, Hannah E
2013-09-20
Advances in ophthalmic instrumentation have allowed high order aberrations to be measured in vivo. These measurements describe the distortions to a plane wavefront entering the eye, but not the effect they have on visual performance. One metric for predicting visual performance from a wavefront measurement uses the visual Strehl ratio, calculated in the optical transfer function (OTF) domain (VSOTF) (Thibos et al., 2004). We considered how well such a metric captures empirical measurements of the effects of defocus, coma and secondary astigmatism on letter identification and on reading. We show that predictions using the visual Strehl ratio can be significantly improved by weighting the OTF by the spatial frequency band that mediates letter identification and further improved by considering the orientation of phase and contrast changes imposed by the aberration. We additionally showed that these altered metrics compare well to a cross-correlation-based metric. We suggest a version of the visual Strehl ratio, VScombined, that incorporates primarily those phase disruptions and contrast changes that have been shown independently to affect object recognition processes. This metric compared well to VSOTF for letter identification and was the best predictor of reading performance, having a higher correlation with the data than either the VSOTF or cross-correlation-based metric.
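The VSOTF metric the abstract builds on is defined (Thibos et al., 2004) as the volume under the eye's OTF, weighted by a neural contrast-sensitivity function, normalized by the same quantity for the diffraction-limited OTF. A minimal numerical sketch of that ratio, assuming the OTFs and weighting are already sampled on a common frequency grid (all array values below are illustrative stand-ins, not data from the study):

```python
import numpy as np

def visual_strehl_otf(otf, otf_dl, csf_n):
    """Schematic VSOTF: CSF-weighted volume of the real part of the eye's
    OTF, normalized by the same quantity for the diffraction-limited OTF."""
    num = np.sum(csf_n * np.real(otf))
    den = np.sum(csf_n * np.real(otf_dl))
    return num / den

# Illustrative grid and weighting (not the CSF used in the paper).
f = np.linspace(-60, 60, 121)                 # spatial frequency, cyc/deg
fx, fy = np.meshgrid(f, f)
csf_n = np.exp(-np.hypot(fx, fy) / 15.0)      # stand-in neural weighting
otf_dl = np.exp(-np.hypot(fx, fy) / 30.0)     # stand-in diffraction-limited OTF

# An aberrated OTF at half the diffraction-limited modulation everywhere
# yields VSOTF = 0.5 regardless of the weighting chosen.
vsotf = visual_strehl_otf(0.5 * otf_dl, otf_dl, csf_n)
```

The refinements the abstract proposes (weighting by the spatial-frequency band that mediates letter identification, and accounting for phase and orientation) would replace the generic `csf_n` weighting in this sketch.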
14. Numerical implementation of generalized Coddington equations for ophthalmic lens design
Science.gov (United States)
Rojo, P.; Royo, S.; Ramírez, J.; Madariaga, I.
2014-02-01
A method for general implementation in any software platform of the generalized Coddington equations is presented, developed, and validated within a Matlab environment. The ophthalmic lens design strategy is presented thoroughly, and the basic concepts of generalized ray tracing are introduced. The methodology for ray tracing is shown to include two inter-related processes. Firstly, finite ray tracing is used to provide the main direction of propagation of the considered ray at the incidence point of interest. Afterwards, generalized ray tracing provides the principal curvatures of the local wavefront at that point, and its orientation after being refracted by the lens. The curvature values of the local wavefront are interpreted as the sagittal and tangential powers of the lens at the point of interest. The proposed approach is validated using a double-check of the calculated lens performance in the spherical lens case: while finite ray tracing is validated using a commercial ray tracing software, generalized ray tracing is validated using a software application for ophthalmic lens design based on the classical version of Coddington equations. Equations of the complete tracing process are developed in detail for the case of generic astigmatic ophthalmic lenses as an example. Three-dimensional representation of the sagittal and tangential powers of the ophthalmic lens at all directions of gaze then becomes possible, and results are presented for lenses with different geometries.
15. Programmable diffractive lens for ophthalmic application
Science.gov (United States)
Millán, María S.; Pérez-Cabré, Elisabet; Romero, Lenny A.; Ramírez, Natalia
2014-06-01
Pixelated liquid crystal displays have been widely used as spatial light modulators to implement programmable diffractive optical elements, particularly diffractive lenses. Many different applications of such components have been developed in information optics and optical processors that take advantage of their properties of great flexibility, easy and fast refreshment, and multiplexing capability in comparison with equivalent conventional refractive lenses. We explore the application of programmable diffractive lenses displayed on the pixelated screen of a liquid crystal on silicon spatial light modulator to ophthalmic optics. In particular, we consider the use of programmable diffractive lenses for the visual compensation of refractive errors (myopia, hypermetropia, astigmatism) and presbyopia. The principles of compensation are described and sketched using geometrical optics and paraxial ray tracing. For the proof of concept, a series of experiments with an artificial eye on an optical bench is conducted. We analyze the compensation precision in terms of optical power and compare the results with those obtained by means of conventional ophthalmic lenses. Practical considerations oriented to feasible applications are provided.
16. RESEARCH OF THERMO-OPTICAL INHOMOGENEITIES IN Yb-Er GLASS AT DIODE PUMPING
Directory of Open Access Journals (Sweden)
V. Khramov
2016-03-01
Full Text Available Subject of Research. An investigation method for thermo-optical distortions in solid-state lasers was developed and presented. The method can be easily used for research on small-diameter (approximately 2 mm) active elements. Method. The experimental method described in this paper is based on the registration of the deviation of the energy center of a probe beam passing through the thermally stressed active element. Main Results. We present experimental results on the thermal lens optical power in an active element made of Yb-Er glass pumped transversely by a laser diode in the following modes: without generation, free-running, and Q-switching. We present the obtained dependences of the optical power on the pumping energy. The measurements were performed for the two polarization components at two wavelengths (632.8 nm and 1550 nm), showing the absence of explicit astigmatism of the thermal lens. Practical Relevance. Knowledge of the thermal regime of such lasers allows more precise calculation of the resonator parameters that account for the thermal lens.
17. The Preliminary Clinical Observation of Array Multifocal Intraocular Lens Implantation
Institute of Scientific and Technical Information of China (English)
Zhende Lin; Bo Feng; Yizhi Liu; Bing Cheng
2001-01-01
Purpose: To evaluate the clinical effects of implantation of Array multifocal intraocular lenses. Methods: Thirty-one cases (37 eyes) of cataract patients, including 15 males (19 eyes) and 16 females (18 eyes), were involved in this study. All patients underwent standard phacoemulsification with Array multifocal intraocular lens implantation. The complications during operation, postoperative distant visual acuity, near visual acuity, corneal curvature and visual symptoms were observed. Results: The mean value of best postoperative visual acuity was recorded as follows: uncorrected distant visual acuity was 0.8, the best-corrected distant visual acuity was 0.9, uncorrected near visual acuity was 0.5, near visual acuity with distance correction was 0.6, and the best-corrected near visual acuity was 0.9. The astigmatism of the cornea was less than 1.5 D pre-operatively and post-operatively. One patient complained of glare. Conclusion: Array multifocal intraocular lenses can provide good distant and near visual acuity. With observation of more cases and longer follow-up, we can draw a further conclusion. Eye Science 2001; 17: 57-60.
18. Analysis of incidence of keratoconus in relatives of patients who underwent corneal transplant due to advanced keratoconus using the Orbscan II topographic graphs
Science.gov (United States)
López Olazagasti, Estela; Hernández y del Callejo, César E.; Ibarra-Galitzia, Jorge; Ramírez-Zavaleta, Gustavo; Tepichín, Eduardo
2011-10-01
Keratoconus is a corneal disease in which the cornea assumes a conical shape due to an irregular alteration of the internal structure of the corneal tissue; it is sometimes progressive, especially in young people. Anatomically, the main signs of keratoconus are thinning of the cornea in its central or paracentral region, usually accompanied in this region by an increase in high irregular astigmatism, with a consequent loss of vision. Its diagnosis requires a thorough study including the family history, a complete ophthalmologic examination and imaging studies. This diagnosis allows classifying the type of keratoconus, which allows determining management options, with which it is possible to establish a visual prognosis for each eye. One of the indicators that help in the diagnosis of keratoconus is an inherited familial propensity. The literature reports an incidence of keratoconus of 11% in first-degree relatives of patients with keratoconus. Results suggest an ethnic dependence, which implies that knowledge of the tendency toward keratoconus in the Mexican population is important. In this work, we present the preliminary results of a study of a group of relatives of patients who underwent corneal transplant due to advanced keratoconus, using Orbscan II topographic diagnosis to determine the predisposition to keratoconus in this group.
19. Simultaneous phacoemulsification, lens implantation and endothelial keratoplasty: Triple procedure
Directory of Open Access Journals (Sweden)
Nikolić Ljubiša
2011-01-01
Full Text Available Introduction. Simultaneous Descemet stripping endothelial keratoplasty, phacoemulsification, and intraocular lens implantation are indicated in Fuchs’ dystrophy with associated cataract. Compared to the standard method of the triple procedure, which includes penetrating keratoplasty, this new method has the advantages of sutureless surgery, a small limbal incision, faster recovery, fewer surface problems, less astigmatism, stronger tensile strength and more predictable calculation of the intraocular lens power. This is the first report of such a combination of procedures in our literature. Case report. A 76-year-old woman suffered from a gradual bilateral visual loss. The best corrected visual acuity was 20/60 (right eye) and finger counting at 1 m (left eye). Corneal thickness was 590 μm and 603 μm, respectively. A marked cornea guttata and nuclear cataract were present in both eyes. Phacoemulsification, lens implantation, and Descemet stripping were done in the left eye. The posterior lamellar corneal graft, 8.0 mm in diameter and about 150 μm thick, was bent and inserted through the limbal incision. Air was injected into the anterior chamber to attach the graft to the recipient stroma. The cornea remained clear, and the transplant was attached during a two-year follow-up. Visual acuity was 20/40 after two months, and 20/25 after one year. Conclusion. The new technique proved itself as a good choice for the treatment of mild Fuchs’ dystrophy associated with cataract.
20. Spontaneous Rotation of a Toric Implantable Collamer Lens
Directory of Open Access Journals (Sweden)
Alejandro Navas
2010-11-01
Full Text Available We present a case of toric implantable collamer lens (TICL) spontaneous rotation in a patient with myopic astigmatism. A 23-year-old female underwent TICL implantation. Preoperative uncorrected visual acuity (UCVA) was 20/800 and 20/1200, respectively, with –7.75 –4.25 × 0° and –8.25 –5.25 × 180°. The left eye achieved an UCVA of 20/30. Three months after successful implantation of the TICL in the left eye, the patient presented with a sudden decrease in visual acuity in the left eye. UCVA was 20/100 with a refraction of +2.50 –4.50 × 165°. We observed the toric marks with a 30° rotation from the original position and decided to reposition the TICL, obtaining a final UCVA of 20/25, which remained stable at 6 months’ follow-up. A TICL can present a considerable rotation that compromises visual acuity. The relocation of the TICL is a safe and effective procedure to recover visual acuity lost due to significant spontaneous TICL rotation.
https://www.vernier.com/experiment/esi-26_fossil-fuels/ | ### Introduction
Hydrocarbons are compounds containing only hydrogen and carbon atoms. Many common fuels such as gasoline, diesel fuel, heating oil, aviation fuel, and natural gas are essentially mixtures of hydrocarbons. Paraffin wax, used to make many candles, is a mixture of hydrocarbons with the representative formula C25H52.
Ethyl alcohol, a substituted hydrocarbon with the formula C2H5OH, is used as a gasoline additive (gasohol) and as a gasoline substitute.
### Objectives
In the Preliminary Activity, you will determine the heat of combustion of paraffin wax (in kJ/g). You will first use the energy from burning paraffin wax to heat a known quantity of water. By monitoring the temperature of the water, you can find the amount of heat transferred to it (in kJ), using the formula
$q = C_p \cdot m \cdot \Delta t$
where q is heat, Cp is the specific heat capacity of water, m is the mass of water, and Δt is the change in temperature of the water. Finally, the amount of fuel burned will be taken into account by calculating the heat per gram of paraffin wax consumed in the combustion.
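The two-step calculation described above can be sketched directly; the water mass, temperature rise, and mass of wax consumed below are illustrative values, not measurements from the activity:

```python
CP_WATER = 4.18e-3  # specific heat capacity of water, kJ/(g·°C)

def heat_absorbed(mass_water_g, delta_t_c):
    """q = Cp · m · Δt, in kJ."""
    return CP_WATER * mass_water_g * delta_t_c

def heat_of_combustion(q_kj, fuel_burned_g):
    """Heat released per gram of fuel burned, in kJ/g."""
    return q_kj / fuel_burned_g

# Illustrative run: 100 g of water warmed by 20 °C while 0.25 g of wax burned.
q = heat_absorbed(100.0, 20.0)            # 8.36 kJ transferred to the water
per_gram = heat_of_combustion(q, 0.25)    # 33.44 kJ/g of paraffin wax
```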
After completing the Preliminary Activity, you will first use reference sources to find out more about fossil fuel energy before you choose and investigate a researchable question.
http://finalfantasy.wikia.com/wiki/List_of_Final_Fantasy_VII_statuses | # List of Final Fantasy VII statuses
The following is a list of status effects in Final Fantasy VII.
Certain enemies absorb status attacks, similar to how elemental attacks can be absorbed. For example, Dragon Zombie, Ghost, Ghost Ship and Zenene will absorb the Death status, gaining full recovery from being hit by it. Some enemies are weak to certain statuses and take double damage from any attack that attempts to inflict that status.
## Statuses
### Death
The player can add the instant death element to a character's physical attacks by linking Destruct Materia or Odin Materia with status effect Materia in the character's weapon. Equipping the pair in the character's armor will make him/her immune to instant death, instead.
Enemies can also be killed by a single hit if the player performs the Overflow glitch with Vincent's or Barret's ultimate weapons.
It is possible to have a party of three Dead characters if the party is assembled that way outside of battle: kill the party leader's allies during a battle, use the PHS to swap them for two living characters, start another battle to kill off the party leader, and then use the PHS to bring the two KO'd characters back into the party. If the player enters any battle with all three party members KO'd, they get an instant Game Over.
If Sneak Attack is used to revive fallen party members their ATB bars will never fill up.
If the player triggers a damage overflow glitch and the attacker has HP Absorb linked to the ability that causes the overflow, the attacker instantly dies when the HP Absorb Materia tries to restore HP to the user from the overflowed attack.
| Game Element | Type | Effect |
|---|---|---|
| Death | Magic | Inflicts Death. |
| Death Force | Enemy Skill | Prevents Death. |
| L5 Death | Enemy Skill | Inflicts Death on all enemies whose levels are a multiple of 5. |
| Roulette | Enemy Skill | Inflicts Death on a random target. |
| Flash | Command | Inflicts Death to all enemies. It is the 2nd level ability of the Slash-All Materia. |
| Steel Bladed Sword | Summon | Summons Odin and inflicts Death on all enemies. If resistant, Gunge Lance is used instead. |
| Death Joker | Limit Break | Inflicts Death to the player party if the player rolls two Cait Siths and a Bar in Cait Sith's Slots Limit Break. |
| Dragon Dive | Limit Break | Inflicts 6 non-elemental attacks with a chance of Death to random enemies. |
| Finishing Touch | Limit Break | Inflicts Death on all enemies. Doesn't work on all enemies. |
| Hyper Jump | Limit Break | Inflicts major non-elemental damage with a chance of Death to all enemies. |
| Game Over | Limit Break | Inflicts Death on any enemy regardless of immunities if the player rolls three Cait Siths in Cait Sith's Slots Limit Break. |
| Hammerblow | Limit Break | Ejects an enemy. Doesn't work on all enemies. |
| Satan Slam | Limit Break | Inflicts extreme non-elemental damage and a chance of Death to all enemies. |
| Hidden One | Enemy Attack | Inflicts Death. |
| Joker | Enemy Attack | Inflicts Death. Doesn't always work. |
| Knife | Enemy Attack | Inflicts Death. |
| L4 Death | Enemy Attack | Inflicts Death on party members whose levels are a multiple of 4. |
| Scissor Attack | Enemy Attack | Inflicts minor non-elemental damage and Death. Used as a counterattack when HP is 50% or less. |
| Suffocation Song | Enemy Attack | Inflicts Death. Only used if Gighee is alive, and if at least one opponent has Sadness. |
| Safety Bit | Accessory | Prevents Death. |
| Kiss of Death | Item | Inflicts Death on all enemies with 68% precision. |
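The level-multiple skills above (L5 Death, and the enemy-side L4 Death) reduce to a simple modulo check. A minimal Python sketch with a hypothetical function name, not the game's actual code:

```python
def is_hit_by_level_death(level, multiple):
    """L5 Death hits targets whose level is a multiple of 5;
    the enemy-side L4 Death uses a multiple of 4 instead."""
    return level % multiple == 0

# A level-25 target is hit by L5 Death but not by L4 Death.
```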
### Near-death
Near-death is listed in the manual as both ANEMIC and CRITICAL. A party member's HP digits in the battle menu turn yellow when their HP falls below 25% of their maximum, and the character appears exhausted on screen.
The sword Ultima Weapon and the megaphone HP Shout are weaker when their wielders are in Near-death, the Ultima Weapon's blade turning blue.
Tifa's Master Fist and Powersoul weapons double in attack power when she is in critical health.
The final bosses use several attacks that reduce the party's health to critical.
### Sleep
Sleep is caused by the spell Sleepel and by certain enemy attacks. Sleeping units cannot act until awoken. Magic damage does not wake them, leaving them vulnerable to it, but being hit by a physical attack removes the status.
| Game Element | Type | Effect |
|---|---|---|
| Sleepel | Magic | Inflicts Sleep. |
| Bad Breath | Enemy Skill | Inflicts Sleep as well as other statuses to all enemies. |
| Frog Song | Enemy Skill | Inflicts or cures Frog and Sleep. |
| AB Cannon | Enemy Attack | Inflicts non-elemental damage and Sleep. |
| Big Pollen | Enemy Attack | Inflicts moderate non-elemental damage and Sleep to party. |
| Combo | Enemy Attack | Fourth hit version inflicts non-elemental damage and Sleep. |
| Firing Line | Enemy Attack | Inflicts non-elemental damage as well as Sleep and Poison to party. |
| Hell Bubble | Enemy Attack | Inflicts Sleep. |
| Pollen | Enemy Attack | Inflicts Sleep and non-elemental damage on party. |
| Refuse | Enemy Attack | Inflicts Sleep and non-elemental damage. |
| Sleep Scales | Enemy Attack | Inflicts Sleep. |
| Smoke Bullet | Enemy Attack | Inflicts Sleep and Darkness. |
| Dream Powder | Item | Inflicts Sleep on all enemies with 80% precision. |
### Poison
Poison periodically inflicts Poison-elemental physical damage upon the afflicted unit and does not persist after battle. It can be cured using the Antidote and Remedy items, the Poisona and Esuna spells, the White Wind and Angel Whisper Enemy Skills, and Aeris's status-healing Limit Breaks.
Poison can be avoided by using the Resist spell which is the third spell of the Heal Materia, as well as the accessories Ribbon, Poison Ring, Star Pendant and Fairy Ring, and linking Poison+Added Effect or Hades+Added Effect in a character's armor.
The afflicted target takes physical Poison-elemental damage equal to 1/32 of their Max HP every 2.5 units of time, ignoring Defense and the Barrier status. The status lasts until the end of battle or until cured. Damage from the Poison status functions as if the player were damaging themselves; therefore, a character under both Poison and All Lucky 7s will deal 7777 damage to themselves with each tick. Since All Lucky 7s requires exactly 7777 HP, the character loses both statuses and is inflicted with the Death status. If a poisoned character is defending, the defensive stance is canceled whenever the character takes Poison damage.
Because the Poison status deals Poison-elemental damage, the player can absorb it if they make the character absorbent to Poison, such as with the Poison Materia linked with a mastered Elemental Materia. If the character is immune to the Poison element (take no damage from it), they are also immune to the Poison status.
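The tick arithmetic above can be sketched as follows. The function names are hypothetical and integer division stands in for the game's rounding; this is illustrative, not game code:

```python
def poison_tick_damage(max_hp):
    """Each Poison tick deals 1/32 of Max HP as Poison-elemental
    physical damage, ignoring Defense and the Barrier status."""
    return max_hp // 32

def poison_tick_with_lucky_sevens(max_hp, all_lucky_7s):
    """Under All Lucky 7s, every hit the character deals -- including
    their own Poison tick -- becomes 7777."""
    return 7777 if all_lucky_7s else poison_tick_damage(max_hp)
```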
| Game Element | Type | Effect |
|---|---|---|
| Bio | Magic | Inflicts minor Poison-elemental damage and Poison. |
| Bio2 | Magic | Inflicts moderate Poison-elemental damage and Poison. |
| Bio3 | Magic | Inflicts major Poison-elemental damage and Poison. |
| Bad Breath | Enemy Skill | Inflicts Poison as well as other statuses to all enemies. |
| Abnormal Breath | Enemy Attack | Dragon Zombie's attack. Inflicts moderate non-elemental damage and Poison. |
| Acid Rain | Enemy Attack | Inflicts moderate Poison-elemental damage and Poison. |
| Bio Gas | Enemy Attack | Inflicts moderate non-elemental damage and Poison. |
| Bug Needle | Enemy Attack | Inflicts minor non-elemental damage as well as Poison and Sadness. |
| C Cannon | Enemy Attack | Inflicts moderate non-elemental damage and Poison. |
| Combo | Enemy Attack | 2nd hit version. Inflicts moderate non-elemental damage and Poison. |
| Firing Line | Enemy Attack | Inflicts non-elemental damage as well as Poison and Sleep to party. |
| Left Revenge | Enemy Attack | Reduces MP by 15/32 and inflicts Poison and Slow-numb. |
| Left Thrust | Enemy Attack | Reduces MP by 25/32 and inflicts Poison and Slow-numb. |
| Liquid Poison | Enemy Attack | Inflicts Poison. |
| Northern Cross | Enemy Attack | Inflicts Poison. |
| Piazzo Shower | Enemy Attack | Inflicts minor Poison-elemental damage and Poison. |
| Poison Blow | Enemy Attack | Inflicts Poison. |
| Poison Breath | Enemy Attack | Inflicts moderate Poison-elemental damage and Poison. |
| Poison Fang | Enemy Attack | Inflicts minor non-elemental damage and Poison. If used by Unknown3, inflicts extreme physical damage and Poison. |
| Poison Storm | Enemy Attack | Inflicts Poison. |
| Refuse | Enemy Attack | Second version. Inflicts minor Poison-elemental damage and Poison. |
| Scorpion's Tail | Enemy Attack | Inflicts moderate non-elemental damage and Poison. |
| Shady Breath | Enemy Attack | Inflicts Poison to all enemies. |
| Smog | Enemy Attack | Inflicts minor non-elemental damage and Poison. |
| Smog Alert | Enemy Attack | Inflicts moderate non-elemental damage as well as Poison, Darkness, Silence, and Sadness. |
| Stigma | Enemy Attack | Inflicts major non-elemental damage as well as Poison and Slow to the party. |
| Toxic Barf | Enemy Attack | Inflicts Poison and Slow. |
| Toxic Powder | Enemy Attack | Inflicts moderate Poison-elemental damage and Poison to the party. |
| Star Pendant | Accessory | Prevents Poison. |
| Poison Ring | Accessory | Prevents Poison and makes user absorb Poison damage. |
| Fairy Ring | Accessory | Prevents Poison along with some other status effects. |
| Deadly Waste | Item | Casts Bio2 on all enemies. |
| M-Tentacles | Item | Casts Bio3 on all enemies. |
### Sadness
Sadness makes the affected character take 30% less damage from physical and magical attacks, but also halves the rate at which the Limit gauge fills. The effect can be positive or negative, depending on the party's situation. Sadness is the opposite of Fury. When a character is under Sadness, their Limit bar is displayed in blue rather than the normal pink.
This status is healed by Hyper, Remedy, Esuna, White Wind, and Aeris's Breath of the Earth Limit Break. It can be created by using a Tranquilizer, or through an enemy attack. Hypers and Tranquilizers can be bought in shops.
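The trade-off above can be sketched with two hypothetical helpers (not game code): Sadness cuts incoming damage by 30% while halving Limit-gauge growth, and Fury does the reverse for the gauge.

```python
def damage_taken_with_sadness(base_damage):
    # Sadness reduces physical and magical damage taken by 30%.
    return base_damage * 7 // 10

def limit_gauge_rate(base_rate, status=None):
    # Sadness halves Limit-gauge growth; Fury doubles it.
    if status == "sadness":
        return base_rate / 2
    if status == "fury":
        return base_rate * 2
    return base_rate
```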
| Game Element | Type | Effect |
|---|---|---|
| Absorb | Enemy Attack | Inflicts moderate non-elemental damage as well as Sadness. |
| Bad Mouth | Enemy Attack | Inflicts moderate non-elemental damage as well as Sadness. |
| Bug Needle | Enemy Attack | Inflicts minor non-elemental damage as well as Sadness and Poison. |
| Creepy Touch | Enemy Attack | Inflicts Sadness. Used only as a counterattack. |
| High/Low Suit | Enemy Attack | Inflicts Sadness. Always used twice in a row, and only if Gighee is alive. |
| Lick | Enemy Attack | Inflicts non-elemental damage and Sadness. |
| Pale Horse | Enemy Attack | Inflicts moderate non-elemental damage as well as Sadness, Frog, and Small. |
| Para Tail | Enemy Attack | Inflicts moderate physical damage and Sadness. |
| Shower | Enemy Attack | Inflicts minor non-elemental damage and Sadness. |
| Smog Alert | Enemy Attack | Inflicts moderate non-elemental damage as well as Sadness, Poison, Darkness, and Silence. |
| Teardrop | Enemy Attack | Inflicts moderate non-elemental damage and Sadness. |
| Triclops | Enemy Attack | Inflicts Sadness and Slow-numb. Only used when all Grangalan Jr. Jr.s are dead. |
| War Cry | Enemy Attack | Inflicts Sadness. |
| Peace Ring | Accessory | Prevents Fury, Sadness, and Berserk. |
| Hyper | Item | Cures Sadness, but inflicts Fury if used while normal. |
| Tranquilizer | Item | Cures Fury, but inflicts Sadness if used while normal. |
### Fury
Fury reduces the character's hit rate for both physical and magical attacks by 3/10, but also doubles the rate at which the Limit gauge fills; so it can be a positive or negative status effect, depending upon the situation.
There is an opposing status effect, called Sadness, which decreases physical damage taken, but also decreases the rate at which the Limit gauge fills. The effect is nullified by the item Tranquilizer, and can be created with the opposite item, Hyper. Both can be bought in shops.
Fury affects Hit% in the following formula:
$\text{New Hit\%} = \frac{7}{10} \times \text{Old Hit\%}$
Fury can also affect enemies, but inflicting it is difficult since Hypers cannot normally be given to enemies and the player has no attack that inflicts the status. If a character chooses to use a Hyper but is Confused before acting, they will use the Hyper on the enemy party instead. Enemies afflicted by Fury have their Hit% reduced, and Fury is the only way to achieve this: the Darkness status is bugged and hampers only the player party's hit chance, not the enemies'.
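The hit-rate formula above translates directly into code. A sketch with a hypothetical function name, using integer division in place of the game's exact rounding:

```python
def fury_hit_percent(old_hit_percent):
    # Fury: New Hit% = (7/10) * Old Hit%
    return old_hit_percent * 7 // 10
```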
| Game Element | Type | Effect |
|---|---|---|
| Extreme Bomber | Enemy Attack | Inflicts major non-elemental damage and Fury. |
| Flaming Peck | Enemy Attack | Inflicts minor non-elemental damage and Fury. |
| Guitar Slap | Enemy Attack | Inflicts moderate non-elemental damage and Fury. |
| High/Low Suit (alternate version) | Enemy Attack | Inflicts Fury. Used twice in a row, but due to a bug in Christopher's AI, is never used. |
| Rage Bomber | Enemy Attack | Inflicts moderate non-elemental damage and Fury. |
| Repeating Slap | Enemy Attack | Inflicts major non-elemental damage and Fury. |
| Slap | Enemy Attack | Inflicts non-elemental damage and Fury. |
| Spaz Voice | Enemy Attack | Inflicts Fury. |
| Peace Ring | Accessory | Prevents Fury, Sadness, and Berserk. |
| Hyper | Item | Cures Sadness, but inflicts Fury if used while in normal status. |
| Tranquilizer | Item | Cures Fury, but inflicts Sadness if used while in normal status. |
### Confusion
A confused player character loses control and randomly attacks their allies; they will still carry out the last command the player input, but where possible it is directed against the player's own party. Confused enemies continue to use their AI script, but with the definitions of allies and enemies reversed.
| Game Element | Type | Effect |
|---|---|---|
| Confu | Magic | Inflicts Confuse. |
| Bad Breath | Enemy Skill | Inflicts Confuse as well as other statuses to all enemies. |
| Abnormal Breath | Enemy Attack | Unknown 2's attack. Inflicts Confuse. |
| Bewildered | Enemy Attack | Inflicts Confuse. |
| Bite | Enemy Attack | Bandersnatch's attack. Inflicts non-elemental damage and Confuse. |
| Confu Missile | Enemy Attack | Inflicts moderate non-elemental damage and Confuse. |
| Confu Scales | Enemy Attack | Inflicts non-elemental damage and Confuse. |
| Fascination | Enemy Attack | Inflicts Confuse. |
| Flying Muddle | Enemy Attack | Inflicts non-elemental damage and Confuse. Is never used. |
| Flying Zip Confuse | Enemy Attack | Inflicts non-elemental damage and Confuse. Is never used. |
| Funny Breath | Enemy Attack | Inflicts Confuse on the party. |
| L2 Confu | Enemy Attack | Inflicts Confuse if the target's level is a multiple of 2. |
| Muddle Mallet | Enemy Attack | Inflicts non-elemental damage and Confuse. |
| Neo Turk Light | Enemy Attack | Inflicts Confuse. |
| Ruby Ray | Enemy Attack | Inflicts extreme non-elemental damage and Confuse. |
| Supernova | Enemy Attack | Reduces HP by 15/16 and inflicts Confuse, Silence, and Slow to the party. |
| Uppercut | Enemy Attack | Crazy Saw's attack. Inflicts moderate non-elemental damage and Confuse. |
| Zip Confu | Enemy Attack | Inflicts non-elemental damage and Confuse. Is never used. |
| Loco weed | Item | Inflicts Confuse on all enemies. |
| Peace Ring | Accessory | Protects against Confuse among other statuses (although it doesn't say so in the accessory description). |
| Ribbon | Accessory | Protects against Confuse as well as all other statuses. |
### Silence
Silence disables Magic, W-Magic, Summon and W-Summon commands.
| Game Element | Type | Effect |
|---|---|---|
| Silence | Magic | Inflicts Silence. |
| Mute Mask | Item | Inflicts Silence with 80% precision. |
| Bad Breath | Enemy Skill | Inflicts Silence as well as other statuses to all enemies. |
| Seal Evil | Limit Break | Inflicts Silence and Stop on all enemies. |
| Curses | Enemy Attack | Inflicts Silence. |
| Diamond Flash | Enemy Attack | Reduces HP by 7/8 and inflicts Silence to the party. |
| Magic Extinguish | Enemy Attack | Inflicts Silence. Used as a counter to magic attacks. |
| Smog Alert | Enemy Attack | Inflicts non-elemental damage as well as Silence and other statuses. |
| Sun | Enemy Attack | Inflicts Silence and Darkness. |
| Super Nova | Enemy Attack | Reduces HP by 15/16 and inflicts Silence, Confuse, and Slow to the party. |
| Ultrasound | Enemy Attack | Inflicts non-elemental damage and Silence. |
| Voice of Ages | Enemy Attack | Inflicts non-elemental damage and Silence. Used as a counter to magic attacks. |
### Haste
Haste doubles the speed at which the target's ATB gauge fills. As a downside, all timed effects, such as Barrier and Slow-numb, run out at double the pace. If a target is immune to Haste, it is also immune to Slow.
As well as the spell, Red XIII's Limit Break Lunatic High grants Haste. The Enemy Skill Big Guard also grants Haste to the party, with the added benefits of Barrier and MBarrier. The item Speed Drink casts Haste, and the Sprint Shoes accessory grants Auto-Haste. The effect can be removed by DeSpell, but Auto-Haste cannot be dispelled.
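The speed change and its effect on timed statuses can be sketched as a single multiplier (hypothetical helper, not game code):

```python
def atb_fill_multiplier(status=None):
    """Haste doubles the ATB fill rate (and the run-down of timed
    effects such as Barrier); Slow halves both; None means normal."""
    return {"haste": 2.0, "slow": 0.5}.get(status, 1.0)
```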
### Slow
Slow makes a target's ATB gauge fill at half the normal speed, but the pace of all timed effects, such as Barrier and Slow-numb, is halved as well. Linking Added Effect with Time in a character's armor makes the character immune to Slow (but, curiously, also immune to Haste). The effect can be removed with DeSpell, White Wind, and Angel Whisper.
| Game Element | Type | Effect |
|---|---|---|
| Slow | Magic | Inflicts Slow. |
| Abnormal Shell | Enemy Attack | Inflicts minor non-elemental damage, as well as Slow and Darkness. |
| Dance | Enemy Attack | Inflicts minor non-elemental damage and Slow to party. |
| Eyesight | Enemy Attack | One of four versions. Inflicts Slow and Darkness, but is only used as a counter to physical attacks. |
| Silk | Enemy Attack | Inflicts Slow. |
| Slow Dance | Enemy Attack | Inflicts minor non-elemental damage and Slow. |
| Stigma | Enemy Attack | Inflicts major non-elemental damage as well as Slow and Poison to the party. |
| Super Nova | Enemy Attack | Inflicts damage equal to 15/16 of character's HP as well as Slow, Confuse, and Silence to party. |
| Support Beam | Enemy Attack | Inflicts Slow. |
| Toxic Barf | Enemy Attack | Inflicts Poison and Slow. |
| Jem Ring | Accessory | Protects against Petrify and Slow. |
| Spider Web | Item | Inflicts Slow on all enemies. |
### Stop
Stop halts the time counter for the afflicted target, preventing their ATB gauge from filling, but also halting time based effects such as Barrier or Regen. Linking Added Effect Materia with Contain, Time or Choco/Mog makes the character immune to the effect. It can be cured by DeSpell, White Wind, Angel Whisper or Aeris's status healing Limit Breaks.
| Game Element | Type | Effect |
|---|---|---|
| Freeze | Magic | Inflicts major Ice-elemental damage and inflicts Stop with 68% precision. |
| Stop | Magic | Inflicts Stop with 60% precision. |
| Hourglass | Item | Inflicts Stop with 80% precision. |
| Seal Evil | Limit Break | Inflicts Stop and Silence to all enemies. |
| Stop Eye | Enemy Attack | Inflicts Stop. Is never used. |
| Stop Web | Enemy Attack | Inflicts Stop. |
| Thread | Enemy Attack | Inflicts Stop. Only used if target is in Slow. |
| World | Enemy Attack | Inflicts Stop. |
### Frog
In addition to the spell found on the Transform Materia, an enemy skill called Frog Song causes the status along with Sleep. The Frog status prevents all commands except Attack, Item (including W-Item), and the spell Frog if the player has it equipped. Frogs can't use Limit Breaks.
It also decreases the unit's attack power by 1/4th of their base damage and, unlike most other status effects in the game, does not disappear when a character is KO'd. Linking Added Effect with either Transform or Hades makes the character immune to Frog.
| Game Element | Type | Effect |
|---|---|---|
| Toad | Magic | Inflicts or cures Frog. |
| Bad Breath | Enemy Skill | Inflicts Frog as well as other statuses. |
| Frog Song | Enemy Skill | Inflicts and/or cures Frog and Sleep. |
| Frog Jab | Enemy Attack | Inflicts or cures Frog. |
| Pale Horse | Enemy Attack | Inflicts Frog, Small, and Sadness. |
| Petrified Frog | Enemy Attack | Inflicts Frog and Slow-numb. |
| Right Revenge | Enemy Attack | Inflicts Frog and Slow-numb, as well as reducing the target's HP by 15/32. |
| Right Thrust | Enemy Attack | Inflicts Frog and Slow-numb, as well as reducing the target's HP by 15/16. |
| White Cape | Accessory | Prevents Frog and Small. |
| Ribbon | Accessory | Prevents Frog and all other status effects. |
| Impaler | Item | Inflicts or cures Frog. |
| Maiden's Kiss | Item | Cures Frog. |
### Small
A unit afflicted with Small has their Attack Power reduced to 1, and therefore never deals more than 1 damage with physical attacks. Unlike most other statuses, Small is not removed if a character is KO'd. The enemy Hungry can eat a character in Small status, removing them from battle and flagging them as defeated.
| Game Element | Type | Effect |
|---|---|---|
| Mini | Magic | Inflicts or cures Small. |
| Bad Breath | Enemy Skill | Inflicts Small as well as other statuses to all enemies. |
| L4 Suicide | Enemy Skill | Reduces HP by 31/32 and inflicts Small to enemies whose levels are a multiple of 4. |
| Pale Horse | Enemy Attack | Inflicts major non-elemental damage as well as Frog, Small, and Sadness. |
| Right Revenge | Enemy Attack | Reduces HP by 15/32 and inflicts Small and Slow-numb. |
| Right Thrust | Enemy Attack | Reduces HP by 15/16 and inflicts Small and Slow-numb. |
| White Cape | Accessory | Prevents Frog and Small. |
| Shrivel | Item | Casts Small. |
| Cornucopia | Item | Heals Small. |
### Slow-numb
A character afflicted with Slow-numb is petrified after 60 seconds.
| Game Element | Type | Effect |
|---|---|---|
| Left Revenge | Enemy Attack | Reduces MP by 15/32 and inflicts Poison and Slow-numb. |
| Left Thrust | Enemy Attack | Reduces MP by 25/32 and inflicts Poison and Slow-numb. |
| Petrif-Eye | Enemy Attack | Inflicts Slow-numb. |
| Petrified Frog | Enemy Attack | Inflicts Frog and Slow-numb. |
| Petrify Smog | Enemy Attack | Inflicts Slow-numb. |
| Rock Finger | Enemy Attack | Inflicts moderate non-elemental damage and Slow-numb. |
| Stone Stare | Enemy Attack | Inflicts Slow-numb. |
| Triclops | Enemy Attack | Inflicts Sadness and Slow-numb. Only used if all Grangalan Jr. Jr.s are dead. |
### Petrify
The Petrify status flags the target as defeated and is thus similar to the Death status. Petrified targets cannot change in any way: they cannot gain or lose HP or MP, and no other status effect can be applied to them. The Contain Materia carries the effect; the player can either add it to physical attacks or protect against it by pairing the Materia with the Added Effect Materia.
| Game Element | Type | Effect |
|---|---|---|
| Break | Magic | Inflicts major Earth-elemental damage as well as Petrify with 32% accuracy. |
| Vagyrisk Claw | Item | Inflicts Petrify on one target with 68% accuracy. |
| White Wind | Enemy Skill | Restores HP equal to caster's current HP and cures all statuses (including Petrify). |
| Deadly Needles | Enemy Attack | Inflicts minor non-elemental damage and Petrify. |
| Petrif-Eye | Enemy Attack | Inflicts Petrify and moderate non-elemental damage. |
| Stone Stare | Enemy Attack | Inflicts Petrify. |
| Jem Ring | Accessory | Prevents Petrify and Slow. |
| Safety Bit | Accessory | Prevents Petrify and Instant Death. |
### Regen
A unit under Regen glows orange and gains HP continually until the status wears off. Unlike most games with a Regen status, which restore 1/32 of a character's max HP every 4 seconds, Regen in Final Fantasy VII causes HP to rise continuously until the effect wears off. This negates weaker enemy attacks, as the unit regenerates too fast for the damage to register. However, if a target does not have enough HP to survive an attack at the moment it is cast, HP gained from Regen during the cast animation will not save them: they are still KO'd when the animation finishes, even with HP remaining.
Regen runs during a spell's "charge up" animation, as well as throughout Limit Break animations. Because Regen keeps running even when the rest of the action is halted, it can be exploited by opening the game console's disc tray during battle.
The player can only gain Regen by casting the Regen spell.
The effect of Regen, like Poison, is not attached to any element. The Regen status is unrelated to the Restorative element and will still heal undead enemies.
### Barrier
Barrier halves physical damage taken. It can be gained from the Barrier spell on the Barrier Materia, the Wall spell, or the Protect Ring accessory. The Light Curtain item grants Barrier to all allies, and the Enemy Skill Big Guard casts Barrier along with MBarrier and Haste.
Barrier's duration is indicated by a depleting gauge in the player's battle menu. The effect runs faster under Haste, and is slowed down by Slow and halted by Stop. When an affected unit is attacked with a physical attack a transparent shield appears to block it, getting smaller as the effect runs out.
### MBarrier
The spells MBarrier, Big Guard, and Wall grant MBarrier status to a party member, while the Protect Ring grants MBarrier at the start of the battle. MBarrier halves magical damage taken and its duration is indicated by a depleting gauge in the player's battle menu. MBarrier depletes faster with Haste and slower with Slow and is halted entirely by Stop.
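Barrier and MBarrier apply the same halving to different damage types, which can be sketched as follows (hypothetical helper, not game code):

```python
def damage_after_barriers(damage, physical, barrier, mbarrier):
    """Barrier halves physical damage taken; MBarrier halves
    magical damage taken. Each applies only to its own type."""
    if physical and barrier:
        damage //= 2
    if not physical and mbarrier:
        damage //= 2
    return damage
```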
### Reflect
The accessory Reflect Ring, the spell Reflect, the item Mirror, and the enemy ability Materia-jammer all cast Reflect to either one unit or the whole party. Reflect bounces magic back to the caster for up to four times.
Reflect is different in Final Fantasy VII as opposed to other games in the series, as a spell bounced off Reflect will continuously bounce back and forth between the targets if they both have Reflect up, until a target's Reflect status runs out after four hits. With Auto-Reflect (Reflect Ring and the enemy Mirage) the spell is bounced only once. The spell is reflected either back to the caster, or to a random member of the opposing party if an ally reflects a spell off an ally. The effect can be removed by DeBarrier or DeSpell.
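The bouncing behavior can be sketched as a loop over the two sides' remaining Reflect charges. An illustrative model, not the game's implementation:

```python
def reflect_bounces(caster_reflect_hits, target_reflect_hits,
                    auto_reflect=False):
    """Count how many times a spell bounces before landing.
    Each unit's Reflect wears off after reflecting four spells;
    Auto-Reflect (e.g. the Reflect Ring) bounces a spell only once."""
    if auto_reflect:
        return 1
    bounces = 0
    sides = [target_reflect_hits, caster_reflect_hits]
    i = 0  # the target reflects first, then the caster, alternating
    while sides[i] > 0:
        sides[i] -= 1
        bounces += 1
        i = 1 - i
    return bounces
```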
Many spells cannot be reflected; in particular, summons and spells cast from items never reflect.
### Dual
Dual is a dummied status effect that does nothing. It does not visually change the player model and lasts until the end of battle. As with most other statuses, the only things that protect a player from it are Peerless, Petrify, and Resist.
Nothing left in the data suggests what the status was originally meant for.
### Shield
Shield is a positive status effect given by the spell of the same name. When under its effect, the target voids all normal attacks and absorbs all elemental damage, though they can still be damaged by items (not when used in W-Item) and non-elemental special attacks. Shield status can be removed by DeBarrier and DeSpell.
### Death-sentence
Death-sentence is applied by the Enemy Skill Death Sentence, which sets a timer of 60 seconds (30 if the target is under Haste). The Curse Ring accessory also applies Death-sentence to its wearer, and this will not reset if they are KO'd and then revived.
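The countdown interacts with Haste like other timed effects; a minimal sketch with a hypothetical helper:

```python
def death_sentence_countdown(haste=False):
    """Death-sentence counts down from 60 seconds; under Haste,
    timed effects run at double speed, so the countdown takes 30."""
    return 30 if haste else 60
```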
### Manipulate
The Manipulate status is induced through the Manip command. It cannot be applied to allies, and one character can control only one enemy at a time. Units under this status turn cyan and turn around to face the same way as the player party. The status lasts for the duration of the battle unless the unit is hit by a physical attack or by White Wind. Units in the Petrify, Resist, Sleep, Stop, or Paralyzed statuses cannot be manipulated; Petrify, Sleep, Stop, and Paralyzed also cancel out the Manipulate status.
Manipulate is the lowest status on the color-priority chart, meaning that if a manipulated enemy has another color-affecting status, it shows that status's color rather than cyan; if it has several, the highest-priority one shows on top. Manipulate has the lowest priority because it is already obvious to the player through visual cues other than color, and because it is more useful to see what other statuses the unit has contracted.
Some enemies need to be manipulated to learn the Enemy Skills Big Guard, White Wind, Death Force, Angel Whisper, and Dragon Force.
The SOLDIER:2nd enemy is susceptible to the status. If more than one is left in battle and the manipulated unit is ordered to attack another unit of the same kind, the second unit retaliates, removing the status but provoking the first unit to attack the second. This continues until one of them dies, or the player attacks them.
Manipulate can be used to gain full control over a battle situation. It is commonly used by players to damage their own characters so that the last two digits of a character's HP are 77; from there they can restore HP with a number of Potions to initiate the All Lucky 7s status.
### Berserk
Berserk increases the unit's physical attack damage by 1.5x at the cost of losing control: the unit can only physically attack a random enemy at every opportunity.
An amusing glitch occurs with some enemies that are not immune to Berserk but have no specified attack to use while Berserked. When this happens, the game falls back on having them use a spell that costs more MP than they will ever have, continually giving the message "[Enemy]'s skill power is used up". Pairing the Mystify Materia with Added Effect lets the player add the status to a character's physical attacks or defend against it, depending on whether the combination is set in the character's weapon or armor.
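The damage boost reduces to a simple multiplier (illustrative sketch, hypothetical function name):

```python
def berserk_physical_damage(base_damage):
    # Berserk multiplies physical attack damage dealt by 1.5.
    return base_damage * 3 // 2
```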
| Game Element | Type | Effect |
|---|---|---|
| Berserk | Magic | Inflicts Berserk 80% of the time when used on enemies, and 100% of the time when used on allies. |
| War Gong | Item | Inflicts Berserk 80% of the time when used on enemies, and 100% of the time when used on allies. |
| Howling Moon | Limit Break | Inflicts Berserk and Haste on user, and increases Attack by 60%. |
| Berserk Needle | Enemy Attack | Inflicts minor non-elemental damage and Berserk. |
| Crazy Claw | Enemy Attack | Inflicts minor non-elemental damage and Berserk. |
| Peace Ring | Accessory | Prevents Berserk, Fury, and Sadness. |
### Peerless
Peerless makes characters immune to all physical and magical attacks for a short time, and effectively works as a Resist status regarding status attributes, locking any previous statuses in and preventing the application of new ones. The status also gives immunity against MP remove attacks.
The target glows in a yellow hue. The effect can only be used from Aeris's Limit Breaks, Planet Protector or Great Gospel.
### Paralyzed
A paralyzed unit freezes on the spot and can't act. Paralysis lasts for a shorter time than Stop and, unlike with Stop, the character's Limit Break gauge doesn't fill when they are attacked while paralyzed.
| Game Element | Type | Effect |
|---|---|---|
| Hades | Summon | Inflicts Paralyzed as well as other statuses and non-elemental damage to all enemies. |
| Cross-Slash | Limit Break | Inflicts non-elemental damage and Paralyzed. |
| ? Needle | Enemy Attack | Inflicts Paralyzed. Used as a counter to physical attacks when HP is 25% or less. |
| Big Swing | Enemy Attack | Inflicts extreme physical damage and Paralyzed to party. |
| Bone | Enemy Attack | Inflicts major non-elemental damage and Paralyzed. |
| Death Claw | Enemy Attack | Inflicts minor non-elemental damage and Paralyzed. |
| Electro-Mag Rod | Enemy Attack | Inflicts Lightning-elemental damage as well as Paralyzed. |
| Halt Whip | Enemy Attack | Inflicts minor non-elemental damage and Paralyzed. |
| Havoc Wing | Enemy Attack | Inflicts extreme non-elemental damage and Paralyzed/Darkness. |
| Hell Spear | Enemy Attack | Inflicts minor non-elemental damage and Paralyzed. |
| Needle | Enemy Attack | Inflicts very minor non-elemental damage and Paralyzed. |
| Paralaser | Enemy Attack | Inflicts non-elemental damage as well as Paralyzed. |
| Paralyzer Needle | Enemy Attack | Inflicts minor non-elemental damage and Paralyzed. |
| Stare Down | Enemy Attack | Inflicts Paralyzed. |
| Whip Sting | Enemy Attack | Inflicts moderate physical damage and Paralyzed. |
| Dazers | Item | Inflicts Paralyzed. |
| Jem Ring | Accessory | Prevents Petrify, Slow-numb and Paralyzed. |
### Darkness
Darkness is mainly used by enemies; the player only has access to it through the Ink item. It is of little tactical use, as most enemies do not suffer from the status: Darkness halves the physical accuracy of weapon-based attacks, but due to poor programming it only affects the commands Attack, Morph, Deathblow, Mug, Slash-All, Flash, 2x Cut, and 4x Cut. Because enemies never use these commands, they are unaffected, making it pointless to inflict the status on opponents.
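The command-dependent accuracy penalty can be sketched as follows (hypothetical helper; the command list is the one given in the text above):

```python
DARKNESS_AFFECTED_COMMANDS = {
    "Attack", "Morph", "Deathblow", "Mug",
    "Slash-All", "Flash", "2x Cut", "4x Cut",
}

def hit_percent_under_darkness(command, hit_percent):
    """Darkness halves physical accuracy, but only for the listed
    player commands; anything else (including all enemy attacks)
    is unaffected."""
    if command in DARKNESS_AFFECTED_COMMANDS:
        return hit_percent // 2
    return hit_percent
```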
| Game Element | Type | Effect |
|---|---|---|
| Abnormal Shell | Enemy Attack | Inflicts non-elemental damage as well as Slow and Darkness. |
| Autumn Leaves | Enemy Attack | Inflicts moderate non-elemental damage as well as Darkness to party. |
| Combo | Enemy Attack | Inflicts moderate physical damage and Darkness. Third part of the whole attack. |
| Dance | Enemy Attack | Inflicts minor non-elemental damage as well as Slow and Darkness to party. |
| Dark Dragon Breath | Enemy Attack | Inflicts moderate non-elemental damage and Darkness to party. |
| Dark Eye | Enemy Attack | Inflicts Darkness. Only used if Grangalan Jr. is destroyed. |
| Dark Needle | Enemy Attack | Inflicts minor non-elemental damage and Darkness. |
| Erupt | Enemy Attack | Inflicts minor non-elemental damage and Darkness. |
| Evil Poison | Enemy Attack | Inflicts moderate Poison-elemental damage and Darkness. |
| Eyesight | Enemy Attack | Inflicts Darkness and Slow. Only used as a counter to physical attacks. |
| Great Gale | Enemy Attack | Inflicts moderate Wind-elemental damage and Darkness to party. |
| Havoc Wing | Enemy Attack | Inflicts extreme physical damage as well as Paralyzed and Darkness. |
| Isogin Smog | Enemy Attack | Inflicts moderate non-elemental damage and Darkness. |
| Sandgun | Enemy Attack | Inflicts Darkness. |
| Sandstorm | Enemy Attack | Inflicts moderate Earth-elemental damage and Darkness to party. |
| Seed Shot | Enemy Attack | Inflicts Darkness. |
| Smog | Enemy Attack | Inflicts minor non-elemental damage and Darkness. |
| Smog Alert | Enemy Attack | Inflicts moderate non-elemental damage as well as Poison, Darkness, Silence, and Sadness. |
| Smoke Bullet | Enemy Attack | Inflicts Sleep and Darkness. |
| Sun | Enemy Attack | Inflicts Silence and Darkness. |
| Swamp Shoot | Enemy Attack | Inflicts minor non-elemental damage and Darkness. |
| Silver Glasses | Accessory | Prevents Darkness. |
| Fairy Ring | Accessory | Prevents Poison and Darkness. |
| Ink | Item | Inflicts Darkness on an enemy. |
### Seizure
Seizure can only be inflicted by Bottomswell's Waterball attack, which places the Seizure and Imprisoned statuses on a character (despite the status screen saying it places Death and Imprisoned). The victim can be cured by destroying the Waterball, which can be done by casting magic on it.
### Death Force
Death Force can be cast using the Enemy Skill Materia, and although the Adamantaimai possess the skill, it never uses it in battle, meaning only the player can ever use it.
Death Force protects against all forms of Instant Death with the exception of Cait Sith's Death Joker. If Death-sentence is cast upon a character with Death Force up, although the timer will continue to count down, once it is finished, the character will not die.
Death Force is removable with DeSpell or White Wind.
### Resist
Resist is granted by the Resist spell from the Heal Materia and by the Vaccine item. It locks in the target's status effects: the target becomes immune to all new status effects, including Instant Death, and any statuses currently on them, positive or negative, are locked in place. It does not protect against statuses applied as handicaps on the Battle Square; there, Resist is more detrimental than helpful, since once applied the player cannot remove the negative handicap statuses.
### Lucky Girl
Cait Sith's Limit Break Lucky Girl allows all party members to deal critical hits every time they attack, whether it be a regular physical attack, a command attack such as Slash-All, or a Limit Break. The Lucky Girl is a possible combination in Cait Sith's Level 2 Limit Break, Slots, and is executed by lining up three heart symbols.
### Imprisoned
There are only three instances of this status being inflicted, all in boss battles. The player's party has no means of inflicting enemies with this status (aside from possibly hacking the game).
Imprisoned immobilizes the afflicted target and flags it as defeated (although they retain their current HP). Thus a Game Over is triggered if all party members become Imprisoned. When inflicted, an entrapment (which is technically an enemy) is spawned in place of the affected party member. To remove this state, one must attack said entrapment until it is defeated.
When Turks:Reno adds this effect, Pyramid is spawned as the entrapment. Characters imprisoned by the Pyramid are flagged as non-targets, meaning attacks and curative magic or items have no effect on them until they are freed.
When Bottomswell adds this effect, Waterpolo is spawned as the entrapment and constantly drains the HP of those affected. (See Seizure.)
When fighting Carry Armor, its Right Arm or Left Arm attachment may imprison one or two party members. When the boss performs its spinning arms attack, imprisoned characters take damage.
Game Element Type Effect
Arm Grab Enemy Attack Inflicts Imprisoned on one opponent.
Pyramid Enemy Attack Inflicts Imprisoned on one opponent, revives a Pyramid with full HP.
Waterball Enemy Attack Inflicts Imprisoned and Seizure on one opponent. Revives a Waterpolo.
### Back Row
Back Row halves the damage taken by party members but also halves the damage they deal. It affects all normal attack commands: Attack, Morph, Deathblow, Mug, Slash-All, Flash, 2x Cut, and 4x Cut. This damage reduction can be bypassed using the Long Range Materia, as well as several long-range weapons (which can be wielded by Barret, Red XIII, Yuffie and Vincent). While enemies have defined rows, they do not have a set back row like characters do.
### Defend
Defend halves all damage a character takes until their next turn begins, and is usable by any character via the Defend command.
### All Lucky 7s
All Lucky 7s is a status effect that appears mostly as an Easter egg; it triggers when a unit's HP falls to exactly 7,777. Once the status is achieved, an "All Lucky 7s!" battle message displays, and every move the unit performs either deals 7,777 damage or heals 7,777 HP. Player characters become uncontrollable and attack enemies repeatedly for a total of 489,951 damage; following this, their damage or healing is always 7,777. Enemies continue to follow their normal AI, but every attack deals 7,777 damage.
All Lucky 7s lasts for the duration of the battle; once the battle ends, the character's HP drops to 1.
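The quoted damage total follows directly from the fixed per-hit value: 489,951 / 7,777 = 63 scripted hits. A quick arithmetic check:

```python
# All Lucky 7s: each uncontrollable attack deals exactly 7,777 damage.
# The 489,951 total quoted above implies 63 scripted hits.
DAMAGE_PER_HIT = 7_777
SCRIPTED_HITS = 489_951 // DAMAGE_PER_HIT  # 63

total_damage = DAMAGE_PER_HIT * SCRIPTED_HITS
print(SCRIPTED_HITS, total_damage)  # 63 489951
```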
# Homework Help: LINEAR ALGEBRA: Find vectors that span the image of A
1. Oct 8, 2006
### VinnyCee
For each matrix A, find vectors that span the image of A. Give as few vectors as possible.
$$\mathbf{A_1} = \left[ \begin{array}{cc} 1 & 1 \\ 1 & 2 \\ 1 & 3 \\ 1 & 4 \end{array} \right]$$
$$\mathbf{A_2} = \left[ \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \\ \end{array} \right]$$
My work:
$$T(\overrightarrow{x})\,=\,A_1\,\overrightarrow{x}$$
$$A_1\,\overrightarrow{x}\,=\,\left[ \begin{array}{cc} 1 & 1 \\ 1 & 2 \\ 1 & 3 \\ 1 & 4 \end{array} \right]\,\left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right]$$
$$x_1\,\left[ \begin{array}{c} 1 \\ 1 \\ 1 \\ 1 \end{array} \right]\,+\,x_2\,\left[ \begin{array}{c} 1 \\ 2 \\ 3 \\ 4 \end{array} \right]$$
The image of $$A_1$$ is the space spanned by
$$v_1\,=\,\left[ \begin{array}{c} 1 \\ 1 \\ 1 \\ 1 \end{array} \right]\,\,and\,\,v_2\,=\,\left[ \begin{array}{c} 1 \\ 2 \\ 3 \\ 4 \end{array} \right]$$
For the second part:
$$A_2\,\overrightarrow{x}$$
$$\left[ \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \\ \end{array} \right]\,\left[ \begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \end{array} \right]$$
$$x_1\,\left[\begin{array}{c} 1 \\ 1 \\ \end{array}\right]\,+\,x_2\,\left[\begin{array}{c} 1 \\ 2 \\ \end{array}\right]\,+\,x_3\,\left[\begin{array}{c} 1 \\ 3 \\ \end{array}\right]\,+\,x_4\,\left[\begin{array}{c} 1 \\ 4 \\ \end{array}\right]$$
But now I don't know where to proceed to solve the second part. The B.O.B. says that the answer is
$$\left[\begin{array}{c} 1 \\ 1 \\ \end{array}\right],\,\left[\begin{array}{c} 1 \\ 2 \\ \end{array}\right]$$
How do I show such?
Last edited: Oct 8, 2006
2. Oct 9, 2006
### HallsofIvy
You are trying to find a basis for the column space (the image of A). Treat each column as a vector and reduce. Obviously you have 4 columns, so 4 vectors, but the image is a subspace of R2 and so can have dimension no larger than 2.
Actually, if the dimension is 2, then (1,0), (0, 1) will work. If 1, any one of the rows will work.
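The dimension argument above can be checked numerically — a minimal sketch in plain Python (no libraries): the four columns of A₂ live in R², the first two are linearly independent (nonzero determinant), and the remaining two are combinations of them, so (1,1) and (1,2) span the image.

```python
# Columns of A2 = [[1,1,1,1],[1,2,3,4]] as vectors in R^2.
cols = [(1, 1), (1, 2), (1, 3), (1, 4)]

def det2(u, v):
    """Determinant of the 2x2 matrix with columns u and v."""
    return u[0] * v[1] - u[1] * v[0]

# (1,1) and (1,2) are linearly independent, so they span R^2,
# and hence span the image of A2.
assert det2(cols[0], cols[1]) != 0

# Every other column is a combination of the first two:
# (1,3) = -1*(1,1) + 2*(1,2) and (1,4) = -2*(1,1) + 3*(1,2).
assert tuple(-1 * a + 2 * b for a, b in zip(cols[0], cols[1])) == cols[2]
assert tuple(-2 * a + 3 * b for a, b in zip(cols[0], cols[1])) == cols[3]
print("image of A2 is spanned by (1,1) and (1,2)")
```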
3. Oct 9, 2006
### VinnyCee
$$A_1$$ has four rows, but only two vectors. I am assuming that the number of columns should equal the number of vectors?
Where are you getting the (1, 0) and (0, 1 ) from?
By "dimension", do you mean the number of columns in a matrix, or the number of rows?
# Programs for the Sharp EL-5500 III
01-16-2017, 04:26 PM
Post: #1
Eddie W. Shore Senior Member Posts: 1,251 Joined: Dec 2013
Programs for the Sharp EL-5500 III
Programs:
Euclid Algorithm
Binomial Expansion
Days Between Dates
Complex Power (a + bi)^n
Error Function Approximation
Simpson's Rule
Dancing Star Demo
I really like the EL-5500 III. It is lightweight and has BASIC programming.
01-16-2017, 07:47 PM
Post: #2
Didier Lachieze Senior Member Posts: 1,386 Joined: Dec 2013
RE: Programs for the Sharp EL-5500 III
The EL-5500 III is the US version of the PC-1403.
You may be interested by the Sharp PC-1403 entry points for matrix operations to be able to use them in BASIC programs. They have been published by Hrast on the previous hpmuseum forum:
It would be nice to confirm that these entry points are the same on the EL-5500 III.
01-16-2017, 10:51 PM
Post: #3
Csaba Tizedes Senior Member Posts: 495 Joined: May 2014
RE: Programs for the Sharp EL-5500 III
nJoy!
Csaba
05-11-2018, 03:18 PM
Post: #4
Eddie W. Shore Senior Member Posts: 1,251 Joined: Dec 2013
RE: Programs for the Sharp EL-5500 III
Net Present Value
Synthetic Division
Vector Basics (cross product, dot product, norm, angle between vectors)
Atwood Machine
https://edspi31415.blogspot.com/2018/05/...-2018.html
05-13-2018, 05:20 AM (This post was last modified: 05-13-2018 05:21 AM by Dan.)
Post: #5
Dan Member Posts: 162 Joined: Jan 2017
RE: Programs for the Sharp EL-5500 III
Cool calculator and great programs!
I've implemented keystroke programming on a custom calculator project and am wondering how difficult it would be to implement a programming language like BASIC.
05-13-2018, 10:22 AM
Post: #6
Dieter Senior Member Posts: 2,397 Joined: Dec 2013
RE: Programs for the Sharp EL-5500 III
(05-11-2018 03:18 PM)Eddie W. Shore Wrote: Net Present Value
...
https://edspi31415.blogspot.com/2018/05/...-2018.html
Eddie, I do not understand how the NPV program is supposed to work. Please help me here, this is the code on your website:
Code:
2 PAUSE "NET PRESENT VALUE"
4 CLEAR // clears all the variables
6 INPUT "CF0:"; N, "RATE:"; I
8 J = 1
10 INPUT "FLOW:"; F, "FREQ:"; K
12 FOR L=1 TO K: N = N + F/(1 + I/100)^J: J = J+1
14 NEXT L
16 INPUT "MORE=1: "; L // enter 1 to enter more cash flows, anything else to end entry
18 PRINT USING "#############.##"; "NPV: "; N
20 END
As far as I can tell the program will sum up the discounted first K cash flows in line 12/14. Then the user is prompted with "MORE" so that second, third etc. cash flow and its frequency can be entered. For another CF the user is supposed to enter "1", or anything else to quit. But look at line 16/18: the program does not process the user input L at all. Instead it prints the NPV (for the first CF only) and quits.
I'd say there is a line missing here:
Code:
17 IF L=1 GOTO 10
Or have I overlooked something here?
Dieter
05-13-2018, 11:01 AM
Post: #7
Sylvain Cote Senior Member Posts: 1,717 Joined: Dec 2013
RE: Programs for the Sharp EL-5500 III
(05-13-2018 05:20 AM)Dan Wrote: I've implemented keystroke programming on a custom calculator project and am wondering how difficult it would be to implement a programming language like BASIC.
It always depends on the number of features you want, it can be really simple if you do a minimal implementation, have a look at uBASIC.
05-13-2018, 03:13 PM
Post: #8
toml_12953 Senior Member Posts: 1,795 Joined: Dec 2013
RE: Programs for the Sharp EL-5500 III
(01-16-2017 07:47 PM)Didier Lachieze Wrote: The EL-5500 III is the US version of the PC-1403.
You may be interested by the Sharp PC-1403 entry points for matrix operations to be able to use them in BASIC programs. They have been published by Hrast on the previous hpmuseum forum:
It would be nice to confirm that these entry points are the same on the EL-5500 III.
Do any of the other Sharp models have matrix ops?
Tom L
Cui bono?
05-13-2018, 03:52 PM
Post: #9
Didier Lachieze Senior Member Posts: 1,386 Joined: Dec 2013
RE: Programs for the Sharp EL-5500 III
The Sharp PC-1475 has also matrix operations.
05-14-2018, 03:03 AM
Post: #10
Dan Member Posts: 162 Joined: Jan 2017
RE: Programs for the Sharp EL-5500 III
(05-13-2018 11:01 AM)Sylvain Cote Wrote:
(05-13-2018 05:20 AM)Dan Wrote: I've implemented keystroke programming on a custom calculator project and am wondering how difficult it would be to implement a programming language like BASIC.
It always depends on the number of features you want, it can be really simple if you do a minimal implementation, have a look at uBASIC.
Thanks Sylvain.
# Introductory Statistics
Mathematics and Statistics
## Using the Normal Distribution
Author: OpenStaxCollege
The shaded area in the following graph indicates the area to the left of x. This area is represented by the probability P(X < x). Normal tables, computers, and calculators provide or calculate the probability P(X < x).
The area to the right is then P(X > x) = 1 – P(X < x). Remember, P(X < x) = Area to the left of the vertical line through x. P(X > x) = 1 – P(X < x) = Area to the right of the vertical line through x. P(X < x) is the same as P(X ≤ x) and P(X > x) is the same as P(X ≥ x) for continuous distributions.
# Calculations of Probabilities
Probabilities are calculated using technology. There are instructions given as necessary for the TI-83+ and TI-84 calculators.
If the area to the left is 0.0228, then the area to the right is 1 – 0.0228 = 0.9772.
The final exam scores in a statistics class were normally distributed with a mean of 63 and a standard deviation of five.
a. Find the probability that a randomly selected student scored more than 65 on the exam.
a. Let X = a score on the final exam. X ~ N(63, 5), where μ = 63 and σ = 5
Draw a graph.
Then, find P(x > 65).
P(x > 65) = 0.3446
The probability that any student selected at random scores more than 65 is 0.3446.
z = (65 – 63)/5 = 0.4
Area to the left is 0.6554.
P(x > 65) = P(z > 0.4) = 1 – 0.6554 = 0.3446
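The same tail probability can be reproduced without a TI calculator. A minimal sketch using only the Python standard library's error function (a statistics package's normal CDF would work identically):

```python
import math

def normal_cdf(x, mu, sigma):
    """P(X < x) for X ~ N(mu, sigma), via the error function."""
    z = (x - mu) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# P(x > 65) for exam scores X ~ N(63, 5): area to the right of x = 65
p = 1 - normal_cdf(65, 63, 5)
print(round(p, 4))  # 0.3446
```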
b. Find the probability that a randomly selected student scored less than 85.
b. Draw a graph.
Then find P(x < 85), and shade the graph.
Using a computer or calculator, find P(x < 85) = 1.
normalcdf(0,85,63,5) = 1 (rounds to one)
The probability that one student scores less than 85 is approximately one (or 100%).
c. Find the 90th percentile (that is, find the score k that has 90% of the scores below k and 10% of the scores above k).
c. Find the 90th percentile. For each problem or part of a problem, draw a new graph. Draw the x-axis. Shade the area that corresponds to the 90th percentile.
Let k = the 90th percentile. The variable k is located on the x-axis. P(x < k) is the area to the left of k. The 90th percentile k separates the exam scores into those that are the same or lower than k and those that are the same or higher. Ninety percent of the test scores are the same or lower than k, and ten percent are the same or higher. The variable k is often called a critical value.
k = 69.4
The 90th percentile is 69.4. This means that 90% of the test scores fall at or below 69.4 and 10% fall at or above. To get this answer on the calculator, follow this step: invNorm(0.90,63,5) = 69.4
d. Find the 70th percentile (that is, find the score k such that 70% of scores are below k and 30% of the scores are above k).
d. Find the 70th percentile.
Draw a new graph and label it appropriately. k = 65.6
The 70th percentile is 65.6. This means that 70% of the test scores fall at or below 65.6 and 30% fall at or above.
invNorm(0.70,63,5) = 65.6
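invNorm can be emulated by numerically inverting the normal CDF. A sketch using bisection on an erf-based CDF (the iteration count is an assumption, not the calculator's actual algorithm):

```python
import math

def normal_cdf(x, mu, sigma):
    """P(X < x) for X ~ N(mu, sigma)."""
    z = (x - mu) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def inv_norm(p, mu, sigma):
    """x with P(X < x) = p, found by bisection (like invNorm)."""
    lo, hi = mu - 10 * sigma, mu + 10 * sigma
    for _ in range(100):                    # bisection: halves the bracket each pass
        mid = (lo + hi) / 2
        if normal_cdf(mid, mu, sigma) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(inv_norm(0.70, 63, 5), 1))  # 65.6 (the 70th percentile)
print(round(inv_norm(0.90, 63, 5), 1))  # 69.4 (the 90th percentile)
```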
A personal computer is used for office work at home, research, communication, personal finances, education, entertainment, social networking, and a myriad of other things. Suppose that the average number of hours a household personal computer is used for entertainment is two hours per day. Assume the times for entertainment are normally distributed and the standard deviation for the times is half an hour.
a. Find the probability that a household personal computer is used for entertainment between 1.8 and 2.75 hours per day.
a. Let X = the amount of time (in hours) a household personal computer is used for entertainment. X ~ N(2, 0.5) where μ = 2 and σ = 0.5.
Find P(1.8 < x < 2.75).
The probability for which you are looking is the area between x = 1.8 and x = 2.75. P(1.8 < x < 2.75) = 0.5886
normalcdf(1.8,2.75,2,0.5) = 0.5886
The probability that a household personal computer is used between 1.8 and 2.75 hours per day for entertainment is 0.5886.
b. Find the maximum number of hours per day that the bottom quartile of households uses a personal computer for entertainment.
b. To find the maximum number of hours per day that the bottom quartile of households uses a personal computer for entertainment, find the 25th percentile, k, where P(x < k) = 0.25.
invNorm(0.25,2,0.5) = 1.66
The maximum number of hours per day that the bottom quartile of households uses a personal computer for entertainment is 1.66 hours.
There are approximately one billion smartphone users in the world today. In the United States the ages 13 to 55+ of smartphone users approximately follow a normal distribution with approximate mean and standard deviation of 36.9 years and 13.9 years, respectively.
a. Determine the probability that a random smartphone user in the age range 13 to 55+ is between 23 and 64.7 years old.
a. normalcdf(23,64.7,36.9,13.9) = 0.8186
b. Determine the probability that a randomly selected smartphone user in the age range 13 to 55+ is at most 50.8 years old.
b. normalcdf(–10^99,50.8,36.9,13.9) = 0.8413
c. Find the 80th percentile of this distribution, and interpret it in a complete sentence.
c.
• invNorm(0.80,36.9,13.9) = 48.6
• The 80th percentile is 48.6 years.
• 80% of the smartphone users in the age range 13 – 55+ are 48.6 years old or less.
There are approximately one billion smartphone users in the world today. In the United States the ages 13 to 55+ of smartphone users approximately follow a normal distribution with approximate mean and standard deviation of 36.9 years and 13.9 years respectively. Using this information, answer the following questions (round answers to one decimal place).
a. Calculate the interquartile range (IQR).
a.
• IQR = Q3 – Q1
• Calculate Q3 = 75th percentile and Q1 = 25th percentile.
• invNorm(0.75,36.9,13.9) = Q3 = 46.2754
• invNorm(0.25,36.9,13.9) = Q1 = 27.5246
• IQR = Q3 – Q1 = 18.7508
b. Forty percent of the ages that range from 13 to 55+ are at least what age?
b.
• Find k where P(x > k) = 0.40 ("At least" translates to "greater than or equal to.")
• 0.40 = the area to the right.
• Area to the left = 1 – 0.40 = 0.60.
• The area to the left of k = 0.60.
• invNorm(0.60,36.9,13.9) = 40.4215.
• k = 40.42.
• Forty percent of the ages that range from 13 to 55+ are at least 40.42 years.
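As a cross-check on the IQR computed in part a: because the normal distribution is symmetric, the IQR depends only on σ, namely IQR = 2·z₀.₇₅·σ ≈ 1.349σ. A quick check (z₀.₇₅ ≈ 0.6744898 is the standard 75th-percentile z-score):

```python
# For any normal distribution, IQR = Q3 - Q1 = 2 * z_75 * sigma,
# where z_75 ~= 0.6744898 is the 75th percentile of N(0, 1).
Z_75 = 0.6744898
sigma = 13.9  # smartphone-age standard deviation from the example

iqr = 2 * Z_75 * sigma
print(round(iqr, 4))  # 18.7508, matching the invNorm computation
```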
A citrus farmer who grows mandarin oranges finds that the diameters of mandarin oranges harvested on his farm follow a normal distribution with a mean diameter of 5.85 cm and a standard deviation of 0.24 cm.
a. Find the probability that a randomly selected mandarin orange from this farm has a diameter larger than 6.0 cm. Sketch the graph.
a. normalcdf(6,10^99,5.85,0.24) = 0.2660
b. The middle 20% of mandarin oranges from this farm have diameters between ______ and ______.
b.
• 1 – 0.20 = 0.80
• The tails of the graph of the normal distribution each have an area of 0.40.
• Find k1, the 40th percentile, and k2, the 60th percentile (0.40 + 0.20 = 0.60).
• k1 = invNorm(0.40,5.85,0.24) = 5.79 cm
• k2 = invNorm(0.60,5.85,0.24) = 5.91 cm
c. Find the 90th percentile for the diameters of mandarin oranges, and interpret it in a complete sentence.
c. 6.16: Ninety percent of the diameters of the mandarin oranges are at most 6.16 cm.
# References
“Naegele’s rule.” Wikipedia. Available online at http://en.wikipedia.org/wiki/Naegele's_rule (accessed May 14, 2013).
“403: NUMMI.” Chicago Public Media & Ira Glass, 2013. Available online at http://www.thisamericanlife.org/radio-archives/episode/403/nummi (accessed May 14, 2013).
“Scratch-Off Lottery Ticket Playing Tips.” WinAtTheLottery.com, 2013. Available online at http://www.winatthelottery.com/public/department40.cfm (accessed May 14, 2013).
“Smart Phone Users, By The Numbers.” Visual.ly, 2013. Available online at http://visual.ly/smart-phone-users-numbers (accessed May 14, 2013).
“Facebook Statistics.” Statistics Brain. Available online at http://www.statisticbrain.com/facebook-statistics/(accessed May 14, 2013).
# Chapter Review
The normal distribution, which is continuous, is the most important of all the probability distributions. Its graph is bell-shaped. This bell-shaped curve is used in almost all disciplines. Since it is a continuous distribution, the total area under the curve is one. The parameters of the normal are the mean µ and the standard deviation σ. A special normal distribution, called the standard normal distribution is the distribution of z-scores. Its mean is zero, and its standard deviation is one.
# Formula Review
Normal Distribution: X ~ N(µ, σ) where µ is the mean and σ is the standard deviation.
Standard Normal Distribution: Z ~ N(0, 1).
Calculator function for probability: normalcdf (lower x value of the area, upper x value of the area, mean, standard deviation)
Calculator function for the kth percentile: k = invNorm (area to the left of k, mean, standard deviation)
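Both calculator functions map onto any statistics library; a plain-Python sketch of the correspondence (erf for the CDF, bisection for the inverse), checked against the personal-computer example earlier in the chapter, normalcdf(1.8,2.75,2,0.5) = 0.5886 and invNorm(0.25,2,0.5) = 1.66:

```python
import math

def normalcdf(lower, upper, mu, sigma):
    """TI normalcdf: P(lower < X < upper) for X ~ N(mu, sigma)."""
    phi = lambda x: 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))
    return phi(upper) - phi(lower)

def invnorm(area, mu, sigma):
    """TI invNorm: x with P(X < x) = area, found by bisection."""
    lo, hi = mu - 10 * sigma, mu + 10 * sigma
    for _ in range(100):
        mid = (lo + hi) / 2
        # lower bound far in the left tail, so this is ~ P(X < mid)
        if normalcdf(mu - 10 * sigma, mid, mu, sigma) < area:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(normalcdf(1.8, 2.75, 2, 0.5), 4))  # 0.5886
print(round(invnorm(0.25, 2, 0.5), 2))         # 1.66
```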
How would you represent the area to the left of one in a probability statement?
P(x < 1)
What is the area to the right of one?
Is P(x < 1) equal to P(x ≤ 1)? Why?
Yes, because they are the same in a continuous distribution: P(x = 1) = 0
How would you represent the area to the left of three in a probability statement?
What is the area to the right of three?
1 – P(x < 3) or P(x > 3)
If the area to the left of x in a normal distribution is 0.123, what is the area to the right of x?
If the area to the right of x in a normal distribution is 0.543, what is the area to the left of x?
1 – 0.543 = 0.457
Use the following information to answer the next four exercises:
X ~ N(54, 8)
Find the probability that x > 56.
Find the probability that x < 30.
0.0013
Find the 80th percentile.
Find the 60th percentile.
56.03
X ~ N(6, 2)
Find the probability that x is between three and nine.
X ~ N(–3, 4)
Find the probability that x is between one and four.
0.1186
X ~ N(4, 5)
Find the maximum of x in the bottom quartile.
Use the following information to answer the next three exercise: The life of Sunshine CD players is normally distributed with a mean of 4.1 years and a standard deviation of 1.3 years. A CD player is guaranteed for three years. We are interested in the length of time a CD player lasts. Find the probability that a CD player will break down during the guarantee period.
1. Sketch the situation. Label and scale the axes. Shade the region corresponding to the probability.
2. P(0 < x < ____________) = ___________ (Use zero for the minimum value of x.)
1. Check student’s solution.
2. 3, 0.1979
Find the probability that a CD player will last between 2.8 and six years.
1. Sketch the situation. Label and scale the axes. Shade the region corresponding to the probability.
2. P(__________ < x < __________) = __________
Find the 70th percentile of the distribution for the time a CD player lasts.
1. Sketch the situation. Label and scale the axes. Shade the region corresponding to the lower 70%.
2. P(x < k) = __________ Therefore, k = _________
1. Check student’s solution.
2. 0.70, 4.78 years
# Homework
Use the following information to answer the next two exercises: The patient recovery time from a particular surgical procedure is normally distributed with a mean of 5.3 days and a standard deviation of 2.1 days.
What is the probability of spending more than two days in recovery?
1. 0.0580
2. 0.8447
3. 0.0553
4. 0.9420
The 90th percentile for recovery times is?
1. 8.89
2. 7.07
3. 7.99
4. 4.32
c
Use the following information to answer the next three exercises: The length of time it takes to find a parking space at 9 A.M. follows a normal distribution with a mean of five minutes and a standard deviation of two minutes.
Based upon the given information and numerically justified, would you be surprised if it took less than one minute to find a parking space?
1. Yes
2. No
3. Unable to determine
Find the probability that it takes at least eight minutes to find a parking space.
1. 0.0001
2. 0.9270
3. 0.1862
4. 0.0668
d
Seventy percent of the time, it takes more than how many minutes to find a parking space?
1. 1.24
2. 2.41
3. 3.95
4. 6.05
According to a study done by De Anza students, the height for Asian adult males is normally distributed with an average of 66 inches and a standard deviation of 2.5 inches. Suppose one Asian adult male is randomly chosen. Let X = height of the individual.
1. X ~ _____(_____,_____)
2. Find the probability that the person is between 65 and 69 inches. Include a sketch of the graph, and write a probability statement.
3. Would you expect to meet many Asian adult males over 72 inches? Explain why or why not, and justify your answer numerically.
4. The middle 40% of heights fall between what two values? Sketch the graph, and write the probability statement.
1. X ~ N(66, 2.5)
2. 0.5404
3. No, the probability that an Asian male is over 72 inches tall is 0.0082
IQ is normally distributed with a mean of 100 and a standard deviation of 15. Suppose one individual is randomly chosen. Let X = IQ of an individual.
1. X ~ _____(_____,_____)
2. Find the probability that the person has an IQ greater than 120. Include a sketch of the graph, and write a probability statement.
3. MENSA is an organization whose members have the top 2% of all IQs. Find the minimum IQ needed to qualify for the MENSA organization. Sketch the graph, and write the probability statement.
4. The middle 50% of IQs fall between what two values? Sketch the graph and write the probability statement.
The percent of fat calories that a person in America consumes each day is normally distributed with a mean of about 36 and a standard deviation of 10. Suppose that one individual is randomly chosen. Let X = percent of fat calories.
1. X ~ _____(_____,_____)
2. Find the probability that the percent of fat calories a person consumes is more than 40. Graph the situation. Shade in the area to be determined.
3. Find the maximum number for the lower quarter of percent of fat calories. Sketch the graph and write the probability statement.
1. X ~ N(36, 10)
2. The probability that a person consumes more than 40% of their calories as fat is 0.3446.
3. Approximately 25% of people consume less than 29.26% of their calories as fat.
Suppose that the distance of fly balls hit to the outfield (in baseball) is normally distributed with a mean of 250 feet and a standard deviation of 50 feet.
1. If X = distance in feet for a fly ball, then X ~ _____(_____,_____)
2. If one fly ball is randomly chosen from this distribution, what is the probability that this ball traveled fewer than 220 feet? Sketch the graph. Scale the horizontal axis X. Shade the region corresponding to the probability. Find the probability.
3. Find the 80th percentile of the distribution of fly balls. Sketch the graph, and write the probability statement.
In China, four-year-olds average three hours a day unsupervised. Most of the unsupervised children live in rural areas, considered safe. Suppose that the standard deviation is 1.5 hours and the amount of time spent alone is normally distributed. We randomly select one Chinese four-year-old living in a rural area. We are interested in the amount of time the child spends alone per day.
1. In words, define the random variable X.
2. X ~ _____(_____,_____)
3. Find the probability that the child spends less than one hour per day unsupervised. Sketch the graph, and write the probability statement.
4. What percent of the children spend over ten hours per day unsupervised?
5. Seventy percent of the children spend at least how long per day unsupervised?
1. X = number of hours that a Chinese four-year-old in a rural area is unsupervised during the day.
2. X ~ N(3, 1.5)
3. The probability that the child spends less than one hour a day unsupervised is 0.0918.
4. The probability that a child spends over ten hours a day unsupervised is less than 0.0001.
5. 2.21 hours
In the 1992 presidential election, Alaska’s 40 election districts averaged 1,956.8 votes per district for President Clinton. The standard deviation was 572.3. (There are only 40 election districts in Alaska.) The distribution of the votes per district for President Clinton was bell-shaped. Let X = number of votes for President Clinton for an election district.
1. State the approximate distribution of X.
2. Is 1,956.8 a population mean or a sample mean? How do you know?
3. Find the probability that a randomly selected district had fewer than 1,600 votes for President Clinton. Sketch the graph and write the probability statement.
4. Find the probability that a randomly selected district had between 1,800 and 2,000 votes for President Clinton.
5. Find the third quartile for votes for President Clinton.
Suppose that the duration of a particular type of criminal trial is known to be normally distributed with a mean of 21 days and a standard deviation of seven days.
1. In words, define the random variable X.
2. X ~ _____(_____,_____)
3. If one of the trials is randomly chosen, find the probability that it lasted at least 24 days. Sketch the graph and write the probability statement.
4. Sixty percent of all trials of this type are completed within how many days?
1. X = the number of days a particular type of criminal trial takes
2. X ~ N(21, 7)
3. The probability that a randomly selected trial will last more than 24 days is 0.3336.
4. 22.77
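As a quick sanity check on answers 3 and 4 (an editorial addition; the printed 0.3336 comes from rounding z to 0.43):

```python
from statistics import NormalDist

X = NormalDist(mu=21, sigma=7)           # trial length ~ N(21, 7)

p_at_least_24 = 1 - X.cdf(24)            # P(X >= 24) ≈ 0.334
days_60pct = X.inv_cdf(0.60)             # 60% of trials finish within this many days
print(round(p_at_least_24, 4), round(days_60pct, 2))
```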
Terri Vogel, an amateur motorcycle racer, averages 129.71 seconds per 2.5 mile lap (in a seven-lap race) with a standard deviation of 2.28 seconds. The distribution of her race times is normally distributed. We are interested in one of her randomly selected laps.
1. In words, define the random variable X.
2. X ~ _____(_____,_____)
3. Find the percent of her laps that are completed in less than 130 seconds.
4. The fastest 3% of her laps are under _____.
5. The middle 80% of her laps are from _______ seconds to _______ seconds.
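This exercise has no printed solution either; one hedged way to compute parts c through e, again with the standard library:

```python
from statistics import NormalDist

X = NormalDist(mu=129.71, sigma=2.28)    # lap time ~ N(129.71, 2.28)

pct_under_130 = X.cdf(130)               # part c: fraction of laps under 130 seconds
fastest_3pct = X.inv_cdf(0.03)           # part d: fastest 3% of laps are under this
middle_80 = (X.inv_cdf(0.10), X.inv_cdf(0.90))   # part e: middle 80% of lap times
print(round(pct_under_130, 4), round(fastest_3pct, 2),
      tuple(round(t, 2) for t in middle_80))
```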
Thuy Dau, Ngoc Bui, Sam Su, and Lan Voung conducted a survey as to how long customers at Lucky claimed to wait in the checkout line until their turn. Let X = time in line. [link] displays the ordered real data (in minutes):
0.5 4.25 5 6 7.25 1.75 4.25 5.25 6 7.25 2 4.25 5.25 6.25 7.25 2.25 4.25 5.5 6.25 7.75 2.25 4.5 5.5 6.5 8 2.5 4.75 5.5 6.5 8.25 2.75 4.75 5.75 6.5 9.5 3.25 4.75 5.75 6.75 9.5 3.75 5 6 6.75 9.75 3.75 5 6 6.75 10.75
1. Calculate the sample mean and the sample standard deviation.
2. Construct a histogram.
3. Draw a smooth curve through the midpoints of the tops of the bars.
4. In words, describe the shape of your histogram and smooth curve.
5. Let the sample mean approximate μ and the sample standard deviation approximate σ. The distribution of X can then be approximated by X ~ _____(_____,_____)
6. Use the distribution in part e to calculate the probability that a person will wait fewer than 6.1 minutes.
7. Determine the cumulative relative frequency for waiting less than 6.1 minutes.
8. Why aren’t the answers to part f and part g exactly the same?
9. Why are the answers to part f and part g as close as they are?
10. If only ten customers had been surveyed rather than 50, do you think the answers to part f and part g would have been closer together or farther apart? Explain your conclusion.
1. mean = 5.51, s = 2.15
2. Check student's solution.
3. Check student's solution.
4. Check student's solution.
5. X ~ N(5.51, 2.15)
6. 0.6029
7. The cumulative relative frequency for waiting less than 6.1 minutes is 0.64.
8. The answers to part f and part g are not exactly the same, because the normal distribution is only an approximation to the real one.
9. The answers to part f and part g are close, because a normal distribution is an excellent approximation when the sample size is greater than 30.
10. The approximation would have been less accurate, because the smaller sample size means that the data do not fit the normal curve as well.
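Parts a, f, and g can be reproduced directly from the listed data. In the sketch below (an editorial addition, standard library only), the printed 0.6029 for part f appears to correspond to P(0 < X < 6.1), i.e. a calculation truncated at zero minutes:

```python
from statistics import NormalDist, mean, stdev

data = [0.5, 4.25, 5, 6, 7.25, 1.75, 4.25, 5.25, 6, 7.25,
        2, 4.25, 5.25, 6.25, 7.25, 2.25, 4.25, 5.5, 6.25, 7.75,
        2.25, 4.5, 5.5, 6.5, 8, 2.5, 4.75, 5.5, 6.5, 8.25,
        2.75, 4.75, 5.75, 6.5, 9.5, 3.25, 4.75, 5.75, 6.75, 9.5,
        3.75, 5, 6, 6.75, 9.75, 3.75, 5, 6, 6.75, 10.75]

xbar, s = mean(data), stdev(data)        # part a: 5.51 and roughly 2.15

approx = NormalDist(5.51, 2.15)          # part e: X ~ N(5.51, 2.15)
p_model = approx.cdf(6.1) - approx.cdf(0)        # part f: P(0 < X < 6.1)
p_data = sum(x < 6.1 for x in data) / len(data)  # part g: waits under 6.1 minutes
print(round(xbar, 2), round(s, 2), round(p_model, 4), p_data)
```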
Suppose that Ricardo and Anita attend different colleges. Ricardo’s GPA is the same as the average GPA at his school. Anita’s GPA is 0.70 standard deviations above her school average. In complete sentences, explain why each of the following statements may be false.
1. Ricardo’s actual GPA is lower than Anita’s actual GPA.
2. Ricardo is not passing because his z-score is zero.
3. Anita is in the 70th percentile of students at her college.
[link] shows a sample of the maximum capacity (maximum number of spectators) of sports stadiums. The table does not include horse-racing or motor-racing stadiums.
40,000 40,000 45,050 45,500 46,249 48,134 49,133 50,071 50,096 50,466 50,832 51,100 51,500 51,900 52,000 52,132 52,200 52,530 52,692 53,864 54,000 55,000 55,000 55,000 55,000 55,000 55,000 55,082 57,000 58,008 59,680 60,000 60,000 60,492 60,580 62,380 62,872 64,035 65,000 65,050 65,647 66,000 66,161 67,428 68,349 68,976 69,372 70,107 70,585 71,594 72,000 72,922 73,379 74,500 75,025 76,212 78,000 80,000 80,000 82,300
1. Calculate the sample mean and the sample standard deviation for the maximum capacity of sports stadiums (the data).
2. Construct a histogram.
3. Draw a smooth curve through the midpoints of the tops of the bars of the histogram.
4. In words, describe the shape of your histogram and smooth curve.
5. Let the sample mean approximate μ and the sample standard deviation approximate σ. The distribution of X can then be approximated by X ~ _____(_____,_____).
6. Use the distribution in part e to calculate the probability that the maximum capacity of sports stadiums is less than 67,000 spectators.
7. Determine the cumulative relative frequency that the maximum capacity of sports stadiums is less than 67,000 spectators. Hint: Order the data and count the sports stadiums that have a maximum capacity less than 67,000. Divide by the total number of sports stadiums in the sample.
8. Why aren’t the answers to part f and part g exactly the same?
1. mean = 60,136; s = 10,468
2. Answers will vary.
3. Answers will vary.
4. Answers will vary.
5. X ~ N(60136, 10468)
6. 0.7440
7. The cumulative relative frequency is 43/60 = 0.717.
8. The answers for part f and part g are not the same, because the normal distribution is only an approximation.
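The 0.7440 and 0.717 above can be confirmed the same way (editorial addition):

```python
from statistics import NormalDist

X = NormalDist(mu=60136, sigma=10468)    # part e approximation

p_model = X.cdf(67000)                   # part f ≈ 0.7440
p_data = 43 / 60                         # part g: 43 of 60 stadiums below 67,000
print(round(p_model, 4), round(p_data, 3))
```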
An expert witness for a paternity lawsuit testifies that the length of a pregnancy is normally distributed with a mean of 280 days and a standard deviation of 13 days. An alleged father was out of the country from 240 to 306 days before the birth of the child, so the pregnancy would have been less than 240 days or more than 306 days long if he was the father. The birth was uncomplicated, and the child needed no medical intervention. What is the probability that he was NOT the father? What is the probability that he could be the father? Calculate the z-scores first, and then use those to calculate the probability.
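No answer is printed for this exercise; the following sketch (my own addition) carries out the suggested z-score calculation with the stated N(280, 13) model:

```python
from statistics import NormalDist

X = NormalDist(mu=280, sigma=13)         # pregnancy length ~ N(280, 13)

z_240 = (240 - 280) / 13                 # ≈ -3.08
z_306 = (306 - 280) / 13                 # = 2.0

# If he was the father, the pregnancy would have been < 240 or > 306 days,
# so a length between 240 and 306 days means he was NOT the father.
p_not_father = X.cdf(306) - X.cdf(240)   # P(240 < X < 306)
p_could_be = 1 - p_not_father            # P(X < 240) + P(X > 306)
print(round(z_240, 2), round(z_306, 2),
      round(p_not_father, 4), round(p_could_be, 4))
```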
A NUMMI assembly line, which has been operating since 1984, has built an average of 6,000 cars and trucks a week. Generally, 10% of the cars were defective coming off the assembly line. Suppose we draw a random sample of n = 100 cars. Let X represent the number of defective cars in the sample. What can we say about X in regard to the 68-95-99.7 empirical rule (that is, the intervals within one, two, and three standard deviations of the mean)? Assume a normal distribution for the defective cars in the sample.
• n = 100; p = 0.1; q = 0.9
• μ = np = (100)(0.10) = 10
• σ = $\sqrt{npq}$ = $\sqrt{(100)(0.1)(0.9)}$ = 3
1. z = ±1: x1 = μ + 1σ = 10 + 1(3) = 13 and x2 = μ – 1σ = 10 – 1(3) = 7. 68% of the defective cars will fall between seven and 13.
2. z = ±2: x1 = μ + 2σ = 10 + 2(3) = 16 and x2 = μ – 2σ = 10 – 2(3) = 4. 95% of the defective cars will fall between four and 16.
3. z = ±3: x1 = μ + 3σ = 10 + 3(3) = 19 and x2 = μ – 3σ = 10 – 3(3) = 1. 99.7% of the defective cars will fall between one and 19.
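The three intervals above can be generated mechanically; a minimal sketch (editorial addition):

```python
mu, sigma = 10, 3                        # defective cars: mean 10, standard deviation 3

# k standard deviations on either side of the mean, for k = 1, 2, 3
intervals = {k: (mu - k * sigma, mu + k * sigma) for k in (1, 2, 3)}
print(intervals)   # {1: (7, 13), 2: (4, 16), 3: (1, 19)}
```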
We flip a coin 100 times (n = 100) and note that it only comes up heads 20% (p = 0.20) of the time. The mean and standard deviation for the number of times the coin lands on heads is µ = 20 and σ = 4 (verify the mean and standard deviation). Solve the following:
1. There is about a 68% chance that the number of heads will be somewhere between ___ and ___.
2. There is about a ____chance that the number of heads will be somewhere between 12 and 28.
3. There is about a ____ chance that the number of heads will be somewhere between eight and 32.
A \$1 scratch-off lotto ticket will be a winner one out of five times. Out of a shipment of n = 190 lotto tickets, find the probability that there are
1. somewhere between 34 and 54 prizes.
2. somewhere between 54 and 64 prizes.
3. more than 64 prizes.
• n = 190; p = $\frac{1}{5}$ = 0.2; q = 0.8
• μ = np = (190)(0.2) = 38
• σ = $\sqrt{npq}$ = $\sqrt{(190)(0.2)(0.8)}$ = 5.5136
1. For this problem: P(34 < x < 54) = normalcdf(34,54,38,5.5136) = 0.7641
2. For this problem: P(54 < x < 64) = normalcdf(54,64,38,5.5136) = 0.0018
3. For this problem: P(x > 64) = normalcdf(64,10^99,38,5.5136) = 0.0000012 (approximately 0)
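The calculator calls translate directly into code; here `statistics.NormalDist` plays the role of `normalcdf`, with μ = np = 38 and σ = 5.5136 taken from the bullets above (an editorial addition):

```python
from statistics import NormalDist

X = NormalDist(mu=38, sigma=5.5136)      # mu = np and sigma = sqrt(npq) from above

p_34_54 = X.cdf(54) - X.cdf(34)          # ≈ 0.7641
p_54_64 = X.cdf(64) - X.cdf(54)          # ≈ 0.0018
p_over_64 = 1 - X.cdf(64)                # ≈ 0.0000012
print(round(p_34_54, 4), round(p_54_64, 4), p_over_64)
```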
Facebook provides a variety of statistics on its Web site that detail the growth and popularity of the site.
On average, 28 percent of 18 to 34 year olds check their Facebook profiles before getting out of bed in the morning. Suppose this percentage follows a normal distribution with a standard deviation of five percent.
1. Find the probability that the percent of 18 to 34-year-olds who check Facebook before getting out of bed in the morning is at least 30.
2. Find the 95th percentile, and express it in a sentence.
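One hedged way to compute both parts (an editorial addition; X is the percent who check Facebook before getting out of bed, modeled as N(28, 5)):

```python
from statistics import NormalDist

X = NormalDist(mu=28, sigma=5)

p_at_least_30 = 1 - X.cdf(30)            # part a: P(X >= 30) ≈ 0.3446
pct_95 = X.inv_cdf(0.95)                 # part b: 95th percentile ≈ 36.22
print(round(p_at_least_30, 4), round(pct_95, 2))
```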
https://www.physicsforums.com/threads/surface-area.159178/ | # Surface Area
• #1
Surface Area (help me to prove something:)
I was studying a bit about multiple integrals and found this theorem:
If we have a function z=f(x,y) which is defined over the region R, the area S of the surface over the region is
$$S=\iint_R\sqrt{1+\left(\frac{\partial f}{\partial x}\right)^2+\left(\frac{\partial f}{\partial y}\right)^2}\,dA$$
I wanted to prove this, because it doesn't seem trivial to me, and I had to use nasty gradients and other nontrivial things. I wonder if there is an easier proof out there?
You know, if we want to compute the arc length of a curve given by a function y=f(x), the integral looks similar
$$L=\int_R\sqrt{1+\left(\frac{df}{dx}\right)^2}\,dx$$
but in this case it is obvious...
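Both formulas can be sanity-checked numerically. The sketch below is my own addition, not part of the thread: it applies a plain midpoint rule to f(x,y) = (2/3)(x^(3/2) + y^(3/2)) on the unit square, whose surface integrand is sqrt(1 + x + y) with exact area (4/15)(3^(5/2) - 2*2^(5/2) + 1) ≈ 1.40660, and to the curve y = x^2 on [0,1], whose exact arc length is sqrt(5)/2 + asinh(2)/4 ≈ 1.47894.

```python
import math

def surface_area(fx, fy, a, b, c, d, n=200):
    """Midpoint-rule approximation of the double integral of
    sqrt(1 + fx^2 + fy^2) over the rectangle [a,b] x [c,d]."""
    hx, hy = (b - a) / n, (d - c) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * hx
        for j in range(n):
            y = c + (j + 0.5) * hy
            total += math.sqrt(1 + fx(x, y) ** 2 + fy(x, y) ** 2)
    return total * hx * hy

def arc_length(df, a, b, n=2000):
    """Midpoint-rule approximation of the integral of sqrt(1 + f'(x)^2) over [a,b]."""
    h = (b - a) / n
    return sum(math.sqrt(1 + df(a + (i + 0.5) * h) ** 2) for i in range(n)) * h

# z = (2/3)(x^(3/2) + y^(3/2))  =>  dz/dx = sqrt(x), dz/dy = sqrt(y)
S = surface_area(lambda x, y: math.sqrt(x), lambda x, y: math.sqrt(y), 0, 1, 0, 1)
# y = x^2  =>  dy/dx = 2x
L = arc_length(lambda x: 2 * x, 0, 1)
print(round(S, 5), round(L, 5))
```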
• #2 arildno
In order to derive the surface element expression, think in terms of tangent vectors and the area of the parallelogram they span:
Given a surface z=f(x,y), we may set up the two tangent vectors:
$$\vec{t}_{x}=\vec{i}+\frac{\partial{f}}{\partial{x}}\vec{k}$$
$$\vec{t}_{y}=\vec{j}+\frac{\partial{f}}{\partial{y}}\vec{k}$$
Two vectors parallel to these, and with infinitesimal lengths, are therefore:
$$d\vec{t}_{x}=(\vec{i}+\frac{\partial{f}}{\partial{x}}\vec{k})dx$$
$$d\vec{t}_{y}=(\vec{j}+\frac{\partial{f}}{\partial{y}}\vec{k})dy$$
These two infinitesimal vectors can be regarded as lying ON the surface in their entirety!
The area of the parallelogram they span is therefore given by the norm of their cross product:
$$dS=||d\vec{t}_{x}\times{d}\vec{t}_{y}||=||(-\frac{\partial{f}}{\partial{x}}\vec{i}-\frac{\partial{f}}{\partial{y}}\vec{j}+\vec{k})||dxdy=\sqrt{1+(\frac{\partial{f}}{\partial{x}})^{2}+(\frac{\partial{f}}{\partial{y}})^{2}}dxdy$$
which is the sought expression.
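As a small numerical illustration of this derivation (my own addition, with f(x,y) = sin(x)*y + x^2 chosen arbitrarily): the cross product comes out to (-f_x, -f_y, 1), whose squared norm is 1 + f_x^2 + f_y^2 at every point.

```python
import math

def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# f(x, y) = sin(x)*y + x^2, so f_x = y*cos(x) + 2x and f_y = sin(x)
fx = lambda x, y: y * math.cos(x) + 2 * x
fy = lambda x, y: math.sin(x)

ok = True
for x, y in [(0.3, -1.2), (1.0, 0.5), (-0.7, 2.0)]:
    tx = (1.0, 0.0, fx(x, y))            # tangent vector in the x-direction
    ty = (0.0, 1.0, fy(x, y))            # tangent vector in the y-direction
    n = cross(tx, ty)                    # equals (-f_x, -f_y, 1)
    norm_sq = sum(c * c for c in n)
    ok = ok and math.isclose(norm_sq, 1 + fx(x, y) ** 2 + fy(x, y) ** 2)
print(ok)
```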
• #3 DeadWolfe
Well, the question becomes what you mean by surface area. For instance, if you try to define it in a way like the way arclength is defined (limit of polygonal arcs), then there are deep problems, as even the nicest surfaces have ill-defined surface area.
• #4
Oh, thanx arildno!!
that was something I wanted to see :) definitely much nicer than what I have done.
I was also thinking about tangent vectors
$$d\vec{t}_{x}=\left(\vec{i}+\frac{\partial{f}}{\partial{ x}}\vec{k}\right)dx$$
$$d\vec{t}_{y}=\left(\vec{j}+\frac{\partial{f}}{\partial{ y}}\vec{k}\right)dy$$
but just in a way that I realised that
$$dS\neq\Vert d\vec{t}_x\Vert\cdot\Vert d\vec{t}_y\Vert$$
I was close, I should just have thought about cross product :)
DeadWolfe: Well, I don't know any math definition of surface area (and guess that it is far too complicated for a poor high school guy like me - when I imagine all that topology stuff like manifolds... :) But nevertheless, I can imagine some surface, define its area based on intuition, and see that the mentioned formulas for surface area are right for nice surfaces.
• #5 arildno
Remember that what I presented is really how the surface element "ought" to be, if the surface is sufficiently "nice".
A minimum condition for this being true is that the tangent plane is defined at every point of the surface.
I don't remember whether this condition is also sufficient or merely necessary, perhaps somebody else might answer that?
• #6 DeadWolfe
Fair enough, I didn't know exactly what you were looking for.
BTW, you should be proud of being this far ahead in high school.
• #7
oh, thanx, I don't know if I'm really proud, but I'm happy that I have learned some basics of calculus and abstract algebra so far, and now I can understand more interesting parts of physics, which I like most. For instance, partial derivatives and variations are nothing difficult to understand at all, and once you are familiar with them you can read and grasp such interesting things as Lagrangian and Hamiltonian mechanics. Or another example: vector spaces for quantum mechanics, or curl and electromagnetism.
In Slovakia (and Czechia), we have nice university students who treat us well with the correspondence seminars they run for us - they give us really difficult problems to solve and also write texts explaining parts of physics and mathematics so that they can give us even more difficult problems :)
• #8 arildno
Eastern Europe has an enviably good tradition on the maths&sciences education.
I only wish we had something like that here in Norway. Unfortunately, we don't.
• #9
You should see some of my peers who stick their nose into things like groups or QFT. That's not normal anyway :)
• #10 arildno
Norwegian 16-year olds struggle with fractions and linear equations with a single variable.
That is the level of maths in Norway.
• #11
I had been talking about Slovakia and Czechia, which might have looked like I was talking in general, BUT not at all. Yes, we have rather good math-oriented high school classes and some competitions here (but this occurs everywhere, I think), but still it is only like a drop in the ocean. My math/physics teacher is absolutely terrible, and people in my class hate math/physics and are not able to understand it. (And my math/physics teacher is not an exception at all.)
But yes, we fortunately have opportunities.
Actually, I'm not very proud of my country (in most cases), and your words "enviably good tradition on the maths&sciences education" sounded very curious to me :)
BTW, we don't have any Nobel Prize laureates :) (Norway has 9 :)
There are doubts about our education system and our universities....
• #12 arildno
A Norwegian high school student that knows about partial differentiation and finds it easy simply doesn't exist.
Norway is about 3-4 years behind other European countries in math competence, slightly above the Albanian level.
• #13 disregardthat
well that sucks, since I'm Norwegian... Why is the level so low?
3-4 years? I seriously doubt that, arildno. When I first learned that an x^2 in an equation has 2 answers, I doubt Russia was doing integrals...
• #14 arildno
You'd better believe it. Norway is the very worst country in Europe in math competence, apart from Albania.
As for the reason, well..Øystein Djupedal is only a minor symptom of that disease.
For those not into Norwegian politics, Ø.D is the current minister of education. His latest good idea to get more girls interested in physics, was to make dolls part of the teaching aids. He meant it. Seriously.
• #15
His latest good idea to get more girls interested in physics, was to make dolls part of the teaching aids. He meant it. Seriously.
Sounds like people at your schools are having a hell of a good time "learning"...
But you know, Norway has good social security, a fairly well stabilized social system for the future, compared to other European countries (never mind the States, where the term itself isn't even known). So maybe you people have plenty of time to work thru these things...
• #16 disregardthat
Hm, well.. We have an exchange student from America on our school, and he is one year ahead of us. So I doubt all countries are that far ahead.
Anyway, I do not find the tasks we get at school challenging enough. I understand why other countries lie ahead of us... But it is really demotivating! How are we supposed to survive in other countries if we want to study further?
If Ø.D said that, he is a joke; he knows nothing of pedagogy in school...
But seriously arildno, how far ahead would you honestly estimate, for example, Britain is compared to Norway? I am now thinking of high school (videregående).
• #17 arildno
That is true. The US is a couple of places above Norway in the latest international surveys. That amounts to about a year, as I've heard. I was talking about Europe.
What you really should do, is to enhance self-study.
The most important thing to do in that respect is to do LOTS of exercises, more than those few ordinarily present in a Norwegian math book on your level (the physics books are much better, in 2FY for example).
Since you can read English well, I advise you to visit, say, Akademika (or another university book-shop) and try to get hold of some pre-calculus/calculus books, depending on your level.
• #18 disregardthat
Well yeah, I am going to an English math class now (although it's not harder than the Norwegian one), and I am going to buy next year's book and try some of the stuff in there.
How about the other countries in Europe? Would you estimate they are more than 1 year ahead? Are they confident with integrals in the first year of high school (videregående)?
• #19 arildno
I would think that first and foremost, the PRE-calculus competence is by far higher in other countries than in Norway.
Thus, other countries' pupils take the more advanced issues way more easily than Norwegian students, the majority of whom do not find fractions, algebra, linear equations with one variable, functions, graph drawing and coordinate systems to be TRIVIAL issues when they start at "videregående".
Since these issues ought to feel trivial by the time one starts calculus (they are essential base skills), Norwegian students lag behind and bang their heads on these old issues as well as on the new issues in calculus.
• #20 disregardthat
That's sad. Why is it like this, do you know of any reason? And do you think the level of learning flattens out in late high school?
• #21 arildno
As long as you work assiduously, you should have no problems getting through high school. But you should prepare for an extremely steep learning curve if you are planning, say, to go to NTNU or UiO afterwards for further studies.
• #22 disregardthat
Well, those who want to study math further study more than what is planned; they study deeper into the things they are going through, they study the advanced versions of it, and they will be prepared.
• #23 arildno
But seriously arildno, how far ahead would you honestly estimate, for example, Britain is compared to Norway? I am now thinking of high school (videregående).
That depends on which part of the British educational system you are looking at. If you look at the "public" schools (expensive private institutions like Eton), then they are among the very best in Europe.
However, the rest of the school system (financed by the state/municipalities) is not very much better than the Norwegian schools, from what I've heard.
• #24 disregardthat
Well, is that only in Britain? Do you still think that the state-financed schools in other countries in Europe are very much better than Norway's?
• #25 arildno
Yes. The Finnish school is extremely good.
One of the things in the Finnish model is the requirement that teachers at ALL age levels have majored in some subject (majored: hovedfag/master's degree).
In Norway, however, nannies without formal education are allowed to teach children in the age group 6-9. And those WITH a teacher's education have not majored in anything.
• #26 disregardthat
That's a major flaw in Norway's education. Does this have a large effect ultimately, do you think? By that I mean the number of mathematics graduates from universities in Norway.
• #27 arildno
Most likely, unfortunately.
• #28 HallsofIvy
When I was in Macedonia, our guide kept talking about how wonderful the entire Balkans region was "except Albania"! I pointed out that in the United States, we say "except West Virginia"!
(Ducking head while fire comes in from West Virginians!)
• #29 disregardthat
At my school there is an exchange student from Ukraine, and he told me that their level of maths is way beyond what we learn. I get good grades now because he remembers the stuff he learned 3 years ago(!)
That is wild. I suppose that most of the stuff is not that hard, it's just that we are not used to it. I guess the reason we think the stuff we are learning now is hard is because it is new, and not because it really is very difficult! The other reason is that all East Europeans are much more clever than Scandinavians :tongue:
I think this is POOR for a resourceful country like Norway.
http://mathoverflow.net/questions/3194/what-are-the-benefits-of-using-algebraic-spaces-over-schemes/3202 | What are the Benefits of Using Algebraic Spaces over Schemes?
I have heard that algebraic spaces have better formal properties than schemes. What are these benefits? Also, is there a natural way to go straight from affine schemes to algebraic spaces bypassing the locally ringed space construction?
---
The better formal property is that algebraic spaces are closed under taking quotients of etale equivalence relations (in practice typically coming from group actions), while schemes are not.
One can then define them directly from affines, as done in http://www.math.univ-toulouse.fr/~toen/m2.html , Cours N° 2 (this source also gives a nice impression of the difference/similarity of schemes and alg. spaces and shows how they are exactly designed to support more quotients).
---
Here is one intuitive way to think about it:
A scheme is something which is Zariski-locally affine, whereas an algebraic space is something which is etale-locally affine.
One way to make this precise: a scheme is the coequalizer of a Zariski-open equivalence relation, whereas an algebraic space is the coequalizer of an etale equivalence relation.
Another way is to say that, as a functor on rings, a scheme has a Zariski-open covering by affine functors, whereas an algebraic space has an etale covering by affine functors (thus bypassing reference to locally ringed spaces, regarding your second question).
Why do we care? A priori, if you want to work in the etale topology anyway, why not fix the definition of scheme to say "etale-locally affine" instead of "Zariski-locally affine". This is just one motivation for studying algebraic spaces, which you can read more about in Champs algébriques.
(Edit: For "a fortiori" reasons to study algebraic spaces, I'll just say read the other answers :)
---
One of them was answered in response to question 1558 on when quotients of schemes by free group actions exist. When the group is finite, they exist as algebraic spaces. But, there are examples where they do not exist as schemes. So, being closed under quotients by free finite group actions is certainly nice.
I know the second part of your question is explained in the first couple of sections of Champs algébriques. They define a space as a covariant functor from algebras to sets that satisfies descent. Then, an algebraic space is such a functor that has an etale cover by the functors associated to some affine schemes. I don't really remember the details here.
---
Are questions numbered? That's a fine feature, but where do the numbers show? – Georges Elencwajg Oct 29 '09 at 9:27
The number 1558 appears in the URL of the question. But note that questions aren't necessarily numbered consecutively (but increasingly). These numbers probably are some internal id. – Armin Straub Oct 29 '09 at 15:03
Actually, every post (question or answer) is numbered, and the numbers are consecutive. For example, this answer is number 3201 (you can get this by clicking the "link" link at the bottom of the answer and looking in the URL). If you try the URL mathoverflow.net/questions/3201, you'll be brought directly to this answer. – Anton Geraschenko Oct 29 '09 at 18:29
In addition to being closed under taking quotients by etale equivalence relations, algebraic spaces are also closed under taking quotients by arbitrary finite group actions (need not be free, so the induced relation need not be etale). I only realized this recently, but I think the following argument is correct.
Suppose G is a finite group acting on an algebraic space X. Then the stack quotient [X/G] is a Deligne-Mumford stack (it has an etale cover by X). By the Keel-Mori theorem, DM stacks have coarse spaces. Let [X/G]→Y be the coarse space of [X/G]. Then Y is an algebraic space and the map [X/G]→Y is universal for maps from [X/G] to algebraic spaces. This means that Y is a categorical quotient of X by G. In fact, I think Y might actually be a geometric quotient (or at least a good quotient) of X by G, but I haven't unraveled all the definitions yet.
---
You need to assume that [X/G] has finite inertia for the Keel-Mori theorem to apply. This happens exactly when every stabilizer X^g is a closed subscheme. An important case is when X is separated. – David Rydh Nov 3 '09 at 4:13 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9104580879211426, "perplexity": 453.72471521081593}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119653672.23/warc/CC-MAIN-20141024030053-00133-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://mathematica.stackexchange.com/questions/15687/get-the-name-of-a-symbol-passed-to-a-function | # Get the name of a symbol passed to a function
I'm trying to get the name of a symbol passed to a function with this:
f[x_] := {SymbolName[x], x}
SetAttributes[f, HoldFirst]
x = 5;
f[x]
But x is being evaluated anyway:
SymbolName::sym: Argument 5 at position 1 is expected to be a symbol. >>
{SymbolName[5], 5}
What am I missing here?
• I'm fairly certain this is a duplicate. Please help me locate it, or correct my assertion. – Mr.Wizard Dec 4 '12 at 13:13
• @Mr.Wizard If it's a dupe (quite possible), then I haven't seen the original, since I don't recall it. – Leonid Shifrin Dec 4 '12 at 13:16
• @Leonid perhaps I was thinking of this question but that is the converse of the present one, or perhaps it was on Stack Overflow. – Mr.Wizard Dec 4 '12 at 13:24
• @Mr.Wizard That one is related, but not a dupe. But the issue is very common, I would not be surprised if it was asked in some slightly different context before. – Leonid Shifrin Dec 4 '12 at 13:30
You are missing Unevaluated:
SetAttributes[f, HoldFirst]
f[x_] := {SymbolName[Unevaluated@x], x}
because SymbolName does not hold its arguments, so you have to prevent evaluation also there.
Generally, if you are passing some argument via a chain of function calls, and want to keep it unevaluated, you have to prevent its evaluation at each stage (function call). If the chain is long, it may be easier (and more robust) to wrap the argument in Hold for the passing purposes, and unwrap it in the function that actually needs it.
Note by the way that it is better to set attributes before you give definitions to a function, to avoid some surprises, unless you know precisely what you do and why.
• "... it is better to set attributes before you give definitions to a function, to avoid some surprises." I argue that it's better to set the Attribute at the appropriate time, being mindful of the ramifications of the order. This has powerful uses as you know. I know what you are trying to warn new users against but for a long time that "rule" kept me from understanding and using Attributes to their full potential. Maybe this is one of those "you must know the rules before you decide to break them" cases. – Mr.Wizard Dec 4 '12 at 13:22
• @Mr.Wizard I stand by what I advised. Those who know these advanced uses know what they do. Most people would find it highly confusing when their functions would not work according to their definitions, only to discover (with pain, hours later), that there was some evaluation happening at definition-time which ruined their definitions. I am speaking from personal experience here, but I know that lots of other people got into this trap at some point, and more than once. – Leonid Shifrin Dec 4 '12 at 13:27
• You already had my vote but I appreciate the addition. Quite reasonably your answers are generally viewed as authoritative (even if you disagree) and it's good, IMHO, to at least allow for situations where an Attribute is not set first. – Mr.Wizard Dec 4 '12 at 13:35
• @Mr.Wizard Thanks, I appreciate that. I used to give more detailed answers which would also explain the reasons behind some of the rules. A little more busy now :) – Leonid Shifrin Dec 4 '12 at 13:37
• @LeonidShifrin I was alluding to R - quite lamely as it seems. Thx for the offer though! – Yves Klett Dec 4 '12 at 13:51
http://www.dummies.com/how-to/content/how-to-change-the-font-in-word-2013.html | The most basic attribute of text in Word 2013 is its typeface, or font. The font sets up the way your text looks — its overall text style. Although deciding on a proper font may be agonizing (and, indeed, many graphic artists are paid well to choose just the right font), the task of selecting a font in Word is quite easy. It generally goes like this:
1. On the Home tab, in the Font group, click the down arrow to display the Font Face list.
A menu of font options appears.
The top part of the menu shows fonts associated with the document theme. The next section contains fonts you've chosen recently, which is handy for reusing fonts. The rest of the list, which can be quite long, shows all fonts in Windows that are available to Word.
2. Scroll to the font you want.
The fonts in the All Fonts part of the list are displayed in alphabetical order as well as in context (as they appear when printed).
3. Click to select a font.
You can also use the Font menu to preview the look of fonts. Scroll through the list to see which fonts are available and how they may look. As you move the mouse over a font, any selected text in your document is visually updated to show how that text would look in that font. The text isn’t changed until you select the new font.
• When no font is displayed in the Font group (the listing is blank), it means that more than one font is being used in the selected block of text.
• You can quickly scroll to a specific part of the menu by typing the first letter of the font you need, such as T for Times New Roman.
• Graphic designers prefer to use two fonts in a document — one for the text and one for headings and titles. Word is configured this way as well. The font you see with Body after its name is the current text, or body, font. The font marked as Heading is used for headings. These two fonts are part of the document theme.
• Fonts are the responsibility of Windows, not Word. Thousands of fonts are available for Windows, and they work in all Windows applications.
https://mathematica.stackexchange.com/questions/102134/how-to-make-this-code-involving-hypergeometric-functions-to-run-faster | # How to make this code involving Hypergeometric functions to run faster?
This question follows up on this question. I would like to thank Dr. Hintze and I_Mariusz for the comments and help. I am pretty new to Mathematica (I just learned it 4 days ago), so I would like to ask if there is a way to accelerate the following code:
Clear["Global`*"]
eq = (2*S0/(y*sigma^2))^
nu*(Gamma[nu + (2*mu)/sigma^2]/Gamma[2*nu + (2*mu)/sigma^2])*
Hypergeometric1F1[nu, 2*nu + (2*mu)/sigma^2, -2*S0/(y*sigma^2)];
int = Integrate[eq, {y, K, Infinity}, GenerateConditions -> False]
int2 = int /. nu -> (-vu/2 + Sqrt[vu^2 + 8*alpha/sigma^2]/2) /.
vu -> (2*mu/sigma^2 - 1)
mu = 15/100;
sigma = 5/100;
S0 = 100;
K = 95;
T = 1;
F[alpha_] = int2/alpha;
A = 18.4;
n = 40;
m = 50;
S = ConstantArray[0, m + 1];
B = 1;
For[k = 0, k <= m, k++,
S[[k + 1]] = Exp[A/2]/(2*B)*Re[F[A/(2*B)]];
For[j = 1, j <= n + k, j++,
S[[k + 1]] =
S[[k + 1]] + Exp[A/2]/B*(-1)^j*Re[F[(A + 2*j*Pi*I)/(2*B)]];
] ]
f = 0
For[k = 0, k <= m, k++,
f = f + Binomial[m, k]*S[[k + 1]]*2^(-m);
]
f
Currently, it takes me hours to run this program. Thank you so much for your time. I truly appreciate it.
• I believe that if you change F[alpha_] = int2/alpha to F[alpha_] := int2/alpha this will run quite quickly. Using = in a function assignment doesn't actually assign it as a pattern (see the documentation on Set (=) versus SetDelayed (:=)). – nben Dec 15 '15 at 16:30
• Thank you ! please let me try . – D. Nguyen Dec 15 '15 at 16:31
• If you make that a delayed definition you need to also make int2 an explicit function of alpha. The other thing I'd suggest is to convert your For loop to a Table[ Sum[] ] structure. – george2079 Dec 15 '15 at 16:49
• george2079 : Thanks ! , let me understand how Table is used. I've have never used it before. – D. Nguyen Dec 15 '15 at 16:51
• The other thing you should do here is note you are frequently recomputing F for the same argument. Define F so it remembers: F[alpha_] := F[alpha] = int2[alpha]/alpha; ( or restructure your loop so you aren't repeating calculations , that inner For loop just adds one new term for each k.) – george2079 Dec 15 '15 at 16:57
(* with int2 redefined as : int2[alpha_] = .. *)
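george2079's memoized definition `F[alpha_] := F[alpha] = int2[alpha]/alpha` caches each value the first time it is computed, so repeated calls with the same argument are free. For readers more familiar with other languages, the same idea looks like this in Python (an illustrative sketch only; the dummy computation stands in for the expensive symbolic evaluation):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(alpha):
    # stand-in for the expensive int2[alpha] / alpha evaluation
    return sum(k * alpha for k in range(1000)) / alpha

first = f(2.0)   # computed on the first call
second = f(2.0)  # served from the cache on repeat calls
```

This matters here because the inner `For` loop evaluates `F` at many of the same arguments across iterations of `k`.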
https://eprints.soton.ac.uk/view/divisions/5213723a-4221-4705-af4b-deec7da59e04/1995.html | The University of Southampton
University of Southampton Institutional Repository
Items where Division is "Faculties (pre 2018 reorg) > Faculty of Physical Sciences and Engineering (pre 2018 reorg) > Physics & Astronomy (pre 2018 reorg)" / "Current Faculties > Faculty of Engineering and Physical Sciences > School of Physics and Astronomy > Physics & Astronomy (pre 2018 reorg)" / "School of Physics and Astronomy > Physics & Astronomy (pre 2018 reorg)" and Year is 1995
Number of items: 6.
$W$ + 2 jets production at Tevatron: VECBOS and CompHEP comparison - A. Belyaev, E. Boos, L. Dudko and A. Pukhov
Type: Other | 1995 | Item not available on this server.
W+2jets production at Tevatron -- VECBOS and CompHEP comparison - A. Belyaev, E. Boos, L. Dudko and A. Pukhov
Type: Article | 1995 | Item not available on this server.
Determination of magnetostrictive stresses in magnetic rare-earth superlattices by a cantilever method - M. Ciria, J.I. Arnaudas, A. del Moral, G.J. Tomka, C. de la Fuente, P.A.J. de Groot, M.R. Wells and R.C.C. Ward
Type: Article | 1995 | Item availability restricted.
Type: Thesis | 1995
Electroweak top quark production at the Fermilab Tevatron - Ann Heinson, A. S. Belyaev and E. E. Boos
Type: Conference or Workshop Item | 1995 | Item not available on this server.
Type: Thesis | 1995 | Item not available on this server.
This list was generated on Tue Jun 18 01:45:17 2019 BST.
Contact ePrints Soton: eprints@soton.ac.uk
ePrints Soton supports OAI 2.0 with a base URL of https://eprints.soton.ac.uk/cgi/oai2
This repository has been built using EPrints software, developed at the University of Southampton, but available to everyone to use.
https://www.acmicpc.net/problem/15216 | 시간 제한 메모리 제한 제출 정답 맞은 사람 정답 비율
2 초 512 MB 0 0 0 0.000%
## Problem
The construction worker previously known as Lars has many bricks of height 1 and different lengths, and he is now trying to build a wall of width w and height h. Since the construction worker previously known as Lars knows that the subset sum problem is NP-hard, he does not try to optimize the placement but he just lays the bricks in the order they are in his pile and hopes for the best. First he places the bricks in the first layer, left to right; after the first layer is complete he moves to the second layer and completes it, and so on. He only lays bricks horizontally, without rotating them. If at some point he cannot place a brick and has to leave a layer incomplete, then he gets annoyed and leaves. It does not matter if he has bricks left over after he finishes.
Yesterday the construction worker previously known as Lars got really annoyed when he realized that he could not complete the wall only at the last layer, so he tore it down and asked you for help. Can you tell whether the construction worker previously known as Lars will complete the wall with the new pile of bricks he has today?
## Input
The first line contains three integers h, w, n (1 ≤ h ≤ 100, 1 ≤ w ≤ 100, 1 ≤ n ≤ 10 000), the height of the wall, the width of the wall, and the number of bricks respectively. The second line contains n integers xi (1 ≤ xi ≤ 10), the length of each brick.
## Output
Output YES if the construction worker previously known as Lars will complete the wall, and NO otherwise.
## Sample Input 1
2 10 7
5 5 5 5 5 5 5
## Sample Output 1
YES
## Sample Input 2
2 10 7
5 5 5 3 5 2 2
## Sample Output 2
NO
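Since the construction worker places bricks greedily in the given order, the problem reduces to a direct simulation: track how much of the current layer is filled and fail the moment a brick would stick out. One possible approach (an illustrative sketch, not an official judge solution):

```python
def can_build(h, w, bricks):
    """Simulate laying bricks layer by layer, left to right."""
    layers_done, filled = 0, 0
    for x in bricks:
        if layers_done == h:      # wall already complete; leftover bricks are fine
            break
        if filled + x < w:        # brick fits with room to spare
            filled += x
        elif filled + x == w:     # brick completes the layer exactly
            filled = 0
            layers_done += 1
        else:                     # brick would stick out: he leaves annoyed
            return "NO"
    return "YES" if layers_done == h else "NO"

print(can_build(2, 10, [5, 5, 5, 5, 5, 5, 5]))   # sample 1: YES
print(can_build(2, 10, [5, 5, 5, 3, 5, 2, 2]))   # sample 2: NO
```

With n ≤ 10 000 this single pass is far within the 2-second limit.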
https://arxiv.org/abs/1801.03121 | astro-ph.GA
(what is this?)
# Title: Merging massive black holes: the right place and the right time
Abstract: The LIGO/Virgo detections of gravitational waves from merging black holes of $\simeq$ 30 solar masses suggest progenitor stars of low metallicity (Z/Z$_{\odot} \lesssim 0.3$). In this talk I will provide constraints on where the progenitors of GW150914 and GW170104 may have formed, based on advanced models of galaxy formation and evolution combined with binary population synthesis models. First I will combine estimates of galaxy properties (star-forming gas metallicity, star formation rate and merger rate) across cosmic time to predict the low-redshift BBH merger rate as a function of present-day host galaxy mass, formation redshift of the progenitor system and different progenitor metallicities. I will show that the signal is dominated by binaries formed at the peak of star formation in massive galaxies and by binaries formed recently in dwarf galaxies. Then, I will present what very high resolution hydrodynamic simulations of different galaxy types can teach us about their black hole populations.
Comments: Proceedings of IAU Symposium 338: "Gravitational Waves Astrophysics: Early results from GW searches and EM counterparts"
Subjects: Astrophysics of Galaxies (astro-ph.GA)
Cite as: arXiv:1801.03121 [astro-ph.GA] (or arXiv:1801.03121v1 [astro-ph.GA] for this version)
## Submission history
From: Astrid Lamberts [view email]
[v1] Tue, 9 Jan 2018 19:57:00 GMT (823kb,D)
http://tex.stackexchange.com/questions/51084/aligning-trees-built-with-tikzpicture | # Aligning trees built with tikzpicture
I want to align some trees I am creating. I want the root node of all the trees to be on the same line. Currently for each tree I create, the root node of each subsequent tree is on a line after the bottom of the previous tree. How can I do this?
-
In a very simple manner, you can specify the position of the root in the picture. Look at this MWE:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{trees}
\begin{document}
\begin{tikzpicture}[edge from parent fork down,
every node/.style={fill=red!30,rounded corners},
edge from parent/.style={red,thick,draw}]
\node at (0,0) {root}
child {node {left}}
child {node {right}
child {node {child}}
child {node {child}}
};
\node at (4,0) {root}
child {node {left}}
child {node {right}
child {node {child}}
child {node {child}}
};
\node at (8,0) {root}
child {node {left}}
child {node {right}
child {node {child}}
child {node {child}}
};
\end{tikzpicture}
\end{document}
This will be graphically:
-
that's exactly what I want to do. – Harold Rosenberger Apr 7 '12 at 14:02
[edge from parent fork down, every node/.style={fill=red!30,rounded corners}, edge from parent/.style={red,thick,draw}] – Harold Rosenberger Apr 7 '12 at 14:04
Now I do have nice straight lines as shown. Also, how can I get it to look more like a typical binary tree? Also, in the top two levels of my trees, the nodes are black. – Harold Rosenberger Apr 7 '12 at 14:16
Happy that it works also to you. To have the picture more similar to a binary tree you just need to remove the general options in [ ] after the opening of the tikzpicture. You can also refer to the pgfmanual page 114 (version October 25, 2010). – Claudio Fiandrino Apr 7 '12 at 14:22
@HaroldRosenberger: Please clean up by removing the comments after this discussion. Comments are only for adding some valuable remark to an answer. Feel free to edit your question for adding information, or to post your own answer, or if it's derived from Claudio's solution, make an edit suggestion regarding his answer to extend it. But please delete those many comments, otherwise a mod may do it and you might keep some code though. – Stefan Kottwitz Apr 8 '12 at 9:12
http://www.answers.com/Q/How_would_you_Calculate_the_H3O_ion_concentration_in_a_solution_with_pH3_and_a_solution_with_pH8 | # How would you Calculate the H3O ion concentration in a solution with pH3 and a solution with pH8?
In a solution, hydrogen ions normally bond with molecules of water, forming H3O+ (hydronium) ions. Thus, the concentration of the hydronium ions will be the same as the concentration of hydrogen ions, which is related to the pH of a solution according to the following equation:
pH = -log[H+] = -log[H3O+]
This equation can be solved for the concentration of hydronium ions:
[H3O+] = 10^(-pH)
Thus, for a solution with a pH of 3, the concentration of hydronium ions will be 10^(-3) = 0.001 moles/liter, and for a solution with a pH of 8, the concentration of hydronium ions will be 10^(-8) = 0.00000001 moles/liter.
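The relationship can be verified numerically. A minimal Python sketch (the function name is mine, not from the original answer):

```python
def hydronium_concentration(pH):
    """Return [H3O+] in mol/L, using [H3O+] = 10**(-pH)."""
    return 10.0 ** (-pH)

print(hydronium_concentration(3))   # 0.001 mol/L
print(hydronium_concentration(8))   # 1e-08 mol/L
```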
# What is the concentration of silver ions in a solution of silver iodate?
It depends on how much you've put into solution. But all concentrations are equal in moles: AgIO3 : Ag+ : IO3- = 1:1:1
# What is the pH of an aqueous solution with the hydronium ion concentration [H3O+] = 2 × 10^(-14) M?
To find the pH use this: -log(2 × 10^(-14) M) = 13.7 (you could call it 14)
# What are the relative numbers of H3O+ and OH- ions in acidic, alkaline, and neutral solutions?
Neutral: equal numbers of H3O+ and OH-. Alkaline: more OH- than H3O+. Acidic: more H3O+ than OH-.
# How do you calculate the concentration of the NaOH solution?
Data given: mass = 60 g, formula mass = 40, volume = 500 mL.
m = (mass × 1000) / (formula mass × volume) = (60 × 1000) / (40 × 500) = 3
# A solution has a pOH of 1.3 What is the hydronium ion concentration of this solution?
[H3O+] = 10^(-pH) = 10^(-(14.0 - pOH)) = 10^(-(14 - 1.3)) = 10^(-12.7) = 2.0 × 10^(-13) mol/L
# What would its new pH be if the concentration of H3O plus ions in the solution were increased by 100 times and why?
Going from right to left, each number on the pH scale represents a 10-fold increase in the concentration of H3O+ ions. For example, water has a pH of 7. Urine has 10 times more…
# Which substance increases the H+ ion concentration in a solution?
Acids are able to donate, split off, ionise into proton(s) and an anion. Example: Acetic acid --> proton and acetate CH3COOH --> H+ + CH3COO-
# How much more solution is pH10 than pH8?
At pH = 10 there is 1 × 10^(-4) mol/L OH-. At pH = 8 there is 1 × 10^(-6) mol/L OH-. So at pH 10 there is 100 times more OH- in this alkaline solution (and 100 times less H+).
https://brilliant.org/problems/reality/ | Reality.......
Algebra Level 2
If $a+b+c=1$ and $a,b,c>0$, then what is the least value of $\frac{1}{a}+\frac{1}{b}+\frac{1}{c}$?
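A standard solution (not part of the original problem statement) uses the Cauchy–Schwarz (equivalently, AM–HM) inequality:

```latex
(a+b+c)\left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\right)
  \ge (1+1+1)^2 = 9
\quad\Longrightarrow\quad
\frac{1}{a}+\frac{1}{b}+\frac{1}{c} \ge 9,
```

with equality when $a=b=c=\tfrac{1}{3}$, so the least value is $9$.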
http://chim.lu/ech0303.php | # Chemical equilibrium
## Exercise 16
Give the definition of $K_s$ for the saturated solution resulting from the equilibrium (don't write (aq)):

$Ag_3PO_4(s) \rightleftharpoons 3Ag^+(aq) + PO_4^{3-}(aq)$

$Ca_3(PO_4)_2(s) \rightleftharpoons 3Ca^{2+}(aq) + 2PO_4^{3-}(aq)$

$CaF_2(s) \rightleftharpoons Ca^{2+}(aq) + 2F^-(aq)$

$Al_2S_3(s) \rightleftharpoons 2Al^{3+}(aq) + 3S^{2-}(aq)$
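For reference, the expected answers follow directly from the stoichiometry of each dissolution (these expressions are not in the original exercise text; the solid is omitted from each product):

```latex
K_s\bigl(Ag_3PO_4\bigr)     = [Ag^{+}]^{3}\,[PO_4^{3-}] \\
K_s\bigl(Ca_3(PO_4)_2\bigr) = [Ca^{2+}]^{3}\,[PO_4^{3-}]^{2} \\
K_s\bigl(CaF_2\bigr)        = [Ca^{2+}]\,[F^{-}]^{2} \\
K_s\bigl(Al_2S_3\bigr)      = [Al^{3+}]^{2}\,[S^{2-}]^{3}
```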
http://conversion.org/frequency/radians-per-second/revolutions-per-hour | # radians per second to revolutions per hour conversion
Conversion number between radians per second [rad/s] and revolutions per hour [rph] is 572.95779513082. This means, that radians per second is bigger unit than revolutions per hour.
### Calculation process of conversion value
• 1 radians per second = (1.59154943091895*10^-01) / ((1/3600)) = 572.95779513082 revolutions per hour
• 1 revolutions per hour = ((1/3600)) / (1.59154943091895*10^-01) = 0.0017453292519943 radians per second
• ? radians per second × (1.59154943091895*10^-01 ("Hz"/"radians per second")) / ((1/3600) ("Hz"/"revolutions per hour")) = ? revolutions per hour
### High precision conversion
If the conversions between radians per second and hertz, and between hertz and revolutions per hour, were defined exactly, a high-precision conversion from radians per second to revolutions per hour would be possible.
Since the definition contains rounded numbers, a high-precision calculation adds little, but you can enable it if you want. Keep in mind that the converted number will still be inaccurate due to this rounding error.
### radians per second to revolutions per hour conversion chart
| radians per second | revolutions per hour |
| --- | --- |
| 0 | 0 |
| 10 | 5729.5779513082 |
| 20 | 11459.155902616 |
| 30 | 17188.733853925 |
| 40 | 22918.311805233 |
| 50 | 28647.889756541 |
| 60 | 34377.467707849 |
| 70 | 40107.045659158 |
| 80 | 45836.623610466 |
| 90 | 51566.201561774 |
| 100 | 57295.779513082 |
| 110 | 63025.35746439 |
## Details about radians per second and revolutions per hour units:
### radians per second
Definition of radians per second unit: = Hz/(2 × π). Radian per second is actually a unit of angular velocity, but radian can be treated as a dimensionless number. According to this, angular velocity can be converted directly into rotations per second. One hertz is equal to (2×π×rad) / s. [rad / s] is derived SI unit of measure.
### revolutions per hour
Definition of revolutions per hour unit: = Hz/3600. The number of cycles or periodically repeating events occurring in one hour. Symbol rph (revolutions per hour), the value is 1/3600 Hertz.
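A minimal Python sketch of both conversions (function names are mine; the factor 3600/(2π) ≈ 572.9578 follows from the two definitions above):

```python
import math

RPH_PER_RAD_S = 3600.0 / (2.0 * math.pi)  # ≈ 572.95779513082

def rad_per_s_to_rph(x):
    """Convert radians per second to revolutions per hour."""
    return x * RPH_PER_RAD_S

def rph_to_rad_per_s(x):
    """Convert revolutions per hour to radians per second."""
    return x / RPH_PER_RAD_S
```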
http://support.sas.com/rnd/app/ets/examples/garchex/

# SAS/ETS Web Examples
## Estimating GARCH Models
# Overview
The generalized autoregressive conditional heteroscedasticity (GARCH) model of Bollerslev (1986) is an important type of time series model for heteroscedastic data. It explicitly models a time-varying conditional variance as a linear function of past squared residuals and of its past values. The GARCH process has been widely used to model economic and financial time-series data.
Many extensions of the simple GARCH model have been developed in the literature. This example illustrates the estimation of several variants of GARCH models using the AUTOREG and MODEL procedures, including GARCH models with normally, t-, Cauchy-, and GED-distributed residuals, as well as the GARCH-M, EGARCH, QGARCH, GJR-GARCH, and TGARCH models.
Please note that parameter restrictions implied in the GARCH type models are not discussed in this example. If estimated parameters do not satisfy the desired restrictions in a specific model, the BOUNDS or RESTRICT statement can be used to explicitly impose the restrictions in PROC MODEL.
For other examples of GARCH type models, see "Heteroscedastic Modeling of the Federal Funds Rate."
# Details
The data used in this example are generated with the SAS DATA step. The following code generates a series from a simple GARCH model with normally distributed residuals.
%let df = 7.5;
%let sig1 = 1;
%let sig2 = 0.1 ;
%let var2 = 2.5;
%let nobs = 1000 ;
%let nobs2 = 2000 ;
%let arch0 = 0.1 ;
%let arch1 = 0.2 ;
%let garch1 = 0.75 ;
%let intercept = 0.5 ;
data normal;
lu = &var2;
lh = &var2;
do i= -500 to &nobs ;
/* GARCH(1,1) with normally distributed residuals */
h = &arch0 + &arch1*lu**2 + &garch1*lh;
u = sqrt(h) * rannor(12345) ;
y = &intercept + u;
lu = u;
lh = h;
if i > 0 then output;
end;
run;
See the SAS program for more code that generates other types of GARCH models.
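For readers without SAS, the same data-generating process can be sketched in Python using only the standard library. This is an illustrative re-implementation, not SAS's RANNOR stream, so the simulated numbers will differ from the SAS data set; the parameter values mirror the macro variables above, and the function name is my own.

```python
import math
import random

def simulate_garch11(nobs=1000, arch0=0.1, arch1=0.2, garch1=0.75,
                     intercept=0.5, burn_in=500, seed=12345):
    """Simulate a GARCH(1,1) series with normal residuals,
    mirroring the SAS DATA step above (burn-in draws discarded)."""
    rng = random.Random(seed)
    lu, lh = 2.5, 2.5          # starting values, as in the &var2 macro
    y = []
    for i in range(-burn_in, nobs + 1):
        h = arch0 + arch1 * lu ** 2 + garch1 * lh   # variance recursion
        u = math.sqrt(h) * rng.gauss(0.0, 1.0)      # scaled normal shock
        if i > 0:
            y.append(intercept + u)
        lu, lh = u, h
    return y

series = simulate_garch11()
```

The burn-in of 500 observations matches the DATA step's loop from -500, discarding the pre-sample draws so the retained series starts near its stationary distribution.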
## Simple GARCH Model with Normally Distributed Residuals
The simple GARCH(p,q) model can be expressed as follows. Let

y_t = \mu + \epsilon_t.

The residual is modeled as

\epsilon_t = \sqrt{h_t} \, e_t,

where e_t is i.i.d. with zero mean and unit variance, and where h_t is expressed as

h_t = \omega + \sum_{i=1}^{q} \alpha_i \epsilon_{t-i}^2 + \sum_{j=1}^{p} \gamma_j h_{t-j}.

In a standard GARCH model, e_t is normally distributed. Alternative models can be specified by assuming different distributions for e_t, for example, the t distribution, the Cauchy distribution, etc.
To estimate a simple GARCH model, you can use the AUTOREG procedure. You use the GARCH= option to specify the GARCH model, and the P= and Q= suboptions to specify the orders of the GARCH model.
proc autoreg data = normal ;
/* Estimate GARCH(1,1) with normally distributed residuals with AUTOREG*/
model y = / garch = ( q=1,p=1 ) ;
run ;
quit ;
The AUTOREG procedure produces the output given in Figure 1.1 for a GARCH model with normally distributed errors.
Figure 1.1 Estimation Results using PROC AUTOREG
The AUTOREG Procedure
OLS parameter estimates:

| Variable  | DF | Estimate | Standard Error | t Value | Approx Pr > \|t\| |
|-----------|----|----------|----------------|---------|-------------------|
| Intercept | 1  | 0.4783   | 0.0559         | 8.55    | <.0001            |

GARCH parameter estimates:

| Variable  | DF | Estimate | Standard Error | t Value | Approx Pr > \|t\| |
|-----------|----|----------|----------------|---------|-------------------|
| Intercept | 1  | 0.4793   | 0.0323         | 14.86   | <.0001            |
| ARCH0     | 1  | 0.1159   | 0.0317         | 3.66    | 0.0003            |
| ARCH1     | 1  | 0.2467   | 0.0389         | 6.35    | <.0001            |
| GARCH1    | 1  | 0.6972   | 0.0435         | 16.02   | <.0001            |
You can also use the MODEL procedure to estimate a simple GARCH model. You must first specify the parameters in the model, then specify the mean model and the variance model. The XLAG function returns the lag of the first argument if it is nonmissing; if the lag of the first argument is missing, the second argument is returned instead. The XLAG function therefore makes it easy to specify the lag initialization for a GARCH process. The mse.y variable contains the value of the mean squared error for y at each iteration. These values are obtained automatically from first-stage estimates and are used to initialize the lagged values in estimation.
/* Estimate GARCH(1,1) with normally distributed residuals with MODEL*/
proc model data = normal ;
parms arch0 .1 arch1 .2 garch1 .75 ;
/* mean model */
y = intercept ;
/* variance model */
h.y = arch0 + arch1*xlag(resid.y**2,mse.y) +
garch1*xlag(h.y,mse.y) ;
/* fit the model */
fit y / method = marquardt fiml ;
run ;
quit ;
Figure 1.2 shows the parameter estimates obtained by using the MODEL procedure.
Figure 1.2 Estimation Results using PROC MODEL
The MODEL Procedure
Nonlinear FIML Parameter Estimates
| Parameter | Estimate | Approx Std Err | t Value | Approx Pr > \|t\| |
|-----------|----------|----------------|---------|-------------------|
| intercept | 0.479341 | 0.0319         | 15.02   | <.0001            |
| arch0     | 0.115242 | 0.0345         | 3.34    | 0.0009            |
| arch1     | 0.246811 | 0.0432         | 5.72    | <.0001            |
| garch1    | 0.697988 | 0.0494         | 14.13   | <.0001            |
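Under the hood, both procedures maximize a Gaussian likelihood built from the variance recursion. The following Python sketch of the negative log likelihood is my own illustration (function name `garch11_nll` is an assumption, not a SAS routine); like PROC MODEL's use of mse.y, it initializes the recursion at the sample mean squared residual.

```python
import math

def garch11_nll(y, intercept, arch0, arch1, garch1):
    """Gaussian negative log likelihood of a GARCH(1,1) model.
    Lagged h and lagged squared residual are both initialized at the
    sample MSE of the residuals, analogous to PROC MODEL's mse.y."""
    resid = [yi - intercept for yi in y]
    mse = sum(r * r for r in resid) / len(resid)
    h, lag_r2, nll = mse, mse, 0.0
    for r in resid:
        h = arch0 + arch1 * lag_r2 + garch1 * h        # variance recursion
        nll += 0.5 * (math.log(2 * math.pi) + math.log(h) + r * r / h)
        lag_r2 = r * r
    return nll
```

Minimizing this function over (intercept, arch0, arch1, garch1) is, in sketch form, what the FIML and maximum-likelihood fits above do numerically.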
## GARCH Model with t-Distributed Residuals
To estimate a GARCH model with t-distributed errors, you can use the AUTOREG procedure. You specify the GARCH(p,q) process with the GARCH=(P=,Q=) option, and specify the t-distributed error structure with the DIST=T option.
/* Estimate GARCH(1,1) with t-distributed residuals with AUTOREG*/
proc autoreg data = t ;
model y = / garch=( q=1, p=1 ) dist = t ;
run ;
quit;
The AUTOREG procedure produces the following output given in Figure 1.3 for a GARCH model with t-distributed residuals.
Figure 1.3 Estimation Results using PROC AUTOREG
The AUTOREG Procedure
OLS parameter estimates:

| Variable  | DF | Estimate | Standard Error | t Value | Approx Pr > \|t\| |
|-----------|----|----------|----------------|---------|-------------------|
| Intercept | 1  | 0.3997   | 0.0998         | 4.00    | <.0001            |

GARCH parameter estimates:

| Variable  | DF | Estimate | Standard Error | t Value | Approx Pr > \|t\| | Variable Label  |
|-----------|----|----------|----------------|---------|-------------------|-----------------|
| Intercept | 1  | 0.5294   | 0.0374         | 14.14   | <.0001            |                 |
| ARCH0     | 1  | 0.1244   | 0.0328         | 3.80    | 0.0001            |                 |
| ARCH1     | 1  | 0.2919   | 0.0309         | 9.44    | <.0001            |                 |
| GARCH1    | 1  | 0.7419   | 0.0199         | 37.27   | <.0001            |                 |
| TDFI      | 1  | 0.1351   | 0.0250         | 5.41    | <.0001            | Inverse of t DF |
You can also use the MODEL procedure to estimate the GARCH model with t-distributed residuals. Use the ERRORMODEL statement to specify the t-distributed residuals: first specify the dependent variable name, a tilde (~), and then the name of the error distribution with its parameters. The degrees of freedom of the t distribution are also estimated as a parameter in the MODEL procedure.
/* Estimate GARCH(1,1) with t-distributed residuals with MODEL*/
proc model data = t ;
parms df 7.5 arch0 .1 arch1 .2 garch1 .75 ;
/* mean model */
y = intercept ;
/* variance model */
h.y = arch0 + arch1 * xlag(resid.y **2, mse.y) +
garch1*xlag(h.y, mse.y);
/* specify error distribution */
errormodel y ~ t(h.y,df);
/* fit the model */
fit y / method=marquardt;
run;
quit;
The MODEL procedure produces the following output given in Figure 1.4 for the GARCH model with t-distributed errors.
Figure 1.4 Estimation Results using PROC MODEL
The MODEL Procedure
Nonlinear Likelihood Parameter Estimates

| Parameter | Estimate | Approx Std Err | t Value | Approx Pr > \|t\| |
|-----------|----------|----------------|---------|-------------------|
| intercept | 0.528964 | 0.0376         | 14.05   | <.0001            |
| df        | 7.468812 | 1.2402         | 6.02    | <.0001            |
| arch0     | 0.091711 | 0.0249         | 3.69    | 0.0002            |
| arch1     | 0.215409 | 0.0248         | 8.69    | <.0001            |
| garch1    | 0.740497 | 0.0222         | 33.34   | <.0001            |
Note that the MODEL procedure reports the estimated degrees of freedom of the t distribution, while the AUTOREG procedure reports the reciprocal of the estimated degrees of freedom (TDFI).
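A quick numeric check (using the estimates from Figures 1.3 and 1.4) confirms that the two parameterizations are consistent:

```python
# AUTOREG reports TDFI, the inverse of the t degrees of freedom;
# PROC MODEL reports the degrees of freedom (df) directly.
tdfi_autoreg = 0.1351      # TDFI estimate from Figure 1.3
df_model = 7.468812        # df estimate from Figure 1.4

implied_df = 1.0 / tdfi_autoreg   # about 7.40, close to PROC MODEL's 7.47
```

The small remaining gap is expected: the two procedures optimize differently parameterized objectives and converge to slightly different optima.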
## GARCH Model with Cauchy Distributed Residuals
You can also estimate a GARCH model with Cauchy-distributed errors in the MODEL procedure. The log likelihood function for GARCH with Cauchy-distributed residuals can be expressed as

l = \sum_{t=1}^{N} \left[ \ln h_t - \ln\left( \pi \left( h_t^2 + \epsilon_t^2 \right) \right) \right].

You can then write the code using the MODEL procedure.
/* Estimate GARCH(1,1) with Cauchy distributed residuals */
proc model data = cauchy ;
parms arch0 .1 arch1 .2 garch1 .75 intercept .5 ;
mse_y = &var2 ;
/* mean model */
y = intercept ;
/* variance model */
h.y = arch0 + arch1 * xlag(resid.y ** 2, mse_y) +
garch1 * xlag(h.y, mse_y);
/* specify error distribution */
obj = log(h.y/((h.y**2+resid.y**2) * constant('pi')));
obj = -obj ;
errormodel y ~ general(obj);
/* fit the model */
fit y / method=marquardt;
run;
quit;
The MODEL procedure produces the following output given in Figure 1.5 for a GARCH fit with Cauchy distributed residuals.
Figure 1.5 Estimation Results using PROC MODEL
The MODEL Procedure
Nonlinear Likelihood Parameter Estimates

| Parameter | Estimate | Approx Std Err | t Value | Approx Pr > \|t\| |
|-----------|----------|----------------|---------|-------------------|
| intercept | 0.499152 | 0.00385        | 129.73  | <.0001            |
| arch0     | 0.024821 | 0.00350        | 7.10    | <.0001            |
| arch1     | 0.005344 | 0.00180        | 2.97    | 0.0030            |
| garch1    | 0.678644 | 0.0417         | 16.29   | <.0001            |
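The objective coded above corresponds to the density f(\epsilon_t) = h_t / (\pi (h_t^2 + \epsilon_t^2)), i.e., a Cauchy distribution with scale h_t. The Python sketch below (helper names are my own) reproduces the per-observation log density and verifies, via the closed-form CDF, that this density integrates to one:

```python
import math

def cauchy_logdensity(resid, h):
    """Log density matching the PROC MODEL obj above (before negation):
    log( h / (pi * (h**2 + resid**2)) )."""
    return math.log(h / ((h ** 2 + resid ** 2) * math.pi))

def cauchy_mass(L, h):
    """Probability mass of the scale-h Cauchy density on [-L, L],
    from its closed-form CDF: (2/pi) * atan(L/h); tends to 1 as L grows."""
    return (2.0 / math.pi) * math.atan(L / h)
```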
## GARCH Model with Generalized Error Distribution Residuals
You can also estimate a GARCH model with GED (generalized error distribution) residuals with the MODEL procedure.
The log likelihood function for GARCH with GED residuals is expressed as

l = \sum_{t=1}^{N} \left[ \ln(\nu/\lambda) - \left(1 + \frac{1}{\nu}\right) \ln 2 - \ln \Gamma(1/\nu) - \frac{1}{2} \left| \frac{\epsilon_t}{\lambda \sqrt{h_t}} \right|^{\nu} - \frac{1}{2} \ln h_t \right],

where N is the sample size, \Gamma(\cdot) is the gamma function, \lambda is a constant given by

\lambda = \left[ 2^{-2/\nu} \, \Gamma(1/\nu) / \Gamma(3/\nu) \right]^{1/2},

and \nu is a positive parameter governing the thickness of the tails of the distribution. Note that for \nu = 2, the constant \lambda = 1, and the GED is the standard normal distribution. For more details about the generalized error distribution, see Hamilton (1994).
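The claim that \nu = 2 with \lambda = 1 recovers the standard normal can be verified numerically. The sketch below (function names are my own) evaluates one term of the GED log likelihood and compares it with the corresponding Gaussian term:

```python
import math

def ged_loglik_term(resid, h, nu):
    """One term of the GED log likelihood above; h is the conditional
    variance and nu the tail-thickness parameter."""
    lam = math.sqrt(2.0 ** (-2.0 / nu) * math.gamma(1.0 / nu)
                    / math.gamma(3.0 / nu))
    return (math.log(nu / lam) - (1.0 + 1.0 / nu) * math.log(2.0)
            - math.lgamma(1.0 / nu)
            - 0.5 * abs(resid / (lam * math.sqrt(h))) ** nu
            - 0.5 * math.log(h))

def normal_loglik_term(resid, h):
    """Gaussian log density with mean 0 and variance h."""
    return -0.5 * (math.log(2.0 * math.pi) + math.log(h) + resid ** 2 / h)
```

At \nu = 2 the two terms agree to machine precision, for any residual and conditional variance.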
To estimate a GARCH model with GED residuals, you can use the following code.
/* Estimate GARCH(1,1) with generalized error distribution residuals */
proc model data = normal ;
parms nu 2 arch0 .1 arch1 .2 garch1 .75;
control mse.y = &var2 ; /*defined in data generating step*/
/* mean model */
y = intercept ;
/* variance model */
h.y = arch0 + arch1 * xlag(resid.y ** 2, mse.y) +
garch1 * xlag(h.y, mse.y);
/* specify error distribution */
lambda = sqrt(2**(-2/nu)*gamma(1/nu)/gamma(3/nu)) ;
obj = log(nu/lambda) -(1 + 1/nu)*log(2) - lgamma(1/nu)-
.5*abs(resid.y/lambda/sqrt(h.y))**nu - .5*log(h.y) ;
obj = -obj ;
errormodel y ~ general(obj,nu);
/* fit the model */
fit y / method=marquardt;
run;
quit;
The MODEL procedure produces the following output given in Figure 1.6 for a GARCH fit with residuals following a generalized error distribution.
Figure 1.6 Estimation Results using PROC MODEL
The MODEL Procedure
Nonlinear Likelihood Parameter Estimates

| Parameter | Estimate | Approx Std Err | t Value | Approx Pr > \|t\| |
|-----------|----------|----------------|---------|-------------------|
| intercept | 0.47119  | 0.0321         | 14.69   | <.0001            |
| nu        | 1.964288 | 0.1336         | 14.70   | <.0001            |
| arch0     | 0.172375 | 0.0359         | 4.80    | <.0001            |
| arch1     | 0.276887 | 0.0441         | 6.28    | <.0001            |
| garch1    | 0.637912 | 0.0476         | 13.39   | <.0001            |
## GARCH-M Model
Another type of GARCH model is the GARCH-M model, which adds the heteroscedasticity term directly into the mean equation. In this example, consider the following specification:

y_t = \mu + \delta \sqrt{h_t} + \epsilon_t.

The residual is modeled as

\epsilon_t = \sqrt{h_t} \, e_t,

where e_t is i.i.d. with zero mean and unit variance, and where h_t is expressed as

h_t = \omega + \sum_{i=1}^{q} \alpha_i \epsilon_{t-i}^2 + \sum_{j=1}^{p} \gamma_j h_{t-j}.

The AUTOREG procedure enables you to specify the GARCH-M model with the MEAN= suboption of the GARCH= option. The MEAN= suboption specifies the functional form of the GARCH-M model. Its values are:

- LINEAR, which specifies the linear function y_t = \mu + \delta h_t + \epsilon_t;
- LOG, which specifies the log function y_t = \mu + \delta \ln(h_t) + \epsilon_t;
- SQRT, which specifies the square-root function y_t = \mu + \delta \sqrt{h_t} + \epsilon_t.
In this example, the square-root specification is considered.
/* Estimate GARCH-M Model with PROC AUTOREG */
proc autoreg data= garchm ;
model y = / garch=( p=1, q=1, mean = sqrt);
run;
quit;
The AUTOREG procedure produces the following output in Figure 1.7 for a GARCH-M model.
Figure 1.7 Estimation Results using PROC AUTOREG
The AUTOREG Procedure
OLS parameter estimates:

| Variable  | DF | Estimate | Standard Error | t Value | Approx Pr > \|t\| |
|-----------|----|----------|----------------|---------|-------------------|
| Intercept | 1  | 1.2190   | 0.0480         | 25.39   | <.0001            |

GARCH parameter estimates:

| Variable  | DF | Estimate | Standard Error | t Value | Approx Pr > \|t\| |
|-----------|----|----------|----------------|---------|-------------------|
| Intercept | 1  | 0.4987   | 0.1431         | 3.49    | 0.0005            |
| ARCH0     | 1  | 0.0831   | 0.0275         | 3.02    | 0.0025            |
| ARCH1     | 1  | 0.1695   | 0.0289         | 5.86    | <.0001            |
| GARCH1    | 1  | 0.7914   | 0.0321         | 24.62   | <.0001            |
| DELTA     | 1  | 0.5206   | 0.1196         | 4.35    | <.0001            |
You can also use the MODEL procedure to estimate the GARCH-M model.
/* Estimate GARCH-M Model with PROC MODEL */
proc model data = garchm ;
parms arch0 .1 arch1 .2 garch1 .75 gamma .5 ;
h = arch0 + arch1*xlag(resid.y**2,mse.y) + garch1*xlag(h.y,mse.y);
y = intercept + gamma*sqrt(h) ;
h.y = h ;
fit y / fiml method = marquardt;
run;
quit;
This PROC MODEL step produces the following output as shown in Figure 1.8.
Figure 1.8 Estimation Results using PROC MODEL
The MODEL Procedure
Nonlinear FIML Parameter Estimates
| Parameter | Estimate | Approx Std Err | t Value | Approx Pr > \|t\| |
|-----------|----------|----------------|---------|-------------------|
| arch0     | 0.082298 | 0.0268         | 3.07    | 0.0022            |
| arch1     | 0.17043  | 0.0284         | 6.00    | <.0001            |
| garch1    | 0.79191  | 0.0318         | 24.87   | <.0001            |
| gamma     | 0.516689 | 0.1225         | 4.22    | <.0001            |
| intercept | 0.502515 | 0.1488         | 3.38    | 0.0008            |
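As with the simple GARCH case, the GARCH-M process is easy to simulate outside SAS. The following Python sketch is my own re-implementation (different random number stream than the SAS data set); it shows how the \delta \sqrt{h_t} term pushes the unconditional mean of y above the intercept:

```python
import math
import random

def simulate_garch_m(nobs=5000, arch0=0.1, arch1=0.2, garch1=0.75,
                     intercept=0.5, delta=0.5, seed=2024):
    """Simulate the square-root GARCH-M model:
    y_t = intercept + delta*sqrt(h_t) + sqrt(h_t)*e_t."""
    rng = random.Random(seed)
    lh = arch0 / (1.0 - arch1 - garch1)   # unconditional variance (= 2.0 here)
    lu = math.sqrt(lh)
    y = []
    for _ in range(nobs):
        h = arch0 + arch1 * lu ** 2 + garch1 * lh
        u = math.sqrt(h) * rng.gauss(0.0, 1.0)
        y.append(intercept + delta * math.sqrt(h) + u)
        lu, lh = u, h
    return y

ym = simulate_garch_m()
mean_y = sum(ym) / len(ym)
```

This inflation of the mean is why the OLS intercept in Figure 1.7 (1.2190) is so much larger than the GARCH-M intercept (0.4987): the OLS fit absorbs the average of the \delta \sqrt{h_t} term.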
## EGARCH Model
The Exponential GARCH (EGARCH) model was proposed by Nelson (1991). It models the conditional variance of \epsilon_t as follows:

\ln h_t = \omega + \sum_{i=1}^{q} \alpha_i \, g(z_{t-i}) + \sum_{j=1}^{p} \gamma_j \ln h_{t-j},

where

z_t = \epsilon_t / \sqrt{h_t} \quad \text{and} \quad g(z_t) = \theta z_t + |z_t| - E|z_t|.
The AUTOREG procedure also supports the EGARCH model. You can use the TYPE=EXP suboption of the GARCH= option to specify the EGARCH model.
/* Estimate EGARCH Model with PROC AUTOREG */
proc autoreg data= egarch ;
model y = / garch=( q=1, p=1 , type = exp) ;
run;
quit;
This produces the following output as shown in Figure 1.9.
Figure 1.9 Estimation Results using PROC AUTOREG
The AUTOREG Procedure
OLS parameter estimates:

| Variable  | DF | Estimate | Standard Error | t Value | Approx Pr > \|t\| |
|-----------|----|----------|----------------|---------|-------------------|
| Intercept | 1  | 0.4790   | 0.0381         | 12.59   | <.0001            |

EGARCH parameter estimates:

| Variable  | DF | Estimate | Standard Error | t Value | Approx Pr > \|t\| |
|-----------|----|----------|----------------|---------|-------------------|
| Intercept | 1  | 0.4854   | 0.0361         | 13.43   | <.0001            |
| EARCH0    | 1  | 0.0960   | 0.0366         | 2.63    | 0.0087            |
| EARCH1    | 1  | 0.2322   | 0.0744         | 3.12    | 0.0018            |
| EGARCH1   | 1  | 0.7206   | 0.0961         | 7.50    | <.0001            |
| THETA     | 1  | 0.4403   | 0.1806         | 2.44    | 0.0148            |
You can also estimate the EGARCH model using the MODEL procedure.
/* Estimate EGARCH Model with PROC MODEL */
proc model data = egarch ;
parms earch0 .1 earch1 .2 egarch1 .75 theta .65 ;
/* mean model */
y = intercept ;
/* variance model */
if (_obs_ = 1 ) then
h.y = exp( earch0 + egarch1 * log(mse.y) );
else h.y = exp(earch0 + earch1*zlag(g) + egarch1*log(zlag(h.y))) ;
g = theta*(-nresid.y) + abs(-nresid.y) - sqrt(2/constant('pi')) ;
/* fit the model */
fit y / fiml method = marquardt ;
run;
quit;
Note that in this example, the term E|z_t| is set equal to \sqrt{2/\pi}, its expected value when z_t is standard normal. The nresid.y variable gives the normalized residual of the variable y, i.e., \epsilon_t / \sqrt{h_t}. Note also that E|z_t| = \sqrt{2/\pi} if z_t \sim N(0,1).
The PROC MODEL code produces the following output for the EGARCH model as shown in Figure 1.10.
Figure 1.10 Estimation Results using PROC MODEL
The MODEL Procedure
Nonlinear FIML Parameter Estimates
| Parameter | Estimate | Approx Std Err | t Value | Approx Pr > \|t\| |
|-----------|----------|----------------|---------|-------------------|
| intercept | 0.485374 | 0.0368         | 13.18   | <.0001            |
| earch0    | 0.095713 | 0.0361         | 2.65    | 0.0081            |
| earch1    | 0.233174 | 0.0690         | 3.38    | 0.0007            |
| egarch1   | 0.721372 | 0.0934         | 7.73    | <.0001            |
| theta     | 0.434851 | 0.1790         | 2.43    | 0.0153            |
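The news-impact function g(z_t) = \theta z_t + |z_t| - \sqrt{2/\pi} used in the PROC MODEL code is constructed to have mean zero when z_t is standard normal, since E|Z| = \sqrt{2/\pi}. A Monte Carlo check in Python (my own sketch; \theta is set near the Figure 1.10 estimate):

```python
import math
import random

SQRT_2_OVER_PI = math.sqrt(2.0 / math.pi)

def g(z, theta):
    """Nelson's news-impact function g(z) = theta*z + |z| - E|z|,
    with E|z| = sqrt(2/pi) for standard normal z."""
    return theta * z + abs(z) - SQRT_2_OVER_PI

# For standard normal z, E[g(z)] = 0 by construction, so the
# log-variance recursion is driven by mean-zero innovations.
rng = random.Random(7)
draws = [g(rng.gauss(0.0, 1.0), theta=0.44) for _ in range(200_000)]
avg_g = sum(draws) / len(draws)
```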
## QGARCH Model
The Quadratic GARCH model was proposed by Sentana (1995) to model asymmetric effects of positive and negative shocks.
A simple Quadratic GARCH(1,1) model describes the residual process as

\epsilon_t = \sqrt{h_t} \, e_t,

where e_t is i.i.d. with zero mean and unit variance, and

h_t = \omega + \alpha_1 \epsilon_{t-1}^2 + \gamma_1 h_{t-1} + \phi \epsilon_{t-1},

where \phi is the asymmetric parameter that helps to separately identify the impact of positive and negative shocks on volatility.
The following code estimates a simple Quadratic GARCH(1,1) model in the MODEL procedure.
/* Estimate Quadratic GARCH (QGARCH) Model */
proc model data = qgarch ;
parms arch0 .1 arch1 .2 garch1 .75 phi .2;
/* mean model */
y = intercept ;
/* variance model */
h.y = arch0 + arch1*xlag(resid.y**2,mse.y) + garch1*xlag(h.y,mse.y) +
phi*xlag(-resid.y,mse.y);
/* fit the model */
fit y / method = marquardt fiml ;
run ;
quit ;
Note that in specifying the equation for h_t, you need to add a negative sign in front of the residual term, since PROC MODEL gives the negative of the residuals. This also applies to the GJR-GARCH model and the TGARCH model, discussed later in this example.
The code produces the following output shown in Figure 1.11 for a Quadratic GARCH(1,1) model.
Figure 1.11 Estimation Results using PROC MODEL
The MODEL Procedure
Nonlinear FIML Summary of Residual Errors

| Equation | DF Model | DF Error | SSE    | MSE    | Root MSE | R-Square | Adj R-Sq |
|----------|----------|----------|--------|--------|----------|----------|----------|
| y        | 5        | 995      | 2090.0 | 2.1005 | 1.4493   | -0.0004  | -0.0045  |
| resid.y  |          | 995      | 996.5  | 1.0015 | 1.0007   |          |          |

Nonlinear FIML Parameter Estimates

| Parameter | Estimate | Approx Std Err | t Value | Approx Pr > \|t\| |
|-----------|----------|----------------|---------|-------------------|
| intercept | 0.510972 | 0.0356         | 14.37   | <.0001            |
| arch0     | 0.108659 | 0.0281         | 3.86    | 0.0001            |
| arch1     | 0.166853 | 0.0294         | 5.67    | <.0001            |
| garch1    | 0.774571 | 0.0341         | 22.69   | <.0001            |
| phi       | 0.16391  | 0.0426         | 3.84    | 0.0001            |
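The role of the linear \phi term is easy to see with a one-step update in Python (coefficients rounded from Figure 1.11; the function name is my own): positive and negative lagged shocks of the same magnitude produce conditional variances that differ by exactly 2\phi|\epsilon_{t-1}|.

```python
def qgarch_h(lag_resid, lag_h, arch0=0.109, arch1=0.167, garch1=0.775,
             phi=0.164):
    """One-step QGARCH(1,1) conditional variance, matching the PROC
    MODEL specification above (coefficients rounded from Figure 1.11)."""
    return arch0 + arch1 * lag_resid ** 2 + garch1 * lag_h + phi * lag_resid

# The linear phi term makes the response to shocks asymmetric:
h_pos = qgarch_h(1.0, 2.0)     # after a positive unit shock
h_neg = qgarch_h(-1.0, 2.0)    # after a negative unit shock
gap = h_pos - h_neg            # equals 2 * phi * |lag_resid|
```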
## GJR-GARCH Model
Another asymmetric GARCH process is the GJR-GARCH model of Glosten, Jagannathan and Runkle (1993). They propose modeling \epsilon_t = \sqrt{h_t} \, e_t, where e_t is i.i.d. with zero mean and unit variance, and

h_t = \omega + \alpha_1 \epsilon_{t-1}^2 + \gamma_1 h_{t-1} + \phi \, S^{-}_{t-1} \epsilon_{t-1}^2,

where S^{-}_{t-1} = 1 if \epsilon_{t-1} < 0 and S^{-}_{t-1} = 0 if \epsilon_{t-1} \ge 0.
You can use the following code to estimate a GJR-GARCH(1,1) model.
/* Estimate GJR-GARCH Model */
proc model data = gjrgarch ;
parms arch0 .1 arch1 .2 garch1 .75 phi .1;
/* mean model */
y = intercept ;
/* variance model */
if zlag(resid.y) > 0 then
h.y = arch0 + arch1*xlag(resid.y**2,mse.y) + garch1*xlag(h.y,mse.y) ;
else
h.y = arch0 + arch1*xlag(resid.y**2,mse.y) + garch1*xlag(h.y,mse.y) +
phi*xlag(resid.y**2,mse.y) ;
/* fit the model */
fit y / method = marquardt fiml ;
run ;
quit ;
This produces the following output shown in Figure 1.12 with the GJR-GARCH model.
Figure 1.12 Estimation Results using PROC MODEL
The MODEL Procedure
Nonlinear FIML Summary of Residual Errors

| Equation | DF Model | DF Error | SSE     | MSE     | Root MSE | R-Square | Adj R-Sq |
|----------|----------|----------|---------|---------|----------|----------|----------|
| y        | 5        | 995      | 13387.5 | 13.4548 | 3.6681   | -0.0000  | -0.0040  |
| resid.y  |          | 995      | 996.2   | 1.0012  | 1.0006   |          |          |

Nonlinear FIML Parameter Estimates

| Parameter | Estimate | Approx Std Err | t Value | Approx Pr > \|t\| |
|-----------|----------|----------------|---------|-------------------|
| intercept | 0.557155 | 0.0513         | 10.85   | <.0001            |
| arch0     | 0.099621 | 0.0348         | 2.87    | 0.0042            |
| arch1     | 0.185367 | 0.0386         | 4.80    | <.0001            |
| garch1    | 0.771999 | 0.0269         | 28.71   | <.0001            |
| phi       | 0.089578 | 0.0509         | 1.76    | 0.0788            |
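A one-step Python sketch of the GJR recursion (coefficients rounded from Figure 1.12; the function name is my own) shows that the \phi term adds \phi \epsilon_{t-1}^2 to the conditional variance only after a negative shock:

```python
def gjr_h(lag_resid, lag_h, arch0=0.0996, arch1=0.1854, garch1=0.772,
          phi=0.0896):
    """One-step GJR-GARCH(1,1) variance: the phi term is switched on
    only after a negative shock (coefficients rounded from Figure 1.12)."""
    s_minus = 1.0 if lag_resid < 0 else 0.0
    return (arch0 + arch1 * lag_resid ** 2 + garch1 * lag_h
            + phi * s_minus * lag_resid ** 2)

# Shocks of equal size but opposite sign differ by phi * lag_resid**2:
diff = gjr_h(-1.0, 2.0) - gjr_h(1.0, 2.0)
```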
## TGARCH Model
The Threshold GARCH (TGARCH) model of Zakoian (1994) is similar to the GJR-GARCH, but it specifies the conditional standard deviation instead of the conditional variance:

\sqrt{h_t} = \omega + \alpha_1^{+} \epsilon_{t-1}^{+} - \alpha_1^{-} \epsilon_{t-1}^{-} + \gamma_1 \sqrt{h_{t-1}},

where \epsilon_t^{+} = \epsilon_t if \epsilon_t > 0, and \epsilon_t^{+} = 0 if \epsilon_t \le 0. Similarly, \epsilon_t^{-} = \epsilon_t if \epsilon_t \le 0, and \epsilon_t^{-} = 0 if \epsilon_t > 0.

You can estimate a TGARCH(1,1) model by using the following code.
/* Estimate Threshold Garch (TGARCH) Model */
proc model data = tgarch ;
parms arch0 .1 arch1_plus .1 arch1_minus .1 garch1 .75 ;
/* mean model */
y = intercept ;
/* variance model */
if zlag(resid.y) < 0 then
h.y = (arch0 + arch1_plus*zlag(-resid.y) + garch1*zlag(sqrt(h.y)))**2 ;
else
h.y = (arch0 + arch1_minus*zlag(-resid.y) + garch1*zlag(sqrt(h.y)))**2 ;
/* fit the model */
fit y / method = marquardt fiml ;
run ;
quit ;
This produces the following output for the TGARCH model as shown in Figure 1.13.
Figure 1.13 Estimation Results using PROC MODEL
The MODEL Procedure
Nonlinear FIML Parameter Estimates
| Parameter   | Estimate | Approx Std Err | t Value | Approx Pr > \|t\| |
|-------------|----------|----------------|---------|-------------------|
| intercept   | 0.504727 | 0.0124         | 40.57   | <.0001            |
| arch0       | 0.160549 | 0.0456         | 3.52    | 0.0004            |
| arch1_plus  | 0.076379 | 0.0361         | 2.11    | 0.0347            |
| arch1_minus | 0.14484  | 0.0307         | 4.72    | <.0001            |
| garch1      | 0.632826 | 0.1161         | 5.45    | <.0001            |
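A one-step sketch of the TGARCH recursion in Python (coefficients rounded from Figure 1.13; the function name is my own). Because the estimate of arch1_minus exceeds arch1_plus, a negative shock raises the conditional standard deviation more than a positive shock of the same size:

```python
def tgarch_sigma(lag_resid, lag_sigma, arch0=0.1605, arch1_plus=0.0764,
                 arch1_minus=0.1448, garch1=0.6328):
    """One-step TGARCH(1,1) conditional standard deviation
    (coefficients rounded from Figure 1.13)."""
    eps_plus = max(lag_resid, 0.0)    # positive part of the lagged shock
    eps_minus = min(lag_resid, 0.0)   # negative part of the lagged shock
    return (arch0 + arch1_plus * eps_plus - arch1_minus * eps_minus
            + garch1 * lag_sigma)
```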
# References
Bollerslev, T. (1986), "Generalized Autoregressive Conditional Heteroskedasticity," Journal of Econometrics, 31, 307-327.
Glosten, L., Jagannathan, R. and Runkle, D. (1993), "On the Relation between the Expected Value and the Volatility of the Nominal Excess Return on Stocks," Journal of Finance, 48(5), 1779-1801.
Hamilton, J. D. (1994), Time Series Analysis, Princeton, NJ: Princeton University Press.
Nelson, D. B. (1991), "Conditional Heteroskedasticity in Asset Returns: A New Approach," Econometrica, 59, 347-370.
SAS Institute Inc. (1999), SAS/ETS User’s Guide, Version 8, Cary, NC: SAS Institute Inc.
Sentana, E. (1995), "Quadratic ARCH Models," Review of Economic Studies, 62, 639-661.
Zakoian, M. (1994), "Threshold Heteroscedastic Models," Journal of Economic Dynamics and Control, 18, 931-955.
http://www.theinfolist.com/html/ALL/s/labor_theory_of_value.html
The labor theory of value (LTV) is a theory of value that argues that the economic value of a good or service is determined by the total amount of "socially necessary labor" required to produce it. The LTV is usually associated with Marxian economics, although it originally appeared in the theories of earlier classical economists such as Adam Smith and David Ricardo, and later in anarchist economics. Smith saw the price of a commodity in terms of the labor that the purchaser must expend to buy it, which embodies the concept of how much labor a commodity, a tool for example, can save the purchaser.

The LTV is central to Marxist theory, which holds that the working class is exploited under capitalism, and dissociates price and value. However, Marx did not refer to his own theory of value as a "labour theory of value". Orthodox neoclassical economics rejects the LTV, using a theory of value based on subjective preferences. A later revival in the interpretation of Marx also rejects Marxian economics and the LTV, calling them "substantialist". This reading claims that the LTV is a misinterpretation of the concept of fetishism in relation to value, and that this understanding never appears in Marx's work. The school heavily emphasizes works such as ''Capital'' as explicitly being a critique of political economy, instead of a "more correct" theory.
# Definitions of value and labor
When speaking in terms of a labor theory of value, "value", without any qualifying adjective, should theoretically refer to the amount of labor necessary to produce a marketable commodity, including the labor necessary to develop any real capital used in the production. Both David Ricardo and Karl Marx tried to quantify and embody all labor components in order to develop a theory of the real price, or natural price, of a commodity. The labor theory of value as presented by Adam Smith did not require the quantification of past labor, nor did it deal with the labor needed to create the tools (capital) that might be used in producing a commodity. Smith's theory of value was very similar to the later utility theories in that Smith proclaimed that a commodity was worth whatever labor it would command in others (value in trade) or whatever labor it would "save" the self (value in use), or both. However, this "value" is subject to supply and demand at a particular time:
The real price of every thing, what every thing really costs to the man who wants to acquire it, is the toil and trouble of acquiring it. What every thing is really worth to the man who has acquired it, and who wants to dispose of it or exchange it for something else, is the toil and trouble which it can save to himself, and which it can impose upon other people. (''Wealth of Nations'' Book 1, chapter V)
Smith's theory of price has nothing to do with the past labor spent in producing a commodity. It speaks only of the labor that can be "commanded" or "saved" at present. If there is no use for a buggy whip, then the item is economically worthless in trade or in use, regardless of all the labor spent in creating it.
## Distinctions of economically pertinent labor
Value "in use" is the usefulness of this commodity, its utility. A classical paradox often comes up when considering this type of value. In the words of Adam Smith:
The word value, it is to be observed, has two different meanings, and sometimes expresses the utility of some particular object, and sometimes the power of purchasing other goods which the possession of that object conveys. The one may be called "value in use"; the other, "value in exchange." The things which have the greatest value in use have frequently little or no value in exchange; and, on the contrary, those which have the greatest value in exchange have frequently little or no value in use. Nothing is more useful than water: but it will purchase scarce anything; scarce anything can be had in exchange for it. A diamond, on the contrary, has scarce any value in use; but a very great quantity of other goods may frequently be had in exchange for it (''Wealth of Nations'' Book 1, chapter IV).
Value "in exchange" is the relative proportion with which this commodity exchanges for another commodity (in other words, its price in the case of money). It is relative to labor as explained by Adam Smith:
The value of any commodity, ..to the person who possesses it, and who means not to use or consume it himself, but to exchange it for other commodities, is equal to the quantity of labour which it enables him to purchase or command. Labour, therefore, is the real measure of the exchangeable value of all commodities (''Wealth of Nations'' Book 1, chapter V).
Value (without qualification) is the labor embodied in a commodity under a given structure of production. Marx defined the value of the commodity by this third definition: in his terms, value is the "socially necessary abstract labor" embodied in a commodity. To David Ricardo and other classical economists, this definition serves as a measure of "real cost", "absolute value", or a "measure of value" invariable under changes in distribution and technology. Ricardo, other classical economists and Marx began their expositions with the assumption that value in exchange was equal to or proportional to this labor value. They thought this was a good assumption from which to explore the dynamics of development in capitalist societies. Other supporters of the labor theory of value used the word "value" in the second sense, to represent "exchange value".
# Labor process
Since the term "value" is understood in the LTV as denoting something created by labor, and its "magnitude" as something proportional to the quantity of labor performed, it is important to explain how the labor process both preserves value and adds new value in the commodities it creates.Unless otherwise noted, the description of the labor process and the role of the value of means of production in this section are drawn from chapter 7 of ''Capital'' vol1 . The value of a commodity increases in proportion to the duration and intensity of labor performed on average for its production. Part of what the LTV means by "socially necessary" is that the value only increases in proportion to this labor as it is performed with average skill and average productivity. So though workers may labor with greater skill or more productivity than others, these more skillful and more productive workers thus produce more value through the production of greater quantities of the finished commodity. Each unit still bears the same value as all the others of the same class of commodity. By working sloppily, unskilled workers may drag down the average skill of labor, thus increasing the average labor time necessary for the production of each unit commodity. But these unskillful workers cannot hope to sell the result of their labor process at a higher price (as opposed to value) simply because they have spent more time than other workers producing the same kind of commodities. However, production not only involves labor, but also certain means of labor: tools, materials, power plants and so on. These means of labor—also known as
means of production
—are often the product of another labor process as well. So the labor process inevitably involves these means of production that already enter the process with a certain amount of value. Labor also requires other means of production that are not produced with labor and therefore bear no value: such as sunlight, air, uncultivated land, unextracted minerals, etc. While useful, even crucial to the production process, these bring no value to that process. In terms of means of production resulting from another labor process, LTV treats the magnitude of value of these produced means of production as constant throughout the labor process. Due to the constancy of their value, these means of production are referred to, in this light, as constant capital. Consider for example workers who take coffee beans, use a roaster to roast them, and then use a brewer to brew and dispense a fresh cup of coffee. In performing this labor, these workers add value to the coffee beans and water that comprise the material ingredients of a cup of coffee. The worker also transfers the value of constant capital—the value of the beans, some specific depreciated value of the roaster and the brewer, and the value of the cup—to the value of the final cup of coffee. Again, on average, the worker can transfer no more value from these means of labor to the finished cup of coffee than they previously possessed. (In the case of instruments of labor, such as the roaster and the brewer, or even a ceramic cup, the value transferred to the cup of coffee is only a depreciated value calculated over the life of those instruments of labor according to some accounting convention.) So the value of coffee produced in a day equals the sum of both the value of the means of labor—this constant capital—and the value newly added by the worker in proportion to the duration and intensity of their work.
Often this is expressed mathematically as $W = c + L$, where

* $c$ is the constant capital of materials used in a period plus the depreciated portion of tools and plant used in the process. (A period is typically a day, week, year, or a single turnover: meaning the time required to complete one batch of coffee, for example.)
* $L$ is the quantity of labor time (average skill and productivity) performed in producing the finished commodities during the period.
* $W$ is the value (or think "worth") of the product of the period ($W$ comes from the German word for value: ''Wert'').

Note: if the product resulting from the labor process is homogeneous (all similar in quality and traits, for example, all cups of coffee) then the value of the period's product can be divided by the total number of items (use-values, or $v_u$) produced to derive the unit value of each item: $w_i = \frac{W}{\sum v_u}$, where $\sum v_u$ is the total number of items produced.

The LTV further divides the value added during the period of production, $L$, into two parts. The first part is the portion of the process when the workers add value equivalent to the wages they are paid. For example, if the period in question is one week and these workers collectively are paid $1,000, then the time necessary to add $1,000 to—while preserving the value of—constant capital is considered the necessary labor portion of the period (or week), denoted $NL$. The remaining period is considered the surplus labor portion of the week, or $SL$. The value used to purchase labor-power, for example the $1,000 paid in wages to these workers for the week, is called variable capital ($v$). This is because in contrast to the constant capital expended on means of production, variable capital can add value in the labor process. The amount it adds depends on the duration, intensity, productivity and skill of the labor-power purchased: in this sense, the buyer of labor-power has purchased a commodity of variable use.
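As a minimal numerical sketch of the accounting above (all figures are invented for illustration; value is measured here in hours of socially necessary labor):

```python
# Hypothetical one-day coffee production run. All figures are invented
# for illustration; value is measured in hours of socially necessary labor.
c = 20.0   # constant capital: beans, water, cups, plus depreciation of roaster and brewer
L = 8.0    # living labor performed during the day

W = c + L  # value of the day's product: preserved value plus newly added value

units = 100        # cups of coffee produced (a homogeneous product)
w_i = W / units    # unit value of each cup

print(W, w_i)  # 28.0 per day, 0.28 per cup
```

The point of the sketch is only that $W$ splits additively into transferred value ($c$) and newly added value ($L$), and that the unit value is the period's value spread evenly over a homogeneous product.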
Finally, the value added during the portion of the period when surplus labor is performed is called
surplus value
($s$). From the variables defined above, we find two other common expressions for the value produced during a given period: $W = c + v + s$ and $W = c + NL + SL$. The first form of the equation expresses the value resulting from production, focusing on the costs $c + v$ and the surplus value appropriated in the process of production, $s$. The second form of the equation focuses on the value of production in terms of the values added by the labor performed during the process, $NL + SL$.
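The split of the value added into necessary and surplus labor can be sketched numerically (figures invented, stated in dollars of value purely for convenience):

```python
# Illustrative weekly figures (invented), in dollars of value for simplicity.
c = 3000.0   # constant capital consumed during the week
v = 1000.0   # variable capital: the week's wage bill
L = 2500.0   # total value added by living labor during the week

s = L - v        # surplus value: value added beyond the wage bill
W = c + v + s    # first form of the value equation

NL = v           # necessary labor: the portion replacing the wage bill
SL = s           # surplus labor: the remainder of the period
assert W == c + NL + SL   # second form agrees with the first

print(W, s)  # 5500.0 total value, 1500.0 surplus value
```

Both forms are the same identity read two ways: as costs plus surplus ($c + v + s$), or as transferred value plus the two portions of living labor ($c + NL + SL$).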
# Relation between values and prices
One issue facing the LTV is the relationship between value quantities on one hand and prices on the other. If a commodity's value is not the same as its price, and the magnitudes of each therefore likely differ, then what is the relation between the two, if any? Various LTV schools of thought provide different answers to this question. For example, some argue that value in the sense of the amount of labor embodied in a good acts as a center of gravity for price. However, most economists would say that cases where price is approximately equal to the value of the labor embodied are in fact only special cases; in general theory, prices usually fluctuate. The standard formulation is that prices normally include a level of income for "
capital
" and "
land
". These incomes are known as "
profit
" and "
rent
" respectively. Yet Marx made the point that value cannot be placed upon labour as a commodity, because capital is a constant whereas profit is a variable, not an income; this explains the importance of profit in relation to pricing variables. In Book 1, chapter VI, Adam Smith writes:
The real value of all the different component parts of price, it must be observed, is measured by the quantity of labour which they can, each of them, purchase or command. Labour measures the value not only of that part of price which resolves itself into labour, but of that which resolves itself into rent, and of that which resolves itself into profit.
The final sentence explains how Smith sees the value of a product as relative to the labor of the buyer or consumer, as opposed to Marx, who sees the value of a product as proportional to the labor of the laborer or producer. And we value things, price them, based on how much labor we can avoid or command, and we can command labor not only in a simple way but also by
trading
things for a profit. The demonstration of the relation between commodities' unit values and their respective prices is known in Marxian terminology as the
transformation problem
or the transformation of values into prices of production. The transformation problem has probably generated the greatest bulk of debate about the LTV. The problem with transformation is to find an algorithm where the magnitude of value added by labor, in proportion to its duration and intensity, is sufficiently accounted for after this value is distributed through prices that reflect an equal rate of return on capital advanced. If there is an additional magnitude of value or a loss of value after transformation, then the relation between values (proportional to labor) and prices (proportional to total capital advanced) is incomplete. Various solutions and impossibility theorems have been offered for the transformation, but the debate has not reached any clear resolution. LTV does not deny the role of supply and demand influencing price, since the price of a commodity is something other than its value. In ''Value, Price and Profit'' (1865),
Karl Marx quotes Adam Smith and sums up:

It suffices to say that if supply and demand equilibrate each other, the market prices of commodities will correspond with their natural prices, that is to say, with their values as determined by the respective quantities of labor required for their production.
The LTV seeks to explain the level of this equilibrium. This could be explained by a ''
cost of production
'' argument—pointing out that all costs are ultimately labor costs, but this does not account for profit, and it is vulnerable to the charge of tautology in that it explains prices by prices. Marx later called this "Smith's adding up theory of value". Smith argues that labor values are the natural measure of exchange for direct producers like hunters and fishermen. Marx, on the other hand, uses a measurement analogy, arguing that for commodities to be comparable they must have a common element or substance by which to measure them, and that labor is a common substance of what Marx eventually calls ''commodity-values''.
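The divergence between labor values and prices of production that the transformation problem concerns can be illustrated with a toy two-sector example. All figures below are invented, and this is only the simple textbook setup of the problem, not a solution to it:

```python
# Two sectors with equal capital advanced (100 each) but different
# "organic compositions" (c:v ratios); rate of surplus value s/v = 100%.
sectors = {
    "A": {"c": 80.0, "v": 20.0},   # capital-intensive sector
    "B": {"c": 40.0, "v": 60.0},   # labor-intensive sector
}

for d in sectors.values():
    d["s"] = d["v"]                      # uniform 100% rate of surplus value
    d["value"] = d["c"] + d["v"] + d["s"]

# Competition equalizes the profit rate on total capital advanced:
r = sum(d["s"] for d in sectors.values()) / \
    sum(d["c"] + d["v"] for d in sectors.values())   # 80 / 200 = 0.4

for d in sectors.values():
    d["price"] = (d["c"] + d["v"]) * (1 + r)   # price of production

# Sector by sector, values and prices diverge (A: 120 vs 140, B: 160 vs 140),
# yet total value equals total price (280) in this simple setup.
```

The divergence arises because surplus value is produced in proportion to $v$ but distributed in proportion to $c + v$; whether this simple redistribution can be made fully consistent once inputs are themselves bought at prices of production is exactly what the transformation debate is about.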
# History
## Origins
The labor theory of value has developed over many centuries. It had no single originator; rather, many different thinkers arrived at the same conclusion independently. Aristotle is claimed to have held this view. Some writers trace its origin to
Thomas Aquinas
. In his ''
Summa Theologiae
'' (1265–1274) he expresses the view that "value can, does and should increase in relation to the amount of labor which has been expended in the improvement of commodities." Scholars such as
Joseph Schumpeter
have cited
Ibn Khaldun
, who in his ''
Muqaddimah
'' (1377), described labor as the source of value, necessary for all earnings and capital accumulation. He argued that even if earning "results from something other than a craft, the value of the resulting profit and acquired (capital) must (also) include the value of the labor by which it was obtained. Without labor, it would not have been acquired." Scholars have also pointed to
Sir William Petty
's ''Treatise of Taxes'' of 1662 and to
John Locke
's
labor theory of property
, set out in the '' Second Treatise on Government'' (1689), which sees labor as the ultimate source of economic value.
Karl Marx
himself credited
Benjamin Franklin
in his 1729 essay entitled "A Modest Enquiry into the Nature and Necessity of a Paper Currency" as being "one of the first" to advance the theory.
Adam Smith
accepted the theory for pre-capitalist societies but saw a flaw in its application to contemporary
capitalism
. He pointed out that if the "labor embodied" in a product equaled the "labor commanded" (i.e. the amount of labor that could be purchased by selling it), then profit was impossible.
David Ricardo
(seconded by
Marx
) responded to this paradox by arguing that Smith had confused labor with wages. "Labor commanded", he argued, would always be more than the labor needed to sustain itself (wages). The value of labor, in this view, covered not just the value of wages (what Marx called the value of
labor power
), but the value of the entire product created by labor. Ricardo's theory was a predecessor of the modern theory that equilibrium prices are determined solely by
production costs
associated with
Neo-Ricardianism
. Based on the discrepancy between the wages of labor and the value of the product, the "
Ricardian socialists
"— Charles Hall,
Thomas Hodgskin
, John Gray, and
John Francis Bray
, and
Percy Ravenstone
—applied Ricardo's theory to develop theories of
exploitation
. Marx expanded on these ideas, arguing that workers work for a part of each day adding the value required to cover their wages, while the remainder of their labor is performed for the enrichment of the capitalist. The LTV and the accompanying theory of exploitation became central to his economic thought. 19th-century American individualist anarchists based their economics on the LTV, with their particular interpretation of it being called "cost the limit of price". They, as well as contemporary individualist anarchists in that tradition, hold that it is unethical to charge a higher price for a commodity than the amount of labor required to produce it. Hence, they propose that trade should be facilitated by using notes backed by labor.
## Adam Smith and David Ricardo
Adam Smith held that, in a primitive society, the amount of labor put into producing a good determined its exchange value, with exchange value meaning, in this case, the amount of labor a good can purchase. However, according to Smith, in a more advanced society the market price is no longer proportional to labor cost since the value of the good now includes compensation for the owner of the means of production: "The whole produce of labour does not always belong to the labourer. He must in most cases share it with the owner of the stock which employs him." According to Whitaker, Smith is claiming that the 'real value' of such a commodity produced in advanced society is measured by the labor which that commodity will command in exchange, but "[S]mith disowns what is naturally thought of as the genuine classical labor theory of value, that labor-cost regulates market-value. This theory was Ricardo's, and really his alone." Classical economist David Ricardo's labor theory of value holds that the
value
of a
good
(how much of another good or service it exchanges for in the market) is proportional to how much
labor
was required to produce it, including the labor required to produce the raw materials and machinery used in the process.
David Ricardo
stated it as, "The value of a commodity, or the quantity of any other commodity for which it will exchange, depends on the relative quantity of labour which is necessary for its production, and not on the greater or less compensation which is paid for that labour." In this connection Ricardo seeks to differentiate the quantity of labour necessary to produce a commodity from the wages paid to the laborers for its production. Therefore, wages did not always increase with the price of a commodity. However, Ricardo was troubled by some deviations in prices from proportionality with the labor required to produce them. For example, he said "I cannot get over the difficulty of the wine, which is kept in the cellar for three or four years [i.e., while constantly increasing in exchange value], or that of the oak tree, which perhaps originally had not 2 s. expended on it in the way of labour, and yet comes to be worth £100." (Quoted in Whitaker.) Of course, a capitalist economy stabilizes this discrepancy until the value added to aged wine equals the cost of storage: if anyone could hold onto a bottle for four years and become rich, freshly corked wine would be hard to find. There is also the theory that adding to the price of a
luxury
product increases its
exchange-value
by mere prestige. The labor theory as an explanation for value contrasts with the
subjective theory of value
, which says that value of a good is not determined by how much labor was put into it but by its usefulness in satisfying a want and its scarcity. Ricardo's labor theory of value is not a
normative
theory, as are some later forms of the labor theory, such as claims that it is ''immoral'' for an individual to be paid less for his labor than the total revenue that comes from the sales of all the goods he produces. It is arguable to what extent these classical theorists held the labor theory of value as it is commonly defined. For instance,
David Ricardo
theorized that prices are determined by the amount of labor but found exceptions for which the labor theory could not account. In a letter, he wrote: "I am not satisfied with the explanation I have given of the principles which regulate value."
Adam Smith
theorized that the labor theory of value holds true only in the "early and rude state of society" but not in a modern economy where owners of capital are compensated by profit. As a result, "Smith ends up making little use of a labor theory of value."
## Anarchism
Pierre-Joseph Proudhon
's mutualism and American individualist anarchists such as
Josiah Warren
,
Lysander Spooner
and
Benjamin Tucker
adopted the labor theory of value of
classical economics
and used it to criticize capitalism while favoring a non-capitalist market system. Warren is widely regarded as the first American
anarchist (Palmer, Brian (2010-12-29), "What do anarchists want from us?", ''Slate.com''; Riggenbach, Jeff (2011-02-25), "Josiah Warren: The First American Anarchist", ''Mises Institute''),
and the four-page weekly paper he edited during 1833, ''The Peaceful Revolutionist'', was the first anarchist periodical published. (William Bailie, ''Josiah Warren: The First American Anarchist — A Sociological Study'', Boston: Small, Maynard & Co., 1906, p. 20.) Cost the limit of price was a maxim coined by Warren, indicating a (
prescriptive
) version of the labor theory of value. Warren maintained that the compensation for labor (or for its product) could only be an equivalent amount of labor (or a product embodying an equivalent amount). (In ''Equitable Commerce'', Warren writes: "If a priest is required to get a soul out of purgatory, he sets his price according to the value which the relatives set upon his prayers, instead of their cost to the priest. This, again, is cannibalism. The same amount of labor equally disagreeable, with equal wear and tear, performed by his customers, would be a just remuneration.") Thus,
profit
,
rent
, and
interest
were considered unjust economic arrangements. (Wendy McElroy, "Individualist Anarchism vs. 'Libertarianism' and Anarchocommunism," in ''New Libertarian'', issue #12, October 1984.)
In keeping with the tradition of
Adam Smith
's ''
The Wealth of Nations
'', the "cost" of labor is considered to be the
subjective
cost; i.e., the amount of suffering involved in it. He put his theories to the test by establishing an experimental "labor for labor store" called the
Cincinnati Time Store
at the corner of 5th and Elm Streets in what is now downtown Cincinnati, where trade was facilitated by notes backed by a promise to perform labor. "All the goods offered for sale in Warren's store were offered at the same price the merchant himself had paid for them, plus a small surcharge, in the neighborhood of 4 to 7 percent, to cover store overhead." The store stayed open for three years; after it closed, Warren went on to establish colonies based on mutualism. These included "
Utopia
" and "
Modern TimesModern Times may refer to modern history. Modern Times may also refer to: Music * Modern Times (band), a band from Luxembourg * Modern Times (Al Stewart album), ''Modern Times'' (Al Stewart album), a 1975 album by Al Stewart * Modern Times (Bob Dy ...
". Warren said that
Stephen Pearl Andrews Stephen Pearl Andrews (March 22, 1812 – May 21, 1886) was an American individualist anarchist, linguist, political philosopher, outspoken abolitionist Abolitionism, or the abolitionist movement, was the movement to end slavery. In W ...
' ''The Science of Society'', published in 1852, was the most lucid and complete exposition of Warren's own theories. Mutualism is an
economic theory Economics () is the social science that studies how people interact with value; in particular, the Production (economics), production, distribution (economics), distribution, and Consumption (economics), consumption of goods and services. ...
and
anarchist school of thought Anarchism is the political philosophy which holds ruling classes and the State (polity), state to be undesirable, unnecessary and harmful, The following sources cite anarchism as a political philosophy: Slevin, Carl. "Anarchism." ''The Concise ...
that advocates a society where each person might possess a
means of production The means of production is a concept that encompasses the social use and ownership Ownership is the state or fact of exclusive right In Anglo-Saxon law Anglo-Saxon law (Old English Old English (, ), or Anglo-Saxon, is the earliest record ...
, either individually or collectively, with trade representing equivalent amounts of labor in the
free market In economics Economics () is a social science Social science is the branch A branch ( or , ) or tree branch (sometimes referred to in botany Botany, also called , plant biology or phytology, is the science of pl ...
. Integral to the scheme was the establishment of a mutual-credit bank that would lend to producers at a minimal interest rate, just high enough to cover administration. Mutualism is based on a labor theory of value that holds that when labor or its product is sold, in exchange, it ought to receive goods or services embodying "the amount of labor necessary to produce an article of exactly similar and equal utility". Mutualism originated from the writings of philosopher
Pierre-Joseph Proudhon Pierre-Joseph Proudhon (, , ; 15 January 1809, Besançon – 19 January 1865, Paris Paris () is the Capital city, capital and List of communes in France with over 20,000 inhabitants, most populous city of France, with an estimated popu ...
.
Collectivist anarchism, as defended by Mikhail Bakunin, endorsed a form of the labor theory of value in advocating a system where "all necessaries for production are owned in common by the labour groups and the free communes ... based on the distribution of goods according to the labour contributed".
## Karl Marx
Contrary to popular belief, Marx never used the term "labor theory of value" in any of his works, but used the term "law of value". Marx opposed "ascribing a supernatural creative power to labor", arguing as such:
Labor is not the source of all wealth. Nature is just as much a source of use values (and it is surely of such that material wealth consists!) as labor, which is itself only the manifestation of a force of nature, human labor power.
Here, Marx was distinguishing between exchange value (the subject of the LTV) and use value. Marx used the concept of "socially necessary labor time" to introduce a social perspective distinct from his predecessors and neoclassical economics. Whereas most economists start with the individual's perspective, Marx started with the perspective of society ''as a whole''. "Social production" involves a complicated and interconnected division of labor of a wide variety of people who depend on each other for their survival and prosperity. "Abstract" labor refers to a characteristic of commodity-producing labor that is shared by all different kinds of heterogeneous (concrete) types of labor. That is, the concept abstracts from the ''particular'' characteristics of all of the labor and is akin to average labor. "Socially necessary" labor refers to the quantity required to produce a commodity "in a given state of society, under certain social average conditions of production, with a given social average intensity, and average skill of the labor employed." That is, the value of a product is determined more by societal standards than by individual conditions. This explains why technological breakthroughs lower the price of commodities and put less advanced producers out of business. Finally, it is not labor per se that creates value, but labor power sold by free wage workers to capitalists. Another distinction is between productive and unproductive labor: only wage workers of productive sectors of the economy produce value. (For the difference between wage workers and working animals or slaves, confer John R. Bell, ''Capitalism and the Dialectic: The Uno-Sekine Approach to Marxian Political Economy'', p. 45, London: Pluto Press, 2009.)
According to Marx an increase in productiveness of the laborer does not affect the value of a commodity, but rather, increases the surplus value realized by the capitalist. Therefore, decreasing the cost of production does not decrease the value of a commodity, but allows the capitalist to produce more and increases the opportunity to earn a greater profit or surplus value, as long as there is demand for the additional units of production.
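The arithmetic of this claim can be made concrete with a toy calculation. This is a sketch under heavily simplified assumptions; all figures and variable names are illustrative, not Marx's own notation: unit value is fixed by socially necessary labor time, so an individual producer's productivity gain shows up as extra surplus value rather than a lower unit value.

```python
# Toy illustration (invented numbers): an individual producer doubles
# physical output per day while the *social* norm, and hence unit value,
# stays fixed; the extra value produced accrues as surplus value.

social_hours_per_unit = 2.0   # socially necessary labor time per unit (assumed)
value_per_hour = 1.0          # value added per hour of abstract labor (assumed)
unit_value = social_hours_per_unit * value_per_hour  # unchanged by the gain

daily_wage = 4.0              # value of labor power per day (assumed)

results = []
for units_per_day in (4.0, 8.0):          # before / after a productivity gain
    value_produced = units_per_day * unit_value
    surplus_value = value_produced - daily_wage
    results.append((units_per_day, unit_value, surplus_value))

for row in results:
    print(row)
```

The unit value column stays constant while the surplus column grows, matching the passage's point that cheaper production enlarges surplus value rather than lowering commodity value.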
# Criticism
The Marxist labor theory of value has been criticised on several counts. Some argue that it predicts that profits will be higher in labor-intensive industries than in capital-intensive industries, which would be contradicted by measured empirical data. This is sometimes referred to as the "Great Contradiction" (Eugen von Böhm-Bawerk, ''Karl Marx and the Close of His System'', 1896).
In volume 3 of ''Capital'', Marx explains why profits are not distributed according to which industries are the most labor-intensive and why this is consistent with his theory. Whether or not this is consistent with the labor theory of value as presented in volume 1 has been a topic of debate. According to Marx, surplus value ... ("Testing the labour theory of value: An exchange" (2010): 1–15; Nitzan, Jonathan, and Shimshon Bichler, ''Capital as Power: A Study of Order and Creorder'', Routledge, 2009, pp. 93–97, 138–144.)
Furthermore, the authors argue that the studies do not seem to actually attempt to measure the correlation between value and price. The authors argue that, according to Marx, the value of a commodity indicates the abstract labor time required for its production; however, Marxists have been unable to identify a way to measure a unit (elementary particle) of abstract labor (indeed, the authors argue that most have given up and little progress has been made beyond Marx's original work) due to numerous difficulties. This means assumptions must be made, and according to the authors these involve circular reasoning: Bichler and Nitzan argue that this amounts to converting prices into values and then determining if they correlate, which the authors argue proves nothing since the studies are simply correlating prices with themselves. Paul Cockshott disagreed with Bichler and Nitzan's arguments, arguing that it was possible to measure abstract labour time using wage bills and data on working hours, while also arguing that Bichler and Nitzan's claims that the true value-price correlations should be much lower actually relied on poor statistical analysis itself. Most Marxists, however, reject Bichler and Nitzan's interpretation of Marx, arguing that their assertion that individual commodities can have values, rather than prices of production, misunderstands Marx's work. For example, Fred Moseley argues Marx understood "value" to be a "macro-monetary" variable (the total amount of labor added in a given year plus the depreciation of fixed capital in that year), which is then concretized at the level of individual prices of production, meaning that "individual values" of commodities do not exist. The theory can also sometimes be found in non-Marxist traditions. Confer: Weizsäcker, Carl Christian von (2010): A New Technical Progress Function (1962). German Economic Review 11/3 (first publication of an article written in 1962); Weizsäcker, Carl Christian von, and Paul A. Samuelson (1971): A new labor theory of value for rational planning through use of the bourgeois profit rate. Proceedings of the National Academy of Sciences.
For instance, Mutualism (economic theory), mutualist theorist Kevin Carson's ''Studies in Mutualist Political Economy'' opens with an attempt to integrate marginalist critiques into the labor theory of value. Some post-Keynesian economists have been highly critical of the labor theory of value. Joan Robinson, who herself was considered an expert on the writings of Karl Marx, wrote that the labor theory of value was largely a tautology and "a typical example of the way metaphysical ideas operate". The well-known Marxian economist Roman Rosdolsky replied to Robinson's claims at length, arguing that Robinson failed to understand key components of Marx's theory; for instance, Robinson argued that "Marx's theory, as we have seen, rests on the assumption of a constant rate of exploitation", but as Rosdolsky points out, there is a great deal of contrary evidence. In ecological economics, the labor theory of value has been criticized, where it is argued that labor is in fact energy over time. Such arguments generally fail to recognize that Marx is inquiring into social relations among human beings, which cannot be reduced to the expenditure of energy, just as democracy cannot be reduced to the expenditure of energy that a voter makes in getting to the polling place. However, echoing Joan Robinson, Alf Hornborg, an environmental historian, argues that both the reliance on "energy theory of value" and "labor theory of value" are problematic as they propose that use-values (or material wealth) are more "real" than exchange-values (or cultural wealth)--yet, use-values are culturally determined. For Hornborg, any Marxist argument that claims uneven wealth is due to the "exploitation" or "underpayment" of use-values is actually a tautological contradiction, since it must necessarily quantify "underpayment" in terms of exchange-value. 
The alternative would be to conceptualize unequal exchange as "an asymmetric net transfer of material inputs in production (e.g., embodied labor, energy, land, and water), rather than in terms of an underpayment of material inputs or an asymmetric transfer of 'value'". In other words, uneven exchange is characterised by incommensurability, namely: the unequal transfer of material inputs; competing value-judgements of the worth of labor, fuel, and raw materials; differing availability of industrial technologies; and the off-loading of environmental burdens onto those with fewer resources.
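The circularity that Bichler and Nitzan allege in the empirical value-price studies can be illustrated with a minimal simulation. This is a sketch with entirely synthetic numbers (no real price or labor data): if sectoral "values" are themselves derived from price data, the resulting value-price correlation is near-perfect by construction and therefore demonstrates nothing.

```python
import random

random.seed(0)

# Synthetic sectoral prices -- there is no labor-time data behind them.
prices = [random.uniform(1.0, 100.0) for _ in range(200)]

# Impute "values" from the same prices (here: deflating by a uniform
# markup), as the critiqued studies are argued to do implicitly.
values = [p / 1.2 for p in prices]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(prices, values)
print(round(r, 6))  # 1.0 up to rounding: prices correlated with themselves
```

The correlation is perfect regardless of whether any labor theory holds, which is exactly the point of the circularity objection.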
# A generalization
Some authors have proposed reconsidering the role of production equipment (constant capital) in the production of value, following hints in ''Das Kapital'', where Marx described in Chapter XV (Machinery and Modern Industry) the functional role of machinery, driven by external energy sources, in production processes as a substitute for labour.
# See also

* Abstract labor and concrete labor
* Cost the limit of price
* Division of labor
* Labor notes (currency)
* Law of value
* Prices of production
* Producerism
* Productive and unproductive labor
* Social division of labor
* Surplus labor
* Surplus product
* Surplus value
* Transformation problem
* Value-form
* Anarchy of production

Competing theories:

* Anarcho-communism
* Entitlement theory
* Marginalism
* Neo-Ricardianism
* Subjective theory of value
# References
* Bhaduri, Amit. 1969. "On the Significance of Recent Controversies on Capital Theory: A Marxian View." ''Economic Journal'' 79(315), September: 532–539.
* Böhm-Bawerk, Eugen von. ''Karl Marx and the Close of His System'' (classic criticism of Marxist economic theory).
* Cohen, G.A. "The Labour Theory of Value and the Concept of Exploitation", in his ''History, Labour and Freedom''.
* Duncan, Colin A.M. 1996. ''The Centrality of Agriculture: Between Humankind and The Rest of Nature.'' McGill–Queen's University Press, Montreal.
* —— 2000. "The Centrality of Agriculture: History, Ecology and Feasible Socialism." ''Socialist Register'', pp. 187–205.
* —— 2004. "Adam Smith's green vision and the future of global socialism." In Albritton, R.; Bell, Shannon; Bell, John R.; and Westra, R. (eds.), ''New Socialisms: Futures Beyond Globalization.'' New York/London: Routledge, pp. 90–104.
* Eldred, Michael (1984). ''Critique of Competitive Freedom and the Bourgeois-Democratic State: Outline of a Form-analytic Extension of Marx's Uncompleted System.'' With an appendix, "Value-form Analytic Reconstruction of the Capital-Analysis", by Michael Eldred, Marnie Hanlon, Lucia Kleiber and Mike Roth. Kurasje, Copenhagen. Emended, digitized edition 2010 with a new Preface, lxxiii + 466 pp.
* Ellerman, David P. (1992). ''Property & Contract in Economics: The Case for Economic Democracy.'' Blackwell. Chapters 4, 5, and 13 critique the LTV in favor of the labor theory of property.
* Engels, F. (1880). ''Socialism: Utopian and Scientific.''
* Freeman, Alan. "Price, value and profit – a continuous, general treatment." In: Freeman, Alan and Carchedi, Guglielmo (eds.), ''Marx and Non-equilibrium Economics.'' Edward Elgar Publishing, Cheltenham, UK / Brookfield, US, 1996.
* Hagendorf, Klaus. ''The Labour Theory of Value: A Historical-Logical Analysis.'' Paris: EURODOS, 2008.
* Hagendorf, Klaus. ''Labour Values and the Theory of the Firm. Part I: The Competitive Firm.'' Paris: EURODOS, 2009.
* Hansen, Bue Rübner (2011). "Review of ''Capital as Power'' by Jonathan Nitzan and Shimson Bichler." ''Historical Materialism'' 19, no. 2: 144–159.
* Henderson, James M.; Quandt, Richard E. 1971. ''Microeconomic Theory: A Mathematical Approach.'' Second Edition/International Student Edition. McGraw-Hill Kogakusha, Ltd.
* Keen, Steve. ''Use, Value, and Exchange: The Misinterpretation of Marx.''
* ([Internet edition: 1999] [1887 English edition]).
* Moseley, Fred (2016). ''Money and Totality.'' Leiden, Netherlands: Brill.
* Murray, Patrick (2016). ''The Mismeasure of Wealth: Essays on Marx and Social Form.'' Leiden, Netherlands: Brill.
* Ormazabal, Kepa M. (2004). ''Smith On Labour Value.'' Bilbo, Biscay, Spain: University of the Basque Country Working Paper.
* Parrington, Vernon Louis.
* ''Econodynamics: The Theory of Social Production.'' Springer, Dordrecht-Heidelberg-London-New York.
* Rubin, Isaak Illich (1928). ''Essays on Marx's Theory of Value.''
* Shaikh, Anwar (1998). "The Empirical Strength of the Labour Theory of Value." In Bellofiore, Riccardo (ed.), ''Conference Proceedings of Marxian Economics: A Centenary Appraisal.'' Macmillan, London.
* Vianello, Fernando (1987). "Labour theory of value." In: Eatwell, J., Milgate, M. and Newman, P. (eds.), ''The New Palgrave: A Dictionary of Economics.'' Macmillan and Stockton, London and New York.
* Wolff, Jonathan (2003). ''Karl Marx''
in ''Stanford Encyclopedia of Philosophy''. * {{DEFAULTSORT:Labor Theory Of Value Labour economics Marxian economics Classical economics Theory of value (economics) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 15, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41417208313941956, "perplexity": 4147.758594915702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103989282.58/warc/CC-MAIN-20220702071223-20220702101223-00716.warc.gz"} |
http://segh.net/about/ | Diverse scientific fields and multidisciplinary expertise brought together within an international community
SEGH was established in 1971 to provide a forum for scientists from various disciplines to work together in understanding the interaction between the geochemical environment and the health of plants, animals, and humans.
SEGH recognizes the importance of interdisciplinary research, representing expertise in a diverse range of scientific fields, such as biology, engineering, geology, hydrology, epidemiology, chemistry, medicine, nutrition, and toxicology.
SEGH members come from a variety of backgrounds within the academic, regulatory, and industrial communities, thus providing a representative perspective on current issues and concerns.
SEGH membership is international, and there are regional sections to coordinate activities in Europe, the Americas and Asia/Pacific.
Organisational Profile
President and Regional Chairs:

* President: Dr Chaosheng Zhang, University of Galway (chaosheng.zhang@nuigalway.ie)
* European Chair: Dr Chaosheng Zhang, University of Galway
* Americas Chair: Dr. Nurdan S. Duzgoren-Aydin, New Jersey City University
* Asia/Pacific Chair: Prof. Kyoung-Woong Kim, Korea (kwkim@gist.ac.kr)
China-Ireland Consortium: Taicheng An (China), Yongguan Zhu (China), Chaosheng Zhang (NUI Galway, Ireland)
Organisational roles
* Membership Secretary / Treasurer: Mrs Anthea Brown, Rt. British Geological Survey (seghmembership@gmail.com)
* Secretary: Mr Malcolm Brown, Rt. British Geological Survey (segh.secretary@gmail.com)
* Webmaster: Dr Michael Watts, British Geological Survey (seghwebmaster@gmail.com)
SEGH is a member of the Geological Society of America's Associated Society Partnerships. For more information on educational programmes, collaborations and communications link to www.geosociety.org.
Keep up to date
## SEGH 34th International Conference on Sustainable Geochemistry
Victoria Falls, Zimbabwe
02 July 2018
## Submit Content
Members can keep in touch with their colleagues through short news and events articles of interest to the SEGH community.
## Science in the News
Latest on-line papers from the SEGH journal: Environmental Geochemistry and Health
• Characteristics of PM2.5, CO2 and particle-number concentration in mass transit railway carriages in Hong Kong (2017-08-01)
### Abstract
Fine particulate matter (PM2.5) levels, carbon dioxide (CO2) levels and particle-number concentrations (PNC) were monitored in train carriages on seven routes of the mass transit railway in Hong Kong between March and May 2014, using real-time monitoring instruments. The 8-h average PM2.5 levels in carriages on the seven routes ranged from 24.1 to 49.8 µg/m³, higher than levels in Finland and similar to those in New York, and in most cases exceeding the standard set by the World Health Organisation (25 µg/m³). The CO2 concentration ranged from 714 to 1801 ppm on four of the routes, generally exceeding indoor air quality guidelines (1000 ppm over 8 h) and reaching levels as high as those in Beijing. PNC ranged from 1506 to 11,570 particles/cm³, lower than readings in Sydney and higher than readings in Taipei. Correlation analysis indicated that the number of passengers in a given carriage did not affect the PM2.5 concentration or PNC in the carriage. However, a significant positive correlation (p < 0.001, R² = 0.834) was observed between passenger numbers and CO2 levels, with each passenger contributing approximately 7.7–9.8 ppm of CO2. The real-time measurements of PM2.5 and PNC varied considerably, rising when carriage doors opened on arrival at a station and when passengers inside the carriage were more active. This suggests that air pollutants outside the train and passenger movements may contribute to PM2.5 levels and PNC. Assessment of the risk associated with PM2.5 exposure revealed that children are most severely affected by PM2.5 pollution, followed in order by juveniles, adults and the elderly. In addition, females were found to be more vulnerable to PM2.5 pollution than males (p < 0.001), and different subway lines were associated with different levels of risk.
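The per-passenger CO2 figure quoted above is the slope of a linear fit of CO2 level against passenger count. A minimal ordinary-least-squares sketch on synthetic data (the pairs below are invented, not the study's measurements; an 8.5 ppm/passenger slope and a 700 ppm baseline are built in):

```python
# Ordinary least squares via the closed-form slope/intercept formulas.
pairs = [(n, 700.0 + 8.5 * n) for n in range(10, 60, 5)]  # (passengers, CO2 ppm)

n = len(pairs)
sx = sum(x for x, _ in pairs)
sy = sum(y for _, y in pairs)
sxx = sum(x * x for x, _ in pairs)
sxy = sum(x * y for x, y in pairs)

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # ppm CO2 per passenger
intercept = (sy - slope * sx) / n                  # baseline ppm
print(slope, intercept)
```

With noiseless synthetic data the fit recovers the built-in slope and baseline; on real measurements the slope would come with the confidence statistics (p, R²) reported in the abstract.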
• Comparison of chemical compositions in air particulate matter during summer and winter in Beijing, China (2017-08-01)
### Abstract
The development of industry in Beijing, the capital of China, particularly in the last decades, has caused severe environmental pollution, including particulate matter (PM), dust–haze, and photochemical smog, which has already caused considerable harm to the local ecological environment. Thus, in this study, air particle samples were continuously collected in August and December 2014, and elements (Si, Al, V, Cr, Mn, Fe, Ni, Cu, Zn, Mo, Cd, Ba, Pb and Ti) and ions ( $${\text{NO}}_{3}^{-}$$ , $${\text{SO}}_{4}^{2-}$$ , F⁻, Cl⁻, Na+, K+, Mg2+, Ca2+ and $${\text{NH}}_{4}^{+}$$ ) were analyzed by inductively coupled plasma mass spectrometry and ion chromatography. Pollution situations across the seasons are discussed in order to identify possible particulate matter sources and to propose appropriate control strategies to the local government. The results indicated serious PM and metallic pollution on some sampling days, especially in December. A Chemical Mass Balance model revealed central heating activities, road dust and vehicles as the main sources, accounting for between 5.84 and 32.05 % of the summer and winter air pollution in 2014, depending on source and season.
• Annual ambient atmospheric mercury speciation measurement from Longjing, a rural site in Taiwan (2017-08-01)
### Abstract
The main purpose of this study was to monitor ambient air particulate and mercury species [RGM, Hg(p), GEM and total mercury] concentrations and dry depositions over a rural area at Longjing in central Taiwan from October 2014 to September 2015. A passive air sampler and knife-edge surrogate surface samplers were used to collect ambient air mercury species concentrations and dry depositions, respectively, and a direct mercury analyzer was used to detect the Hg(p) and RGM concentrations. The results indicated that: (1) The highest average RGM, Hg(p), GEM and total mercury concentrations and dry depositions were observed in January; the prevailing dust storms of the winter season were the possible major reason for this finding. (2) The highest average RGM, Hg(p), GEM and total mercury concentrations, dry depositions and velocities occurred in winter. This is because China is the largest atmospheric mercury (Hg) emitter in the world, and its Hg emissions and environmental impacts need to be evaluated. (3) The total mercury ratio of Kaohsiung to that of this study was 5.61; Kaohsiung has the largest industry density (~60 %) in Taiwan. (4) The USA showed lower average mercury species concentrations when compared to those of the other countries considered. The average ratios of China/USA values were 89, 76 and 160 for total mercury, RGM and Hg(p), respectively, during the years 2000–2012.
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3109772205352783, "perplexity": 7482.382811162749}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549427766.28/warc/CC-MAIN-20170729112938-20170729132938-00446.warc.gz"} |
https://mathoverflow.net/questions/405505/reference-request-for-a-proof-of-the-two-square-theorem | Reference request for a proof of the two-square Theorem
One can show (see below for a sketch of a proof) that every odd prime number $$p$$ can be written in exactly $$(p+1)/2$$ different ways as $$p=a\cdot b+c\cdot d$$ with $$a,b,c,d\in\mathbb N$$ satisfying $$\max(c,d)<\min(a,b)$$.
Example for $$p=23$$: $$\begin{matrix} 1\cdot 23+0\cdot 0 & 23\cdot 1+0\cdot 0 \\ 2\cdot 11+1\cdot 1 & 11\cdot 2+1\cdot 1 \\ 3\cdot 7+1\cdot 2 & 3\cdot 7+2\cdot 1 \\ 7\cdot 3+1\cdot 2 & 7\cdot 3+2\cdot 1 \\ 4\cdot 5+1\cdot 3 & 4\cdot 5+3\cdot 1 \\ 5\cdot 4+1\cdot 3 & 5\cdot 4+3\cdot 1 \end{matrix}$$
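The count $$(p+1)/2$$ is easy to verify by brute force. A short sketch (the helper name is mine) that enumerates all ordered quadruplets $$(a,b,c,d)$$ with $$p=a\cdot b+c\cdot d$$ and $$\max(c,d)<\min(a,b)$$:

```python
# Brute-force check of the claimed count (p+1)/2 for odd primes p.
# Ordered quadruplets (a,b,c,d) of non-negative integers with
# p = a*b + c*d and max(c,d) < min(a,b); note a,b >= 1 is forced.

def representations(p):
    sols = []
    for a in range(1, p + 1):
        for b in range(1, p + 1):
            if a * b > p:
                break
            r = p - a * b
            if r == 0:
                # a*b = p with p prime forces {a,b} = {1,p}, so
                # max(c,d) < min(a,b) = 1 forces c = d = 0
                sols.append((a, b, 0, 0))
                continue
            for c in range(1, min(a, b)):
                if r % c == 0 and r // c < min(a, b):
                    sols.append((a, b, c, r // c))
    return sols

for p in (3, 5, 7, 11, 13, 23, 29, 31):
    assert len(representations(p)) == (p + 1) // 2

print(sorted(representations(23)))  # the 12 quadruplets of the table above
```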
Klein's Vierergruppe $$\mathbb V$$ acts on all such quadruplets $$(a,b,c,d)$$ by permuting the first two, permuting the last two, or permuting both the first two and the last two elements. So we get an easy proof that every prime $$p$$ congruent to $$1$$ modulo $$4$$ must be a sum of squares: $$(p+1)/2$$ is then odd and the only fixed points under the action of $$\mathbb V$$ are of the form $$(a,a,c,c)$$.
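The conclusion of the parity argument can be checked numerically: for $$p\equiv 1\pmod 4$$, a $$\mathbb V$$-fixed quadruplet $$(a,a,c,c)$$ exists and yields $$p=a^2+c^2$$. A small sketch (the function name is mine):

```python
from math import isqrt

# Find the V-fixed solution (a, a, c, c): p = a*a + c*c with c < a
# (i.e. max(c,c) < min(a,a)), which the parity argument guarantees
# for primes p ≡ 1 (mod 4).

def fixed_point(p):
    for a in range(1, isqrt(p) + 1):
        c2 = p - a * a
        c = isqrt(c2)
        if c * c == c2 and c < a:
            return (a, a, c, c)
    return None

for p in (5, 13, 17, 29, 37, 41, 53, 61):
    a, _, c, _ = fixed_point(p)
    assert a * a + c * c == p and c < a

print(fixed_point(13))  # (3, 3, 2, 2): 13 = 3*3 + 2*2
```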
Does somebody know a reference for this proof? It looks a bit like Zagier's proof which also uses a parity argument for a set acted upon by involutions.
Motivation: This is in fact a variation of the question ''Arithmetic properties of positively reduced $2\times 2$-matrices''.
Sketch of proof Given a solution $$(a,b,c,d)$$ we consider $$u=(a,c),\ v=(-d,b)$$. We associate to $$(a,b,c,d)$$ the sublattice $$\Lambda=\mathbb Zu+\mathbb Zv$$ of index $$p$$ in $$\mathbb Z^2$$. Suppose now $$cd>0$$ and consider the eight open cones of $$\mathbb R^2$$ defined by the complement of the four lines defined by $$xy(x^2-y^2)=0$$. We colour these open cones alternatingly black and white. The four vectors $$\pm u,\pm v$$ are contained in different black cones (colouring the first cone above the halfline $$(\mathbb R_{>0},0)$$ in black).
We say that a sublattice $$\Lambda$$ of finite index in $$\mathbb Z^2$$ has a monochromatic basis if there exists a basis $$b_1,b_2$$ of $$\Lambda=\mathbb Z b_1+\mathbb Z b_2$$ such that all four elements $$\pm b_1,\pm b_2$$ belong to different open cones of the same colour.
(Not every lattice has a monochromatic basis but many do.)
We claim that all monochromatic bases of a lattice (having a monochromatic basis) are of the same colour: If $$b_1,b_2$$ is a black monochromatic basis and $$w_1,w_2$$ is a white monochromatic basis, then $$w_1,w_2$$ belong to two open adjacent cones of $$\mathbb R^2\setminus(\mathbb Rb_1\cup \mathbb R b_2)$$ which is impossible by the following small Lemma:
Lemma: If $$f_1,f_2$$ and $$g_1,g_2$$ are two bases of a lattice $$\Lambda=\mathbb Z f_1+\mathbb Z f_2=\mathbb Z g_1+\mathbb Z g_2$$ such that $$\{\pm f_1,\pm f_2\}$$ and $$\{\pm g_1,\pm g_2\}$$ do not intersect, then $$g_1,g_2$$ or $$g_1,-g_2$$ are contained in a common connected component of $$\mathbb R^2\setminus(\mathbb R f_1\cup \mathbb R f_2)$$. (Otherwise we have, up to sign changes and exchanges of indices, $$f_1=\alpha g_1+\beta g_2$$ and $$f_2=\gamma g_1-\delta g_2$$ with $$\alpha,\beta,\gamma,\delta$$ strictly positive integers. This implies that $$g_1$$ belongs to the convex hull of $$(0,0),f_1,f_2$$, which is a contradiction.)
Set now $$\Lambda_\mu=\{(x,y)\in\mathbb Z^2\ \vert\ x+\mu y\equiv 0\pmod p\}$$. If $$\mu\in \{2,\ldots,p-2\}$$, then $$\Lambda_\mu$$ contains no elements of the form $$(\pm m,0),(0,\pm m),(\pm m,\pm m)$$ with $$m$$ in $$\{1,\ldots,p-1\}$$. This implies that every open black or white cone contains a point with coordinates of absolute value at most $$p-1$$. A reduction algorithm implies the existence of a monochromatic basis. (Start with two arbitrary non-zero elements $$e_1,e_2$$ of $$\Lambda$$ having coordinates of absolute value smaller than $$p$$ which belong to two different consecutive black cones. If the interior of the convex hull spanned by $$\pm e_1,\pm e_2$$ contains a non-zero element in a black cone, we can replace $$e_1$$ or $$e_2$$ and decrease the area of the convex hull spanned by $$\pm e_1,\pm e_2$$. If the interior contains no non-zero elements of $$\Lambda$$ in black cones, we get either a monochromatic basis or the convex hull contains at least four lattice points in four distinct white cones and we switch the working colour to white.)
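A quick computational sanity check (not part of the proof; the names are mine) of the claim that $$\Lambda_\mu$$ avoids the four lines $$xy(x^2-y^2)=0$$ at height below $$p$$ when $$\mu\in\{2,\ldots,p-2\}$$:

```python
# Lambda_mu = {(x, y) in Z^2 : x + mu*y ≡ 0 (mod p)}.
# Check it contains no point (±m, 0), (0, ±m), (±m, ±m) with 1 <= m <= p-1.
# The lattice is symmetric under negation, so testing one sign
# representative of each ± pair suffices.

def in_lattice(x, y, mu, p):
    return (x + mu * y) % p == 0

for p in (5, 7, 11, 13, 23):
    for mu in range(2, p - 1):
        for m in range(1, p):
            assert not in_lattice(m, 0, mu, p)    # horizontal line
            assert not in_lattice(0, m, mu, p)    # vertical line
            assert not in_lattice(m, m, mu, p)    # diagonal y = x
            assert not in_lattice(m, -m, mu, p)   # diagonal y = -x
print("ok")
```

The excluded values $$\mu=1$$ and $$\mu=p-1$$ are exactly those for which the diagonals $$y=\pm x$$ would meet the lattice at small height.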
Moreover, since $$\Lambda_\mu$$ and $$\Lambda_{p-\mu}$$ differ by a horizontal reflection, they have monochromatic bases of different colours. Retaining only lattices with black monochromatic bases, we get $$(p-2-2+1)/2=(p-3)/2$$ such lattices.
Monochromatic bases of a lattice $$\Lambda_\mu$$ are not unique but in finite number. It remains to show that exactly one black monochromatic basis of a lattice $$\Lambda_\mu$$ has the form $$u=(a,c),v=(-d,b)$$ as required for a solution of $$p=ab+cd$$ (with $$\min(a,b)>\max(c,d)$$ and $$0\leq c,d$$). We call such a basis a reduced monochromatic basis. First observe that every lattice with a black monochromatic basis $$b_1,b_2$$ (where we suppose $$b_1\in\mathbb N^2$$ and $$b_2$$ in $$(-\mathbb N)\times\mathbb N$$) has a reduced black monochromatic basis: Replace $$b_1$$ by $$b_1-kb_2$$ with $$k$$ maximal for monochromaticity. Replace then $$b_2$$ by $$b_2+sb_1$$ with $$s$$ maximal for monochromaticity. The resulting black monochromatic basis is reduced.
Observing that the two lattices $$\mathbb Z(p,0)+\mathbb Z(1,\pm 1)$$ contain no vectors $$u,v$$ (associated to a solution $$(a,b,c,d)$$ such that ...) and adding the two trivial solutions $$(p,1,0,0),(1,p,0,0)$$ (corresponding to the lattices $$\mathbb Z(p,0)+\mathbb Z(0,1)$$ and $$\mathbb Z(1,0)+\mathbb Z(0,p)$$) we get a total number of at least $$(p-3)/2+2=(p-1)/2$$ solutions and we are done after showing that every lattice with a black monochromatic basis contains only one reduced monochromatic basis (also using the fact that sublattices of prime index $$p$$ are in one-to-one correspondence with points of the projective line over $$\mathbb F_p$$).
Suppose now that $$u=(a,c),v=(-d,b)$$ is a reduced black basis and let $$u'=(a',c'),v'=(-d',b')$$ be a second reduced black basis giving rise to two distinct solutions $$(a,b,c,d)$$ and $$(a',b',c',d')$$. Since $$u$$ and $$v$$ determine each other uniquely in a reduced black basis, we can assume that $$u'\not=u$$ and $$v'\not=v$$.
The two vectors $$u',v'$$ are thus contained in the four open cones defined by $$\mathbb R^2\setminus(\mathbb R u\cup\mathbb R v)$$.
The lemma used previously shows that they cannot belong to two adjacent cones of $$\mathbb R^2\setminus(\mathbb R u\cup\mathbb R v)$$.
We suppose now that $$u',v'$$ belong to $$\mathcal C\cup (-\mathcal C)$$ for $$\mathcal C$$ a cone (connected component) of $$\mathbb R^2\setminus(\mathbb R u\cup \mathbb R v)$$. If $$u'$$ and $$v'$$ belong to two opposite cones, we exchange the basis $$u,v$$ with the basis $$u',v'$$. We can now assume that both vectors $$u'$$ and $$v'$$ belong to the open cone $$(0,+\infty)u+ (0,+\infty)v$$ spanned by $$u$$ and $$v$$. We have thus $$u'=\alpha u+\beta v$$ and $$v'=\gamma u+\delta v$$ with $$\alpha,\beta,\gamma,\delta$$ strictly positive integers. Reducedness of the black monochromatic basis $$u,v$$ implies that $$v+u$$ is either white or belongs to the black cone $$\mathcal C_u$$ containing $$u$$. If $$u+v$$ is white, we get a contradiction by observing that it is contained in the convex hull of $$(0,0),u',v'$$ (which contains no other points of $$\Lambda$$). The point $$u+v$$ is thus in the black cone $$\mathcal C_u$$ containing $$u$$. Geometric considerations imply now $$\delta\geq 3$$ and the impossible inequalities $$p\geq (3 b+c)a/2>(3ab+ac)/2>ab+a(b+c)/2>ab+cd=p\ .$$
Indeed, since $$u$$ and $$u+v$$ belong both to $$\mathcal C_u$$, the line $$u+\mathbb Rv$$ of slope $$<-1$$ intersects the white cone separating $$\mathcal C_u$$ from the black cone $$\mathcal C_v$$ containing $$v$$ in a segment containing at least one lattice-point of $$\Lambda$$. This implies $$\delta\geq 3$$ and the second coordinate of $$v'$$ is thus at least equal to $$3b+c$$. On the other hand, the first coordinate of $$u'$$ has to be at least equal to the first coordinate of the intersection $$u+\mathbb Rv \cap \mathbb R(1,1)$$ which is $$\geq a/2$$ (since $$v$$ has slope $$<-1$$).
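The claim about $$\Lambda_\mu$$ above can be checked by brute force for a small prime. The Python sketch below (illustrative only, not part of the proof; the helper names are mine) verifies that for $$p=13$$ and every $$\mu\in\{2,\ldots,p-2\}$$ the lattice contains none of the forbidden vectors $$(\pm m,0),(0,\pm m),(\pm m,\pm m)$$ with $$1\le m\le p-1$$:

```python
# Brute-force check: Lambda_mu = {(x, y) : x + mu*y == 0 (mod p)} contains
# no vector (+-m, 0), (0, +-m), (+-m, +-m) with 1 <= m <= p - 1,
# provided mu is in {2, ..., p-2}.

def in_lattice(x, y, mu, p):
    return (x + mu * y) % p == 0

def has_forbidden_vector(mu, p):
    for m in range(1, p):
        candidates = [(m, 0), (-m, 0), (0, m), (0, -m),
                      (m, m), (-m, -m), (m, -m), (-m, m)]
        if any(in_lattice(x, y, mu, p) for x, y in candidates):
            return True
    return False

p = 13
# holds for all mu in {2, ..., p-2}
assert all(not has_forbidden_vector(mu, p) for mu in range(2, p - 1))
```

The excluded values $$\mu=1$$ and $$\mu=p-1$$ do produce forbidden vectors of the form $$(m,-m)$$ and $$(m,m)$$ respectively, which is why the range $$\{2,\ldots,p-2\}$$ is needed.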
• I learned recently that, although I am sure I have only ever heard Viergruppe, it is apparently Vierergruppe. \\ The not-very-difficult starting fact is too difficult for me. Does it go too far afield to sketch a proof? Oct 5 at 21:47
• As explained by Mathologer, Zagier's proof can be thought of as boiling down to showing that for primes $p \equiv 1\pmod{4}$, the equation $p = x^2 + 4yz$ has an odd number of positive integer solutions $(x,y,z)$. So it seems to be not quite the same proof. You might look at Christian Elsholtz's paper, A combinatorial approach to sums of two squares and related problems, to see if your proof is implicitly in there somewhere. Oct 6 at 1:43
• @LSpice : Viergruppe corrected to Vierergruppe. I have added a slightly sketchy proof. Oct 6 at 6:56
• Regardless of whether new or not, this is a real neat proof, Roland. Oct 6 at 10:16
• A quite recent proof by A. David Christopher (A partition-theoretic proof of Fermat's two squares theorem. Discrete Math. 339 (2016), no. 4, 1410–1411) also splits p into such a sum of products of two factors, $a_1 f_1+ a_2 f_2$. Oct 6 at 15:34
https://cantera.org/tutorials/legacy2yaml.html | ## Converting CTI and XML input files to YAML¶
If you want to convert an existing legacy CTI or XML input file to the YAML format, this section will help.
## cti2yaml¶
Cantera comes with a converter utility cti2yaml (or cti2yaml.py) that converts legacy CTI format mechanisms into the new YAML format introduced in Cantera 2.5. This program can be run from the command line to convert files to the YAML format. (New in Cantera 2.5)
Usage:
cti2yaml [-h] input [output]
The input argument is required, and specifies the name of the input file to be converted. The optional output argument specifies the name of the new output file. If output is not specified, then the output file will have the same name as the input file, with the extension replaced with .yaml.
Example:
cti2yaml mymech.cti
will generate the output file mymech.yaml.
If the cti2yaml script is not on your path, but the Cantera Python module is, cti2yaml can be used by running:
python -m cantera.cti2yaml mymech.cti
It is not necessary to use cti2yaml to convert any of the CTI input files included with Cantera. YAML versions of these files are already included with Cantera.
For input files where you have both the CTI and XML versions, cti2yaml is recommended over ctml2yaml. In cases where the mechanism was originally converted from a CK-format mechanism, it is recommended to use ck2yaml if the original input files are available.
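The output-naming rule described above (same name, extension replaced with .yaml) can be sketched in Python. This is an illustration of the documented behavior, not cti2yaml's actual implementation:

```python
from pathlib import Path

def default_output_name(input_file: str) -> str:
    """Mimic the documented default: same name, extension replaced with .yaml."""
    return str(Path(input_file).with_suffix(".yaml"))

# mymech.cti -> mymech.yaml, matching the example above
assert default_output_name("mymech.cti") == "mymech.yaml"
```

The same rule applies to ctml2yaml, so `mymech.xml` also maps to `mymech.yaml`.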
## ctml2yaml¶
Cantera comes with a converter utility ctml2yaml (or ctml2yaml.py) that converts legacy XML (CTML) format mechanisms into the new YAML format introduced in Cantera 2.5. This program can be run from the command line to convert files to the YAML format. (New in Cantera 2.5)
Usage:
ctml2yaml [-h] input [output]
The input argument is required, and specifies the name of the input file to be converted. The optional output argument specifies the name of the new output file. If output is not specified, then the output file will have the same name as the input file, with the extension replaced with .yaml.
Example:
ctml2yaml mymech.xml
will generate the output file mymech.yaml.
If the ctml2yaml script is not on your path, but the Cantera Python module is, ctml2yaml can be used by running:
python -m cantera.ctml2yaml mymech.xml
It is not necessary to use ctml2yaml to convert any of the XML input files included with Cantera. YAML versions of these files are already included with Cantera.
http://gmatclub.com/forum/fox-jeans-regularly-sells-for-15-a-pair-and-pony-jeans-22647.html?fl=similar | Find all School-related info fast with the new School-Specific MBA Forum
It is currently 24 Jul 2016, 10:36
# Fox Jeans regularly sells for $15 a pair and Pony jeans
07 Nov 2005, 00:50
Fox Jeans regularly sells for $15 a pair and Pony jeans regularly sell for $18 a pair. During a sale these regular unit prices are discounted at different rates so that a total of $9 is saved by purchasing 5 pairs of jeans: 3 pairs of Fox jeans and 2 pairs of Pony jeans. If the sum of the two discount rates is 22 percent, what is the discount rate on Pony jeans?
Can you please confirm if the bold part of the question is correct?
Fox Jeans regularly sells for $15 a pair and Pony jeans regularly sell for $18 a pair. During a sale these regular unit prices are discounted at different rates so that a total of $9 is saved by purchasing 5 pairs of jeans: 3 pairs of Fox jeans and 2 pairs of Pony jeans. If the sum of the two discount rates is 22 percent, what is the discount rate on Pony jeans?

sorry its 3am here

10%: solve 3*15*x + 2*18*(0.22 - x) = 9 for x
Thanks for the changes. Here is my answer.
3 Fox jeans before discount: $45. 2 Pony jeans before discount: $36.
Let x be the discount for FJ and y be the discount for PJ.
45 * (X/100) + (36 * Y/100) = 9
X + Y = 22
.45 X + (.36 (22 - X)) = 9
.45 X + 7.92 - .36X = 9
.09 X = 1.08
X = 12 and Y = 10
Hence Discount for PJ = 10%
Discount on pony is 10 %
x+y =22
3*15*x/100+2*18*y/100=9
solve for y: 10%
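Both solutions above reduce to the same two-equation linear system, which can be verified numerically; a minimal Python check (illustrative only, not from the thread):

```python
# Solve x + y = 22 and 0.45*x + 0.36*y = 9, where x is the Fox discount
# rate in percent and y is the Pony discount rate in percent.
def solve_discounts():
    # Substitute y = 22 - x:
    # 0.45*x + 0.36*(22 - x) = 9  =>  0.09*x = 1.08  =>  x = 12
    x = (9 - 0.36 * 22) / (0.45 - 0.36)
    y = 22 - x
    return round(x), round(y)

assert solve_discounts() == (12, 10)
```

So the discount on Pony jeans is 10%, in agreement with both posts.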
https://ajayshahblog.blogspot.com/2007/12/daily-data-from-electronic-payments.html | ## Saturday, December 29, 2007
### Daily data from electronic payments systems: a new treasurehouse of data on the economy
John W. Galbraith and Greg Tkacz have a paper which, to me, implies that RBI should initiate release of daily data on traffic on payments networks such as credit cards, debit cards, etc.
#### 1 comment:
1. Hello Sir
I've written a blogpost concerning SS Tarapore's article in IIMA's journal - Vikalpa.
Would like your views on the same
https://brilliant.org/problems/defying-gravity-4-sided/ | # Defying Gravity! - 4-sided
This is an extension of this problem.
Now, imagine four surfaces of equal length making a square in a controlled environment, with the same parameters (the gravity can be switched to be perpendicular to any of the surfaces). Now, say a ball is dropped from the midpoint of one of the sides of the square, falling in the direction perpendicular to the adjacent side of the square. After a certain time period $$t_1$$, the direction of the acceleration due to gravity is switched in a cyclic manner, to the next adjacent surface. Then, after time $$t_2$$, the direction is switched in the same fashion. Then after every time period $$T$$, the direction of gravity is changed. The periodic changes occur in such a way that the ball visits the midpoints of all four surfaces. If the surfaces are $$1\,\mathrm{km}$$ long, and the acceleration due to gravity is $$10\,\mathrm{m\,s^{-2}}$$, find $$\lfloor t_2-t_1\rfloor$$.
Also, derive the expression for $$T$$, in terms of $$k$$ and $$g$$, which are length of surface and acceleration due to gravity respectively.
https://link.springer.com/article/10.1007/s10107-018-1340-y?error=cookies_not_supported&code=a627509b-2db0-455e-9503-c21b5abde8d3 | # Behavior of accelerated gradient methods near critical points of nonconvex functions
## Abstract
We examine the behavior of accelerated gradient methods in smooth nonconvex unconstrained optimization, focusing in particular on their behavior near strict saddle points. Accelerated methods are iterative methods that typically step along a direction that is a linear combination of the previous step and the gradient of the function evaluated at a point at or near the current iterate. (The previous step encodes gradient information from earlier stages in the iterative process). We show by means of the stable manifold theorem that the heavy-ball method is unlikely to converge to strict saddle points, which are points at which the gradient of the objective is zero but the Hessian has at least one negative eigenvalue. We then examine the behavior of the heavy-ball method and other accelerated gradient methods in the vicinity of a strict saddle point of a nonconvex quadratic function, showing that both methods can diverge from this point more rapidly than the steepest-descent method.
## References
1. Attouch, H., Cabot, A.: Convergence rates of inertial forward–backward algorithms. SIAM J. Optim. 28(1), 849–874 (2018)
2. Attouch, H., Goudou, X., Redont, P.: The heavy ball with friction method, I. The continuous dynamical system: global exploration of the local minima of a real-valued function by asymptotic analysis of a dissipative dynamical system. Commun. Contemp. Math. 2(01), 1–34 (2000)
3. Attouch, H., Peypouquet, J.: The rate of convergence of Nesterov’s accelerated forward-backward method is actually faster than $$1/k^2$$. SIAM J. Optim. 26(3), 1824–1834 (2016)
4. Bubeck, S.: Convex optimization: algorithms and complexity. Found. Trends Mach. Learn. 8(3–4), 231–357 (2015)
5. Chambolle, A., Dossal, Ch.: On the convergence of the iterates of the “fast iterative shrinkage/thresholding algorithm”. J. Optim. Theory Appl. 166(3), 968–982 (2015)
6. Du, S.S., Jin, C., Lee, J.D., Jordan, M.I., Singh, A., Poczos, B.: Gradient descent can take exponential time to escape saddle points. In: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 30, pp. 1067–1077. Curran Associates Inc, Red Hook (2017)
7. Ghadimi, S., Lan, G.: Accelerated gradient methods for nonconvex nonlinear and stochastic programming. Math. Program. 156(1–2), 59–99 (2016)
8. Jin, C., Netrapalli, P., Jordan, M.I.: Accelerated gradient descent escapes saddle points faster than gradient descent. arXiv preprint arXiv:1711.10456, (2017)
9. Lee, J.D., Simchowitz, M., Jordan, M.I., Recht, B.: Gradient descent only converges to minimizers. JMLR Workshop Conf. Proc. 49(1), 1–12 (2016)
10. Li, H., Lin, Z.: Accelerated proximal gradient methods for nonconvex programming. In: Cortes, C., Lawrence, N.D., Lee, D.D., Sugiyama, M., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 28, pp. 379–387. Curran Associates Inc, Red Hook (2015)
11. Nesterov, Y.: A method for unconstrained convex problem with the rate of convergence $$O(1/k^2)$$. Dokl AN SSSR 269, 543–547 (1983)
12. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Springer, New York (2004)
13. Polyak, B.T.: Introduction to Optimization. Optimization Software (1987)
14. Recht, B., Wright, S.J.: Nonlinear Optimization for Machine Learning (2017). (Manuscript in preparation)
15. Shub, M.: Global Stability of Dynamical Systems. Springer, Berlin (1987)
16. Tseng, P.: On accelerated proximal gradient methods for convex-concave optimization. Technical report, Department of Mathematics, University of Washington, (2008)
17. Zavriev, S.K., Kostyuk, F.V.: Heavy-ball method in nonconvex optimization problems. Comput. Math. Model. 4(4), 336–341 (1993)
## Acknowledgements
We are grateful to Bin Hu for his advice and suggestions on the manuscript. We are also grateful to the referees and editor for helpful suggestions.
## Author information
Authors
### Corresponding author
Correspondence to Stephen J. Wright.
This work was supported by NSF Awards IIS-1447449, 1628384, 1634597, and 1740707; AFOSR Award FA9550-13-1-0138; and Subcontract 3F-30222 from Argonne National Laboratory. Part of this work was done while the second author was visiting the Simons Institute for the Theory of Computing, and partially supported by the DIMACS/Simons Collaboration on Bridging Continuous and Discrete Optimization through NSF Award CCF-1740425.
## A Properties of the sequence $$\{t_k\}$$ defined by (51)
In this appendix we show that the following two properties hold for the sequence defined by (51):
\begin{aligned} \frac{t_{k-1}-1}{t_k} \;\; \text{ is } \text{ an } \text{ increasing } \text{ nonnegative } \text{ sequence } \end{aligned}
(52)
and
\begin{aligned} \lim _{k \rightarrow \infty } \frac{t_{k-1}-1}{t_k} = 1. \end{aligned}
(53)
We begin by noting two well known properties of the sequence $$t_k$$ (see for example [4, Section 3.7.2]):
\begin{aligned} t_k^2 - t_k = t_{k-1}^2 \end{aligned}
(54)
and
\begin{aligned} t_k \ge \frac{k+1}{2}. \end{aligned}
(55)
To prove that $$\frac{t_{k-1}-1}{t_k}$$ is monotonically increasing, we need
\begin{aligned} \frac{t_{k-1}-1}{t_k} = \frac{t_{k-1}}{t_k} - \frac{1}{t_k} \le \frac{t_k}{t_{k+1}} - \frac{1}{t_{k+1}} = \frac{t_k - 1}{t_{k+1}}, \quad k=1,2,\ldots . \end{aligned}
Since $$t_{k+1} \ge t_k$$ (which follows immediately from (51)), it is sufficient to prove that
\begin{aligned} \frac{t_{k-1}}{t_k} \le \frac{t_k}{t_{k+1}}. \end{aligned}
By manipulating this expression and using (54), we obtain the equivalent expression
\begin{aligned} t_{k-1} \le \frac{t_k^2}{t_{k+1}} = \frac{t_{k+1}^2 - t_{k+1}}{t_{k+1}} = t_{k+1} - 1. \end{aligned}
(56)
By definition of $$t_{k+1}$$, we have
\begin{aligned} t_{k+1} = \frac{\sqrt{4t_k^2 + 1} + 1}{2} \ge t_k + \frac{1}{2} = \frac{\sqrt{4t_{k-1}^2 + 1} + 1}{2} + \frac{1}{2} \ge t_{k-1} + 1. \end{aligned}
Thus (56) holds, so the claim (52) is proved. The sequence $$\{ (t_{k-1}-1)/t_k \}$$ is nonnegative, since $$(t_0-1)/t_1 = 0$$.
Now we prove (53). We can lower-bound $$(t_{k-1}-1)/t_k$$ as follows:
\begin{aligned} \frac{t_{k-1}-1}{t_k}&= \frac{2(t_{k-1} -1)}{\sqrt{4 t_{k-1}^2 +1} + 1}\ge \frac{2(t_{k-1} -1)}{\sqrt{4 t_{k-1}^2} + 2} \nonumber \\&= \frac{2(t_{k-1} -1)}{2 (t_{k-1} + 1)} = 1 - \frac{2}{t_{k-1}+1}. \end{aligned}
(57)
For an upper bound, we have from $$t_k \ge t_{k-1}$$ that
\begin{aligned} \frac{t_{k-1} -1}{t_k} \le \frac{t_{k-1}}{t_k} \le 1. \end{aligned}
(58)
Since $$t_{k-1} \rightarrow \infty$$ (because of (55)), it follows from (57) and (58) that (53) holds.
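Properties (52) and (53) can also be checked numerically. The sketch below (illustrative only, not part of the paper) iterates the recurrence $$t_{k+1}=(\sqrt{4t_k^2+1}+1)/2$$ with $$t_0=1$$, consistent with $$(t_0-1)/t_1=0$$ above:

```python
import math

# Generate t_0 = 1, t_{k+1} = (sqrt(4 t_k^2 + 1) + 1)/2 and the ratios
# r_k = (t_{k-1} - 1)/t_k, which should be nonnegative, increasing, and -> 1.
def ratios(n):
    t = [1.0]
    for _ in range(n):
        t.append((math.sqrt(4 * t[-1] ** 2 + 1) + 1) / 2)
    return [(t[k - 1] - 1) / t[k] for k in range(1, n + 1)]

r = ratios(2000)
assert all(r[i] <= r[i + 1] for i in range(len(r) - 1))  # (52): increasing
assert r[0] == 0.0 and r[-1] > 0.99                      # nonnegative, tends to 1 (53)
```

Since $$t_k\approx (k+1)/2$$ by (55), the lower bound (57) gives $$r_k \gtrsim 1-4/k$$, which matches the observed convergence rate.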
O’Neill, M., Wright, S.J. Behavior of accelerated gradient methods near critical points of nonconvex functions. Math. Program. 176, 403–427 (2019). https://doi.org/10.1007/s10107-018-1340-y
http://codeforces.com/blog/entry/48659 | albertg's blog
By albertg, history, 2 years ago, translation
735A - Ostap and Grasshopper
Problem on programming technique. You have to find the positions of the grasshopper and the insect. If k does not divide the difference of the positions, then the answer is NO. Otherwise we have to check the positions pos+k, pos+2k, ..., where pos is the smaller of the two positions. If any of these cells contains an obstacle, then the answer is NO, otherwise the answer is YES.
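A short Python sketch of this check (the I/O is simplified to a function taking the field string and the jump length k; the function name is mine):

```python
def can_reach(field: str, k: int) -> str:
    """field contains 'G' (grasshopper), 'T' (target insect), '.' and '#'."""
    g, t = field.index('G'), field.index('T')
    lo, hi = min(g, t), max(g, t)
    if (hi - lo) % k != 0:
        return "NO"
    # every intermediate landing cell must be free of obstacles
    if any(field[pos] == '#' for pos in range(lo + k, hi, k)):
        return "NO"
    return "YES"

assert can_reach("G##T", 3) == "YES"   # jumps straight over the obstacles
assert can_reach("G#T", 1) == "NO"     # would have to land on the obstacle
```

Note that obstacles between the landing cells do not matter; only the cells at multiples of k are checked.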
735B - Urbanization
First of all, note that the n1+n2 chosen ones should be the people with the top (n1+n2) coefficients. Secondly, if a person with intelligence C ends up in the first city, then he contributes C/n1 points to the overall IQ. So, if n1<n2, the top n1 ratings should go to the small city and the next n2 to the big city.
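The resulting greedy (sort in decreasing order; the smaller city takes the top ratings) can be sketched as follows; the function name and signature are mine:

```python
def max_total_iq(a, n1, n2):
    """Maximum sum of arithmetic means of the two chosen groups."""
    small, big = min(n1, n2), max(n1, n2)
    top = sorted(a, reverse=True)[:small + big]
    # the smaller city gets the very best people, since each of them
    # is divided by a smaller denominator
    return sum(top[:small]) / small + sum(top[small:]) / big

assert max_total_iq([1, 5], 1, 1) == 6.0
assert max_total_iq([1, 4, 2, 3], 2, 1) == 6.5
```

In the second example the city of size 1 takes the rating 4, and the city of size 2 takes 3 and 2, giving 4/1 + 5/2 = 6.5.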
736A - Tennis Championship
Let us solve the inverse problem: at least how many competitors are needed so that the champion plays n matches? There is an obvious recurrent formula: f(n+1)=f(n)+f(n-1) (make the draw so that the champion plays n matches to reach the final and the runner-up plays (n-1) matches to reach the final). So we have to find the index of the maximal such Fibonacci number which is no more than the number in the input.
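The resulting computation (the index of the largest Fibonacci-like value f(w) not exceeding n, with f(0)=1 and f(1)=2) can be sketched as:

```python
def max_matches(n: int) -> int:
    """Largest number of matches the champion can play among n players."""
    a, b = 1, 2          # f(0) = 1 player for 0 matches, f(1) = 2 for 1 match
    wins = 0
    while b <= n:
        a, b = b, a + b  # f(w+1) = f(w) + f(w-1)
        wins += 1
    return wins

assert max_matches(2) == 1
assert max_matches(10) == 4
```

For n = 10 the sequence of minimal player counts is 1, 2, 3, 5, 8, 13, so four matches are possible (8 players suffice) but five are not.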
736B - Taxes
The first obvious fact is that the answer for prime numbers is 1. If the number is not prime, then the answer is at least 2, and it is exactly 2 when n is a sum of 2 primes. According to Goldbach's conjecture, which has been checked for all numbers up to 10^9, every even number greater than 2 is a sum of two primes, so the answer for even composite n is 2. An odd number can be a sum of two primes only if (n-2) is prime (the only even prime number is 2). Otherwise, the answer is 3: n=3+(n-3), and (n-3) is a sum of 2 primes because it is even.
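A sketch of the resulting case analysis (trial-division primality testing is enough for the given constraints; the function names are mine):

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def min_tax_parts(n: int) -> int:
    if is_prime(n):
        return 1                       # a prime pays tax 1
    if n % 2 == 0 or is_prime(n - 2):  # Goldbach, or 2 + prime
        return 2
    return 3                           # 3 + (even number = sum of two primes)

assert min_tax_parts(4) == 2
assert min_tax_parts(27) == 3
```

For 27, neither 27 nor 25 is prime, so three parts are needed, e.g. 3 + 11 + 13.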
736C - Ostap and Tree
First of all, thanks to albert96 and GlebsHP for their help with the tutorial of this problem. Secondly, sorry for being late.
The problem can be solved by dynamic programming. Let dp[v][i][j] be the number of ways to color the subtree of vertex v in such a way that the closest black vertex is at depth i and the closest white vertex at depth j (we also store dp[v][-1][j] and dp[v][i][-1] for the cases where there are no black or white vertices, respectively, within distance k of v). In order to connect two subtrees, we can check all pairs (i,j) in both subtrees (by a brute-force algorithm). Then let us have pair (a,c) in the first subtree and pair (b,d) in the second one. If min(a,c)+max(b,d)<=k, then we update the value of the current vertex.
The complexity of the algorithm is O(n*k^4), which is acceptable for this particular problem (n is the number of vertices; the k^4 factor comes from the brute-force search over the pairs (a,b) and (c,d)).
736D - Permutations
This problem consists of 3 ideas. Idea 1: the parity of the number of such permutations is equal to the parity of the determinant of the matrix whose entries are 1 if (ai,bi) is in our list and 0 otherwise. Idea 2: if we change a 1 into a 0, then the determinant changes by the corresponding algebraic complement. That is, if we compute the inverse matrix, it reflects these parities (B(m,n)=A'(m,n)/detA, and detA is odd). Idea 3: the inverse matrix can be computed in O(n^3) time. Moreover, we can work in the field of integers modulo 2: summation can be replaced by XOR, so if we store not a single number but 32 numbers in one "int", we reduce the complexity to O(n^3/32), which is acceptable.
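The bitset trick from Idea 3 can be illustrated in Python, where arbitrary-precision ints play the role of packed bit rows. This is a generic sketch of computing the parity of a determinant over GF(2), not the full solution to 736D:

```python
def det_mod2(matrix):
    """Determinant mod 2 of a 0/1 matrix via Gaussian elimination over GF(2).
    Each row is packed into one integer, so eliminating a row is a single XOR."""
    n = len(matrix)
    rows = [sum(bit << j for j, bit in enumerate(row)) for row in matrix]
    for col in range(n):
        # find a row with a 1 in this column to use as the pivot
        pivot = next((r for r in range(col, n) if rows[r] >> col & 1), None)
        if pivot is None:
            return 0                     # singular => determinant is even
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(col + 1, n):
            if rows[r] >> col & 1:
                rows[r] ^= rows[col]     # eliminate below the pivot
    return 1

assert det_mod2([[1, 0], [0, 1]]) == 1
assert det_mod2([[1, 1], [1, 1]]) == 0
```

Row swaps only flip the sign of the determinant, which is irrelevant modulo 2, so a triangular form with an all-ones diagonal immediately gives parity 1.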
736E - Chess Championship
Suppose the set of scores is (a1,a2,...,am). Then the list is valid iff the set {2m-2, 2m-4, 2m-6, ..., 0} majorizes the set {a1,a2,...,am}. Let us prove it! Part 1: Suppose n<=m. The top n players play n(n-1)/2 games with each other and n(m-n) games with lower-ranked contestants. In the games among themselves they collect exactly 2*n(n-1)/2 points (each game is worth exactly 2 points) and at most 2*n*(m-n) points in games with the others. So they have at most 2*(n*(n-1)/2+n*(m-n))=2*((m-1)+(m-2)+...+(m-n)) points. Now the construction: let us construct the results of the participant with the most points and then use recursion. Suppose the winner has an even number of points (2*(m-n) for some n). Then we let him lose against the contestants holding places 2,3,4,...,n and win against the others. If the champion has an odd number of points (2*(m-n)-1 for some n), we construct the same results, except that he draws with the (n+1)-th player instead of beating him. It is easy to check that majorization is preserved, so in the end we deal with a one-man competition, where the set of scores {a1} is majorized by {0}. So a1=0, and there is an obvious construction for this case. So we have the following algorithm: check whether the given score set is majorized by {2m-2,2m-4,...,0}. If it is not, the answer is NO. Otherwise the answer is YES and we construct the table as shown above. The complexity is O(m^2 log m) (the recursion is called m times, and each call sorts the array, since non-increasing order can be lost, and then passes over it linearly).
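The feasibility test implied by the first paragraph, namely that the sorted score list must be majorized by {2m-2, 2m-4, ..., 0}, can be sketched as follows (the majorization check only; constructing the table itself is omitted):

```python
def scores_feasible(scores):
    """A score list is realizable by some round-robin tournament iff it is
    majorized by (2m-2, 2m-4, ..., 0): every prefix of the sorted scores is
    bounded by the corresponding prefix sum, with equality for the full sum."""
    m = len(scores)
    s = sorted(scores, reverse=True)
    prefix = 0
    for i in range(1, m + 1):
        prefix += s[i - 1]
        bound = sum(2 * (m - j) for j in range(1, i + 1))  # = 2*i*m - i*(i+1)
        if prefix > bound:
            return False
    return prefix == m * (m - 1)   # total points in a round robin

assert scores_feasible([2, 2, 2])       # three mutual draws
assert not scores_feasible([5, 1, 0])   # top score exceeds 2m-2 = 4
```

The final equality check encodes that the total number of points is fixed at m(m-1), since each of the m(m-1)/2 games distributes exactly 2 points.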
» 2 years ago, # | ← Rev. 2 → +69 Dear Codeforces administration and contest preparation team, should we all copy our comments from the neighbouring thread to this one (just like you did with the C/A problem statement), or would you please explain why this round is rated and why it is allowed to reuse the same problem? The final decision is obviously yours, I'm just politely asking for a commentary on this.
» 2 years ago, # | +61 Sorry, but this contest sucks big league. Some problems I assume are good, but those C and D are terrific!
• » » 2 years ago, # ^ | +231 Pls make codeforces great again.
• » » » 2 years ago, # ^ | 0 together
• » » » 2 years ago, # ^ | +82 I'll create a contest and make other authors give me problems for it.
• » » 2 years ago, # ^ | +29 You wrote terrific, but I'm not sure whether you mean terrific or terrible. Terrific also means great :|
• » » 2 years ago, # ^ | +7 It's not your business! You didn't even participate in that contest.
» 2 years ago, # | ← Rev. 2 → +8 Where is the editorial in English?
• » » 2 years ago, # ^ | +107 In Google.
• » » » 2 years ago, # ^ | -8 +1
» 2 years ago, # | ← Rev. 2 → +28 I think you made a mistake. For problem A and B div 1 it should say: "Just google it"
» 2 years ago, # | -35 Let me tell you, this round was so bigly good, it was so good folks. I had so much fun googling problems, let me tell you, you have no idea how much fun I had.
• » » 2 years ago, # ^ | +6 Who forced you to google?
• » » » 2 years ago, # ^ | -12 I actually didn't google problem C. For problem D, I knew it was some type of well known theorem, so I just googled it with no shame, since such problem shouldn't have been on a contest anyway.
» 2 years ago, # | +5 Jokes aside, are there any other solutions for the Tennis contest problem (736A) not using the Fibonacci Sequence?
• » » 2 years ago, # ^ | +4 This is actually a problem on calculating the worst-case height of the AVL tree: http://pages.cs.wisc.edu/~ealexand/cs367/NOTES/AVL-Trees/index.html
• » » 2 years ago, # ^ | ← Rev. 3 → +5 So the Fibonacci pattern wasn't very intuitive to me. Hence I solved this question with a basic recurrence, as described below. Let us define f(N) as the minimum number of people required for a person to win N matches. If we have this, our answer is simply the greatest value of N where f(N) + 1 <= n, where n is the input, i.e. the number of people. So, for N match wins, we need N losers; hence we add N to the answer (ret). Now we recurse over these losers: the number of wins required of them ranges over [0, N-2], inclusive, since only if the strongest loser had won N-2 matches could he have played the winner (who was at N-1 wins at that point), bringing the winner to N victories. Hence we iterate over [0, N-2] and count the people needed for those players. Having counted all the losers, we add 1 to f(N) to get the total player count. Here is the recursive method (add memoization for AC):
long long f(int n) {
    long long ret = n;
    for (int i = 0; i <= n - 2; i++)
        ret += f(i);
    return ret;
}
This recurrence naturally turns out to give the same output as Fibonacci, but it is a non-experimental way of solving this problem. Thanks :)
• » » » 2 years ago, # ^ | +3 I found this way better than the non-intuitive fibbonacci pattern . Thanx for writing . :)
» 2 years ago, # | ← Rev. 3 → +7 In problem D, if there is no solution without using a bitset, you should have set the constraint to n = 500 instead. Making a problem harder via bitsets is not a good idea! Is the complexity using a bitset O(n^3 / 32)?
• » » 2 years ago, # ^ | +23 Why do you think a bitset problem is not a good idea? Many people regard a single 32-bit integer arithmetic operation as a unit-time operation. Then why not a bitset?
• » » 2 years ago, # ^ | -56 complexity is (n/32)^3 ~ 80^3
• » » » 2 years ago, # ^ | +28 The complexity is O(N^3 / 32). Using bitsets only reduces a factor of 32 in one dimension (the row XOR operations). The outer loops for Gaussian elimination are still O(N^2).
» 2 years ago, # | +36 How to solve Div2E ?
» 2 years ago, # | +13 Why does the determinant of a 0-1 matrix have anything to do with the number of valid permutations? Can somebody explain the intuition behind this?
• » » 2 years ago, # ^ | +39 Actually, the permanent of the matrix counts the number of valid permutations. However, calculating the permanent is a hard problem. But we only need the parity of the number of permutations, i.e. the parity of the permanent. For that, we can calculate the determinant of the matrix over the field of size 2 (the permanent and determinant are equal in GF(2)).
• » » 2 years ago, # ^ | +10 The permanent of an adjacency matrix counts the number of perfect matchings in bipartite graph of rows-columns. Since the determinant is something like a - b and permanent is a + b, they have the same parity. The number of perfect matchings in bipartite graph rows-columns is the number of vertex-disjoint covers of cycles in the original graph (every vertex will have one incoming edge and one outgoing edge). This is basically the number of valid permutations when you look at them as cycles.
» 2 years ago, # | ← Rev. 2 → +10 Tennis Championship : "Then there's obvious recurrent formula: f(n+1)=f(n)+f(n-1)", is it really obvious ? Can someone explain the obviousness of the formula without a rigorous mathematical demonstration ?
• » » 2 years ago, # ^ | ← Rev. 3 → +20 Assuming we want a guy to have played n + 1 matches: first of all, he must have played exactly n matches. Secondly, he must have beaten someone else who had played at least n - 1 matches. It is obvious that we want to minimize the number of matches used to maximize the answer. Therefore, we just assume that the guy that was beaten had played n - 1 matches before. Thus the recurrence formula comes to the surface: F(n + 1) = F(n) + F(n - 1), where F(n) represents the number of people we need to have one guy who has played n matches. The base cases are F(0) = 1, F(1) = 2.
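Turning this recurrence into an answer for n players is a short loop: keep generating F(n) until it exceeds the number of players. A sketch (the function name is illustrative):

```python
def max_wins(players):
    """Largest number of matches the champion can win with `players`
    participants, via F(n+1) = F(n) + F(n-1), F(0) = 1, F(1) = 2."""
    if players < 2:
        return 0
    prev, cur, n = 1, 2, 1   # F(0), F(1)
    while prev + cur <= players:
        prev, cur = cur, prev + cur
        n += 1
    return n
```

With 2 players the champion can win 1 match, with 4 players 2 matches (F(2) = 3 <= 4 but F(3) = 5 > 4), and with 10 players 4 matches (F(4) = 8).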
• » » » 2 years ago, # ^ | ← Rev. 2 → 0 I do not understand this statement: "It is obvious that we want to minimize the number of matches used to maximize the answer." Why do we want to minimize the number of matches?
• » » » » 2 years ago, # ^ | ← Rev. 4 → 0 There are 2 scenarios for the guy that was beaten: 1. he has played n - 1 games; 2. he has played n games. Now, it's logical to say that the higher the value of n + 1 (defined as the total matches played by the winner), the more people are needed for the winner to play n + 1 matches, right? So out of the two possibilities, you would choose the first: you choose greedily to minimise the total number of people needed for the winner to play n + 1 matches. Hope that helps.
» 2 years ago, # | +15 In Problem D, I didn't know that the determinant differs by the algebraic complement. Here is the approach I tried (ultimately the same as calculating the inverse). We know that the original matrix has an inverse, which means the rows are linearly independent. So for every row vector e_i^T, there exists a unique combination of rows that makes e_i^T. We can determine the combination with the row-echelon form. Next, if the matrix is singular, there exists a set of rows whose XOR sum is zero. That is, if we change one 1 to 0 and the matrix becomes singular, we should be able to make zero with some rows, including the changed row. It is obvious that this combination originally produced e_i^T. Now we can determine the answer: (i,j) is NO if row i is included in the combination for e_j^T, and YES otherwise.
» 2 years ago, # | +9 Problem D (Div. 2) can be found here; it is a problem from a contest held on Sat, Aug 13, 2016, and familiar to some people. I got this problem from a friend's resources, and was surprised.
• » » 2 years ago, # ^ | -13 I sent the round proposal on 20 December 2015, which is earlier than August 13, 2016.
» 2 years ago, # | +28 In problem D, shouldn't it be O(n^3/32) instead of O((n/32)^3) ?
• » » 2 years ago, # ^ | 0 I think so, too
• » » 2 years ago, # ^ | 0 Actually, if you are going to use big-O notation, then both are wrong; the correct order is O(n^3). But yes, you are right that it should be n^3 / 32, at least for most accepted solutions. I have not seen the author's code, but I'm pretty sure he could not reduce the outer n loop, so it would never be (n/32)^3.
» 2 years ago, # | 0 Why is this article downvoted so much? By the way, about the solution for D: I think it takes n*(n-1)/2 * (2*n/32) iterations to compute an inverse matrix in the worst case, because it's based on the Gaussian elimination algorithm. If someone knows a way with (n/32)^3 iterations, let me know ><. Anyway, you shouldn't use big-O notation with constants for explaining time complexity, so O((n/32)^3) is not good notation in the first place.
» 2 years ago, # | +8 How to solve div 2 E ? Editorial is also missing..
» 2 years ago, # | ← Rev. 2 → -11 Such bad formatting. Please at least make use of break tags in appropriate places. I came with so much enthusiasm to read the editorial of the problems I couldn't solve, and it is just disappointing to see what we got.
» 2 years ago, # | +5 Just curious: why are the limits for Div1C n=100 and k=20 when an O(n*k*k) dynamic programming solution exists?
• » » 2 years ago, # ^ | ← Rev. 2 → 0 Maybe author's solution is O(n*2^k)?
• » » 2 years ago, # ^ | 0 Do you mind explaining your solution? It is particularly short and fast, but I can't understand it. Thanks!
• » » 2 years ago, # ^ | ← Rev. 6 → +34 O(n*k*k) solution: (EDIT: I think I made a mistake in the explanation for i > k) (Note: I am not good at explaining solutions, so this might not make much sense.) Let us say a node q reaches a black node if for some p, distance(q, p) <= k and p is black. The problem wants us to find the number of ways to color nodes such that all nodes can reach a black node. First root the tree at any node. dp[x][i] keeps track of info for node x, with 0 <= i <= 2*k. If i <= k, then node x is at distance i from the nearest black child, and all of its children can reach a black child. If i > k, then some node in the subtree of node x is i-k-1 (<----EDIT) edges away and cannot reach a black node (there may be multiple nodes that cannot reach a black node, but it is OK to just select the farthest one, because if the farthest one is able to reach a black node in the future, then any closer ones will be able to too). Examples: If i = 0, then node x is colored black and all of its children can reach a black node. If i = k+1, then x cannot reach a black node, but all of its children can. (<----EDIT) If i = 2*k, then node x is not colored black, and there is a node in the subtree of node x that is k edges away and cannot reach a black node. At the beginning of the dfs, set dp[x][0] = 1 (this is the case when we color the node black) and dp[x][k+1] = 1 (this is the case when we don't color the node black). During the dfs call on each child (call it j), update dp[x]. This can be done by iterating over dp[x][f] and dp[j][g] for 0 <= f, g <= 2*k. There are 4 cases when updating (the fourth, f>k and g<=k, is symmetric to the last one below). First let Y denote the union of x and the subtrees of the children that have already been visited (not including j). (For example, if there are edges 1-2, 1-3, 1-4, 1-5, then when j=4, Y = {1, subtree(2), subtree(3)}.)
f<=k, g<=k: All nodes in Y and j can reach black nodes. Update nxtDP[min(f, g+1)] because that's the distance to the nearest black node. f>k, g>k: Y and j both have a node that can't reach black nodes. Update nxtDP[max(f, g+1)] because some node max(f, g+1) edges away from x can't reach a black node. f<=k, g>k: All nodes in Y can reach a black node, but some node in j can't. If the nodes in j that currently can't reach a black node can reach black nodes in Y, update nxtDP[f]. Otherwise update nxtDP[g+1].
• » » » 2 years ago, # ^ | 0 Wow, the representation of state is ... fantastic.
• » » » 2 years ago, # ^ | +13 I don't really understand the sentence "If i <= k, then node x is a distance of i from the nearest black child, and all of its children can reach a black child." If the distance between node x and the nearest black child is exactly k, then the children of node x might be at distance k+1 from the nearest black child (of node x). Can you explain a little?
• » » » » 2 years ago, # ^ | ← Rev. 2 → 0 .
• » » » » 2 years ago, # ^ | ← Rev. 3 → 0 I don't completely understand your question, but here's an example when i <= k. Let the tree be rooted at 1 and k = 5. Edges are 1-2, 2-3, 3-4, 1-5. If 4 is colored black but none of the other nodes are, then this would correspond to dp[4][0], dp[3][1], dp[2][2], dp[1][3]. The initialization of dp[5] would NOT be affected. Note that dp[x] only contains information about subtrees of children that have already been visited. If a child has not been visited, then it will NOT affect dp[x] until it is visited.
• » » » » » 2 years ago, # ^ | 0 Can you elaborate a bit more on setting dp[x][0] = 1 and dp[x][k+1] = 1?
• » » » » » » 2 years ago, # ^ | +1 Quote: "At the beginning of the dfs, set dp[x][0] = 1 (this is the case when we color the node black) and dp[x][k+1] = 1 (this is the case when we don't color the node black)." When we initialize the DP, the only node in the subtree of x that we have currently visited is x itself, which is why dp[x][k+1] is used when the node is not colored black (a node of distance k+1-k-1 = 0 away cannot reach a black node). dp[x][0] covers the case when it is colored black, because a node of distance 0 away (in this case x itself) is black.
• » » » » » » » 2 years ago, # ^ | 0 Thank you!
• » » » 2 years ago, # ^ | 0 Exactly, what does the entry dp[i][j] represent? It seems as if it represents "number of ways to color subtree rooted at i, such that the closest black node to i is j units away". Is my understanding correct?
» 2 years ago, # | ← Rev. 2 → +6 "Then let we have pair (a,c) in the first subtree and pair (b,d) in the second one. If min(a,c)+max(b,d)<=k, then we update value of current vertex."Can anybody clarify this? Should it mean "pair (a,b) in the first subtree and pair (c,d) in the second one"? What does it mean if min(a, c) + max(b, d) > k, and why is the converse a sufficient condition for satisfiability?I solved the problem using a completely different DP recurrence, so I would appreciate if somebody could provide some more intuition about this particular recurrence.
• » » 2 years ago, # ^ | 0 Can you also share your approach?
• » » » 2 years ago, # ^ | ← Rev. 2 → +21 Sure, I'll try. Let f(x, l, d) = the number of ways to color the subtree rooted at x, where l is the distance from x to the nearest black node that is NOT in the subtree rooted at x, and d is the distance to the nearest black node in the subtree of x (either may not exist, signalled by -1). The transition is very unwieldy: If min(l, d) > k, the solution is 0. If d = 0, we have to place a black node at x and we can use f(y, 1, d) for all of the children y of x and all d <= 2*k. If d > 0, one of the children must have a black node at distance d - 1 from its root, and all of the other children must have d' >= d - 1. I solved this case via a second DP over the children of x. Seems overly complicated... My code is here in case you want to check it out: http://codeforces.com/contest/736/submission/22556505
• » » » » 2 years ago, # ^ | 0 Thanks a lot. Great approach. It is really unwieldy.
» 2 years ago, # | +38 Someone please explain the solution of "Ostap and the tree". What is the recurrence state? The editorial is not very clear. Thanks
» 2 years ago, # | -10 A non-tex editorial?
» 2 years ago, # | 0 "According to Goldbach's conjecture, which is checked for all numbers no more than 10^9, every number is a sum of two prime numbers."Actually, it states that every even number (integer, obviously), not every number, is a sum of two prime numbers. A small lapse, I believe, for the rest of the solution is correct and based on the correct statement.
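For reference, assuming the usual reading of problem D (Taxes), where n >= 2 is split into parts of size at least 2 and each part is taxed by its largest proper divisor (so a prime part costs exactly 1), the Goldbach-based case analysis reduces to three cases. A sketch with hypothetical helper names:

```python
def is_prime(n):
    """Trial-division primality test; fine for n up to ~10^9 here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def min_tax(n):
    """Minimum total tax when splitting n into parts of size >= 2."""
    if is_prime(n):
        return 1                     # pay tax on n itself
    if n % 2 == 0 or is_prime(n - 2):
        return 2                     # Goldbach: even n = p + q; odd n = 2 + (n - 2)
    return 3                         # odd composite: n = 3 + even, even part splits into two primes
```

For example, n = 4 splits as 2 + 2 for a tax of 2, and an odd composite like 27 needs 3 prime parts.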
» 2 years ago, # | 0 Can somebody explain the editorial for Div1D? I honestly have no idea what those three ideas are.
• » » 2 years ago, # ^ | ← Rev. 3 → +8 1. The determinant of a matrix is the sum, over every choice of entries a[i][j] with no repeated row i or column j, of ((-1)^inv)*(the product of the chosen entries), where inv is the number of inversions among the chosen rows/columns. This means it carries information about the sum of every possible product without repeating a row/column inside this matrix. 2. A key observation: -1 = 1 mod 2. 3. The inverse matrix is the transpose of the matrix of cofactors. You get the cofactor of a position (i, j) by deleting row i and column j, taking the determinant of the rest of the matrix, and multiplying it by (-1)^(i+j). This means that the inverse matrix has information about every cofactor. Also, because of 2, you don't need to worry about the (-1)^(i+j). 4. If you change some a[i][j] from 1 to 0, the determinant changes by the value of the cofactor associated to it, because the 1 was originally multiplying that cofactor and being summed into the original determinant. Think about Laplace's formula for the determinant and you will understand. 5. Since the inverse matrix has information about every cofactor, you can use it to get every single cofactor efficiently. This is my take on this problem; it might have some mistakes, but I think that's it.
• » » » 2 years ago, # ^ | ← Rev. 3 → 0 Please ignore. I miss a critical condition in the problem statement.
• » » » » 2 years ago, # ^ | 0 Just so anyone seeing this will understand: the problem would arise when you can't get the inverse, but since the number of permutations is odd at the start, the determinant is 1 mod 2. That means it is not equal to 0, and that's enough to say that you can get the inverse.
• » » » 2 years ago, # ^ | 0 Thanks for the detailed explanation; it made me realise that my high-school math left a huge hole in my understanding of matrices. If it weren't for you, I wouldn't have thought of representing the combinations of permutations as a matrix. (They only bothered to teach me about finding the determinant of a matrix using recursion in high school.) PS: in case anyone else got TLE using a 2D array, using bitset should solve the problem.
» 2 years ago, # | +8 Can anybody explain the solution of [Ostap and Tree]? I am not able to understand this part of the editorial: "Then let we have pair (a,c) in the first subtree and pair (b,d) in the second one. If min(a,c)+max(b,d)<=k, then we update value of current vertex." What is the meaning of min(a,c) and max(b,d) here, and what is the significance of this condition? And how do we calculate the final answer? I tried for two days but was unable to grasp the solution. Sorry for my bad English.
• » » 2 years ago, # ^ | 0 I think the pair (a,b) are the last two parameters of dp table (ie. (i,j)). And also (c,d).
» 2 years ago, # | 0 For Problem E, Chess Championship, your proof shows that if there exists a solution, then the set {2m-2, 2m-4, ...} majorizes {a1, a2, ...}, not the other way around, which is what you want. How do you prove that if the set {2m-2, 2m-4, ...} majorizes {a1, a2, ...} then there exists a solution? Also, another condition for the existence of a solution is that the sum of all a[i] equals m*(m-1).
» 2 years ago, # | ← Rev. 2 → 0 Can anyone explain the logic of Problem 735C to me.
• » » 2 years ago, # ^ | -10 Draw a tree and you will know the logic.
» 2 years ago, # | +3 Not able to understand the solution of 736C - Ostap and Tree. Can someone please explain this -> 22656733. What does dp[x][i] represent and also the transitions.
» 4 weeks ago, # | 0 How does someone solve a problem like D (Taxes) if they don't know the relevant property of Goldbach's conjecture? Are they supposed to derive this property on their own during a 2-hour contest?
https://icsecbsemath.com/2016/11/27/class-8-chapter-27-relations-and-mapping-lecture-notes-2/ | Function or Mapping
Let $A$ and $B$ be two non-empty sets. Then, a function or a mapping $f$ from $A$ to $B$ is a rule which associates to each element $x \in A$ a unique $f(x) \in B$, called the image of $x$. If $f$ is a function from $A$ to $B$, then we write $f: A \rightarrow B$.
For $f$ to be a function from $A$ to $B$:
(i) Every element in $A$ must have its image in $B$.
(ii) No element in $A$ may have more than one image.
Example 1:
Let $A = \{1, 2, 3\}$ and $B = \{2, 4, 6, 8\}$.
Consider the rule $f(x) = 2x$; then $f(1)=2$, $f(2)=4$, $f(3)=6$.
Clearly, each $x \in A$ has a unique image in $B$. Hence, $f$ is a function from $A$ to $B$.
Representation of a Function
You can represent the function in three different ways:
Arrow Diagram: The function in the above Example can be represented as follows.
Roster Method: Let $f$ be a function from $A$ to $B$. First form the ordered pairs of all elements in $A$ with their images in $B$. Then the function $f$ is represented as the set of all such ordered pairs.
The function $f$ in the above example can be written as follows: $f=\{(1, 2), (2, 4), (3, 6)\}$.
Equation Form: Let $f$ be a function from $A$ to $B$. If $f$ can be represented by a rule of association, then it takes equation form. For example, in the above example, $f(x) = 2x$.
If $y \in B$ and $x \in A$, then $y=2x$. Hence, the equation $y=2x$ represents the function $f$.
Let’s do one example for more clarification.
Example 2:
Let $A=\{1,2,3,4,5\}$ and $B=\{1,4,9,16,20,25\}$.
Define $f: A \rightarrow B$ by $f(x)=x^2$.
Represent this function by the above three methods.
Solution:
First find out the following:
$f(1)=1,\ f(2)=4,\ f(3)=9, \ f(4)=16,\ f(5)=25$
Arrow Method: Now draw the diagram
Roster Method: In Roster form the function can be represented as:
$f=\{(1,\ 1), (2,\ 4), (3,\ 9), (4,\ 16), (5,\ 25)\}$
Equation Form: In Equation form the function can be represented as $y=x^2$
Domain, Co-Domain and Range of a Function
Let $f$ be a function from $A$ to $B$. Then, we define:
Domain $(f)=A$
Co-domain $(f)=B$
Range $(f)$ = the set of all images of elements of $A$ in $B$
Function as a Relation
Let $A$ and $B$ be two non-empty sets and $R$ be a relation from $A$ to $B$. Then $R$ is called a function from
$A$ to $B$ if (i) domain $(R) = A$ and (ii) no two ordered pairs in $R$ have the same first component.
The following example will make it more clear:
Example 3:
Let $A= \{1, 2, 3, 4\}$ and $B=\{3, 4, 5, 6, 7\}$.
Let $R1= \{(1, 3), (2, 4), (3, 5)\},$
$R2=\{(1, 3), (1, 7), (2, 4), (3, 5), (4, 6)\},$
$R3= \{(1, 3), (2, 4), (3, 5), (4, 6)\}$
Justify which of the above relations are functions from $A$ to $B$.
Solution
• Domain $(R1) = \{1, 2, 3\} \ne A$. Hence $R1$ is not a function from $A$ to $B$.
• Two different ordered pairs, namely $(1, 3)$ and $(1, 7)$, have the same first component. Hence $R2$ is not a function from $A$ to $B$.
• Domain $(R3) = \{1, 2, 3, 4\} = A$. Also, no two different ordered pairs in $R3$ have the same first component. Hence $R3$ is a function from $A$ to $B$.
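The two conditions can also be verified mechanically. The following short check (not part of the original notes; the names mirror Example 3) tests whether a relation, given as a set of ordered pairs, is a function from $A$ to $B$:

```python
def is_function(R, A):
    """A relation R (set of ordered pairs) is a function from A iff
    its domain equals A and no first component repeats."""
    firsts = [x for x, _ in R]
    return set(firsts) == set(A) and len(firsts) == len(set(firsts))

# The sets from Example 3.
A = {1, 2, 3, 4}
R1 = {(1, 3), (2, 4), (3, 5)}
R2 = {(1, 3), (1, 7), (2, 4), (3, 5), (4, 6)}
R3 = {(1, 3), (2, 4), (3, 5), (4, 6)}
```

Running the check reproduces the solution: only $R3$ is a function from $A$ to $B$.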
Real Valued Functions
A rule $f$ which associates to each real number $x$, a unique real number $f(x)$, is called a real valued function. Here, $f(x)$ is an expression in $x$.
Let’s do an example.
Example 4:
Let $f(x)=4x-3$, $x \in \mathbb{R}$. Find the values of $f(0)$, $f(1)$, $f(2)$, $f(3)$.
Solution:
Substitute the corresponding values of $x$ into the function. We get
$f(0)=-3$, $f(1)=1$, $f(2)=5$ and $f(3)=9$.
https://www.nature.com/articles/s41598-021-86606-3?error=cookies_not_supported&code=407ba889-f1e8-4076-b56b-29df77817096
# Modeling the first wave of Covid-19 pandemic in the Republic of Cyprus
## Abstract
We present different data analytic methodologies that have been applied in order to understand the evolution of the first wave of the Coronavirus disease 2019 in the Republic of Cyprus and the effect of the different intervention measures that have been taken by the government. Change-point detection has been used to estimate the number and locations of changes in the behaviour of the collected data. Count time series methods have been employed to provide short-term projections, and a number of compartmental models have been fitted to the data, providing long-term projections on the pandemic's evolution and allowing for the estimation of the effective reproduction number.
## Introduction
Coronavirus disease 2019 (COVID-19), an infection caused by the novel coronavirus SARS-CoV-2 [1] that first emerged in Wuhan, China [2], by the beginning of February 2021 counts more than 105 million cases and has claimed nearly 2,300,000 lives [3]. Despite some advances in therapy [4] and some vaccines receiving conditional approval [5,6,7], there are still many gaps in our understanding of the dynamics of the new pandemic disease, of the effect of the interventions different governments have imposed, and of a number of other epidemiological parameters. Epidemic modeling is a fundamental component of epidemiology, especially with regards to infectious diseases. Following the pioneering work of R. Ross, W. Kermack, and A. McKendrick in the early twentieth century [8], the discipline has established itself and comprises a major source of information for decision makers. For instance, in the United Kingdom, the Scientific Advisory Group for Emergencies (SAGE) is a major body that collects evidence from multiple sources, including inputs from mathematical modeling, to advise the British government on its response to the complex COVID-19 situation; for more information see https://www.gov.uk/government/collections/scientific-evidence-supporting-the-government-response-to-coronavirus-covid-19.
In the context of the COVID-19 pandemic, expert opinions can help decision makers comprehend the status of the pandemic by collecting, analyzing, and interpreting relevant data and developing scientifically sound methods and models. An exact model that would describe perfectly the data is usually not feasible and of limited scope; hence scientists usually aim for models that allow a statistical simulation of synthetic data. At the same time, models can also approximate the dynamics of the disease and discover important patterns in the data. In this way, researchers can study various scenarios and understand the likely consequences of government interventions.
The Republic of Cyprus is an island-state, located in the Mediterranean in the southeast of Europe. The first (imported) cases of COVID-19 appeared in early March 2020 with the first wave peaking in late March-early April. The health authorities responded rapidly and rigorously to the COVID-19 pandemic by scaling-up testing, increasing efforts to trace and isolate contacts of cases, and implementing measures such as closures of educational institutions, and travel and movement restrictions. Lacking a scientific unit specialised in epidemics and other infectious health hazards, a diverse group of experts from various disciplines, including epidemiologists, clinicians, statisticians and data scientists was formed with the aim of trying to understand the evolution of the COVID-19 pandemic in Cyprus and of assisting the Cypriot government in informed decision making. This manuscript contains the work of this group, including results from statistical and mathematical models used to understand the epidemiology of the first wave of COVID-19 in Cyprus, which spans from the beginning of March till the end of May 2020. Specifically, we proposed a range of different models that captured different aspects of the COVID-19 pandemic. The analysis consisted of several methods applied to understand the evolution of pandemics in the long and short run. We used change-point detection, count time series methods and compartmental models for short and long term projections, respectively. We estimated the effective reproduction number by using three different methods and obtained consistent results across all the used methods. Results were cross-validated against observed data. Besides providing a comprehensive data analysis, we illustrate the importance of mathematical models to epidemiology.
## Cyprus COVID-19 surveillance system
The Unit for Surveillance and Control of Communicable Diseases (USCCD) of the Ministry of Health operates COVID-19 surveillance. The lab-based surveillance system consisted, during the first pandemic wave, of 19 laboratories (7 public and 12 private) that carried out molecular diagnostic testing for SARS-CoV-2. Sociodemographic, epidemiological, and clinical data of individuals with SARS-CoV-2 infection were routinely collected from laboratories and clinics, and reported to an electronic platform of the USCCD. A confirmed COVID-19 case was a person, symptomatic or asymptomatic, with a respiratory swab (nasopharynx and/or pharynx) positive for SARS-CoV-2 by a real-time reverse-transcription polymerase chain reaction (rRT-PCR) assay. Cases were considered imported if they had travel history from an affected area within 14 days of the disease onset. Locally-acquired cases were individuals who tested positive for SARS-CoV-2 and had the earliest onset date in Cyprus without travel history from affected areas. People with symptomatic COVID-19 were considered recovered after the resolution of symptoms and two negative tests for SARS-CoV-2 at least 24 h apart from each other. For asymptomatic cases, the negative tests to document virus clearance were obtained at least 14 days after the initial positive test. A person with a positive test at 14 days was further isolated for one week and finally released 21 days after the initial diagnosis without further laboratory tests. Testing approaches in the Republic of Cyprus included: (a) targeted testing of suspect cases and their contacts; of repatriates at the airport and during their 14-day quarantine; of teachers and students when schools re-opened in mid-May; of employees in essential services that continued their operation throughout the first pandemic wave (e.g.
customer services, public domain, etc); and of health-care workers in public hospitals, and (b) population screenings following random sampling in the general population of most districts and in two municipalities with increased disease burden. By June 2nd 2020, 120,298 PCR tests had been performed (13,734.2 per 100,000 population). Public health measures were taken in 4 phases: Period 1 (10–14 March, 2020) included closures of educational institutions and cancellation of public gatherings (> 75 persons); Period 2 (15–23 March, 2020) involved closure of entertainment areas (for instance, malls, theatres, etc), allowance of 1 person per 8 square meters in public service areas, and restrictions to international travel (for example, access to the Republic of Cyprus was permitted only for specific persons and after SARS-CoV-2 testing); Period 3 (24–30 March, 2020) included closure of most retail services; and Period 4 (31 March–3 May) included the suspension of incoming flights with few exceptions (for instance, repatriated Cypriot citizens), stay at home order, and night curfew.
## Statistical methods
In this section, we introduce the different techniques and models used for the modeling and analysis of COVID-19 infections in Cyprus.
### Change-point analysis and projections
Change-point detection is an active area of statistical research that has attracted considerable interest in recent years and plays an essential role in the development of the mathematical sciences. A non-exhaustive list of application areas includes financial econometrics9, credit scoring10, and bioinformatics11. The focus here is on so-called a posteriori change-point detection, where the aim is to estimate the number and locations of certain changes in the behaviour of a given data sequence. For a review of methods of inference for single and multiple change-points (especially in the context of time series) under the a posteriori framework, see12. Detecting these change-points enables segmentation of the data into homogeneous parts, thus leading to a more elaborate modeling approach. Advantages of discovering such segments include interpretation and forecasting. Interpretation naturally associates the detected change-points to real-life events and/or political decisions. In this way, an improved description of the observed process and the impact of any intervention can be communicated. With regards to forecasting, the role of the final segment is quite important, as it allows for a more accurate prediction of future values of the data sequence at hand. Methods developed in this context are based on a given model. For the purpose of this paper, we work with the following signal-plus-noise model
\begin{aligned} X_t = f_t + \sigma \epsilon _t, \quad t=1,2,\ldots ,T, \end{aligned}
(1)
where $$X_t$$ denotes the daily number of COVID-19 cases and $$f_t$$ is a deterministic signal with structural changes at certain time points. Details about $$f_{t}$$ are given below. The sequence $$\epsilon _t$$ consists of independent and identically distributed (iid) random variables with mean zero and variance equal to one, and $$\sigma >0$$ is the noise level. We denote the number of change-points by K and their respective locations by $$r_1, r_2, \ldots , r_K$$. The locations are unknown and the aim is to estimate them. The daily number of COVID-19 cases in the Republic of Cyprus is investigated by using the following two models for $$f_{t}$$ of (1):
1. Continuous, piecewise-linear signals: $$f_{t} = \mu _{j,1} + \mu _{j,2}t$$ for $$t = r_{j-1} + 1, r_{j-1}+2, \ldots , r_{j}$$, with the additional constraint $$\mu _{k,1} + \mu _{k,2}r_{k} = \mu _{k+1,1} + \mu _{k+1,2}r_{k}$$ for $$k=1,2,\ldots ,K$$. The change-points, $$r_k$$, satisfy $$f_{r_k-1} + f_{r_k+1}\ne 2f_{r_k}$$.
2. Piecewise-constant signals: $$f_t = \mu _j$$ for $$t = r_{j-1}+1,r_{j-1}+2,\ldots ,r_j$$, and $$f_{r_j}\ne f_{r_j+1}.$$
In this work, we use the Isolate-Detect (ID) methodology of Anastasiou and Fryzlewicz (2019)13 in order to detect changes based on (1) under the settings of continuous piecewise-linear and piecewise-constant signals, as described above; see Online Appendix A1 for a description of the method.
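To illustrate the least-squares principle underlying such segmentations, the following sketch estimates a single change-point in a piecewise-constant signal by minimising the residual sum of squares over all candidate splits. Isolate-Detect itself handles multiple change-points and piecewise-linear signals, so `best_changepoint` is only an illustrative toy, not the ID algorithm.

```python
def best_changepoint(x):
    """Least-squares estimate of a single change-point in a
    piecewise-constant signal: fit a constant mean on each side of
    every candidate split and keep the split with the smallest
    residual sum of squares."""
    n = len(x)
    best_r, best_rss = None, float("inf")
    for r in range(1, n):                      # split: x[:r] | x[r:]
        left, right = x[:r], x[r:]
        m1 = sum(left) / len(left)
        m2 = sum(right) / len(right)
        rss = (sum((v - m1) ** 2 for v in left)
               + sum((v - m2) ** 2 for v in right))
        if rss < best_rss:
            best_rss, best_r = rss, r
    return best_r

# Example: the mean shifts from 2 to 8 at position 30
signal = [2.0] * 30 + [8.0] * 20
print(best_changepoint(signal))  # -> 30
```

This exhaustive search costs O(n²); methods such as ID achieve the same goal far more efficiently and with statistical guarantees on the number of detected changes.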
### Count time series methodology
The analysis of count time series data (like the daily incidence data we consider in this work) has attracted considerable attention; see Kedem and Fokianos (2002)14 for several references and Fokianos (2015)15 for a recent review of this research area. In what follows, we take the point of view of generalized linear modeling as advanced by16. This framework naturally generalizes the traditional ARMA methodology and accommodates several complicated data-generating processes besides count data, such as binary and categorical data. In addition, fitting of such models can be carried out by likelihood methods; therefore testing, diagnostics and all types of likelihood arguments are available to the data analyst.
The logarithmic function is the most popular link function for modeling count data. In fact, this choice corresponds to the canonical link of generalized linear models. Suppose that $$\{X_t \}$$ denotes a daily incidence time series and assume that, given the past, $$X_{t}$$ is conditionally Poisson distributed with mean $$\lambda _{t}$$. Define $$\nu _{t} \equiv \log \lambda _{t}$$. A log-linear model with feedback for the analysis of count time series17 is defined as
\begin{aligned} \nu _{t} = d+a_{1} \nu _{t-1} + b_{1} \log (X_{t-1}+1). \end{aligned}
(2)
In general, the parameters $$d,a_{1},b_{1}$$ can be positive or negative but they need to satisfy certain conditions for the model to be stable. The inclusion of the hidden process makes the mean of the process depend on the long-term past values of the observed data. Further discussion on model (2) can be found in Online Appendix A2, which also includes some discussion about interventions. An intervention is an unusual event that has a temporary or a permanent impact on the observed process. Computational methods for discovering interventions, in the context of (2), under a general mixed Poisson framework have been discussed by18. In this work, we will consider additive outliers (AO) defined by
\begin{aligned} \nu _{t} = d+a_{1} \nu _{t-1} + b_{1} \log (X_{t-1}+1) + \sum _{k=1}^{K}\gamma _{k} I(t= r_{k}) \end{aligned}
(3)
where the notation follows closely that of the section above and I(.) denotes the indicator function. Inclusion of the indicator function shows that at the time point $$r_{k}$$, the mean process has a temporary shift whose effect is measured by the parameter $$\gamma _{k}$$, but on the log-scale. Other types of interventions can be included (see Online Appendix A2) whose effects can be permanent; in this sense, intervention analysis and change-point detection methodologies address similar problems but from different points of view. Model fitting is based on maximum likelihood estimation and its implementation has been described in detail by Liboschik et al. (2017)18.
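As a small illustration of how the feedback recursion in (3) generates one-step-ahead conditional means, the sketch below filters a count series through the model. The initialisation $$\nu _0 = 0$$ and all parameter values used in the example are assumptions for illustration, not fitted values.

```python
import math

def fitted_means(counts, d, a1, b1, outliers=None):
    """One-step-ahead conditional means exp(nu_t) under the log-linear
    feedback model (3).  `outliers` maps a (1-based) time index to its
    additive-outlier effect gamma_k on the log-scale; nu_0 = 0 is an
    assumed initialisation.  lam[i] is the predicted mean for the day
    after observing counts[:i+1]."""
    outliers = outliers or {}
    nu, lam = 0.0, []
    for t in range(1, len(counts) + 1):
        nu = (d + a1 * nu + b1 * math.log(counts[t - 1] + 1)
              + outliers.get(t, 0.0))
        lam.append(math.exp(nu))
    return lam

# A positive outlier at t = 2 lifts the conditional mean on that day
lam = fitted_means([5, 5, 5], d=0.0, a1=0.5, b1=0.4, outliers={2: 1.0})
```

With the fitted coefficients reported later in the Results section, the same recursion reproduces the in-sample means of the estimated model.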
### Compartmental models
Compartmental models in epidemiology, like the Susceptible-Infectious-Recovered (SIR) and Susceptible-Exposed-Infectious-Recovered (SEIR) models and their modifications, have been used to model infectious diseases since the early 1920s (see19,20 among others). The basic assumptions for these models are the existence of a closed community, i.e. without influx of new susceptibles or mortality due to other causes, with a fixed population, say N, and also that the individuals who recover from the illness are immune and do not become susceptible again. In the basic SEIR model, at any point in time t, each individual is either susceptible (S(t)), exposed (E(t)), infectious (I(t)) or recovered (R(t), including death). Typically, the epidemic starts at time $$t=0$$ with a number of infectious individuals, usually thought of as being externally infected, and the rest of the population being susceptible. People progress between the different compartments, and this motion is usually described through a system of ordinary differential equations that can be put in a stochastic framework.
A variety of SEIR modifications and extensions exist in the literature, and a multitude of them emerged recently because of the COVID-19 epidemic. In this work, we consider four such modifications, based on the models proposed in21 and22 for the analysis of the COVID-19 epidemic in Wuhan and the rest of the Chinese provinces.
#### Compartmental model 1
Initially, we employ the SEIR model based on the metapopulation model of Li et al. (2020)22, but simplified to take into account only a single population. The novelty, compared to the standard SEIR model, is that this model takes into account the existence of undocumented/asymptomatic infections, which transmit the virus at a potentially reduced rate. The model tracks the evolution of four state variables at each day t, representing the number of susceptible, exposed, infected-reported and infected-unreported individuals, $$S(t), E(t), I^r(t), I^u(t)$$ respectively. The parameters of the model are the transmission rate $$\beta$$ ($$\hbox {days}^{-1}$$), the relative transmission rate $$\mu$$ representing the reduction in transmission for asymptomatic individuals, the average latency/incubation period Z (days), the average infectious period D (days) and the reporting rate $$\alpha$$ representing the proportion of infected individuals that are reported. For a graphic description of the model see Figure A4 in the Online Appendix. The time evolution of the system is defined by the following set of differential equations (recall that N denotes the population size):
\begin{aligned} \begin{aligned} \frac{d S(t)}{dt}&=-\frac{\beta S(t) I^r(t)}{N}-\frac{\mu \beta S(t) I^u(t)}{N}, \\ \frac{dE(t)}{dt}&=\frac{\beta S(t) I^r(t)}{N}+\frac{\mu \beta S(t) I^u(t)}{N} - \frac{E(t)}{Z}, \\ \frac{dI^r(t)}{dt}&=\alpha \frac{E(t)}{Z}-\frac{I^r(t)}{D},\\ \frac{dI^u(t)}{dt}&=(1-\alpha )\frac{E(t)}{Z}-\frac{I^u(t)}{D}. \end{aligned} \end{aligned}
(4)
Following Li et al. (2020)22, we use a stochastic version of this model with a delay mechanism. Each term, say U, on the right hand side of (4) is replaced by a Poisson random variable with mean U. At each day, we use the 4th order Runge–Kutta numerical scheme to integrate the resulting equations and obtain the values of the four state variables on the next day. For each new reported infection, we draw a Gamma random variable with mean $$\tau _d$$ days, to determine when this infection will be recorded. For the main analysis we use $$\tau _d$$=6 days, as the average reporting delay between the onset of symptoms and the recording of an infection; see also22. Note that the results are robust with respect to the value of the reporting delay. The final output of this model is the number of recorded infections on each day t, $$y=y(t)$$.
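A minimal deterministic sketch of system (4) is given below, using a plain Euler step instead of the stochastic Poisson/RK4 integration described above. The initial state and the values of $$\alpha$$ and $$\beta$$ are illustrative assumptions, while Z, D and $$\mu$$ follow the fixed values quoted later in the text.

```python
def seir_step(S, E, Ir, Iu, N, beta, mu, Z, D, alpha, dt=1.0):
    """One deterministic Euler step of system (4).  The paper instead
    replaces each right-hand-side term by a Poisson draw and integrates
    with a 4th-order Runge-Kutta scheme; this keeps only the
    deterministic skeleton."""
    new_inf = beta * S * Ir / N + mu * beta * S * Iu / N
    dS = -new_inf
    dE = new_inf - E / Z
    dIr = alpha * E / Z - Ir / D
    dIu = (1.0 - alpha) * E / Z - Iu / D
    return S + dt * dS, E + dt * dE, Ir + dt * dIr, Iu + dt * dIu

# Iterate from an assumed initial state (10 reported infections in a
# population of 100,000); alpha and beta here are illustrative values
state = (99990.0, 0.0, 10.0, 0.0)
for _ in range(30):
    state = seir_step(*state, N=100000, beta=1.0, mu=0.5,
                      Z=5.1, D=3.5, alpha=0.3)
```

Note that $$S+E+I^r+I^u$$ is not conserved: individuals leave $$I^r$$ and $$I^u$$ at rate 1/D into the (untracked) recovered compartment.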
#### Compartmental model 2
We also use the metapopulation model of Li et al. (2020)22. Such metapopulation models have been successfully applied to predict epidemic spreading through human mobility networks23,24, by modeling the transmission dynamics in a set of populations, indexed by i, connected through mobility patterns, say $$M_{ij}$$. This is implemented by incorporating information on human movement between the five main districts of Cyprus: Nicosia, Limassol, Larnaca, Paphos and Ammochostos. In this case, $$i=1,2,3,4,5$$ and $$M_{ij}$$ denotes the daily number of people traveling from district i to district j, $$i \ne j$$. Such information is based on the 2011 census data obtained from the Cyprus Statistical Service. The time evolution of the four compartmental states in each district i is defined by the following set of differential equations:
\begin{aligned} \begin{aligned} \frac{d S_i(t)}{dt}&=-\frac{\beta S_i(t) I_i^r(t)}{N_i}-\frac{\mu \beta S_i(t) I_i^u(t)}{N_i} +\theta \sum _j{\frac{M_{ij}S_j(t)}{N_j-I_j^r(t)}} - \theta \sum _j{\frac{M_{ji}S_i(t)}{N_i-I_i^r(t)}}, \\ \frac{dE_i(t)}{dt}&=\frac{\beta S_i(t) I_i^r(t)}{N_i}+\frac{\mu \beta S_i(t) I_i^u(t)}{N_i} - \frac{E_i(t)}{Z} +\theta \sum _j{\frac{M_{ij}E_j(t)}{N_j-I_j^r(t)}} - \theta \sum _j{\frac{M_{ji}E_i(t)}{N_i-I_i^r(t)}}, \\ \frac{dI_i^r(t)}{dt}&=\alpha \frac{E_i(t)}{Z}-\frac{I_i^r(t)}{D}, \\ \frac{dI_i^u(t)}{dt}&=(1-\alpha )\frac{E_i(t)}{Z}-\frac{I_i^u(t)}{D} +\theta \sum _j{\frac{M_{ij}E_j(t)}{N_j-I_j^u(t)}} - \theta \sum _j{\frac{M_{ji}E_i(t)}{N_i-I_i^u(t)}}, \end{aligned} \end{aligned}
(5)
where the notation follows that given in the Compartmental model 1 section. In addition to the four state variables, this model also updates at each time step the population of each area i, say $$N_i$$, by $$N_i = N_i + \theta \sum _j{M_{ij}} - \theta \sum _j{M_{ji}}$$, where the multiplicative factor $$\theta$$ is assumed to be greater than 1 to reflect under-reporting of human movement. Like Model (4), Model (5) is integrated stochastically using a 4th order Runge-Kutta (RK4) scheme. Specifically, for each step of the RK4 scheme, each unique term on the right-hand side of the four equations is determined using a random sample from a Poisson distribution. The equations describing the evolution of the population in each district i are solved deterministically, $$i=1,\ldots ,5.$$ For more details, see the supplement of Li et al. (2020)22.
#### Compartmental model 3
Further, we consider the model of Peng et al. (2020)21, a generalisation of the classical SEIR model consisting of seven states: (S(t), P(t), E(t), I(t), Q(t), R(t), D(t)). At time t, the susceptible cases S(t) become insusceptible, P(t), at rate $$\zeta$$, or exposed, E(t), at rate $$\beta$$; exposed means infected but not yet infectious, i.e. in a latent state. Some of the exposed cases eventually become infected at rate $$\gamma$$, where infected means having the capacity to infect while not yet being quarantined. The introduction of the new quarantined state, Q(t), into the classical SEIR model, formed from the infected cases at a constant rate $$\delta$$, allows us to consider the effect of preventive measures. Finally, the quarantined cases are split into cured cases, R(t), at rate $$\lambda (t)$$, and closed cases, D(t), at mortality rate $$\kappa (t)$$. The model’s parameters are the transmission rate $$\beta$$, the protection rate $$\zeta$$, the average latent time $$\gamma ^{-1}$$ (days), the average quarantine time $$\delta ^{-1}$$ (days), as well as the time-dependent cure rate $$\lambda (t)$$ and mortality rate $$\kappa (t)$$. The relations are characterized by the following system of differential equations:
\begin{aligned} \begin{aligned} \frac{d S(t)}{dt}&=-\frac{\beta S(t) I(t)}{N}-\zeta S(t),&\frac{dE(t)}{dt}&=\frac{\beta S(t) I(t)}{N}-\gamma E(t),\\ \frac{dI(t)}{dt}&=\gamma E(t)-\delta I(t),&\frac{dQ(t)}{dt}&= \delta I(t)-\lambda (t) Q(t)-\kappa (t) Q(t),\\ \frac{dR(t)}{dt}&= \lambda (t) Q(t),&\frac{dD(t)}{dt}&= \kappa (t) Q(t),\\ \ \frac{dP(t)}{dt}&=\zeta S(t). \end{aligned} \end{aligned}
(6)
The total population size is assumed to be constant and equal to $$N=S(t)+P(t)+E(t)+I(t)+Q(t)+R(t)+D(t)$$. According to the official reports, the numbers of quarantined cases, recovered cases and deaths due to COVID-19 are available. However, the recovered and death cases are directly related to the number of quarantined cases, which plays an important role in the analysis, especially since the numbers of exposed (E) and infectious (I) cases are very hard to determine. The latter two are therefore treated as hidden variables. This implies that we need to estimate the four parameters $$\zeta ,\beta , \gamma ^{-1}, \delta ^{-1}$$ and both the time-dependent cure rate $$\lambda (t)$$ and mortality rate $$\kappa (t)$$. Notice that while the rest of the parameters are considered fixed during the pandemic, we allow the cure and mortality rates to vary with time. We expect that the former will increase with time, given that social distancing measures have been put in place, while the latter will decrease. Finally, estimating these parameters is an optimization problem, and the methodology we followed to address it can be found in Online Appendix A3.
#### Compartmental model 4
The last model we consider is a modified version of a solution created by Bettencourt and Ribeiro (2008)25 to estimate the real-time effective reproduction number $$R_{t}$$ (to avoid confusion, $$R_t$$ denotes the effective reproduction number, while R(t) denotes the number of recovered cases in a given population), using a Bayesian approach on a simple Susceptible - Infected (SI) compartmental model:
\begin{aligned} \begin{aligned} \frac{d S(t)}{dt}&=-\frac{\beta S(t) I(t)}{N},\\ \frac{dI(t)}{dt}&=\frac{\beta S(t) I(t)}{N}-\frac{I(t)}{D}.\\ \end{aligned} \end{aligned}
(7)
We use Bayes’ rule to update our beliefs about the true value of $$R_t$$ based on our predictions and on how many new cases have been reported each day. Having seen k new cases on day t, the posterior distribution of $$R_t$$ is proportional to (denoted by $$\propto$$) the prior $$P(R_t)$$ times the likelihood of $$R_t$$ given that we have recorded k new cases, i.e., $$P(R_t | k) \propto P(R_t) \times L(R_t | k)$$. To make this iterative, on each new day we use the previous day’s posterior $$P(R_{t-1} | k_{t-1})$$ as today’s prior $$P(R_t)$$. Therefore, in general, $$P(R_t | k) \propto \prod _{t=0}^{T}{L(R_t | k_t)}$$. However, in the above model the posterior is influenced equally by all previous days. Thus, we adopt a modification suggested in26 that shortens the memory and incorporates only the last m days of the likelihood function, $$P(R_t | k) \propto \prod _{t=m}^{T}{L(R_t | k_t)}$$. The likelihood function is modelled with a Poisson distribution.
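A sketch of this windowed update on a discrete grid of $$R_t$$ values follows. The link $$\lambda _t = k_{t-1} e^{\gamma (R_t - 1)}$$ between $$R_t$$ and the expected case count, with $$\gamma$$ the inverse serial interval, is an assumed functional form in the spirit of Bettencourt and Ribeiro; the grid range and $$\gamma = 1/7$$ are illustrative choices, not values used in the paper.

```python
import math

def rt_posterior(cases, m=7, gamma=1.0 / 7.0):
    """Grid posterior over R_t with a Poisson likelihood and the
    windowed modification: only the last m daily likelihoods are
    multiplied together (flat prior over the grid assumed)."""
    grid = [r / 100.0 for r in range(1, 601)]          # R in (0, 6]

    def loglik(k_prev, k, r):
        lam = max(k_prev, 1) * math.exp(gamma * (r - 1.0))
        return k * math.log(lam) - lam - math.lgamma(k + 1)

    T = len(cases)
    logpost = [0.0] * len(grid)
    for t in range(max(1, T - m), T):                  # last m days only
        for i, r in enumerate(grid):
            logpost[i] += loglik(cases[t - 1], cases[t], r)
    mx = max(logpost)
    w = [math.exp(v - mx) for v in logpost]
    s = sum(w)
    return grid, [v / s for v in w]

# Steadily growing counts push the posterior mode of R_t above 1
grid, post = rt_posterior([10, 12, 15, 19, 24, 30, 37, 46])
mode = grid[post.index(max(post))]
```

Shortening the memory to m days makes the estimate react quickly to policy changes, at the cost of a noisier posterior.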
### Estimation of the effective reproduction number $$R_t$$
Recall the compartmental models discussed in the Compartmental Model 1 and 2 sub-sections. Then the effective reproduction number is given by
\begin{aligned} R_t=\alpha \beta D + (1-\alpha )\mu \beta D, \end{aligned}
(8)
see the supplement of Li et al.22. We estimate $$R_t$$ in (8) during consecutive time periods (either week-long or fortnight-long) for which its value is considered to be constant. To achieve this we estimate the parameters of each model, also assumed to be constant for each time period, using the daily number of diagnoses in the Republic of Cyprus (In order to maintain consistent notation throughout this article, we use the notation $$R_t$$, even though in this section the effective reproduction number is considered constant for each time period). To estimate the parameters we employ Bayesian statistics, that is, we postulate prior distributions on the parameters and incorporate the data and the model (through the likelihood) to obtain the posterior distributions on the parameters. The posterior distributions capture our updated beliefs about the parameters after combining the prior with the observed data; see for example27.
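Equation (8) can be evaluated directly; the trivial helper below (with D and $$\mu$$ defaulting to the fixed values quoted later in this section) makes explicit that reported infections contribute $$\alpha \beta D$$ and unreported ones $$(1-\alpha )\mu \beta D$$.

```python
def effective_R(alpha, beta, D=3.5, mu=0.5):
    """Effective reproduction number of Eq. (8): reported infections
    transmit at rate beta for D days, unreported ones at the reduced
    rate mu*beta."""
    return alpha * beta * D + (1.0 - alpha) * mu * beta * D

# With full reporting (alpha = 1) this reduces to the familiar beta * D
```

The argument values in any such call are posterior draws of $$\alpha$$ and $$\beta$$, so the posterior of $$R_t$$ is obtained by pushing those draws through this formula.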
For the model defined by (4), we consider the whole area of Cyprus as a single uniform population. For this case, the observations are not sufficiently informative to identify all five parameters of the model. A solution would be to enforce identifiability by postulating strongly informative prior distributions on the parameters. Instead, we choose to make the assumption that the parameters Z, D and $$\mu$$ have globally constant values, fixed over time. In particular, we set $$D=3.5$$ and $$\mu =0.5$$ as estimated in22, and $$Z=5.1$$, which appears to be the globally accepted mean incubation period. We thus only need to infer the reporting rate $$\alpha$$ and the transmission rate $$\beta$$, which vary both temporally and between countries because of the amount of testing and the degree of adherence to the social distancing policies. On the other hand, the model defined by (5) is sufficiently informative to infer all six model parameters. All computational methods, prior modelling and assumptions in relation to both compartmental models discussed in the Compartmental model 1 and 2 sub-sections are given in Online Appendix A4. In addition to the above methods, we further consider the method of Cori et al. (2013)28 as a benchmark against which to compare all methodologies for estimating the effective reproduction number.
## Results
### Descriptive surveillance statistics
By the end of May 2020, 952 cases of COVID-19 were diagnosed in the Republic of Cyprus. Of these, 50.2% were male ($$n = 478$$) and the median age was 45 years (IQR: 31–59 years). The setting of potential exposure was available for 807 cases (84.8%). Of these, 17.4% ($$n = 140$$) had a history of travel or residence abroad during the 14-day period before the onset of symptoms. Locally acquired infections numbered 667 (82.7%), with 8.6% ($$n = 57$$) related to a health-care facility in one geographical setting (cluster A) and 12.4% ($$n = 83$$) clustered in another setting (cluster B). The epidemic curve by date of sampling and date of symptom onset is shown in Fig. 1. The number of cases started to decline in April, reaching very low levels in late May.
### Long-term impact of the COVID-19 epidemic on Cyprus
In this section, we investigate the long-term impact of COVID-19 on Cyprus. To this end, we give long-term projections for the daily incidence and death rates. We fit system (6) to COVID-19 data collected during the period from the 1st of March 2020 till the 31st of May 2020 in Cyprus. We treat all the reported cases without making the distinction between local and imported. The model parameters are estimated using the methodology described in Online Appendix A3. Once the model is fitted to data, it can be used to forecast the epidemic. In order to study the evolution of the model as new data are added, and the quality of the respective forecasts, we fitted model (6) to four datasets constructed from the original one using different time periods. Specifically, the four datasets were formed using the daily reported numbers of diagnoses from the beginning of the observation period until and including 2/4/2020, 17/4/2020, 15/5/2020 and 24/5/2020, respectively. The dates were chosen according to the change-points detected using the methodology described in the Change-point analysis and projections section.
The fitted model in each case was used in order to predict the pandemic’s evolution until the 30/6/2020. In Fig. 2, we show the number of predicted exposed plus infectious cases (green solid lines) and the number of predicted recovered cases (blue solid lines) for the duration of the prediction period, and compare them to the observed cases which are indicated by circles and triangles. We use circles for data that have been used in the prediction and triangles for the observed data that are used for validation. Visual inspection shows that after a period of about two months during which the model overestimates the number of active cases and underestimates the number of recovered, see Fig. 2 (top), model (6) was able to capture accurately the evolution of the pandemic, Fig. 2 (bottom).
The performance of the predictions can also be evaluated by means of the relative error (RE), which is computed as $$RE=\sqrt{\frac{\sum _{t}(y_t-x_t)^2}{\sum _{t} x_t^2 }}$$, where $$x_t$$ denotes the datum for day t and $$y_t$$ the model prediction for the same day. The RE for the recovered cases equals $$0.4\%, 0.2\%, 0.3\%$$ and $$0.3\%$$ for the four time periods, respectively, with the corresponding RE for the active cases being high in the beginning ($$18\%, 5.8\%$$) but then dropping considerably ($$0.16\%$$ and $$0.1\%$$), reflecting the fact that the model caught up with the evolution of the pandemic. Overall, system (6) gives adequate predictions, especially when data from longer time periods are used.
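The RE above is straightforward to reproduce; a minimal sketch:

```python
import math

def relative_error(obs, pred):
    """RE = sqrt( sum_t (y_t - x_t)^2 / sum_t x_t^2 ), with x_t the
    observed and y_t the predicted value for day t."""
    num = sum((y - x) ** 2 for x, y in zip(obs, pred))
    den = sum(x ** 2 for x in obs)
    return math.sqrt(num / den)
```

A perfect prediction gives RE = 0, while an identically-zero prediction gives RE = 1 (i.e. 100%), as happens for the deaths in the first time period below.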
Figure 3 shows the number of deaths and their respective predictions using subsets of data as described above. In the duration of the first data set there were no deaths registered, and therefore the prediction was identically zero, also giving an RE equal to $$100\%$$; see Fig. 3 (top left). As more deaths are registered, the model’s ability to predict the correct number of deaths improves; see Fig. 3.
The recovery rate ($$\lambda (t)$$) is modelled as
\begin{aligned} \lambda (t)=\frac{\lambda _1}{1+\exp (-\lambda _2(t-\lambda _3))}, \quad \lambda _i\ge 0, \quad i=1,2,3. \end{aligned}
(9)
The idea is that the recovery rate, as time increases, should converge towards a constant. In Fig. 4 (left), the fitted recovery rate (solid line) is plotted against the observed number of recovered cases (stars).
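A direct implementation of the logistic form (9) exhibits exactly this saturation; all parameter values in the example are illustrative assumptions, not fitted estimates.

```python
import math

def recovery_rate(t, lam1, lam2, lam3):
    """Logistic cure rate of Eq. (9): rises from near zero and
    saturates at lam1, with lam3 the midpoint time and lam2 the
    steepness of the rise."""
    return lam1 / (1.0 + math.exp(-lam2 * (t - lam3)))

# At the midpoint t = lam3 the rate equals half its asymptote lam1
half = recovery_rate(30, lam1=0.08, lam2=0.1, lam3=30)  # -> 0.04
```
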
Finally, Model 3 can be used to estimate the unobserved numbers of exposed, E(t), and infectious, I(t), cases during the development of the pandemic. The maximum number of exposed cases occurs on the 21st of March 2020 and is estimated to be 173 cases, Fig. 4 (right, blue line), with the maximum of infectious individuals (136) being attained on the 26th of March 2020. We can observe a delay of about 5 days in the transition from exposed to infectious, which suggests a 5-day latent time of COVID-19.
### Change-point analysis
We first consider the change-point detection method of the Statistical methods section for the case of the piecewise-linear signal-plus-noise model. Figure 5 illustrates the results obtained by this analysis on daily incidence data.
This method has detected three important changes. The first change occurs on the 23rd of March, while the last two changes are estimated on the 2nd and 17th of April. The first change-point, on the 23rd of March, indicates a significant increase in the number of cases, and it is believed to be related to the development of clusters A and B as mentioned in the Descriptive surveillance statistics section. The second change-point shows that the upward trend vanishes and a negative slope takes its place, leading to a vast decrease in the number of cases. There is a connection of this change-point with the Government’s lockdown decrees on the 24th and 31st of March; it shows the almost immediate impact that such decisions can have in fighting the pandemic. The third change-point indicates stabilisation in the number of detected cases over the last period. The median number of cases in the last segment is equal to 3. Predictions (together with 95% CIs) for the week ahead are also shown in Fig. 5. The model predicts no more than 8 cases on the 1st of June and no more than 9 cases per day in the period from the 2nd of the month until the 7th. The point estimate is 2 cases per day. The corresponding analysis for the piecewise-constant signal plus noise model is given in Online Appendix A5. Both models describe adequately the daily number of cases and (a) they provide a justification of the positive impact of the Government’s measures in fighting COVID-19, (b) they give a clear view of how contagious the virus is, especially in cases when hubs are developed in the society, and (c) the last homogeneous discovered segment provides better understanding of the current state of the virus in terms of its transmittal.
### Count time series analysis
We first fit model (2) without interventions to daily incidence data, considering the time period between the 4th of March and the 31st of May. Maximum likelihood estimation shows that the fitted model is given by
\begin{aligned} {\hat{\nu }}_{t} = -0.003_{(.009)} + 0.547_{(0.447)} {\hat{\nu }}_{t-1}+ 0.451_{(.050)} \log (1+Y_{t-1}). \end{aligned}
Corresponding standard errors of the regression coefficients are given underneath in parentheses. Note that the sum of the coefficients, $$0.547+0.451 \approx 1$$, shows evidence of the non-stationarity observed in the data. Exploring the data further by investigating the existence of interventions, we find two additive outliers on the 13th and 26th of March (the p-value after adjusting for all types of interventions is negligible). The resulting model is given by
\begin{aligned} {\hat{\nu }}_{t} =-0.007_{(.007)} + 0.779_{(0.046)} {\hat{\nu }}_{t-1} +0.211_{(.046)} \log (1+Y_{t-1})+1.643_{(0.137)} I(t=10) + 1.102_{(0.288)} I(t=23). \end{aligned}
Note again that the sum $$0.779+0.211 \approx 1$$, which shows that the non-stationarity persists even after including additive outliers (in the log-scale). Furthermore, the positive sign of both interventions shows the sudden explosion of the daily number of people infected. The corresponding Bayesian information criterion (BIC) value obtained after fitting this model is equal to 576.643, which improves on the BIC of the model without interventions, which was equal to 615.766. Figure 6 shows the fit of the model to the data and gives 95% prediction intervals for the week ahead.
Comparing both change-point analysis (see Fig. 5) and the result obtained by using the above intervention analysis, we observe that both approaches give similar prediction intervals that include future observed incidence data. Indeed, the observed data for the week ahead (01/06/2020-07/06/2020) were 4,6,1,0,5,5 and 1 cases.
### Results for the effective reproduction number
Recall the effective reproduction number $$R_t$$ defined by (8). We perform Bayesian analysis using (4) (see Section “Estimation of the effective reproduction number $$R_t$$”) separately on two sets of data: the data concerning all COVID-19 diagnoses in Cyprus and the data concerning diagnoses from local transmission only; for details regarding prior modelling and computation see Online Appendix A4. We examine six consecutive fortnight periods, for each of which we draw 10,000 steps of the Markov chain Monte Carlo algorithm used (an independence sampler), discard the first 2000 steps as burn-in and use the remaining ones to approximate the posterior distributions of $$\alpha , \beta$$ and hence $$R_t$$.
For the data concerning all diagnoses, the first recorded incident was on 07/03/2020; hence, as detailed in Online Appendix A4, we initialize our analysis of the outbreak 3 days earlier, on 04/03/2020. Figure 7 shows the estimated joint posterior distributions of the reporting rate $$\alpha$$ and the transmission rate $$\beta$$, for the six fortnight periods. In the first period the posterior probability is distributed over a wide range of values, while in the following periods it concentrates on a narrower range. In particular, in the first period, $$\beta$$ takes with high posterior probability values between 1 and 2, while $$\alpha$$ concentrates between 0.5 and 1. This can be attributed to both the data and the postulated priors on $$\alpha$$ and $$\beta$$. Early in the outbreak, there was a high degree of social interaction, and only a small fraction of the public took protective measures, hence an infected individual could transmit the virus to many people before getting detected and put into isolation; this explains the high values of $$\beta$$. Furthermore, there were only a few recorded diagnoses, the virus had not yet spread in the community and, due to the effective contact tracing procedures, the reporting rate was high; this can explain the high values of $$\alpha$$ (even though the prior on $$\alpha$$ at this stage penalizes values far from 0.5). In the next two periods, the introduction of lockdown measures by the government and the adherence of the majority of the public to the advised protective measures results in lower values of $$\beta$$. At the same time, the virus has penetrated certain local communities (cf. clusters A and B in the Descriptive surveillance statistics subsection), and as a result we have lower values of the reporting rate $$\alpha$$, initially around 0.5 and in the third and fourth periods between 0.2 and 0.5 (even though the prior on $$\alpha$$ at this stage puts higher penalty on small values). 
In the fourth period, there is a high concentration of the posterior distribution on even lower values of $$\beta$$. This is the effect of the continued strict lockdown imposed. Finally in the final two periods, the values of $$\beta$$ remain low, with a slight increase compared to the fourth period, which can be attributed to the relaxation of measures by the government on 04/05/2020. The values of the reporting rate $$\alpha$$ significantly increase in the last two periods, which can be attributed to the very high number of both targeted and random testing performed.
The considerations in the last paragraph, combined with Eq. (8), explain the results on the effective reproduction number $$R_t$$ in Figs. 8 and 9. In Fig. 8 and the top part of Fig. 9, we see that in the first period the posterior distribution of $$R_t$$ is spread over a wide range of high values between 3 and 6, with a median of 4.47, while in the next three periods, with the introduction of the progressively stricter measures, the posterior increasingly concentrates on lower values. In particular, in the fourth period the posterior median is 0.38. In the last two periods, with the relaxation of the lockdown, there is a small increase in the values of $$R_t$$; however, its posterior distribution still mostly concentrates below 1, with medians around 0.7. The bottom part of Fig. 9 shows the posterior probabilities of the event $$R_t<1$$, which steadily increase following the progressively stricter measures imposed, from 0 in the first period to 0.87 in the fourth period, before dropping slightly to around 0.69 and 0.67 with the relaxation of measures in the last two periods, respectively.
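Posterior probabilities of the event $$R_t<1$$ such as those quoted above are simply the fraction of posterior draws that fall below 1. A minimal sketch of the computation, using made-up lognormal draws (not the paper's actual posterior samples) to mimic a post-lockdown period:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws of R_t for one period (NOT the paper's data):
# a lognormal centred below 1, mimicking the post-lockdown periods.
rt_draws = rng.lognormal(mean=np.log(0.7), sigma=0.3, size=10_000)

posterior_median = np.median(rt_draws)       # close to 0.7 by construction
prob_below_one = np.mean(rt_draws < 1.0)     # P(R_t < 1) as a sample fraction

print(f"median R_t = {posterior_median:.2f}")
print(f"P(R_t < 1) = {prob_below_one:.2f}")
```

The same one-liner, `np.mean(samples < 1.0)`, applies to any set of MCMC or filter-based posterior samples.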
For the data concerning only locally transmitted diagnoses, the first recorded incident was on 10/03/2020, hence we initialize our analysis of the outbreak three days earlier, on 07/03/2020. The results of the analysis using only local transmission diagnoses are similar to those of the analysis using the full data; see Figures A5, A6 and A7 in the Online Appendix.
Next, we consider the estimation model described in22, where Cyprus is divided into five subpopulations (Nicosia, Limassol, Larnaca, Paphos, Ammochostos) and the mobility patterns between them are taken into account (as described in Metapopulation compartmental model 2). The effective reproduction number is given by (8). The compartmental model 2 structure was integrated stochastically using a 4th-order Runge-Kutta (RK4) scheme. We use uniform prior distributions on the parameters of the model, with ranges similar to Li et al. (2020)22, as follows: relative transmissibility $$0.2\le \mu \le 1$$; movement factor $$1\le \theta \le 1.75$$; latency period $$3.5\le Z\le 5.5$$; infectious period $$3\le D \le 4$$. For the infection rate we choose $$0.1\le \beta \le 1.5$$ before the lockdown and $$0\le \beta \le 0.8$$ after the lockdown, and for the reporting rate we choose $$0.3\le \alpha \le 1$$. Note that the Ensemble Adjustment Kalman Filter (EAKF, described in Online Appendix A4) is not constrained by the initial priors and can migrate outside these ranges to obtain system solutions. For initialization purposes we assume that all five districts are potential origins, with an undocumented infected and exposed population drawn from a uniform distribution on [0, 5] a week before the first documented case. The initial condition does not affect the outcome of the inference. Transmission model 2 does not explicitly represent the process of infection confirmation. Thus, we mapped simulated documented infections to confirmed cases using a separate observational delay model. In this delay model, we account for the time interval between a person transitioning from latent to contagious and the observational confirmation of that individual's infection through a delay $$T_d$$. We assume that $$T_d$$ follows a Gamma distribution $$G(a,\tau _d/a)$$, where $$\tau _d=6$$ days and $$a=1.85$$, as derived by Li et al. (2020)22 using data from China. Inference is robust with respect to the choice of $$\tau _d$$.
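The delay distribution $$G(a,\tau_d/a)$$ is parameterized so that its mean is $$a\cdot(\tau_d/a)=\tau_d=6$$ days. A quick sketch of sampling it (NumPy's `gamma` takes shape and scale):

```python
import numpy as np

a, tau_d = 1.85, 6.0          # shape and mean, values from Li et al. (2020)
rng = np.random.default_rng(1)

# Draw reporting delays T_d ~ Gamma(shape=a, scale=tau_d/a);
# mean = shape * scale = tau_d, so delays average 6 days.
delays = rng.gamma(shape=a, scale=tau_d / a, size=100_000)

print(f"mean delay = {delays.mean():.2f} days")   # close to 6
print(f"std  delay = {delays.std():.2f} days")    # tau_d / sqrt(a), about 4.4
```

In a full simulation each newly contagious individual's confirmation date would be its transition date plus one such draw.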
For the inference we use diagnoses from local transmission in Cyprus, as reported by the Ministry of Health. In Fig. 10 we plot the time evolution of the weekly effective reproduction number $$R_t$$. While at the beginning of the outbreak the effective reproduction number was close to 2.5, after the lockdown measures it dropped below 1 and stayed consistently there until the end of May 2020.
We then use the methodology proposed by25 and recently modified by26, as described in detail in the Compartmental Model 4 subsection. For this method we also use the diagnoses from local transmission in Cyprus, as reported by the Ministry of Health. Figure 11 shows the daily median value as well as the 95% credible intervals for the effective reproduction number using this method.
These results should be compared with the methodology of Cori et al. (2013)28—see Fig. 12, which shows estimates of $$R_{t}$$ based on weekly time intervals. At the end of May, COVID-19 was well contained in Cyprus, even though the disease initiated with a high value of $$R_{t}$$. The government lockdown helped reduce the reproduction number, as the data show. Comparing the results obtained by all methods presented in this section, we see no gross discrepancies in concluding that $$R_{t} < 1$$ by the end of May, with high probability.
## Discussion
The work presented in this report is the result of the efforts of the authors to give guidance to the Cypriot government for controlling the COVID-19 infection outbreak. Different models and methods have been applied to the data collected by the Unit for Surveillance and Control of Communicable Diseases of the Cypriot Ministry of Health.
Change point detection has been used in order to estimate the number and locations of changes in the behaviour of the data. Three important changes were detected. The first indicated a significant increase in the number of cases, which we suspect is related to the formation of two clusters of COVID-19 infection cases, while the other two indicate a drop and a stabilisation, respectively, in the number of cases, probably linked to the effectiveness of the Government's lockdown decrees. A log-linear model with feedback and additive outliers was utilised to examine the existence of interventions, resulting in two identified cases. Both methods were subsequently used to provide prediction intervals for future numbers of diagnoses, with comparable results.
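The idea behind mean-shift change point detection can be illustrated with a toy single-changepoint scan; this is a deliberately simplified stand-in for the isolate-detect method of Anastasiou and Fryzlewicz13 actually used, shown on an invented daily-counts series:

```python
import numpy as np

def best_mean_changepoint(x):
    """Return the split index that best separates x into two
    constant-mean segments (minimum total squared error)."""
    n = len(x)
    best_k, best_cost = None, np.inf
    for k in range(2, n - 1):          # require at least 2 points per side
        left, right = x[:k], x[k:]
        cost = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

rng = np.random.default_rng(2)
# Toy daily-counts series (NOT the Cyprus data): mean jumps from 3 to 15 at day 30.
series = np.concatenate([rng.poisson(3, 30), rng.poisson(15, 30)])
cp = best_mean_changepoint(series)
print("estimated change point:", cp)   # near day 30
```

Real methods repeat such scans recursively (or over isolating intervals) and apply a stopping rule to decide how many change points the data support.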
Modifications of the SEIR model have been employed in order to study the long-term impact of COVID-19 on Cyprus. As expected, the fitting improved once enough data were available, with very good agreement with the observations. Using the fitted Model 3 it was possible to additionally estimate the number of unobserved exposed and infectious cases during the pandemic, with the maximum number of infectious individuals being reached on the 26th of March. Moreover, the SEIR-based Models 1 and 2 facilitated the estimation and prediction of the effective reproduction number. The estimation was performed in a Bayesian framework over consecutive time periods during which the effective reproduction number was considered constant. We have treated two separate cases: in the first, there was no distinction between the patients, while in the second we considered only those patients resulting from local transmission. Finally, we have estimated the effective reproduction number for a partition of the population according to the five geographic areas of Cyprus.
## Methods
All methodology is described in detail in "Results" section.
## Data availability
Data and code are available at GitHub (https://github.com/chrisnic12/covid_cyprus).
## References
1. Coronaviridae Study Group of the International Committee on Taxonomy of Viruses. The species severe acute respiratory syndrome-related coronavirus: classifying 2019-nCoV and naming it SARS-CoV-2. Nat. Microbiol. 5, 536–544 (2020).
2. Zhu, N. et al. A novel coronavirus from patients with pneumonia in China, 2019. N. Engl. J. Med. 382, 727–733 (2020).
3. World Health Organization. COVID-19 weekly epidemiological update (Technical Report, 2021).
4. Beigel, J. H. et al. Remdesivir for the treatment of Covid-19: preliminary report. N. Engl. J. Med. https://doi.org/10.1056/NEJMoa2007764 (2020).
5. Polack, F. P. et al. Safety and efficacy of the BNT162b2 mRNA Covid-19 vaccine. N. Engl. J. Med. 383, 2603–2615 (2020).
6. Baden, L. R. et al. Efficacy and safety of the mRNA-1273 SARS-CoV-2 vaccine. N. Engl. J. Med. 384, 403–416 (2020).
7. Voysey, M. et al. Safety and efficacy of the ChAdOx1 nCoV-19 vaccine (AZD1222) against SARS-CoV-2: an interim analysis of four randomised controlled trials in Brazil, South Africa, and the UK. Lancet 397, 99–111 (2021).
8. Kermack, W. & McKendrick, A. G. A contribution to the mathematical theory of epidemics. Proc. R. Soc. London Ser. A 115, 700–721 (1927).
9. Schröder, A. L. & Fryzlewicz, P. Adaptive trend estimation in financial time series via multiscale change-point-induced basis recovery. Stat. Its Interface 6, 449–463 (2013).
10. Bolton, R. & Hand, D. Statistical fraud detection: a review. Stat. Sci. 17, 235–255 (2002).
11. Olshen, A. B., Venkatraman, E. S., Lucito, R. & Wigler, M. Circular binary segmentation for the analysis of array-based DNA copy number data. Biostatistics 5, 557–572 (2004).
12. Jandhyala, V., Fotopoulos, S., MacNeill, I. & Liu, P. Inference for single and multiple change-points in time series. J. Time Ser. Anal. 34, 423–446 (2013).
13. Anastasiou, A. & Fryzlewicz, P. Detecting multiple generalized change-points by isolating single ones. https://arxiv.org/pdf/1901.10852.pdf (2019).
14. Kedem, B. & Fokianos, K. Regression Models for Time Series Analysis (Wiley, 2002).
15. Fokianos, K. Statistical analysis of count time series models: a GLM perspective. In Handbook of Discrete-Valued Time Series, Handbooks of Modern Statistical Methods (eds Davis, R. et al.) 3–28 (Chapman and Hall, 2015).
16. McCullagh, P. & Nelder, J. A. Generalized Linear Models 2nd edn. (Chapman and Hall, 1989).
17. Fokianos, K. & Tjøstheim, D. Log-linear Poisson autoregression. J. Multivar. Anal. 102, 563–578 (2011).
18. Liboschik, T., Fokianos, K. & Fried, R. tscount: an R package for analysis of count time series following generalized linear models. J. Stat. Softw. 82, 1–51. https://doi.org/10.18637/jss.v082.i05 (2017).
19. Keeling, M. J. & Rohani, P. Modeling Infectious Diseases in Humans and Animals (Princeton University Press, 2008).
20. Nicolaides, C., Avraam, D., Cueto-Felgueroso, L., González, M. C. & Juanes, R. Hand-hygiene mitigation strategies against global disease spreading through the air transportation network. Risk Anal. 40, 723–740 (2020).
21. Peng, L., Yang, W., Zhang, D., Zhuge, C. & Hong, L. Epidemic analysis of COVID-19 in China by dynamical modeling. arXiv:2002.06563 (2020).
22. Li, R. et al. Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (SARS-CoV-2). Science 368, 489–493 (2020).
23. Bajardi, P. et al. Human mobility networks, travel restrictions, and the global spread of 2009 H1N1 pandemic. PLoS ONE 6, e16591 (2011).
24. Nicolaides, C., Cueto-Felgueroso, L., González, M. C. & Juanes, R. A metric of influential spreading during contagion dynamics through the air transportation network. PLoS ONE 7, e40961 (2012).
25. Bettencourt, L. M. & Ribeiro, R. M. Real time Bayesian estimation of the epidemic potential of emerging infectious diseases. PLoS ONE 3, e2185 (2008).
26. Systrom, K. The metric we need to manage COVID-19: Rt, the effective reproduction number. http://systrom.com/blog/the-metric-we-need-to-managecovid-19 (2020).
27. Bernardo, J. M. & Smith, A. F. M. Bayesian Theory (John Wiley and Sons, 1994).
28. Cori, A., Ferguson, N. M., Fraser, C. & Cauchemez, S. A new framework and software to estimate time-varying reproduction numbers during epidemics. Am. J. Epidemiol. 178, 1505–1512. https://doi.org/10.1093/aje/kwt133 (2013).
29. Ferguson, N., Laydon, D., Nedjati-Gilani, G. et al. Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand. https://doi.org/10.25561/77482 (2020). Imperial College London (16-03-2020).
## Acknowledgements
We thank the Cyprus Ministry of Health for providing us with data.
## Author information
### Contributions
S.A., A.A., A.B., T.C., G.H., C.N., G.N. and K.F. designed the study and wrote the manuscript; S.A., A.A., A.B., C.N., G.N. and K.F. analyzed the data and executed the models; E.C. helped with data acquisition. All authors revised the manuscript for important intellectual content and approved the submitted version.
### Corresponding authors
Correspondence to Georgios Nikolopoulos or Konstantinos Fokianos.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
### Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Agapiou, S., Anastasiou, A., Baxevani, A. et al. Modeling the first wave of Covid-19 pandemic in the Republic of Cyprus. Sci Rep 11, 7342 (2021). https://doi.org/10.1038/s41598-021-86606-3
https://www.physicsforums.com/threads/jacobian-when-theres-a-multivariate-function-inside-it.795702/ | # Jacobian when there's a multivariate function inside it
• Thread starter gummz
• #1
## Homework Statement
Differentiate the function F(x,y) = f( g(x)k(y) ; g(x)+h(y) )
## Homework Equations
Standard rules for partial differentiation
## The Attempt at a Solution
The Jacobian will have two columns because of the variables x and y. But what then? f is a multivariate function inside the Jacobian!
## Answers and Replies
• #2
BvU
Homework Helper
So on top of the standard rules you get the chain rule.
Show some attempt at solution and help is on the way.
To demo my ignorance: differentiating gives two columns, but one row only, right?
Is there a significance in the ";"? You write F(x, y) -- a notation which I am also familiar with -- but then you write f(u; v).
• #3
Ray Vickson
Homework Helper
Dearly Missed
> gummz said: Differentiate the function F(x,y) = f( g(x)k(y) ; g(x)+h(y) ). The Jacobian will have two columns because of the variables x and y. But what then? f is a multivariate function inside the Jacobian!
Do you mean ##F(x,y) = f(u,v)##, where ##u = g(x) k(y)## and ##v = g(x) + h(y)##? If so, just apply the chain rule for derivatives. You need to express the answers in terms of the functions ##f_1, f_2##, where ##f_1(u,v) \equiv \partial f(u,v)/\partial u## and ##f_2(u,v) \equiv \partial f(u,v) / \partial v##.
• #4
Consider the partial derivatives that make up the derivative matrix. It should be a 2x2: you have two functions, and you take the derivative of both functions with respect to x or with respect to y.
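Ray Vickson's chain-rule recipe can be sanity-checked numerically. A sketch in Python with concrete, illustrative choices of f, g, k, h (not from the thread), comparing the chain-rule expression $$\partial F/\partial x = f_1\, g'(x)k(y) + f_2\, g'(x)$$ against a central finite difference:

```python
import math

# Illustrative concrete choices (hypothetical, just for the check):
g = math.exp            # g(x) = e^x, so g'(x) = e^x
k = math.sin            # k(y)
h = math.cos            # h(y)
f = lambda u, v: u * v**2   # any smooth f(u, v) works

def F(x, y):
    return f(g(x) * k(y), g(x) + h(y))

def dFdx_chain(x, y):
    # Chain rule: dF/dx = f_u * g'(x)k(y) + f_v * g'(x),
    # with f_u = v^2 and f_v = 2uv for this particular f.
    u, v = g(x) * k(y), g(x) + h(y)
    f_u, f_v = v**2, 2 * u * v
    return f_u * math.exp(x) * k(y) + f_v * math.exp(x)

def dFdx_numeric(x, y, eps=1e-6):
    return (F(x + eps, y) - F(x - eps, y)) / (2 * eps)

x0, y0 = 0.3, 1.1
err = abs(dFdx_chain(x0, y0) - dFdx_numeric(x0, y0))
print("agreement:", err < 1e-5)
```

The same check with x and y swapped verifies the second column of the 1x2 derivative.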
http://exxamm.com/QuestionSolution14/3D+Geometry/The+point+which+divides+the+line+joining+the+points+2+4+5+and+3+5+4+in+the+ratio+2+3+externaly+lies+on+the+p/1387112987
### Question Asked by a Student from EXXAMM.com Team
Q 1387112987. The point which divides the line joining the points (2,4,5) and (3,5,-4) externally in the ratio 2:3 lies on which plane?
A. YOZ plane.
B. None of these.
C. ZOX plane.
D. XOY plane.
#### HINT
(Provided By a Student and Checked/Corrected by EXXAMM.com Team)
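The hint body did not survive extraction. The external-division form of the section formula gives the point directly, as this quick check in Python shows:

```python
# Point dividing the join of A(2,4,5) and B(3,5,-4) externally in ratio m:n = 2:3.
# External division (section formula): P = (m*B - n*A)/(m - n) = (n*A - m*B)/(n - m).
A = (2, 4, 5)
B = (3, 5, -4)
m, n = 2, 3

P = tuple((n * a - m * b) / (n - m) for a, b in zip(A, B))
print(P)  # (0.0, 2.0, 23.0)
```

The x-coordinate is 0, so the point lies in the plane x = 0, i.e. the YOZ plane (option A).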
http://languagelog.ldc.upenn.edu/nll/?p=2126

## Icktheology
A couple of days ago, when I posted about Nicholas Kristof's take on the electrophysiology of politics, I limited my discussion to a 2008 Science article about the relationship between physiological reactions to "threatening images" (like a spider on someone's face) and political attitudes towards "protective policies" (like immigration). Thanks to a couple of enterprising readers, I found a readable .pdf of a 2009 manuscript from the same (University of Nebraska) laboratory, which Kristof also discusses, on the relationship between physiological reactions to "disgusting images" and attitudes towards "sex items" like gay marriage. And as promised, I have a bit more to say after reading it.
Taken together, the two papers are more convincing than either one alone, since the combined results call into question the idea that the measured physiological differences might simply be due to the greater uneasiness of (more conservative) townies being hooked up to electrodes in a university laboratory, compared to (more liberal) university faculty, students, and staff. But the second paper confirms some of the concerns that I had about the size of the effects in the first paper. And it also shows that I was off base when I wrote that "This is not a case of egregious journalistic misunderstanding or over-interpretation". In particular, Kristof doesn't just exaggerate, he directly contradicts the 2009 paper's findings when he writes that
Liberals released only slightly more moisture in reaction to disgusting images than to photos of fruit. But conservatives’ glands went into overdrive.
The 2009 paper is Smith et al., "The Ick Factor: Physiological Sensitivity to Disgust as a Predictor of Political Attitudes". The subjects in the study appear to be the same 50 Lincoln-area people used in the 2008 Science paper, and in fact the data in the two papers seem to have come from different aspects of the same two sessions, one involving answers to several political and psychological questionnaires, and the other involving measurements of physiological responses to seeing a series of still images. (Perhaps this is wrong, and it's just that the time period, the procedures, and the number of subjects are so similar as to give this impression.)
Political attitudes were measured by ascertaining reactions to 16 brief issue-prompt items presented in the well-known Wilson-Patterson format (Wilson and Patterson 1968), in which respondents indicate agreement or disagreement (they can also equivocate). The specific prompts employed … cover a range of topics including taxes, defense, and social issues.
The three possible answers were coded as 0, 0.5, and 1, in such a way as to make the more (conventionally) "conservative" position equal to 1.
To measure self-report, we employed the standard disgust sensitivity survey battery, DS-R (Disgust Sensitivity-Revised). […] [A] full discussion of this widely-used battery can be found at the Disgust Scale Homepage maintained by Jonathan Haidt.
To measure physiological response to disgust, we relied upon one of the most common indicators of physiological activity: skin conductance. […]
In order to stimulate possible disgust reactions, participants were shown five individual disgusting still images scattered throughout the sequential showing of a large number of individual images. Each image appeared on a computer screen for 15 seconds and was separated from the succeeding image by an inter-stimulus interval (ISI) of 10 seconds. Subsequent factor analysis indicated that responses to three of the five disgusting stimuli loaded heavily on the same factor, so our central measure of physiological disgust sensitivity is based on readings obtained during viewing of these three identified disgusting images: an emaciated body, human excrement, and a person eating a mouthful of worms.
One thing that struck me as odd about this is that two of the three "threatening" images used in the 2008 Science paper also strike me as having a substantial "ick factor": the authors describe them as "the face of a frightened person, a dazed individual with a bloody face, and an open wound with maggots in it". And many people might find the "disgusting" images somewhat "threatening" as well, it seems to me. (This will become relevant to the discussion later on.)
In any case, the results show clearly that Kristof's statement ("Liberals released only slightly more moisture in reaction to disgusting images than to photos of fruit. But conservatives’ glands went into overdrive.") is not just an exaggeration of the size of the effect, as I suggested in my earlier post, but a qualitatively false assertion. Here's the table showing correlations of subjects' physiological responses to the "disgust" images with their political attitudes:
The overall correlation with conservative attitudes in general (represented by the "index of all 16 items") is a small (and statistically insignificant) 0.18 — and falls to 0.14 if gender and age are controlled for. 6 of the 16 items have negative correlations, meaning that the glands of people who responded as "liberals" on those items tended to respond more strongly than those who responded as "conservatives" (though the difference was not judged to be statistically significant). 8 of the remaining 10 items had positive but statistically insignificant correlations. The only questionnaire items with significant positive correlations to the disgust-picture skin-conductance responses were "Gay Marriage" and "Premarital Sex".
Now, this calls into question my suggestion that perhaps "conservatives" responded more strongly (to the "threatening" pictures) because they were in an environment that made (at least some of) them (at least somewhat) more ill at ease than the "liberals" were. In this data, people with conservative positions on Small Government, Illegal Immigrants, Military Spending, and the Death Penalty, had (on average) weaker SCL reactions than people with liberal positions on those issues did. Of course, the data is clearly very noisy, and so maybe this is all just noise.
But still, the pattern undermines the "social distance" hypothesis, and it flatly contradicts Kristof's assertion about those conservative glands going into overdrive.
At this point, I'd like to reiterate the point that we're dealing with between-group differences that are small relative to the amount of within-group variation, even in the case of the strongest observed connections between physiology and politics. In my earlier post, I made this point by calculating the difference in means in terms of pooled standard deviation, and showed what sort of distributional overlap this implies. This time, I'll do it in terms of correlations.
The strongest correlation observed in this study is r=0.42, between SCL responses to "disgusting" pictures and attitudes towards Gay Marriage. What does this mean? Graphically, if we had two continuous variables, and plotted one of them against the other, the scatter-plot would look something like this:
This is fake data, alas, since the actual underlying results are not published. I created it with this little script in the R statistics language. (Sorry about the lack of indentation — WordPress's brain-dead interface eats all line-initial spaces…)
```r
C = 0
NoiseGain = 50
while (C != 0.42) {
  X1 = 1:48 + NoiseGain * runif(48)
  X2 = 1:48 + NoiseGain * runif(48)
  C = round(cor(X1, X2), digits = 2)
}
plot(X1, X2, type = "p", pch = "x", col = "red",
     main = "48 pairs of points, illustrating r=0.42\n(Fake data)",
     xlab = "Skin-conductance effect of 3 disgusting pictures",
     ylab = "Opposition to gay marriage",
     yaxt = "n", xaxt = "n",
     mgp = c(1, 1, 1))
```
To be more accurate, I should quantize the political-attitude results into the three bins of their survey responses. Doing that, we get an illustration like this one:
The R code:
```r
C = 0
NoiseGain = 50   # same noise level as in the previous script
while (C != 0.42) {
  X1 = 1:48 + NoiseGain * runif(48, min = -0.5, max = 0.5)
  X2 = 1:48 + NoiseGain * runif(48, min = -0.5, max = 0.5)
  X2[X2 <= 16] = -1
  X2[X2 > 16 & X2 <= 32] = 0
  X2[X2 > 32] = 1
  C = round(cor(X1, X2), digits = 2)
}
plot(X2, X1, type = "p", pch = "x", col = "red",
     main = "48 pairs of points, illustrating r=0.42\n(Fake data)",
     ylab = "Skin-conductance effect of 3 disgusting pictures",
     xlab = "Opposed to gay marriage? (NO, INDIFFERENT, YES)",
     yaxt = "n", xaxt = "n",
     mgp = c(1, 1, 1))
```
Again, this is not the real data — I'd love to have the real data to show you, but anyhow, given a correlation of r=0.42, it must look something like this. And again, you can see that the range of skin-conductance effects overlaps very substantially across the categories of political attitudes. It's probably a real effect, unlikely to have arisen through sampling error — though I'd feel better about this if I was sure that they'd done a correction for multiple comparisons, and if the set of "disgusting" pictures involved hadn't been adjusted post hoc:
In order to stimulate possible disgust reactions, participants were shown five individual disgusting still images scattered throughout the sequential showing of a large number of individual images.[…]. Subsequent factor analysis indicated that responses to three of the five disgusting stimuli loaded heavily on the same factor, so our central measure of physiological disgust sensitivity is based on readings obtained during viewing of these three identified disgusting images.
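A multiple-comparisons correction of the kind I'd want to see here is straightforward to apply. A Holm-Bonferroni sketch over a set of invented, illustrative p-values for 16 items (not the study's actual numbers):

```python
import numpy as np

def holm_bonferroni(pvals, alpha=0.05):
    """Return a boolean array: which hypotheses are rejected after
    Holm's step-down correction."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    rejected = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        # Compare the (rank+1)-th smallest p-value against alpha / (m - rank)
        if p[idx] <= alpha / (m - rank):
            rejected[idx] = True
        else:
            break          # step-down: stop at the first failure
    return rejected

# Invented p-values for 16 Wilson-Patterson items (NOT the study's numbers):
pvals = [0.001, 0.003, 0.2, 0.04, 0.5] + [0.3] * 11
n_survive = holm_bonferroni(pvals).sum()
print(n_survive, "of 16 survive correction")   # the 0.04 does not survive
```

The point is that a p-value like 0.04, which looks significant in isolation, can fail once the 16 simultaneous tests are accounted for.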
I should emphasize that a correlation of 0.42 is nothing to be ashamed of. As I noted in a post a few months ago, a meta-analysis of research results in Social Psychology shows that the mode of published correlations is below r=0.1, and 0.42 is well out on the upper tail:
But the point is that we're dealing with small-to-moderate shifts in heavily-overlapping distributions, not some sort of partition of the population into purity-seeking conservatives and ick-tolerant liberals (if that's even the right description for what's happening here).
And now the question of what their variables really mean comes to the fore. There are a few things that make this less clear than one would like it to be.
The most important puzzle is the strange divergence between their physiological and psychological measures of disgust sensitivity. Recall that they also "employed the standard disgust sensitivity survey battery, DS-R (Disgust Sensitivity-Revised)" to measure their subjects' self-reported disgust sensitivity. This measure also correlated with attitudes toward Gay Marriage (and to a lesser extent with other sex-related political attitudes):
So what comes next is unexpected:
The roughly parallel results for physiological and for self-reported disgust sensitivity naturally lead to suspicions that the two measures correlate strongly with each other. Surprisingly, however, despite the fact that both measures are related to political attitudes involving sexuality, self-reported disgust sensitivity is unrelated to physiological disgust sensitivity (as measured by skin conductance changes caused by the viewing of disgusting stimuli). The simple bivariate correlation of the two is substantively small, inverse (r = -.15), and statistically insignificant (p = .32).
Note also that self-reported disgust sensitivity also correlated with liberal attitudes on Welfare Spending, Small Government, Foreign Aid, and Gun Control. These effects were reduced but not eliminated by controlling for sex and age, with sex playing the most important role — women tended to have higher self-reported disgust sensitivity than men:
One possible reason for the absence of a bivariate relationship is the potentially confounding effect of gender. It may be that differences (physiological and otherwise) between males and females need to be controlled in order for a relationship between physiological and self-reported disgust sensitivity to appear.
Interestingly, while the data suggest important differences between males and females, these differences are not physiological and they do not seem to be the reason for the absence of a correlation. As indicated in Figure 1, where the range of both physiological and self-reported disgust sensitivity has been standardized to run from 0 to 1 in order to facilitate comparisons, mean gender differences occur for self-reported disgust sensitivity (p < .01) but not for physiological disgust sensitivity (p = .82). Perhaps females claim to be more disgust sensitive because, due to societal norms, they often feel pressure to be disgust sensitive whereas males often feel pressure to be disgust insensitive. Though this interpretation is only speculation, we can state with some certainty that, when the focus of attention is on physiology, the difference between males and females is substantively minute and statistically insignificant. When the focus is on self-report, however, males claim to be substantially and significantly less sensitive to disgust than females. Previous scholarship (and, undoubtedly, folk wisdom) holding that “women are more disgust sensitive than men,” (Inbar, Pizarro, and Bloom 2009) is more accurately summarized as women report being more disgust sensitive than men.
Here's their Figure 1:
Note that (as discussed here) this is typical of sex differences on other emotional dimensions such as "empathy":
In general, sex differences in empathy were a function of the methods used to assess empathy. There was a large sex difference favoring women when the measure of empathy was self-report scales; moderate differences (favoring females) were found for reflexive crying and self-report measures in laboratory situations; and no sex differences were evident when the measure of empathy was either physiological or unobtrusive observations of nonverbal reactions to another's emotional state.
The sex/gender – self-report/physiology stuff is interesting, but as the authors explain, it doesn't account for the unexpected disconnection between their measures:
These gender differences in self-report are substantial, however they do not account for the lack of overall correlation between reported disgust sensitivity and physiological disgust sensitivity. A partial correlation of reported and physiological disgust sensitivity controlling for the effects of gender still does not produce a statistically significant relationship (r = -.18; p = .23) and exactly the same numbers result when both gender and age (another variable that could rightly be thought to affect physiological readings) are included as control variables (r = -.18; p = .23). In sum, individuals who believe themselves to be particularly disgust sensitive or particularly disgust insensitive are unlikely to be reporting sentiments that in point of fact relate to their physiological responses to disgust. People are not particularly adept at reporting their emotional states and the gender differences in self-report but not in physiology encourage the conclusion that societal pressures could be more crucial in shaping self-reports than physiological responses.
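The partial correlation they describe is straightforward to reproduce: regress both measures on the control variables and correlate the residuals. Here's a minimal sketch with synthetic data (the variable names and numbers are illustrative, not the study's):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing out covariates z.

    x, y: 1-D arrays of equal length; z: 2-D array (n_samples, n_covariates).
    """
    design = np.column_stack([np.ones(len(x)), z])   # add an intercept column
    # Residuals of x and y after least-squares projection onto the covariates
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic illustration (NOT the study's data): two measures that are
# both shifted by gender look correlated until gender is controlled for.
rng = np.random.default_rng(0)
n = 100
gender = rng.integers(0, 2, n).astype(float)
x = 1.5 * gender + rng.normal(size=n)
y = 1.5 * gender + rng.normal(size=n)
print(np.corrcoef(x, y)[0, 1])                    # inflated by the confound
print(partial_corr(x, y, gender.reshape(-1, 1)))  # shrinks once controlled
```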
So in the end, I'm left wondering what space of "dispositional temperaments" is really at issue here, and what its relationship is to the measurements that they're using.
If you remove all the interpretations and just look at the data in this study, we've got something like a 48 X 24 matrix (48 subjects by 16 political questions plus physiological responses to 5 images plus the DS-R measure plus sex plus age). There are more than 200 pairwise correlations, plus a much larger number of potential multivariate analyses. A few of these turn out to be statistically significant. (After Bonferroni correction? I'm not sure.) Specifically, if we look at some of the inter-relationships among various of the political questions, subject sex and age, reactions to three of the pictures, and the DS-R measure, we get a pattern that partly makes sense in terms of Haidt's theory of purity-conscious conservatives, and partly (maybe mostly?) doesn't.
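For a sense of scale on the multiple-comparisons worry: at α = .05, running 200 independent tests is expected to yield about ten "significant" results even under the global null, and a Bonferroni correction demands a far stricter per-test threshold. A back-of-the-envelope sketch (the count of 200 is my rough reading of the matrix described above, not a figure from the paper):

```python
# Back-of-the-envelope multiple-comparisons check: with about 200
# pairwise tests run at alpha = .05, roughly ten "significant" results
# are expected even if nothing real is going on; Bonferroni divides
# alpha by the number of tests.
m = 200                                  # approximate number of tests
alpha = 0.05
expected_false_positives = m * alpha     # ~10 under the global null
bonferroni_threshold = alpha / m         # ~0.00025 per-test threshold
print(expected_false_positives, bonferroni_threshold)
```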
To the extent that I understand all of this, the results seem at least as puzzling as they are intriguing. And given how murky the political, psychological, and physiological spaces are, and how weak the interconnections tend to be, it worries me that most of the people who are thinking in public about these things tend to talk about them in the essentialist terms implied by the use of generic plurals ("… conservatives … liberals …"). This includes not only people like Kristof, who probably don't know any better, but also people like Jonathan Haidt and the authors of this study, who clearly do.
1. ### David Eddyshaw said,
February 18, 2010 @ 11:45 am
The oddest thing about all this is the underlying assumption that cultural conservatism and economic conservatism necessarily correlate at all. This seems to be very much the case in the parochial context of US politics, at least at the present time, but I very much doubt its universal validity.
In my highly theologically conservative British church I sometimes have the impression that I am the lone conservative voter, and I think this is pretty typical over here.
Conversely, it's not hard to think of communist states past and present which are not beacons of social liberalism.
2. ### Dan Lufkin said,
February 18, 2010 @ 12:50 pm
The Pew organization last summer published an interesting survey on correlation between political orientation and attitudes to various scientific topics like climate change, compulsory vaccination and evolution. It's available at
http://people-press.org/report/528
3. ### Dan Lufkin said,
February 18, 2010 @ 12:56 pm
Aw, it comes up dry. Try
http://people-press.org/report/528/
or go back to the 2009 listing and find Public Praises Science; Scientists Fault Public, Media
Sorry!
4. ### Dees said,
February 18, 2010 @ 1:26 pm
I read the title of this post and I was really disappointed that it didn't contain anything about the theology of fish. I was thinking the Greek "ichthus" + "theology". Oh well. Maybe next time.
5. ### David Eddyshaw said,
February 18, 2010 @ 2:49 pm
@Dan Lufkin:
Interesting.
I shouldn't have confused the issue by talking about my church affiliation, though, which I just meant as a single example; what I was principally driving at was the strange assumption that a whole range of opinions on very diverse and not obviously logically connected issues can all be lumped together as either "conservative" or "liberal". It seems to me that a lot of this "research" uncritically takes associations which are in fact highly contingent outcomes of specifically US historical developments and imagines that we're talking about archetypal human attitudes.
I suppose in principle the sort of research we're talking about could show some underlying pan-human culture-independent psychological connexion between attitudes to (say) homosexuality and universal medical care free at the point of use, if it were carried out properly. I eagerly await better studies.
6. ### Kylopod said,
February 18, 2010 @ 4:13 pm
Conversely, it's not hard to think of communist states past and present which are not beacons of social liberalism.
I would not call communist states, past or present, beacons of any sort of liberalism.
7. ### andrew c said,
February 18, 2010 @ 5:47 pm
Anecdata: my boss, who supports the Australian Liberal Party (conservative socially, liberal economically) has a PA who is openly lesbian, and recently a sole parent. He was very supportive of her during the pregnancy and shows absolutely no sign of being bothered by open homosexuality. But we still disagree loudly about economics.
8. ### Stephen Jones said,
February 18, 2010 @ 7:10 pm
and I was really disappointed that it didn't contain anything about the theology of fish. I was thinking the Greek "ichthus" + "theology".
Icktheology is about the theology of green lizards. The first part of the compound comes from the known expert in the field, David Icke.
9. ### D.O. said,
February 18, 2010 @ 7:24 pm
I think the best follow-up to these experiments is to show a range of subjects some gay-life pictures, such as same-sex couples holding hands, getting married, maybe even having sex, in parallel to the same of "heteros" and maybe diluted by neutral images. And then try to elicit how much of (I guess, obvious) liberal/conservative spread of opinions to those events is correlated with gland activity. The rest of it (taxes, immigration, war) can be safely forsaken. If I get it right, Haidt's idea is that gland activity will explain a lot of it. After that, they can look at experiments that will correlate general irkiness to homo-related irkiness and if everything works out right we can start pondering if it is general irkiness -> "homo-irkiness" -> conservative attitude causal chain or maybe conservative worldview -> "homo-irkiness" -> induced gland activity or something else.
As things stand now, the logical gaps between measured quantities are so large that the results are impossible to interpret in any breakfast table talk way.
10. ### WoDan said,
February 19, 2010 @ 10:26 am
@Dees
You'll have to put up with this:
Richter, Johann Gottfried Ohnefalsch
Ichthyotheologie, oder: vernunft- und schriftmässiger Versuch die Menschen aus Betrachtung der Bische zur Bewunderung, Ehrfurcht und Liebe ihres grossen, liebreichen und allein weisen Schöpfers zu führen.
Leipzig: Friedrich Lankischens Erben, 1754.
(Title translation: Ichthyotheology, or a reason- and scripture-based essay to guide humans to admiration, reverence and love for their great, loving and all-wise creator by contemplation of fish.)
11. ### Simon Cauchi said,
February 19, 2010 @ 1:10 pm
GoogleBooks has some scanning errors. I think it should be "Fische" (not "Bische") and "allerweisen" (not "allein weisen").
But what a lovely find!
12. ### WoDan said,
February 19, 2010 @ 1:44 pm
You are right about the "Bische". I miscorrected when I tried to get the capitalization right. Sorry.
"allein weise" is in the original (check the scan). I think it's a dated variant of "allweise". The modern (technical) term would be "allwissend" (omniscient).
13. ### Amy Stoller said,
February 19, 2010 @ 3:50 pm
@David: "The oddest thing about all this is the underlying assumption that cultural conservatism and economic conservatism necessarily correlate at all. This seems to be very much the case in the parochial context of US politics, at least at the present time, but I very much doubt its universal validity."
Well put. I doubt, however, that it correlates nearly so much in the US as certain pundits with axes to grind would have us believe.
This whole study strikes me as bizarre. Who is defining "conservative" and "liberal"? And "disgusting" and "threatening"? How are they defining these terms? And what makes anybody think surveying/testing 50 people in Lincoln, NE, is going to give reliable results about anything at all?
@andrew c: Thank you for "anecdata." New to me.
Talk about your lies, damned lies, and statistics … the study itself sounds like bad science to me, before you even get to the bad science reporting.
14. ### Amy Stoller said,
February 19, 2010 @ 3:51 pm
My comment to andrew c was supposed to be at the end of my post, not before my remarks about "lies, etc."
15. ### Simon Cauchi said,
February 20, 2010 @ 9:47 am
You're quite right about "allein weisen" (I've now checked the scan), but that means "only wise", as in the hymn by Walter Chalmers Smith, "Immortal, invisible, God only wise".
https://www.typhoon-hil.com/documentation/typhoon-hil-software-manual/References/rms_measurements.html

# RMS measurements
This section describes RMS measurements.
Root mean square (RMS) measurements are supported for all devices. These measurements use the so-called DSP solver part of the processor; the same solver is used for power sources and thermal models. The number of RMS measurements is not strictly limited.
Table 1. RMS measurement components in the HIL toolbox

| Component | Component parameters |
| --- | --- |
| Current RMS measurement | Operation mode (PLL based/Fixed frequency); Fundamental frequency |
| Voltage RMS measurement | Operation mode (PLL based/Fixed frequency); Fundamental frequency |
## Tab: General
There are two operation modes of RMS measurements: PLL based and Fixed frequency.
• PLL based mode is more general: it adapts in real time to the signal's frequency. This mode should be used for variable-frequency signals. If the measured signal is significantly distorted, the built-in PLL may fail to lock, resulting in inconsistent readings.
• In Fixed frequency mode, the RMS value is measured using a predefined fundamental frequency, defined by the value set in the Fundamental frequency parameter.
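As a rough illustration of the Fixed frequency mode, the RMS value over an integer number of assumed fundamental periods can be sketched as follows (this is a simplified model, not the toolbox's internal implementation; the sampling setup is an assumption for the example):

```python
import numpy as np

def rms_fixed_frequency(signal, fs, f0):
    """RMS over an integer number of assumed fundamental periods.

    signal: uniformly sampled waveform; fs: sample rate [Hz];
    f0: the assumed fundamental frequency [Hz].
    """
    samples_per_period = int(round(fs / f0))
    n_periods = len(signal) // samples_per_period
    window = signal[: n_periods * samples_per_period]
    return np.sqrt(np.mean(window ** 2))

# A 50 Hz sine with 230 V RMS, sampled at 10 kHz:
fs, f0 = 10_000, 50
t = np.arange(0, 0.2, 1 / fs)
v = 230 * np.sqrt(2) * np.sin(2 * np.pi * f0 * t)
print(rms_fixed_frequency(v, fs, f0))   # ≈ 230.0
```

If the true signal frequency drifts away from f0, the averaging window no longer spans whole periods and the reading oscillates, which is why PLL based mode exists for variable-frequency signals.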
## Tab: Signal Processing
If Pin to System CPU is checked, signal processing code in the component will be mapped to the System CPU. In this case, execution rates for signal processing code inside the component are defined in the Signal processing settings. If Pin to System CPU is unchecked, signal processing code in the component will be mapped to a CPU according to the standard CPU partitioning algorithms. More about these algorithms can be found in the Signal processing settings. In this case, it is possible to specify Slow and Fast execution rates.
If the Signal output property is set to True, the measured RMS value is available on a dynamically added output signal port. If Pin to System CPU is checked, the Output execution rate property defines the output execution rate. If Pin to System CPU is unchecked, the output execution rate is the same as Slow execution rate.
https://cs.stackexchange.com/questions/115973/base-n-encoding-with-smallest-output

# Base-N encoding with smallest output
I have a set of bytes (utf-8), and need to encode them into the smallest dataset possible using a Base N encoding scheme. Is it simply the higher the Base N encoding is (ie something like Base85 encoding) the smaller the output will be, or is there a point where using a higher base encoding creates longer outputs? What base number encoding should I use for a minimum length output?
• Ignoring compression issues, if your input has $m$ bits and you use a base-$N$ encoding, then you'll need $\left\lceil\frac{m}{\log N}\right\rceil$ symbols. If you encode symbols with a fixed length encoding, then you'll need at least $\lceil\log N\rceil$ bits per symbol. This is a convoluted way to say that, in general, there is no magic way to encode $m$ bits in less than $m$ bits. The above could be achieved, however, if the input distribution is not uniform and you are fine with a variable-length encoding (see e.g., the Huffman code). – Steven Oct 18 '19 at 14:22
• UTF-8 can always be compressed because it cannot contain arbitrary bytes. It is intentionally redundant to always allow finding the first byte of a code point, and to scan backwards to the previous code point. – gnasher729 Oct 19 '19 at 23:05
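The length formula in the first comment can be checked directly. A small sketch (illustrative only) computing the minimum symbol count for 1000 input bytes at several bases, showing that the gain shrinks only logarithmically as N grows (real schemes such as Base64 or Ascii85 pad to block boundaries, so actual outputs are slightly longer):

```python
import math

def encoded_symbols(n_bytes, base):
    """Minimum number of base-`base` symbols that can represent n_bytes."""
    bits = 8 * n_bytes
    return math.ceil(bits / math.log2(base))

for base in (16, 32, 64, 85, 256):
    print(base, encoded_symbols(1000, base))
# 16 → 2000, 32 → 1600, 64 → 1334, 85 → 1249, 256 → 1000 symbols
```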
https://hal-brgm.archives-ouvertes.fr/hal-01440548

# Bounding probabilistic sea-level projections within the framework of the possibility theory
Abstract: Despite progress in climate change science, projections of future sea-level rise remain highly uncertain, especially due to large unknowns in the melting processes affecting the ice sheets in Greenland and Antarctica. Based on climate-model outcomes and the expertise of scientists concerned with these issues, the IPCC provided constraints on the quantiles of sea-level projections. Moreover, additional physical limits to future sea-level rise have been established, although approximately. However, many probability functions can comply with this imprecise knowledge. In this contribution, we provide a framework based on extra-probabilistic theories (namely the possibility theory) to model the uncertainties in sea-level rise projections by 2100 under the RCP 8.5 scenario. The results provide a concise representation of uncertainties in future sea-level rise and of their intrinsically imprecise nature, including a maximum bound of the total uncertainty. Today, coastal impact studies are increasingly moving away from deterministic sea-level projections, which underestimate the expected damages and adaptation needs compared to probabilistic laws. However, we show that the probability functions used so far have only explored a rather conservative subset of sea-level projections compliant with the IPCC. As a consequence, coastal impact studies relying on these probabilistic sea-level projections are expected to underestimate the possibility of large damages and adaptation needs.
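As a toy illustration of the possibility-theoretic framing (the numbers below are invented for illustration and are not the paper's), a trapezoidal possibility distribution yields an interval of probabilities for exceeding a given sea level, rather than a single value:

```python
def trapezoid(x, a, b, c, d):
    """Possibility (membership) degree: support [a, d], core [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def bounds_on_exceedance(t, a, b, c, d, n=100_000):
    """(necessity, possibility) of {SLR > t}: bounds on P(SLR > t)."""
    grid = [a + (d - a) * k / n for k in range(n + 1)]
    above = [trapezoid(x, a, b, c, d) for x in grid if x > t]
    below = [trapezoid(x, a, b, c, d) for x in grid if x <= t]
    poss = max(above) if above else 0.0          # upper probability bound
    nec = 1.0 - (max(below) if below else 0.0)   # lower probability bound
    return nec, poss

# Invented numbers for illustration: core [0.5, 1.0] m, support [0.3, 2.5] m
lo, hi = bounds_on_exceedance(1.5, 0.3, 0.5, 1.0, 2.5)
print(lo, hi)   # P(SLR > 1.5 m) is only bounded: between 0.0 and ≈ 0.667
```

The width of the [necessity, possibility] interval is exactly the "imprecision" the abstract refers to: any probability function consistent with the possibility distribution assigns an exceedance probability somewhere in that interval.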
Document type:
Journal article
https://hal-brgm.archives-ouvertes.fr/hal-01440548
Contributor: Gonéri Le Cozannet <>
Submitted on: Thursday, 19 January 2017 - 13:18:44
Last modified on: Thursday, 19 January 2017 - 14:27:35
Document(s) archived on: Thursday, 20 April 2017 - 13:23:25
### Citation
Gonéri Le Cozannet, Jean-Charles Manceau, Jeremy Rohmer. Bounding probabilistic sea-level projections within the framework of the possibility theory. Environmental Research Letters, IOP Publishing, 2017, 12 (1), pp.014012. ⟨10.1088/1748-9326/aa5528⟩. ⟨hal-01440548⟩
http://biomechanical.asmedigitalcollection.asme.org/issue.aspx?journalid=114&issueid=934124
IN THIS ISSUE
### Guest Editorial
J Biomech Eng. 2015;137(7):070301-070301-2. doi:10.1115/1.4030529.
Commentary by Dr. Valentin Fuster
### Research Papers
J Biomech Eng. 2015;137(7):071001-071001-9. doi:10.1115/1.4030070.
Transport of solutes through diffusion is an important metabolic mechanism for the avascular cartilage tissue. Three types of interconnected physical phenomena, namely mechanical, electrical, and chemical, are all involved in the physics of transport in cartilage. In this study, we use a carefully designed experimental-computational setup to separate the effects of mechanical and chemical factors from those of electrical charges. Axial diffusion of a neutral solute (Iodixanol) into cartilage was monitored using calibrated microcomputed tomography (micro-CT) images for up to 48 hr. A biphasic-solute computational model was fitted to the experimental data to determine the diffusion coefficients of cartilage. Cartilage was modeled either using one single diffusion coefficient (single-zone model) or using three diffusion coefficients corresponding to superficial, middle, and deep cartilage zones (multizone model). It was observed that the single-zone model cannot capture the entire concentration-time curve and under-predicts the near-equilibrium concentration values, whereas the multizone model could very well match the experimental data. The diffusion coefficient of the superficial zone was found to be at least one order of magnitude larger than that of the middle zone. Since neutral solutes were used, glycosaminoglycan (GAG) content cannot be the primary reason behind such large differences between the diffusion coefficients of the different cartilage zones. It is therefore concluded that other features of the different cartilage zones such as water content and the organization (orientation) of collagen fibers may be enough to cause large differences in diffusion coefficients through the cartilage thickness.
J Biomech Eng. 2015;137(7):071002-071002-7. doi:10.1115/1.4030173.
We compare experimental and computational results for the actions of the cardioactive drugs Lidocaine, Verapamil, Veratridine, and Bay K 8644 on a tissue monolayer consisting of mainly fibroblasts and human-induced pluripotent stem cell-derived cardiomyocytes (hiPSc-CM). The choice of the computational models is justified and literature data is collected to model drug action as accurately as possible. The focus of this work is to evaluate the validity and capability of existing models for native human cells with respect to the simulation of pharmaceutical treatment of monolayers and hiPSc-CM. From the comparison of experimental and computational results, we derive suggestions for model improvements which are intended to computationally support the interpretation of experimental results obtained for hiPSc-CM.
J Biomech Eng. 2015;137(7):071003-071003-10. doi:10.1115/1.4030176.
A continuum mathematical model with sharp interface is proposed for describing the occurrence of patterns in initially circular and homogeneous bacterial colonies. The mathematical model encapsulates the evolution of the chemical field characterized by a Monod-like uptake term, the chemotactic response of bacteria, the viscous interaction between the colony and the underlying culture medium and the effects of the surface tension at the boundary. The analytical analysis demonstrates that the front of the colony is linearly unstable for a proper choice of the parameters. The simulation of the model in the nonlinear regime confirms the development of fingers with typical wavelength controlled by the size parameters of the problem, whilst the emergence of branches is favored if the diffusion is dominant on the chemotaxis or for high values of the friction parameter. Such results provide new insights on pattern selection in bacterial colonies and may be applied for designing engineered patterns.
J Biomech Eng. 2015;137(7):071004-071004-8. doi:10.1115/1.4030310.
Hydrated soft tissues, such as articular cartilage, are often modeled as biphasic systems with individually incompressible solid and fluid phases, and biphasic models are employed to fit experimental data in order to determine the mechanical and hydraulic properties of the tissues. Two of the most common experimental setups are confined and unconfined compression. Analytical solutions exist for the unconfined case with the linear, isotropic, homogeneous model of articular cartilage, and for the confined case with the non-linear, isotropic, homogeneous model. The aim of this contribution is to provide an easily implementable numerical tool to determine a solution to the governing differential equations of (homogeneous and isotropic) unconfined and (inhomogeneous and isotropic) confined compression under large deformations. The large-deformation governing equations are reduced to equivalent diffusive equations, which are then solved by means of finite difference (FD) methods. The solution strategy proposed here could be used to generate benchmark tests for validating complex user-defined material models within finite element (FE) implementations, and for determining the tissue's mechanical and hydraulic properties from experimental data.
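As an illustration of the kind of FD machinery the abstract mentions (a linear toy problem only; the paper's governing equations are nonlinear and, for confined compression, inhomogeneous), an explicit forward-time/centred-space scheme for a one-dimensional diffusive equation can be sketched as:

```python
import numpy as np

def diffuse_ftcs(u0, D, dx, dt, steps):
    """Forward-time/centred-space scheme for u_t = D * u_xx with
    fixed (Dirichlet) boundary values taken from u0's endpoints."""
    r = D * dt / dx**2
    assert r <= 0.5, "FTCS stability requires D*dt/dx^2 <= 1/2"
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        # RHS builds temporaries first, so the in-place update is safe
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

# Consolidation-like toy problem: a unit initial field decaying toward
# zero-valued boundaries (all parameters are illustrative).
n = 51
u0 = np.ones(n)
u0[0] = u0[-1] = 0.0
u = diffuse_ftcs(u0, D=1.0, dx=1.0 / (n - 1), dt=1e-4, steps=2000)
print(u.max())   # peak has decayed well below the initial value of 1
```

A benchmark of the sort the authors propose would compare such a numerical solution against a user-defined FE material model under the same boundary conditions.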
J Biomech Eng. 2015;137(7):071005-071005-8. doi:10.1115/1.4030175.
In this paper, a quantitative interpretation for atomic force microscopy-based dynamic nanoindentation (AFM-DN) tests on the superficial layers of bovine articular cartilage (AC) is provided. The relevant constitutive parameters of the tissue are estimated by fitting experimental results with a finite element model in the frequency domain. Such model comprises a poroelastic stress–strain relationship for a fibril reinforced tissue constitution, assuming a continuous distribution of the collagen network orientations. The identification procedure was first validated using a simplified transversely isotropic constitutive relationship; then, the experimental data were manually fitted by using the continuous distribution fibril model. Tissue permeability is derived from the maximum value of the phase shift between the input harmonic loading and the harmonic tissue response. Tissue parameters related to the stiffness are obtained from the frequency response of the experimental storage modulus and phase shift. With this procedure, an axial to transverse stiffness ratio (anisotropy ratio) of about 0.15 is estimated.
J Biomech Eng. 2015;137(7):071006-071006-14. doi:10.1115/1.4030174.
Different digital volume correlation (DVC) approaches are currently available or under development for bone tissue micromechanics. The aim of this study was to compare accuracy and precision errors of three DVC approaches for a particular three-dimensional (3D) zero-strain condition. Trabecular and cortical bone specimens were repeatedly scanned with a micro-computed tomography (CT). The errors affecting computed displacements and strains were extracted for a known virtual translation, as well as for repeated scans. Three DVC strategies were tested: two local approaches, based on fast-Fourier-transform (DaVis-FFT) or direct-correlation (DaVis-DC), and a global approach based on elastic registration and a finite element (FE) solver (ShIRT-FE). Different computation subvolume sizes were tested. Much larger errors were found for the repeated scans than for the virtual translation test. For each algorithm, errors decreased asymptotically for larger subvolume sizes in the range explored. Considering this particular set of images, ShIRT-FE showed an overall better accuracy and precision (a few hundreds microstrain for a subvolume of 50 voxels). When the largest subvolume (50–52 voxels) was applied to cortical bone, the accuracy error obtained for repeated scans with ShIRT-FE was approximately half of that for the best local approach (DaVis-DC). The difference was lower (250 microstrain) in the case of trabecular bone. In terms of precision, the errors shown by DaVis-DC were closer to the ones computed by ShIRT-FE (differences of 131 microstrain and 157 microstrain for cortical and trabecular bone, respectively). The multipass computation available for DaVis software improved the accuracy and precision only for the DaVis-FFT in the virtual translation, particularly for trabecular bone. The better accuracy and precision of ShIRT-FE, followed by DaVis-DC, were obtained with a higher computational cost when compared to DaVis-FFT. 
The results underline the importance of performing a quantitative comparison of DVC methods on the same set of samples by using also repeated scans, other than virtual translation tests only. ShIRT-FE provides the most accurate and precise results for this set of images. However, both DaVis approaches show reasonable results for large nodal spacing, particularly for trabecular bone. Finally, this study highlights the importance of using sufficiently large subvolumes, in order to achieve better accuracy and precision.
J Biomech Eng. 2015;137(7):071007-071007-10. doi:10.1115/1.4029986.
The effects of diabetes on the collagen structure and material properties of the sclera are unknown but may be important to elucidate whether diabetes is a risk factor for major ocular diseases such as glaucoma. This study provides a quantitative assessment of the changes in scleral stiffness and collagen fiber alignment associated with diabetes. Posterior scleral shells from five diabetic donors and seven non-diabetic donors were pressurized to 30 mm Hg. Three-dimensional surface displacements were calculated during inflation testing using digital image correlation (DIC). After testing, each specimen was subjected to wide-angle X-ray scattering (WAXS) measurements of its collagen organization. Specimen-specific finite element models of the posterior scleras were generated from the experimentally measured geometry. An inverse finite element analysis was developed to determine the material properties of the specimens, i.e., matrix and fiber stiffness, by matching DIC-measured and finite element predicted displacement fields. Effects of age and diabetes on the degree of fiber alignment, matrix and collagen fiber stiffness, and mechanical anisotropy were estimated using mixed effects models accounting for spatial autocorrelation. Older age was associated with a lower degree of fiber alignment and larger matrix stiffness for both diabetic and non-diabetic scleras. However, the age-related increase in matrix stiffness was 87% larger in diabetic specimens compared to non-diabetic controls and diabetic scleras had a significantly larger matrix stiffness (p = 0.01). Older age was associated with a nearly significant increase in collagen fiber stiffness for diabetic specimens only (p = 0.06), as well as a decrease in mechanical anisotropy for non-diabetic scleras only (p = 0.04). The interaction between age and diabetes was not significant for all outcomes. 
This study suggests that the age-related increase in scleral stiffness is accelerated in eyes with diabetes, which may have important implications in glaucoma.
J Biomech Eng. 2015;137(7):071008-071008-11. doi:10.1115/1.4029746.
Closure of the left atrioventricular orifice is achieved when the anterior and posterior leaflets of the mitral valve press together to form a coaptation zone along the free edge of the leaflets. This coaptation zone is critical to valve competency and is maintained by the support of the mitral annulus, chordae tendineae, and papillary muscles. Myocardial ischemia can lead to an altered performance of this mitral complex generating suboptimal mitral leaflet coaptation and a resultant regurgitant orifice. This paper reports on a two-part experiment undertaken to measure the dependence of coaptation force distribution on papillary muscle position in normal and functional regurgitant porcine mitral heart valves. Using a novel load sensor, the local coaptation force was measured in vitro at three locations (A1–P1, A2–P2, and A3–P3) along the coaptation zone. In part 1, the coaptation force was measured under static conditions in ten whole hearts. In part 2, the coaptation force was measured in four explanted mitral valves operating in a flow loop under physiological flow conditions. Here, two series of tests were undertaken corresponding to the normal and functional regurgitant state as determined by the position of the papillary muscles relative to the mitral valve annulus. The functional regurgitant state corresponded to grade 1. The static tests in part 1 revealed that the local force was directly proportional to the transmitral pressure and was nonuniformly distributed across the coaptation zone, being strongest at A1–P1. In part 2, tests of the valve in a normal state showed that the local force was again directly proportional to the transmitral pressure and was again nonuniform across the coaptation zone, being strongest at A1–P1 and weakest at A2–P2. Further tests performed on the same valves in a functional regurgitant state showed that the local force measured in the coaptation zone was directly proportional to the transmitral pressure.
However, the force was now observed to be weakest at A1–P1 and strongest at A2–P2. Movement of the anterolateral papillary muscle (APM) away from both the annular and anterior–posterior (AP) planes was seen to contribute significantly to the altered force distribution in the coaptation zone. It was concluded that papillary muscle displacement typical of myocardial ischemia changes the coaptation force locally within the coaptation zone.
Topics: Valves , Muscle , Displacement
Commentary by Dr. Valentin Fuster
J Biomech Eng. 2015;137(7):071009-071009-8. doi:10.1115/1.4028967.
This work studies a model for milk transport through lactating human breast ducts and describes mathematically the mass transfer from alveolar sacs through the mammary ducts to the nipple. In this model, both the phenomena of diffusion in the sacs and conventional flow in ducts have been considered. The ensuing analysis reveals that there is an optimal range of bifurcation numbers leading to the easiest milk flow based on the minimum flow resistance. This model formulates certain difficult-to-measure values like diameter of the alveolar sacs and the total length of the milk path as a function of easy-to-measure properties such as milk fluid properties and macroscopic measurements of the breast. Alveolar dimensions from breast tissues of six lactating women are measured and reported in this paper. The theoretically calculated alveoli diameters for optimum milk flow (as a function of bifurcation numbers) show excellent match with our biological data on alveolar dimensions. Also, the mathematical model indicates that for minimum milk flow resistance the glandular tissue must be within a short distance from the base of the nipple, an observation that matches well with the latest anatomical and physiological research.
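The minimum-flow-resistance argument relies on laminar (Poiseuille) duct flow, where a duct's hydraulic resistance scales with the inverse fourth power of its radius. A minimal sketch of that sensitivity (the viscosity, length, and radius below are assumed illustrative values, not measurements from the paper):

```python
import math

def poiseuille_resistance(mu, length, radius):
    """Hydraulic resistance of laminar flow in a cylindrical duct:
    R = 8 * mu * L / (pi * r^4)."""
    return 8.0 * mu * length / (math.pi * radius**4)

# Assumed illustrative values: viscosity ~2 mPa*s, duct 10 mm long.
mu, L = 2e-3, 10e-3
r = 0.5e-3  # 0.5 mm radius

R_narrow = poiseuille_resistance(mu, L, r)
R_wide = poiseuille_resistance(mu, L, 2 * r)

# Doubling the radius cuts resistance by a factor of 2^4 = 16,
# which is why duct geometry dominates the total flow resistance.
print(R_narrow / R_wide)  # 16.0 (up to rounding)
```

This r⁻⁴ sensitivity is what makes the bifurcation structure (how radii shrink at each branching) the decisive factor in the total resistance of the milk path.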
Commentary by Dr. Valentin Fuster
J Biomech Eng. 2015;137(7):071010-071010-8. doi:10.1115/1.4030404.
Commentary by Dr. Valentin Fuster
J Biomech Eng. 2015;137(7):071011-071011-6. doi:10.1115/1.4030532.
In the past years, there have been several experimental studies that aimed at quantifying the material properties of articular ligaments such as tangent modulus, tensile strength, and ultimate strain. Little has been done to describe their response to mechanical stimuli that lead to damage. The purpose of this experimental study was to characterize strain-induced damage in medial collateral ligaments (MCLs). Displacement-controlled tensile tests were performed on 30 MCLs harvested from Sprague Dawley rats. Each ligament was monotonically pulled to several increasing levels of displacement until complete failure occurred. The stress–strain data collected from the mechanical tests were analyzed to determine the onset of damage and its evolution. Unrecoverable changes such as increase in ligament's elongation at preload and decrease in the tangent modulus of the linear region of the stress–strain curves indicated the occurrence of damage. Interestingly, these changes were found to appear at two significantly different threshold strains ($P<0.05$). The mean threshold strain that determined the increase in ligament's elongation at preload was found to be 2.84% (standard deviation (SD) = 1.29%) and the mean threshold strain that caused the decrease in the tangent modulus of the linear region was computed to be 5.51% (SD = 2.10%), respectively. The findings of this study suggest that the damage mechanisms associated with the increase in ligament's elongation at preload and decrease in the tangent modulus of the linear region in the stress–strain curves in MCLs are likely different.
Commentary by Dr. Valentin Fuster
J Biomech Eng. 2015;137(7):071012-071012-8. doi:10.1115/1.4029984.
Accurate and reliable “individualized” low back erector spinae muscle (ESM) data are of importance to estimate its force producing capacity. Knowing the force producing capacity, along with spinal loading, enhances the understanding of low back injury mechanisms. The objective of this study was to build regression models to estimate the ESM cross-sectional area (CSA). Measurements were taken from axial-oblique magnetic resonance imaging (MRI) scans of a large historical population [54 females and 53 males at L3/L4, 50 females and 44 males at L4/L5, and 41 females and 35 males at L5/S1 levels]. Results suggest that an individual's ESM CSA can be accurately estimated based on his/her gender, height, and weight. Results further show that there is no significant difference between the measured and estimated ESM CSAs, and expected absolute error is less than 15%.
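The kind of regression described, estimating ESM CSA from gender, height, and weight, can be sketched with ordinary least squares on synthetic data (all coefficients below are invented for illustration and are not the paper's fitted values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic subjects: gender (0 = female, 1 = male), height (cm), weight (kg).
n = 200
gender = rng.integers(0, 2, n)
height = rng.normal(170, 10, n)
weight = rng.normal(70, 12, n)

# Hypothetical generating model for CSA (mm^2); these coefficients are
# made up for the sketch, not taken from the study.
true_beta = np.array([300.0, 150.0, 2.0, 8.0])  # intercept, gender, height, weight
X = np.column_stack([np.ones(n), gender, height, weight])
csa = X @ true_beta + rng.normal(0, 25, n)

# Ordinary least squares fit, as one might do with the MRI measurements.
beta_hat, *_ = np.linalg.lstsq(X, csa, rcond=None)
print(np.round(beta_hat, 1))
```

With enough subjects the fitted coefficients recover the generating model closely, which is the sense in which "gender, height, and weight" can stand in for a direct MRI measurement of CSA.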
Commentary by Dr. Valentin Fuster
### Technical Brief
J Biomech Eng. 2015;137(7):074501-074501-5. doi:10.1115/1.4030405.
Accurate quantification of subtle wrist motion changes resulting from ligament injuries is crucial for diagnosis and prescription of the most effective interventions for preventing progression to osteoarthritis. Current imaging techniques are unable to detect injuries reliably and are static in nature, thereby capturing bone position information rather than motion which is indicative of ligament injury. A recently developed technique, 4D (three dimensions + time) computed tomography (CT) enables three-dimensional volume sequences to be obtained during wrist motion. The next step in successful clinical implementation of the tool is quantification and validation of imaging biomarkers obtained from the four-dimensional computed tomography (4DCT) image sequences. Measures of bone motion and joint proximities are obtained by: segmenting bone volumes in each frame of the dynamic sequence, registering their positions relative to a known static posture, and generating surface polygonal meshes from which minimum distance (proximity) measures can be quantified. Method accuracy was assessed during in vitro simulated wrist movement by comparing a fiducial bead-based determination of bone orientation to a bone-based approach. The reported errors for the 4DCT technique were: 0.00–0.68 deg in rotation; 0.02–0.30 mm in translation. Results are on the order of the reported accuracy of other image-based kinematic techniques.
Commentary by Dr. Valentin Fuster
J Biomech Eng. 2015;137(7):074502-074502-6. doi:10.1115/1.4030406.
The ranges of angular motion measured using multisegmented spinal column models are typically small, meaning that minor experimental errors can potentially affect the reliability of these measures. This study aimed to investigate the sensitivity of the 3D intersegmental angles, measured using a multisegmented spinal column model, to errors due to marker misplacement. Eleven healthy subjects performed trunk bending in five directions. Six cameras recorded the trajectory of 22 markers, representing seven spinal column segments. Misplacement error for each marker was modeled as a Gaussian function with a standard deviation of 6 mm, and constrained to a maximum value of 12 mm in each coordinate across the skin. The sensitivity of 3D intersegmental angles to these marker misplacement errors, added to the measured data, was evaluated. The errors in sagittal plane motions resulting from marker misplacement were small (RMS error less than 3.2 deg and relative error in the angular range less than 15%) during the five trunk bending directions. The errors in the frontal and transverse plane motions, induced by marker misplacement, however, were large (RMS error up to 10.2 deg and relative error in the range up to 58%), especially during trunk bending in anterior, anterior-left, and anterior-right directions, and were often comparable in size to the intersubject variability for those motions. The induced errors in the frontal and transverse plane motions tended to be the greatest at the intersegmental levels in the lower lumbar region. These observations questioned the reliability of angle measures in the frontal and transverse planes, particularly in the lower lumbar region during trunk bending in the anterior direction, and thus did not recommend interpreting these measures for clinical evaluation and decision-making.
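The marker-misplacement model described above (zero-mean Gaussian with SD 6 mm, magnitude capped at 12 mm per coordinate) can be sketched as follows; clipping is one reading of "constrained to a maximum value", and the paper may have implemented the cap differently (for example by resampling out-of-range draws):

```python
import random

def marker_misplacement(n_markers, sd=6.0, max_abs=12.0, seed=42):
    """Per-coordinate marker misplacement errors (mm): Gaussian with the
    stated SD, constrained to a maximum magnitude, as in the sensitivity
    study described above.  One row of (x, y, z) errors per marker."""
    rng = random.Random(seed)
    clamp = lambda e: max(-max_abs, min(max_abs, e))
    return [[clamp(rng.gauss(0.0, sd)) for _ in range(3)]
            for _ in range(n_markers)]

errors = marker_misplacement(22)   # 22 markers, as in the study
flat = [e for row in errors for e in row]
print(max(abs(e) for e in flat) <= 12.0)  # True: cap is respected
```

These perturbations would then be added to the measured marker trajectories before recomputing the intersegmental angles, and the spread of the recomputed angles gives the sensitivity figures quoted in the abstract.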
Topics: Errors , Human spine
Commentary by Dr. Valentin Fuster
http://www.maa.org/publications/periodicals/american-mathematical-monthly/american-mathematical-monthly-may-2006

# American Mathematical Monthly - May 2006
## May 2006
A $1 Problem
by Michael J. Mossinghoff
mjm@member.ams.org
Suppose you need to design a new $1 coin with a polygonal shape, fixed diameter, and maximal area or maximal perimeter. Are regular polygons optimal? Does the answer depend on the number of sides? We investigate these two isodiametric problems for polygons and describe how to construct polygons that are optimal, or very nearly so, in each case.
What Can be Approximated by Polynomials with Integer Coefficients
by Le Baron O. Ferguson
ferguson@math.ucr.edu
A well-known result of Weierstrass states that any continuous function on a closed bounded interval of the real line can be uniformly approximated by polynomials. If we restrict ourselves to polynomials whose coefficients are all integers, can anything interesting be said? The answer is yes.
Periodicity and Predictability in Chaotic Systems
by Marcelo Sobottka and Luiz P.L. de Oliveira
sobottka@dim.uchile.cl, luna@exatas.unisinos.br
In this paper, we present a simple chaotic system (satisfying Devaney’s definition) that is periodic and computationally predictable under a symbolic representation scheme. The system consists of the restriction of the tent map to the rational numbers of its original domain. The example contradicts the usual belief that chaotic systems are necessarily nonperiodic and nonpredictable. A general discussion on the concept of computational predictability and its relationship with the existence of periodic orbits is included.
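The system in question, the tent map restricted to the rational numbers of its domain, can be explored with exact rational arithmetic: a rational seed can only visit rationals with bounded denominator, so every orbit is eventually periodic. A short illustrative experiment (not code from the paper):

```python
from fractions import Fraction

def tent(x):
    """Tent map T(x) = 2x for x <= 1/2, and 2 - 2x otherwise."""
    return 2 * x if x <= Fraction(1, 2) else 2 - 2 * x

def orbit(x0, n):
    """First n iterates of the tent map starting at x0 (inclusive of x0)."""
    xs = [x0]
    for _ in range(n):
        xs.append(tent(xs[-1]))
    return xs

# A rational seed: the orbit of 1/5 falls into the 2-cycle {2/5, 4/5}.
print(orbit(Fraction(1, 5), 5))
# [Fraction(1, 5), Fraction(2, 5), Fraction(4, 5), Fraction(2, 5), Fraction(4, 5), Fraction(2, 5)]
```

Using `Fraction` keeps the dynamics exact, so the eventual periodicity is a provable property of the orbit rather than a floating-point artifact, which is exactly the sense in which the restricted system is computationally predictable.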
The Simplest Example of a Normal Asymptotic Expansion
by José Antonio Adell and Alberto Lekuona
The central limit theorem has been described as one of the most important results in mathematics, mainly due to its proven application well beyond its own field. The investigation of the rates of convergence in this theorem, taking the form of normal asymptotic expansions, has great interest both from theoretical and practical points of view. However, proofs of these kinds of results are generally intricate, no matter what method is used. Being guided by the principle of "the general is embodied in the concrete," we provide a very simple example of a normal asymptotic expansion in which the technical complexity is reduced to a minimum. This example has the additional advantage of making clear the connections between some familiar notions and tools from probability theory and mathematical analysis. The content is accessible to a nonexpert looking for the "what" and "why" of this amazing research area.
Notes
The Arbitrariness of the Cevian Triangle
by Mowaffaq Hajja
Curiosities Concerning Weak Topology in Hilbert Space
by Gilbert Helmberg
gilbert.helmberg@telering.at
More Formulas for π
by Hei-Chi Chan
chan.hei-chi@uis.edu
On Gauss’s Entry from January 6, 1809
by Detlef Gröger
groeger.d@t-online.de
Problems and Solutions
Reviews
Prime Obsession
by John Derbyshire
Reviewed by Jeffrey Nunemacher
jlnunema@cc.owu.edu
Stalking the Riemann Hypothesis
by Dan Rockmore
Reviewed by Jeffrey Nunemacher
jlnunema@cc.owu.edu
https://gis.stackexchange.com/questions/25494/how-accurate-is-approximating-the-earth-as-a-sphere

# How accurate is approximating the Earth as a sphere?
What level of error do I encounter when approximating the earth as a sphere? Specifically, when dealing with the location of points and, for example, the great circle distances between them.
Are there any studies on the average and worst case error compared to an ellipsoid? I'm wondering how much accuracy I'd be sacrificing if I go with a sphere for the sake of easier calculations.
My particular scenario involves directly mapping WGS84 coordinates as if they were coordinates on a perfect sphere (with the mean radius defined by the IUGG) without any transformation.
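The scenario above, treating WGS84 latitude/longitude as coordinates on a perfect sphere, amounts to computing great-circle distances with the haversine formula and the IUGG mean radius R1 = (2a + b)/3 ≈ 6371.0088 km. A sketch (illustrative code, not part of the original question):

```python
import math

R1 = 6371.0088e3  # IUGG mean radius (2a + b) / 3 for WGS84, in metres

def great_circle(lat1, lon1, lat2, lon2, r=R1):
    """Haversine great-circle distance (metres) between two points
    given in degrees, treating the earth as a sphere of radius r."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))

# Sanity checks: a quarter meridian and an equatorial semicircle.
print(great_circle(0, 0, 90, 0))   # ~ pi * R1 / 2, about 10007.6 km
print(great_circle(0, 0, 0, 180))  # ~ pi * R1, about 20015.1 km
```

The answers below quantify how far such spherical distances can drift from the true ellipsoidal (geodesic) distances.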
• Are you specifically interested in a spherical model or are you interested in ellipsoid models? I imagine that the amount of error would vary greatly between a sphere and an ellipse. – Jay Laura May 15 '12 at 18:30
• A related analysis appears in this reply. To obtain an answer to your question, though, you need to specify how the earth is approximated as a sphere. Many approximations are in use. They all are tantamount to giving functions f' = u(f,l) and l' = v(f,l) where (f,l) are geographical coordinates of the sphere and (f',l') are geographical coordinates of the ellipsoid. See Section 1.7 ("Transformation...of the ellipsoid of revolution onto the surface of a sphere") in Bugayevskiy & Snyder, Map Projections, A Reference Manual. Taylor & Francis [1995]. – whuber May 15 '12 at 18:44
• This is akin to the early debate over the Google/Bing EPSG 900913 projection (which uses WGS84 coordinates but projects tham as if they were on a sphere) and the errors probably account for EPSG initially rejecting the projection until giving in to pressure from developers. Without want to overly distract you, following up on some of this debate can add some additional breadth to the information in excellent link provided by whuber. – MappaGnosis May 15 '12 at 19:40
• @Jzl5325: Yup, I meant a strict sphere and not ellipsoid, edited the question to provide a bit more context too. – Jeff Bridgman May 15 '12 at 19:49
• I think you should read this: en.wikipedia.org/wiki/Haversine_formula – longtsing Apr 17 at 2:51
## 2 Answers
In short, the distance can be in error up to roughly 22 km or 0.3%, depending on the points in question. That is:
• The error can be expressed in several natural, useful ways, such as (i) (residual) error, equal to the difference between the two calculated distances (in kilometers), and (ii) relative error, equal to the difference divided by the "correct" (ellipsoidal) value. To produce numbers convenient to work with, I multiply these ratios by 1000 to express the relative error in parts per thousand.
• The errors depend on the endpoints. Due to the rotational symmetry of the ellipsoid and sphere and their bilateral (north-south and east-west) symmetries, we may place one of the endpoints somewhere along the prime meridian (longitude 0) in the northern hemisphere (latitude between 0 and 90) and the other endpoint in the eastern hemisphere (longitude between 0 and 180).
To explore these dependencies, I have plotted the errors between endpoints at (lat,lon) = (mu,0) and (x,lambda) as a function of latitude x between -90 and 90 degrees. (All points are nominally at an ellipsoid height of zero.) In the figures, rows correspond to values of mu at {0, 22.5, 45, 67.5} degrees and columns to values of lambda at {0, 45, 90, 180} degrees. This gives us a good view of the spectrum of possibilities. As expected, their maximum sizes are approximately the flattening (around 1/300) times the major axis (around 6700 km), or about 22 km.
### Contour plot
Another way to visualize the errors is to fix one endpoint and let the other vary, contouring the errors that arise. Here, for example, is a contour plot where the first endpoint is at 45 degrees north latitude, 0 degrees longitude. As before, error values are in kilometers and positive errors mean the spherical calculation is too large:
It might be easier to read when wrapped around the globe:
The red dot in the south of France shows the location of the first endpoint.
For the record, here is the Mathematica 8 code used for the calculations:
``````WGS84[x_, y_] := GeoDistance @@ (GeoPosition[Append[#, 0], "WGS84"] & /@ {x, y});
sphere[x_, y_] := GeoDistance @@
(GeoPosition[{GeodesyData["WGS84", {"ReducedLatitude", #[[1]]}], #[[2]], 0}, "WGS84"] & /@ {x, y});
``````
And one of the plotting commands:
``````With[{mu = 45}, ContourPlot[(sphere[{mu, 0}, {x, y}] - WGS84[{mu, 0}, {x, y}]) / 1000,
{y, 0, 180}, {x, -90, 90}, ContourLabels -> True]]
``````
I've explored this question recently. I think people want to know
1. what spherical radius should I use?
2. what is the resulting error?
A reasonable metric for the quality of the approximation is the maximum absolute relative error in the great-circle distance
``````err = |s_sphere - s_ellipsoid| / s_ellipsoid
``````
with the maximum evaluated over all possible pairs of points.
If the flattening f is small, the spherical radius which minimizes err is very close to (a + b)/2 and the resulting error is about
``````err = 3*f/2 = 0.5% (for WGS84)
``````
(evaluated with 10^6 randomly chosen pairs of points). It is sometimes suggested to use (2*a + b)/3 as the spherical radius. This results in a slightly larger error, err = 5*f/3 = 0.56% (for WGS84).
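These percentages follow directly from the WGS84 defining constants; a quick arithmetic check (the values of a and f below are the standard WGS84 constants):

```python
a = 6378137.0          # WGS84 semi-major axis (m)
f = 1 / 298.257223563  # WGS84 flattening
b = a * (1 - f)        # semi-minor axis, about 6356752.3 m

R1 = (2 * a + b) / 3   # IUGG mean radius, about 6371008.8 m

# Maximum relative great-circle error for the two common sphere radii:
err_mean_ab = 3 * f / 2   # radius (a + b) / 2
err_r1 = 5 * f / 3        # radius R1 = (2a + b) / 3

print(f"{100 * err_mean_ab:.2f}%")  # 0.50%
print(f"{100 * err_r1:.2f}%")       # 0.56%
```

So the difference between the two radius choices is small in practice: about 0.06 percentage points of worst-case relative error.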
Geodesics whose length is most underestimated by the spherical approximation lie near a pole, e.g., (89.1,0) to (89.1,180). Geodesics whose length is most overestimated by the spherical approximation are meridional near the equator, e.g., (-0.1,0) to (0.1,0).
ADDENDUM: Here's another way of approaching this problem.
Select pairs of uniformly distributed points on the ellipsoid. Measure the ellipsoidal distance s and the distance on a unit sphere t. For any pair of points, s / t gives an equivalent spherical radius. Average this quantity over all the pairs of points and this gives a mean equivalent spherical radius. There's a question of exactly how the average should be done. However all the choices I tried
``````1. <s>/<t>
2. <s/t>
3. sqrt(<s^2>/<t^2>)
4. <s^2>/<s*t>
5. <s^2/t>/<s>
``````
all came out within a few meters of the IUGG recommended mean radius, R1 = (2 a + b) / 3. Thus, this value minimizes the RMS error in spherical distance calculations. (However it results in a slightly larger maximum relative error compared to (a + b) / 2; see above.) Given that R1 is likely to be used for other purposes (area calculations and the like), there is a good reason to stick with this choice for distance calculations.
The bottom line:
• For any kind of systematic work, where you can tolerate a 1% error in distance calculations, use a sphere of radius R1. The maximum relative error is 0.56%. Use this value consistently when you approximate the earth with a sphere.
• If you need additional accuracy, solve the ellipsoidal geodesic problem.
• For back of the envelope calculations, use R1 or 6400 km or 20000/pi km or a. These result in a maximum relative error of about 1%.
ANOTHER ADDENDUM: You can squeeze a little more accuracy out of the great circle distance by using μ = tan⁻¹((1 − f)^(3/2) tan φ) (a poor man's rectifying latitude) as the latitude in the great circle calculation. This reduces the maximum relative error from 0.56% to 0.11% (using R1 as the radius of the sphere). (It's not clear whether it's really worth taking this approach as opposed to computing the ellipsoidal geodesic distance directly.)
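The latitude substitution in that last addendum can be sketched as follows (`adjusted_latitude` is a hypothetical helper name; the pole case is handled separately because tan φ diverges there):

```python
import math

f = 1 / 298.257223563  # WGS84 flattening

def adjusted_latitude(phi_deg):
    """Latitude substitution mu = atan((1 - f)^(3/2) * tan(phi)),
    as suggested above to shrink the spherical-distance error.
    Degrees in, degrees out."""
    if abs(phi_deg) == 90.0:
        return phi_deg  # poles map to themselves
    phi = math.radians(phi_deg)
    return math.degrees(math.atan((1 - f) ** 1.5 * math.tan(phi)))

print(adjusted_latitude(0.0))   # 0.0 (equator unchanged)
print(adjusted_latitude(45.0))  # slightly under 45, about 44.86
```

One would feed these adjusted latitudes, together with the unchanged longitudes, into the spherical great-circle formula with radius R1.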
https://zbmath.org/?q=ci%3A1198.60005
# zbMATH — the first resource for mathematics
A white noise approach to stochastic partial differential equations driven by the fractional Lévy noise. (English) Zbl 1448.60142
Summary: In this paper, based on the white noise theory for $$d$$-parameter Lévy random fields given by H. Holden et al. [Stochastic partial differential equations. A modeling, white noise functional approach. 2nd ed. New York, NY: Springer (2010; Zbl 1198.60005)], we develop a white noise frame for anisotropic fractional Lévy random fields to solve the stochastic Poisson equation and the stochastic Schrödinger equation driven by the $$d$$-parameter fractional Lévy noise. The solutions for the two kinds of equations are all strong solutions given explicitly in the Lévy-Hida stochastic distribution space.
##### MSC:
60H15 Stochastic partial differential equations (aspects of stochastic analysis) 60H40 White noise theory 60G51 Processes with independent increments; Lévy processes 60G52 Stable stochastic processes
Full Text:
##### References:
[1] Bender, C.; Marquardt, T., Stochastic calculus for convoluted Lévy processes, Bernoulli, 14, 499-518, (2008) · Zbl 1173.60017
[2] Bers, L., John, F., Schechter, M.: Partial Differential Equations, Interscience (1964)
[3] Durrett, R.: Brownian Motion and Martingales in Analysis. Wadsworth, Belmont (1984) · Zbl 0554.60075
[4] Elliott, R. C.; Hoek, J., A general fractional white noise theory and applications to finance, Math. Finance, 13, 301-330, (2003) · Zbl 1069.91047
[5] Holden, H., Oksendal, B., Uboe, J., Zhang, T.: Stochastic Partial Differential Equations: a modeling, white noise functional approach, 2nd edn. Springer, (2010) · Zbl 1198.60005
[6] Huang, Z.; Li, C., On fractional stable processes and sheets: white noise approach, J. Math. Anal. Appl., 325, 624-635, (2007) · Zbl 1116.60018
[7] Huang, Z.; Li, P., Generalized fractional Lévy processes: a white noise approach, Stoch. Dyn., 6, 473-485, (2006) · Zbl 1109.60057
[8] Huang, Z.; Li, P., Fractional generalized Lévy random fields as white noise functionals, Front. Math. China, 2, 211-226, (2007) · Zbl 1135.60326
[9] Huang, Z.; Lü, X.; Wan, J., Fractional Lévy processes and noises on Gel’fand triple, Stoch. Dyn., 10, 37-51, (2010) · Zbl 1185.60053
[10] Lokka, A.; Oksendal, B.; Proske, F., Stochastic partial differential equations driven by Lévy space-time white noise, Ann. Appl. Probab., 14, 1506-1528, (2004) · Zbl 1053.60069
[11] Lü, X., Dai, W.: White noise analysis for fractional Lévy processes and its applications. (to appear)
[12] Lü, X.; Huang, Z.; Dai, W., Generalized fractional Lévy random fields on Gel’fand triple: a white noise approach, Front. Math. China, 6, 493-506, (2011) · Zbl 1288.60086
[13] Lü, X.; Huang, Z.; Wan, J., Fractional Lévy processes on Gel’fand triple and stochastic integration, Front. Math. China, 3, 287-303, (2008) · Zbl 1146.60309
[14] Marquardt, T., Fractional Lévy processes with an application to long memory moving average processes, Bernoulli, 12, 1099-1126, (2006) · Zbl 1126.60038
[15] Nualart, D.; Schoutens, W., Chaotic and predictable representations for Lévy processes, Stoch. Process. Appl., 90, 109-122, (2000) · Zbl 1047.60088
[16] Samko, S.G., Kilbas, A.A., Marichev, O.I.: Fractional Integrals and Derivatives: Theory and Applications. Gordon & Breach, New York (1987) · Zbl 0617.26004
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
https://crad.ict.ac.cn/CN/10.7544/issn1000-1239.2016.20160178

ISSN 1000-1239 CN 11-1777/TP
• Artificial Intelligence •
### Tensor Representation Based Dynamic Outlier Detection Method in Heterogeneous Network
1. 1(College of Computer Science and Technology, Jilin University, Changchun 130012); 2(Key Laboratory of Symbol Computation and Knowledge Engineering (Jilin University), Ministry of Education, Changchun 130012) (liulu12@mails.jlu.edu.cn)
• Online date: 2016-08-01
• Funding:
National Natural Science Foundation of China (60903098); Jilin Province Industrial Technology Research and Development Project (JF2012c016-2); Jilin University Graduate Innovation Fund Project (2015040)
### Tensor Representation Based Dynamic Outlier Detection Method in Heterogeneous Network
Liu Lu1, Zuo Wanli1,2,Peng Tao1,2
1. 1(College of Computer Science and Technology, Jilin University, Changchun 130012);2(Key Laboratory of Symbol Computation and Knowledge Engineering (Jilin University), Ministry of Education, Changchun 130012)
• Online: 2016-08-01
Abstract: Mining the rich semantic information hidden in heterogeneous information networks is an important task in data mining. The value, data distribution, and generation mechanism of outliers are all different from those of normal data. Analyzing their generation mechanism, or even eliminating outliers, is therefore of great significance. Outlier detection in homogeneous information networks has been studied and explored for a long time. However, few methods aim at dynamic outlier detection in heterogeneous networks, and many issues remain to be settled. Due to the dynamics of heterogeneous information networks, normal data may become outliers over time. This paper proposes a dynamic tensor-representation-based outlier detection method, called TRBOutlier. It constructs a tensor index tree according to the high-order data represented by the tensor. The features are added to a direct item set and an indirect item set respectively when searching the tensor index tree. Meanwhile, we describe a clustering method based on the correlation of short texts to judge whether the objects in the dataset change their original clusters and then detect outliers dynamically. This model preserves the semantic relationships in heterogeneous networks as much as possible while substantially reducing the time and space complexity. The experimental results show that our proposed method can detect outliers dynamically in heterogeneous information networks both effectively and efficiently.
http://mathhelpforum.com/discrete-math/7851-property-help-print.html

Property Help
• Nov 21st 2006, 12:20 PM
k1ll3rdr4g0n
Property Help
I have this problem:
a * (a + b) = a
And I can only use the commutative, distributive, identity, negation properties.
A hint that my teacher gives is to use is
a * a (a + b) = (a + 0) * (a + b)
Can anyone help me?
My teacher doesn't explain it well at all!
• Nov 21st 2006, 12:46 PM
Plato
You have to tell us more about what is going on here.
What is the definition of the operation *?
How does a(a+b) work or what does a(a+b) mean?
Please give more information about this problem.
• Nov 21st 2006, 01:04 PM
k1ll3rdr4g0n
Quote:
Originally Posted by Plato
You have to tell us more about what is going on here.
What is the definition of the operation *?
How does a(a+b) work or what does a(a+b) mean?
Please give more information about this problem.
Whoops, the hint is actually a * (a + b) = (a + 0) * (a + b)
But I think you got it.
* = and
+ = or
• Nov 21st 2006, 02:27 PM
Plato
This is a simple Boolean algebra.
This proof may or may not fit your textbook.
$\begin{array}{rcl}
a*\left( {a + b} \right) & = & \left( {a + 0} \right)*\left( {a + b} \right)\quad \mbox{Identity Law} \\
 & = & a + \left( {0*b} \right)\quad \mbox{Distributive Law} \\
 & = & a + 0\quad \mbox{Null Law} \\
 & = & a\quad \mbox{Identity Law} \\
\end{array}$
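Since * is AND and + is OR, the identity being proved is the Boolean absorption law, and it can be brute-force checked over all truth assignments (an illustrative sketch, not part of the original thread):

```python
from itertools import product

# * = AND, + = OR, 0 = False, as defined in the thread.
for a, b in product([False, True], repeat=2):
    lhs = a and (a or b)                # a * (a + b)
    step1 = (a or False) and (a or b)   # (a + 0) * (a + b)  -- Identity Law
    step2 = a or (False and b)          # a + (0 * b)        -- Distributive Law
    # Every step of the derivation, and the final result, equals a.
    assert lhs == step1 == step2 == a

print("a * (a + b) = a holds for all Boolean a, b")
```

Because the identity has only two variables, checking the four truth assignments is a complete verification, complementing the algebraic derivation above.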
https://electronics.stackexchange.com/questions/205614/how-to-make-a-boost-converter-circuit | # How to make a boost converter circuit
I've seen multiple different explanations on boost converter circuits, these two specifically explaining in very intimate detail how they work, and on how to calculate certain values:
How to design a boost converter? And how to specify the inductor and capacitor values?
Understanding Boost Converter
the basic layout of a boost converter looks like this:
simulate this circuit – Schematic created using CircuitLab
In this example, the 1 volt input would be increased to a higher voltage across the resistor, which would be the load in this case.
## Problem:
I need to switch a 5 V supply on the high side with an NPN switch (for many reasons this is the only switch layout available; no other layout will work as desired), so I need 7 volts at the gate to switch it; however, only 5 volts are available. For this I need a boost converter that will convert 5 volts to 7 volts to be able to drive the switch.
But, I'm not sure how to calculate any of the values. For example, the solutions in the links I provided use the switching time of the switch, how can I find that? What size inductor and capacitor would I need?
Also as an aside, the thing labelled "L" on the diagram, the inductor: what is its purpose?
• Before you attempt to design a switching power supply, you need to understand what an inductor fundamentally does. – Funkyguy Dec 11 '15 at 16:25
• I would like you to clarify two things: 1) Is it correct that you have another circuit that is using a transistor for switching, and that you need this circuit to provide the base drive for the other? 2) Are you sure that it is not possible to use an N-channel MOSFET instead of an NPN transistor? The MOSFET would make the drive easier in this case because you don't need as much current, just voltage, which you can create with a simpler capacitor based step-up. – pipe Dec 11 '15 at 16:30
• @pipe I'm sorry, I thought N-channel MOSFETs were NPN switches. It is in fact an N-channel MOSFET that I'm referring to here. NTE490 specifically. – Skyler Dec 11 '15 at 16:35
• In that case you don't need anything fancy. If you already have a microcontroller you can take a look at this answer: electronics.stackexchange.com/a/147975/91862. Otherwise there's the very cheap ICL7660 and clones available. – pipe Dec 11 '15 at 16:41
• @pipe that answer's circuit won't work. – Andy aka Dec 11 '15 at 18:27
The simplest solution would be to use a small, integrated boost IC. For example here is one from TI, but there are lots of other choices from other vendors as well (The component value choices are clearly explained in the datasheet):
TPS61046
However, you mention you need 2V above your 5V rail to switch your FET. Are you sure that you will be able to turn the FET fully on with 2V VGS? Most FETs have a threshold voltage in the 2V region meaning they are just starting to conduct there. You may want to switch the gate with 5V Vgs or a 10V boosted supply.
If you don't need the FET to be on continuously, and you have a defined minimum off time, you may be able to use a bootstrap circuit to generate the gate drive voltage. Without knowing more about what you are doing it's impossible to say if that would work or not. You might start another thread with more details if you don't know how to make a bootstrap work.
Finally, the purpose of the inductor is energy storage. When the switch is on, the current in the inductor ramps up according to V = L*di/dt. The output voltage is held up by the output cap, and the diode isolates the output from the switch. The energy stored is 1/2*L*I^2. Then when the switch turns off, the inductor current ramps down, though at this point the voltage across the inductor is in the opposite direction and equal to Vout - Vin. So the inductor provides energy to the output during the off time, allowing the output voltage to rise above the input voltage.
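To put rough numbers on these relations, here is a first-pass sizing sketch using the ideal continuous-conduction-mode (CCM) boost equations. The load current, switching frequency, and ripple targets below are assumptions for illustration; they are not given in the question:

```python
# First-pass boost converter sizing with ideal CCM equations.
# Assumed operating point (not from the thread):
V_in, V_out = 5.0, 7.0             # input / output voltage, V
I_out = 0.05                       # load current, A (assumed)
f_sw = 100e3                       # switching frequency, Hz (assumed)
ripple_I = 0.3                     # inductor ripple as fraction of I_L (assumed)
ripple_V = 0.01                    # allowed output voltage ripple, V (assumed)

D = 1 - V_in / V_out               # ideal CCM duty cycle
I_L = I_out / (1 - D)              # average inductor current
dI = ripple_I * I_L                # peak-to-peak inductor ripple current
L = V_in * D / (f_sw * dI)         # from V = L*di/dt during the on-time
C = I_out * D / (f_sw * ripple_V)  # cap alone feeds the load while the switch is on

print(f"D = {D:.2f}, I_L = {I_L*1e3:.0f} mA")
print(f"L = {L*1e6:.0f} uH, C = {C*1e6:.1f} uF")
```

For 5 V to 7 V this gives a duty cycle near 0.29; a practical design adds margin for the diode drop and switch losses, which push the required duty cycle a bit higher, and a control loop (as the answers note) regulates D against load and input variations.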
• Thank you very much! I think this will lead me in just the right direction. :) – Skyler Dec 11 '15 at 16:32
• So John, this IC is actually only 0.8 x 1.2 mm, which ironically is a bit too compact: way too small to work with! If you know of any others off the top of your head that are large enough to at least be soldered to without advanced machinery, please let me know. Either way, thank you very much again for your help. – Skyler Dec 11 '15 at 18:07
• Here's a part in a SOT-23-5 package, which I find pretty easy to work with manually. ti.com/lit/ds/symlink/lmr64010.pdf – John D Dec 11 '15 at 18:18
Also as an aside, the thing labelled "L" on the diagram, the inductor: what is its purpose?
If you have an elastic band stretched across an opening and you pull it down in the middle (to ground) then release it, the elastic band responds by rapidly rising, and if you have your hand in the way you can get a slightly painful little recoil slap. Here's a nice picture to consider:
You can fire the little object attached to the rubber band quite a distance.
Now that analogy isn't precisely what happens with the inductor but it's near enough to make use of.
So, instead of allowing the elastic band to freely recoil (and waste all the energy you have put into it), imagine you mechanically harvested that recoil to push a little mass/object a little bit higher each time you released the elastic band.
You'd need the mechanical equivalent of a diode so that when the elastic band was pulled down you didn't return the object/mass to ground. That wouldn't be too hard to build but you don't need to build it - you just need to understand the analogy.
So, to get 7V from 5V you need to turn on the transistor for long enough to build up a certain current - that current defines the energy stored in the coil and when that energy is released via the diode and into the capacitor and load, it's enough energy to keep it at 7V and the cap is big enough so that the 7V doesn't droop too much when the inductor is being pulled to ground.
However, the difficulty arises when you try and control the output voltage to a precise value - it is highly load dependent and highly input voltage dependent and so normally what happens is that an op-amp control loop defines the duty cycle of the inductor to prevent too big an output voltage being generated.
• Thank you for this info, I'm new to electrical engineering and this may greatly help me in the near future. – Skyler Dec 11 '15 at 16:33
• I think that example is more analogous to a capacitor rather than an inductor. A more appropriate mechanical equivalent of inductance is inertia: for example, the hydraulic ram pump is the water version of a boost converter, and the inertia of the water performs the same function as the inductor in the electrical circuit. – whatsisname Dec 11 '15 at 18:15
• @whatsisname - it's an analogy, and a capacitor can certainly be modelled as a flywheel possessing inertia. They are interchangeable analogies. – Andy aka Dec 11 '15 at 18:18
• Electricity doesn't behave quite like anything else, so analogies always break down at some point. The key is understanding that energy is being stored in the magnetic field, and later that same energy is released in a slightly different form. – MarkU Dec 11 '15 at 21:45
Maybe this is not a direct answer, but it gives a simple explanation, using old-time experience with boost and buck converters based on transformers.
BOOST
BUCK
• This is just regular old transformer usage, boost and buck converters exploit the transient behavior of inductors rather than their steady states. – whatsisname Dec 11 '15 at 18:21
• Or a variable autotransformer effectively can do both. But only for AC input & AC output; OP is asking about DC to DC converter. – MarkU Dec 11 '15 at 21:49
https://www.arxiv-vanity.com/papers/1606.08027/ | # Star-planet interactions
II. Is planet engulfment the origin of fast rotating red giants?
Giovanni Privitera (1,3), Georges Meynet (1), Patrick Eggenberger (1), Aline A. Vidotto (1), Eva Villaver (2), Michele Bianda (3)
(1) Geneva Observatory, University of Geneva, Maillettes 51, CH-1290 Sauverny, Switzerland; (2) Department of Theoretical Physics, Universidad Autónoma de Madrid, Módulo 8, 28049 Madrid, Spain; (3) Istituto Ricerche Solari Locarno, Via Patocchi, 6605 Locarno-Monti, Switzerland
###### Abstract
Context: Fast rotating red giants in the upper part of the red giant branch have surface velocities that cannot be explained by single star evolution.
Aims: We check whether tides between a star and a planet, followed by planet engulfment, can indeed accelerate the surface rotation of red giants for a sufficiently long time to produce these fast rotating red giants.
Methods: Using rotating stellar models that account for the redistribution of angular momentum inside the star by different transport mechanisms, for the exchanges of angular momentum between the planet orbit and the star before the engulfment, and for the deposition of angular momentum inside the star at the engulfment, we study how the rotation velocity at the stellar surface evolves. We consider different situations, with masses of the stars in the range between 1.5 and 2.5 M⊙, masses of the planets between 1 and 15 M_J (Jupiter masses), and initial semi-major axes between 0.5 and 1.5 au. The metallicity Z for our stellar models is 0.02.
Results: We show that the surface velocities reached at the end of the orbital decay due to tidal forces and planet engulfment can be similar to the values observed for fast rotating red giants. This surface velocity then decreases as the star evolves along the red giant branch, but at a sufficiently slow pace for stars to be detected with such a high velocity. More quantitatively, star-planet interaction can produce a rapid acceleration of the surface of the star, above values equal to 8 km s⁻¹, for periods lasting up to more than 30% of the red giant branch phase. As found already by previous works, the change of the surface carbon isotopic ratio produced by the dilution of the planetary material into the convective envelope is quite modest. Much more important might be the increase of the lithium abundance due to this effect. However, lithium may be affected by many different, still uncertain, processes. Thus any lithium measurement can hardly be taken as support for, or as an argument against, any star-planet interaction.
Conclusions: The acceleration of the stellar surface to rotation velocities above limits that depend on the surface gravity appears at the moment to be the clearest signature of a star-planet interaction.
## 1 Introduction
It is well known that when stars evolve into the red giant branch after the main-sequence phase, the very large expansion of the envelope imposes very low surface rotation velocities. Indeed, early surveys of projected rotational velocities (gray81; gray82) showed that giants cooler than about 5000 K are predominantly slow rotators, characterized by v sin i of a few km s⁻¹ (see Fig. 7). However, there exist a few percent of red giants presenting much higher v sin i (fekel93; massarotti08; carlberg11; Tayar2015).
To explain the high rotation rates of (apparently) single red giants, two kinds of scenarios have been proposed: a first scenario involves mechanisms occurring in the star itself. simon89 proposed that, at the time of the first dredge-up, the surface could be accelerated by the transfer through convection of angular momentum from the central fast spinning regions to the surface. fekel93 explained the high rotation and the high lithium abundance they observed in their sample of red giants as resulting from such a scenario: the dredge-up episode would bring to the surface not only angular momentum but also freshly synthesized lithium. It is interesting to underline here that this dredge-up scenario is expected to cause rapid rotation at a particular phase in giant star evolution, namely when the first dredge-up occurs. In the large sample studied by carlberg11, a clustering of the rapid rotators at this phase (between T equal to 4500 and 5500 K, or log T between 3.732 and 3.740) is not seen. Moreover, we showed in paperI that the dredge-up actually produces no significant acceleration of the surface and thus cannot be a realistic reason for the high surface rotations that we are discussing here.
A second scenario proposed to explain high rotation rates in giants involves the swallowing of a planet (peterson83; siess99I; livio02; massarotti08; carlberg09). The phenomenon of planet/brown dwarf ingestion was studied theoretically by sandquist98; sandquist02 for main-sequence (MS) stars, by soker84; siess99I; siess99II, and recently by Passy2012 and Staff2016 for giant stars. siess99I; siess99II studied the accretion of a gaseous planet by a red giant and by an asymptotic giant branch (AGB) star. They considered cases where the planet is destroyed in the stellar envelope and focused their study on possible consequences of this engulfment for the luminosity and surface composition of the star. Here we want to study the impact on the surface rotation of the star.
In the first paper of this series (paperI), we followed the orbital evolution of a planet accounting for all the main effects impacting the orbit (changes of the masses of the star and the planet, frictional and gravitational drags, tidal forces) and computed the changes of the stellar rotation due to the planet orbital changes. We used rotating stellar models, allowing us to follow in a consistent way the angular momentum transport inside the star. In the present paper, we study what happens when the orbital decay causes the planet to be engulfed by the star. More precisely, we address the following questions:
• By how much can the surface rotation increase through a planet engulfment process?
• How long is the period during which a rapid surface velocity can be observed after an engulfment?
• Can the increase of the surface velocity trigger some internal mixing?
• Are there other signatures, in addition to fast surface rotation, linked to a planet engulfment event?
In Section 2, we discuss the physics included in our models. Section 3 presents the post-engulfment evolution of various stellar models. Comparisons with observations are discussed in Sect. 4. Finally, in Sect. 5 the main results are listed.
## 2 Ingredients of the models
### 2.1 The stellar models
The rotating stellar models are computed using the Geneva stellar evolution code (for a detailed description see egg08). The reader can refer to ekstrom12 for all the details about the ingredients of the models (nuclear reaction rates, opacities, mass loss rates, initial composition, overshooting, diffusion coefficients for rotation) and to paperI for how the exchanges of angular momentum between the planetary orbit and the star are treated.
The present models allow predictions of the evolution of the surface velocity and also of the interior velocity resulting from the following processes: shear turbulence and meridional currents in radiative zones, convection, changes of the structure, loss of angular momentum by stellar winds, and changes of the angular momentum of the star resulting from tidal forces between the star and the planet and from the process of planet engulfment. In convective zones, solid body rotation is assumed, while in radiative zones radial differential rotation can develop.
### 2.2 The planet model
Some knowledge of the structure of the planet is needed to determine where, once engulfed, the planet dissolves and deposits its angular momentum into the star. More precisely, we need to know two quantities (see Eq. 2 below): the planet radius, R_pl, and the mean molecular weight of the planet gas, μ_pl.
The radius of the planet is determined using the mass-radius relation by zapolsky69
$$R_{\rm pl} \approx 0.105\,\frac{2\,x^{1/4}}{1+x^{1/2}}\ \ [R_\odot]\,, \qquad (1)$$
with x = M_pl/0.0032, where M_pl is the planetary mass given in solar masses.
The same initial chemical composition is considered for the star and the planet, with mass fractions of hydrogen X, helium Y, and heavy elements Z = 0.02; these mass fractions fix the value of μ_pl. Below, we shall consider different compositions for the planet and the star for what concerns carbon and lithium. These changes have, however, a very limited impact on the value of μ_pl and are neglected in the estimate of this quantity.
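As an illustration, Eq. (1) can be evaluated numerically; the sketch below assumes the normalisation x = M_pl/0.0032 with M_pl in solar masses, as given above:

```python
import math

# Zapolsky & Salpeter (1969) mass-radius relation, Eq. (1):
# R_pl ~ 0.105 * 2 x^(1/4) / (1 + x^(1/2))  [R_sun], x = M_pl/0.0032 (M_sun)
M_SUN_IN_MJUP = 1047.6  # solar mass expressed in Jupiter masses

def planet_radius_rsun(m_pl_msun):
    """Planet radius in solar radii for a planetary mass in solar masses."""
    x = m_pl_msun / 0.0032
    return 0.105 * 2.0 * x**0.25 / (1.0 + math.sqrt(x))

# Radii for the planet masses considered in this work (1-15 M_J):
for m_jup in (1, 5, 10, 15):
    r = planet_radius_rsun(m_jup / M_SUN_IN_MJUP)
    print(f"{m_jup:2d} M_J -> R_pl = {r:.3f} R_sun")
```

The relation peaks at x = 1 with R_pl ≈ 0.105 R⊙ (about one Jupiter radius), so all planets in the 1–15 M_J range have nearly the same radius, which is why the virial temperature in Eq. (2) below scales essentially with the planet mass.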
### 2.3 Physics of the engulfment
In paperI, given the mass of the star and the mass of the planet, we studied the range of initial semi-major axes that leads to a planet engulfment along the red giant branch. Now, in order to describe what happens next, we need some prescriptions for the fate of the planet inside the star. According to the work by soker84, planets with masses less than or equal to about 20 M_J will be destroyed in the star. In the present work, we consider only planets with masses equal to or below 15 M_J, and therefore we assume that they are destroyed.
In order to be able to compute the impact of this destruction process on the surface velocity, we need to know two quantities. The first one is the timescale for the destruction of the planet, and the second one is the location of this destruction inside the star. For instance, if the destruction occurs on a very short timescale (much shorter than the evolutionary timescale), then, once engulfment occurs, the whole orbital angular momentum of the planet will be given to the star in one shot. Moreover, if the destruction occurs in the convective envelope, then this angular momentum can be added in a very simple way to the convective envelope (see below).
#### 2.3.1 Destruction timescale
livio84 studied the evolution of star-planet systems during the red giant phase. They find that engulfed planets with masses equal to or below 10 M_J are destroyed after a few thousand years (see their Fig. 3). This is very short with respect to the evolutionary timescale during the red giant phase. Indeed, the duration of the first ascent of the red giant branch (i.e., before the ignition of helium burning in the core) is 180 Myr for a 1.7 M⊙ star. Therefore, we can consider that the angular momentum that remains in the orbit of the planet at the moment of the engulfment will be delivered to the star in one shot.
#### 2.3.2 Where is the angular momentum deposited?
The location of the planet dissolution (i.e. the dissolution point) depends on the physical mechanism that is responsible for it. We can consider two main mechanisms: thermal and mechanical destruction of the planet. Thermal dissolution is obtained where the virial temperature of the planet becomes smaller than the local stellar temperature. Deeper beyond this point, the thermal kinetic energy of the stellar material is larger than the binding energy of the planet.
Knowing the internal structure of the star, it is possible to compare the local temperature in the star and the virial temperature of the planet (siess99I):
$$T_{\rm v,pl} \sim \frac{G\,M_{\rm pl}\,\mu_{\rm pl}\,m_{\rm H}}{k\,R_{\rm pl}} \sim 10^5\,\mu_{\rm pl}\,\frac{M_{\rm pl}}{M_{\rm J}}\,\frac{R_{\rm J}}{R_{\rm pl}}\ \ {\rm [K]}\,. \qquad (2)$$
where k is the Boltzmann constant, m_H the mass of the hydrogen atom, and M_J and R_J the mass and radius of Jupiter.
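A short numerical check of Eq. (2): the function below evaluates the virial temperature for a given planet. The mean molecular weight μ_pl = 0.6 used here is an illustrative value, since the paper derives μ_pl from the assumed composition:

```python
# Virial temperature of a planet, Eq. (2): T_v ~ G M_pl mu_pl m_H / (k R_pl).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23    # Boltzmann constant, J/K
M_H = 1.674e-27    # hydrogen atom mass, kg
M_JUP = 1.898e27   # Jupiter mass, kg
R_JUP = 6.9911e7   # Jupiter radius, m

def virial_temperature(m_pl_mjup, r_pl_rjup, mu_pl=0.6):
    """Virial temperature in kelvin (mu_pl = 0.6 is an assumed value)."""
    return G * (m_pl_mjup * M_JUP) * mu_pl * M_H / (K_B * r_pl_rjup * R_JUP)

# A 1 M_J planet of 1 R_J has T_v of order 10^5 K, so it survives down to
# stellar layers of comparable temperature, i.e. within the convective envelope.
print(f"T_v(1 M_J, 1 R_J) ~ {virial_temperature(1.0, 1.0):.2e} K")
```

Since more massive planets have higher T_v (at nearly constant radius, per Eq. 1), they survive to hotter, deeper layers, consistent with the bullet points below.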
In Fig. 1, the regions where the stellar temperature is equal to the virial temperature of planets of various masses are indicated. The lines shown in Fig. 1 are isothermal lines (they would not be isothermal if the structure of the planet changed during its journey inside the star; here it is assumed that during the very short migration time the planet structure does not change significantly, the only change occurring at the very end, when the planet dissolves). We have also indicated by vertical dashed lines the engulfment times of planets of various masses and initial semi-major axes as obtained in paperI. As indicated in Sec. 2.3.1, the migration time of the planet is very short, thus the destruction of those planets will occur at the intersection of the isothermal line for the planet mass considered with the corresponding vertical line indicating the time of engulfment (only a few cases are shown for illustration). A few interesting points can be noted looking at Fig. 1:
• When the star evolves, the dissolution points reach (in general) deeper layers in the stellar interior. This is a consequence of the expansion of the envelope during the red giant phase, which produces a lowering of the temperature at a given Lagrangian mass. Only when the star contracts, for example during the bump, does the dissolution point shift outwards for a time. This explains the local minimum that can be seen, for instance, for the 5 M_J planet at a time around 0.08 Gyr for the 2 M⊙ model (right panel of Fig. 1).
• More massive planets go deeper inside the star. This is of course expected, since a more massive planet requires more extreme conditions to be destroyed than lighter ones.
• Under the assumptions above and in the light of Fig. 1, it is reasonable to assume that the planets are destroyed in the convective envelope of the star.
siess99I also examined the possibility that when the planet gets closer to the stellar core, tidal effects induce strong distortions of the planet that can destroy it. Using the elongation stress at the centre of the planet as approximated by soker87, we find in most cases that the planet would be destroyed in the convective envelope as is the case when the criterion based on the virial temperature is used. Only in a few cases, the planet would be destroyed by tides just below the convective envelope. In the following we shall consider only the criterion based on the virial temperature and thus assume that all our engulfed planets will deliver their angular momentum in the convective envelope.
Let us call L_pl the angular momentum of the planet orbit at the time of engulfment and L_⋆,ce the angular momentum of the external convective envelope (ce) of the star just before engulfment. To obtain the new angular velocity of the envelope after engulfment, Ω̄_ce, we write
$$\bar{\Omega}_{\rm ce} = \Omega_{\rm ce}\left(1 + \frac{L_{\rm pl}}{L_{\star,{\rm ce}}}\right)\,, \qquad (3)$$
where Ω_ce is the angular velocity of the convective envelope just before the engulfment.
Equation (3) assumes that the moment of inertia of the convective envelope is not changed by the planet engulfment process. Actually, it might happen that the engulfment process adds thermal energy in the upper layers of the stellar envelope, making these layers expand for a while. But the excess of energy will be rapidly radiated away, and thus the star will evolve back rapidly to its initial state. Just as a numerical example, in case all the kinetic energy in the planetary orbit of a 15 M_J planet initially orbiting a 1.7 M⊙ star at 0.5 au were added as an increase of the internal energy in the outer layers, the excess of energy would be radiated away in about 55 years, i.e. in a very short time compared to the evolutionary timescale.
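The bookkeeping of Eq. (3) can be sketched as follows. The orbital separation at engulfment, the envelope moment of inertia, and the pre-engulfment angular velocity used below are order-of-magnitude assumptions for illustration, not values taken from the models:

```python
import math

# Spin-up of the convective envelope from Eq. (3):
# Omega_new = Omega_ce * (1 + L_pl / L_star_ce)
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_JUP = 1.898e27       # kg
R_SUN = 6.957e8        # m

M_star = 1.7 * M_SUN
M_pl = 15 * M_JUP
a_eng = 10 * R_SUN     # orbital separation at engulfment (assumed)

# Angular momentum of a circular planetary orbit at engulfment
L_pl = M_pl * math.sqrt(G * M_star * a_eng)

# Envelope angular momentum just before engulfment (assumed values)
I_ce = 3e47            # envelope moment of inertia, kg m^2 (assumed)
Omega_ce = 1e-6        # envelope angular velocity, rad/s (assumed)
L_ce = I_ce * Omega_ce

Omega_new = Omega_ce * (1 + L_pl / L_ce)   # Eq. (3)
print(f"L_pl/L_ce = {L_pl / L_ce:.0f}, spin-up factor = {Omega_new / Omega_ce:.0f}")
```

With these assumed numbers the orbital angular momentum exceeds the envelope's by roughly two orders of magnitude, which illustrates why the envelope rotation can jump by a large factor at engulfment.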
### 2.4 Initial conditions considered
We consider stars with initial masses in the range between 1.5 and 2.5 M⊙, with a metallicity Z = 0.02 and an initial rotation Ω_in/Ω_crit equal to 0.1 and 0.5, where Ω_in is the initial angular velocity and Ω_crit the critical angular velocity on the ZAMS. These correspond to initial surface velocities of 30 and 160 km s⁻¹ respectively.
As explained in paper I, we focused here on stellar masses larger or equal to 1.5 M, because stars in this mass range do not have a sufficiently extended outer convective zone to activate a dynamo during the main sequence, so that unless they host a fossile magnetic field, they do not undergo any significant surface magnetic braking. The results obtained in the present work are thus not sensitive to the uncertainties related to the modeling of this braking during the main sequence. In contrast, lower initial mass stars have an extended outer convective zone during the main sequence. Therefore they can activate a dynamo and suffer a strong magnetic braking. The evolution of the rotational properties of these lower initial mass stars is then different and will be the topic of another paper in this series.
Planets with masses equal to 1, 5, 10 and 15 M_J have been considered (in our models, we take planets with masses not larger than 15 M_J, i.e. below the limiting value of 20 M_J estimated by soker84, above which the planet is no longer completely destroyed when engulfed by the stellar envelope; this limit is also consistent with the value found in the numerical simulations by Staff2016). The initial semi-major axes have been taken equal to 0.5, 1 and 1.5 au, and the eccentricities of the orbits are equal to 0.
In paperI, the evolution of the planetary orbit and of the star have been computed in a consistent way up to the point of engulfment.
The end of the computation in paperI represents the initial conditions for the present work (see Table 1 in the Appendix).
## 3 Impact of planet engulfment on the surface velocities of red giants
The impact of tides and engulfment on the surface velocity of a red giant depends at least on the following parameters: the mass of the planet, its initial distance to the star and the mass of the star. Other parameters like the metallicity (not discussed here) and the rotation (that will be discussed here) of the star also affect the results.
Figure 2 shows the evolution of the surface equatorial velocities for the present stellar models with and without engulfment of a planet. Tables 2 and 3 in the Appendix present some characteristics of the star after an engulfment.
### 3.1 Planet engulfment by a 1.5 M⊙ star
Let us begin by discussing the case of the 1.5 M⊙ stellar model (upper left diagram in Fig. 2). The case of an engulfment of a 15 M_J planet initially at a distance of 0.5 au is considered for the purpose of the discussion. We can note the following features:
• Planet engulfment may have a very strong effect on the surface rotation of a red giant. Compare for instance, in the upper left panel, the (black) dotted line showing the evolution of the surface velocity of a 1.5 M⊙ red giant that began its evolution with an angular velocity equal to 50% of the critical velocity (142 km s⁻¹) and with no planet engulfment, with the (blue) dashed line showing the evolution of the surface velocity of a similar star engulfing a planet. We see that while the isolated star has a surface velocity that remains below 7 km s⁻¹ for log g below 2.8, the star having engulfed the planet shows an extremely rapid increase of the surface velocity up to a maximum value of about 40 km s⁻¹.
• The left panel of Fig. 3 shows the variation of the angular velocity inside the 1.5 M⊙ model with an initial angular velocity equal to 10% of the critical velocity, just before and just after the engulfment of a planet. The convective zone extending from a mass coordinate of about 0.3 M⊙ up to the surface has its angular velocity increased by about 1.7 dex (a factor of 50).
At engulfment (around log g equal to 2.5), the ratio between the angular velocity of the core and that of the envelope decreases from a value of about 32 000 to a value equal to 1000 (see the right panel of Fig. 3). Then the contrast increases again as a result of the expansion of the envelope and the contraction of the core.
• The present models cannot account for the small contrast of about one dex between the core and the envelope rotation rates that has been obtained by asteroseismology for red giants at the base of the giant branch (beck12; Mosser2012; deh12; deh14). In that respect, an additional internal angular momentum transport process is missing in the radiative interior of the present models (egg12; mar13; can14). This weakness has however no major impact on the process studied here. The missing transport mechanism would add a fraction of the angular momentum of the core to the envelope, but this will not much accelerate the envelope because the angular momentum in the core is quite small with respect to the one contained in the envelope. Therefore the angular momentum coming from the planetary orbit remains in any case much larger than the angular momentum of the envelope and the final result of a planet engulfment would remain nearly unchanged.
• After the engulfment, the surface velocity decreases because the stellar envelope continues to expand, but it remains above 8 km s⁻¹ for nearly 36 Myr (see Table 3), that is, for about 15% of the total duration of the red giant branch phase. No significant change of the structure of the star (change of the extent of the convective envelope and/or of the mixing of the chemical elements) is produced by the increase of the rotational speed of the convective envelope. So in our models, any change of the surface abundances can only be attributed to the dilution of the planet material into the convective envelope. This will be briefly investigated in the next section, when comparisons with observations are discussed.
• In Fig. 2 we have also indicated the maximum value of the surface rotation that is expected during the red giant phase for a star with no planet (see the black long-dashed line). This line has been obtained assuming that the star rotates near the critical velocity on the ZAMS and that solid body rotation is maintained at all times. This is clearly an extremum: solid body rotation represents an extreme case in terms of efficiency of angular momentum transport and also in terms of surface velocities. (One could argue that, once solid body rotation is reached, some processes might still advect angular momentum from the center to the surface, producing a negative gradient of Ω, i.e. a core rotating slower than the envelope, and increasing the surface velocity. However, this is not supported by asteroseismic determinations of the interior rotation of red giants, e.g. beck12; Mosser2012; deh12; deh14, which show that red giants have a clear increase of rotation towards the central regions. In that respect, the hypothesis of solid body rotation gives a very conservative upper limit for the surface rotation of red giants.) This means that any observed star with an initial mass of about 1.5 M⊙ and a surface velocity larger than the one given by the curve in the upper left panel of Fig. 2 needs some acceleration process, which might come either from tidal forces or from an engulfment.
We see that for a 1.5 M⊙ star at a metallicity Z = 0.02, this maximum velocity is well below the 40 km s⁻¹ reached by the engulfment of a 15 M_J planet with an initial distance to the star equal to 0.5 au. Therefore planet engulfment may be a sufficiently strong mechanism to produce surface acceleration beyond any reasonable process that can occur in single stars. Of course not every planet engulfment will produce such a strong acceleration, but at least some events can produce surface velocities beyond what can be reasonably explained by single star evolution.
• We see that decreasing the initial distance between the star and the planet produces higher surface velocities and longer periods during which high surface rotations can be observed. The main reason is that the smaller the initial distance, the earlier along the red giant branch the engulfment occurs. The earlier the engulfment, the smaller the moment of inertia of the convective envelope, and thus the more important the effect produced by the injection of the planetary angular momentum into the stellar convective envelope. One could argue, however, that the larger the distance, the larger the amount of angular momentum in the planet's orbit. But the effect indicated above (the small moment of inertia of the envelope) is the most important one and clearly overcomes the effect of the larger angular momentum associated with a larger orbital radius. An interesting consequence of this point was already noted by carlberg09; carlberg11: planet engulfment produces the largest increases of the surface rotation when the engulfment occurs at the beginning of the red giant branch.
• We see also that the higher the mass of the planet, the larger the maximum surface velocity reached (assuming an identical initial distance of the planet to the star) and the longer the period during which the surface velocity exceeds a given limit (see Tables 2 and 3).
• If, keeping all the other parameters equal, we change the initial rotation of the star from v_ini/v_crit = 0.1 to 0.5, we obtain very similar behaviors. This indicates that the outcome, at least in the case of the 1.5 M⊙ models, does not depend much on the initial rotation of the star.
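The trade-off described in these bullets — the planet's orbital angular momentum against the moment of inertia of the convective envelope — can be illustrated with a rough order-of-magnitude sketch. The envelope moment of inertia and radius below are hypothetical placeholders, not values taken from the models:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
M_JUP = 1.898e27   # kg
AU = 1.496e11      # m
R_SUN = 6.957e8    # m

def orbital_angular_momentum(m_planet, m_star, a):
    """L = m_p * sqrt(G * M_* * a) for a circular orbit with m_p << M_*."""
    return m_planet * math.sqrt(G * m_star * a)

def spin_up(l_orbit, i_envelope):
    """Envelope spin-up if all orbital angular momentum is deposited there."""
    return l_orbit / i_envelope

# 15 M_Jup planet initially at 0.5 au around a 1.5 M_sun star (the case above).
L = orbital_angular_momentum(15 * M_JUP, 1.5 * M_SUN, 0.5 * AU)

# Hypothetical early-red-giant envelope: ~1 M_sun spread out to ~10 R_sun.
i_env = 0.1 * M_SUN * (10 * R_SUN) ** 2   # crude I ~ 0.1 M R^2
delta_omega = spin_up(L, i_env)
v_eq = delta_omega * 10 * R_SUN           # implied equatorial velocity boost

print(f"L_orb ~ {L:.1e} J s, delta_v ~ {v_eq / 1e3:.0f} km/s")
```

With these placeholder numbers the implied boost is of order tens of km s⁻¹, in line with the ~40 km s⁻¹ quoted above; an envelope with a larger moment of inertia (later on the red giant branch) absorbs the same angular momentum with a much smaller spin-up, which is the point made in the third bullet.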
### 3.2 Planet engulfment by stars with masses between 1.7 and 2.5 M⊙
In Fig. 4, the increase of the surface velocity due to a planet engulfment is compared for stars with different initial masses. We see that, in general, for a given planet mass and a given initial distance between the star and the planet, the engulfment occurs at a higher surface gravity when the mass of the star decreases. The maximum surface velocities that are reached are higher for smaller initial mass stars. We have added a track for a one solar mass model to provide a clear illustration of this. As explained in Sect. 2, such a star suffers magnetic braking during the Main-Sequence phase. This is introduced for the one solar mass model by using a solar-calibrated value for the efficiency of this braking. As a result, the surface velocity at the base of the red giant branch is much lower for the one solar mass model than for stars with higher initial masses. When the engulfment occurs, the acceleration is larger for this model than for the more massive stars, because of the larger ratio between the planetary orbital angular momentum and the stellar envelope angular momentum. Since the 2.5 M⊙ star does not evolve through the He-flash, we followed its evolution through the core He-burning phase. This is why its evolution looks so different from that of the lower initial mass stars (compare the lower right panel of Fig. 2 with the other panels). Actually, just after the engulfment, we see a behavior very similar to those in the other panels: the surface velocity increases nearly vertically at engulfment and afterwards decreases with the surface gravity. When the tip of the red giant branch is reached, the star contracts and the track evolves again towards high surface gravities, following first the same path as the one traced when the star was evolving up the red giant branch, passing again through the point where the engulfment occurred, and reaching still higher surface velocities as the star continues to contract.
Most of the core He-burning phase occurs along the extremity of the loop, at surface gravities log g between 2.7 and 3.0. After the core He-burning phase, the envelope of the star expands, the surface gravity decreases, and the star joins the asymptotic giant branch.
Interestingly, the surface velocity during the core He-burning phase remains quite high after an engulfment (above the limiting curve in the lower right panel of Fig. 2). Therefore, the signature of planet engulfment may remain visible for the whole core He-burning phase. This would likely also be the case for lower initial mass stars evolving through a He-flash, although this remains to be confirmed by computations.
In general, as was the case for the 1.5 M⊙ star, changing the initial rotation rate of the star does not have a very large impact on the results. We note, however, the very thin loop at engulfment in the lower left panel of Fig. 2, in the case of the 2 M⊙ models with an initial rotation equal to 50% of the critical angular velocity and a 15 M_Jup planet at an initial distance of 0.5 au. For this model, the time of engulfment coincides with the time when the bump occurs. During the bump, the star contracts (hence the increase of the surface gravity seen in Fig. 2). This contraction occurs when the H-shell enters the domain whose composition has been enriched in hydrogen by the downward extension of the external convective zone. As a consequence, the H-burning shell expands and its temperature decreases. By a mirror effect, the envelope contracts, provoking an additional increase of the surface velocity and the thin loop that appears at the end of the engulfment process.
### 3.3 How long does a red giant star remain fast rotating after an engulfment?
In the literature, a value of v sin i larger than 8 km s⁻¹ is sometimes used as a criterion to qualify a red giant as fast rotating (see e.g. carlberg12). We see in Fig. 2 that values higher than 8 km s⁻¹ would indeed definitely require some interaction with a second body for surface gravities smaller than log g = 2.0 (1.5 M⊙), 2.1 (1.7 M⊙), 2.15 (2 M⊙) and 2.1 (2.5 M⊙). Of course, this does not exclude that stars showing surface velocities below this limit have undergone an interaction, but in that case, alternative single star models could be proposed to explain the high velocity as well.
Figure 5 gives a synthetic view of the durations of the high surface velocity periods. We see that, for a given stellar model, the duration increases when the initial semi-major axis decreases and the mass of the planet increases (see also Tables 2 and 3). The conditions for, say, maintaining a high surface velocity during about 10% of the red giant phase are less restrictive when the mass of the star passes from 1.5 to 2 M⊙. (Footnote 4: The duration of the period during which the surface velocity remains above a given limit after an engulfment is larger in smaller mass stars. However, when this period is normalized to the total duration of the red giant branch, the fraction spent above a given limit decreases when the mass decreases.) The same occurs when, keeping the stellar mass constant, the initial rotation passes from 10 to 50% of the critical velocity, although the impact is less important than that of changing the initial mass of the star.
### 3.4 Impact of tidal forces with respect to engulfment
The physics of what happens during the engulfment is not well known, and thus we may wonder how reliable the present results are. An interesting point to mention in this context is that tidal forces alone may already lead to significant accelerations of the stellar surface. Fig. 6 shows the evolution of the surface rotation as a function of the luminosity for a few selected models of 2 M⊙ stars. The small dots along the curves with engulfment indicate the surface velocity reached just before engulfment. The acceleration up to this point is due only to tidal forces; the acceleration after that point is due to the engulfment process itself.
We see that tides alone may be sufficient to significantly accelerate the surface rotation of the star. Since the acceleration due to tides is very rapid (see paperI), as is the acceleration due to the engulfment, there is no way to use the rotational velocity of the star to distinguish observationally between tide- and engulfment-driven acceleration, should both occur.
## 4 Comparisons with observations
The observed sample of carlberg11 (1287 stars) is shown in Fig. 7, superposed on the evolution of the surface velocities for different 2.5 M⊙ models as computed in the present work. A significant number of stars are rotating much faster than what is predicted by these models. An extreme case is the star Tyc5854-011526-1, which has a projected rotational velocity of 84 km s⁻¹, corresponding to 2/3 of the critical velocity of a 2.5 M⊙ star at the considered effective temperature! In the remainder of this section, we compare our theoretical predictions with the observations of red giants performed by Carlberg2012. We choose this sample since it gives surface velocities and surface gravities for a significant number of red giants (91 stars). Moreover, it also provides, for some stars, indications on the surface ¹²C/¹³C ratio as well as on the lithium abundance, allowing us to check whether stars that are believed to have acquired their high surface rotation through a planet engulfment also show some special features in their surface composition.
We shall not perform a very detailed analysis since that analysis was already done in Carlberg2012, but we want to address the following questions:
• Are there, among the observed stars, cases that cannot be explained by single star models?
• Can the observed surface velocities of such cases be reproduced by invoking a planet engulfment?
• If yes, is it possible to deduce the initial conditions required to explain these systems by a planet engulfment?
• Is there any chance to see some signatures of the planet engulfment in the surface composition?
Before investigating these points based on the present computations, we must see whether the metallicities and the mass range of our stellar models are adequate for a comparison with the sample of Carlberg2012. For what concerns the metallicity, the correspondence between the metallicity of the observed sample and the metallicity of the models is marginal. The [Fe/H] of the sample studied by Carlberg2012 is on average lower than solar, while the present computations were performed with a heavy-element mass fraction Z = 0.02, which is above solar. However, whatever the metallicity, the engulfment would produce a strong acceleration of the surface. A given surface velocity might be reached with slightly different initial conditions, but the main outcomes would not be changed.
In the left panel of Fig. 8, the observed sample is shown together with the tracks of the single (no engulfment) stellar models computed in the present work. Had we computed our stellar models with a lower initial metallicity, the tracks would have been slightly shifted to the left, i.e. to the blue. Keeping this in mind, we see that the range of masses followed here passes more or less through the averaged observed positions. It may be that some stars have an initial mass lower than 1.5 M⊙ and some a mass larger than 2.5 M⊙, but on the whole the mass range of the models is fairly representative of most of the masses of the sample.
### 4.1 Stars with a clear signature of a past interaction
In many works, a red giant is considered as fast rotating, and thus as a possible candidate for having gone through a planet engulfment, if its v sin i is larger than 8 km s⁻¹. This criterion is however too schematic. To understand this point, let us look at the right panel of Fig. 8. We see that, actually, many of the red-filled points (v sin i > 8 km s⁻¹) are not very far from the line corresponding to the track for the 1.5 M⊙ model, allowing such stars to be possibly explained without invoking any interaction with a companion. Conversely, much lower surface velocities can be the indication of a planet engulfment. This can be seen by looking at the magenta-dotted line in the lower-right corner of the right panel of Fig. 8. The tracks with no engulfment pass well below the slowly rotating points observed with surface gravities below about 1.6, while the track with engulfment of a one Jupiter-mass planet, initially at a distance of 0.5 au from a 1.5 M⊙ star, would go through at least part of these observed points. Thus, the surface rotation above which some interaction may be needed depends strongly on the surface gravity and cannot be given by a single number.
How can we distinguish stars that would almost certainly require some interaction as the cause of their high surface velocities? The continuous (blue) line shows the evolution of the surface velocity for a 2.5 M⊙ star that would begin its evolution at the critical velocity and evolve as a solid-body rotating model. This evolution, as explained in the previous section, represents an extremum in terms of efficiency of the internal angular momentum transport and of the surface rotation. Actually, this is a quite generous upper limit, because we know that red giants do not rotate as solid bodies.
Turning now to stars that are found above this limit, we can be quite confident that, for these cases, an interaction with a companion must have occurred. These stars are shown as circled, magenta-filled points and are individually labeled by a capital letter. So this answers our first question above: stars are indeed observed with surface velocities that cannot be explained by single star evolution. (Footnote 5: We may wonder, however, whether a more massive star (like a 3 M⊙) could explain these points. At least this does not appear to be the case for point A, which has, according to its position in the theoretical HR diagram, an initial mass clearly below 2.5 M⊙.)
As a final remark, we note, as was already done in the introduction, that the bulk of stars with v sin i lower than 8 km s⁻¹ lie between the single star evolutionary tracks with v_ini/v_crit = 0.1 and 0.5. This shows that single star models can well account for the surface rotation of the bulk of red giants.
### 4.2 Can planet engulfment explain the cases likely resulting from an interaction?
Can planet engulfment reproduce the surface velocities observed for those red giants whose surface rotations are so large that no single star model can reproduce them (see the points labeled by upper-case letters A to F in Figs. 8 and 9)? The answer is yes, as can be seen by comparing the locations of these stars in the right panel of Fig. 8 with the black continuous tracks. Only a few cases are shown for the purpose of illustration, but there is no doubt that, playing with the initial conditions (the mass of the star, the mass of the planet, its initial distance), it is indeed possible to produce a surface acceleration sufficient to reach the observed surface velocities, and this for a duration long enough for these high surface velocities to be observable.
### 4.3 Can the initial conditions be deduced from observations?
Can we determine the initial conditions needed to reproduce these systems? The solutions are unfortunately not unique. For instance, the positions of points E and F could be reproduced by the engulfment of, typically, a 10 M_Jup planet initially at a distance of 0.5 au from a 2.5 M⊙ star, or by the engulfment of a 10 M_Jup planet initially at a distance of 0.5 au from a 1.5 M⊙ star, and there are likely other solutions. Of course, additional information can provide further constraints. For instance, from the position of the E and F stars in the HR diagram (see left panel), the solution with the 2.5 M⊙ star would be favored over the one involving a 1.5 M⊙ star. In that case the star would likely be in the clump and not on the red giant branch.
### 4.4 The surface compositions after an engulfment
The left panel of Fig. 9 shows the isotopic ratios at the surface of the observed stars as well as the predictions of the theoretical models. The continuous lines show the predictions of the present stellar models without any planet engulfment. The predictions depend on the initial rotation: larger initial rotations lead to lower ¹²C/¹³C ratios. Models with an initial rotation equal to 50% of the critical one go roughly through the middle of the observed points. (Footnote 6: The present models do not account for thermohaline mixing, for instance, which has been proposed by CZ2007 as causing an additional decrease of the ¹²C/¹³C ratio and of the lithium abundance along the red giant branch (see also CL2010).)
We can deduce the following points by looking at the left panel of Fig. 9:
• The planet engulfment process decreases the shear between the core and the convective envelope and thus weakens the shear mixing in that region. However, this effect is negligible because, even in models without engulfment, which have a stronger shear in that region, we see no such effect. Indeed, such an effect would produce a continuous decrease of the ¹²C/¹³C ratio once the convective envelope has reached its deepest point. This is obviously not the case, as shown in the left panel of Fig. 9.
• Another process can change the isotopic ratio: the dilution of the planet material into the convective envelope. To estimate the importance of this process, we have followed the same line of reasoning as presented in Carlberg2012. We applied their equation (3), which gives the new isotopic ratio accounting for the dilution of the planetary material into the convective envelope:
$$\left(\frac{^{12}\mathrm{C}}{^{13}\mathrm{C}}\right)_{\mathrm{new}} = \frac{10^{A(\mathrm{C})_p}\,\frac{r_p}{1+r_p}\,q_e \;+\; 10^{A(\mathrm{C})_*}\,\frac{r_*}{1+r_*}}{10^{A(\mathrm{C})_p}\,\frac{1}{1+r_p}\,q_e \;+\; 10^{A(\mathrm{C})_*}\,\frac{1}{1+r_*}}$$
where A(C)_p is log N(C) in the planet and A(C)_* is the same quantity for the star (N(C) being the number of carbon atoms per unit volume, accounting for the two isotopes ¹²C and ¹³C). The ratio r is equal to ¹²C/¹³C, and q_e is the ratio of the mass of the planet to the mass of the convective envelope. For the quantities concerning the star (A(C)_*, r_*) we took the values directly from our stellar models. For the planet, we considered the same values as in Carlberg2012, namely a carbon abundance A(C)_p of about three times the solar carbon abundance as determined by Wong2004, and a ¹²C/¹³C ratio equal to the standard value for the solar system (Lodders1998). Using the above formula, we obtain after engulfment the level of ¹²C/¹³C shown by the dashed segments in Fig. 9. We see that, since the planetary material has a higher ¹²C/¹³C ratio than the stellar envelope, this effect makes the surface isotopic ratio slightly larger. The effect is however small, and thus the measurement of this isotopic ratio appears to be a poor indicator of a planet engulfment, especially since changes of the initial mass, the rotation and likely other possible mixing processes have stronger effects than the engulfment.
• Looking only at the observed points, we see that the fast rotators, and those stars whose surface velocities cannot be explained by single star evolution, do not present carbon isotopic ratios different from the rest of the sample, confirming that this isotopic ratio does not appear to be very sensitive to the engulfment.
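The dilution formula above (equation 3 of Carlberg2012) is simple enough to evaluate directly. In the sketch below, the carbon abundances and isotope ratios are made-up placeholder inputs, not the values actually used in the paper:

```python
def c12_c13_after_engulfment(q_e, a_c_star, r_star, a_c_planet, r_planet):
    """12C/13C of the envelope after diluting planetary material into it.

    a_c_* are log carbon number abundances (both isotopes together),
    r_* are 12C/13C ratios, and q_e is the planet-to-envelope mass ratio.
    """
    num = (10**a_c_planet * r_planet / (1 + r_planet) * q_e
           + 10**a_c_star * r_star / (1 + r_star))
    den = (10**a_c_planet / (1 + r_planet) * q_e
           + 10**a_c_star / (1 + r_star))
    return num / den

# Hypothetical inputs: a post-dredge-up envelope with 12C/13C = 20, a planet
# with a solar-system-like ratio of 89, and guessed carbon abundances.
ratio = c12_c13_after_engulfment(q_e=1e-3, a_c_star=8.4, r_star=20.0,
                                 a_c_planet=8.9, r_planet=89.0)
print(round(ratio, 2))  # only slightly above the stellar value of 20
```

Even with a planet-to-envelope mass ratio of 10⁻³ and a strongly carbon-enriched planet, the ratio barely moves, which is the "poor indicator" point made above.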
In the right panel of Fig. 9, observed lithium abundances are shown. As indicated by Carlberg2012, it seems that the fast rotators present on average higher lithium abundances. However, if we consider the Li-rich giants observed by Adamow2012; Adamow2014; Adamow2015, none of these stars has a v sin i above 8 km s⁻¹ (see the right panel of Fig. 8), which somewhat blurs the trend.
Estimates similar to the one made above for the carbon isotopic ratio can be performed for the lithium abundance. Using equation (2) of Carlberg2012,
$$A(\mathrm{Li})_{\mathrm{new}} = \log\left(q_e\,10^{A(\mathrm{Li})_p} + 10^{A(\mathrm{Li})_*}\right) - \log\left(1+q_e\right)$$
we estimate the new surface lithium abundance, A(Li)_new, after accretion of the planetary material. In our stellar models, the abundance of lithium is not followed explicitly; thus, as in Carlberg2012, we considered for A(Li)_* a value equal to −0.18 dex, corresponding to the average abundance of lithium observed at the surface of slow rotators (see the horizontal continuous line in the right panel of Fig. 9). For A(Li)_p, we took a value equal to 3.3 (Lodders1998), while for q_e we used the stellar model quantities. The results are shown by the dotted tracks in the right panel of Fig. 9. The lines represent the surface lithium abundances that would be observed if the engulfment occurred at various surface gravities. When the outer convective zone begins to appear, the convective envelope is very small and the Li abundance that results from the dilution of a planet is very high. This produces the vertical part of the track. Then, as the envelope increases in mass, the track evolves towards lower Li abundances and lower gravities.
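This formula can be evaluated directly. A minimal sketch, using the A(Li)_* = −0.18 dex and A(Li)_p = 3.3 dex values quoted above; the planet-to-envelope mass ratio q_e is a hypothetical input:

```python
import math

def lithium_after_engulfment(q_e, a_li_star=-0.18, a_li_planet=3.3):
    """A(Li) of the convective envelope after diluting planetary material.

    q_e is the planet-to-envelope mass ratio; the default abundances are
    the values quoted in the text (Carlberg2012; Lodders1998).
    """
    return (math.log10(q_e * 10**a_li_planet + 10**a_li_star)
            - math.log10(1 + q_e))

# Hypothetical example: a ~1 M_Jup planet (~1e-3 M_sun) diluted into a
# ~1 M_sun convective envelope.
print(round(lithium_after_engulfment(1e-3), 2))  # ~0.42, up from -0.18
```

Because the planetary lithium abundance is several orders of magnitude above the envelope value, even a small q_e raises the surface A(Li) noticeably, which is why the tracks are so steep when the convective envelope is still thin.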
From the right panel of Fig. 9, we see that the dilution of the planetary material into the convective envelope can indeed have a strong effect on the surface Li abundance. The conditions at the base of the convective envelope of our stellar models are not favorable to a rapid destruction of that element by proton captures; thus it will not be burned unless some mixing occurs below the convective envelope (see also Aguilera2016).
The strong effect on the lithium abundance found here after planet engulfment confirms what was suggested by many authors in the past (see e.g. alexander67; siess99I; Carlberg2012; Adamow12), namely that the planet engulfment process can indeed produce lithium-rich red giants. However, the lithium signature of planet engulfment remains difficult to disentangle from other processes. Lithium is a fragile element, destroyed at about 2.6 million degrees, so any surface enrichment might in some stellar models disappear rapidly. Lithium can also be produced in some red giant models, completely blurring the picture for what concerns the origin of a high lithium abundance.
In that respect, it is interesting to see what the surface lithium abundances are for those stars whose high surface velocities cannot be explained by single star models (see the circled, magenta-filled points in Fig. 9). Their positions are not very peculiar, in the sense that slowly rotating stars are also found with similar levels of Li abundance. However, we see that all these stars are above the average level for the (apparently) slowly rotating stars, leaving open the possibility that they could have received some lithium from a planet.
On the other hand, as noted already above, none of the Li-rich giants observed by Adamow2012; Adamow2014; Adamow2015 are apparently fast rotators. These kinds of stars are therefore difficult to explain by a planet engulfment process, unless some mechanism not accounted for in the present models, such as strong magnetic wind braking, has occurred. In that case those red giants would once have been fast rotators, but the magnetic wind braking was efficient enough for the star to have lost the memory of its post-engulfment fast-rotating stage.
## 5 Perspectives and Conclusions
The main conclusions of the present paper can be summarized in the following way:
• As was already shown in paperI, there are observed red giant stars whose high surface velocities cannot be explained by single star evolution. The upper velocities that can be reached by single star models of the considered mass are indicated as a function of the surface gravity in Fig. 2 by the long-dashed black line labeled V. The use of such gravity-dependent limits to isolate red giants that have likely engulfed a planet, or at least interacted with it, is probably more realistic than the commonly used limit of 8 km s⁻¹.
• The present models show that tidal interactions followed by planet engulfment can reproduce the high surface velocities of these stars during periods long enough for these high rotations to be observable. Surface velocities beyond the upper limit allowed by single star evolution can already be reached by tidal interaction alone.
• We also find that the high rotation obtained after the engulfment is maintained beyond the end of the red giant branch for the 2.5 M⊙ model. It would be extremely interesting to check whether such a conclusion would hold for our lower initial mass models, which go through a He-flash episode.
• Conditions also exist for producing engulfments with much less spectacular impacts. Let us recall that the engulfment of a one Jupiter-mass planet by a 1.5 M⊙ star would produce surface velocities of only a few km s⁻¹ at gravities below about 1.6. This is quite small, and these stars would hardly be recognized as having engulfed a planet.
• The above conclusions make the link between observed fast-rotating red giants and red giants having engulfed a planet much less clear, in the sense that not all fast-rotating red giants need an interaction to have reached their surface velocities (these may simply reflect a high initial rotation rate of the progenitor), and not all slowly rotating red giants can be explained by single star evolution (typically in the upper part of the red giant branch).
• We discussed in a simple way the consequences of an engulfment for the surface composition of red giants and confirmed the results obtained by Carlberg2012: the effect on the carbon isotopic ratio is very small, while the impact on lithium might be quite large, although it remains difficult to interpret (see below).
• We showed that the chemical signatures of an engulfment are still quite ambiguous, because planet engulfment either produces too small a signal, as in the case of the carbon isotopic ratios, or produces signals that cannot be attributed unambiguously to a planet engulfment. This makes the surface velocity of a red giant the most stringent observable feature indicating a past star-planet interaction.
• The present results also show that the evolution after an engulfment is not very different from that obtained without any engulfment. The main difference is in the surface rotation (and of course in the rotation of the external convective zone as a whole). A planet engulfment would somewhat lower the contrast between the angular velocity of the core and that of the envelope, but not at a level that could help reproduce the small contrast obtained by asteroseismology for a few red giants (e.g. beck12; Mosser2012; deh12; deh14).
There are many other points that should be addressed in further works. In particular, we shall investigate whether the acceleration of the surface due to a planet engulfment may trigger a magnetic field.
###### Acknowledgements.
This research has made use of the Exoplanet Orbit Database and the Exoplanet Data Explorer at exoplanets.org. The project has been supported by Swiss National Science Foundation grants 200021-138016, 200020-160119 and 200020-15710.
## Appendix A Detailed properties of the stellar models before and after engulfment
This appendix contains three tables giving information on the characteristics of the star before and after an engulfment. Table 1 indicates the properties of the stellar models just before the engulfment. The first column is the mass of the planet. Columns 2 to 6 give the mass of the star, its radius, its surface angular velocity, its mass-loss rate and the mass of its convective envelope at this time.
Tables 2 and 3 present the changes occurring due to the tides/engulfment processes. The first column is the mass of the planet. Column 2 gives the equatorial surface velocity before the engulfment. Note that this velocity is already significantly higher than that of the corresponding single star at that stage; it is the velocity acquired through the tidal forces alone. The duration of the phase during which, starting from the velocity indicated in column 2, the surface velocity would remain higher than 8 km s⁻¹ is given in column 3. To compute this duration we used the results shown in Fig. 10. This figure shows the evolution of the surface velocity as a function of time just after an engulfment: roughly, the surface velocity follows a fixed decay as a function of the time elapsed (in Myr) since the end of the MS phase, scaled by the surface velocity reached just after the engulfment. This relation does not depend much on the mass of the planet, or equivalently on the initial velocity. It means that after 2.5 Myr, whatever the initial velocity, the surface velocity will have decreased by 10% with respect to its initial value. (Footnote 7: Actually, this timescale changes a bit with the mass of the planet, as can be seen in Fig. 10, being slightly shorter for the 5 than for the 15 M_Jup planet. On the other hand, the differences between the various planet masses are not very large.) We can use such a plot to estimate the time during which a star will maintain a certain velocity after the engulfment while evolving up the red giant branch. This is how we have estimated the time indicated in column 3 of Tables 2 and 3, giving the duration of the period when the star has a surface velocity higher than 8 km s⁻¹ when only the acceleration due to tides is accounted for.
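Assuming the decay is self-similar — a fixed 10% drop every 2.5 Myr, as stated above (this extrapolation is ours, not a fit from the paper) — the post-engulfment velocity and the time spent above the 8 km s⁻¹ criterion can be sketched as:

```python
import math

def v_surf(t_myr, v0):
    """Surface velocity t_myr after an engulfment that left the star at v0
    (km/s), assuming a self-similar 10% drop every 2.5 Myr."""
    return v0 * 0.9 ** (t_myr / 2.5)

def time_above(v0, v_limit=8.0):
    """Myr spent above v_limit (km/s) after an engulfment at velocity v0."""
    if v0 <= v_limit:
        return 0.0
    return 2.5 * math.log(v0 / v_limit) / math.log(1 / 0.9)

# E.g. an engulfment spinning the star up to 40 km/s:
print(round(time_above(40.0), 1))  # ~38 Myr above the 8 km/s criterion
```

With an engulfment velocity of 40 km s⁻¹ this gives roughly 38 Myr above the criterion, comparable to the ~36 Myr quoted in Sect. 3 for the 1.5 M⊙ case.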
In Tables 2 and 3, columns 4 to 6 indicate the surface velocity right after the engulfment, the duration of the phase during which the surface velocity, after an engulfment, is higher than 8 km s⁻¹, and finally the fraction of the red giant branch phase spent with a surface velocity above 8 km s⁻¹. Table 2 shows the results obtained with stellar models having an initial angular velocity equal to 10% of the critical velocity, and Table 3 the results for models having an initial angular velocity equal to 50% of the critical velocity.
http://physics.stackexchange.com/users/11977/lai?tab=reputation | # lai
reputation: 4 · member for 2 years, 6 months · seen Mar 25 at 8:19 · profile views 7
Hi! I'm from a university in China. I'm interested in physics phenomenons and physics theories, even though I am not good at it. I am studying English HARD, besides!!So, Gentleman, Please forgive my chinglish!! :)
# 28 Reputation
Feb 15 '15 — +5 (06:45, upvote): Is there a relation between quantum theory and Fourier analysis?
Dec 30 '14 — +5 (10:32, upvote), +2 (14:51, accept): Is there a relation between quantum theory and Fourier analysis?
Jan 12 '13 — +5 (23:42, upvote): Is there a relation between quantum theory and Fourier analysis?
Dec 13 '12 — 0
5 Sep 7 '12
5 Sep 6 '12 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8799411654472351, "perplexity": 2666.6727718412194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298387.35/warc/CC-MAIN-20150323172138-00118-ip-10-168-14-71.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/trig-limit-problem.434939/ | # Trig limit problem
• #1
## Homework Statement
Is there a way to solve
$$\lim_{x \to 0} \frac{x - \sin x}{x^{3}}$$
using the fact that $$\lim_{x \to 0} \frac{\sin x}{x} = 1$$,
or is it much more complicated? I've tried to break it down into sin x/x every way I can think of, with no luck.
• #2
Do you know l'Hôpital's rule?
I am thinking of a way to do it with only sin(x)/x but can't come up with a good way immediately.
• #3
I am not familiar with l'Hôpital's rule yet...
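A numerical sanity check (not part of the thread) suggests what the answer should be: since $$\sin x = x - \frac{x^{3}}{6} + O(x^{5})$$, the ratio should tend to 1/6, and evaluating it for shrinking x agrees.

```python
import math

# Evaluate (x - sin x) / x^3 for a sequence of shrinking x values.
for x in (0.1, 0.01, 0.001):
    ratio = (x - math.sin(x)) / x**3
    print(f"x = {x:g}: ratio = {ratio:.8f}")
# The printed ratios approach 1/6 ≈ 0.16666667.
```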
https://datascience.stackexchange.com/questions/5817/airline-fares-what-analysis-should-be-used-to-detect-competitive-price-setting/5853 | # Airline Fares - What analysis should be used to detect competitive price-setting behavior and price correlations?
I want to investigate price-setting behavior of airlines -- specifically how airlines react to competitors pricing.
As my knowledge of more complex analysis is quite limited, I have so far applied mostly basic methods to get an overall view of the data. This includes simple graphs, which already help to identify similar patterns. I am also using SAS Enterprise 9.4.
However, I am looking for a more numbers-based approach.
## Data Set
The (self-)collected data set I am using contains around 54,000 fares. All fares were collected within a 60-day time window, on a daily basis (every night at 00:00).
Hence, every fare within that time window occurs $$n$$ times, subject to the availability of the fare and to the departure date of the flight relative to the collection date. (You can't collect a fare for a flight whose departure date is in the past.)
The unformatted data looks basically like this (fake data):
+--------------------+-----------+--------------------+--------------------------+---------------+
| requestDate | price| tripStartDeparture | tripDestinationDeparture | flightCarrier |
+--------------------+-----------+--------------------+--------------------------+---------------+
| 14APR2015:00:00:00 | 725.32 | 16APR2015:10:50:02 | 23APR2015:21:55:04 | XA |
+--------------------+-----------+--------------------+--------------------------+---------------+
| 14APR2015:00:00:00 | 966.32 | 16APR2015:13:20:02 | 23APR2015:19:00:04 | XY |
+--------------------+-----------+--------------------+--------------------------+---------------+
| 14APR2015:00:00:00 | 915.32 | 16APR2015:13:20:02 | 23APR2015:21:55:04 | XH |
+--------------------+-----------+--------------------+--------------------------+---------------+
"DaysBeforeDeparture" is calculated via $$I = s - c$$, where
• $$I$$ is the interval (days before departure)
• $$s$$ is the departure date of the flight
• $$c$$ is the date on which the fare was collected
Here is an example of the data set grouped by $$I$$ (DaysBefDeparture) (fake data!):
+-----------------+------------------+------------------+------------------+------------------+
| DaysBefDeparture | AVG_of_sale | MIN_of_sale | MAX_of_sale | operatingCarrier |
+-----------------+------------------+------------------+------------------+------------------+
| 0 | 880.68 | 477.99 | 2,245.23 | DL |
+-----------------+------------------+------------------+------------------+------------------+
| 0 | 904.89 | 477.99 | 2,534.55 | DL |
+-----------------+------------------+------------------+------------------+------------------+
| 0 | 1,044.39 | 920.99 | 2,119.09 | LH |
+-----------------+------------------+------------------+------------------+------------------+
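For readers without SAS at hand, a grouped summary like the table above can be reproduced with a short pure-Python sketch; the field names and prices below are invented to mirror the fake data.

```python
from statistics import mean

# Toy fares: (days_before_departure, carrier, price) -- invented numbers.
fares = [
    (0, "DL", 880.68), (0, "DL", 477.99), (0, "DL", 2245.23),
    (0, "LH", 1044.39), (0, "LH", 920.99),
    (3, "DL", 650.00), (3, "DL", 700.00),
]

# Group prices by (days_before_departure, carrier).
groups = {}
for days, carrier, price in fares:
    groups.setdefault((days, carrier), []).append(price)

# Summarise each group: average, minimum, and maximum fare.
summary = {
    key: {"avg": round(mean(prices), 2), "min": min(prices), "max": max(prices)}
    for key, prices in groups.items()
}

for (days, carrier), stats in sorted(summary.items()):
    print(days, carrier, stats)
```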
## What I came up with so far
Looking at the line graphs, I can already estimate that several lines will have a high correlation factor. Hence, I first tried to use correlation analysis on the grouped data. But is that the correct way? Basically, I am now computing correlations on the averages rather than on the individual prices. Is there another way?
I am unsure which regression model fits here, as the prices do not move in any linear form. Would I need to fit a separate model to each airline's price development?
PS: This is a long text-wall. If I need to clarify anything let me know. I am new to this sub.
Anyone a clue? :-)
Word of warning from a former airline Revenue Management analyst: you might be barking up the wrong tree with this approach. Apologies for the wall of text that follows, but this data is a lot more complex and noisy than might appear at first glance, so wanted to provide a short description of how it's generated; forewarned is forearmed.
Airline fares have two components to them: all the actual fares (complete with fare rules and what have you) that an airline has available for a certain route, most of which are published through the Airline Tariff Publishing Company (a few special-use ones are not, but those are the exception rather than the rule), and the actual inventory management performed by the airline on a day-to-day basis.
Fares can be submitted to ATPCO four times a day, at set intervals, and when airlines do so, it will usually consist of a mixture of additions, deletions, and modifications of existing fares. When an airline initiates a pricing action (assuming their competitors aren't trying to make their own moves here), they usually have to wait until the next update to see if their competitors follow/respond. The converse goes when a competitor initiates a pricing action, as the airline has to wait until the next update before they can respond.
Now, this is all well and good with respect to fares, but the problem is that, because this is all getting published in ATPCO, fares are the next best thing to public information... all your competitors get to see what you've got in your arsenal, so attempts to obfuscate are not unheard of, such as publishing fares that will never actually be assigned any inventory, listing all the fares as day-of-departure, etc.
In many ways, the secret sauce comes down to the actual inventory allocation, i.e. how many seats on each flight will you be willing to sell for a given fare, and this information is not publicly available. You can get some glimpses by scraping web info, but the potential combinations of departure time/date and fare rules are quite numerous and may quickly escalate beyond your ability to easily keep track of.
Typically an airline will only be willing to sell a handful of seats for a very low fare and the people who snag those have to book quite far in advance lest the fare rules lock them out, or other travelers simply beat them to the punch. The airline will be willing to sell a few more seats for a higher fare, and so on and so forth. They will be quite happy to sell all of the seats for the highest fare they've got published, but this is not usually feasible.
What you're seeing with fares getting higher the closer you get to the day of departure is simply the natural process of having the cheap seats get booked farther out, while the remaining inventory gradually gets more expensive. Of course, there are some caveats here. The RM process is actively managed and human intervention is quite common as the RM team generally strives to meet its revenue goals and maximize revenue on each flight. As such, flights that fill up quickly may be "tightened up" by closing out low fares. Flights that are booking slowly may be "loosened up" by allocating more seats to lower fares.
There is a constant interplay and competition between airlines in this area, but you are not very likely to capture the actual dynamics just from scraping fares. Don't get me wrong, we had such tools at our disposal, and, despite their limitations, they were quite valuable, but they were just one data source that fed into the decision-making process. You'd need access to the hundreds, if not thousands of operational decisions made by RM teams on a daily basis, as well as state-of-the-world information as they see it at the time. If you cannot find an airline partner to work with in order to get this data, you might need to consider alternate data sources.
I'd recommend looking into getting access to O&D fare data from the Official Airline Guide (or one of their competitors) and try to use that for your analysis. It's sample-based (about 10% of all tickets sold) and aggregated at a higher level than would be ideal so careful route selection is imperative (I'd recommend something with plenty of airlines, flying non-stop multiple times a day, with large aircraft), but you may be able to get a better picture of what was actually sold (average fare) and how much of it was sold (load factor), vs. merely what is available for sale at a given point in time. Using that information you might be in better position to at least explore the outcomes of the airlines' pricing strategy, and make your inferences from there.
• Thanks for your thorough explanation. I agree with you that analyses based on prices only are quite limited. This notably includes fare rules (refundable tickets, minimum stay, etc.). Some of those limitations can be overcome by always collecting the same fares, to make them comparable. However, an important piece of information is missing, as you mentioned: the number of seats available (which can differ from the number of seats on the plane) and the actual number of tickets sold. – s1x May 21 '15 at 13:16
• Access to such data is very limited and, where available, outdated (e.g., Databank 1B from the US DOT). Some research, such as Clark R. and Vincent N. (2012), Capacity-contingent pricing [...] link, includes such data and offers much better insights. I am aware of the limitations (hopefully ;-) ), and, as you mentioned, there is much more information influencing prices. Still, when observing a specific market you can get a feeling for what happens. You can see if there is any competitive behaviour and different pricing-strategy approaches. However, you would never be able to find the cause. – s1x May 21 '15 at 13:19
• @s1x - I agree and I wish I had a solid alternative to offer, but, as you've learned yourself, detailed revenue data is the most jealously guarded secret at any airline. Just wanted to make sure you're aware of that and what goes into the data generation process. Beyond, that, I like what you're trying to do and I think the other answer is a step in the right direction, technique-wise. If I might suggest, you could also take a look at using cross-correlation between your various TS during your data exploration, as it is often valuable for discerning patterns between linked TS. – habu May 21 '15 at 13:24
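The cross-correlation idea from the comment above can be sketched in pure Python; the two fare series and the one-day lag between them are synthetic, purely for illustration.

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

def cross_correlation(a, b, max_lag):
    """Correlate a with b shifted by each lag in 0..max_lag.
    A peak at lag k > 0 would hint that series b follows series a
    by about k days."""
    return {lag: pearson(a[:len(a) - lag], b[lag:]) for lag in range(max_lag + 1)}

# Synthetic daily average fares: airline_b tracks airline_a one day late.
airline_a = [700, 710, 705, 730, 760, 780, 775, 800, 830, 860]
airline_b = [690, 695, 712, 707, 733, 758, 782, 778, 802, 828]

cc = cross_correlation(airline_a, airline_b, max_lag=3)
best_lag = max(cc, key=cc.get)
print({k: round(v, 3) for k, v in cc.items()}, "best lag:", best_lag)
```

On this toy data the correlation peaks at lag 1, recovering the built-in one-day delay; on real fares the same computation would be run on the daily average price series of two airlines.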
In addition to exploratory data analysis (EDA), both descriptive and visual, I would try to use time series analysis as a more comprehensive and sophisticated analysis. Specifically, I would perform time series regression analysis. Time series analysis is a huge research and practice domain, so, if you're not familiar with the fundamentals, I suggest starting with the above-linked Wikipedia article, gradually searching for more specific topics and reading corresponding articles, papers and books.
Since time series analysis is a very popular approach, it is supported by most open source and closed source commercial data science and statistical environments (software), such as R, Python, SAS, SPSS and many others. If you want to use R for this, check my answers on general time series analysis and on time series classification and clustering. I hope that this is helpful.
• Thank you for your answer @Aleksandr Blekh - really appreciated. I'll dig right into that. Maybe a stupid question, but please correct me if I'm wrong here: a correlation analysis, using one airline as the variable to correlate with. The results were compelling so far, as some airlines, esp. those with codeshare agreements, had similar prices. I assume such high correlations, e.g. ColumnUA(LH) 0.90435 <.0001, ColumnSQ 0.32544 <.0001, ColumnAF(DL) 0.55336 <.0001, indicate similar price patterns. With a regression analysis, what would I find out? – s1x May 18 '15 at 2:55
• @s1x: You're very welcome (feel free to upvote/accept, if you value the answer and when you get enough reputation to do so, of course). Now, on to your question. As I said, TS analysis is more sophisticated and comprehensive. In particular, TS regression accounts for so-called autoregression and other TS complexities. Hence, my suggestion to use TS regression analysis instead of a simpler traditional one. Also, you should always start with EDA, no matter what data analysis you plan to perform (actually, EDA will often change your plans). – Aleksandr Blekh May 18 '15 at 3:21
http://scitation.aip.org/content/aip/journal/pop/17/12/10.1063/1.3512937
Acoustic solitons in inhomogeneous pair-ion plasmas
DOI: 10.1063/1.3512937
Affiliations:
1 Theoretical Plasma Physics Division, PINSTECH, P.O. Nilore, Islamabad 44000, Pakistan
Phys. Plasmas 17, 122302 (2010)
## Figures
FIG. 1. The variation in soliton amplitude for different values of the temperature ratio.
FIG. 2. The variation in soliton amplitude for different values of the temperature ratio.
FIG. 3. The variation in soliton amplitude for different values of the density gradient parameter (including 0.05 and 0.1/cm).
FIG. 4. The variation in soliton amplitude for different values of the density gradient parameter (including 0.05 and 0.1/cm).
https://www.dtu.dk/english/collaboration/collaboration-news?at=%7B559A7F23-40D3-470C-8825-436C20C291CF%7D.%7BB4DD98D4-914B-457D-90DC-7DC551935E0E%7D.%7BB7E2489D-00DF-43F7-A887-639B3AB7F9CF%7D%7C%7B559A7F23-40D3-470C-8825-436C20C291CF%7D.%7BB4DD98D4-914B-457D-90DC-7DC551935E0E%7D.%7BFDBAEFE2-1258-4550-AA8B-B810B393302E%7D&fr=1&lid=d6ce4f30-f9d6-4268-94cb-66ffd181ba2b&mr=10&qt=NewsQuery | # Collaboration news
2019
07 JUN
## Recommendations for catalysis research in electrolysis
Two leading Danish catalysis researchers have prepared a comment for the distinguished scientific journal Nature Energy with three clear recommendations for their colleagues...
Catalysis Electrochemistry Energy storage Fossil fuels Energy production
2017
19 JAN
## Electrocatalysis can advance green transition
Renewable energy is providing an increasing share of the energy supply, but to ensure the green transition continues, it must also be able to furnish us with the fuels...
Catalysis Micro and nanotechnology Metals and alloys Nanoparticles Fuel cells Energy storage Fossil fuels Solar energy Wind energy
2016
11 AUG
## New research centre can create breakthroughs in fossil fuel alternatives
A grant of DKK 150 million from VILLUM FONDEN will enable a group of the world’s leading researchers to seek new energy, fuel and chemical alternatives to replace oil...
Solar energy Fossil fuels Energy production Energy storage Climate adaption
11 MAR
## The cities can become fossil-free, if they think 'smart'
Cities are major energy consumers and thus also CO₂ emitters. However, it is precisely the many different urban infrastructures and energy units that may be the key...
Electronics Electricity supply Energy efficiency Energy storage Fossil fuels Energy production Energy systems IT systems Climate adaption CO2 separation and CO2 storage
https://link.springer.com/chapter/10.1007/978-3-319-65235-1_10?error=cookies_not_supported&code=77d01e0c-9c58-4519-97ac-21ed467ffc4f | # Relativizations
Chapter
## Abstract
The notion of a relativization of a relation algebra $$\mathfrak{U}$$ is closely related to the notion of a subalgebra of $$\mathfrak{U}$$. In the latter, the fundamental operations of $$\mathfrak{U}$$ are restricted to a subset of $$\mathfrak{U}$$ that is closed under these operations. In the former, the fundamental operations of $$\mathfrak{U}$$ are relativized to the subset of all elements below a given element. Relativizations play a fundamental role in the study of direct decompositions of relation algebras.
http://math.stackexchange.com/questions/73033/expected-number-of-intersection-points-when-n-random-chords-are-drawn-in-a-cir | # Expected number of intersection points when $n$ random chords are drawn in a circle
There are $n$ random chords drawn in a circle. I am trying to determine the expected number of points in which the chords intersect within the circle.
You need to specify the distribution the random chords are chosen with. There are several intutively "reasonable" distributions on the chords in a circle that give different answers; see Bertrand's paradox. – Henning Makholm Oct 16 '11 at 16:02
If the chords are identically distributed, this is $\frac12n(n-1)$ times the probability that two given chords meet. Two chords are determined by four points on the circle. Amongst the three possible pairings of these four points, one produces meeting chords and the two others do not (place without loss of generality the four points at North, South, East and West positions, then the winning pairing is N-S vs W-E).
Assume that by symmetry, each pairing of the four points is equally probable. Then the mean total number of intersection points is $\frac16n(n-1)$.
The condition for this result to hold is that, for every two chords $C=[A,B]$ and $C'=[A',B']$, the distribution of $\{A,B,A',B'\}$ is exchangeable. For example, one can choose each endpoint of every chord uniformly on the circle and independently of the others.
Other procedures for choosing the chords may produce other values. For example, if one chooses the midpoint of every chord uniformly at random in the disc, and if these $n$ choices are independent, the mean total number of intersection points is $\frac12pn(n-1)$, where $p$ is the probability that two chords chosen according to this procedure meet.
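Since the argument only uses linearity of expectation (which holds regardless of whether the pairwise meeting events are independent), the $\frac16n(n-1)$ value is easy to verify by simulation. The sketch below assumes independent uniform endpoints on the circle.

```python
import random

def chords_intersect(c1, c2):
    """Chords on the unit circle, endpoints given as angles in [0, 1).
    They cross iff exactly one endpoint of c2 lies on the arc strictly
    between the endpoints of c1."""
    a, b = sorted(c1)
    return (a < c2[0] < b) != (a < c2[1] < b)

def mean_intersections(n, trials, rng):
    """Average number of pairwise intersection points over many samples
    of n chords with independent uniform endpoints."""
    total = 0
    for _ in range(trials):
        chords = [(rng.random(), rng.random()) for _ in range(n)]
        total += sum(
            chords_intersect(chords[i], chords[j])
            for i in range(n)
            for j in range(i + 1, n)
        )
    return total / trials

rng = random.Random(0)
n = 4
estimate = mean_intersections(n, trials=20_000, rng=rng)
print(estimate, "vs", n * (n - 1) / 6)  # estimate is close to 2.0
```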
Are you sure that all those $\frac12n(n-1)$ events are independent? Perhaps they are, but to me at least, it's not obvious. – TonyK Oct 16 '11 at 14:12
http://hal.in2p3.fr/in2p3-01212683 | # Lifetime measurements of the first 2+ states in 104,106Zr: Evolution of ground-state deformations
Abstract : The first fast-timing measurements from nuclides produced via the in-flight fission mechanism are reported. The lifetimes of the first 2⁺ states in ¹⁰⁴,¹⁰⁶Zr nuclei have been measured via β-delayed γ-ray timing of stopped radioactive isotope beams. An improved precision for the lifetime of the 2⁺₁ state in ¹⁰⁴Zr was obtained, τ(2⁺₁) = 2.90 +0.25 −0.20 ns, as well as a first measurement of the 2⁺₁ state in ¹⁰⁶Zr, τ(2⁺₁) = 2.60 +0.20 −0.15 ns, with corresponding reduced transition probabilities of B(E2; 2⁺₁ → 0⁺_g.s.) = 0.39(2) e²b² and 0.31(1) e²b², respectively. Comparisons of the extracted ground-state deformations, β₂ = 0.39(1) (¹⁰⁴Zr) and β₂ = 0.36(1) (¹⁰⁶Zr), with model calculations indicate a persistence of prolate deformation. The data show that ¹⁰⁴Zr is the most deformed of the neutron-rich Zr isotopes measured so far.
Document type :
Journal articles
http://hal.in2p3.fr/in2p3-01212683
Contributor : Emmanuelle Vernay
Submitted on : Thursday, October 8, 2015 - 9:34:48 AM
Last modification on : Tuesday, May 22, 2018 - 9:48:10 PM
### Citation
F. Browne, A.M. Bruce, T. Sumikama, I. Nishizuka, S. Nishimura, et al.. Lifetime measurements of the first 2+ states in 104,106Zr: Evolution of ground-state deformations. Physics Letters B, Elsevier, 2015, 750, pp.448-452. ⟨10.1016/j.physletb.2015.09.043⟩. ⟨in2p3-01212683⟩
https://www.texasgateway.org/resource/113-expenditure-output-or-keynesian-cross-model?book=79091&binder_id=78456 | # Learning Objectives
By the end of this section, you will be able to:
• Explain what the expenditure-output model/Keynesian cross diagram shows and what the equilibrium point on the diagram represents.
• Analyze the consumption function, investment function, government spending, taxes, imports, and exports in relation to the expenditure-output model.
• Analyze the location of an equilibrium point on an expenditure-output model to determine characteristics of the economy, including the presence of a recessionary gap or an inflationary gap.
• Evaluate the stability of an economy based on the multiplier effect in relation to the expenditure-output model.
The fundamental ideas of Keynesian economics were developed before the AD/AS model was popularized. From the 1930s until the 1970s, Keynesian economics was usually explained with a different model, known as the expenditure–output approach. This approach is rooted strongly in the fundamental assumptions of Keynesian economics: It focuses on the total amount of spending in the economy, with no explicit mention of aggregate supply or of the price level, although, as you will see, it is possible to draw some inferences about aggregate supply and price levels based on the diagram.
# The Axes of the Expenditure-Output Diagram
The expenditure–output model, sometimes also called the Keynesian cross diagram, determines the equilibrium level of real GDP by the point where the total or aggregate expenditures in the economy are equal to the amount of output produced. The axes of the Keynesian cross diagram presented in Figure 11.7 show real GDP on the horizontal axis as a measure of output, and aggregate expenditures on the vertical axis as a measure of spending.
Figure 11.7 The Expenditure-Output Diagram The aggregate expenditure-output model shows aggregate expenditures on the vertical axis and real GDP on the horizontal axis. A vertical line shows potential GDP where full employment occurs. The 45° line shows all points where aggregate expenditures and output are equal. The aggregate expenditure schedule shows how total spending or aggregate expenditure increases as output or real GDP rises. The intersection of the aggregate expenditure schedule and the 45° line is the equilibrium. Equilibrium occurs at E0, where aggregate expenditure (AE0) is equal to the output level (Y0).
Remember that GDP can be thought of in several equivalent ways. It measures both the value of spending on final goods and also the value of the production of final goods. All sales of the final goods and services that make up GDP will eventually end up as income for workers, for managers, and for investors and owners of firms. The sum of all the income received for contributing resources to GDP is called national income (Y). At some points in the discussion that follows, it is useful to refer to real GDP as national income. Both axes are measured in real—inflation-adjusted—terms.
# The Potential GDP Line and the 45° Line
The Keynesian cross diagram contains two lines that serve as conceptual guideposts to orient the discussion. The first is a vertical line that shows the level of potential GDP. Potential GDP means the same thing here that it means in the AD/AS diagrams: It refers to the quantity of output the economy can produce with full employment of its labor and physical capital.
The second conceptual line on the Keynesian cross diagram is the 45° line, which starts at the origin and reaches up and to the right. A line that stretches up at a 45° angle represents the set of points (1, 1), (2, 2), (3, 3), and so on, where the measurement on the vertical axis is equal to the measurement on the horizontal axis. In this diagram, the 45° line shows the set of points where the level of aggregate expenditure in the economy, measured on the vertical axis, is equal to the level of output or national income in the economy, measured by GDP on the horizontal axis.
When the macroeconomy is in equilibrium, it must be true that the aggregate expenditures in the economy are equal to the real GDP because, by definition, GDP is the measure of what is spent on final sales of goods and services in the economy. Thus, the equilibrium calculated with a Keynesian cross diagram will always end up where aggregate expenditure and output are equal—which occurs only along the 45° line.
### The Aggregate Expenditure Schedule
The final ingredient of the Keynesian cross or expenditure-output diagram is the aggregate expenditure schedule, which shows the total expenditures in the economy for each level of real GDP. The intersection of the aggregate expenditure line with the 45° line—at point E0 in Figure 11.7—shows the equilibrium for the economy, because it is the point where aggregate expenditure is equal to output or real GDP. After developing an understanding of what the aggregate expenditures schedule means, we return to this equilibrium and examine how to interpret it.
### Building the Aggregate Expenditure Schedule
Aggregate expenditure is the key to the expenditure-income model. The aggregate expenditure schedule shows, either in the form of a table or a graph, how aggregate expenditures in the economy rise as real GDP or national income rises.
Thus, in thinking about the components of the aggregate expenditure line—consumption, investment, government spending, exports, and imports—the key question is how expenditures in each category adjust as national income rises.
### Consumption as a Function of National Income
How do consumption expenditures increase as national income rises? People can do two things with their income: consume it or save it—for the moment, let’s ignore the need to pay taxes with some of it. Each person who receives an additional dollar faces this choice. The marginal propensity to consume (MPC) is the share of the additional dollar of income a person decides to devote to consumption expenditures. The marginal propensity to save (MPS) is the share of the additional dollar a person decides to save. It must always hold true that

MPC + MPS = 1
For example, if the marginal propensity to consume out of the marginal amount of income earned is 0.9, then the marginal propensity to save is 0.1.
With this relationship in mind, consider the relationship among income, consumption, and savings shown in Figure 11.8. Note that we use Aggregate Expenditure on the vertical axis in this and the following figures, because all consumption expenditures are parts of aggregate expenditures.
An assumption commonly made in this model is that even if income were zero, people would have to consume something. In this example, consumption is $600 even if income were zero. Then, the MPC is 0.8 and the MPS is 0.2. Thus, when income increases by $1,000, consumption rises by $800 and savings rises by $200. At an income of $4,000, total consumption is the $600 that would be consumed even without any income, plus $4,000 multiplied by the marginal propensity to consume of 0.8, or $3,200, for a total of $3,800. The total amount of consumption and saving must always add up to the total amount of income. Exactly how a situation of zero income and negative savings works in practice is not important, because even low-income societies are not literally at zero income, so the point is hypothetical. This relationship between income and consumption, illustrated in Figure 11.8 and Table 11.2, is called the consumption function.

Figure 11.8 The Consumption Function In the expenditure-output model, how does consumption increase with the level of national income? Output on the horizontal axis is conceptually the same as national income because the value of all final output produced and sold must be income to someone, somewhere in the economy. At a national income level of zero, $600 is consumed. Then, each time income rises by $1,000, consumption rises by $800, because, in this example, the marginal propensity to consume is 0.8.
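The consumption function just described is simple enough to check numerically. The following sketch (the variable and function names are my own; the chapter itself contains no code) reproduces the values in Table 11.2 under the stated assumptions of $600 of autonomous consumption and an MPC of 0.8:

```python
AUTONOMOUS_CONSUMPTION = 600  # spending that occurs even at zero income
MPC = 0.8                     # marginal propensity to consume

def consumption(income):
    """Consumption at a given level of national income."""
    return AUTONOMOUS_CONSUMPTION + MPC * income

def savings(income):
    """Savings is whatever part of income is not consumed."""
    return income - consumption(income)

# Reproduce a few rows of Table 11.2:
for y in (0, 1000, 4000, 9000):
    print(y, consumption(y), savings(y))
```

Note that savings is negative below an income of $3,000, matching the table: at low incomes, autonomous consumption exceeds income.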
The pattern of consumption shown in Table 11.2 is plotted in Figure 11.8. To calculate consumption, multiply the income level by 0.8, for the marginal propensity to consume, and add $600, for the amount that would be consumed even if income were zero. Consumption plus savings must be equal to income.

| Income | Consumption | Savings |
|---|---|---|
| $0 | $600 | –$600 |
| $1,000 | $1,400 | –$400 |
| $2,000 | $2,200 | –$200 |
| $3,000 | $3,000 | $0 |
| $4,000 | $3,800 | $200 |
| $5,000 | $4,600 | $400 |
| $6,000 | $5,400 | $600 |
| $7,000 | $6,200 | $800 |
| $8,000 | $7,000 | $1,000 |
| $9,000 | $7,800 | $1,200 |

Table 11.2 The Consumption Function

However, a number of factors other than income can also cause the entire consumption function to shift. These factors are summarized in the earlier discussion of consumption and are listed in Table 11.2. When the consumption function moves, it can shift in two ways: the entire consumption function can move up or down in a parallel manner, or the slope of the consumption function can change so it becomes steeper or flatter. For example, if a tax cut leads consumers to spend more but does not affect their marginal propensity to consume, it causes an upward shift to a new consumption function that is parallel to the original one. However, a change in household preferences for saving that reduces the marginal propensity to save causes the slope of the consumption function to become steeper: that is, if the savings rate is lower, then every increase in income leads to a larger rise in consumption.

### Investment as a Function of National Income

Investment decisions are forward-looking, based on expected rates of return. Precisely because investment decisions depend primarily on perceptions about future economic conditions, they do not depend primarily on the level of GDP in the current year. Thus, on a Keynesian cross diagram, the investment function can be drawn as a horizontal line, at a fixed level of expenditure. Figure 11.9 shows an investment function where the level of investment is, for the sake of concreteness, set at the specific level of 500. Just as a consumption function shows the relationship between consumption levels and real GDP or national income, the investment function shows the relationship between investment levels and real GDP.
Figure 11.9 The Investment Function The investment function is drawn as a flat line because investment is based on interest rates and expectations about the future, and so it does not change with the level of current national income. In this example, investment expenditures are at a level of 500. However, changes in factors such as technological opportunities, expectations about near-term economic growth, and interest rates would all cause the investment function to shift upward or downward.

The appearance of the investment function as a horizontal line does not mean the level of investment never moves. It means only that, in the context of this two-dimensional diagram, the level of investment on the vertical aggregate expenditure axis does not vary according to the current level of real GDP on the horizontal axis. However, all the other factors that vary investment—new technological opportunities, expectations about near-term economic growth, interest rates, the price of key inputs, and tax incentives for investment—can cause the horizontal investment function to shift upward or downward.

### Government Spending and Taxes as a Function of National Income

In the Keynesian cross diagram, government spending appears as a horizontal line, as in Figure 11.10, where government spending is set at a level of $1,300. As in the case of investment spending, this horizontal line does not mean government spending is unchanging. It means only that government spending changes when Congress decides on a change in the budget, rather than shifting in a predictable way with the current size of the real GDP shown on the horizontal axis.
Figure 11.10 The Government Spending Function The level of government spending is determined by political factors, not by the level of real GDP in a given year. Thus, government spending is drawn as a horizontal line. In this example, government spending is at a level of $1,300. Congressional decisions to increase government spending cause this horizontal line to shift upward, whereas decisions to reduce spending would cause it to shift downward.

The situation of taxes is different because taxes often rise or fall with the volume of economic activity. For example, income taxes are based on the level of income earned and sales taxes on the amount of sales made; both income and sales tend to be higher when the economy is growing and lower when the economy is in a recession. For the purposes of constructing the basic Keynesian cross diagram, it is helpful to view taxes as a proportionate share of GDP. In the United States, for example, taking federal, state, and local taxes together, the government typically collects about 30 percent to 35 percent of income as taxes.

Table 11.3 revises the earlier table on the consumption function so that it takes taxes into account. The first column shows national income. The second column calculates taxes, which in this example are set at a rate of 30 percent, or 0.3. The third column shows after-tax income; that is, total income minus taxes. The fourth column then calculates consumption in the same manner as before: Multiply after-tax income by 0.8, representing the marginal propensity to consume, and then add $600, for the amount that would be consumed even if income were zero. When taxes are included, the marginal propensity to consume is reduced by the amount of the tax rate, so each additional dollar of income results in a smaller increase in consumption than before taxes. For this reason, the consumption function, with taxes included, is flatter than the consumption function without taxes, as Figure 11.11 shows.
Figure 11.11 The Consumption Function before and after Taxes The upper line repeats the consumption function from Figure 11.8. The lower line shows the consumption function if taxes must first be paid on income, and then consumption is based on after-tax income.
| Income | Taxes | After-Tax Income | Consumption | Savings |
|---|---|---|---|---|
| $0 | $0 | $0 | $600 | –$600 |
| $1,000 | $300 | $700 | $1,160 | –$460 |
| $2,000 | $600 | $1,400 | $1,720 | –$320 |
| $3,000 | $900 | $2,100 | $2,280 | –$180 |
| $4,000 | $1,200 | $2,800 | $2,840 | –$40 |
| $5,000 | $1,500 | $3,500 | $3,400 | $100 |
| $6,000 | $1,800 | $4,200 | $3,960 | $240 |
| $7,000 | $2,100 | $4,900 | $4,520 | $380 |
| $8,000 | $2,400 | $5,600 | $5,080 | $520 |
| $9,000 | $2,700 | $6,300 | $5,640 | $660 |
Table 11.3 The Consumption Function Before and After Taxes
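The effect of a proportional tax on the consumption function can be verified numerically. A minimal sketch, assuming the text's parameters (a 30 percent tax rate, an MPC of 0.8, and $600 of autonomous consumption; the function name is my own):

```python
TAX_RATE = 0.3
MPC = 0.8
AUTONOMOUS = 600

def consumption_after_tax(income):
    """Consumption out of after-tax income, as in Table 11.3."""
    after_tax = income * (1 - TAX_RATE)
    return AUTONOMOUS + MPC * after_tax

# Reproduce a few rows of Table 11.3:
for y in (0, 1000, 4000, 9000):
    print(y, round(consumption_after_tax(y)))
```

The effective slope with respect to pre-tax income is 0.8 × 0.7 = 0.56, which is why the after-tax consumption function in Figure 11.11 is flatter than the original.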
### Exports and Imports as a Function of National Income
The export function, which shows how exports change with the level of a country’s own real GDP, is drawn as a horizontal line, as in the example in Figure 11.12 (a), where exports are drawn at a level of $840. Again, as in the case of investment spending and government spending, drawing the export function as horizontal does not imply that exports never change. It just means they do not change because of what is on the horizontal axis—that is, a country’s own level of domestic production—and instead are shaped by the level of aggregate demand in other countries. More demand for exports from other countries causes the export function to shift upward; less demand for exports from other countries causes it to shift downward.

Figure 11.12 The Export and Import Functions (a) The export function is drawn as a horizontal line because exports are determined by the buying power of other countries and thus do not change with the size of the domestic economy. In this example, exports are set at $840. However, exports can shift upward or downward, depending on buying patterns in other countries. (b) The import function is drawn in negative territory because expenditures on imported products are subtracted from expenditures in the domestic economy. In this example, the marginal propensity to import is 0.1, so imports are calculated by multiplying the level of income by –0.1.
Imports are drawn in the Keynesian cross diagram as a downward-sloping line, with the downward slope determined by the marginal propensity to import (MPI) out of national income. In Figure 11.12 (b), the marginal propensity to import is 0.1. Thus, if real GDP is $5,000, imports are $500; if national income is $6,000, imports are $600, and so on. The import function is drawn as downward sloping and negative, because it represents a subtraction from the aggregate expenditures in the domestic economy. A change in the marginal propensity to import, perhaps as a result of changes in preferences, alters the slope of the import function.
### Work It Out
#### Using an Algebraic Approach to the Expenditure–Output Model
In the expenditure–output or Keynesian cross model, the equilibrium occurs where the aggregate expenditure line (AE line) crosses the 45° line. Given algebraic equations for two lines, the point where they cross can be calculated easily. Imagine an economy with the following characteristics.
Y = Real GDP or national income
T = Taxes = 0.3Y
C = Consumption = 140 + 0.9(Y – T)
I = Investment = 400
G = Government spending = 800
X = Exports = 600
M = Imports = 0.15Y
Step 1. Determine the aggregate expenditure function. In this case, it is

AE = C + I + G + X – M = 140 + 0.9(Y – T) + 400 + 800 + 600 – 0.15Y
Step 2. The equation for the 45° line is the set of points where GDP or national income on the horizontal axis is equal to aggregate expenditure on the vertical axis. Thus, the equation for the 45° line is: AE = Y.
Step 3. The next step is to solve these two equations for Y, or AE, since they will be equal to each other. Substitute Y for AE:

Y = 140 + 0.9(Y – T) + 400 + 800 + 600 – 0.15Y
Step 4. Insert the term 0.3Y for the tax rate T. This produces an equation with only one variable, Y:

Y = 140 + 0.9(Y – 0.3Y) + 400 + 800 + 600 – 0.15Y
Step 5. Work through the algebra and solve for Y:

Y = 140 + 0.63Y + 1,800 – 0.15Y
Y – 0.48Y = 1,940
0.52Y = 1,940
Y = 3,730.8 (approximately)
This algebraic framework is flexible and useful in predicting how economic events and policy actions will affect real GDP.
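Steps 1 through 5 amount to solving one linear equation. A small sketch of that calculation (the names are my own, not from the text), using the fact that Y = a + bY implies Y = a / (1 − b):

```python
def equilibrium(autonomous, slope):
    """Solve Y = autonomous + slope * Y for equilibrium output."""
    return autonomous / (1 - slope)

# Example economy: C = 140 + 0.9(Y - 0.3Y), I = 400, G = 800, X = 600, M = 0.15Y
a = 140 + 400 + 800 + 600           # autonomous spending = 1,940
b = 0.9 * (1 - 0.3) - 0.15          # income-sensitive spending per dollar = 0.48
print(round(equilibrium(a, b), 1))  # -> 3730.8
```

Re-running the same function with a lower marginal propensity to import or a higher investment level answers Steps 6 and 7 without redoing the algebra by hand.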
Step 6. Say, for example, that because of changes in the relative prices of domestic and foreign goods, the marginal propensity to import falls to 0.1. Calculate the equilibrium output when the marginal propensity to import is changed to 0.1.
Step 7. Because of a surge of business confidence, investment rises to 500. Calculate the equilibrium output.
For issues of policy, the key questions are how to adjust government spending levels or tax rates so that the equilibrium level of output is the full employment level. In this case, let the economic parameters be
Y = National income
T = Taxes = 0.3Y
C = Consumption = 200 + 0.9(Y – T).
I = Investment = 600
G = Government spending = 1,000
X = Exports = 600
M = Imports = 0.1(Y – T)
Step 8. Calculate the equilibrium for this economy (remember, Y = AE).
Step 9. Assume that the full employment level of output is 6,000. What level of government spending is necessary to reach that level? To answer this question, plug in 6,000 as equal to Y, but leave G as a variable, and solve for G. Thus

6,000 = 200 + 0.9(6,000 – 0.3 × 6,000) + 600 + G + 600 – 0.1(6,000 – 0.3 × 6,000)
6,000 = 200 + 3,780 + 600 + G + 600 – 420
6,000 = 4,760 + G
Step 10. Solve this problem arithmetically. The answer is G = 1,240. In other words, increasing government spending by 240, from its original level of 1,000, to 1,240, would raise output to the full employment level of GDP.
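Steps 8 through 10 can also be sketched in code (a hedged illustration with my own names, assuming the economy defined above: T = 0.3Y, C = 200 + 0.9(Y − T), I = 600, X = 600, M = 0.1(Y − T)):

```python
def required_g(full_employment_y):
    """Government spending needed for equilibrium at the given output level."""
    y = full_employment_y
    after_tax = y * (1 - 0.3)
    c = 200 + 0.9 * after_tax
    i, x = 600, 600
    m = 0.1 * after_tax
    # Y = C + I + G + X - M  =>  G = Y - C - I - X + M
    return y - c - i - x + m

print(round(required_g(6000)))  # -> 1240
```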
Indeed, the question of how much to increase government spending so that equilibrium output rises from 5,454 to 6,000 can be answered without working through the algebra. Just use the multiplier formula. Out of each extra dollar of national income, (0.9 – 0.1)(1 – 0.3) = 0.56 is spent on domestic output, so the multiplier equation in this case is

Multiplier = 1 / (1 – 0.56) = 1 / 0.44 = 2.27
Thus, to raise output by 546 requires an increase in government spending of 546/2.27 = 240, which is the same as the answer derived from the algebraic calculation.
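The multiplier arithmetic above can be checked with a short sketch (the function name is mine; the parameters restate the text's numbers, with imports taken out of after-tax income):

```python
def multiplier(mpc, mpi, tax_rate):
    """Spending multiplier when (mpc - mpi) of each after-tax dollar
    is spent on domestic output."""
    spend_rate = (mpc - mpi) * (1 - tax_rate)
    return 1 / (1 - spend_rate)

m = multiplier(mpc=0.9, mpi=0.1, tax_rate=0.3)
print(round(m, 2))     # -> 2.27
print(round(546 / m))  # -> 240  (increase in government spending needed)
```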
This algebraic framework is highly flexible. For example, taxes can be treated as a total set by political considerations, like government spending, rather than as dependent on national income. Imports might be based on before-tax income, not after-tax income. For certain purposes, it may be helpful to analyze the economy without exports and imports. A more complicated approach could divide consumption, investment, government spending, exports, and imports into smaller categories, or build in some variability in the rates of taxes, savings, and imports. A wise economist shapes the model to fit the specific issue under investigation.
### Building the Combined Aggregate Expenditure Function
All the components of aggregate demand—consumption, investment, government spending, and the trade balance—are now in place to build the Keynesian cross diagram. Figure 11.13 builds up an aggregate expenditure function, based on the numerical illustrations of C, I, G, X, and M that have been used throughout this text. The first three columns in Table 11.4 are lifted from the earlier Table 11.3, which showed how to bring taxes into the consumption function. The first column is real GDP or national income, which is what appears on the horizontal axis of the income-expenditure diagram. The second column calculates after-tax income, based on the assumption, in this case, that 30 percent of real GDP is collected in taxes. The third column is based on a marginal propensity to consume (MPC) of 0.8, so that as after-tax income rises by $700 from one row to the next, consumption rises by $560 (700 × 0.8) from one row to the next. Investment, government spending, and exports do not change with the level of current national income. In the previous discussion, investment was $500, government spending was $1,300, and exports were $840, for a total of $2,640. This total is shown in the fourth column. Imports are 0.1 of real GDP in this example, and the level of imports is calculated in the fifth column. The final column, aggregate expenditures, expresses C + I + G + X – M. This aggregate expenditure line is illustrated in Figure 11.13.
Figure 11.13 A Keynesian Cross Diagram Each combination of national income and aggregate expenditure—after-tax consumption, government spending, investment, exports, and imports—is graphed. Equilibrium occurs when aggregate expenditure is equal to national income. This occurs where the aggregate expenditure schedule crosses the 45° line, at a real GDP of $6,000. Potential GDP in this example is$7,000, so equilibrium is occurring at a level of output or real GDP below the potential GDP level.
| National Income | After-Tax Income | Consumption | Government Spending + Investment + Exports | Imports | Aggregate Expenditure |
|---|---|---|---|---|---|
| $3,000 | $2,100 | $2,280 | $2,640 | $300 | $4,620 |
| $4,000 | $2,800 | $2,840 | $2,640 | $400 | $5,080 |
| $5,000 | $3,500 | $3,400 | $2,640 | $500 | $5,540 |
| $6,000 | $4,200 | $3,960 | $2,640 | $600 | $6,000 |
| $7,000 | $4,900 | $4,520 | $2,640 | $700 | $6,460 |
| $8,000 | $5,600 | $5,080 | $2,640 | $800 | $6,920 |
| $9,000 | $6,300 | $5,640 | $2,640 | $900 | $7,380 |
Table 11.4 National Income-Aggregate Expenditure Equilibrium
The aggregate expenditure function is formed by stacking on top of each other the consumption function after taxes, the investment function, the government spending function, the export function, and the import function. The point at which the aggregate expenditure function intersects the vertical axis is determined by the levels of investment, government, and export expenditures, which do not vary with national income. The upward slope of the aggregate expenditure function is determined by the marginal propensity to save, the tax rate, and the marginal propensity to import. A higher marginal propensity to save, a higher tax rate, and a higher marginal propensity to import all make the slope of the aggregate expenditure function flatter, because out of any extra income, more is going to savings or taxes or imports and less is going to spending on domestic goods and services.
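The stacked aggregate expenditure function can be reproduced numerically. A sketch under the text's assumptions (30 percent taxes, MPC of 0.8, I + G + X = $2,640, imports of 0.1 of real GDP; the function name is my own):

```python
def aggregate_expenditure(y):
    """Total spending C + I + G + X - M at real GDP level y (Table 11.4)."""
    after_tax = y * (1 - 0.3)
    c = 600 + 0.8 * after_tax
    i_g_x = 500 + 1300 + 840   # investment + government + exports = 2,640
    m = 0.1 * y
    return c + i_g_x - m

# Reproduce Table 11.4; equilibrium is the row where AE equals income.
for y in range(3000, 10000, 1000):
    print(y, round(aggregate_expenditure(y)))
```

Scanning the printed rows shows aggregate expenditure equal to income only at $6,000, matching the intersection with the 45° line in Figure 11.13.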
Equilibrium occurs when national income is equal to aggregate expenditure, which is shown on the graph as the point where the aggregate expenditure schedule crosses the 45° line. In this example, equilibrium occurs at $6,000. Equilibrium can also be read off Table 11.4. It is the level of national income where aggregate expenditure is equal to national income.

### Equilibrium in the Keynesian Cross Model

With the aggregate expenditure line in place, the next step is to relate it to the two other elements of the Keynesian cross diagram. Thus, the first subsection interprets the intersection of the aggregate expenditure function and the 45° line; the next subsection relates this point of intersection to the potential GDP line.

### Where Equilibrium Occurs

The point where the aggregate expenditure line, which is constructed from C + I + G + X – M, crosses the 45° line is the equilibrium for the economy. It is the only point on the aggregate expenditure line where the total amount being spent on aggregate demand equals the total level of production. In Figure 11.13, this point of equilibrium (E0) happens at $6,000, which can also be read off Table 11.4.
The meaning of equilibrium remains the same; that is, equilibrium is a point of balance where no incentive exists to shift away from that outcome. To understand why the point of intersection between the aggregate expenditure function and the 45° line is a macroeconomic equilibrium, consider what happens if an economy finds itself to the right of the equilibrium point E—say, at point H in Figure 11.14—where output is greater than the equilibrium. At point H, the level of aggregate expenditure is below the 45° line, so that the level of aggregate expenditure in the economy is less than the level of output. As a result, at point H, output is piling up unsold—not a sustainable state of affairs.
Figure 11.14 Equilibrium in the Keynesian Cross Diagram If output is above the equilibrium level, at H, then the real output is greater than the aggregate expenditure in the economy. This pattern cannot hold, because it would mean goods are produced but are piling up unsold. If output is below the equilibrium level, at L, then aggregate expenditure is greater than output. This pattern cannot hold either, because it means spending exceeds the amount of goods being produced. Only point E can be an equilibrium, where output, or national income, and aggregate expenditure are equal. Equilibrium (E) must lie on the 45° line, which is the set of points where national income and aggregate expenditure are equal.
Conversely, consider the situation when the level of output is at point L, where real output is less than equilibrium. In this case, the level of aggregate demand in the economy is above the 45° line, indicating the level of aggregate expenditure in the economy is greater than the level of output. When the level of aggregate demand has emptied the store shelves, it cannot be sustained. Firms respond by increasing their level of production. Thus, equilibrium must be the point where the amount produced and the amount spent are in balance, at the intersection of the aggregate expenditure function and the 45° line.
### Work It Out
#### Finding Equilibrium
Table 11.5 gives some information on an economy. The Keynesian model assumes there is some level of consumption even without income. In this example, this amount is $236 – $216 = $20; $20 is consumed when national income equals zero. Assume that taxes are 0.2 of real GDP. Let the marginal propensity to save of after-tax income be 0.1. The level of investment is $70, the level of government spending is $80, and the level of exports is $50. Imports are 0.2 of after-tax income. Given these values, complete Table 11.5 and then answer these questions:

• What is the consumption function?
• What is the equilibrium?
• Why is a national income of $300 not at equilibrium?
• How do expenditures and output compare at this point?
| National Income | Taxes | After-Tax Income | Consumption | I + G + X | Imports | Aggregate Expenditures |
|---|---|---|---|---|---|---|
| $300 |  |  | $236 |  |  |  |
| $400 |  |  |  |  |  |  |
| $500 |  |  |  |  |  |  |
| $600 |  |  |  |  |  |  |
| $700 |  |  |  |  |  |  |
Table 11.5
Step 1. Calculate the amount of taxes for each level of national income (reminder: GDP equals national income), using the following as an example:

Tax = 0.2 × Y = 0.2 × $300 = $60
Step 2. Calculate after-tax income by subtracting the tax amount from national income for each level of national income, using the following as an example:

After-tax income = Y – T = $300 – $60 = $240
Step 3. Calculate consumption. The marginal propensity to save is given as 0.1. This means that the marginal propensity to consume is 0.9, since MPS + MPC = 1. Therefore, multiply 0.9 by the after-tax income amount, using the following as an example:

0.9 × $240 = $216
Step 4. Consider why the table shows a consumption of $236 in the first row. As mentioned earlier, the Keynesian model assumes there is some level of consumption even without income. That amount is $236 – $216 = $20.
Step 5. There is now enough information to write the consumption function. The consumption function is found by figuring out the level of consumption that happens when income is zero.
Remember that

C = Consumption when national income is zero + MPC × (after-tax income)
Let C represent the consumption function, Y represent national income, and T represent taxes:

C = $20 + 0.9(Y – T)
Step 6. Use the consumption function to find consumption at each level of national income.
Step 7. Add investment (I), government spending (G), and exports (X). Remember, these do not change as national income changes:

I + G + X = $70 + $80 + $50 = $200
Step 8. Find imports, which are 0.2 of after-tax income at each level of national income. For example:

M = 0.2 × $240 = $48
Step 9. Find aggregate expenditure by adding C + I + G + X – M for each level of national income. Your completed table should look like Table 11.6.
| National Income (Y) | Tax = 0.2 × Y (T) | After-Tax Income (Y – T) | Consumption C = $20 + 0.9(Y – T) | I + G + X | Minus Imports (M) | Aggregate Expenditures AE = C + I + G + X – M |
|---|---|---|---|---|---|---|
| $300 | $60 | $240 | $236 | $200 | $48 | $388 |
| $400 | $80 | $320 | $308 | $200 | $64 | $444 |
| $500 | $100 | $400 | $380 | $200 | $80 | $500 |
| $600 | $120 | $480 | $452 | $200 | $96 | $556 |
| $700 | $140 | $560 | $524 | $200 | $112 | $612 |
Table 11.6 Completed Table
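Table 11.6 can be reproduced with a few lines of code (a sketch using my own names, under the Work It Out assumptions: T = 0.2Y, C = $20 + 0.9(Y − T), I + G + X = $200, M = 0.2(Y − T)):

```python
def ae(y):
    """Aggregate expenditure C + I + G + X - M for the Work It Out economy."""
    after_tax = y * (1 - 0.2)
    c = 20 + 0.9 * after_tax
    m = 0.2 * after_tax
    return c + 200 - m

# Reproduce the last column of Table 11.6; equilibrium is where ae(y) == y.
for y in range(300, 800, 100):
    print(y, round(ae(y)))
```

Only the $500 row satisfies ae(y) = y, confirming the equilibrium found in Step 10.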
Step 10. Answer the question: What is equilibrium? Equilibrium occurs where AE = Y. Table 11.6 shows that equilibrium occurs where national income equals aggregate expenditure at $500.

Step 11. Find equilibrium mathematically, knowing that national income is equal to aggregate expenditure. Since T is 0.2 of national income, substitute T with 0.2Y so that

Y = $20 + 0.9(Y – 0.2Y) + $200 – 0.2(Y – 0.2Y)

Solve for Y:

Y = 20 + 0.72Y + 200 – 0.16Y
Y – 0.56Y = 220
0.44Y = 220
Y = $500

Step 12. Answer this question: Why is a national income of $300 not an equilibrium? At a national income of $300, aggregate expenditures are $388. The two values are not equal, thus they are not at equilibrium.
Step 13. Answer this question: How do expenditures and output compare at this point? Aggregate expenditures cannot exceed output (GDP) in the long run, because there would not be enough goods to be bought.
### Recessionary and Inflationary Gaps
In the Keynesian cross diagram, if the aggregate expenditure line intersects the 45° line at the level of potential GDP, then the economy is in sound shape. There is no recession, and unemployment is low. But, there is no guarantee equilibrium occurs at the potential GDP level of output. Equilibrium might be higher or lower.
For example, Figure 11.15 (a) illustrates a situation in which the aggregate expenditure line intersects the 45° line at point E0, which is a real GDP of $6,000, and which is below the potential GDP of $7,000. In this situation, the level of aggregate expenditure is too low for GDP to reach its full employment level, and unemployment occurs.
The distance between an output level such as E0, which is below potential GDP, and the level of potential GDP is called a recessionary gap. Because the equilibrium level of real GDP is so low, firms will not wish to hire the full employment number of workers, and unemployment will be high.
Figure 11.15 Addressing Recessionary and Inflationary Gaps (a) If equilibrium occurs at an output below potential GDP, then a recessionary gap exists. The policy solution to a recessionary gap is to shift the aggregate expenditure schedule up from AE0 to AE1 using policies such as tax cuts or government spending increases. Then, the new equilibrium, E1, occurs at potential GDP. (b) If equilibrium occurs at an output above potential GDP, then an inflationary gap exists. The policy solution to an inflationary gap is to shift the aggregate expenditure schedule down from AE0 to AE1 using policies such as tax increases or spending cuts. Then, the new equilibrium, E1, occurs at potential GDP.
What might cause a recessionary gap? Anything that shifts the aggregate expenditure line downward is a potential cause of recession, including a decline in consumption, a rise in savings, a fall in investment, a drop in government spending or a rise in taxes, or a fall in exports or a rise in imports. Moreover, an economy that is at equilibrium with a recessionary gap may just stay there and suffer high unemployment for a long time; remember, the meaning of equilibrium is that there is no particular adjustment of prices or quantities in the economy to chase the recession away.
The appropriate response to a recessionary gap is for the government to reduce taxes or increase spending so the aggregate expenditure function shifts upward from AE0 to AE1. When this shift occurs, the new equilibrium, E1, now occurs at potential GDP, as shown in Figure 11.15 (a).
Conversely, Figure 11.15 (b) shows a situation in which the aggregate expenditure schedule (AE0) intersects the 45° line above potential GDP. The gap between the level of real GDP at equilibrium E0 and potential GDP is called an inflationary gap. The inflationary gap also requires a bit of interpreting. After all, a naive reading of the Keynesian cross diagram might suggest that if the aggregate expenditure function is just pushed up high enough, real GDP can be as large as desired—even doubling or tripling the potential GDP level of the economy. This implication is clearly wrong. An economy faces some supply-side limits on how much it can produce at a given time with its existing quantities of workers, physical and human capital, technology, and market institutions.
The inflationary gap should be interpreted, not as a literal prediction of how large real GDP will be, but as a statement of how much extra aggregate expenditure is in the economy beyond what is needed to reach potential GDP. An inflationary gap suggests that because the economy cannot produce enough goods and services to absorb this level of aggregate expenditures, the spending instead causes an inflationary increase in the price level. In this way, even though changes in the price level do not appear explicitly in the Keynesian cross equation, the notion of inflation is implicit in the concept of the inflationary gap.
The appropriate Keynesian response to an inflationary gap is shown in Figure 11.15 (b). The original intersection of aggregate expenditure line AE0 and the 45° line occurs at $8,000, which is above the level of potential GDP at $7,000. If AE0 shifts downward to AE1, so that the new equilibrium is at E1, then the economy is at potential GDP without the pressures for inflationary price increases. The government can achieve a downward shift in aggregate expenditure by increasing taxes on consumers or firms, or by reducing government expenditures.
### The Multiplier Effect
The Keynesian policy prescription has one final twist. Assume that, for a certain economy, the intersection of the aggregate expenditure function and the 45° line is at a GDP of $700, whereas the level of potential GDP for this economy is $800. By how much does government spending need to be increased so that the economy reaches the full employment GDP? The obvious answer might seem to be $800 – $700 = $100; so raise government spending by $100. But that answer is incorrect. A change of, for example, $100 in government expenditures has an effect of more than $100 on the equilibrium level of real GDP. The reason is that a change in aggregate expenditures circles through the economy: Households buy from firms, firms pay workers and suppliers, workers and suppliers buy goods from other firms, those firms pay their workers and suppliers, and so on. In this way, the original change in aggregate expenditures is actually spent more than once. This is called the multiplier effect—an initial increase in spending cycles repeatedly through the economy and has a larger impact than the initial dollar amount spent.
# How Does the Multiplier Work?
To understand how the multiplier effect works, return to the example in which the current equilibrium in the Keynesian cross diagram is a real GDP of $700, or $100 short of the $800 needed to be at full employment, potential GDP. If the government spends $100 to close this gap, someone in the economy receives that spending and can treat it as income. Assume that those who receive this income pay 30 percent in taxes, save 10 percent of after-tax income, spend 10 percent of total income on imports, and then spend the rest on domestically produced goods and services.
As shown in the calculations in Figure 11.16 and Table 11.7, of the original $100 in government spending, $53 is left to spend on domestically produced goods and services. That $53 becomes income to someone, somewhere in the economy. Those who receive that income also pay 30 percent in taxes, save 10 percent of after-tax income, and spend 10 percent of total income on imports, as shown in Figure 11.16, so that an additional $28.09 (that is, 0.53 × $53) is spent in the third round. The people who receive that income then pay taxes, save, and buy imports, and the amount spent in the fourth round is $14.89 (that is, 0.53 × $28.09).

Figure 11.16 The Multiplier Effect An original increase of government spending of $100 causes a rise in aggregate expenditure of $100. But that $100 is income to others in the economy, and after they save, pay taxes, and buy imports, they spend $53 of that $100 in a second round. In turn, that $53 is income to others. Thus, the original government spending of $100 is multiplied by these cycles of spending, but the impact of each successive cycle gets smaller and smaller. Given the numbers in this example, the original government spending increase of $100 raises aggregate expenditure by $213; therefore, the multiplier in this example is $213/$100 = 2.13.
The original increase in aggregate expenditure from government spending is $100. This is income to people throughout the economy, who pay 30 percent in taxes, save 10 percent of after-tax income, and spend 10 percent of income on imports. The second-round increase is $70 – $7 – $10 = $53. This is $53 of income to people throughout the economy, who again pay 30 percent in taxes, save 10 percent of after-tax income, and spend 10 percent of income on imports. The third-round increase is $37.10 – $3.71 – $5.30 = $28.09. This is $28.09 of income to people throughout the economy, who again pay taxes, save, and import at the same rates. The fourth-round increase is $19.663 – $1.9663 – $2.809 = $14.89.
Table 11.7 Calculating the Multiplier Effect
Thus, over the first four rounds of aggregate expenditures, the impact of the original increase in government spending of $100 creates a rise in aggregate expenditures of $100 + $53 + $28.09 + $14.89 = $195.98. Figure 11.16 shows these total aggregate expenditures after these first four rounds, and then the figure shows the total aggregate expenditures after 30 rounds. The additional boost to aggregate expenditures shrinks during each round of consumption. After about 10 rounds, the additional increments are very small indeed. After 30 rounds, the additional increments in each round are so small they have no practical consequence. After 30 rounds, the cumulative value of the initial boost in aggregate expenditure is approximately $213. Thus, the government spending increase of $100 eventually, after many cycles, produced an increase of $213 in aggregate expenditure and real GDP. In this example, the multiplier is $213/$100 = 2.13.

# Calculating the Multiplier

Fortunately for everyone who is not carrying around a computer with a spreadsheet program to project the impact of an original increase in expenditures over 20, 50, or 100 rounds of spending, there is a formula for calculating the multiplier. The data from Figure 11.16 and Table 11.7 are

• tax rate = 30 percent of income,
• marginal propensity to save (MPS) = 10 percent of after-tax income, and
• marginal propensity to import (MPI) = 10 percent of total income.

Of each dollar of income, 30 cents goes to taxes, leaving 70 cents of after-tax income; 10 percent of that (7 cents) is saved, and 10 cents goes to imports, so 53 cents is re-spent on domestically produced goods. Therefore, the spending multiplier is

Spending Multiplier = 1 / (1 – 0.53) = 1 / 0.47 = 2.13

A change in spending of $100 multiplied by the spending multiplier of 2.13 is equal to a change in GDP of $213. Not coincidentally, this result is exactly what was calculated in Figure 11.16 after many rounds of expenditures cycling through the economy. The size of the multiplier is determined by the proportion of the marginal dollar of income that goes into taxes, saving, and imports.
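The round-by-round arithmetic above is just a geometric series, and a short script can verify it. This is an illustrative sketch, not part of the text; the variable names and the helper `total_impact` are mine, while the rates come from the example (30 percent taxes, 10 percent of after-tax income saved, 10 percent of income on imports):

```python
# Leakage rates from the worked example.
TAX, MPS, MPI = 0.30, 0.10, 0.10
# Fraction of each dollar re-spent domestically per round: 0.53.
RESPENT = (1 - TAX) * (1 - MPS) - MPI

def total_impact(initial, rounds):
    """Sum the spending added over the given number of rounds."""
    total, spending = 0.0, float(initial)
    for _ in range(rounds):
        total += spending
        spending *= RESPENT  # each round re-spends 53 cents per dollar
    return total

print(round(total_impact(100, 4), 2))   # 195.98 after four rounds
print(round(total_impact(100, 30), 2))  # 212.77, i.e. about $213
print(round(1 / (1 - RESPENT), 2))      # closed-form multiplier: 2.13
```

The loop and the closed-form expression agree, which is exactly the geometric-series identity behind the multiplier formula.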
These three factors are known as leakages, because they determine how much demand leaks out in each round of the multiplier effect. If the leakages are relatively small, then each successive round of the multiplier effect will have larger amounts of demand, and the multiplier will be high. Conversely, if the leakages are relatively large, then any initial change in demand will diminish more quickly in the second, third, and later rounds, and the multiplier will be small. Changes in the size of the leakages—a change in the marginal propensity to save, the tax rate, or the marginal propensity to import—change the size of the multiplier.

# Calculating Keynesian Policy Interventions

Returning to the original question: How much should government spending be increased to produce a total increase in real GDP of $100? If the goal is to increase aggregate demand by $100, and the multiplier is 2.13, then the increase in government spending to achieve that goal would be $100/2.13 = $47. Government spending of approximately $47, when combined with a multiplier of 2.13 (which is, remember, based on the specific assumptions about tax, saving, and import rates), produces an overall increase in real GDP of $100, restoring the economy to potential GDP of $800, as Figure 11.17 shows.
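As a quick check of that last division (a one-off sketch; the numbers are the example's, the variable names mine):

```python
multiplier = 1 / (1 - 0.53)        # 2.13, from the example above
output_gap = 100                   # desired rise in real GDP, in dollars
spending_needed = output_gap / multiplier
print(round(spending_needed, 2))   # 47.0: roughly $47 of new spending
```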
Figure 11.17 The Multiplier Effect in an Expenditure-Output Model The power of the multiplier effect is that an increase in expenditure produces a larger increase in equilibrium output. The increase in expenditure is the vertical increase from AE0 to AE1. However, the increase in equilibrium output, shown on the horizontal axis, is clearly larger.
The multiplier effect is also visible on the Keynesian cross diagram. Figure 11.17 shows the example we have been discussing: a recessionary gap with an equilibrium of $700, potential GDP of $800, and the slope of the aggregate expenditure function (AE0) determined by the assumptions that taxes are 30 percent of income, savings are 0.1 of after-tax income, and imports are 0.1 of before-tax income. At AE1, the aggregate expenditure function is moved upward to reach potential GDP.
Now, compare the vertical shift upward in the aggregate expenditure function, which is $47, with the horizontal shift outward in real GDP, which is $100, as these numbers were calculated earlier. The rise in real GDP is more than double the rise in the aggregate expenditure function. Similarly, if you look back at Figure 11.15, you will see that the vertical movements in the aggregate expenditure functions are smaller than the change in equilibrium output produced on the horizontal axis. Again, this is the multiplier effect at work. In this way, the power of the multiplier is apparent in the income-expenditure graph, as well as in the mathematical calculation.
The multiplier does not just affect government spending; it also applies to any change in the economy. Say that business confidence declines and investment falls off, or that the economy of a leading trading partner slows down so that export sales decline. These changes reduce aggregate expenditures and then have an even larger effect on real GDP because of the multiplier effect. Read the following Clear It Up feature to learn how the multiplier effect can be applied to analyze the economic impact of professional sports.
### Clear It Up
#### How can the multiplier be used to analyze the economic impact of professional sports?
Attracting professional sports teams and building sports stadiums to create jobs and stimulate business growth is an economic development strategy adopted by many communities throughout the United States. In his recent article, “Public Financing of Private Sports Stadiums,” James Joyner (2012) of Outside the Beltway looked at public financing for National Football League teams. Joyner’s findings confirm the earlier work of John Siegfried of Vanderbilt University and Andrew Zimbalist of Smith College.
Siegfried and Zimbalist (2000) used the multiplier to analyze this issue. They considered the amount of taxes paid and dollars spent locally to determine whether there was a positive multiplier effect. Because most professional athletes and owners of sports teams are rich enough to owe a lot of taxes, let’s say that 40 percent of any marginal income they earn is paid in taxes. Because athletes are often high earners with short careers, let’s assume they save one third of their after-tax income.
However, many professional athletes do not live year-round in the city in which they play, so let’s say that one half of the money they do spend is spent outside the local area. One can think of spending outside a local economy, in this example, as the equivalent of imported goods for the national economy.
Now, consider the impact of money spent at local entertainment venues other than professional sports. Although the owners of these other businesses may be comfortably middle-income earners, few of them are in the economic stratosphere of professional athletes. Because their income is lower, so are their taxes. Let's say they pay only 35 percent of their marginal income in taxes. They do not have the same ability, or need, to save as much as professional athletes, so let’s assume their marginal propensity to consume is just 0.8. Finally, because more of them live locally, they will spend a higher proportion of their income on local goods—say, 65 percent.
If these general assumptions hold true, then money spent on professional sports has less local economic impact than money spent on other forms of entertainment. For professional athletes, out of a dollar earned, 40¢ goes to taxes, leaving 60¢. Of that 60¢, one third is saved, leaving 40¢, and half is spent outside the area, leaving 20¢. Only 20¢ of each dollar is cycled into the local economy in the first round. For locally owned entertainment, out of a dollar earned, 35¢ goes to taxes, leaving 65¢. Of the rest, 20 percent is saved, leaving 52¢, and of that amount, 65 percent is spent in the local area, so that 33.8¢ of each dollar of income is recycled into the local economy.
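Those first-round shares are straightforward to verify. A small sketch (my own; the function and rates mirror the passage, with `outside` equal to one minus the share spent locally):

```python
def local_share(tax, save, outside):
    """Fraction of one dollar of income re-spent locally in round one."""
    after_tax = 1 - tax                  # what is left after taxes
    after_saving = after_tax * (1 - save)
    return after_saving * (1 - outside)  # what stays in the local economy

athlete = local_share(tax=0.40, save=1/3, outside=0.50)
local_owner = local_share(tax=0.35, save=0.20, outside=0.35)
print(round(athlete, 3))      # 0.2   -> 20 cents of each dollar
print(round(local_owner, 3))  # 0.338 -> 33.8 cents of each dollar
```

The gap between 20¢ and 33.8¢ per dollar is what drives the smaller local multiplier for professional sports in Siegfried and Zimbalist's argument.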
Siegfried and Zimbalist make the plausible argument that, within their household budgets, people have a fixed amount to spend on entertainment. If this assumption holds true, then money spent attending professional sports events is money not spent on other entertainment options in a given metropolitan area. Because the multiplier is lower for professional sports than for other local entertainment options, the arrival of professional sports to a city would reallocate entertainment spending in a way that causes the local economy to shrink, rather than to grow. Thus, their findings seem to confirm what Joyner reports and what newspapers across the country are reporting. A quick Internet search for “economic impact of sports” yields numerous reports questioning this economic development strategy.
# Multiplier Tradeoffs: Stability Versus the Power of Macroeconomic Policy
Is an economy healthier with a high multiplier or a low one? With a high multiplier, any change in aggregate demand tends to be magnified substantially so the economy becomes more unstable. With a low multiplier, in contrast, changes in aggregate demand are not multiplied much, so the economy tends to be more stable.
However, with a low multiplier, government policy changes in taxes or spending tend to have less impact on the equilibrium level of real output. With a higher multiplier, government policies to raise or reduce aggregate expenditures have a larger effect. Thus, a low multiplier means a more stable economy, but also weaker government macroeconomic policy, whereas a high multiplier means a more volatile economy, but also an economy in which government macroeconomic policy is more powerful.
https://koreauniv.pure.elsevier.com/en/publications/remodeling-pearsons-correlation-for-functional-brain-network-esti

# Remodeling Pearson's Correlation for Functional Brain Network Estimation and Autism Spectrum Disorder Identification
Weikai Li, Zhengxia Wang, Limei Zhang, Lishan Qiao, Dinggang Shen
Research output: Contribution to journal › Article › peer-review
23 Citations (Scopus)
## Abstract
Functional brain network (FBN) analysis has become an increasingly important way to model the statistical dependence among neural time courses of the brain, and provides effective imaging biomarkers for diagnosis of some neurological or psychological disorders. Currently, Pearson's Correlation (PC) is the simplest and most widely-used method for constructing FBNs. Despite its advantages in statistical meaning and computational performance, PC tends to result in a FBN with dense connections. Therefore, in practice, the PC-based FBN needs to be sparsified by removing weak (potentially noisy) connections. However, such a scheme depends on a hard threshold without enough flexibility. Different from this traditional strategy, in this paper, we propose a new approach for estimating FBNs by remodeling PC as an optimization problem, which provides a way to incorporate biological/physical priors into the FBNs. In particular, we introduce an L1-norm regularizer into the optimization model for obtaining a sparse solution. Compared with the hard-threshold scheme, the proposed framework gives an elegant mathematical formulation for sparsifying PC-based networks. More importantly, it provides a platform to encode other biological/physical priors into the PC-based FBNs. To further illustrate the flexibility of the proposed method, we extend the model to a weighted counterpart for learning both sparse and scale-free networks, and then conduct experiments to identify autism spectrum disorders (ASD) from normal controls (NC) based on the constructed FBNs. Consequently, we achieved an 81.52% classification accuracy, which outperforms the baseline and state-of-the-art methods.
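For intuition only (this is not the paper's code): an L1-penalized least-squares fit to a fixed correlation value has a closed-form solution by element-wise soft-thresholding, which is what makes such a formulation an elegant alternative to hard-thresholding. A minimal sketch with made-up numbers:

```python
def soft_threshold(c, lam):
    """Minimizer of 0.5 * (w - c)**2 + lam * abs(w)."""
    if c > lam:
        return c - lam
    if c < -lam:
        return c + lam
    return 0.0

# A toy row of Pearson correlations: weak links are zeroed out,
# strong links are shrunk toward zero, giving a sparse network.
row = [0.85, 0.10, -0.40, 0.05]
sparse_row = [round(soft_threshold(c, lam=0.2), 2) for c in row]
print(sparse_row)  # [0.65, 0.0, -0.2, 0.0]
```

Unlike a hard threshold, soft-thresholding shrinks the surviving weights continuously, so the penalty strength `lam` tunes sparsity smoothly.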
Original language: English
Article number: 55
Journal: Frontiers in Neuroinformatics
Volume: 11
DOI: https://doi.org/10.3389/fninf.2017.00055
Publication status: Published - 2017 Aug 31
## Keywords
• Autism spectrum disorder
• Functional brain network
• Functional magnetic resonance imaging
• Pearson’s correlation
• Scale-free
• Sparse representation
## ASJC Scopus subject areas
• Neuroscience (miscellaneous)
• Biomedical Engineering
• Computer Science Applications
http://aheader.org/author/aheader/

Records when constructing the scheduler
1. How is the ResNet code in tensorflow/models made distributed?
High-level API: Estimator
distributed_strategy in utils/misc
2. How to make the code in https://github.com/geifmany/cifar-vgg distributed?
Refer to the TensorFlow tutorial:
Set a distribution strategy and put model construction and model compilation inside its scope.
However,
VGG uses data augmentation, which conflicts with distribution!
In the Keras tutorial, if we use the fit_generator method, we hit this error:
`fit_generator` is not supported for models compiled with tf.distribute.strategy.
Our TensorFlow version is 1.14.
If we use the 'manual' example in the official tutorial, the training becomes weird:
Using a single GPU, this is the stdout:
The above is normal (though different from using fit_generator). Below is the distributed version using the mirror strategy; it is abnormal:
The distributed version gets stuck in the first epoch and the loss stays high for a long time.
Solution:
This issue suggests using tf.data.Dataset.from_generator to deal with the generator.
Categories: Uncategorized
Scheduling1
• Outline
• Scheduling
• Components:
Decides Server order
manage queue
• Why do we need one?
• What can scheduling disciplines do?
• Requirements of a scheduling discipline
• Ease of implementation
• Fairness
Fairness is global; scheduling (congestion avoidance) is local.
• Notion of Fairness
• Fundamental choices
Work-conserving and non-work-conserving
Degree of aggregation
• Scheduling disciplines
FIFO & other disciplines (SPT, SRPT), and the performance among them
(SRPT serves by remaining processing time: if the packet being served still needs 5 min and a packet arrives that needs only 1 min, the server switches to the new packet)
• The Conservation Law
scheduling is independent of the packet service time
$\sum_i \rho_i q_i = \text{constant}$
where $\rho_i$ is the mean utilization of connection i and $q_i$ the mean waiting time of connection i
The average delay with FIFO is a tight lower bound for work-conserving and service-time-independent scheduling disciplines
• Fairness
Jain’s index uses the equal share as the objective:
$f=\frac{\left(\sum_{i=1}^{n}x_i\right)^2}{n\sum_{i=1}^{n}x_i^2}$
• Max-Min Fairness
• General Process Sharing (GPS)
Conceptually, GPS serves packets as if they were in separate logical queues, visiting each nonempty queue in turn.
Generalized processor sharing assumes that traffic is fluid (infinitesimal packet sizes), and can be arbitrarily split.
How to emulate GPS as fairly as possible while remaining efficient?
• (Weighted) round robin
Different weights, fixed packet size
Different weights, variable size packets: normalize weights by mean packet size
Problems:
1. With variable size packets and different weights, we need to know the mean packet size in advance
2. Can be unfair for long periods of time
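The max-min fairness and Jain's-index ideas in the outline above can be made concrete with a small sketch (my own illustration: progressive filling for max-min, plus the index formula from above):

```python
def max_min_allocation(demands, capacity):
    """Progressive filling: serve flows in increasing order of demand,
    giving each at most an equal share of whatever capacity remains."""
    alloc = [0.0] * len(demands)
    remaining = capacity
    pending = sorted(range(len(demands)), key=lambda i: demands[i])
    while pending:
        i = pending.pop(0)
        share = remaining / (len(pending) + 1)
        alloc[i] = min(demands[i], share)
        remaining -= alloc[i]
    return alloc

def jain_index(x):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2); 1.0 = equal."""
    return sum(x) ** 2 / (len(x) * sum(v * v for v in x))

alloc = max_min_allocation([2, 2.6, 4, 5], capacity=10)
print([round(a, 2) for a in alloc])   # [2, 2.6, 2.7, 2.7]
print(round(jain_index(alloc), 3))    # about 0.987
```

Small demands are fully satisfied and the leftover capacity is split equally among the large ones, which is exactly the max-min property.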
Categories: Uncategorized
Registration Notes
This note is for CS5240
Last time we discussed Rigid & Nonrigid and their methods:
Rigid: similarity transformation, ICP
Nonrigid: affine transformation, nonrigid ICP
The methods above are approximations. Now we discuss interpolation.
Thin Plate Spline
How to get TPS?
Minimizing bending energy! TPS maps $p_i$ to $q_i$ exactly.
Consider the jth component $v_{ij}$ of $q_i$; TPS maps $p_i$ to $v_{ij}$ via $f(p_i)=v_{ij}$ while minimizing the bending energy, denoted $E_d(f)$.
The bending energy functional takes two parameters: d, the dimension of the points, and m, the order of the derivatives it penalizes.
Finally, the function f that minimizes the bending energy takes the form
$f(x) = a_0 + a^T x + \sum_{i=1}^{n} w_i U(\|x-p_i\|)$
Here $a_0$ and a are the affine parameters, the $w_i$ are weights, and U(r) is an increasing function of the distance r.
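To make the formula concrete: in the 2-D case the kernel is commonly taken as U(r) = r² log r. Below is a small sketch of evaluating such an f at a point, assuming the affine parameters and weights have already been solved for (the coefficients here are made up for illustration):

```python
import math

def tps_kernel(r):
    """2-D thin-plate kernel U(r) = r^2 * log(r), with U(0) = 0."""
    return 0.0 if r == 0 else r * r * math.log(r)

def tps_eval(x, a, ws, ps):
    """f(x) = a0 + a1*x1 + a2*x2 + sum_i w_i * U(||x - p_i||)."""
    affine = a[0] + a[1] * x[0] + a[2] * x[1]
    bending = sum(w * tps_kernel(math.dist(x, p)) for w, p in zip(ws, ps))
    return affine + bending

# With all weights zero, f reduces to its affine part.
print(tps_eval((1.0, 2.0), a=(0.5, 1.0, 0.0), ws=[0.0], ps=[(0.0, 0.0)]))  # 1.5
```

In practice the $a$ and $w_i$ come from solving the linear interpolation conditions $f(p_i) = v_{ij}$; this sketch only shows the evaluation step.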
Categories: Uncategorized
summary for the graph processing bottlenecks paper
1. INTRODUCTION
graph model: Bulk Synchronous Parallel model, vertex-centric
(superstep: 1. Concurrent computation, 2. Communication, 3. Barrier synchronisation)
GPU use SIMT(Single Instruction Multiple Threads) parallel execution model
12 graph applications & non-graph applications
CUDA
platform: cycle-accurate simulator & NVIDIA GPU
tools: performance counters and software simulators
2. BACKGROUND
CPU-GPU heterogeneous structure
GPU: massive parallel processing
CPU: organize and invoke application kernel function
CPU & GPU connect with PCI-E
3. METHODOLOGY
graph & non-graph algorithms
CUDA
4. RESULT AND ANALYSIS
A. Kernel execution pattern
a. The average number of kernel invocations is much higher (nearly an order of magnitude) in graph applications than in non-graph applications.
b. The amount of computation done per each kernel invocations is significantly smaller in graph applications than non-graph applications.
c. Short messages require long latencies over PCI and graph applications interact with CPU more frequently.
d. The total time spent on PCI transfers is higher in graph applications
e. Graph applications only transfer smaller amount of data in each PCI transfer.
B. Performance bottlenecks
a. Long memory latency is the biggest bottleneck, causing over 70% of all pipeline stalls (bubbles) in graph applications.
b. Graph applications suffer from high cache miss rates.
C. SRAM resource sensitivity
a. register file: most effectively leveraged
b. shared memory:
If there is not enough reuse of data, then moving data from global memory to shared memory actually costs more. So shared memory is used less.
c. constant memory: developers are less inclined to use it
d. L1&L2 cache: L1 cache is entirely ineffective for graph processing
reason: In graph applications, memory transfer between CPU and GPU; In non-graph applications, shared memory is actively used
D. SIMT lane utilization
The number of iterations executed by each SIMT lane varies as the degree of each vertex varies. Thus the SIMT lane utilization varies significantly in graph applications.
E. Execution frequency of instruction types
The execution time differences between graph and non-graph applications are not influenced by the instruction mix.
F. Coarse and fine-grain load balancing
(i) number of CTAs assigned to each SM
SM level imbalance depends on input size and program characteristics
assume m SMs, maximum n CTAs for each SM
CTAs (default round-robin)
>m*n: two reasons for balancing
(1) higher likelihood of assigning a similar number of CTAs per SM
(2) large inputs lead to more CTAs, and hence the likelihood of balancing CTA assignments per SM also increases
=m*n: perfect balance
<m*n: uneven
(ii) execution time difference across CTAs
opposing force to achieving balance
Large input size increases the execution time variation
applications that exhibit more warp divergence also have high execution time variance at the CTA level.
(iii) execution time variance across warps within a CTA
(σ/μ, where σ is the standard deviation and μ is the average execution time)
Execution time variation for warps within CTAs is not high.
G. Scheduler Sensitivity
three scheduler strategies: GTO, 2LV, LRR
Due to poor memory performance and divergence issues, graph applications have significantly lower IPC than non-graph applications.
5. DISCUSSION
A. Performance bottleneck
PCI calls and Long latency memory operation, solved by:
a. unified system memory
b. actively leverage the underutilized SRAM structures such as cache and shared memory
(data prefetching)
Input large enough, well-balanced.
determined by the longest warp execution, solved by:
programmer’s effort
6. RELATED WORK
others investigated performance, similar to the paper’s work
7. CONCLUSION
how GPU interact with microarchitectural features
set non-graph applications as comparison
Categories: Uncategorized
wineqq install instructions
Extract the package following the instructions on that page.
Then the important steps:
• Run wine-QQ once, wine will auto install mono or something else
• copy simsun.ttc to ~/.wine/drive_c/windows/Fonts
• edit ~/.wine/system.reg
change
"MS Shell Dlg"="Tahoma"
"MS Shell Dlg 2"="Tahoma"
to
"MS Shell Dlg"="SimSun"
"MS Shell Dlg 2"="SimSun"
• create zh.reg, insert these:
REGEDIT4
[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\FontSubstitutes]
"Arial"="simsun"
"Arial CE,238"="simsun"
"Arial CYR,204"="simsun"
"Arial Greek,161"="simsun"
"Arial TUR,162"="simsun"
"Courier New"="simsun"
"Courier New CE,238"="simsun"
"Courier New CYR,204"="simsun"
"Courier New Greek,161"="simsun"
"Courier New TUR,162"="simsun"
"FixedSys"="simsun"
"Helv"="simsun"
"Helvetica"="simsun"
"MS Sans Serif"="simsun"
"MS Shell Dlg"="simsun"
"MS Shell Dlg 2"="simsun"
"System"="simsun"
"Tahoma"="simsun"
"Times"="simsun"
"Times New Roman CE,238"="simsun"
"Times New Roman CYR,204"="simsun"
"Times New Roman Greek,161"="simsun"
"Times New Roman TUR,162"="simsun"
"Tms Rmn"="simsun"
• run command: regedit zh.reg
• run wine-QQ again
Done!
Categories: Uncategorized
copy problem in Python
Look at the code below. In the first snippet, m contains three references to the same list object, so changing v changes every row of m; deepcopy breaks that aliasing:
>>> v = [0.5, 0.75, 1.0, 1.5, 2.0]
>>> m = [v, v, v]
>>> v[0] = 'Python'
>>> m
[['Python', 0.75, 1.0, 1.5, 2.0], ['Python', 0.75, 1.0, 1.5, 2.0], ['Python', 0.75, 1.0, 1.5, 2.0]]
>>> from copy import deepcopy
>>> v = [0.5, 0.75, 1.0, 1.5, 2.0]
>>> m = 3 * [deepcopy(v), ]
>>> v[0] = 'Python'
>>> m
[[0.5, 0.75, 1.0, 1.5, 2.0], [0.5, 0.75, 1.0, 1.5, 2.0], [0.5, 0.75, 1.0, 1.5, 2.0]]
Categories: Uncategorized
Random Forest
In order to learn about SVMs (support vector machines), we first have to learn what a Random Forest is.
1. What is a decision tree
A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm.(from wiki)
Two Types of decision tree
1.Categorical Variable Decision Tree
2.Continuous Variable Decision Tree
Example: Let's say we have a problem of predicting whether a customer will pay his renewal premium with an insurance company (yes/no). Here we know that a customer's income is a significant variable, but the insurance company does not have income details for all customers. Now, since we know this is an important variable, we can build a decision tree to predict customer income based on occupation, product, and various other variables. In this case, we are predicting values of a continuous variable.
Important Terminology related to Decision Trees
Root Node, Splitting, Decision Node, Leaf/Terminal Node:
Pruning: When we remove sub-nodes of a decision node, this process is called pruning. You can say opposite process of splitting.
Branch / Sub-Tree, Parent and Child Node
1. Overfitting: Overfitting is one of the most practical difficulties for decision tree models. This problem gets solved by setting constraints on model parameters and by pruning (discussed in detail below).
2. Not fit for continuous variables: While working with continuous numerical variables, a decision tree loses information when it categorizes variables into different categories.
2. Regression Trees vs Classification Trees
Categories: Programming
Zero to C-Mips Compiler
Following the CS143 course assignments, I am building a compiler on my own.
There are five steps:
1. Lexical & Syntax Analysis
2. Semantic Analysis & Type Checking
3. Intermediate Code
4. Translated MIPS Code
5. Optimization
Categories: Programming
Computer Organization: the Processor (1)

It has been a while since I last looked at computer organization, so first a few concepts to review:

Register file: an array of registers in the CPU, usually implemented with fast static RAM (SRAM). Such a RAM has dedicated read and write ports, so different registers can be accessed concurrently through multiple ports. The register file is part of the instruction set architecture and is visible to programs, unlike the transparent CPU cache.

Multiplexer (data selector): a device that selects one of several analog or digital input signals and forwards it to a single output. A multiplexer with 2^n inputs has n select lines, which choose the input to route to the output. Multiplexers are mainly used to increase the amount of data that can be sent over a network within a given time and bandwidth.

A multiplexer lets several signals share one device or resource (for example, one analog-to-digital converter or one transmission line) instead of giving every input signal its own device.

And of course the datapath. In English: A datapath is a collection of functional units, such as arithmetic logic units or multipliers, that perform data processing operations, registers, and buses. Along with the control unit it composes the central processing unit (CPU).

Note: the definitions above are taken from Wikipedia (Baidu Baike's, frankly, are rubbish).

Sections 4.1 to 4.3 cover the introduction, logic design conventions, and building a datapath. Let's take them one by one:

4.1 Introduction

The key topic is A Basic MIPS Implementation, which covers three classes of instructions: memory-access instructions, arithmetic-logical instructions, and branch instructions.

Let's review how they are implemented:

The figure above is an abstract view of an implementation of the MIPS subset.

Below I try to stay close to the original text.

How is each instruction executed? First, the PC (program counter) register holds the address of the instruction being executed. The value of the PC is sent to the instruction memory to fetch the instruction, and the instruction then reads one or two registers. After the register read, all instructions except jumps use the ALU (arithmetic logic unit), though for different purposes: memory-access instructions use the ALU to compute an address, arithmetic-logical instructions use it to perform the operation, and branch instructions use it for comparison. One more point worth noting: unless a branch instruction changes the next instruction address, the address of the next instruction defaults to the current address + 4.

Of course, the design in the figure is not complete. Some units have more than one data source. For example, the PC is fed by two adders, and we cannot simply wire the outputs of both adders together; we must add a multiplexer (data selector), which looks like this:

Its two inputs connect to the two adders, and there are quite a few other places that need such a data selector.

Adding the control unit to the design:

Definition of the control unit: It tells the computer's memory, arithmetic/logic unit and input and output devices how to respond to a program's instructions.

Here, the control unit receives the instruction and controls the registers, the data memory, and the data selectors: for example, register reads and writes, data reads and writes, and the selection of multiplexer inputs.

It is worth emphasizing that this design is not efficient, because the clock cycle must be long enough to accommodate the slowest instruction; a pipelined control design will be introduced later.

4.2 Logic Design Conventions

First, the key concepts:

Combinational element

State element

The difference is that a combinational element has no internal storage, while a state element does. In addition, a state element has at least two inputs and one output; one of the inputs is the clock signal, and the output provides the value written into the element on a previous clock edge. What does that mean? I recently took digital electronics, which covers this topic: the computer clock cycle.
Categories: Programming
https://www.arxiv-vanity.com/papers/1109.2795/
# Chiral states in bilayer graphene: magnetic field dependence and gap opening
M. Zarenia, J. M. Pereira Jr., G. A. Farias, and F. M. Peeters
Department of Physics, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerpen, Belgium.
Departamento de Física, Universidade Federal do Ceará, Fortaleza, Ceará, 60455-760, Brazil.
###### Abstract
At the interface of electrostatic potential kink profiles one dimensional chiral states are found in bilayer graphene (BLG). Such structures can be created by applying an asymmetric potential to the upper and the lower layer of BLG. We found that: i) due to the strong confinement by the single kink profile the uni-directional states are only weakly affected by a magnetic field, ii) increasing the smoothness of the kink potential results in additional bound states which are topologically different from those chiral states, and iii) in the presence of a kink-antikink potential the overlap between the oppositely moving chiral states results in the appearance of crossing and anti-crossing points in the energy spectrum. This leads to the opening of tunable minigaps in the spectrum of the uni-directional topological states.
###### pacs:
71.10.Pm, 73.21.-b, 81.05.Uw
## I Introduction
Carbon-based electronic structures have been the focus of intense research since the discovery of fullerenes and carbon nanotubes Millie . More recently, the production of atomic layers of hexagonal carbon (called graphene) has renewed that interest, with the observation of striking mechanical and electronic properties, as well as ultrarelativistic-like phenomena in this condensed matter system Review . In that context, bilayer graphene (BLG), which is a system with two Van der Waals coupled sheets of graphene, has been shown to have features that make it a possible substitute for silicon in microelectronic devices. The carrier dispersion of pristine BLG is gapless and approximately parabolic at two points in the Brillouin zone ($K$ and $K'$). However, it was found that the application of perpendicular electric fields produced by external gates deposited on the BLG surface can induce a gap in the spectrum, by creating a charge imbalance between the two graphene layers Mccann ; Castro . The tailoring of the gap by an external field may be particularly useful for the development of devices Milton1 ; zarenia .
It was recently recognized that a tunable energy gap in BLG can allow the observation of new confined electronic states, which could be obtained by applying a spatially varying potential profile to create a position-dependent gap analogous to semiconductor heterojunctions.
An alternative way to create one dimensional localized states in BLG has recently been suggested by Martin et al. Morpurgo and relies on the creation of a potential ”kink” by an asymmetric potential profile (see Fig. 1). Such kink potential can also be realized in p-n junctions. They showed that localized chiral states arise at the location of the kink, with energies inside the energy gap. These states correspond to unidirectional motion of electrons which are analogous to the edge states in a quantum Hall system. From a practical standpoint, the kinks may be envisaged as configurable metallic nanowires embedded in a semiconductor medium. Moreover, the carrier states in this system are expected to be robust with regards to scattering and may display Luttinger liquid behavior Killi .
An additional tool for the manipulation of charged states in BLG is the use of magnetic fields. The application of an external magnetic field perpendicular to the BLG sheet causes the appearance of Landau levels which can be significantly modified by the induced gap, leading to effects such as the lifting of valley degeneracy caused by the breaking of the inversion symmetry by the electrostatic bias Falko ; Milton2 . Recently the transport properties of p-n-p junctions in bilayer graphene were experimentally investigated in the presence of a perpendicular magnetic field Jing .
In the present paper we generalize previous work on topological confinement in bilayer graphene on three levels: i) we investigate the effect of smoothing the kink potential on the topological states, ii) the effect of a perpendicular magnetic field is studied, and iii) we investigate a new system that consists of a coupled kink-antikink structure. We demonstrate that the latter opens a gap in the 1D electron states. The paper is organized as follows. In Sec. II we present the theoretical formalism. The results for a single kink potential profile are discussed in Sec. III(A,B). In Sec. IV(A) and Sec. IV(B) we show the results for the kink-antikink potential, respectively, for zero and non-zero magnetic fields. Finally, concluding remarks are given in Sec. V.
## II Model
We employ a two-band continuum model to describe the BLG sheet. In this model, the system is described by four sublattices, two in the upper and two in the lower layer Milton1 . The interlayer coupling is given by the hopping parameter $t$ between the stacked sites. The Hamiltonian around the $K$ valley of the first Brillouin zone can be written as
$$H=-\frac{1}{t}\begin{pmatrix}0 & (\pi^\dagger)^2\\ \pi^2 & 0\end{pmatrix}+\begin{pmatrix}U(x) & 0\\ 0 & -U(x)\end{pmatrix}\qquad(1)$$
where $\pi$ is the momentum operator in the presence of an external magnetic field, with $(A_x,A_y)$ the components of the vector potential $\mathbf{A}$, $v_F\approx 10^6$ m/s is the Fermi velocity, and $\pm U(x)$ is the electrostatic potential applied to the upper and lower layers, respectively. The eigenstates of the Hamiltonian Eq. (1) are two-component spinors $[\varphi_a,\varphi_b]^T$, where $\varphi_{a,b}$ are the envelope functions associated with the probability amplitudes at the relevant sublattices of the respective layers of the BLG sheet. Since the momentum along the $y$-direction is conserved, we can write:
$$\psi(x,y)=e^{ik_y y}\,[\varphi_a(x),\varphi_b(x)]^T\qquad(2)$$
where $k_y$ is the wave vector along the $y$ direction.
In order to apply a perpendicular magnetic field to the bilayer sheet we employ the Landau gauge for the vector potential . The Hamiltonian (1) acts on the wave function of Eq. (2) which leads to the following coupled second-order differential equations,
$$\left[\frac{\partial}{\partial x'}+(k'_y+\beta x')\right]^2\varphi_b=[\epsilon-u(x')]\varphi_a,\qquad(3a)$$
$$\left[\frac{\partial}{\partial x'}-(k'_y+\beta x')\right]^2\varphi_a=[\epsilon+u(x')]\varphi_b.\qquad(3b)$$
where, in the above equations, we used dimensionless units. The step-like kink (see Fig. 1(c)) is modeled by
$$u(x')=u_b\tanh(x'/\delta),\qquad(4)$$
where $u_b$ is the maximum value of the gate voltage, in dimensionless units, in each BLG layer. Here, $\delta$ denotes the width of the region in which the potential switches its sign in each layer. This parameter is determined by the distance between the gates used to create the gap. Next, we numerically solve Eqs. (3) to obtain the dependence of the energy levels on the magnetic field and potential parameters. For the case of a sharp kink potential ($\delta\to 0$) and in the absence of a magnetic field, i.e. $\beta=0$, Eqs. (3) reduce to
$$\left[\frac{\partial}{\partial x'}+k'_y\right]^2\varphi_b=[\epsilon-u(x')]\varphi_a,\qquad(5a)$$
$$\left[\frac{\partial}{\partial x'}-k'_y\right]^2\varphi_a=[\epsilon+u(x')]\varphi_b.\qquad(5b)$$
Decoupling Eqs. (5) we obtain
$$\left[\frac{\partial^2}{\partial x'^2}+\lambda_\pm^2\right]\varphi_a=0\qquad(6)$$
where $\lambda_\pm$ can be a complex quantity. The solutions for $x'<0$ and $x'>0$ are given by
$$\psi^<_\pm(x')=\begin{pmatrix}e^{i\lambda_\pm x'}\\ f_\pm e^{i\lambda_\pm x'}\end{pmatrix},\qquad(7a)\qquad \psi^>_\pm(x')=\begin{pmatrix}e^{-i\lambda_\pm x'}\\ g_\pm e^{-i\lambda_\pm x'}\end{pmatrix}\qquad(7b)$$
where the coefficients $f_\pm$ and $g_\pm$ follow from Eqs. (5). The above solutions should satisfy decaying asymptotics as $x'\to\pm\infty$. Matching the solutions and their first derivatives at $x'=0$ gives a homogeneous set of algebraic equations which in matrix form becomes
$$\begin{pmatrix}1 & 1 & -1 & -1\\ f_+ & f_- & -g_+ & -g_-\\ \lambda_+ & \lambda_- & \lambda_+ & \lambda_-\\ f_+\lambda_+ & f_-\lambda_- & g_+\lambda_+ & g_-\lambda_-\end{pmatrix}\begin{pmatrix}C_1\\ C_2\\ C_3\\ C_4\end{pmatrix}=0.\qquad(8)$$
Solutions are found when the determinant of the matrix is set to zero, from which we obtain the energy spectrum. Notice that Eq. (8) leads to four solutions, two of which do not satisfy Eqs. (5) and are not acceptable. In a limiting case we are able to obtain an analytical expression for the energy,
$$\epsilon_\pm=\frac{u_b}{\alpha}\Big\{4k'_y\sqrt{\epsilon_0}\,\big[u_b\sin(\theta/2)+k'^2_y\cos(\theta/2)\big]\pm\big[56k'^8_y+14u_b^4+70u_b^2k'^4_y-k'^2_y\epsilon_0(40k'^4_y+46u_b^2)\big]^{1/2}\Big\}.\qquad(9)$$
where, , and . Solving the above equation for we find that ( for ).
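Away from this limiting case the energies follow from the zeros of the determinant of the matching matrix, Eq. (8). As a minimal illustration of that recipe (not the paper's actual matrix, whose entries involve the coefficients $f_\pm$, $g_\pm$ and $\lambda_\pm$), the sketch below scans a determinant function for sign changes and refines each zero by bisection; `toy_det` is a hypothetical stand-in for the true determinant.

```python
def bisect(f, a, b, steps=200):
    """Refine a bracketed root of f by repeated bisection."""
    fa = f(a)
    for _ in range(steps):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0.0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

def find_energies(det, eps_grid):
    """Return the zeros of det(eps) bracketed by sign changes on eps_grid."""
    roots = []
    for e0, e1 in zip(eps_grid[:-1], eps_grid[1:]):
        if det(e0) * det(e1) < 0.0:
            roots.append(bisect(det, e0, e1))
    return roots

# Hypothetical determinant standing in for det of the matching matrix:
toy_det = lambda eps: eps**2 - 2.0
print(find_energies(toy_det, [0.1 * i - 3.0 for i in range(61)]))
```

The same scan-and-refine loop applies unchanged once `toy_det` is replaced by the determinant of the $4\times 4$ matrix in Eq. (8) evaluated at a fixed $k'_y$.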
Next we consider a sharp kink potential in parallel with an antikink potential, located at $x'=-d$ and $x'=d$. In this case we have to consider three regions, i.e. I ($x'<-d$), II ($-d<x'<d$) and III ($x'>d$), and the solutions are given by
$$\psi^{I}_\pm(x')=\begin{pmatrix}e^{i\lambda_\pm x'}\\ g_\pm e^{i\lambda_\pm x'}\end{pmatrix},\qquad(10a)$$
$$\psi^{II}_\pm(x')=\begin{pmatrix}e^{\pm i\lambda_\pm x'}\\ f_\pm e^{i\lambda_\pm x'}\end{pmatrix},\qquad(10b)$$
$$\psi^{III}_\pm(x')=\begin{pmatrix}e^{-i\lambda_\pm x'}\\ g_\pm e^{-i\lambda_\pm x'}\end{pmatrix}\qquad(10c)$$
Matching the solutions and their first derivatives at $x'=\pm d$ leads to a set of eight algebraic equations which in matrix form becomes
⎛⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜⎝κ−+κ−−−κ−+−κ−−−κ++−κ−−00g+κ−+g−κ−−−f+κ−+−f−κ−−−f+κ++−f−κ−−00λ+κ−+λ−κ−−−λ+κ−+−λ+κ−−λ+κ++λ−κ−−00g+λ+κ−+g−λ−κ−−−f+λ+κ−+−f−λ−κ−−f+λ+κ++f−λ−κ−−0000κ++κ+−κ−+κ−−−κ−+−κ−−00f+κ++f−κ+−f+κ−+f−κ−−−g+κ−+−g−κ−−00λ+κ−+λ−κ+−−λ+κ−+−λ−κ−−λ+κ−+λ−κ−−00f+λ+κ++f−λ−κ+−−f+λ+κ−+−f−λ−κ−−g+λ+κ−+g−λ−κ−−⎞⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟⎠⎛⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜⎝C1C2C3C4C5C6C7C8⎞⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟⎠=0. (11)
Setting the determinant of this matrix to zero gives the energy spectrum.
## III Single kink
### III.1 Influence of the smoothness of the kink profile
In the general case we solve the set of second-order differential Eqs. (3a,b) numerically, using the finite difference technique. Figure 2(a) shows the spectrum for a single potential kink as function of the wavevector along the kink for zero magnetic field. We consider a relatively sharp kink and compare the numerical results with the analytical solution (dashed black curves) from Eq. (9) for the case of a sharp profile ($\delta\to 0$). The shaded region corresponds to the continuum of free states. The solid red curves correspond to the energy levels of a biased BLG which can be obtained using Eq. (1) as,
$$\epsilon=\pm\sqrt{k'^4_y+u_b^2}\qquad(12)$$
The dotted horizontal lines indicate reference energy values. These results are valid in the vicinity of a single valley ($K$) and show that the topological states have a unidirectional character of propagation, i.e. they are chiral states Morpurgo , with positive group velocity. The topological levels can be fitted to a model dispersion with two fitting parameters (see green solid curve). For localized states around the $K'$ valley, the charge carriers move in the opposite direction. In order to consider the energy levels for the $K'$ valley, one quantity in Eqs. (3) should be replaced by its negative. Then a suitable transformation of the spinor components leads to the same equations as for the $K$ valley. Thus the symmetry remains even in the presence of a uniform perpendicular magnetic field. Notice that the wavespinors corresponding to the $K$ and $K'$ valleys are related to each other by a sign change of one of the components ($\varphi_a$ or $\varphi_b$), while the sign of the other component does not change.
Panels (b) and (c) of Fig. 2 present the real parts of the spinor components and the probability density for the states indicated by the arrows (b) and (a) in panel (a), corresponding to (b) and (c). These electron states are localized at the position of the potential kink. Notice that the solutions of Eqs. (3) are related by the transformations , , and and consequently for the solutions in Figs. 2(b) and 2(c) have the same probability distribution. For the case of the solutions of Eq. (9) are which result in the following wavespinors,
$$\varphi_a=e^{i(\lambda_+x'+\pi/4)}\mp e^{-i\lambda_-x'},\qquad(13a)$$
$$\varphi_b=-\frac{1}{\epsilon_\pm+u_b}\left[\lambda_+^2\,e^{i(\lambda_+x'+\pi/4)}\mp\lambda_-^2\,e^{-i\lambda_-x'}\right].\qquad(13b)$$
Notice that the complex $\lambda_\pm$ in the above equations leads to an oscillating contribution with an evanescent part. The oscillating part is strongly damped and therefore Eqs. (13) correspond to localized wavespinors. Expanding Eqs. (13) around $x'=0$ we obtain for the second derivative of the wavespinors,
∂2∂x′2R[φ
This indicates that has its maximum value located at () for while the opposite is found for which is also evident from Figs. 2(b,c).
In Fig. 3 we show the probability densities corresponding to one of the topological branches (for the state labeled in Fig. 2(a)) at several $k'_y$ values. As shown in the inset of Fig. 3, for those $k'_y$ values where the topological state merges with the continuum spectrum the carriers are no longer confined by the kink potential.
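The finite-difference solution of Eqs. (3) mentioned above can be sketched as follows. This is a minimal illustration, not the paper's code: the grid, box size and parameter values (`ub`, `delta`, `ky`) are assumed for illustration, and Eqs. (3) with $\beta=0$ are written as a (non-Hermitian) matrix eigenproblem in the spinor $(\varphi_a,\varphi_b)$.

```python
import numpy as np

# Illustrative dimensionless parameters (not the paper's figure values).
N, L = 401, 30.0                       # grid points, box size
ub, delta, ky = 1.0, 0.5, 0.8          # gate potential, smoothness, wave vector

x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]
u = ub * np.tanh(x / delta)            # single-kink profile, Eq. (4)

I = np.eye(N)
D1 = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)
D2 = (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1) - 2 * I) / h**2

# (d/dx' +- k'_y)^2 expanded as d^2/dx'^2 +- 2 k'_y d/dx' + k'_y^2:
Ap = D2 + 2 * ky * D1 + ky**2 * I
Am = D2 - 2 * ky * D1 + ky**2 * I

# Eqs. (3) with beta = 0, written as H psi = eps psi with psi = (phi_a, phi_b):
#   u*phi_a + (d/dx'+k)^2 phi_b = eps phi_a
#   (d/dx'-k)^2 phi_a - u*phi_b = eps phi_b
H = np.block([[np.diag(u), Ap], [Am, -np.diag(u)]])

eps = np.linalg.eigvals(H)
real = np.sort(eps[np.abs(eps.imag) < 1e-8].real)
gap_edge = np.sqrt(ky**4 + ub**2)      # continuum edge of biased BLG, Eq. (12)
in_gap = real[np.abs(real) < gap_edge] # bound levels live inside the gap
print("in-gap levels:", in_gap)
```

A uniform-bias sanity check of this discretization reproduces the continuum edge of Eq. (12), and for the kink profile real eigenvalues appear inside the gap, as expected for the confined states.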
Next we increase the smoothness of the kink potential and investigate how the energy spectrum changes. In Fig. 4 the energy levels as function of $k'_y$ are shown for a smooth kink profile, where in addition to the chiral states several branches are seen which are split off from the continuum. In order to understand the physical origin of these new states we show in the lower panels of Fig. 5 a cartoon of the low-energy spectra for the (a) sharp and (b) smooth profiles, where the chiral states appear in the yellow regions and the additional states are found in the orange region. Increasing the smoothness of the kink potential leads to the creation of a region below the energy gap which allows carriers to be confined near the kink. Therefore, extra bound states can be created in the orange region (lower panel in Fig. 5(b)). The wavefunctions of the two chiral states and the new bound states are shown in the lower panels of Fig. 4. The new bound states are also bound in the $x$-direction near the kink, but the electron states are more extended and have a clear nodal character near $x'=0$.
Figure 6 shows the velocity of the carriers for the states indicated in Fig. 4. The chiral states ((5),(6)) are only shown for the $K$ valley and they have positive velocity. The curves (5),(6) can be fitted (see the solid gray curves), with the position of the minimum point in the curves (5) and (6) among the fitting parameters. Notice that the extra bound states have a slightly nonzero velocity at $k'_y=0$, which is a consequence of the asymmetric energy dispersion seen in Fig. 4. One curve corresponds to the energy spectrum of a biased BLG, which is given by Eq. (12) and results in a velocity which is zero at the band minimum of a biased BLG (black solid curve in Fig. 6).
As mentioned before, for smooth kink potentials additional 1D bound states appear, and the number of these bound states can be related to the height $u_b$ of the gate voltage and the smoothness $\delta$ at the interface. Figure 7 shows the number of these extra bound states for three different $u_b$ values as function of the width $\delta$. The first bound state for each $u_b$ appears at a corresponding threshold $\delta$ in the absence of a magnetic field. Notice also that for fixed $\delta$ the number of extra bound states increases with $u_b$, in agreement with the qualitative picture shown in Fig. 5(b).
We also calculate the transmission of an electron through the kink structure in a system of size $L_x \times L_y$. No bias nor magnetic field is assumed in the left and right lead regions. We assume that the electrons are free to move in one direction whereas they are confined in the other. Associated with each real energy there are two right- (left-) propagating modes, which are given by Eqs. (7). In region I two incident right-traveling modes can be reflected into two left-traveling modes,
$$\Psi^{I}_\pm=\psi^>_\pm+r^+_\pm\psi^<_+ + r^-_\pm\psi^<_-.\qquad(15)$$
where, () are the transmission (reflection) amplitudes. The propagating modes in region can also be transmitted to region () in the right-traveling modes,
$$\Psi^{III}_\pm=t^+_\pm\psi^>_+ + t^-_\pm\psi^>_-.\qquad(16)$$
The wavefunctions in regions I and III can be connected by the transfer matrix $M$, where at the kink-potential boundaries we have
$$\Psi^{I}_\pm(-L_x/2)=M\,\Psi^{III}_\pm(L_x/2).\qquad(17)$$
The transmission (or reflection) amplitude can be found by substituting Eqs. (15) and (16) in the above equation. The four transmission amplitudes for given $\epsilon$ and $k_y$ can be combined in the transmission matrix
$$t(\epsilon,k_y)=\begin{pmatrix}t_{++} & t_{+-}\\ t_{-+} & t_{--}\end{pmatrix}.\qquad(18)$$
The total transmission amplitude is given by snyman . The two-terminal conductance of such an asymmetric potential profile in bilayer graphene can be calculated using the Landauer formula, which is given by ramezani ; michael
$$G=G_0\int T(E_F,k'_y)\,dk'_y\qquad(19)$$
Here, $G_0$ is the conductance unit per valley and per spin. In Figs. 8(a,b) we show a contour plot of the transmission probability in logarithmic scale for the kink structure (in dimensionless units). The transmission probability has the symmetry . The conductance as function of the Fermi energy for the single kink profile is shown in panel (c) for a sharper (blue solid curve) and a smoother (red dashed curve) kink. For the latter case the smoothness of the potential leads to a higher transmittance and consequently a higher conductance (see panel (b) and the dashed curve in panel (c)).
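Eq. (19) is a one-dimensional integral of the transmission over the transverse wave vector. A minimal numerical sketch follows; the transmission model `T_model` and the integration window are hypothetical, and $G_0$ is left as a placeholder unit.

```python
import numpy as np

G0 = 1.0  # conductance unit per valley and per spin (placeholder value)

def conductance(T_of_ky, ky_grid):
    """G = G0 * integral of T(E_F, k'_y) dk'_y, Eq. (19), via the trapezoid rule."""
    y = np.array([T_of_ky(k) for k in ky_grid])
    return G0 * float(np.sum((y[1:] + y[:-1]) * np.diff(ky_grid) / 2.0))

# Hypothetical transmission: perfect inside |k'_y| < 1, zero outside.
T_model = lambda k: 1.0 if abs(k) < 1.0 else 0.0
ky = np.linspace(-2.0, 2.0, 4001)
print(conductance(T_model, ky))  # ~2.0, the width of the transmitting window
```

In practice `T_model` would be replaced by the total transmission obtained from the transfer-matrix calculation at a fixed Fermi energy.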
### III.2 Magnetic field dependence
The dependence of the energies of the 1D bound states on an external magnetic field is shown in Fig. 9. In order to show the effect of a magnetic field on the chiral states (blue solid curves) and the other localized bound states (red dashed curves) we present the results for a smooth potential. It is seen that the chiral states are very weakly influenced by the magnetic field. This is a consequence of the strong confinement of these states in the kink potential (see Fig. 3 and Fig. 4(a,c)). In a semiclassical view, the movement of the carriers is constrained by the kink potential and that, together with the unidirectional propagation, prevents the formation of cyclotron orbits. For the energy levels above the chiral states, the energy values increase as the magnetic field increases, because of the weaker confinement of these states as is apparent from Figs. 4(b,d,e,f).
Figure 10 shows the spectrum of a sharp single kink potential in the presence of an external magnetic field as function of the orbit center, where $l_B$ is the magnetic length. The solid lines represent the kink potential applied to the upper (black) and lower (green) layer. The results show that the topological states are practically not affected by the magnetic field. The free energy region in the absence of magnetic field is now replaced with Landau levels (the solid red lines are the Landau levels of a biased bilayer graphene). In some regions the Landau levels are influenced by the kink potential and anti-crossings appear in the low energy spectrum. Some of these anti-crossings are situated along the extension of the topological states. In addition to these anti-crossings the Landau levels display some resonances along the energy levels of a biased BLG (red solid curves in Fig. 2(a)) which can be linked to the edge effects of the potential profile. The position of the resonances can be fitted with two fitting parameters and the index $n$ of the $n$th Landau level of a biased BLG (see solid purple curves). Also the topological levels can be fitted (dashed gray curve). Panels (b,c) show the probability densities for the points indicated by full red circles in the energy spectrum. For the points on the purple solid curves (2a,3a) the carrier distribution set by the magnetic field is influenced by the weak confinement of the interface potential (see solid and dashed curves in panel (b)). The probability density for the point on the fitted curve along the topological level (2b) shows a higher peak at the kink interface ($x'=0$), indicating that the kink potential acts as an attractive potential (solid curve in panel (c)). The other probability densities are clearly those of free-electron Landau levels. The result for a smooth kink potential is shown in Fig. 11, where the energy values of the extra bound states are increased by the magnetic field and the topological levels are practically not affected by it.
The localization of the states is reflected in the position dependence of the current. The current in the $y$-direction is obtained using
$$j_y=iv_F\left[\Psi^\dagger(\partial_x\sigma_y-\partial_y\sigma_x)\Psi+\Psi^T(\partial_x\sigma_y+\partial_y\sigma_x)\Psi^*\right]\qquad(20)$$
where $\sigma_{x,y}$ are Pauli matrices. By substituting the spinor of Eq. (2) we have
$$j_y=2v_F\left[\mathrm{Re}\{\varphi_a^*\partial_x\varphi_b-\varphi_b^*\partial_x\varphi_a\}+2k_y\,\mathrm{Re}\{\varphi_a^*\varphi_b\}\right].\qquad(21)$$
The $x$-component of the current vanishes for the confined states. In Fig. 12, the $y$-component of the persistent current for a sharp (blue curves) and smooth (black curves) single potential kink profile is shown as function of $x$ without magnetic field (solid curves) and in the presence of the magnetic field (dashed curves). In the absence of a magnetic field the current is localized around the kink position for both sharp and smooth potentials. For a smooth profile the wavefunction of the topological states, and consequently also the current density profile, is broadened (compare Figs. 2(b,c) with Figs. 4(a,c)). A magnetic field shifts the density profile slightly to the right (see the inset of Fig. 12) due to the Lorentz force and there is also a very small narrowing of the current distribution.
Next we consider the density of states (DOS) for the kink potential. The number of k-states per unit energy is given by
$$D(E)=\frac{D_0}{2\pi}\sum_n\int dk'_y\,\delta(\epsilon-\epsilon_{n,k'_y}).\qquad(22)$$
To calculate the DOS numerically we introduce a Gaussian broadening,
$$\delta(\epsilon-\epsilon_{n,k'_y})\to\frac{1}{\Gamma\sqrt{\pi}}\exp\!\left[-\frac{(\epsilon-\epsilon_{n,k'_y})^2}{\Gamma^2}\right],\qquad(23)$$
where $\Gamma$ is the broadening used in our calculations. Figure 13 shows the DOS as function of the Fermi energy in the absence and presence of an external magnetic field for sharp and smooth kink potentials. For a sharp profile the topological levels contribute an almost constant value to the DOS, even in the presence of an external magnetic field. For the smooth profile, peaks corresponding to the non-topological levels appear in the DOS; note that only these peaks are shifted in the presence of a magnetic field, while the DOS of the topological states is not affected by the magnetic field.
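The Gaussian broadening of Eq. (23) can be implemented directly. In the sketch below the level positions and the value of $\Gamma$ are illustrative, and the $D_0/2\pi$ prefactor of Eq. (22) is omitted.

```python
import numpy as np

def dos(eps_grid, levels, gamma):
    """Sum of unit-normalized Gaussians, Eq. (23), one per energy level."""
    g = np.zeros_like(eps_grid)
    for e_n in levels:
        g += np.exp(-((eps_grid - e_n) / gamma) ** 2) / (gamma * np.sqrt(np.pi))
    return g

eps = np.linspace(-5.0, 5.0, 2001)
rho = dos(eps, levels=[-1.0, 0.0, 1.0], gamma=0.05)
print(np.sum(rho) * (eps[1] - eps[0]))  # each level integrates to ~1, so ~3.0
```

In a full calculation `levels` would contain the energies $\epsilon_{n,k'_y}$ sampled over the $k'_y$ grid, so flat (weakly dispersing) topological branches produce a nearly constant DOS plateau while the extra bound states give sharp peaks.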
We now turn to the transport properties of a kink potential and look at the influence of the topological states on the conductivity in the $y$-direction. For elastic scattering the diffusive conductivity is given by charbonneau ,
$$\sigma_{yy}=\frac{e^2v_F}{2\pi\hbar k_BT}\sum_n\int dk'_y\,\tau\,v_{n,y}^2\,f_{n,k'_y}(1-f_{n,k'_y}).\qquad(24)$$
Here $T$ is the temperature, $v_{n,y}$ is the electron velocity, $f_{n,k'_y}$ is the equilibrium Fermi-Dirac distribution function, and $\tau$ is the momentum relaxation time. For low temperatures we assume that $\tau$ is approximately constant, evaluated at the Fermi level ($E_F$), and replace the thermal factor by the delta function given in Eq. (23). The results are presented as function of $E_F$ in Fig. 14 for both sharp and smooth potentials. Due to the robust confinement of the topological levels the conductivity is constant in the energy gap even for a non-zero magnetic field (solid blue and red dashed curves). The extra localized levels in the smooth case lead to an increasing conductivity as function of $E_F$. Note that in the presence of an external magnetic field some of the additional electron (hole) states are shifted up (down) in energy (see Figs. 4 and 11), which results in a smaller $\sigma_{yy}$ in this region compared to the conductivity in the absence of a magnetic field (black dotted-dashed curve).
## IV Kink-antikink
### IV.1 Zero magnetic field
Next we consider a potential profile with a kink-antikink pair. The kink-antikink potential is modeled by
$$u(x')=u_b\left[\tanh\!\left(\frac{x'-d}{\delta}\right)-\tanh\!\left(\frac{x'+d}{\delta}\right)+1\right]\qquad(25)$$
where $d$ sets the positions $x'=\mp d$ of the kink and antikink, so that $2d$ is the distance between them. The spectrum of the localized states in the absence of a magnetic field is shown in Fig. 15(a). The black dashed curves are the analytical results obtained using Eq. (11). Note that there are only two chiral states per kink, which leads to the appearance of crossing points in the energy spectrum. The spinor components and probability densities associated with the points indicated inside the circle in Fig. 15(a) are shown in the panels (1a, 2a,…,5a). In the absence of a magnetic field and for the points around the energy level crossing, the carriers are strongly confined at either the position of the kink or the antikink. The wavefunction corresponding to an energy at the crossing point (panel 5a) is localized at both the kink and antikink.
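The limits of the kink-antikink profile, Eq. (25), can be checked numerically; the parameter values below are illustrative.

```python
import numpy as np

def u_kink_antikink(x, ub=1.0, d=5.0, delta=0.5):
    """Kink-antikink profile of Eq. (25): kink at x' = -d, antikink at x' = +d."""
    return ub * (np.tanh((x - d) / delta) - np.tanh((x + d) / delta) + 1.0)

# Far from the pair u -> +ub; between the kink and antikink u -> -ub.
print(u_kink_antikink(np.array([-20.0, 0.0, 20.0])))
```

The sign structure confirms the geometry used in this section: both layers see the bias $+u_b$ far away, while the region $|x'|<d$ between the two interfaces is reversed to $-u_b$.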
Next we investigate smooth potential kink profiles. In Fig. 16(a) the energy spectrum of a smooth kink-antikink profile is presented for zero magnetic field. As in the case of the single kink profile, additional bound states appear in the energy spectrum. The overlap between these states leads to the appearance of crossing points in the energy spectrum. The wavespinors and the corresponding probability density for the points indicated by arrows in panel (a) are shown in panels (1a,2a,3a,4a). In the absence of a magnetic field the states are localized at both kink and antikink (panels 1a, 2a and 4a), whereas panel 3a shows that the confinement shifts toward either the kink or the antikink.
Decreasing the distance between the kink and antikink generates an imperfect kink-antikink profile unperfect . This profile is illustrated in Fig. 17(a). The energy spectrum of such a profile is shown in Fig. 17(b). The analytical results (obtained from Eq. (11)) are shown by the black dashed curves. Now the crossing points in the energy spectrum of the well-separated case (see Fig. 15(a)) are replaced with anticrossings and an energy gap appears in the energy spectrum. The positions of these minigaps move when we increase the magnetic field, as is apparent from Fig. 17(c). The panels (1b,2b,3b,4b) show the real parts of the wavespinors and the corresponding probability density for the points indicated by red arrows in Fig. 17(b). Note that due to the decreased distance between the kink and antikink the carriers can be localized between them.
Figure 18(a) displays the energy spectrum of a smooth kink-antikink potential. Now the kink and antikink are close to each other and the smoothness of the potential leads to extra localized levels. Therefore the crossing and anticrossing points between the additional bound states are seen to disappear and the energy gap between the topological levels is increased. The magnitude of the energy gap depends on the width $\delta$ of the interface region, the maximum value $u_b$ of the potential and the distance between the kink and antikink. This is shown in Fig. 19, where the gap is plotted as function of these three parameters, respectively, in panels (a), (b) and (c) in the absence of magnetic field (blue solid curves). As shown in panels (a,b) the energy gap is an increasing function of $\delta$ and $u_b$. When $\delta$ increases the first energy level of the spectrum changes from a Mexican hat shape to a parabola. Therefore, the gap increases with increasing $\delta$ (compare the potentials illustrated in Figs. 17(a) and 18(a)). Increasing the distance between the kink and antikink restores perfect unidirectional states and the gap disappears (Fig. 19(c)).
Next we consider the transmittance of a kink-antikink potential. In Fig. 20 we show a contour plot of the transmission probability (in logarithmic scale) for the kink-antikink structure for (a) sharp and (b) smooth potentials. The results show a nonzero region of transmittance below the gap where the topological levels corresponding to the kink and antikink cross each other. The conductance as function of the Fermi energy is plotted in Fig. 20(c). A small region of transmittance appears in the energy gap due to the chiral states, which appear as small peaks in the conductance (see the inset of panel (c)).
### IV.2 Magnetic field dependence
Figure 15(b) shows the kink-antikink energy levels in the presence of an external magnetic field. The results show a shift of the four intra-gap energy branches as the magnetic field increases. In addition, the continuum of free states at zero magnetic field (shaded region in Fig. 15(a)) is replaced by a set of Landau levels. The spinor components and probability densities associated with the points indicated inside the circle in Fig. 15(b) are shown in the panels (1b,2b,…,5b). For non-zero magnetic field the states show a shift of the probability density towards the region between the kink and the antikink. This is caused by the additional confinement due to the magnetic field.
The energy levels of a smooth kink-antikink profile in the presence of a perpendicular magnetic field are presented in Fig. 16(b). Now the crossing points (present at zero magnetic field) are changed into anticrossings. In the inset of Fig. 16(b) an anti-crossing is enlarged. Due to the strong confinement by the potential the magnetic field can only lead to a shift up in energy of the localized chiral states. The wavespinors and the corresponding probability density for the points indicated by arrows in panel (b) are shown in panels (1b,2b,3b,4b). In the presence of an external magnetic field and at the crossing points of the topological states (panels 1b, 2b), due to the strong confinement by the potential, the magnetic field can only weakly affect the electrons. At the first anticrossing (panels 3b and 4b), which arises from the overlap of the first bound states in the kink and antikink potentials, the electrons are confined closer to the center of the potential.
The energy levels for a sharp kink-antikink potential are presented in Fig. 17(c). The crossings which appeared in the energy spectrum due to the overlap of the extra bound states in the absence of magnetic field (see Fig. 17(b)) are now replaced with anti-crossings, and the energy gap between the kink and antikink states is shifted up in energy due to the confinement by the magnetic field. Panels (1c,2c,3c,4c) show the wavespinors and probability density for the points indicated by arrows in panel (c). The energy spectrum of a smooth kink-antikink potential in the presence of an external magnetic field is shown in Fig. 18(b). In the presence of a magnetic field the energy gap is shifted and the symmetry of the zero-field spectrum is broken (see panel (a)). The energy gap in the presence of a magnetic field is shown in Fig. 19 as red dashed curves. Notice that an external magnetic field only shifts up the energy gap and the gap size remains constant.
The energy spectrum of a kink-antikink potential is shown in Fig. 21 as function of the orbit center. The kink-antikink potential is depicted in the figure by the dashed curves. As for the single kink potential, the topological levels can be fitted (see gray solid curves), with one branch corresponding to the kink and one to the antikink. Now the Landau levels above the gap are affected by the kink-antikink potential, where anti-crossing points appear along the topological levels. The solid red lines are the Landau levels in a biased BLG.
Figure 22 shows the dependence of the energies on the external magnetic field for (a) and (b) . The branches that appear for correspond to Landau levels that arise from the continuum of free states. For the kink-antikink case, however, the overlap between the states associated with each confinement region allows the formation of Landau orbits. Therefore, the proximity of an antikink induces a strong dependence of the states on the external field.
Figure 23 shows plots of the $y$-component of the current density as function of position for the states labeled (1) to (6) in panels (a) and (b) of Fig. 22. It should be noticed that a non-zero current can be found, as can be deduced from the dispersion relations. For zero wave vector, the results presented in Fig. 23(a) show a persistent current carried by electrons localized at each kink region, irrespective of the direction of the magnetic field, as exemplified by the states (1) and (2) which correspond to opposite directions of the magnetic field. For non-zero wave vector, however, as shown in panels (b) and (c), the current is strongly localized around one of the potential kinks. In Fig. 23(b), the current density curve shows an additional smaller peak caused by the strong magnetic field, where the carriers can also be confined closer to the center.
The density of states of the topological states for the (a) sharp and (b) smooth kink-antikink potentials is shown in Fig. 24. The results show additional peaks for the sharp kink-antikink potential, which are due to the splitting of the topological levels. Note that the energy gap leads to a zero DOS at zero energy for zero magnetic field (blue circles in (a)), while the shift of the gap in the presence of a magnetic field results in a non-zero DOS at zero energy (red diamonds in (a)). For the smooth profiles the non-topological 1D states lead to the appearance of additional peaks in the DOS (panel (b)) that shift with the magnetic field.
## V Concluding remarks
In summary we obtained the energy spectrum, the density of states, the transmission and conductivity for carriers moving in BLG in the presence of asymmetric potentials (i.e. kink and kink-antikink profiles) in each layer of the BLG. Uni-directional chiral states are localized at the location of the kink (or antikink). By controlling the gate voltages and/or the smoothness of the kink profile the number of one-dimensional metallic channels and their subsequent magnetic response can be configured.
The effect of an external magnetic field perpendicular to the bilayer sheet was investigated. We found that the influence of the magnetic field is very different for single and double kinks. Due to the strong confinement by the kink potential, the topological states are weakly affected by the magnetic field in the case of a single kink profile.
Changing the sign of the kink potential smoothly (i.e. broadening the kink potential) leads to extra bound states which have a very different behavior as compared to the uni-directional topological states. First, these states are no longer uni-directional and they have a quasi-1D free electron-type of spectrum which is asymmetric around . Second, they are less strongly localized at the kink of the potential as compared to the chiral states and their probability distribution appears as those of excited states of the chiral state.
In the case of parallel kink-antikink profiles crossings of the energy levels appear in the spectrum. Decreasing the distance between (and/or smoothing) the kink-antikink profiles turns these crossings into anti-crossings, which opens a gap in the topological state spectrum. This allows for a robust 1D system with a tunable minigap.
## VI Acknowledgment
This work was supported by the Flemish Science Foundation (FWO-Vl), the Belgian Science Policy (IAP), the European Science Foundation (ESF) under the EUROCORES program EuroGRAPHENE (project CONGRAN), the Brazilian agency CNPq (Pronex), and the bilateral projects between Flanders and Brazil and the collaboration project FWO-CNPq.
https://mathoverflow.net/questions/90379/equivalence-between-e-infty-spaces-and-connective-spectra | # Equivalence between $E_\infty$-spaces and connective spectra
It is well known that the $\infty$-category of group-like $E_\infty$-spaces and the $\infty$-category of connective spectra are equivalent; see e.g. May, "$E_\infty$-spaces, group completions and permutative categories", or Lurie, "Higher Algebra", Remark 5.1.3.17.
Now the category of $E_\infty$-spaces (here space means simplicial set) carries a model structure as well as the category of spectra. Is there a direct (left) Quillen functor
$E_\infty$-space $\to$ Spectra
whose derived functor restricts to such an equivalence? I have been unable to find a discussion of this in the literature. The only thing I can find are indirect functors going through $\Gamma$-spaces or related categories. The bar construction which is usually used is not left Quillen (!?).
• What exactly do you mean by an $E_\infty$ space? If you have a particular $E_\infty$ operad in mind, then it depends which one you are using. If you want a version of the category of $E_\infty$ spaces in which the operad is allowed to vary, you should state that as well. Similarly, the precise answer will depend on your version of the category of spectra. – Neil Strickland Mar 6 '12 at 16:50
• Take for example the Barratt-Eccles operad and the classical (Bousfield-Friedlander) category of spectra in simplicial sets. I was hoping that there might be an answer which is independent of the specific choice of $E_\infty$-algebra. But I want the operad to remain fixed. – Thomas Nikolaus Mar 6 '12 at 16:53
• I would have expected that a suitable version of the bar construction would provide a left Quillen functor. Could you clarify exactly which version you are considering, and why it is not left Quillen? (I would be inclined to use the operad from Steiner's paper "A canonical operad pair" to construct orthogonal spectra of topological spaces, but no doubt there are other possibilities, including some that are more simplicial.) – Neil Strickland Mar 7 '12 at 8:41
• One way to make sense of 'restricting to an equivalence' is to consider the group-like $E_\infty$ spaces as a left Bousfield localization of the category of $E_\infty$ spaces. I think you just need to invert the map from the free $E_\infty$ space on $S^0$ to $QS^0$. The fibrant objects in this category will be group-like and fibrant replacement will be group completion. To do such a construction I would want the category of $E_\infty$ spaces to be left proper. This should follow from the $E_\infty$ operad being cofibrant, by Spitzweck's thesis. – Justin Noel Mar 7 '12 at 9:07
• The bar construction $B$ for $E_{\infty}$-spaces is a model for the suspension functor $\Sigma$ (of course in the category of $E_{\infty}$-spaces). I think it is not a left Quillen functor. I would say that we have an adjunction at the level of $\mathrm{Ho}(E_{\infty}\text{-spaces})$ between $\Omega$ and $B\sim\Sigma$. I think that in the case of commutative topological monoids, the bar construction $B$ is a left adjoint to $\Omega$. – Ilias A. Mar 7 '12 at 14:54
Of course, as several people have noted, the answer depends on the choice of details. There is a variant of my original passage from $E_{\infty}$ spaces to spectra that certainly works, as was noted in "Units of ring spectra and Thom spectra" by Ando, Blumberg, Gepner, Hopkins, and Rezk (arXiv: 0810.4535v3).
Take the Steiner $E_{\infty}$ operad for definiteness and denote the monad on based spaces associated to it by $\mathbf{C}$. Take spectra to mean Lewis-May spectra since it is very convenient to have the $(\Sigma^{\infty},\Omega^{\infty})$ adjunction for the question at hand, and that is incompatible with symmetric monoidal categories of spectra. Of course, that means I'm not using simplicial sets, but I don't suffer from a prejudice in their favor: when I write space I prefer to actually mean space.
Then, as discussed in modern terms in my paper "What precisely are $E_{\infty}$ ring spaces and $E_{\infty}$ ring spectra?", Geometry & Topology Monographs 16 (2009), 215-282, the spectrum associated to a $\mathbf{C}$-space $X$ is the two-sided bar construction $B(\Sigma^{\infty},\mathbf{C},X)$. For cofibrant $X$, this is equivalent to the "tensor product"
$\Sigma^{\infty}\otimes_{\mathbf{C}}X$, which is defined by an obvious coequalizer. This functor from $\mathbf{C}$-spaces to spectra is left adjoint to $\Omega^{\infty}$. Further details are as one would expect.
• Thank you Peter, thats exactly the kind of thing I was looking for. Actually I do not see a reason why this should not work for simplicial $E_\infty$-spaces and simplicial spectra.... – Thomas Nikolaus Mar 8 '12 at 14:03
• Think about what $\Omega^{\infty}$ would mean in the simplicial world. There are serious advantages to being eclectic. For a different, Segalic answer to your original question see Mandell, May, Schwede, Shipley, "Model categories of diagram spectra", especially §18. – Peter May Mar 9 '12 at 0:23
• I agree 100% with your comment about being eclectic. It's just that I got a situation where I naturally obtain simplicial $E_\infty$-spaces and that's why I was interested in them... – Thomas Nikolaus Mar 9 '12 at 19:58
https://phys.libretexts.org/Bookshelves/University_Physics/Book%3A_University_Physics_(OpenStax)/Map%3A_University_Physics_III_-_Optics_and_Modern_Physics_(OpenStax)/01%3A_The_Nature_of_Light/1.0S%3A_1.S%3A_The_Nature_of_Light_(Summary) | $$\require{cancel}$$
# 1.S: The Nature of Light (Summary)
## Key Terms
• **birefringent**: refers to crystals that split an unpolarized beam of light into two beams
• **Brewster's angle**: angle of incidence at which the reflected light is completely polarized
• **Brewster's law**: $$\displaystyle \tan θ_b=\frac{n_2}{n_1}$$, where $$\displaystyle n_1$$ is the index of refraction of the medium in which the incident and reflected light travel and $$\displaystyle n_2$$ is the index of refraction of the medium that forms the interface that reflects the light
• **corner reflector**: object consisting of two (or three) mutually perpendicular reflecting surfaces, so that the light that enters is reflected back exactly parallel to the direction from which it came
• **critical angle**: incident angle that produces an angle of refraction of 90°
• **direction of polarization**: direction parallel to the electric field for EM waves
• **dispersion**: spreading of light into its spectrum of wavelengths
• **geometric optics**: part of optics dealing with the ray aspect of light
• **horizontally polarized**: oscillations are in a horizontal plane
• **Huygens's principle**: every point on a wave front is a source of wavelets that spread out in the forward direction at the same speed as the wave itself; the new wave front is a plane tangent to all of the wavelets
• **index of refraction**: for a material, the ratio of the speed of light in a vacuum to that in the material
• **law of reflection**: angle of reflection equals the angle of incidence
• **law of refraction**: when a light ray crosses from one medium to another, it changes direction by an amount that depends on the index of refraction of each medium and the sines of the angle of incidence and angle of refraction
• **Malus's law**: $$\displaystyle I=I_0\cos^2 θ$$, where $$\displaystyle I_0$$ is the intensity of the polarized wave before passing through the filter
• **optically active**: substances that rotate the plane of polarization of light passing through them
• **polarization**: attribute that wave oscillations have a definite direction relative to the direction of propagation of the wave
• **polarized**: refers to waves having the electric and magnetic field oscillations in a definite direction
• **ray**: straight line that originates at some point
• **refraction**: changing of a light ray's direction when it passes through variations in matter
• **total internal reflection**: phenomenon at the boundary between two media such that all the light is reflected and no refraction occurs
• **unpolarized**: refers to waves that are randomly polarized
• **vertically polarized**: oscillations are in a vertical plane
• **wave optics**: part of optics dealing with the wave aspect of light
## Key Equations
• Speed of light: $$\displaystyle c=2.99792458×10^8 m/s≈3.00×10^8 m/s$$
• Index of refraction: $$\displaystyle n=\frac{c}{v}$$
• Law of reflection: $$\displaystyle θ_r=θ_i$$
• Law of refraction (Snell's law): $$\displaystyle n_1\sin θ_1=n_2\sin θ_2$$
• Critical angle: $$\displaystyle θ_c=\sin^{−1}(\frac{n_2}{n_1})$$ for $$\displaystyle n_1>n_2$$
• Malus's law: $$\displaystyle I=I_0\cos^2 θ$$
• Brewster's law: $$\displaystyle \tan θ_b=\frac{n_2}{n_1}$$
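These relations are straightforward to sanity-check numerically. A minimal Python sketch (the function names and the water/air indices are illustrative choices, not values from this chapter):

```python
import math

def snell(n1, n2, theta1_deg):
    """Law of refraction: n1 sin(theta1) = n2 sin(theta2).
    Returns the refraction angle in degrees, or None when total
    internal reflection occurs."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None
    return math.degrees(math.asin(s))

def critical_angle(n1, n2):
    """Incident angle producing a 90-degree refraction angle (needs n1 > n2)."""
    return math.degrees(math.asin(n2 / n1))

def brewster_angle(n1, n2):
    """Angle of incidence at which the reflected light is completely polarized."""
    return math.degrees(math.atan(n2 / n1))

def malus(I0, theta_deg):
    """Intensity of polarized light after a filter at angle theta to its axis."""
    return I0 * math.cos(math.radians(theta_deg)) ** 2

# Light travelling from water (n ≈ 1.33) toward air (n = 1.00):
print(critical_angle(1.33, 1.00))  # ≈ 48.8 degrees
print(snell(1.33, 1.00, 60.0))     # None: total internal reflection
print(brewster_angle(1.00, 1.33))  # ≈ 53.1 degrees for reflection off water
print(malus(100.0, 45.0))          # 50.0
```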
## Summary
#### 1.1: The Propagation of Light
• The speed of light in a vacuum is $$\displaystyle c=2.99792458×10^8m/s≈3.00×10^8m/s$$.
• The index of refraction of a material is $$\displaystyle n=c/v$$, where v is the speed of light in a material and c is the speed of light in a vacuum.
• The ray model of light describes the path of light as straight lines. The part of optics dealing with the ray aspect of light is called geometric optics.
• Light can travel in three ways from a source to another location: (1) directly from the source through empty space; (2) through various media; and (3) after being reflected from a mirror.
#### 1.2: The Law of Reflection
• When a light ray strikes a smooth surface, the angle of reflection equals the angle of incidence.
• A mirror has a smooth surface and reflects light at specific angles.
• Light is diffused when it reflects from a rough surface.
#### 1.3: Refraction
• The change of a light ray’s direction when it passes through variations in matter is called refraction.
• The law of refraction, also called Snell’s law, relates the indices of refraction for two media at an interface to the change in angle of a light ray passing through that interface.
#### 1.4: Total Internal Reflection
• The incident angle that produces an angle of refraction of 90° is called the critical angle.
• Total internal reflection is a phenomenon that occurs at the boundary between two media, such that if the incident angle in the first medium is greater than the critical angle, then all the light is reflected back into that medium.
• Fiber optics involves the transmission of light down fibers of plastic or glass, applying the principle of total internal reflection.
• Cladding prevents light from being transmitted between fibers in a bundle.
• Diamonds sparkle due to total internal reflection coupled with a large index of refraction.
#### 1.5: Dispersion
• The spreading of white light into its full spectrum of wavelengths is called dispersion.
• Rainbows are produced by a combination of refraction and reflection, and involve the dispersion of sunlight into a continuous distribution of colors.
• Dispersion produces beautiful rainbows but also causes problems in certain optical systems.
#### 1.6: Huygens’s Principle
• According to Huygens’s principle, every point on a wave front is a source of wavelets that spread out in the forward direction at the same speed as the wave itself. The new wave front is tangent to all of the wavelets.
• A mirror reflects an incoming wave at an angle equal to the incident angle, verifying the law of reflection.
• The law of refraction can be explained by applying Huygens’s principle to a wave front passing from one medium to another.
• The bending of a wave around the edges of an opening or an obstacle is called diffraction.
#### 1.7: Polarization
• Polarization is the attribute that wave oscillations have a definite direction relative to the direction of propagation of the wave. The direction of polarization is defined to be the direction parallel to the electric field of the EM wave.
• Unpolarized light is composed of many rays having random polarization directions.
• Unpolarized light can be polarized by passing it through a polarizing filter or other polarizing material. The process of polarizing light decreases its intensity by a factor of 2.
• The intensity, I, of polarized light after passing through a polarizing filter is $$\displaystyle I=I_0\cos^2 θ$$, where $$\displaystyle I_0$$ is the incident intensity and $$\displaystyle θ$$ is the angle between the direction of polarization and the axis of the filter.
• Polarization is also produced by reflection.
• Brewster’s law states that reflected light is completely polarized at the angle of reflection $$\displaystyle θ_b$$, known as Brewster’s angle.
• Polarization can also be produced by scattering.
• Several types of optically active substances rotate the direction of polarization of light passing through them.
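Two of the results above (the factor-of-2 intensity loss at the first ideal filter and Malus's law between successive filter axes) combine in the classic polarizer-stack calculation. A small sketch; the angles are chosen for illustration:

```python
import math

def through_polarizers(I0, angles_deg):
    """Intensity of initially unpolarized light after a stack of ideal
    polarizers: the first filter halves the intensity, and each later
    one applies Malus's law relative to the previous filter's axis."""
    I = I0 / 2.0
    for prev, cur in zip(angles_deg, angles_deg[1:]):
        I *= math.cos(math.radians(cur - prev)) ** 2
    return I

print(through_polarizers(100.0, [0, 90]))      # ≈ 0: crossed polarizers block the light
print(through_polarizers(100.0, [0, 45, 90]))  # ≈ 12.5: a middle filter restores transmission
```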
## Contributors
Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
https://xanhacks.gitlab.io/ctf-docs/pwn/format-string/01-introduction-format-string/ | # Introduction - Format string
• %d or %i: Argument will be used as decimal integer (signed or unsigned)
• %o: An octal unsigned integer
• %u: An unsigned decimal integer - this means negative numbers will wrap around
• %x or %X: An unsigned hexadecimal integer
• %f, %g or %G: A floating-point number. %f defaults to 6 places after the decimal point (which is locale-dependent - e.g. in de_DE it will be a ,). %g and %G will trim trailing zeroes and switch to scientific notation (like %e) if the numbers get small or large enough.
• %e or %E: A floating-point number in scientific (XXXeYY) notation
• %s: A string
• %b: As a string, interpreting backslash escapes, except that octal escapes are of the form \0 or \0ooo
• %n: Write the number of characters printed thus far to an int variable | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5287355184555054, "perplexity": 11987.292294867417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104678225.97/warc/CC-MAIN-20220706212428-20220707002428-00532.warc.gz"} |
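These conversions come from C's printf family (%b is specific to the shell's printf builtin). Python's printf-style % operator implements most of them, which gives a quick sandbox for checking what each conversion produces (a sketch, not part of the original page):

```python
# A few of the conversions above, exercised via Python's "%" operator,
# which mirrors C's printf for these specifiers:
print("%d" % -42)              # decimal integer -> -42
print("%o" % 64)               # octal -> 100
print("%x / %X" % (255, 255))  # hexadecimal -> ff / FF
print("%f" % 3.14159)          # fixed point, 6 places -> 3.141590
print("%e" % 123456.0)         # scientific notation -> 1.234560e+05
print("%g" % 123456.0)         # trims / switches notation -> 123456
print("%s" % "hello")          # string -> hello
```

Note that %n is deliberately absent here: it writes to memory rather than producing output, which is exactly why it matters for format-string exploitation in C.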
https://stats.stackexchange.com/questions/332089/numerical-gradient-checking-best-practices | # Numerical gradient checking (best practices)
I've implemented a neural network and am using numerical gradient checking to validate the back-propagation algorithm is working correctly.
I'm using the standard method to calculate the numerical gradient:
$J'(\theta) \approx \frac{J(\theta + \epsilon) - J(\theta - \epsilon)}{2 \epsilon}$ where $\epsilon = 10^{-4}$, and
norm(gradients - numericalGradients)/norm(gradients + numericalGradients) < 10e-8
to check back-propagation is operating correctly.
I'm currently checking throughout training for testing purposes and I think that it is functioning correctly, however, as the network approaches its solution the gradients become very small and I think this is why the check fails.
So my question is what are the best practices for when to perform gradient checking?
Only once at the start? Or only while the smallest gradients are still N times bigger than epsilon?
#!/usr/bin/python
import numpy as np
import matplotlib.pyplot as plt
class NeuralNet:
    EPSILSON = 1e-4  # central-difference step size, epsilon = 10^-4
def __init__(self, _maxIter=250, _nHidden=3):
# Set hyperparameters
self.maxIter = _maxIter
self.nHidden = _nHidden
self.learningRate = np.linspace(0.5, 0.05, self.maxIter)
self.momentum = 0.5
self.enNesterov = False # Nesterov accelerated gradient (https://arxiv.org/pdf/1212.0901v2.pdf)
self.cost = []
self.dW1 = 0
self.dW2 = 0
def initNet(self, _nInput, _nOutput):
self.nInput = _nInput
self.nOutput = _nOutput
self.W1 = np.random.rand(self.nInput, self.nHidden)/100
self.W2 = np.random.rand(self.nHidden, self.nOutput)/100
def train(self, X, y):
# Initialise network structure
self.initNet(X.shape[1], y.shape[1])
# Train - full batch
for m in range(self.maxIter):
# Check gradient descent is operating as expected
# Train net
yHat = self.feedforward(X)
J = self.costFunction(y, yHat)
self.cost.append(J)
dJdW1, dJdW2 = self.backprop(X, y, yHat)
self.updateWeights(dJdW1, dJdW2, self.learningRate[m])
def feedforward(self, X):
self.z2 = np.dot(X, self.W1)
self.a2 = self.sigmoid(self.z2)
self.z3 = np.dot(self.a2, self.W2)
yHat = self.sigmoid(self.z3)
return yHat
def costFunction(self, y, yHat):
J = 0.5 * np.sum((y - yHat)**2)
return J
def backprop(self, X, y, yHat):
# NB: delta 3 depends on the cost function applied
delta3 = np.multiply(-(y - yHat), self.sigmoidPrime(self.z3))
dJdW2 = np.dot(self.a2.T, delta3)
delta2 = np.dot(delta3, self.W2.T) * self.sigmoidPrime(self.z2)
dJdW1 = np.dot(X.T, delta2)
return dJdW1, dJdW2
def updateWeights(self, dJdW1, dJdW2, learnRate):
if self.enNesterov:
dW1_prev = self.dW1
dW2_prev = self.dW2
self.dW1 = learnRate*dJdW1 + self.momentum*self.dW1
self.dW2 = learnRate*dJdW2 + self.momentum*self.dW2
self.W1 = self.W1 - (1+self.momentum)*self.dW1 - self.momentum*dW1_prev
self.W2 = self.W2 - (1+self.momentum)*self.dW2 - self.momentum*dW2_prev
else:
self.dW1 = learnRate*dJdW1 + self.momentum*self.dW1
self.dW2 = learnRate*dJdW2 + self.momentum*self.dW2
self.W1 = self.W1 - self.dW1
self.W2 = self.W2 - self.dW2
def getWeights(self):
return np.concatenate((self.W1.ravel(), self.W2.ravel()))
def setWeights(self, weights):
W1_start = 0
W1_end = self.nInput * self.nHidden
self.W1 = np.reshape(weights[W1_start:W1_end], (self.nInput, self.nHidden))
W2_end = W1_end + self.nHidden * self.nOutput
self.W2 = np.reshape(weights[W1_end:W2_end], (self.nHidden, self.nOutput))
    def checkGradients(self, X, y):
        # Analytic gradients via back-propagation
        yHat = self.feedforward(X)
        dJdW1, dJdW2 = self.backprop(X, y, yHat)
        gradients = np.concatenate((dJdW1.ravel(), dJdW2.ravel()))
        # Numerical gradients via central differences
        numericalGradients = self.computeNumericalGradients(X, y)
        # Compare
        diff = (np.linalg.norm(gradients - numericalGradients) /
                np.linalg.norm(gradients + numericalGradients))
        if diff < 10e-8:
            str = 'PASS'
        else:
            str = 'FAIL'
        print('[{0}] Gradient checking. diff = {1}'.format(str, diff))

    def computeNumericalGradients(self, X, y):
        weights = self.getWeights()
        numGrad = np.zeros(weights.shape)
        perturb = np.zeros(weights.shape)
        for p in range(len(weights)):
            # Set perturbation for this weight only
            perturb[p] = self.EPSILSON
            # Positive perturbation
            self.setWeights(weights + perturb)
            yHat = self.feedforward(X)
            Jpos = self.costFunction(y, yHat)
            # Negative perturbation
            self.setWeights(weights - perturb)
            yHat = self.feedforward(X)
            Jneg = self.costFunction(y, yHat)
            numGrad[p] = (Jpos - Jneg) / (2 * self.EPSILSON)
            # Reset perturbation for next iteration
            perturb[p] = 0
        # Reset weights
        self.setWeights(weights)
        return numGrad
def sigmoid(self, z):
return 1 / (1 + np.exp(-z))
def sigmoidPrime(self, z):
x = np.exp(-z)
return (x / ((1 + x)**2))
# Data
X = np.array(([3, 5], [5, 1], [10, 2]), dtype=float)
y = np.array(([75], [82], [93]), dtype=float)
# Normalize
X = X/np.amax(X, axis=0)
y = y/100
# Train a network
NN = NeuralNet()
NN.train(X, y)
h = NN.feedforward(X)
# Plotting
f, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=False)
ax1.plot(NN.cost)
ax1.set(xlabel='Iteration', ylabel='Cost', title='')
ax1.grid()
• Have you tried smaller values for $\epsilon$ (e.g. $10^{-8}$)? Near a minimum the gradient usually changes more quickly so a large step size might yield too coarse of a numerical approximation. – Ruben van Bergen Mar 6 '18 at 20:54
• @RubenvanBergen Thanks, yes I have played with the epsilon value, and indeed $10^{-8}$ produces a pass to a tolerance of $10^{-8}$ throughout training as the gradients tend towards their minima. Is the test just as valid with a much smaller epsilon (within numerical tolerances)? I read $10^{-4}$ throughout the literature and am wondering why this value? – CatsLoveJazz Mar 6 '18 at 21:04
• Personally, as a general rule I always try to use as small a step-size as possible (i.e. without causing numerical underflow issues) to get the best approximation. I don't know why $10^{-4}$ would be special - perhaps if you use some kind of standardization for your variables/cost function this is a step-size that is usually good enough? But yes, as far as I know the test is just as valid, if not more so, with smaller $\epsilon$. – Ruben van Bergen Mar 7 '18 at 7:57
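The $\epsilon$ trade-off discussed in these comments is easy to see on a function with a known derivative: the central-difference truncation error shrinks like $\epsilon^2$, but floating-point round-off grows as $\epsilon$ shrinks, so the total error is not monotonic. A sketch independent of the network code above:

```python
import numpy as np

def central_diff(f, x, eps):
    """Two-sided numerical derivative, as in the question."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x = 1.0
exact = np.cos(x)  # known derivative of sin

for eps in (1e-2, 1e-4, 1e-6, 1e-8):
    approx = central_diff(np.sin, x, eps)
    print(eps, abs(approx - exact))
```

On a typical double-precision run the error bottoms out somewhere around $\epsilon \approx 10^{-5}$ and then climbs again, which is one reason tutorials settle on moderate values like $10^{-4}$ rather than the smallest representable step.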
https://stats.stackexchange.com/questions/159990/bayesian-regression-full-conditional-distribution | # Bayesian regression full conditional distribution
I have a problem with the derivation of the full conditional distribution of the regression coefficients in a simple Bayesian regression. The source of the following equations is:
• Lynch (2007). Introduction to Applied Bayesian Statistics and Estimation for Social Scientists, page 170 & 171.
The posterior distribution (with uniform priors on all parameters) is given by: $$P(\beta, \sigma | X, Y) \propto (\sigma^{2})^{-(n/2+1)} \exp\{-\frac{1}{2\sigma^{2}}(Y-X\beta)^{T}(Y-X\beta)\}$$
Hence the full conditional distribution of the coefficient $\beta$ only requires the kernel: $$\exp\{-\frac{1}{2\sigma^{2}}(Y-X\beta)^{T}(Y-X\beta)\}$$ where $\beta$ is a $p$ dimensional parameter vector $X$ is a $n\times p$ predictor matrix and $Y$ is the $n$ dimensional vector of responses.
This can be expanded into: $$\exp\{ -\frac{1}{2\sigma^{2}} [Y^{T}Y - Y^{T}X\beta - \beta^{T}X^{T}Y + \beta^{T}X^{T}X\beta] \}$$
Since Y is constant with respect to $\beta$, it can be dropped and the middle two terms can be grouped together. This leads to: $$\exp\{ -\frac{1}{2\sigma^{2}} [ \beta^{T}X^{T}X\beta - 2\beta^{T}X^{T}Y ] \}$$ The next step confuses me. The author multiplies the whole equation by $(X^{T}X)(X^{T}X)^{-1}$. The problem is not the multiplication itself; it's the author's result: $$\exp\{\frac{1}{2\sigma^{2}(X^{T}X)^{-1}} [\beta^{T}\beta - 2\beta^{T}(X^{T}X)^{-1}(X^{T}Y) ] \}$$ What is confusing to me is the multiplication inside the brackets. For me, pre-multiplying by $(X^{T}X)^{-1}$ leads to: $$(X^{T}X)^{-1}\beta^{T}X^{T}X\beta - 2(X^{T}X)^{-1}\beta^{T}X^{T}Y$$
Why is it allowed to swap the terms $\beta^{T}$ and $(X^{T}X)^{-1}$?
Since $\beta^{T}X^{T}X\beta$ gives you a scalar value and multiplying it by $(X^{T}X)^{-1}$ results in a square matrix, this is not the same as $\beta^{T}(X^{T}X)^{-1}(X^{T}X)\beta$. But the author seems to "ignore" this in his solution. What am I missing?
Edit Here is a picture of the relevant pages:
As you show in the reproduction what is written in this book, the solution is incorrect for the simple reason that the quantity $(X^{T}X)^{-1}$ is a $p\times p$ matrix, not a scalar. Hence you cannot divide by $(X^{T}X)^{-1}$. (This is a terrible way of explaining this standard derivation!)
What you can write instead is $$\beta^{T}X^{T}X\beta - 2\beta^{T}X^{T}Y=\beta^{T}X^{T}X\beta - 2\beta^{T}(X^{T}X)(X^{T}X)^{-1}X^{T}Y$$ These are the first two terms of the perfect squared norm $$\left(\beta-\hat\beta\right)^T (X^{T}X) \left(\beta-\hat\beta\right)$$ where $\hat\beta=(X^{T}X)^{-1}X^{T}Y$ is the least square estimator.
Therefore, the full conditional posterior distribution of $\beta$, given $\sigma$ and $\hat\beta$, is a normal distribution $$\mathcal{N}_p(\hat\beta,\sigma^{2}(X^{T}X)^{-1})$$
Note: The prior on $\sigma$ corresponding to the joint posterior is $1/\sigma^2$ rather than a uniform prior.
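In a Gibbs sampler, this full conditional is what the $\beta$-step draws from directly. A minimal NumPy sketch under the result above (the data are synthetic and $\sigma$ is held fixed rather than drawn from its own conditional):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: Y = X beta + noise
n, p = 50, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
sigma = 0.3
Y = X @ beta_true + sigma * rng.normal(size=n)

# Least-squares estimate and the conditional covariance factor
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ Y

# One draw from beta | sigma, Y ~ N_p(beta_hat, sigma^2 (X'X)^{-1})
beta_draw = rng.multivariate_normal(beta_hat, sigma**2 * XtX_inv)
print(beta_hat)
print(beta_draw)
```

A full Gibbs sampler would alternate this draw with a draw of $\sigma^2$ from its inverse-gamma full conditional.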
http://tex.stackexchange.com/questions/155194/tufte-like-axis-with-pgfplots?answertab=active | # Tufte like axis with pgfplots
How can I achieve this kind of axis with pgfplots? The image is taken from Szymon Bęczkowski's awesome PhD thesis design (thesis link).
Here is some data to play with:
\documentclass{standalone}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
\begin{axis}[xlabel={$L$ [H]},ylabel={$\hat{I}_{DM}$ [A]},axis lines*=left,grid,xtick=data]
\addplot coordinates {(948e-6,1.61981) (1.5e-3,1.02377) (2e-3,0.769047) (2.5e-3,0.614994) (3e-3,0.503511)};
\end{axis}
\end{tikzpicture}
\end{document}
This thesis is truly amazing. Is there a TeX source for it? – Eekhoorn Jan 21 '14 at 17:50
Ask the author. I'd be very interested as well. – s__C Jan 21 '14 at 18:05
Here is the main source file pastebin.com/7dPDyucr and also the whole project so you can build it yourself (tables, graphics and bibliography) dl.dropboxusercontent.com/u/487668/Thesis.zip Hope it will be useful for you. – Szymon Bęczkowski Jan 22 '14 at 10:13
@SzymonBęczkowski Thank you very much. – s__C Jan 22 '14 at 11:32
You can shift your axes, ticks and labels to obtain the axis effect. Adjusting the color of the plot and the size of the marks, gets you closer to the general style. Labels may be added via nodes referencing points in the data coordinate system.
\documentclass{article}
\usepackage{pgfplots}
\pgfplotsset{compat=1.9}
\pgfkeys{/pgfplots/x axis shift down/.style={
x axis line style={yshift=-#1},
xtick style={yshift=-#1},
xticklabel shift={#1}}}
\pgfkeys{/pgfplots/y axis shift left/.style={
y axis line style={xshift=-#1},
ytick style={xshift=-#1},
yticklabel shift={#1}}}
\begin{document}
\begin{tikzpicture}[every pin/.style={red!50!black,font=\small\sffamily}]
\begin{axis}[xlabel={$L$ [H]},
ylabel={$\hat{I}_{DM}$ [A]},
separate axis lines,
axis x line*=bottom,
x axis shift down=10pt,
enlarge x limits=false,
axis y line*=left,
y axis shift left=15pt,
xtick=data,
ytick={0.4,0.6,...,1.801},
ymin=0.4,ymax=1.8]
\addplot coordinates {(948e-6,1.61981) (1.5e-3,1.02377)
(2e-3,0.769047) (2.5e-3,0.614994) (3e-3,0.503511)};
\node[coordinate,pin=above right:{2007}] at (axis cs:1.5e-3,1.02377) {};
\end{axis}
\end{tikzpicture}
\end{document}
(The last ytick value is 1.801 instead of 1.8 because of rounding problems in the internal arithmetic.)
To adjust the thickness of the ticks, you can issue:
\pgfplotsset{every axis/.append style={semithick,tick style={major tick
length=4pt,semithick,black}}}
before the tikzpicture. Note that the ticks at the end of the axis are by default half the width of the others, so making them too thick will give a bad visual appearance. Making them all of equal width seems to be non-trivial, the coding being buried in the internals of pgfplots. Update: Christian Feuersänger points out that a solution to this is provided in a comment to How to make the tick thickness as the axis line?. A full example in the current case would then be:
\documentclass{article}
\usepackage{pgfplots}
\pgfplotsset{compat=1.10}
\makeatletter
\def\pgfplots@drawticklines@INSTALLCLIP@onorientedsurf#1{}%
\def\pgfplots@drawgridlines@INSTALLCLIP@onorientedsurf#1{}%
\makeatother
\pgfplotsset{every axis/.append style={semithick,tick style={major tick length=4pt,semithick,black}}}
\pgfkeys{/pgfplots/x axis shift down/.style={
x axis line style={yshift=-#1},
xtick style={yshift=-#1},
xticklabel shift={#1}}}
\pgfkeys{/pgfplots/y axis shift left/.style={
y axis line style={xshift=-#1},
ytick style={xshift=-#1},
yticklabel shift={#1}}}
\begin{document}
\begin{tikzpicture}[every pin/.style={red!50!black,font=\small\sffamily}]
\begin{axis}[xlabel={$L$ [H]},
ylabel={$\hat{I}_{DM}$ [A]},
separate axis lines,
axis x line*=bottom,
x axis shift down=10pt,
enlarge x limits=false,
axis y line*=left,
y axis shift left=15pt,
xtick=data,
ytick={0.4,0.6,...,1.801},
ymin=0.4,ymax=1.8]
\addplot coordinates {(948e-6,1.61981) (1.5e-3,1.02377) (2e-3,0.769047) (2.5e-3,0.614994) (3e-3,0.503511)};
\node[coordinate,pin=above right:{2007}] at (axis cs:1.5e-3,1.02377) {};
\end{axis}
\end{tikzpicture}
\end{document}
Any option to get the "tick lines" same as the axis lines ? – s__C Jan 21 '14 at 16:00
@s__C Code added at end. – Andrew Swann Jan 22 '14 at 20:12
I think if you test it with scaled y ticks=base 10:2 you'll see the 10^-2 is badly positioned. Any fix? – s__C Jan 24 '14 at 9:29
@s__C You can add every y tick scale label/.append style={xshift=-#1} to the definition of the y axis shift left/.style. – Andrew Swann Jan 26 '14 at 13:09
thank you for the complement – s__C Jan 26 '14 at 14:07
In addition to shifting the axis, you can go one step further and create what Tufte calls "range frames", where the axis lines only cover the range of the data. One way of doing this is described in Creating Tufte-style bar charts and scatterplots using PGFPlots in TUGboat issue 34:
\documentclass{standalone}
\usepackage{pgfplots}
\begin{document}
\makeatletter
\def\pgfplotsdataxmin{\pgfplots@data@xmin}
\def\pgfplotsdataxmax{\pgfplots@data@xmax}
\def\pgfplotsdataymin{\pgfplots@data@ymin}
\def\pgfplotsdataymax{\pgfplots@data@ymax}
\makeatother
\pgfplotsset{
range frame/.style={
tick align=outside,
axis line style={opacity=0},
after end axis/.code={
\draw ({rel axis cs:0,0}-|{axis cs:\pgfplotsdataxmin,0}) -- ({rel axis cs:0,0}-|{axis cs:\pgfplotsdataxmax,0});
\draw ({rel axis cs:0,0}|-{axis cs:0,\pgfplotsdataymin}) -- ({rel axis cs:0,0}|-{axis cs:0,\pgfplotsdataymax});
}
}
}
\begin{tikzpicture}
\begin{axis}[
range frame,
xlabel={$L$ [H]},
ylabel={$\hat{I}_{DM}$ [A]},
axis lines*=left,
xtick=data, ymin=0.41
]
\addplot +[black, mark options=fill=black] coordinates {(948e-6,1.61981) (1.5e-3,1.02377) (2e-3,0.769047) (2.5e-3,0.614994) (3e-3,0.503511)};
\end{axis}
\end{tikzpicture}
\end{document}
In my code, if you just specify enlarge y limits=false instead of yticks, ymax, ymin you get the same output. – Andrew Swann Jan 21 '14 at 14:25
@AndrewSwann: Good point! – Jake Jan 21 '14 at 14:31
I tried to use all the solutions. This one is working except for one point: the axis line style={opacity=0} seems to have no effect, therefore masking the frame. – Alessandro Cuttin Sep 14 '14 at 8:27
@AlessandroCuttin: What version of TikZ and PGFPlots are you using? – Jake Sep 14 '14 at 10:38
Package: tikz 2013/12/13 v3.0.0 (rcs-revision 1.142) Package: pgfplots 2014/02/28 v1.10 Data Visualization (1.10-2-gb39fe75) I recently updated to TeXlive 2014 – Alessandro Cuttin Sep 14 '14 at 10:49
Pgfplots allows very fine axis customization thanks to options like:
• x/y/z axis line style
• x/y/ztick style
• x/y/zticklabel style
Basically, if you shift elements in a proper manner, you can achieve something similar. A humble attempt, intended as a proof of concept:
\documentclass[border=10pt]{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.9}
\begin{document}
\begin{tikzpicture}[font=\sffamily]
\begin{axis}[extra description/.code={% to place xlabel and ylabel more arbitrarily
\node[below=7pt] at ([xshift=1.5cm]xticklabel* cs:1){$L$ [H]};
\node at ([xshift=-1.5cm]yticklabel* cs:0.5){$\hat{I}_{DM}$ [A]};
},
axis x line=left,
axis y line=left,
y axis line style={xshift=-6pt,-},
x axis line style={yshift=-6pt,-},
xtick style={yshift=-4pt,black}, % black to override the default style
ytick style={xshift=-4pt,black}, % black to override the default style
xticklabel style={yshift=-5pt},
yticklabel style={xshift=-5pt},
scaled ticks=false,
ymin=0.5,
ymax=1.75,
ytick={0.5,0.75,1,1.25,1.5,1.75},
yticklabels={0.5,0.75,1,1.25,1.5,1.75},
xmin=0.9e-3,
xmax=3.2e-3,
xtick={0.9e-3,1.4e-3,2e-3,2.6e-3,3.2e-3},
xticklabels={0.9e-3,1.4e-3,2e-3,2.6e-3,3.2e-3}]
\addplot coordinates {(948e-6,1.61981) (1.5e-3,1.02377) (2e-3,0.769047) (2.5e-3,0.614994) (3e-3,0.503511)};
\end{axis}
\end{tikzpicture}
\end{document}
The result:
https://proofwiki.org/wiki/Finite_Sequences_in_Set_Form_Acyclic_Graph

# Finite Sequences in Set Form Acyclic Graph
## Theorem
Let $S$ be a set.
Let $V$ be the set of finite sequences in $S$.
Let $E$ be the set of unordered pairs $\{p, q\}$ of elements of $V$ such that either:
$q$ is formed by extending $p$ by one element or
$p$ is formed by extending $q$ by one element.
That is:
$| \operatorname{Dom}(p) * \operatorname{Dom}(q) | = 1$, where $*$ is symmetric difference and
$p \restriction D = q \restriction D$, where $D = \operatorname{Dom}(p) \cap \operatorname{Dom}(q)$
Then $T = (V, E)$ is an acyclic graph.
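A quick finite sanity check of this statement (not part of the ProofWiki page): take $S = \{0, 1\}$ and all sequences of length at most 4, build the graph, and verify that it is connected with $|V| - 1$ edges, hence a tree and in particular acyclic. The alphabet and the length cap are arbitrary choices for the experiment.

```python
from collections import deque
from itertools import product

# Vertices: all finite sequences over S = {0, 1} up to length 4, as tuples.
# Edges: {p, q} where q extends p by one final element.
S, max_len = (0, 1), 4
V = [seq for n in range(max_len + 1) for seq in product(S, repeat=n)]
E = {frozenset((p, p + (a,))) for p in V if len(p) < max_len for a in S}

adj = {v: [] for v in V}
for e in E:
    p, q = tuple(e)
    adj[p].append(q)
    adj[q].append(p)

# Breadth-first search from the empty sequence reaches every vertex,
# so the graph is connected.
seen, queue = {()}, deque([()])
while queue:
    for w in adj[queue.popleft()]:
        if w not in seen:
            seen.add(w)
            queue.append(w)

assert len(seen) == len(V)   # connected
assert len(E) == len(V) - 1  # connected with |V| - 1 edges => tree => acyclic
print(len(V), len(E))        # 31 30
```

Every nonempty sequence has exactly one shorter neighbour (its one-element truncation), which is why the edge count comes out as $|V| - 1$.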
https://library.kiwix.org/quantumcomputing.stackexchange.com_en_all_2021-04/A/question/12914.html

## Relation between Wigner quasi-probability distribution and statistical second-moment
Is there any relation between the Wigner quasi-probability distribution function $$W$$ and the statistical second-moment (also known as covariance matrix) of a density matrix of a continuous variable state, such as Gaussian state?
You mean something like $$W_{G}(\mathbf{r}) =\frac{2^{n}}{\pi^{n} \sqrt{\operatorname{Det} \sigma}} \mathrm{e}^{-(\mathbf{r}-\overline{\mathbf{r}})^{\top} \boldsymbol{\sigma}^{-1}(\mathbf{r}-\overline{\mathbf{r}})},$$ where $$W_{G}(\mathbf{r})$$ is the Wigner function corresponding to a Gaussian state, $$\mathbf{\sigma}$$ its covariance matrix, and $$\overline{r}$$ the vector of first moments?
If yes, then, see, for example, Eqn. (4.50) of Quantum Continuous Variables.
Thank you very much. It's exactly what I wanted. – Kianoosh.kargar – 2020-07-16T14:21:50.010
https://www.physicsforums.com/threads/quantum-degeneracy-problem-electron-on-a-ring.630216/

# Quantum degeneracy problem, electron on a ring
• #1
## The Attempt at a Solution
So this is a lot like the infinite square well, except periodic. If S is an arc length, then $S = \theta R$ so $\frac{d^2}{dS^2} = \frac{1}{R^2}\frac{d^2}{d\theta^2}$, which is more convenient to use in the hamiltonian. So for the hamiltonian I get:
$$H = \frac{-\hbar^2}{2m}\frac{1}{R^2}\frac{d^2}{d\theta^2} + V_0$$
With Schrodinger's equation, I get
$$\frac{d^2 \psi}{d\theta^2} = -\frac{2mR^2(E - V_0)}{\hbar^2}\psi$$
Which gives solutions of the form $\psi = \frac{1}{\sqrt{2\pi}}e^{\pm i k \theta}$, where $k = \sqrt{\frac{2mR^2(E - V_0)}{\hbar^2}}$.
Then, because it's a ring, we need $\psi(\theta + 2\pi) = \psi(\theta)$ for any $\theta$, which gives us the requirement that k is an integer. So our energy levels are $E = \hbar^2 k^2/2mR^2 + V_0$, and it seems like they have a degeneracy of 4 because we have two functions for each k with the same energy, and then for each of them, the electron's spin can be up or down. Is that right?
As for part (b), I have no idea... Benzene has 6 free electrons, so according to my degeneracy, it completely fills the first energy level, and then there are 2 electrons in the 2nd energy level. Ok...then they ask about a compound with 4 electrons. This seems like it just fills the first energy level, but that seems too simple and stupid to be right.
Can anyone help me out?
Thanks!
• #2 (TSny)
k is an integer including zero. What is the degeneracy of the energy level corresponding to k = 0?
• #3
k is an integer including zero. What is the degeneracy of the energy level corresponding to k = 0?
Well I guess that just has a degeneracy of 2 due to the spin, right?
Any idea on the aromatic thing? I'm totally clueless about that...
• #4
Ah, I think I see, after what you just said and reading the wiki article on Huckel's Rule... The number of states for k =/= 0 is 4 for each energy level. For k = 0 it's 2. So like Huckel's Rule, for energy level n, there are 4n + 2 states. So benzene is aromatic because it has 6 electrons, so fully completes the n = 1 energy level. The other compound has 4 however, so it fills the n = 0 level but only half fills the n = 1 level, so it's not stable (though from the little I know of chemistry, I thought that just meant it's more reactive, not less stable).
• #5 (TSny)
I think your analysis is now correct. I'm not very clear on the reactive vs. stable interpretation either. Maybe someone can clarify it.
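The shell-filling argument in posts #3 and #4 can be checked with a short script (our own sketch, not code from the thread): the $k = 0$ level holds 2 electrons and each $|k| = n > 0$ level holds 4, so the closed-shell electron counts come out as $4n + 2$.

```python
# Count one-electron states of a particle on a ring, level by level, and
# recover the 4n + 2 closed-shell counts discussed above.

def shell_capacity(n):
    """States with |k| == n: 2 spin states, times 2 directions when n > 0."""
    return 2 if n == 0 else 4

def electrons_to_fill(n_max):
    """Electrons needed to completely fill every shell up to |k| == n_max."""
    return sum(shell_capacity(n) for n in range(n_max + 1))

closed_shells = [electrons_to_fill(n) for n in range(4)]
print(closed_shells)  # [2, 6, 10, 14] -- the 4n + 2 counts

# Benzene's 6 pi electrons exactly fill the n = 0 and n = 1 shells,
# while 4 electrons leave the n = 1 shell only half filled.
assert electrons_to_fill(1) == 6
assert shell_capacity(0) < 4 < electrons_to_fill(1)
```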
http://pandasthumb.org/archives/2004/07/icons-of-id-neu.html

# Icons of ID: Neutrality
In this posting I will start exploring the importance of neutrality in evolution. In fact, as the evidence will show, neutrality is not only a requirement for evolvability but can also be selected for itself. In other words, evolutionary principles can lead to neutrality, and evolution will tend towards areas with many neutral neighbours.
Neutrality is a fascinating concept which has been shown to be of importance in understanding RNA and protein evolution. Additionally, the vast neutral networks may help understand 'convergent evolution'. Perhaps the lack of sequence similarity, but strong phenotype similarity, can be understood by neutral evolution and drift. In other words, due to the vastness of neutral networks, the sequence may have drifted while the phenotype remained basically the same. Thus what may appear to be convergent evolution may very well have been divergent evolution after all. And homologies which fail to be detected at the sequence level may still exist at the phenotype level.
There is an important consequence of this observation. Because of the redundancy in the genotype-fitness map, different genotypes are bound to have very similar (identical from any practical point of view) fitnesses. Unless there is a strongly “non-random” assignment of fitnesses (say all well-fit genotypes are put together in a single “corner” of the genotype space), a possibility exists that well-fit genotypes might form connected clusters (or networks) that might extend to some degree throughout the genotype space. If this were so, populations might evolve along these clusters by single substitutions and diverge genetically without going through any adaptive valleys. Another consequence of the extremely high dimensionality of the genotype space is the increased importance of chance and contingency in evolutionary dynamics. Because a) mutation is random (which gene will be altered to which allele is unpredictable), b) each specific mutation has a very small probability, and c) the number of genes subject to mutation is very large, the genotypes present will be significantly affected by the random order in which mutations occur. Thus, mutational order represents a major source of stochasticity in evolution in the genotype hyperspace82;91;92. One should expect that even with identical initial conditions and environmental factors different populations will diverge genetically.
## Neutral networks
A few words about the terminology. In what follows a neutral network is a contiguous set of sequences possessing the same fitness.
The existence of chains of well-fit genotypes that connect reproductively isolated genotypes was postulated by Dobzhansky and other earlier workers. In contrast, the models just described show it to be inevitable under broad conditions. The existence of percolating nearly-neutral networks of well-fit genotypes which allow for “nearly-neutral” divergence appears to be a general property of adaptive landscapes with a very large number of dimensions. Do existing experimental data substantiate this theoretical claim?
## Gene duplication
Conrad 29 puts forward an idea of an “extra-dimensional bypass” on adaptive landscapes. According to Conrad an increase in the dimensionality of an adaptive landscape is expected to transform isolated peaks into saddle points that can be easily escaped resulting in continuing evolution.
29 Conrad, M. “The Geometry of Evolution.” BioSystems 24 (1990): 61-81.
Such an increase in dimensionality is quite relevant since some ID proponents have argued that evolution cannot increase the dimension of its search space. Of course nature does not seem to be restricted by the imagination of ID proponents.
Gene duplication allows nature to add new dimensions to the genetic space. Natural evolution can thus begin searching in a simple space even if more advanced phenotypes cannot be found in that space. Because major biological shifts in body-plan complexity have resulted from adding new genes, EC should be able to utilize this kind of mutation as well. However, adding new genes requires variable-length genomes, which can be difficult to implement, as the next section discusses.
In the previous section, we gave examples of positive aspects of neutrality, e.g., in evolution strategies. However, one might argue that neutral encodings are in general disadvantageous, because they enlarge the search space (Radcliffe, 1991). In this section, we want to quantify this effect. Therefore, we derive the average time to detect a globally optimal, solution in NFL scenarios depending on the cardinality of the search space. This enables us to compare general properties of search algorithms on search spaces of different sizes.
This means that in the considered NFL scenario enlarging the genotype space by adding redundancy without a bias does not considerably increase the average number of iterations needed to find a desirable solution if initially m is large enough.
In the worst case, when initially only one element encodes a desirable solution, still the deterioration of the average search performance is bounded by a factor of two.
Note: m is the number of genotypes mapping to an optimal solution
The Geometry of Evolution
Abstract. Some structures are more suitable for self-organization through the Darwin-Wallace mechanism of variation and selection than others. Such evolutionary adaptability (or evolvability) can itself evolve through variation and selection, either by virtue of being associated with reliability and stability or by hitchhiking along with the advantageous traits whose appearance it facilitates. In order for a structure to evolve there must be a reasonable probability that genetic variation carries it from one adaptive peak to another; at the same time the structure should not be overly unstable to phenotypic perturbations, as this is incompatible with occupying a peak. Organizations that are complex in terms of numbers of components and interactions are more likely to meet the former condition, but less likely to meet the latter. Biological structures that are characterized by a high degree of component redundancy and multiple weak interactions satisfy these conflicting pressures.
The vast extension of the network of neutral paths suggests that extensive neutral networks of sequences folding into the same structure percolate the entire sequence space. The existence of extensive neutral networks meets a claim raised by Maynard-Smith for protein spaces that are suitable for efficient evolution. The evolutionary implications of neutral networks are explored in detail in [30]. Empirical evidence for a large degree of functional neutrality in protein space was presented recently by Wain-Hobson and co-workers [34].
[34] Martinez MA, Pezo V, Marliere P, Wain-Hobson S. Exploring the functional robustness of an enzyme by in vitro evolution. EMBO J. 1996 Mar 15;15(6):1203-10.
The evolution of natural proteins is thought to have occurred by successive fixation of individual mutations. In vitro protein evolution seeks to accelerate this process. RNA hypermutagenesis, cDNA synthesis in the presence of biased dNTP concentrations, delivers elevated mutant and mutation frequencies. Here lineages of active enzymes descended from the homotetrameric 78 residue dihydrofolate reductase (DHFR) encoded by the Escherichia coli R67 plasmid were generated by iterative RNA hypermutagenesis, resulting in > 20% amino acid replacement. The 22 residue N-terminus could be deleted yielding a minimum functional entity refractory to further changes, designating it as a determinant of R67 robustness. Complete substitution of the segment still allowed fixation of mutations. By the facile introduction of multiple mutations, RNA hypermutagenesis allows the generation of active proteins derived from extant genes through a mode unexplored by natural selection.
(Figure caption: The effect of neutrality on the shape of the fitness landscape. From Fitness Landscapes and Evolvability.)
Neutrality is but one of the many amazing aspects of protein and RNA networks. Additionally there is the scale-free nature which, combined with neutrality, can help understand many aspects of evolution such as robustness, evolvability, modularity, and degeneracy. And the fascinating part of it all is that scale-free networks can be explained by simple models of gene duplication and divergence.
http://www.lofoya.com/Solved/1325/there-are-20-couples-in-a-party-every-person-greets-every-person

# Moderate Permutation-Combination Solved Question (Aptitude Discussion)
Q. There are 20 couples in a party. Every person greets every person except his or her spouse. People of the same sex shake hands and those of opposite sex greet each other with a namaskar (It means bringing one's own palms together and raising them to the chest level). What is the total number of handshakes and namaskar's in the party?
A. $760$
B. $1,140$
C. $780$
D. $720$
Solution:
Option(B) is correct
There are 20 men and 20 women.
When a man meets a woman, there are two namaskars, whereas when a man meets a man (or a woman meets a woman) there is only one handshake.
Number of handshakes $= 2\times {^{20}C_2}$ (men and women) $= 2 \times \dfrac{20 \times 19}{2}=380$
For the number of namaskars: every man does 19 namaskars (to the 20 women, excluding his wife) and the women respond in the same way.
$= 2\times (20)(19)$
$= 2\times 380$
$=760$
Total,
$= 760 + 380$
$= \textbf{1140}$
Edit: Thank you Meghna for pointing out the error in the solution. The solution has been updated.
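As a sanity check of the counts above, the party can be brute-forced directly (this script is our own, not part of the original solution):

```python
from itertools import combinations

# 20 couples; every unordered pair of distinct people greets, except spouses.
# Same-sex pairs shake hands once; opposite-sex pairs exchange two namaskars
# (one from each person).
people = [(couple, sex) for couple in range(20) for sex in ("M", "F")]

handshakes = namaskars = 0
for (c1, s1), (c2, s2) in combinations(people, 2):
    if c1 == c2:        # spouses do not greet each other
        continue
    if s1 == s2:
        handshakes += 1
    else:
        namaskars += 2

print(handshakes, namaskars, handshakes + namaskars)  # 380 760 1140
```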
## (6) Comment(s)
Tejas Patil
()
In your explanation you haven't considered women greeting women (they fall under the category of same sex) as stated in the question.
Solution:
Same-sex greetings (handshakes):
Case 1: man greeting man / handshakes
20C2 = 380
Case 2: women greeting women / handshakes
20C2 = 380
Case 3: a male can greet 19 females and vice versa.
So, total no. of greetings = 20*19 = 380
Now, if you consider one greeting equal to two namaskars, i.e. one by each sex.
Thanks.
Meghna
()
$^{20}C_2$ is given as $380$. Number of namaskars has to be doubled.
Though the answer is correct there is a mistake in the derivation.
Deepak
()
Thank you Meghna for pointing out the error, solution has been corrected. Hope this is okay now.
Manu Mathur
()
Total handshakes(men and women) $=2*^{20}c_2=380$
Total namaskar $=20*19=380$
Balaji
()
Guess the answer is 780, because handshakes $=2*^{20}C_2=380$ and namaskaar $=20*20=400$.
So,in total $400+380=780$.
Ankit
()
The solution here I think is wrong,
as there will be:
$2(19 *20)$ handshakes for men and women (between same sex)
and $19 *20$ namaskars between men and women,
so $2*380 +380 = 1140$
http://calculator.tutorvista.com/midpoint-calculator.html

# Midpoint Calculator
The midpoint is the middle point of a line segment which is equidistant from both endpoints.
Midpoint Calculator (or Mid Point Calculator) is an online tool to calculate the midpoint of a line segment when its two end points are given. It takes the end points of the line segment as $(x_1, y_1)$ and $(x_2, y_2)$ and gives the coordinates of the midpoint, displayed as $(x_m, y_m)$.
Midpoint Formula:
Midpoint co-ordinates, $(x_{m}, y_{m}) = (\frac{x_{1} + x_{2}}{2}, \frac{y_{1} + y_{2}}{2})$
i.e. $x_{m} = \frac{x_{1} + x_{2}}{2}$ and $y_{m} = \frac{y_{1} + y_{2}}{2}$
## Midpoint Calculation Steps
Step 1 :
Observe the value of x1, x2, y1 and y2 from the given points
Step 2 :
Apply the formula for finding mid-point as: ( $\frac{(x_{1} + x_{2})}{2}$, $\frac{(y_{1} + y_{2})}{2}$)
## Midpoint Calculation Examples
1. ### Find the midpoint of the line segment with end points (2,3) and (4,5)
Step 1 :
Given that: x1 = 2, x2 = 4, y1 = 3 and y2 = 5
Step 2 :
Mid-point = ( $\frac{(x_{1} + x_{2})}{2}$, $\frac{(y_{1} + y_{2})}{2}$)
Mid-point = ( $\frac{(2 + 4)}{2}$, $\frac{(3 + 5)}{2}$)
Mid-point = ( $\frac{(6)}{2}$, $\frac{(8)}{2}$)
Mid-point = (3,4)
2. ### Find the midpoint of the line segment with end points (-2, -3) and (-6, -9)
Step 1 :
Given that: x1 = -2, x2 = -6, y1 = -3 and y2 = -9
Step 2 :
Mid-point = ( $\frac{(x_{1} + x_{2})}{2}$, $\frac{(y_{1} + y_{2})}{2}$)
Mid-point = ( $\frac{(-2 + (-6))}{2}$, $\frac{(-3 + (-9))}{2}$)
Mid-point = ( $\frac{(-8)}{2}$, $\frac{(-12)}{2}$)
Mid-point = (-4,-6)
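Both worked examples can be reproduced with a few lines of code (the `midpoint` helper below is our own naming, not part of the calculator being described):

```python
def midpoint(x1, y1, x2, y2):
    """Midpoint of the segment from (x1, y1) to (x2, y2)."""
    return ((x1 + x2) / 2, (y1 + y2) / 2)

print(midpoint(2, 3, 4, 5))      # (3.0, 4.0)
print(midpoint(-2, -3, -6, -9))  # (-4.0, -6.0)
```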
http://math.stackexchange.com/questions/226569/equality-holds-in-triangle-inequality-iff-both-numbers-are-positive-both-are-ne

# Equality holds in triangle inequality iff both numbers are positive, both are negative or one is zero
How do we show that equality holds in the triangle inequality $|a+b|=|a|+|b|$ iff both numbers are positive, both are negative or one is zero? I already showed that equality holds when one of the three conditions happens.
You can simplify your hypothesis to "both non-negative or both non-positive". – Hurkyl Nov 1 '12 at 5:33
If $a$ and $b$ are positive, then $|a+b|=a+b=|a|+|b|$. If they are negative, then $|a+b|=-a-b=|a|+|b|$. Suppose one of them is $0$. Without loss of generality suppose $a=0$. Then $|a+b|=|b|=|a|+|b|$.
If none of the three situations occurs, then between $a$ and $b$ one is positive and one negative. Without loss of generality, suppose $a$ is positive. Suppose $|a+b|=|a|+|b|$. If $a+b\geq 0$, then $a+b=a-b$ so that $b=0$, a contradiction. If $a+b<0$, then $-a-b=a-b$ so that $a=0$, a contradiction.
This is exactly the end of the proof I started writing. Do I have to show what happens when a is positive and b is negative and also a is negative and b is positive or one of the situations is enough? – Georgey Nov 1 '12 at 7:16
I clicked Enter by mistake without finishing the proof: can I write the following after showing when the inequality is equal: In order to complete the proof we will show an equality isn't a result when one of the parameters A or B is positive and the other is negative (which of them is not important because A*B is a product): $$ab-|ab| \le 0$$ $$ab+ab \le 0$$ (We get rid of the absolute value by taking minus out of it) $$2ab \le 0$$ $$ab \le 0$$ (We divide both sides of the inequality by 2) $$ab<0$$ Because the product of A positive and B negative is negative. – Georgey Nov 1 '12 at 7:55
After showing that when A,B are both positive or negative or one of them equals to zero the inequality equals, I show the only one left scenario which is one one of them is positive and the other is negative. This way I cover all of the options and I complete the IFF proof, right? – Georgey Nov 1 '12 at 8:02
If we have $$|a + b| = |a| + |b|$$
Then we have two cases. First $$a + b = |a| + |b| \implies a-|a| =|b|-b.$$ The left-hand side is $\le 0$ and the right-hand side is $\ge 0$, so both sides must be zero, which gives $a = |a|$ and $b = |b|$, i.e. both $a$ and $b$ are non-negative.
Similarly for the other case, $$-a - b = |a| + |b| \implies -|a|-a = b+|b|,$$ where the same analysis applies: both sides must be zero, so $a$ and $b$ are both nonpositive. In all cases $ab \geq 0$.
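The equivalence discussed in this thread ($|a+b| = |a|+|b|$ exactly when $ab \geq 0$) is easy to sanity-check numerically; the short Python sketch below is an illustration of the claim, not a substitute for the proof:

```python
import itertools

def equality_holds(a, b):
    # |a + b| == |a| + |b|, exact for integers
    return abs(a + b) == abs(a) + abs(b)

# Check the claimed equivalence on a grid of integer pairs:
# |a+b| = |a|+|b|  if and only if  a*b >= 0
vals = range(-10, 11)
assert all(equality_holds(a, b) == (a * b >= 0)
           for a, b in itertools.product(vals, vals))
print("equivalence verified on all tested integer pairs")
```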
https://www.zbmath.org/?q=ai%3Aseamone.ben+ai%3Agagnon.alizee
zbMATH — the first resource for mathematics
A method for eternally dominating strong grids. (English) Zbl 1450.05065
Summary: In the eternal domination game, an attacker attacks a vertex at each turn and a team of guards must move a guard to the attacked vertex to defend it. The guards may only move to adjacent vertices and no more than one guard may occupy a vertex. The goal is to determine the eternal domination number of a graph which is the minimum number of guards required to defend the graph against an infinite sequence of attacks. In this paper, we continue the study of the eternal domination game on strong grids. Cartesian grids have been vastly studied with tight bounds for small grids such as $$2 \times n, 3 \times n, 4 \times n$$, and $$5 \times n$$ grids, and recently it was proven in [I. Lamprou et al., Theor. Comput. Sci. 794, 27–46 (2019; Zbl 1433.05225)] that the eternal domination number of these grids in general is within $$O(m + n)$$ of their domination number which lower bounds the eternal domination number. S. Finbow et al. [Australas. J. Comb. 61, Part 2, 156–174 (2015; Zbl 1309.05134)] proved that the eternal domination number of strong grids is upper bounded by $$\frac{mn}{6} + O(m + n)$$. We adapt the techniques of I. Lamprou et al. [loc. cit.] to prove that the eternal domination number of strong grids is upper bounded by $$\frac{mn}{7} + O(m + n)$$. While this does not improve upon a recently announced bound of $$\lceil \frac{m}{3}\rceil \times \lceil \frac{n}{3}\rceil + O(m \sqrt{n})$$ [F. Mc Inerney et al., “Eternal domination in grids”, in: 11th International Conference, CIAC 2019, Rome, Italy, May 27–29. 311–322 (2019)] in the general case, we show that our bound is an improvement in the case where the smaller of the two dimensions is at most 6179.
MSC:
05C69 Vertex subsets with special properties (dominating sets, independent sets, cliques, etc.)
05C57 Games on graphs (graph-theoretic aspects)
91A43 Games involving graphs
91A46 Combinatorial games
Full Text:
References:
[1] J. Arquilla and H. Fredricksen. "Graphing" an optimal grand strategy. Military Oper. Res., 1(3):3-17, 1995.
[BCG+04] A. Burger, E. J. Cockayne, W. R. Gründlingh, C. M. Mynhardt, J. H. van Vuuren, and W. Winterbach. Infinite order domination in graphs. J. Combin. Math. Combin. Comput., 50:179-194, 2004.
[2] I. Beaton, S. Finbow, and J. A. MacDonald. Eternal domination numbers of 4×n grid graphs. J. Combin. Math. Combin. Comput., 85:33-48, 2013. · Zbl 1274.05348
[3] A. Braga, C. Souza, and O. Lee. The eternal dominating set problem for proper interval graphs. Inform. Process. Lett., 115:582-587, 2015. · Zbl 1329.05224
[4] N. Cohen, F. Mc Inerney, N. Nisse, and S. Pérennes. Study of a combinatorial game in graphs through linear programming. Algorithmica, 82(2):212-244, Feb 2020. · Zbl 1437.91112
[5] A. Z. Delaney and M. E. Messinger. Closing the gap: Eternal domination on 3×n grids. Contrib. Discrete Math., 12(1):47-61, 2017. · Zbl 1376.05114
[6] S. Finbow, M. E. Messinger, and M. F. van Bommel. Eternal domination in 3×n grids. Australas. J. Combin., 61:156-174, 2015. · Zbl 1309.05134
[7] S. Finbow and M. van Bommel. The eternal domination number for 3×n grid graphs. Australas. J. Combin., 76(1):1-23, 2020. · Zbl 1439.05177
[8] W. Goddard, S. M. Hedetniemi, and S. T. Hedetniemi. Eternal security in graphs. J. Combin. Math. Combin. Comput., 52, 2005. · Zbl 1067.05051
[9] J. L. Goldwasser, W. F. Klostermeyer, and C. M. Mynhardt. Eternal protection in grid graphs. Util. Math., 91:47-64, 2013. · Zbl 1300.05177
[10] D. Gonçalves, A. Pinlou, M. Rao, and S. Thomassé. The domination number of grids. SIAM J. Discrete Math., 25(3):1443-1453, 2011.
[11] W. F. Klostermeyer and G. MacGillivray. Eternal dominating sets in graphs. J. Combin. Math. Combin. Comput., 68, 2009. · Zbl 1176.05057
[12] W. F. Klostermeyer and C. M. Mynhardt. Protecting a graph with mobile guards. Appl. Anal. Discrete Math., 10:1-29, 2016. · Zbl 06750189
[13] I. Lamprou, R. Martin, and S. Schewe. Eternally dominating large grids. Theoret. Comput. Sci., 794:27-46, 2019. · Zbl 1433.05225
[14] F. Mc Inerney, N. Nisse, and S. Pérennes. Eternal domination in grids. In Algorithms and Complexity, volume 11485 of Lecture Notes in Comput. Sci., pages 311-322. Springer, Cham, 2019. · Zbl 07163796
[15] C. S. Revelle. Can you protect the Roman Empire? Johns Hopkins Mag., 50(2), 1997.
[16] C. S. Revelle and K. E. Rosing. Defendens imperium romanum: A classical problem in military strategy. Amer. Math. Monthly, 107:585-594, 2000. · Zbl 1039.90038
[17] I. Stewart. Defend the Roman Empire! Scientific American, pages 136-138, 1999.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
http://www.springer.com/us/book/9783764387235

# A Natural Introduction to Probability Theory
Authors: Meester, R.
• Discovers the theory along the way, rather than presenting it matter-of-factly at the beginning
• Contains many original and surprising examples
• A rigorous study without any measure theory
• Compactly written, but nevertheless very readable
• A probabilistic approach, appealing to intuition, introducing technical machinery only when necessary
eBook: ISBN 978-3-7643-8724-2 (PDF, digitally watermarked, DRM-free)
Softcover: ISBN 978-3-7643-8723-5
According to Leo Breiman (1968), probability theory has a right and a left hand. The right hand refers to rigorous mathematics, and the left hand refers to 'probabilistic thinking'. The combination of these two aspects makes probability theory one of the most exciting fields in mathematics. One can study probability as a purely mathematical enterprise, but even when you do that, all the concepts that arise do have a meaning on the intuitive level. For instance, we have to define what we mean exactly by independent events as a mathematical concept, but clearly, we all know that when we flip a coin twice, the event that the first gives heads is independent of the event that the second gives tails. Why have I written this book? I have been teaching probability for more than fifteen years now, and decided to do something with this experience. There are already many introductory texts about probability, and there had better be a good reason to write a new one. I will try to explain my reasons now.
Reviews
"The book [is] an excellent new introductory text on probability. The classical way of teaching probability is based on measure theory. In this book discrete and continuous probability are studied with mathematical precision, within the realm of Riemann integration and not using notions from measure theory…. Numerous topics are discussed, such as: random walks, weak laws of large numbers, infinitely many repetitions, strong laws of large numbers, branching processes, weak convergence and [the] central limit theorem. The theory is illustrated with many original and surprising examples and problems."
—ZENTRALBLATT MATH
"Most textbooks designed for a one-year course in mathematical statistics cover probability in the first few chapters as preparation for the statistics to come. This book in some ways resembles the first part of such textbooks: it's all probability, no statistics. But it does the probability more fully than usual, spending lots of time on motivation, explanation, and rigorous development of the mathematics…. The exposition is usually clear and eloquent…. Overall, this is a five-star book on probability that could be used as a textbook or as a supplement."
—MAA ONLINE
"It seems that a task to provide an introductory course on probability fulfilling the following requirements arises not so rarely: (A) The course should be accessible to students having only very modest preliminary knowledge of calculus, in particular, with no acquaintance with measure theory. (B) The presentation should be fully rigorous. (C) Nontrivial results should be given. (D) Motivation for further study of measure-theoretic probability ought to be provided, hence to content oneself with countable probability spaces is undesirable. R. Meester's book is an attempt to show that all these demands may be fulfilled in a reasonable way, however incompatible they may look at first sight."
—Mathematica Bohemica
• Experiments (pages 1-33)
• Random Variables and Random Vectors (pages 35-70)
• Random Walk (pages 71-79)
• Limit Theorems (pages 81-88)
• Intermezzo (pages 89-92)
## Bibliographic Information
Book Title: A Natural Introduction to Probability Theory
Authors: Meester, R.
Copyright: 2008
Publisher: Birkhäuser Basel
http://libros.duhnnae.com/2017/jun8/14983158605-S-from-LEP-High-Energy-Physics-Experiment.php

# α_S from LEP - High Energy Physics - Experiment
Download this document as a PDF. Free PDF documentation to download; also available to read online.
Abstract: Recent results on measurements of the strong coupling $\alpha_S$ from LEP are reported. These include analyses of the 4-jet rate using the Durham or Cambridge algorithm, of hadronic $Z^0$ decays with hard final-state photon radiation, of scaling violations of the fragmentation function, of the longitudinal cross section, of the $Z^0$ lineshape, and of hadronic $\tau$ lepton decays.
Author: Stefan Kluth
Source: https://arxiv.org/
http://digitalhaunt.net/Maryland/calculate-standard-error-function.html
# Calculating the standard error of the mean
The standard error of the mean is a measure of how precise our estimate of the mean is; in general, the standard error of a statistic is the standard deviation of the sampling distribution of that statistic. Because of random variation in sampling, the proportion or mean calculated using a sample will usually differ from the true proportion or mean in the entire population. For example, when researchers report that candidate A is expected to receive 52% of the final vote with a margin of error of 2%, the sample proportion of 52% is an estimate of the true proportion who will vote for candidate A in the actual election.

If σ is known, the standard error of the mean is calculated using the formula

σx̄ = σ / √n

where σ is the population standard deviation and n is the sample size. In practice σ is usually unknown and is estimated by the sample standard deviation s, giving

SEx̄ = s / √n

where SEx̄ is the standard error of the mean, s is the standard deviation of the sample, and n is the number of observations in the sample. The standard error of the mean depends on the sample size: larger sample sizes give smaller standard errors, and the standard error shrinks to 0 as the sample size increases to infinity.

Using s instead of σ underestimates the true standard error for small samples: with n = 2 the underestimate is about 25%, but for n = 6 the underestimate is only 5%. When the true underlying distribution is known to be Gaussian, although with unknown σ, the resulting estimated distribution follows the Student t-distribution; t-distributions are slightly different from Gaussian and vary depending on the size of the sample, because a distribution is needed that takes the spread of possible σ's into account. If the sampling distribution is normally distributed, the sample mean, its standard error, and the quantiles of the normal distribution can be used to calculate confidence limits; the upper and lower 95% confidence limits are approximately x̄ ± 1.96 × SEx̄.

Two data sets from the R package openintro (which accompanies the textbook by Dietz et al.) illustrate the effect of the underlying spread.

Sampling from a distribution with a large standard deviation: the first data set consists of the ages of 9,732 women who completed the 2012 Cherry Blossom run, a 10-mile race. Because the 9,732 runners are the entire population, 33.88 years is the population mean μ and 9.27 years is the population standard deviation σ. The ages in one sample of 16 runners are 23, 27, 28, 29, 31, 31, 32, 33, 34, 38, 40, 40, 48, 53, 54, and 55; the mean age for this particular sample is 37.25. The distribution of the mean age over all possible samples is called the sampling distribution of the mean, and the standard deviation of all possible sample means of size n = 16 is equal to the population standard deviation σ divided by the square root of the sample size.

Sampling from a distribution with a small standard deviation: the second data set (ageAtMar, also from openintro) consists of the age at first marriage of 5,534 US women who responded to the National Survey of Family Growth. Because the 5,534 women are the entire population, 23.44 years is the population mean μ and 4.72 years is the population standard deviation σ, about half the standard deviation of 9.27 years for the runners.

In Excel, enter the data into the spreadsheet, then under the columns of data calculate the standard error of the mean as the standard deviation divided by the square root of the sample size (the COUNT function gives the sample size), and calculate the mean; the means can then be used as the bar heights in a chart, with the standard errors as error bars. In R the computation is a one-liner:

sem <- sd(x) / sqrt(length(x))
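The same computation can be sketched in Python using only the standard library (the function and variable names here are my own, and the data is the 16-runner sample from the example above):

```python
import math
import statistics

def sem(x):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    return statistics.stdev(x) / math.sqrt(len(x))

ages = [23, 27, 28, 29, 31, 31, 32, 33, 34, 38, 40, 40, 48, 53, 54, 55]
print(statistics.mean(ages))  # 37.25
print(sem(ages))
```

As the formula predicts, the value shrinks like 1/√n as the sample grows.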
http://codydunne.blogspot.com/ | ## Thursday, October 23, 2014
### Verizon Wireless injecting tracking UIDH header into HTTP requests
Reading Hacker News today, I found a frightening post on Verizon Wireless injecting tracking UIDs into HTTP requests. The upshot is that Verizon Wireless is sending a unique identifier for you to each and every unencrypted website you visit, which means that advertisers (or worse) can track everywhere you have been. This occurs even if you opt out of all the Verizon tracking, use a privacy mode in your browser, enable Do Not Track, use a different browser, send your own bogus UIDH header, change to a new phone, or use a tethered laptop for browsing. The only known solution is to encrypt all your browsing. You can do this using HTTPS Everywhere, but this only works if the website supports HTTPS. The best solution is to encrypt everything, either with a VPN like TunnelBear or by routing traffic through Tor. More details follow.
Verizon Wireless is adding its own header, X-UIDH, which includes a unique identifier that it sends to the webpage. You can check whether your phone is getting the header added here or here. Just make sure you turn off wifi before running the test. Verizon has two patents on the subject: Obtaining targeted services using a unique identification header (uidh) and Multi-factor authentication using a unique identification header (uidh). The most illuminating part is Figure 5 from the first patent:
It becomes very clear that all this is intentional, which was confirmed by my call to Verizon. I talked with a representative of Verizon Wireless, and once they understood the situation they offered several (ineffective) solutions. (1) Use HTTPS instead of HTTP. Naturally, this will only work for the small subset of web services that provide HTTPS. (2) Use Do-Not-Track in the browser. However, my testing showed this had no effect. (3) Use a privacy mode. Again, this had no effect. After talking with a supervisor, the representative then told me that this behavior is normal and expected. Moreover, he claimed that the UIDH header and a standard HTTP connection are a sign to the webserver that you are a good internet citizen, and not a hacker trying to do something untoward. This was a blatant misrepresentation of why some websites do not support HTTPS. After further discussion he ended up agreeing with me, but said there was nothing he could do to help.
What can we do? First off, this is already being exploited in the wild so start using a VPN. Next, let's get Verizon Wireless to change this policy. Do your own testing, tell your friends, and post your complaints online! There is already a bunch on UIDH on Twitter.
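If you would rather check for the injected header on your own hardware than trust a third-party test page, a tiny local echo server works: run a sketch like the one below on a laptop, then browse to the laptop's IP on port 8080 from the phone over the cellular connection. This uses only Python's standard library; the helper name and port are my own choices.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def find_uidh(headers):
    """Return the X-UIDH value if the carrier injected one, else None."""
    for name, value in headers:
        if name.lower() == "x-uidh":
            return value
    return None

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        uidh = find_uidh(self.headers.items())
        body = ("X-UIDH: " + uidh) if uidh else "No X-UIDH header seen."
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # Listen on all interfaces so the phone can reach the laptop.
    HTTPServer(("0.0.0.0", 8080), EchoHandler).serve_forever()
```

Remember to test over the cellular network, not wifi, since the header is added by the carrier's network equipment.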
## Monday, April 7, 2014
### Visualization Intern & Res. Sci. Positions @ IBM Watson & Research, Cambridge MA
Update: The intern positions are no longer available.
IBM's Watson Group (Cambridge, MA) is looking to hire several summer Research Interns and Research Scientists to join the Cognitive Visualization Lab. We are looking for candidates with a research track record in Information Visualization, preferably with experience in Human-Computer Interaction, decision-making processes, and social sciences.
Our research group aims to advance the state of the art on visual analytics. We are an interdisciplinary group comprised of computer scientists, data scientists, social network analysts, and designers. We are working on a diverse set of truly fascinating projects, including pure Research and Development (R&D)/papers (VIS/InfoVis/VAST, CHI, EuroVis), applied mathematics, developing prototypes for the most important industries in the world, and gallery installations. These positions would be working directly with Cody Dunne and Mauro Martino.
Our laboratory is located a few minutes from the MIT campus in an inclusive and friendly work environment. Despite being small geographically, Boston has 58 colleges and universities and hosts a vibrant academic atmosphere.
## Research Keywords
Information Visualization, Data Science, Big Data Analytics, Information Design, Social Computing, Network Analysis, Human-Computer Interaction, Cognitive Science
## Key Responsibilities
• Design, implement, and evaluate a novel visual analytics prototype following user-centered design principles.
• Investigate creative Human-Computer Interaction systems for deeper levels of expression and engagement.
• Publish and present results to both the academic community and to non-scientists.
## Thursday, November 29, 2012
### Cygwin package manager and auto updates
I use Cygwin on my Windows machine to get access to all the wonderful Linux tools like grep, wget, etc. One problem with Cygwin is that you have to manually run its GUI installer again each time you want to add tools or update the ones you already have.
apt-cyg provides a command-line package manager that you can use to install tools without using the GUI installer. However, I didn't see a way to update the existing tools. You can write a simple batch script to do the automatic updates for you. You only need the three lines below, assuming you've installed cygwin to C:\cygwin. Then, run the batch file as administrator or create a shortcut to do that for you.
cd C:\cygwin
wget -N http://cygwin.com/setup.exe
REM -q runs the installer unattended, updating all installed packages
setup.exe -q
If you want to pretty it up so you can scan the results of the commands more easily, just add some echo statements:
@ECHO off
cd C:\cygwin
echo ======================================
echo ======================================
echo.
wget -N http://cygwin.com/setup.exe
echo.
echo ======================================
echo Updating all cygwin packages...
echo ======================================
echo.
setup.exe -q
echo.
echo ======================================
echo Update finished.
echo ======================================
echo.
pause
## Thursday, January 12, 2012
### Suppressing BibTeX fields for specific biblatex entry types
I use LaTeX for writing academic papers and biblatex for handling the citations and references in them. One problem I ran into is that biblatex prints out the location, address, month, and publisher for a lot of entries, which I prefer not to have in my reference list. Rather than editing the BibTeX .bib file and losing that data forever, you can tell biblatex to ignore or suppress specific pieces of it.
Below is my code. It suppresses location, address, month, etc. for all entries, and suppresses the publisher and editor field unless the entry is a book. You may need to modify this for whatever style you're using.
% Loads biblatex with clickable links from citations and the reference list,
% with back references if the style supports them.
\usepackage[hyperref,doi,url=false,backref,style=alphabetic,maxbibnames=99]{biblatex}
\bibliography{refs.bib}
\AtEveryBibitem{% Clean up the bibtex rather than editing it
\clearfield{date}
\clearfield{eprint}
\clearfield{isbn}
\clearfield{issn}
\clearlist{location}
\clearfield{month}
\clearfield{series}
\ifentrytype{book}{}{% Remove publisher and editor except for books
\clearlist{publisher}
\clearname{editor}
}
}
Edit on 2/9/2012: As @siretart helpfully points out in the comments, biblatex makes distinctions between fields, name lists, and literal lists in the source file. To see whether to use \clearfield, \clearname, or \clearlist check the biblatex manual for the data type. For example, date and series are fields, location is a literal list, and editor is a name list. I've updated the code above to reflect this.
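For instance, given a made-up entry like the one below, the month, series, location, ISBN, and publisher would all be suppressed in the printed reference (while remaining intact in the .bib file); if the entry type were @book, the publisher and any editor would still be printed:

```latex
@inproceedings{Doe12Example,
  author    = {Doe, Jane},
  title     = {An Example Paper},
  booktitle = {Proceedings of the Example Conference},
  year      = {2012},
  month     = {6},
  series    = {EX '12},
  publisher = {Example Press},
  location  = {Example City},
  isbn      = {000-0-00-000000-0}
}
```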
## Monday, August 8, 2011
### Microsoft Security Essentials Automatic Updates
I love Microsoft Security Essentials, but I'm annoyed by having to do the virus signature updates manually with each Windows Update. This could be because I have Windows Update set to download but let me choose which ones to install. However, you can use the Task Scheduler to automatically run the signature update every day.
AddictiveTips provides instructions, but on my 64-bit Win7 machine the file location was different.
C:\Program Files\Microsoft Security Essentials\MpCmdRun.exe SignatureUpdate
I used
"C:\Program Files\Microsoft Security Client\Antimalware\MpCmdRun.exe" SignatureUpdate
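For reference, the equivalent daily task can also be created from an elevated command prompt instead of clicking through the Task Scheduler UI. This is a sketch using schtasks; the task name and start time are arbitrary choices of mine, and the path is the one that worked on my machine above:

```shell
schtasks /Create /TN "MSE Signature Update" /SC DAILY /ST 09:00 ^
  /TR "\"C:\Program Files\Microsoft Security Client\Antimalware\MpCmdRun.exe\" SignatureUpdate"
```

You can confirm the task exists afterwards with `schtasks /Query /TN "MSE Signature Update"`.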
So far, so good!
## Tuesday, May 31, 2011
### Make Firefox 4 to save passwords for ALL websites
Some websites ask web browsers to disable their password auto-complete features. This allows developers to increase the password security for important sites like banking, but can also be used for less important sites.
You can force Firefox to ignore the website settings and save all passwords, but you must then be extra vigilant about which passwords you save. Banking sites and the like will now prompt you to save the password, which you probably shouldn't do for them.
There was an easy fix for Firefox before version 4 (see this post), but there are a few hoops to jump through for the latest version. Firefox 4 packages all the necessary files into omni.jar in the program folder, which is a non-standard archive format that needs to be specially altered. Below are the directions from this comment:
0. Make a backup copy of omni.jar
1. Unzip the omni.jar (using either 7zip or WinRAR)
2. Edit accordingly
3. Pack again using ZIP format + SFX option (Self-Extract)
4. Rename back to omni.jar
5. Launch Firefox!
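If you find yourself doing this for every Firefox update, the unzip/edit/rezip dance can be scripted. Below is a rough sketch using Python's zipfile module — the file name and the edits dict are placeholders of my own, and since the directions above call for a self-extracting ZIP, treat this plain-zip version as illustrating the mechanics only:

```python
import shutil
import zipfile
from pathlib import Path

def repack(jar_path, edits):
    """Back up a jar, then rewrite it with `edits` (member name -> new bytes)
    applied; members not named in `edits` are copied through unchanged."""
    jar = Path(jar_path)
    backup = jar.with_name(jar.name + ".bak")
    shutil.copy2(jar, backup)  # step 0: keep a backup copy

    tmp = jar.with_name(jar.name + ".tmp")
    with zipfile.ZipFile(backup) as src, \
         zipfile.ZipFile(tmp, "w", zipfile.ZIP_DEFLATED) as dst:
        for info in src.infolist():
            data = edits.get(info.filename, src.read(info.filename))
            dst.writestr(info.filename, data)
    tmp.replace(jar)  # swap the repacked jar into place
    return backup

# hypothetical usage:
# repack("omni.jar", {"chrome/browser/content/browser.js": new_bytes})
```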
## Friday, March 25, 2011
### Better APA-style: working around hyperref and apacite problems
I'm writing an article in LaTeX using APA style, so I'm using the popular
apa.cls style. It defaults to using apacite for citations and references, which works well enough.
But if I have a URL field in the BibTeX, like I always do in JabRef to remember where I found things, it prints it for each reference wasting a lot of space and not breaking lines properly. I also like the URLs I do show to be clickable hyperlinks, and my citations and cross-references as well. You can usually do this using hyperref, but a lot of things break when using apacite and hyperref together. Here's the top of the file:
\documentclass[jou]{apa}\usepackage{hyperref}
And here is the output of pdflatex:
! Undefined control sequence.
\hyper@@link ->\let \Hy@reserved@a \relax
                \@ifnextchar [{\hyper@link@ }{\hyp...
l.83 \cite{Aris09Visual}...

! Argument of \@@cite has an extra }.
\par
l.83 \cite{Aris09Visual}...

Runaway argument?
>{\hyper@link@ }\def \reserved@b {\hyper@link@ [link]}\futurelet \@let@token \ETC.
! Paragraph ended before \@@cite was complete.
\par
l.83 \cite{Aris09Visual}
Other people have had this problem before, but there aren't any great solutions. See the end for a good solution using biblatex-apa instead of apacite. If you insist on using apacite, there are instructions here for how to make things mostly work:
The simplest way to fix the problem is to put a single
instance of \protect into hyperref.sty.
Turn this:
\def\bibcite#1#2{%
  \@newl@bel{b}{#1\@extra@binfo}{%
    \hyper@@link[cite]{}{cite.#1\@extra@b@citeb}{#2}%
into this:
\def\bibcite#1#2{%
  \@newl@bel{b}{#1\@extra@binfo}{%
    \protect\hyper@@link[cite]{}{cite.#1\@extra@b@citeb}{#2}%
This occurs at
line 3972 in hyperref.sty [2007/02/07 v6.75r
and at
line 4939 in hyperref.sty [2008/04/05 v6.77l
(line 8328 of the corresponding hyperref.dtx ).
but this breaks the citations. Later they added additional code to the tex file:
It's really just a matter of executing APA's version of \bibcite
before doing the extra stuff that hyperref needs to create the hyper-linking (which seems to work just fine).
For example, the following coding seems to work OK.
\usepackage{apacite}
\let\APAbibcite\bibcite %%%% add this line
\usepackage{color}
\definecolor{darkblue}{rgb}{0.0,0.0,0.3}
\usepackage[bookmarks=true]{hyperref}
\hypersetup{
  pdfauthor={Salvatore Enrico Indiogine},
  pdftitle={},
  pdfsubject={TAMU EDCI},
  pdfkeywords={},
  pdfcreator={LaTeX with hyperref package},
  pdfproducer={dvips + ps2pdf},
  colorlinks,breaklinks,
  linkcolor={darkblue},
  urlcolor={darkblue},
  anchorcolor={darkblue},
  citecolor={darkblue}}
%%%% add the following 2 lines
\let\HYPERbibcite\bibcite
\def\bibcite#1#2{\APAbibcite{#1}{#2}\HYPERbibcite{#1}{#2}}
This fixes most problems, but there are still warnings and ampersands missing in the references.
A better solution for me was to use biblatex-apa with biblatex instead of apacite.
First replace \bibliography{...} at the end of your tex file with \printbibliography. Then modify the top to look like this (note the noapacite option for apa.cls).
\documentclass[jou,noapacite]{apa} %%%% apacite is buggy with hyperref
\usepackage{color}
\usepackage[]{hyperref}
\hypersetup{
  pdfauthor={AUTHORS},
  pdftitle={TITLE},
  pdfkeywords={KEYWORDS},
  colorlinks,breaklinks,
  linkcolor={blue},
  urlcolor={blue},
  anchorcolor={blue},
  citecolor={blue}}
%%%% biblatex-apa
\usepackage[american]{babel}
\usepackage{csquotes}
\usepackage[style=apa,hyperref,doi,url]{biblatex}
\DeclareLanguageMapping{american}{american-apa}
\bibliography{} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4447106719017029, "perplexity": 6054.152622182493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400375630.34/warc/CC-MAIN-20141119123255-00132-ip-10-235-23-156.ec2.internal.warc.gz"} |
https://www.lessonplanet.com/teachers/ghana-artifacts | # Ghana Artifacts
Students study artifacts from Ghana and discuss how these aid in understanding the civilization. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.879801332950592, "perplexity": 11151.26757119931}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607726.24/warc/CC-MAIN-20170524001106-20170524021106-00112.warc.gz"} |
http://math.stackexchange.com/questions/161319/gauss-manin-connection-for-curves | # gauss-manin connection for curves
Let $\pi: X \to Y$ be a finite morphism between smooth projective curves over the complex numbers. I would like to know:
(1) what the Gauss-Manin connection with respect to $\pi$ (that is, the connection corresponding to the local system $\pi_\ast\mathbb{C}$ on $Y$ minus the ramification points) looks like
(2) what kind of information does the Grothendieck-Riemann-Roch theorem provide when applied to $\pi$
Thanks!
Let $\pi : \mathbb C \to \mathbb C$ be $z \mapsto z^2$. This is a finite morphism, of the admittedly non-projective affine plane to itself, but it can be extended to a finite morphism on the projective line that is ramified at $0$ and $\infty$ only. The only nontrivial local system associated to this morphism, outside of the ramification points, consists of two disjoint copies of $\mathbb C$, one for each point in the preimage of a given point. A parallel section of the associated bundle over $U \subset \mathbb C$ then corresponds to the choice of a square root of $z$ over $U$, and if $U$ is connected this choice of square root does not "jump" between branches, which would correspond to jumping from one point in a preimage of $\pi$ to another.
The case of a general finite morphism should maybe be thought of as similar to this one; parallel sections of the vector bundle associated to the local system correspond to picking a branch of local solutions $x$ of $\pi(x) = y$ when $y$ varies on $Y$.
(2) I haven't worked out the details, but I'm willing to bet good money that we get an extreme overkill proof of the Riemann-Hurwitz formula by applying Grothendieck-Riemann-Roch to the finite morphism $\pi : X \to Y$.
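For what it's worth, here is a back-of-envelope version of that bet — my own sketch, not part of the original answer, and it leans on the local computation $K_X = \pi^*K_Y + R$:

```latex
% GRR applied to \mathcal{O}_X for the finite degree-n map \pi : X \to Y:
\mathrm{ch}(\pi_*\mathcal{O}_X)\,\mathrm{td}(T_Y) = \pi_*\,\mathrm{td}(T_X)
% comparing degree-1 parts, with td(T) = 1 - K/2:
c_1(\pi_*\mathcal{O}_X) - \tfrac{n}{2} K_Y = -\tfrac{1}{2}\,\pi_* K_X
% inserting K_X = \pi^* K_Y + R (R the ramification divisor):
c_1(\pi_*\mathcal{O}_X) = -\tfrac{1}{2}\,\pi_* R
% and taking degrees in K_X = \pi^* K_Y + R is exactly Riemann--Hurwitz:
2g_X - 2 = n\,(2g_Y - 2) + \deg R
```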
NB: There's something subtle going on due to the fact that the fibers of $\pi$ are not connected; in particular the "vector bundle" associated to the local system is actually a disjoint union of line bundles and not an honest vector bundle. Thus the space of sections of this "bundle" is a disjoint union of vector spaces, and not itself a vector space. – Gunnar Þór Magnússon Nov 23 '12 at 11:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.963055431842804, "perplexity": 144.1630767830695}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115869264.47/warc/CC-MAIN-20150124161109-00111-ip-10-180-212-252.ec2.internal.warc.gz"} |
http://export.arxiv.org/list/math/1109?300 | Mathematics
Authors and titles for Sep 2011
[ total of 2139 entries: 1-25 | 26-50 | 51-75 | 76-100 | ... | 2126-2139 ]
[ showing 25 entries per page: fewer | more ]
[1]
Title: Vertex unfoldings of tight polyhedra
Comments: 8 pages, 5 figures
Subjects: Combinatorics (math.CO); Computational Geometry (cs.CG)
[2]
Title: A Generalized Goursat Lemma
Comments: 22 pages. This version is substantially changed from previous versions. We have retained the proof of Theorem 2.3, but have included attribution of this result to [Sch94]. We have substantially expanded applications to include pro-finite groups and Sylow $p$-subgroups (including infinite groups that are virtual $p$-groups), among others
Subjects: Group Theory (math.GR); Category Theory (math.CT)
[3]
Title: Multiplicity estimate for solutions of extended Ramanujan's system
Authors: Evgeniy Zorin
Subjects: Number Theory (math.NT)
[4]
Title: A structure theorem in probabilistic number theory
Subjects: Number Theory (math.NT); Probability (math.PR)
[5]
Title: A converse to Halasz's theorem
Comments: 14 pages. Part of my B. Sc. thesis: arxiv:0909.5274
Subjects: Number Theory (math.NT); Probability (math.PR)
[6]
Title: On truncated variation, upward truncated variation and downward truncated variation for diffusions
Comments: Added Remark 6 and Remark 15. Some exposition improvement and fixed constants
Journal-ref: Stoch. Proc. Appl. 123 (2013), pp. 446-474
Subjects: Probability (math.PR)
[7]
Title: High host density favors greater virulence: a model of parasite-host dynamics based on multi-type branching processes
Comments: 30 pages, 6 figures
Subjects: Probability (math.PR); Populations and Evolution (q-bio.PE)
[8]
Title: Milnor K-theory and the graded representation ring
Comments: This version includes the last-minute corrections made while reading the proofs, before the publication in Journal of K-theory
Subjects: K-Theory and Homology (math.KT); Algebraic Topology (math.AT); Number Theory (math.NT)
[9]
Title: Closure of the cone of sums of 2d-powers in certain weighted $\ell_1$-seminorm topologies
Subjects: Commutative Algebra (math.AC)
[10]
Title: The Four Number Game
Comments: This paper has been difficult for later researchers to locate. Zhenghan Wang kindly reformated it to be available to arXiv readers; Scripta Mathematica vol. 14, 1948
Subjects: Dynamical Systems (math.DS)
[11]
Title: Configuration space integrals and the cohomology of the space of homotopy string links
Comments: Added coauthor Robin Koytcheff, many revisions and corrections, final version, 65 pages
Journal-ref: J. Knot Theory Ramif. 22, no. 11, 73 pp. (2013)
Subjects: Algebraic Topology (math.AT); Geometric Topology (math.GT)
[12]
Title: On the existence of maximizing measures for irreducible countable Markov shifts: a dynamical proof
Journal-ref: Ergod. Th. Dynam. Sys. 34 (2014) 1103-1115
Subjects: Dynamical Systems (math.DS)
[13]
Title: A spanning tree cohomology theory for links
Journal-ref: Advances in Math. 255 (2014) 414-454
Subjects: Geometric Topology (math.GT); Quantum Algebra (math.QA)
[14]
Title: Running Markov chain without Markov basis
Subjects: Statistics Theory (math.ST); Commutative Algebra (math.AC)
[15]
Title: Deformation Expression for Elements of Algebras (IV) --Matrix elements and related integrals--
Subjects: Mathematical Physics (math-ph)
[16]
Title: Weighted norm inequalities for Schrödinger type operators
Authors: Lin Tang
Subjects: Functional Analysis (math.FA)
[17]
Title: Weighted norm inequalities for commutators of Littlewood-Paley functions related to Schrödinger operators
Authors: Lin Tang
Subjects: Functional Analysis (math.FA)
[18]
Title: Extrapolation from $A_\fz^{ρ,\fz}$, vector-valued inequalities and applications in the Schrödinger settings
Authors: Lin Tang
Subjects: Functional Analysis (math.FA)
[19]
Title: A characterisation of algebraic exactness
Authors: Richard Garner
Subjects: Category Theory (math.CT)
[20]
Title: Reproducing Kernels of Generalized Sobolev Spaces via a Green Function Approach with Differential Operators
Authors: Qi Ye
Comments: Technical Report of Illinois Institute of Technology 2010
Subjects: Numerical Analysis (math.NA)
[21]
Title: Hermitian-Einstein connections on polystable parabolic principal Higgs bundles
Journal-ref: Adv. Theor. Math. Phys. 15 (2011) 1503-1521
Subjects: Differential Geometry (math.DG); Algebraic Geometry (math.AG)
[22]
Title: On simulating a medium with special reflecting properties by Lobachevsky geometry (One exactly solvable electromagnetic problem)
Comments: 20 pages, 3 figures, 37 references
Subjects: Mathematical Physics (math-ph); Classical Physics (physics.class-ph)
[23]
Title: Interpolating between constrained Li-Yau and Chow-Hamilton Harnack inequalities for a nonlinear parabolic equation
Authors: Jia-Yong Wu
Comments: 13 pages; references and explanations added
Journal-ref: J. Math. Anal. Appl. 396 (2012) 363-370
Subjects: Differential Geometry (math.DG); Analysis of PDEs (math.AP)
[24]
Title: Plünnecke and Kneser type theorems for dimension estimates
Authors: Cédric Lecouvey (LMPT) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5669734477996826, "perplexity": 8961.230682474417}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572879.28/warc/CC-MAIN-20190916155946-20190916181946-00033.warc.gz"} |
https://www.studypug.com/uk/uk-year6 | # UK Year 6 Maths made completely easy!
Get better maths marks with our complete Year 6 maths help, whether it’s for National curriculum in England (Key stage 2), National curriculum in Wales (Key stage 2), or reviewing for Year 6 maths SAT (National Curriculum Maths test). We got you all covered!
Keeping with your class or textbook, our thorough year 6 maths help includes topics such as Adding and Subtracting Fractions, Surface area and Volume, Integers, Coordinates, Symmetry, Circles, and more. Learn the concepts with our video tutorials that show you step-by-step solutions to even the hardest year 6 maths questions. Then, strengthen your understanding with tons of year 6 maths practice.
All our lessons are taught by experienced Year 6 maths teachers. Let’s finish your homework in no time, and ACE that final.
• Meet Andy, your UK Year 6 Maths tutor
• Watch how we take you through a lesson
• Learn how to find and search for topics
##### Common Questions
###### My textbook is Mathematics Year 6 by Galore Park. Is your year 6 maths help right for me?
Yes! We cover all the topics you’ll find in your textbook. For those who use other year 6 maths textbooks like Abacus Year 6 Textbook 3 – don’t worry, we have maths help on everything in your textbook too.
###### How do I get my child ready for the KS 2 maths SAT with StudyPug?
First of all, you can try to identify the topics with which your child needs help. Assessing your child with the year 6 maths questions available on our site would be a good start. Once you know where your child should put more effort, you may ask your child to watch our video lessons on those topics, and then assess them again. Your child should be ready for the KS2 SAT or any maths quizzes in no time!
##### Customer Reviews
4.8 stars based on 6 total reviews
Issac Clarke
I really like using StudyPug, especially when my mom isn’t around or if my dad can’t help me.
Kristen Hill
StudyPug’s courses are very easy to navigate. That’s why I use it to help teach my y6 pupils for their KS2 assessments. It covers the topics very well, and their practice problems help revise the concepts brilliantly.
Louisa Benn
I was looking online for some SATs papers for my child who’s doing y6 maths. I couldn’t find any that properly aligned with what my son was learning, and there were a lot of practice problems that didn’t provide answers for my son to double check his work. StudyPug not only has maths questions, they also provide solutions. For example problems, they’ll actually have an instructor in a video walk you through them, and my son loves those. StudyPug is working wonderfully for him and I’m glad we found out about this site!
Jerome G.
The parents in my school all know that the y6 maths teacher is rubbish. That’s when someone from my circle of parent friends told us about StudyPug. Our children’s SATs were coming up and we couldn’t all afford private tutors. StudyPug is a great alternative. When I sat down to talk with my daughter’s teacher, he reported that she was excelling at maths!
Victoria H.
It was worrying when my daughter first started taking her maths quiz home. She was failing, and I couldn’t teach the concepts to her properly. Neither could her school, it seemed. Back at home, we signed her up to try out StudyPug. We’ve found that the teachers break down topics and then instantly show examples of what is learned through a question. It’s maths help that works.
Sarah McConnell
My child wanted to do maths revision prior to his SATs. We knew we’d probably be able to find a free online maths test, but I’m glad we decided to go with StudyPug. He’s able to complete questions in record time (he used to take hours!), and get the correct answers. I’m no longer worried about his SATs papers.
## What is Key Stage 2 Maths?
Key stage 2 maths is the term given for the 4 years you spend learning maths in primary school. It covers the maths taught in years 3, 4, 5 and 6. Typically, students who study key stage 2 (KS2) maths are between the ages of 7 and 11.
Towards the end of KS2, year 6 students will sit national curriculum maths tests known as SATs (Standard Assessment Test). Unlike the tests throughout the year, these will be marked externally and the results are published in the Department for Education performance tables.
The structure of KS2 maths will be the same across all schools in the UK, as they all follow the same national curriculum. The aim of KS2 maths is to develop your understanding of the basics within arithmetic and reasoning, preparing you for future study at secondary school.
In Year 6, your KS2 maths classes will cover the following:
• Multiplying and dividing fractions
• Long multiplication
• Place value
• Rotational symmetry
• Multiplying and dividing decimals
• Common factors
• Surface area and volume of prisms
• And more
If that sounds like a lot, don’t worry! StudyPug covers each of these topics and we’ve broken them down into easy to follow video lessons that provide simple explanations to even the trickiest year 6 maths questions.
Our experienced tutors have crafted the year 6 content to meet your KS2 maths curriculum. The videos cover all of the same topics you’d expect to cover in class, and we also make sure that our videos cover the content found in modern maths textbooks like Mathematics Year 6 by Galore Park and Abacus Year 6 Textbook 3.
Many of our students prefer the video format over traditional textbook revision. They find it more engaging and user friendly with tutors that are able to break difficult maths problems into plain english.
## How Many Maths Tests Within Year 6 SATs?
In your KS2 SATs, you will sit a total of 3 maths papers. The 1st paper will be a 30 minute arithmetic paper. Papers 2 and 3 will both be focused on reasoning, and you’ll have 40 minutes per paper.
The questions in Paper 1 will be made up of fixed response questions. This means that you’ll need to work out the answers to various calculations, such as division and long multiplications.
The second and third paper will be slightly different. These papers will involve different forms of questioning in which you may be asked true or false questions as well as multiple choice questions. You will still have questions that require you to answer calculations and you can expect questions that will ask you to draw specific shapes or to complete a chart/table.
You should also expect to face questions that require you to explain how you would approach a problem and find the solution. This tests your reasoning and assesses your mathematical understanding beyond memorization.
## How Can I Pass Year 6 Maths?
The best way to secure a pass in year 6 maths is to pay attention in class. As we’ll explore below, the end of year tests will assess your learning, so if you don't listen in class, you won't learn, and you may end up failing. Additionally, listen to your teacher and try your best to avoid distractions like smartphones and your friends.
If you’re having trouble in class or find that you don’t understand something, ask your teacher for help. There’s absolutely nothing wrong with taking ownership of your learning, so don’t be embarrassed! There’s a good chance that your classmates are just as confused as you are. Don’t wait for them to ask the question though. Asking for help will greatly benefit your learning and in turn will help you in your exams too.
To successfully pass your year 6 SATs, you’ll have to be able to confidently show the examiners that you possess strong mathematical reasoning abilities. This means that you can evaluate problems and are able to find the most effective way to solve them.
They’re essentially testing you to see what you’ve learnt whilst at school. Beyond the right answers, examiners want to see your working out and will be more interested in how you arrived at the answers. Your working out and showing your thought process will allow them to assess your understanding of the problem.
To improve your chances of passing, you should be taking notes in class and assessing where your strengths and weaknesses lie. Outside of the class notes, you can use online maths quizzes, and past KS2 SATs papers to test your knowledge.
When sitting these past papers, try your best to work within the time constraints of each paper. This will help you manage your time more effectively in your real exams. If you struggle with a question, move on and get to it later. You don’t want to run out of time and miss questions you could have answered.
## Is Year 6 Maths Hard?
You may find certain aspects of year 6 maths to be a bit challenging at times. This is to be expected though as you’ll be introduced to new areas of maths. The skills and knowledge learned here will better prepare you for further study at secondary school. Many students across the UK find this transition a little difficult. Understandably, it seems like a daunting prospect, but if you have a firm grasp on the basics, you shouldn’t find it too difficult.
If you do come out of class feeling confused, or perhaps you’ve missed a lesson or two, jump online and watch our online maths revision videos. We have a video for every topic within KS2 maths, and our step-by-step examples explain everything in plain English, which makes learning maths less stressful and more enjoyable.
To help get you started on your year 6 maths quest, we have provided a selection of free lessons that cover ratios, fractions, angles, and more. You can find our free lessons within the topics section of this page.
##### Students and parents love our maths help
###### But don't take our word for it…
Carson E.
parent
When we saw our son's grades we looked online for a convenient, affordable and effective solution. StudyPug has been a great investment.
Jason G.
high school senior
This website saved my butt last semester. I am using it again this semester. Dennis is the best online tutor... I also like that I can watch videos over and over until I really understand the concept. If you want to save time, sign up...it's only ten bucks and it saved me hours of study time. Thanks, Dennis!
Aaron M.
high school student
I get a whole library of videos that covers everything from basic to complex mathematics for a very low monthly price. Excellent videos, easy-to-understand and most of all they work. My math test results are 10 points higher than last semester.
See all our testimonials
### Ready to do better in UK Year 6 Maths?
Don't procrastinate any longer, it could be too late!
• All Courses | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32431095838546753, "perplexity": 1669.9160029768582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937090.0/warc/CC-MAIN-20180420003432-20180420023432-00607.warc.gz"} |
http://mathhelpforum.com/algebra/84400-basic-inequality-question.html | 1. ## Basic Inequality question.
$\frac{2}{x-1} > 3$
Now I know the answer is 1 < X < 5/3 but I'm trying to work through it.
I can work it out for 5/3 but I'm not sure how I'm meant to solve for 1.
$2 > 3x - 3$
$5 > 3x$
$\frac{5}{3} > x$
Just got a mind blank as to how I'm meant to solve it to get 1 as the other case.
2. Originally Posted by Peleus
$\frac{2}{x-1} > 3$
Now I know the answer is 1 < X < 5/3 but I'm trying to work through it.
I can work it out for 5/3 but I'm not sure how I'm meant to solve for 1.
$2 > 3x - 3$
$5 > 3x$
$\frac{5}{3} > x$
Just got a mind blank as to how I'm meant to solve it to get 1 as the other case.
What you've done is fine. The only observation you have to make is that the denominator in your original expression MUST be greater than 0, or else the LHS will be negative. And obviously, the inequality can't be true if the LHS is negative.
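For completeness — and to see both endpoints appear at once — the two cases can be folded into a single sign analysis (my own route, same answer):

```latex
\frac{2}{x-1} > 3
\iff \frac{2}{x-1} - 3 > 0
\iff \frac{2 - 3(x-1)}{x-1} > 0
\iff \frac{5 - 3x}{x-1} > 0
\iff 1 < x < \frac{5}{3}
```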
$\frac{2}{x-1} > 3$
Multiply through by $x-1$
$2x - 2 < 3x - 3$
$1 < x$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9245191812515259, "perplexity": 140.99264532152256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549427750.52/warc/CC-MAIN-20170727082427-20170727102427-00149.warc.gz"} |
https://bioinformatics.stackexchange.com/tags/makeblastdb/hot | # Tag Info
5
You can use these options:

blastp -query my.faa -db VFDB.fas \
    -perc_identity 50 -outfmt 6 \
    -evalue 10e-5 -out results.txt

Then select the 5th column, qlen, and the 13th column, slen, and get the percentage higher than 50.

awk -F'\t' '{ if ($NF >= 50) printf "%s\t%.2f\n", $0, ($2/$5)*100 }' result.txt

Check the documentation for the ...
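If awk one-liners feel terse, the same identity/coverage filter is a few lines of Python. The column indices below are assumptions of mine: they match a hypothetical format string like -outfmt "6 qseqid sseqid pident length qlen", not BLAST's default 12-column layout, so adjust them to whatever format string you actually ran with:

```python
def coverage_filter(lines, min_pident=50.0, min_cov=50.0,
                    pident_col=2, length_col=3, qlen_col=4):
    """Keep tab-separated outfmt-6 rows whose percent identity is at least
    min_pident and whose query coverage (alignment length / query length,
    as a percentage) is at least min_cov. Column indices are 0-based and
    depend entirely on the -outfmt string used."""
    kept = []
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        pident = float(fields[pident_col])
        cov = float(fields[length_col]) / float(fields[qlen_col]) * 100.0
        if pident >= min_pident and cov >= min_cov:
            kept.append((line.rstrip("\n"), cov))
    return kept
```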
4
The Refseq team and also the NCBI resource coordinators team publish a new paper every few years, so check out the many papers (e.g. here or here), but to answer your 2nd question, non-redundancy here is (I think) defined very strictly as proteins that are identical in terms of sequence and length, so the clustering is trivial, without the need for a ...
4
This error could be due to the fact that you are using legacy blast, have you tried using BLAST+ instead? If you wish to use PSIPRED with BLAST+, then use the runpsipredplus script here, rather than the normal runpsipred script.
2
The problem was that the database must be specified without suffix, i.e. blastn -db testdb as opposed to blastn -db testdb.nal. By the way, it produces exactly the same output as blastn -db "FAM1079.ffn FAM3228.ffn FAM6161.ffn FAM19036.ffn" -query query_single_nucl.fasta -outfmt 6 and the performance is not better.
2
Did a little more searching and found the answer in the README on BLAST's ftp site: ftp://ftp.ncbi.nlm.nih.gov/blast/db/README

6. Non-redundant defline syntax

The non-redundant databases are nr, nt and pataa. Identical sequences are merged into one entry in these databases. To be merged two sequences must have identical lengths and every residue at every ...
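That merging rule is simple enough to imitate for your own FASTA sets: key on the full residue string (identical length then comes for free) and concatenate the deflines. A toy sketch of my own — the real NCBI pipeline is more involved, and the Ctrl-A byte joining deflines just mimics the separator merged nr entries reportedly use:

```python
def merge_identical(records):
    """Collapse (defline, sequence) pairs with identical sequences into one
    entry per distinct sequence; merged deflines are joined with Ctrl-A."""
    merged = {}  # sequence -> deflines, in first-seen order
    for defline, seq in records:
        merged.setdefault(seq, []).append(defline)
    return [("\x01".join(names), seq) for seq, names in merged.items()]
```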
2
From looking at this publication "Database indexing for production MegaBLAST searches" which although deals with MegaBLAST and it's indexing method, it would appear that this was rolled into the NCBI C++ toolkit as the command makembindex so I'm inclined to think the current implementation must have been derived from this strategy. Looking at the relevant ...
1
There's an app for that, apparently. Often we need to search multiple databases together or wish to search a specific subset of sequences within an existing database. At the BLAST search level, we can provide multiple database names to the “-db” parameter, or to provide a GI file specifying the desired subset to the “-gilist” parameter. However for these ...
Only top voted, non community-wiki answers of a minimum length are eligible | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7249758243560791, "perplexity": 3990.8778598010535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585120.89/warc/CC-MAIN-20211017021554-20211017051554-00682.warc.gz"} |
https://brilliant.org/problems/mystical-field/ | # Mystical field
Imagine a point mass moving through a universe where there are no electric or magnetic fields, and thereby no electric or magnetic forces. There is, however, an unknown "mystical force" that is everywhere. As the point mass moves, its mass evaporates at a rate of $$r = 1~\mbox{g/s}$$. However, it maintains a constant velocity of $$V = 5~\mbox{m/s}$$. You observe the point mass at a point A and $$10~\mbox{s}$$ later you observe it at a point B. What magnitude of work in Joules did the mystical force do on the point mass between points A and B?
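A quick sanity check (not part of the problem text, and it assumes the work–energy theorem can be applied directly to the evaporating mass): at constant speed the kinetic energy changes only because mass is lost, so the magnitude of the work is ½·Δm·V².

```python
# Hedged sketch: |W| = 0.5 * delta_m * v**2 for a body at constant speed
# whose mass drops by delta_m. Treating the variable-mass system this
# way is an assumption, not something the problem statement spells out.
r = 1e-3      # evaporation rate in kg/s (1 g/s)
t = 10.0      # seconds between observations A and B
v = 5.0       # constant speed, m/s

delta_m = r * t               # 0.01 kg evaporated between A and B
work = 0.5 * delta_m * v**2   # magnitude of the work in joules
print(work)                   # 0.125
```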
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9952922463417053, "perplexity": 343.4701395086113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863277.18/warc/CC-MAIN-20180520092830-20180520112830-00359.warc.gz"} |
https://sw.khanacademy.org/video?format=lite&lang=sw&v=8kHKdm02ae4 | # SSS to Show a Radius is Perpendicular to a Chord that it Bisects
More on the difference between a theorem and axiom. Proving a cool result using SSS | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8687282204627991, "perplexity": 1327.9909127770634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999003.64/warc/CC-MAIN-20190619163847-20190619185847-00366.warc.gz"} |
https://www.rocketryforum.com/threads/gps-tracker-radio-range-how-much-is-enough.764/ | # GPS-tracker radio range: how much is enough?
## What is enough range for the rockets you fly?
• ### None of the above- or i'll never need one- please explain
Results are only viewable after voting.
#### FROB
##### Well-Known Member
Joined
Jan 23, 2009
Messages
389
Reaction score
1
Hi Folks!
The topic of this week's poll is GPS and radio telemetry;
If you're like me, and you'd like to get such a setup to easily and accurately track your rockets, (or balloons, RC airplanes, or anything else or that matter)
Tell us what you think is the minimum acceptable range either in the air or on the ground (at recovery time)
for your current needs or for what you are planning to build in the near future.
Of course its tempting just to go for the longest range offered but there's a catch you have to keep in mind:
As radio power (range) goes up, so does the cost, the size, and the amount of power it uses.
As an example, let's say for 50,000 feet range in the air (line of sight) the best designed radio needs ~1 watt of transmit RF power- and uses about 3 watts of electricity-
though for our uses it can be run on a pretty low duty cycle, maybe 50:1 or so, so actually ~50mW average power consumption.
Every doubling of range requires at least 4x the transmit power and battery size, and adds significantly to the cost-
at the lower power levels it's maybe 50% added cost, increasing even more steeply at higher power levels, where it can go up by 200% or more.
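The "doubling range needs 4x the power" rule above follows from free-space propagation: received power falls off as 1/r², so holding the received signal constant means transmit power must scale with r². A minimal sketch (the 1 W / 50,000 ft baseline is the example from this post; everything else is illustrative, not a real radio design):

```python
# Inverse-square link-budget scaling: to keep the same received signal
# at a longer range, transmit power grows with the square of the range.
def required_tx_power(base_power_w, base_range, new_range):
    return base_power_w * (new_range / base_range) ** 2

# 1 W covers 50,000 ft line-of-sight; doubling the range needs 4 W.
print(required_tx_power(1.0, 50_000, 100_000))
```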
You will notice the range on the ground is listed at about 1/5 of the air "line-of-sight" range- that's a very approximate guess, since it depends a lot on things like vegetation, terrain, buildings, humidity, and how the thing landed. The point is you should expect the range on the ground to be a small fraction of what you'd get in the air. Normally, if you're getting live telemetry up until it almost touches the ground and loses contact, you just go to the last known coordinates from telemetry and you'll likely re-acquire the signal as you approach.
The question I'd ask you to comment on is, which is the limiting factor in your choice? Is it the range in the air, or the range on the ground?
Again as in my other polls, if this actually bears fruit as a working system, i will draw a winner to receive one "beta" unit from everyone who posts a thoughtful comment on the subject.
#### jderimig
Joined
Jan 23, 2009
Messages
3,348
Reaction score
757
How about a tethered helium balloon (or another rocket....) carrying a repeater to pick up the ground based signal? Then I think you can keep the output power small and economical.
Also the airborne range is going to be very high even with modest power at lets say 900Mhz. The receiver can record the 1 Hz signal on the way down and extrapolate the ground level coordinates after LOS.
#### FROB
##### Well-Known Member
Joined
Jan 23, 2009
Messages
389
Reaction score
1
How about a tethered helium balloon (or another rocket....) carrying a repeater to pick up the ground based signal? Then I think you can keep the output power small and economical.
Also the airborne range is going to be very high even with modest power at lets say 900Mhz. The receiver can record the 1 Hz signal on the way down and extrapolate the ground level coordinates after LOS.
Hi John,
thats an excellent idea. A radio relay unit would not be that hard to do once the basic radio is done.
A large toy balloon (like some of the Valentine's Day specials I saw yesterday)
could easily lift one with a small LiPo cell a couple hundred feet - a kite also, if there's enough of a steady breeze, but then you might not want to launch in those circumstances.
You're also right about extrapolating the final position from the history on the way down.
Easy enough to do manually even if the software doesn't do it automatically, though that shouldn't be too hard to program either.
#### bobkrech
##### Well-Known Member
Joined
Jan 20, 2009
Messages
8,353
Reaction score
37
I think I would suggest a combination of methods.
1.) Transmit the current GPS position during the entire flight through a few minutes after landing.
2.) Switch to once every 5 minutes for an hour after the landing.
3.) Switch to once an hour an hour after landing. Stop after 6 hours.
4.) Transmitter actuation of GPS transmitter on demand.
Logic.
1.) If you have a laptop and GPS tracking software, you should be able to track the trajectory of your rocket in real time. While you may not be able to track it to the ground, you can certainly dead-reckon the landing site to a rather small area.
2.) Once the rocket is on the ground, the rocket location will not change, so there is no reason to constantly broadcast the transmission.
3.) In all likelihood, if you proceed to the last known projected location of ground impact, you should be within a few hundred yards of the rocket. If you have a hand held remote transmitter, you could trigger the GPS transmitter to send out the rocket location. You should be able to quickly find the rocket if it is intact from that range.
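The staged schedule above can be sketched as a tiny lookup function. The interval values come from the post; the function structure and the "first few minutes" cutoff are illustrative assumptions:

```python
# Beacon schedule after landing: continuous for a few minutes, then
# every 5 minutes for the first hour, then hourly until 6 hours have
# passed, then nothing (transmit-on-demand only).
def beacon_interval(seconds_since_landing):
    """Seconds between transmissions, or None once periodic beaconing stops."""
    if seconds_since_landing < 3 * 60:        # first few minutes: continuous
        return 1
    if seconds_since_landing < 60 * 60:       # rest of the first hour
        return 5 * 60
    if seconds_since_landing < 6 * 60 * 60:   # hourly, up to 6 hours
        return 60 * 60
    return None                               # after that: on demand only

print(beacon_interval(30), beacon_interval(600), beacon_interval(7200))
```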
Bob
#### FROB
##### Well-Known Member
Joined
Jan 23, 2009
Messages
389
Reaction score
1
It's interesting to note that so far:
- Nearly half of you would be satisfied with something that can do 5 miles range LOS
- For most of the others, 20 miles in the air seems to be the other magic number. Probably a 1 watt radio needed there.
- For the two extremists who want 50 miles in the air, depending on the data rate required, that will likely mean a 4-20W radio;
So the 5-mile one can be license-free, the 20-mile one will likely need an amateur radio license to operate, and the 50+ mile one definitely will.
Also, the 50+ mile one will probably be a 2-part affair, which is the 5-mile one plus an external RF power booster.
Is that OK with everyone concerned ?
#### FROB
##### Well-Known Member
Joined
Jan 23, 2009
Messages
389
Reaction score
1
Ok, so i'm thinking it should be made to fit in a 29mm airframe.
Does anyone feel it would be really useful to have a GPS tracker smaller than that? If so please give some examples why.
I'll be doing the final board design in the next few days, so now's the time to speak up if you have any thoughts on the subject.
Thanks!
#### TheAviator
##### Well-Known Member
Joined
Jan 18, 2009
Messages
832
Reaction score
24
This kind of item strikes me as something that would be used mostly by the large ModRoc (F/G Motors) and up. I don't know what the price point on something like this is, but I wouldn't really want to spend $50 or more (read: expensive) on a GPS system to get back a $20 rocket with a single use motor (read: cheap), especially if it gets hung in a tree somewhere.
That being said, I think you'd be able to capture most of the market with something that could hit 50k. Most rockets stay under that ceiling, and judging based on dual deploy systems and such, you're not likely to see anything drift farther than max. altitude. Therefore, the LOS range is the most important because the landing zone can be extrapolated fairly easily based on the last few data points received, and then the signal can be re-acquired when the receiver comes within range of the transmitter again.
Also, if this does make it to the market, be sure to include special instructions for carbon fibre airframes, as they tend to attenuate RF signals. One of my friends nearly lost a handlaunch glider at less than 200 feet because he forgot about the shielding effect.
#### mattvd
##### Well-Known Member
Joined
Feb 27, 2009
Messages
143
Reaction score
0
Is there any GPS solution for less than $300++? If I could find anything even close to $50 I would snap it up in a minute!!
#### Mike Di Venti
Joined
Feb 17, 2009
Messages
404
Reaction score
1
I don't think so. Most receivers are about $200 for something decent and could be used for other things as well. I guess one could set up a scanner for the transmitter freq. and track that way. Scanners aren't that costly, but can be.

#### troj

##### Wielder Of the Skillet Of Harsh Discipline, Potent

Joined
Jan 19, 2009
Messages
14,337
Reaction score
245

Is there any GPS solution for less than $300++? If I could find anything even close to $50 I would snap it up in a minute!!

Nope. The cheapest way (that I've found) to do tracking of any sort is via Big Red Bee's BeeLine products, with 70cm Ham radios. Their GPS transmitter package is $289 (transmitter, battery, USB interface, antenna -- everything you need, except the receive side).
The challenge is on the receive side. I use a Yaesu VX-2r (~$140) and a Byonics PicPac as the decoder (~$70 for the kit, when they were still available) plus $60 for an Arrow Antenna 7-element Yagi. I've used that solution, and it works well. But, I want to upgrade to the Yaesu VX-8r, which has the APRS decoder built in; that radio is $400.
With the PicPac off the market, you can do the same with a TinyTrak and someone with the electronics knowledge to get an LCD mounted on it (a friend has that combination). Still significantly cheaper than the VX-8r.
In addition, all the BeeLine stuff requires an amateur radio license ($15 to take the test, and the time required to study). If you don't want to invest in GPS, you can buy the BeeLine tracking transmitter (~$90 for the kit -- charger/programming module, transmitter and battery), plus a radio and antenna.
##### Well-Known Member
Joined
Jan 30, 2009
Messages
1,731
Reaction score
0
25k is more than enough
for most of the Florida launch fields....
License free and around $200.- would be a must have....

#### rdmmdr

##### Well-Known Member

Joined
Jun 15, 2009
Messages
45
Reaction score
0

then you shall have:
2 ea. xbee pro xsc modules 80.00 (the price just went down)
1 ea. gps engine rated to 30 g's 60.00
rocket side power supply and board 10.00 (3.3 volt regulator and filter)
xbee usb interface board 25.00
cables and antennas 40.00
box 3.00
total 218.00
tracking software is free. "gpsflight is 1200$", don't think so. but let's say we wanted to get the altimeter data: add a pic with three serial channels, write the software for the laptops, or for another 150.00 just add one more channel and use two laptops. so for 365.00 you get everything
so now that we know the what, let's go for the how. take one of the xbee radios and the gps engine (see here)
id_num=11216
and a interface board/power supply(see here)
https://www.makingthings.com/store/accessories/xbee-connector-pack.html
you will not use the side headers. solder the caps, regulator and xbee headers to the board and solder the power leads from the gps to the regulated outputs on the board. solder the power leads for your 9 volt clip to the unregulated inputs. solder the rx of the gps to the xbee tx and the tx from the gps to the xbee rx. attach the antenna to the module and glue everything to your mount, don't glue the antenna or the xbee in yet. i use 3m 5200 for this, since it holds great and won't corrode the board like silicone can. so for less than an hour you have the rocket side done.
now for the hard part: connect a usb to usb mini cable to your laptop, connect the other end to the xbee usb board (see here) https://www.sparkfun.com/commerce/product_info.php?products_id=8687
insert xbee and connect antenna, box optional
for configuring see the tutorial here
https://www.makingthings.com/documentation/tutorial/xbee-wireless-interface
for an hour and a half worth of work you have a working tracker. 215.00 plus shipping
rick
tripoli level 3 11994
tcc | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3205707371234894, "perplexity": 2172.7785191055787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991252.15/warc/CC-MAIN-20210512035557-20210512065557-00507.warc.gz"} |
http://www.research.lancs.ac.uk/portal/en/publications/extended-inflation-with-an-exponential-potential(9cf94ef0-80a6-41f0-9b66-4d49d0cf1ebc).html |
Extended inflation with an exponential potential
Research output: Contribution to journal › Journal article
Published
Article number 083512, published 22/04/1998, Physical Review D, volume 58, issue 8, 6 pages, English
Abstract
In this paper we investigate extended inflation with an exponential potential $V(\sigma)= V_0 e^{-\kappa\sigma}$, which provides a simple cosmological scenario where the distribution of the constants of Nature is mostly determined by $\kappa$. In particular, we show that this theory predicts a uniform distribution for the Planck mass at the end of inflation, for the entire ensemble of universes that undergo stochastic inflation. Eternal inflation takes place in this scenario for a broad family of initial conditions, all of which lead up to the same value of the Planck mass at the end of inflation. The predicted value of the Planck mass is consistent with the observed value within a comfortable range of values of the parameters involved.
Bibliographic note
© 1998 The American Physical Society 6 pages, 2 figures | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9391384720802307, "perplexity": 625.4991766915898}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931009900.4/warc/CC-MAIN-20141125155649-00076-ip-10-235-23-156.ec2.internal.warc.gz"} |