category
|
title
|
question_link
|
question_body
|
answer_html
|
__index_level_0__
|
|---|---|---|---|---|---|
optics
|
Optics for projecting OLED screen DLP style
|
https://physics.stackexchange.com/questions/68609/optics-for-projecting-oled-screen-dlp-style
|
<p>I just bought a small <a href="https://www.sparkfun.com/products/11676" rel="nofollow">OLED screen</a> and was wondering whether it is possible to make the screen surface a little bit bigger (2x) by projecting it through some optics onto a translucent surface. It reminds me of the viewfinders in some old cameras, SLR lens adapters, or DIY LCD projectors.</p>
<p>Does it even make any sense?</p>
<p>I'm positive that some Fresnel lenses will be needed, but I can't get a grasp of their configuration for this to work. </p>
<p>I've found a pretty simple <a href="http://physics.bu.edu/~duffy/java/Opticsa1.html" rel="nofollow">lens simulator</a>, and it makes it look like this can be done with only 2 lenses. But that doesn't seem right.</p>
<p>I know that with LCD projectors you need a fresnel lens and a triplet. Can it be simplified at the expense of image distortion?</p>
|
<p>I'll compare the projector to the more appealing magnifying screen. A simplified approach would put a lens with low distortion within its field of view (<a href="https://en.wikipedia.org/wiki/Angle_of_view" rel="nofollow noreferrer">FOV</a>) at a specific distance to the screen. Luckily these screen magnifiers are <a href="http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias=hpc&field-keywords=screen%20magnifier&rh=n:3760901,k:screen%20magnifier" rel="nofollow noreferrer">commercially available</a> for a 12" display, about 40 US<span class="math-container">$\$$</span>.
<img src="https://i.sstatic.net/9CSr8.jpg" alt="Commercially available product example"></p>
<p><strong>Fresnel Lens basics</strong></p>
<p>The principle of a magnifying glass with just one lens is applied. Remember the thick and heavy lenses of a <a href="https://en.wikipedia.org/wiki/Magnifying_glass" rel="nofollow noreferrer">magnifying glass</a>: since its radius of curvature is determined by its focal length, a standard lens must have a certain thickness.
<a href="https://i.sstatic.net/vzOYl.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vzOYl.jpg" alt="Frontal view shows rings"></a>
Why does it show rings? Those are the sections where the lens transitions to the next lens segment. The following graphic depicts the mental process of creating a Fresnel structure. The original lens function is maintained, with somewhat diminished effectiveness.
<a href="https://i.sstatic.net/nJB9Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nJB9Z.png" alt="Mental process to create a Fresnel lens."></a>
Image taken from the <a href="http://academic.greensboroday.org/~regesterj/potl/Waves/Refraction/RefractionA.htm" rel="nofollow noreferrer">website with further refraction basics</a>.
A Fresnel lens may be used to get a thinner lens. Imagine slicing the thick lens into concentric rings, then reassembling their surfaces to obtain a thinner lens. This is the basic idea of a <a href="http://en.wikipedia.org/wiki/Fresnel_lens" rel="nofollow noreferrer">Fresnel lens</a>. A very thin lens can be engineered on one surface of a sheet of plastic.</p>
<p>The OLED screen can be magnified with a Fresnel lens; the optical principle is just like that of a magnifying glass. However, the needed focal length corresponds to a certain radius of curvature of the lens. A second criterion is seeing the full OLED screen (<a href="https://en.wikipedia.org/wiki/Field_of_view" rel="nofollow noreferrer">field of view</a>). This requires the lens to be of a certain size.</p>
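A minimal thin-lens sketch of the magnifier idea, assuming a hypothetical 100 mm focal length (an example value, not from the answer): placing the OLED at half the focal length produces a virtual image at twice the size.

```python
# Thin-lens sketch: an object inside the focal length of a converging
# (Fresnel) lens forms a magnified virtual image.
# The 100 mm focal length below is an assumed example value.
f = 100.0                     # focal length, mm (hypothetical)
d_o = f / 2                   # object (screen) distance giving 2x magnification

d_i = d_o * f / (d_o - f)     # thin-lens equation; negative => virtual image
M = -d_i / d_o                # transverse magnification
print(f"image at {d_i:.0f} mm (virtual), magnification {M:.1f}x")
```

Solving M = f/(f − d_o) = 2 for d_o gives d_o = f/2, which is why the screen sits at half the focal length in this sketch.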
| 700
|
optics
|
Transmission vs reflection grating
|
https://physics.stackexchange.com/questions/70353/transmission-vs-reflection-grating
|
<p>What are the advantages/disadvantages of a transmission vs a reflection grating? It seems like a transmission grating would be easiest to use. I'm trying to get a spectrum from Thomson scattered light in a plasma. The broader the spectrum of the scattered light, the hotter the plasma. It's a weak effect, so it's important to keep as much light as possible. It would also be important that the image of light is not distorted (at least not along a chosen axis... obviously one direction will have the light spread).</p>
|
<p>Transmission gratings are less sensitive to polarization and alignment, but cannot transmit at longer wavelengths (typically beyond ~2000 nm). </p>
| 701
|
optics
|
What is the optical illusion called where our sun seems to disappear from view and then rise on the third day? And where can it be viewed from?
|
https://physics.stackexchange.com/questions/71735/what-is-the-optical-illusion-called-where-our-sun-seems-to-disappear-from-view-a
|
<p>What is the optical illusion called where our sun seems to disappear for 3 days and then rise again into view on the third day? And where can it be viewed from?</p>
|
<p>I do not know about optical illusions, but there are 72-hour nights (with refraction taken into account), which by coincidence occur at 72 degrees <a href="http://encyclopedia2.thefreedictionary.com/Polar+Night" rel="nofollow noreferrer">latitude</a>.</p>
<blockquote>
<p>Polar Night</p>
<p>a night lasting more than 24 hours, occurring in polar regions north of the arctic circle and south of the antarctic circle. At points in the northern hemisphere with a geographic latitude ϕ, the sun will not rise above the horizon at certain times of the year. This occurs whenever the sun, in its apparent annual motion along the ecliptic, enters an area of the sky that is not visible at that given latitude.</p>
</blockquote>
<p>Its name is "polar night at latitude 72 degrees".</p>
<p>Have a look at this <a href="http://www.mapsofworld.com/world-maps/world-map-with-latitude-and-longitude.html" rel="nofollow noreferrer">map</a> to see the locations where the three days happen: the northernmost parts of Greenland, Canada, and Russia.</p>
| 702
|
optics
|
Vibrations after polarization of light
|
https://physics.stackexchange.com/questions/72404/vibrations-after-polarization-of-light
|
<p>When we polarize light, do we get electric vibrations, magnetic vibrations, or a mixture of both? </p>
<p>If both, then how can both electric and magnetic vibrations occur in a single plane, given that polarization means confining these vibrations to one plane?</p>
|
<p>The plane of polarization refers to the plane in which the electric field oscillates. The magnetic field oscillates perpendicular to the electric field, and also perpendicular to the direction of propagation (assuming we are talking about a plane wave here.)</p>
| 703
|
optics
|
Is there any optical phenomenon can't be explained without magnetic field?
|
https://physics.stackexchange.com/questions/72494/is-there-any-optical-phenomenon-cant-be-explained-without-magnetic-field
|
<p>Almost all optical phenomena can be explained by considering a fluctuating electric field. Is there any optical phenomenon that can't be explained without considering two fluctuating fields, electric and magnetic?</p>
|
<p>As you say, almost everything optical can be explained through the electric field. Let's say you just have a plane wave traveling in the k direction:</p>
<p>$$E_0e^{ikx}e^{-i\omega t} \hat E$$
where $\hat E$ depends on the polarization. What we see or what we record in optics is the intensity, which is $|\Phi_{TOT}(x,t)|^2$. </p>
<p>You have to remember that light is an electromagnetic wave, with orthogonal electric and magnetic components. These are related through <strong>Maxwell's equations</strong>:</p>
<p>$$\nabla \times \vec E=- \frac{\partial \vec B} {\partial t} $$
$$\nabla \times \vec B=\mu_0 \vec J+\mu_0\epsilon_0 \frac {\partial \vec E} {\partial t}$$</p>
<p>So I am not completely sure what you mean by <code>can't be explained without considering two fluctuating fields</code>, but whenever you have an electric field, you will have a magnetic field related to it by Maxwell's equations.</p>
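To make that relation concrete, here is a finite-difference check of Faraday's law for the plane wave above, with E = E0 cos(kx − ωt) ŷ and B = (E0/c) cos(kx − ωt) ẑ; the field values and wavelength are purely illustrative.

```python
import numpy as np

# Faraday's law check: for this plane wave, the z-component of curl E
# reduces to dEy/dx, which must equal -dBz/dt.
c = 3e8                          # speed of light, m/s
E0, lam = 1.0, 500e-9            # illustrative amplitude and wavelength
k = 2 * np.pi / lam
w = c * k

Ey = lambda x, t: E0 * np.cos(k * x - w * t)
Bz = lambda x, t: (E0 / c) * np.cos(k * x - w * t)

# central differences at an arbitrary point in space and time
x0, t0, hx, ht = 1e-7, 1e-16, 1e-12, 1e-21
curlE_z = (Ey(x0 + hx, t0) - Ey(x0 - hx, t0)) / (2 * hx)       # dEy/dx
minus_dBz_dt = -(Bz(x0, t0 + ht) - Bz(x0, t0 - ht)) / (2 * ht)

assert np.isclose(curlE_z, minus_dBz_dt, rtol=1e-4)  # Faraday holds
```

The analytic check is the same: both sides equal −E0 k sin(kx − ωt), since ω/c = k.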
| 704
|
optics
|
Rayleigh scattering in three dimensions
|
https://physics.stackexchange.com/questions/72938/rayleigh-scattering-in-three-dimensions
|
<p>How does the Rayleigh scattering intensity depend on the polarization angle of the incident, linearly polarized light, and the observation angle in three dimensions?</p>
|
<p>In short, Rayleigh scattering is like ideal dipole radiation: the radiation pattern for Rayleigh scattering is exactly the same as dipole radiation when the light is perfectly and linearly polarised, so that the intensity has a $(\cos(\theta))^2$ dependence. For natural, randomly polarised light (e.g. from the Sun), the intensity varies as $1 + (\cos(\theta))^2$. Here $\theta$ is the altitude angle - the angle the scattering makes with the wavevector (Poynting vector) of the incoming plane field. See the <a href="http://en.wikipedia.org/wiki/Rayleigh_Scattering" rel="nofollow">Wikipedia page on Rayleigh Scattering</a> for details with the formula for unpolarised light. For other scenarios, you'll have to dig into the theory as follows.</p>
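The two angular dependences quoted above can be tabulated directly (normalisation is arbitrary here): I ∝ cos²θ for perfectly linearly polarised light, and I ∝ 1 + cos²θ for unpolarised light, with θ the scattering angle.

```python
import numpy as np

# Rayleigh scattering angular patterns, one sample per degree of θ.
theta = np.linspace(0.0, np.pi, 181)
I_pol = np.cos(theta) ** 2          # linearly polarised incident light
I_unpol = 1.0 + np.cos(theta) ** 2  # unpolarised (e.g. sunlight)

# At θ = 90° the polarised pattern vanishes, but unpolarised light
# still scatters sideways at half its forward intensity.
assert np.isclose(I_pol[90], 0.0, atol=1e-12)
assert np.isclose(I_unpol[90], 1.0)
```

This sideways null in the polarised pattern is the dipole's "no radiation along its axis" behaviour described in the Mie-theory quotation below.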
<p>At the level of individual particles, you can get all you need from Section 13.5 "Diffraction of a Conducting Sphere: Theory of Mie" in Born and Wolf, "Principles of Optics" (mine is the sixth edition). In Section 13.5.2, under the subheading "limiting cases", we have:</p>
<p>"[begin quotation] It is easy to see that equations (88) [first order approximation in general Mie theory] are identical with those for the radiation zone ($r\gg\lambda$) of an <em>electric dipole</em> at O [the origin], <em>oscillating parallel to the x-axis</em>, i.e. parallel to the electric vector of the primary incident field, and of moment $p=p_0 e^{-i\,\omega\,t}$ where:</p>
<p>$p_0 = a^3 \left|\frac{\hat{n}^2-1}{\hat{n}^2+2}\right|$</p>
<p>[end quotation]" The emphasis (italics) is Born and Wolf's, not mine.</p>
<p>Here $a$ is the Mie scatterer's radius ($a \ll \lambda$ for Rayleigh scattering), and $\hat{n}$ is the ratio of the particle's refractive index to that of the medium the particle is steeped in (in general complex, because the discussion is mainly about metal particles, but the theory is perfectly general). The $p_0$ formula assumes that the incident field has an electric field vector of unit amplitude, and Born and Wolf use Gaussian units. So beware to make the right conversion factor! The moment in general of course scales linearly with the incident electric field.</p>
<p>So you just look up the Maxwell equation solutions for the small dipole and use the recipe above to set the dipole moment in your formulas.</p>
<p>For Rayleigh scattering from a general shape rather than a sphere, the above will hold just as well but with an effective radius $a = \sqrt[3]{\frac{3 V}{4 \pi}}$, where $V$ is the scatterer's volume. This is because the particle just "looks like a dipole" to the electromagnetic field, whose wavelength is too big to "see" the details of the particle's shape: it simply sees that there is a "volume of stuff" at the origin. So, the scattered field from $N$ tiny spheres in a small volume at the origin will give $N$ times the scattered field (to first order, where the scattered field from one sphere is too small to influence the behaviour of another), by linearity of Maxwell's equations. Therefore, we'll get the right answer if we replace $a$ by a calibrated scaling of the cube root of the particle's volume. </p>
<p>Thanks for asking the question. I'm ashamed to say I've never really thought about it, but now that you mention it, I think you can see this effect when you are riding in an aeroplane: the blue cast of the sky looks layered and the blueness looking horizontally out of the window seems less bright than it does if you cast your gaze almost horizontally but slightly towards the ground or sky (i.e. you get less scattering from scatterers at your selfsame height above ground).</p>
| 705
|
optics
|
A simple question
|
https://physics.stackexchange.com/questions/75062/a-simple-question
|
<p>This is a simple question, but it seems to me that both explanations are acceptable.</p>
<p>Let's say the least distance of distinct vision is 25 cm.</p>
<p>An object is placed 12.5 cm in front of a plane mirror. Where should my eye be placed so that I can see the image clearly?</p>
<p>Actually, there are only two options:</p>
<p>A) My eye should be placed 12.5cm from the mirror.</p>
<p>----[Because object distance = 12.5 cm and image distance = 12.5cm .]</p>
<p>B) My eye should be placed 25cm from the mirror.</p>
<p>----[Because the image is “printed” on the mirror. It is just like looking at a picture on the wall.]</p>
<p>So, which is correct?</p>
|
<p>The <a href="http://en.wikipedia.org/wiki/Plane_mirror" rel="nofollow">virtual image</a> created by the flat mirror is 12.5 cm behind the mirror. Thus, if the <a href="http://en.wikipedia.org/wiki/Lens_%28anatomy%29" rel="nofollow">lens</a> of your eye is making an image of something at a distance 25 cm, you should be placed at 12.5 cm from the mirror so that the total distance to the virtual image is 25 cm.</p>
<p>The image is <strong>not</strong> "printed" on the mirror. If it was, then it would just be like something printed on a piece of paper: no three-dimensional perspective.</p>
| 706
|
optics
|
Law of reversibility of light and total internal reflection
|
https://physics.stackexchange.com/questions/78678/law-of-reversibility-of-light-and-total-internal-reflection
|
<p>When light passes from a denser to a rarer medium at the critical angle of incidence, the refracted ray grazes along the surface of the denser medium. According to the law of reversibility of light, the same thing should happen when we reverse the direction of the light. But how can that be? How does the light know when to go into the denser medium? </p>
| 707
|
|
optics
|
counting refractive index of a plano convex lens
|
https://physics.stackexchange.com/questions/81538/counting-refractive-index-of-a-plano-convex-lens
|
<p>Suppose there is a plano-convex lens whose thickness is 5.00 cm. If you look at it straight on from the convex side, its thickness appears to be 4.40 cm. What is the refractive index of this lens?</p>
|
<p>Refraction occurs because of the slowing of light in a medium, so the ratio between the actual thickness and the perceived thickness is directly related to the ratio of the refractive index of the lens material to that of air.</p>
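Numerically, assuming the simple flat-interface apparent-depth relation n = (real thickness)/(apparent thickness) applies here:

```python
# Apparent-depth sketch of the ratio argument above. The flat-interface
# relation n = real / apparent is an assumed simplification.
real_t, apparent_t = 5.00, 4.40   # cm, from the question
n = real_t / apparent_t           # ~1.14
print(f"refractive index ~ {n:.2f}")
```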
| 708
|
optics
|
conceptual meaning of "virtual image"
|
https://physics.stackexchange.com/questions/83755/conceptual-meaning-of-virtual-image
|
<p>I am trying to learn about optics and I am having a hard time understanding the meaning of "real" vs "virtual" image.</p>
<p>My understanding is that for a concave mirror, the image focuses on the same side as the object so it is a real image.</p>
<p>For a convex mirror, the image focuses on the opposite side of the object from the mirror so it is a virtual image.</p>
<p>However, we can see both real and virtual images, so how are we seeing something that focuses on the other side of the mirror in the convex case?</p>
<p>I am very confused.</p>
|
<p>The distinction is very simple. A real image is one that the EM radiant energy (the rays) actually passes through, so you can put a screen there and see the image.</p>
<p>A virtual image is an imaginary image. No rays or EM radiation actually pass through it, so you can't see it on a screen placed there; it doesn't exist, you just perceive it as being there. But you can photograph it, by putting a camera where the rays do emerge from, i.e. where you were able to see the virtual image with your eye.</p>
<p>Virtual, means it doesn't exist; it isn't real. For some crazy reason, people use "virtually" to mean, it is almost certain to be true; the exact opposite of what it really means.</p>
| 709
|
optics
|
Fluorescence lifetime imaging
|
https://physics.stackexchange.com/questions/86896/fluorescence-lifetime-imaging
|
<p>Actually, I am not a physics student, but I have to give a lecture about fluorescence lifetime imaging (FLIM). I am thinking of comparing this technique to other imaging techniques.
Does anyone know the best imaging technique to compare with FLIM?
Thank you</p>
<p>Soha</p>
|
<p>Although this is kind of trivial, nonetheless I find that many people do not grasp the following points at first glance and struggle under the weight of the misconceptions that follow.</p>
<p>You might try making the analogy between FLIM and Doppler shift imaging, such as is used in echocardiograms, or with interferometry, or even a rainfall plot as a function of position over a map. The point is that the FLIM image reports properties of the imaged specimen as a function of position that are unrelated to the light's intensity. Each pixel in the FLIM image assumes that fluorescence with a good enough signal-to-noise ratio can be gathered from the corresponding position in the imaged specimen. <em>Given that this is so</em> (otherwise the image for poor-SNR pixels is simply garbage), the FLIM image reports how long fluorescence takes to emerge from each point in the sample after excitation. Sometimes the FLIM image "looks like a conventional image" (i.e. a brightfield microscopy image), but this is only insofar as the cellular structures imaged correspond to different chemical compositions and therefore different fluorescence interactions with the driving light.</p>
| 710
|
optics
|
Propagating higher-order Hermite-Gauss modes using the Complex Beam Parameter?
|
https://physics.stackexchange.com/questions/92736/propagating-higher-order-hermite-gauss-modes-using-the-complex-beam-parameter
|
<p>A Gaussian laser beam can be propagated through an optical system (consisting of free space, thin lenses, curved and flat interfaces, etc) by using the "ABCD" <a href="http://en.wikipedia.org/wiki/Ray_transfer_matrix_analysis" rel="nofollow">ray-transfer matrices</a>, and the <a href="http://en.wikipedia.org/wiki/Ray_transfer_matrix_analysis#Ray_transfer_matrices_for_Gaussian_beams" rel="nofollow">complex beam parameter</a> $\tilde{q}$.</p>
<p>A higher-order Hermite-Gauss or Laguerre-Gauss laser beam will gain Gouy phase more quickly than the fundamental Gaussian mode. Is there a simple modification to the complex beam parameter propagation that will also work for these higher order modes?</p>
|
<p>The complex beam parameter $\tilde{q}$, otherwise known as the complex radius of curvature, describes the transformation of the fundamental Gaussian mode through an optical system. All of the parameters of the higher order modes can be related to this fundamental mode transformation. </p>
<p>In the case of the Gouy phase, it can be calculated <strong>relative to the waist</strong> for the Hermite-Gauss modes by
$$
\eta(\tilde{q})=(m+n+1)\arctan\left(\frac{\Re(\tilde{q})}{\Im(\tilde{q})}\right),
$$
where $m$ and $n$ describe the particular higher order mode of interest. A similar expression gives the Gouy phase of the Laguerre-Gauss modes. When working with Gouy phase in an optical system, the quantity of interest is the <strong>total, accumulated</strong> Gouy phase. To calculate this you need to calculate the Gouy phase picked up in each portion separately and add them together. </p>
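A short sketch of the formula above in action, with an assumed waist size and wavelength (example values only): propagating q through free space and taking Gouy-phase differences shows the HG11 mode accumulating phase (m+n+1) = 3 times faster than the fundamental.

```python
import numpy as np

# Gouy phase from the complex beam parameter, per the formula above.
def gouy(q, m=0, n=0):
    """Gouy phase of an HG_mn mode, relative to the waist."""
    return (m + n + 1) * np.arctan2(np.real(q), np.imag(q))

w0, lam = 1e-3, 1064e-9         # assumed waist and wavelength
zR = np.pi * w0 ** 2 / lam      # Rayleigh range
q0 = 1j * zR                    # q at the waist
q1 = q0 + 2 * zR                # free-space propagation by z = 2 zR: q -> q + z

dphi_00 = gouy(q1) - gouy(q0)               # fundamental mode: arctan(2)
dphi_11 = gouy(q1, 1, 1) - gouy(q0, 1, 1)   # HG11 mode
assert np.isclose(dphi_11, 3 * dphi_00)     # higher-order mode: 3x the phase
```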
| 711
|
optics
|
Could a spatial filter improve a heterodyne signal?
|
https://physics.stackexchange.com/questions/93520/could-a-spatial-filter-improve-a-heterodyne-signal
|
<p>Consider two beams of light at slightly difference frequencies that are interfered at a detector. The signal of interest is contained in the phase of the observed signal. As the beams travel around they pass through a variety of optics, which slightly distort the beams' wavefronts. As I understand it, a spatial filter is supposed to clean up the wavefronts. What would be the effect of the spatial filter on the observed signal? Would it give a less noisy measurement of the signal phase?</p>
|
<p>It depends somewhat on the specifics of your measurement, but there are some cases in which it will improve your SNR to clean the beam up. This could be accomplished with the use of a spatial filter or a mode cleaner; the mode cleaner will do a better job but will add complexity. </p>
<p>To qualify the first statement a bit: generally your signal will be contained in the beat note between the fundamental modes ($TEM_{00}$) of your two beams. It is possible for higher-order modes of the two beams to be picked up in common, in which case they can interfere and give you signal. If the two beams travel along different paths, they will pick up different distortions, adding higher-order mode components which <em>will not</em> interfere with each other at your detector. These components of the light then add noise, in the form of shot noise, but no signal, decreasing your SNR. </p>
<p>You should be careful though, particularly with the spatial filter. If the beams are mostly $TEM_{00}$ to begin with you may end up making the SNR worse by using a tricky to align spatial filter. In addition, the fast lenses used in spatial filters will introduce spherical aberrations which could also cause headaches.</p>
| 712
|
optics
|
Beam power and electric field after a beam splitter
|
https://physics.stackexchange.com/questions/93669/beam-power-and-electric-field-after-a-beam-splitter
|
<p>Consider a beam with power $P_1$ and electric field amplitude $E_{01}$. It is sent through a 50/50 beam splitter that produces beams with power $P_2=P_3=P_1/2$. What are the electric field amplitudes of the split beams, $E_{02}$ and $E_{03}$? </p>
<p>From what I understand, $P=KE_0^2$ where $K$ is a constant. Therefore, </p>
<p>$E_{02}= \sqrt{P_2/K}=\frac{1}{\sqrt{2}}\sqrt{P_1/K}$,</p>
<p>$E_{03}= \sqrt{P_3/K}= \frac{1}{\sqrt{2}}\sqrt{P_1/K}$</p>
<p>Now say I can recombine the beams in phase and without any losses, then $E_{04}=E_{02}+E_{03}$. The power of the recombined beam is</p>
<p>$P_4=KE_{04}^2 $</p>
<p>$= K[E_{02}^2 + E_{03}^2 + 2E_{02}E_{03}]$</p>
<p>$=\frac{1}{2}P_1+\frac{1}{2}P_1 + P_1$</p>
<p>$=2P_1$</p>
<p>So $P_4 >P_1$ and I've created power from nowhere! What is wrong with this picture?</p>
|
<p>Your error is in how the electric fields are combined by a 50/50 beam splitter. If you have two entry ports $a$ and $b$ with electric field amplitudes $E_a$ and $E_b$, and exit ports $c$ and $d$ with electric field amplitudes $E_c$ and $E_d$, then the correct way to combine them is
$${E_c=\frac1{\sqrt2}(E_a+E_b),\\ E_d=\frac1{\sqrt2}(E_a-E_b).}$$</p>
<p>You know one instance of this already: if port $b$ is shut off, then each of the output ports should get $1/\sqrt2$ of the amplitude. Similarly, if port $a$ is shut off, the same thing should happen, so both $E_a$ and $E_b$ should have equal weights in the expressions for $E_c$ and $E_d$. (Here, of course, I'm invoking the principle of superposition to combine the solutions for different sets of sources.) </p>
<p>The phases are a little trickier. The minus sign along the bottom can be imposed by saying the interferometer is aligned so that no output comes out of that port for equal input intensities. The sign of $E_a$ at the top can also be arbitrarily set, by adding an appropriate phase delay on port $c$.</p>
<p>(A brief note for an intermission. The formulas above can of course be derived rigorously once you know how the beam splitter is implemented. However, it's not necessary to know the details to derive them, since they also follow from the general considerations I'm expounding here.)</p>
<p>By now, your problem has gone away. Even if you had an arbitrary phase in the final coefficient, i.e. $E_c=\frac1{\sqrt2}(E_a+e^{i\theta}E_b)$ for some $\theta$, then you can't create energy:</p>
<p>$$P_c=K|E_c|^2=\frac12K(E_a^2+2\cos\theta E_a E_b+E_b^2)\leq \frac12K(E_a+E_b)^2 = P_1.$$</p>
<p>In fact, in order not to <em>destroy</em> energy, you must have both components coming out in phase, with $\theta=0$ and a plus sign in the equation for $E_c$.</p>
<p>So, what's the bottom line? In order to recombine beams in phase and without any losses, you must put them through a beam splitter and ensure that interference kills the other output port. To do that, though, the amplitude of each beam gets further reduced by $1/\sqrt2$, and this ensures that the total beam energy is conserved.</p>
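To make the energy bookkeeping concrete, here is a numeric check (with arbitrary complex input amplitudes) that the beam-splitter relations above conserve power: the matrix they form is unitary, so |E_c|² + |E_d|² = |E_a|² + |E_b|² for any inputs.

```python
import numpy as np

# 50/50 beam-splitter matrix from the answer:
# Ec = (Ea + Eb)/sqrt(2),  Ed = (Ea - Eb)/sqrt(2)
BS = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

rng = np.random.default_rng(0)
E_in = rng.normal(size=2) + 1j * rng.normal(size=2)   # arbitrary amplitudes
E_out = BS @ E_in

P_in = np.sum(np.abs(E_in) ** 2)
P_out = np.sum(np.abs(E_out) ** 2)
assert np.isclose(P_in, P_out)   # no power created or destroyed
```

The paradox in the question comes from applying only the 1/√2 power split on the way in and forgetting the same factor on recombination, which this unitarity makes impossible.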
| 713
|
optics
|
Reflectance of Titanium as Function of Thin Film Thickness
|
https://physics.stackexchange.com/questions/33212/reflectance-of-titanium-as-function-of-thin-film-thickness
|
<p>As far as I know, transmittance equals $e^{-\alpha x}$, where $\alpha$ is the absorption coefficient and $x$ is the thin-film thickness ($100-300\,nm$). My team and I have engineered a way to find absorption. Transmittance: T = (output intensity)/(initial intensity). Absorption: A = (initial intensity - output intensity)/(initial intensity). After simplifying, one should get
$$A= 1-T$$</p>
<p>The problem I am facing is I do not know any equation, which will give me reflectance of Ti as a function of thin film thickness at $808\,nm$ wavelength. If someone gives me an equation of transmittance, please include the effects of absorption so that I can calculate the reflectance afterwards. </p>
|
<p>Grab a copy of Optical Properties of Thin Solid Films by O. S Heavens. This discusses the transmission and reflection in great detail. There is more info on Google Books <a href="http://books.google.co.uk/books/about/Optical_properties_of_thin_solid_films.html?id=hixRAAAAMAAJ" rel="nofollow">here</a>, but it hasn't been scanned so you'll need to find a paper copy.</p>
<p>However, the overall transmission and reflection are calculated by working out the transmission and reflection at each interface and combining them. So for films thin enough that you don't get interference, the end result is just that R = 1 - T. There is no equation for reflection separate from the equation for transmittance.</p>
| 714
|
optics
|
What is primary reason for a matter to be transparent as Cornea is?
|
https://physics.stackexchange.com/questions/38979/what-is-primary-reason-for-a-matter-to-be-transparent-as-cornea-is
|
<p>Is it because its internal structure is crystalline? By transparency I mean <a href="http://en.wikipedia.org/wiki/Transparency_%28optics%29" rel="nofollow">the following</a>.</p>
|
<p>No, the <a href="http://en.wikipedia.org/wiki/Cornea" rel="nofollow noreferrer">Cornea</a> is mostly fibrous - not crystalline. </p>
<p>As for the physics of optical transparency, <a href="http://van.physics.illinois.edu/qa/listing.php?id=2046" rel="nofollow noreferrer">here's</a> an elementary introduction:</p>
<blockquote>
<p>In order to go through a material evenly, light has to avoid two things:</p>
<ol>
<li><p>being absorbed</p></li>
<li><p>being scattered off into another direction.</p></li>
</ol>
</blockquote>
<p>See also </p>
<ul>
<li><a href="https://physics.stackexchange.com/questions/7437/why-glass-is-transparent">Why glass is transparent?</a></li>
<li><a href="https://physics.stackexchange.com/q/1836/12613">Why is air invisible?</a></li>
<li><a href="https://physics.stackexchange.com/q/30210/12613">How can a body be transparent?</a></li>
<li><a href="https://physics.stackexchange.com/q/29197/12613">Why is the atmosphere transparent in the visible spectrum?</a></li>
<li><a href="https://physics.stackexchange.com/q/30365/12613">Why does paper become translucent when smeared with oil but not (so much) with water?</a></li>
<li><a href="https://physics.stackexchange.com/questions/30366/why-wet-is-dark">Why do wet objects become darker?</a></li>
<li><a href="https://physics.stackexchange.com/questions/11138/transparency-of-materials">Transparency of materials</a></li>
<li><a href="https://physics.stackexchange.com/questions/30325/why-isnt-light-scattered-through-transparency">Why isn't light scattered through transparency?</a></li>
</ul>
| 715
|
optics
|
Maximum resolution per lens size
|
https://physics.stackexchange.com/questions/41597/maximum-resolution-per-lens-size
|
<p>This question is more practical than theoretical, but I am interested in the theoretical considerations as well.</p>
<p>My wife just bought a Samsung S3 phone with a 8 MP image sensor hiding behind a tiny lens. In daylight the pictures come out fine, but it suffers horribly in low-light conditions. Is there a theoretical limit as to how fine an image sensor can be behind a lens of a specific aperture, given a reasonable amount of ambient light and a reasonable shutter speed? Will increasing the sensor resolution beyond this limit decrease the actual resolution (the ability to resolve two points as individual points) of the final image?</p>
<p>Thanks.</p>
|
<p>The resolution is controlled by diffraction at the smallest part of the lens system. The Wikipedia article on <a href="http://en.wikipedia.org/wiki/Angular_resolution" rel="nofollow">angular resolution</a> goes into this in some detail. To quote the headline from this article, for a camera the spatial resolution at the detector (or film) is given by:</p>
<p>$$ \Delta \ell = 1.22 \frac{f\lambda}{D} $$</p>
<p>where $f$ is the distance from the plane of the lens to the detector, $\lambda$ is the wavelength of the light and $D$ is the camera aperture. Making the pixel size smaller than $ \Delta \ell$ won't do any harm, but it won't make the pictures any sharper.</p>
<p>I don't know if smartphone cameras contain a variable aperture. With conventional cameras, larger apertures produce less diffraction, so the picture quality should actually improve in low light. However, larger apertures expose a larger area of the lens, and optical aberrations then dominate the quality. The end result is that there is an optimum aperture, below which diffraction dominates and above which optical aberration dominates.</p>
<p>Incidentally, the poor performance at low light probably isn't due to diffraction. I'd guess it's just that the signal to noise ratio of the detected light falls so far the pictures get very noisy.</p>
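Plugging ballpark numbers into the formula above shows why ever-finer pixels stop helping. The focal length and f-number here are assumed, typical phone-camera values, not specs of the S3.

```python
# Diffraction-limited spot size at the sensor: Δl = 1.22 f λ / D.
# f = 4 mm and f/2.6 are assumed ballpark values for a phone camera module.
f = 4e-3            # focal length, m (hypothetical)
D = f / 2.6         # aperture diameter of an f/2.6 lens, m
lam = 550e-9        # green light, m

spot = 1.22 * f * lam / D   # note f/D cancels to the f-number: 1.22 * 2.6 * λ
print(f"diffraction-limited spot ~ {spot * 1e6:.2f} um")
```

That comes out near 1.7 µm, comparable to typical phone pixel pitches, so shrinking pixels much below this cannot sharpen the image.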
| 716
|
optics
|
How big of a lens or parabolic mirror would it take
|
https://physics.stackexchange.com/questions/44716/how-big-of-a-lens-or-parabolic-mirror-would-it-take
|
<p>...to heat a piece of steel so its glowing yellow (1100 C)? Assuming you had a cloudless day at a latitude of, say, San Francisco...</p>
<p>Basically I'm wondering if it is possible/feasible to be able to do basic metal working without a traditional forge, just using the power of the sun to heat the metal. So the diameter of the heated spot would have to be about 6" in order to heat a large enough area of the metal to work it...</p>
<p>I always thought you would need several huge pieces of equipment to do this, but just thought I'd ask if anyone here knew how to figure out it roughly...</p>
<p>Thanks!</p>
|
<p>For steel, the specific heat is $c_p=0.5\,\mathrm{kJ/(kg\,K)}$, with a density of $\rho=7000\,\mathrm{kg/m^3}$. Suppose you want to increase the temperature by, say, $\Delta T=1100\,\mathrm{K}$ of a piece of size $V=(15\,\mathrm{cm})^3$.</p>
<p>Then you would need a total energy of</p>
<p>$$E=\rho c_p V \Delta T$$
which gives you typically $E=10^7\,\mathrm{J}$.</p>
<p>Now, the power of the sun on a bright day, would be of the order of $p=10^3 W/m^2$.</p>
<p>Assuming that</p>
<ul>
<li>all the energy input is converted into heat</li>
<li>the mirror is perfectly aligned</li>
<li>no heat is lost during heating,</li>
<li>no melting, e.g. no latent heat</li>
</ul>
<p>and your mirror had diameter $D$ and you let the process run for a time $t$, then</p>
<p>$$E=p \frac{\pi}{4}D^2 t $$</p>
<p>Solving for $D$ and approximating $\frac{4}{\pi}\approx 1$, you get</p>
<p>$$D \approx \sqrt{\frac{E}{pt}}$$</p>
<p>So, suppose you are willing to wait ten minutes; then the mirror diameter would be $D\approx 4\,\mathrm{m}$. Considering we assumed an ideal system, this is only an order-of-magnitude estimate. </p>
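The arithmetic above can be reproduced in a few lines, keeping the exact 4/π factor in (all inputs are the answer's assumed values):

```python
import math

# Order-of-magnitude solar-forge estimate from the answer.
rho, cp = 7000.0, 500.0      # steel: density (kg/m^3), specific heat (J/(kg K))
V, dT = 0.15 ** 3, 1100.0    # 15 cm cube volume (m^3), temperature rise (K)
E = rho * cp * V * dT        # total energy needed, ~1.3e7 J

p, t = 1e3, 600.0            # solar flux (W/m^2), heating time: ten minutes (s)
D = math.sqrt(4 * E / (math.pi * p * t))   # from E = p * (pi/4) * D^2 * t
print(f"E = {E:.2e} J, mirror diameter D = {D:.1f} m")
```

With the 4/π factor kept, D comes out closer to 5 m; dropping it gives the ~4 m quoted above, well within the order-of-magnitude spirit of the estimate.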
| 717
|
optics
|
Does light shine better through a matt surface or a glossy surface?
|
https://physics.stackexchange.com/questions/46463/does-light-shine-better-through-a-matt-surface-or-a-glossy-surface
|
<p>I am currently designing a lighting solution for Phillips as part of my university degree. However, I am stuck on a small problem, as stated above. If I were to have a strip of perspex with the sides matt and the bottom polished, would I get more light through the polished surface or the matt surface? And why?</p>
|
<p>Reading this may help you out: <a href="http://en.wikipedia.org/wiki/Fresnel_equations" rel="nofollow">http://en.wikipedia.org/wiki/Fresnel_equations</a></p>
<p>What you are describing seems to be some sort of light pipe, where you are counting on total internal reflection to transport the light along the strip. Whether your pipe is surrounded by a lower refractive index material, as in the cladding of an optical fiber, or simply by air, to keep all, or at least most, of the light from refracting out of the pipe you need the incidence angle to be shallow enough (that is, beyond the critical angle).</p>
<p>On a matte surface, rather than having a well defined incidence angle for a ray of light, this will be spread over a range of different angles, due to the irregularities of the surface. <a href="http://en.wikipedia.org/wiki/Diffuse_reflection" rel="nofollow">This</a> may help understanding that. It will depend on your exact configuration, but if you look at the graphs of the Fresnel formulas above, that is almost certainly going to mean that in a light pipe configuration more light will refract out of a frosted surface than a smooth one.</p>
<p>The new <a href="http://rads.stackoverflow.com/amzn/click/B008GEKXUO" rel="nofollow">Kindle Paperwhite</a> uses a smooth light pipe, with little protuberances at selected points to guide light out of the pipe; they used to have a nice video explaining it.</p>
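As a rough numeric illustration of the shallow-angle condition, here is the critical angle for total internal reflection. The refractive index n = 1.49 for perspex/acrylic is an assumed typical value, not taken from the question:

```python
import math

# Critical angle for total internal reflection at a pipe/air boundary.
def critical_angle_deg(n_inside, n_outside=1.0):
    """Rays hitting the wall at more than this angle (from the normal) are totally reflected."""
    return math.degrees(math.asin(n_outside / n_inside))

theta_c = critical_angle_deg(1.49)   # 1.49: assumed index for perspex/acrylic
print(f"critical angle ~ {theta_c:.1f} deg")  # ~42.2 deg
```

On a matte surface the local normal varies from point to point, so some rays meet the wall below this angle and refract out, which is why more light escapes through a frosted side.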
| 718
|
optics
|
Spectral luminous efficiency as a function of wavelength
|
https://physics.stackexchange.com/questions/51957/spectral-luminous-efficiency-as-a-function-of-wavelength
|
<p>I've come across plenty of figures demonstrating the spectral luminous efficiency as a function of wavelength (meaning the humans eye's sensitivity to different wavelengths)
<a href="http://www.yorku.ca/eye/photopik.htm" rel="nofollow">http://www.yorku.ca/eye/photopik.htm</a>
but I've been unable to find the equation which gives this function so I can plot it myself.</p>
|
<p>The equation is approximately:</p>
<blockquote>
<p>$$V(\lambda)=\exp\left[-\frac{1}{2}\left(\frac{\lambda-559}{41.9}\right)^{2}\right]$$ with $\lambda$ in nanometres.</p>
</blockquote>
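A minimal sketch for evaluating and plotting the Gaussian approximation above yourself (wavelength in nanometres):

```python
import math

# Gaussian approximation to the photopic luminous efficiency, as quoted above:
# V(lambda) = exp(-0.5 * ((lambda - 559) / 41.9)^2), lambda in nm.
def V(wavelength_nm):
    return math.exp(-0.5 * ((wavelength_nm - 559.0) / 41.9) ** 2)

for lam in (450, 500, 555, 559, 600, 650):
    print(f"{lam} nm -> V = {V(lam):.3f}")
```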
| 719
|
optics
|
Erratic light spot in optical projection
|
https://physics.stackexchange.com/questions/54245/erratic-light-spot-in-optical-projection
|
<p>I once had an old microscope, that included a projection screen that could be mounted instead the eyepiece. It showed a quite decent palm-sized image. </p>
<p>Now I have a new microscope, and removed the eyepiece to mount a single lens reflex camera. However, the image projected on the ground glass, and also on the imaging sensor, shows a brightened fluff around the image center. By moving the microscope with respect to the camera, the spot can be moved a little, but it always stays inside the field of view. What is the reason for this effect?</p>
|
<p>The type of microscope and illumination scheme are not described. However, in general the microscope produces a real image down inside the tube just below the eyepiece. For attaching a camera, a separate port is sometimes provided such that the camera image sensor can be placed at this real image plane, or at a relayed real image plane. With an SLR type camera, however, it might be difficult to get the ground glass/ image plane down this low just by removing the eyepiece -- it depends on the microscope design.</p>
<p>By grossly adjusting the objective focus, it may be possible to move the nominal image plane up far enough beyond its usual position to reach the SLR image plane, but at that point both the microscope imaging and illumination optics may be operating very far from their design points, and this could possibly explain the observation of "fluff".</p>
<p>Absent an engineered camera adaptor, another approach which can produce usable results is to leave the eyepiece in place and place a camera (with its lens focussed at infinity) as close as possible to the eyepiece. The camera objective aperture should be kept fairly wide open.</p>
| 720
|
optics
|
Is there a formula for determining the focal point of a sphere?
|
https://physics.stackexchange.com/questions/57070/is-there-a-formula-for-determining-the-focal-point-of-a-sphere
|
<p>I guess this is the same as for cylinders, when light is shone through parallel to the cross-section, but Google-ing this only turns up lenses like the ones used in glasses.</p>
<p>I'm looking for something like what's described in this article: <a href="http://spie.org/x34513.xml" rel="nofollow">http://spie.org/x34513.xml</a></p>
<p>I hope to determine the veracity of the equation described there.</p>
|
<p>At your request. Here's <a href="http://mysite.du.edu/~jcalvert/astro/heilig.htm" rel="nofollow">somebody</a> working it out for you.</p>
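For a quick cross-check of whatever the linked page derives: the standard paraxial thick-lens result for a full sphere (a "ball lens") is EFL = nD / (4(n-1)), measured from the sphere's center. This formula and the n = 1.5, D = 10 mm example values below are mine, not from the link:

```python
# Paraxial focal distances of a glass sphere (ball lens) of index n, diameter D.
# EFL is measured from the sphere's center, BFL from its back surface.
def ball_lens(n, D):
    efl = n * D / (4.0 * (n - 1.0))
    bfl = efl - D / 2.0
    return efl, bfl

efl, bfl = ball_lens(n=1.5, D=10.0)   # assumed example: a 10 mm sphere of n = 1.5 glass
print(f"EFL = {efl:.2f} mm, BFL = {bfl:.2f} mm")   # 7.50 mm and 2.50 mm
```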
| 721
|
optics
|
Circular polarisation
|
https://physics.stackexchange.com/questions/57123/circular-polarisation
|
<p>If we have a planar and harmonic EM wave, with $B$ field:</p>
<p>$$B=A\left(\begin{array}{c}
1\\
i\\0
\end{array}
\right)e^{-i(\omega t-\vec k\cdot\vec r)}$$</p>
<p>and with it's corresponding $E$ field. This is a circularly polarised wave, but that field does not have 0 divergence, the three components of it, when taking real part, are:</p>
<p>$$x=\cos (\vec k\cdot\vec r-\omega t)$$
$$y=i\cdot i\sin(\vec k\cdot\vec r-\omega t)=-\sin(\vec k\cdot\vec r-\omega t)$$
$$z=0$$</p>
<p>The divergence won't be 0 unless $\vec k=(0,0,a)$ for some $a$. So what's the problem here? Is this not a valid wave unless it propagates along the $z$ axis? If the $E$ field is just the same but with a different phase, I guess the same thing would have to hold if the wave had no matter around it: $\nabla\cdot E =0$, right?</p>
|
<p>$\vec{E}$ has not a different phase, but a different polarisation:
$\vec{B} = \vec{n}\times\vec{E}$.</p>
<p>Yes, this cannot be a solution of Maxwell's equations for all $\vec{k}$, because $\nabla\cdot\vec{E} = 0$ for $\vec{E} = \vec{E_0}\, e^{-i(\omega t - \vec{k}\cdot\vec{r})}$ implies $\vec{k}\cdot\vec{E_0} = 0$, so electromagnetic waves in empty space are transverse. The deeper reason for this is the gauge invariance of the EM field and, ultimately, the masslessness of the photon.</p>
| 722
|
optics
|
What is the effect of refractive index of an object for imaging?
|
https://physics.stackexchange.com/questions/56114/what-is-the-effect-of-refractive-index-of-an-object-for-imaging
|
<p>My Question is as follows.</p>
<p>What is the effect of refractive index of an object for imaging (Photographs by high speed camera) on its size and shape information incurred from image?</p>
<p>Let's say:</p>
<p>I keep the camera focal length, aperture, distance between camera and object, and background light intensity constant.
I put two objects of the same diameter 'd' but with different refractive indices n1 and n2 in front of the camera, one at a time.
The sizes inferred from the images of those objects are diameters d1 and d2.</p>
<p>My question is: will d1 and d2 be the same, or will they differ? If they differ, how can I relate them analytically, theoretically, or by ray-tracing projection?</p>
|
<p>A while back I did an experiment imaging oil droplets in water, where we used oils of differing refractive index. Is this the sort of thing you're interested in?</p>
<p>If so, assuming your camera is in focus it will accurately record the size of the oil droplet so varying the refractive index will not cause the <em>apparent</em> size of the drop to change. However the difference between the refractive index of the oil and the refractive index of the water will affect the contrast.</p>
<p>To see this, imagine using an oil with the same refractive index as water. Because there is no refractive index step at the oil/water boundary there will be no reflection of light, and therefore the oil drop will be invisible. As you increase the refractive index of the oil you increase the amount of light reflected at the edges of the drop, and it becomes easier to see.</p>
| 723
|
optics
|
Fraunhofer diffraction simulation for a hexagonal aperture, what are the typical units?
|
https://physics.stackexchange.com/questions/60070/fraunhofer-diffraction-simulation-for-a-hexagonal-aperture-what-are-the-typical
|
<p><a href="https://physics.stackexchange.com/a/9910/22775">Kostya answered a question</a> that was asking what the diffraction pattern looks like for a hexagonal aperture in front of a lens. He lists an equation which was derived using a Heaviside function to describe the shape of the aperture.</p>
<p>I was wondering if anyone knows what units were used for the display of his Fourier Transform. He mentions he plotted that from -100 to 100 for (what I am assuming) is $\omega_x$ and $\omega_y$. The units seem to be degrees but I wanted to make sure, as sometimes these problems regarding FTs are treated as being unitless. </p>
<p>I have tried other approaches to predict what that diffraction pattern would look like, and so far Kostya's answer agrees by far the best with my own experimental results and with graphical Fourier Transforms of similarly shaped apertures.</p>
|
<p>What I've calculated is just a Fourier transform of the aperture $h(x,y)$:</p>
<p>$$f(\omega_x,\omega_y)=\int dx\, dy\, h(x,y)e^{-i(\omega_xx+\omega_yy)}$$</p>
<p>(And I was plotting $|f|^2$ as a function of $\omega_{x,y}$ each changing from -100 to 100.)<br>
Already here one can see that $\omega_{x,y}$ both have a dimension of inverse length. So it is not an angle.</p>
<p>Now, the actual expression for Fraunhofer diffraction is something like:
$$u(x',y') = \int dx\,dy\,h(x,y)e^{-i\frac{k}{Z}(xx'+yy')}$$
Where $x'$ and $y'$ are coordinates on the screen, where you observe the diffraction pattern, $k$ is a wave-vector $k=\frac{2\pi}{\lambda}$, and $Z$ is the distance to the screen. </p>
<p>As you can see these formulae are very similar. Namely you get the Fourier transform by renaming:
$$\frac{kx'}{Z}\leftrightarrow \omega_x\quad\frac{ky'}{Z}\leftrightarrow \omega_y$$
So, in practice $\omega_{x,y}$ denote a position on the screen -- given $\omega_x$, you get an $x'$ by rescaling:
$$x'=\frac{\lambda Z}{2\pi}\omega_x$$
Finally let us put in some numbers. Let's say that $\lambda=600nm$ and $Z = 2m$. Then a point with $\omega_x = 50cm^{-1}$ will have a coordinate on the screen of $x' \approx 1mm$.</p>
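The rescaling above is easy to check numerically (values from this answer):

```python
import math

# x' = (lambda * Z / (2*pi)) * omega_x, from the rescaling above.
def screen_coordinate(omega_x_per_m, wavelength_m, Z_m):
    return wavelength_m * Z_m / (2.0 * math.pi) * omega_x_per_m

# lambda = 600 nm, Z = 2 m, omega_x = 50 cm^-1 = 5000 m^-1
x_prime = screen_coordinate(5000.0, 600e-9, 2.0)
print(f"x' = {x_prime * 1e3:.2f} mm")   # ~0.95 mm
```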
| 724
|
optics
|
How do you calculate heat flux (Kw/m2) at the focal point of a mirror?
|
https://physics.stackexchange.com/questions/60541/how-do-you-calculate-heat-flux-kw-m2-at-the-focal-point-of-a-mirror
|
<p>Can anyone help me determine the heat flux (kW/m²) at the focal point of a parabolic dish with a diameter of 1.5 meters and a focal length of 60 cm? It's for my senior project.</p>
|
<p>In any practical application, the flux measured is the <em>average</em> over the area of the measuring apparatus.</p>
<p>In the limit of a perfect dish, an infinitely far away source, and no diffraction, then the flux would be infinite at a single point and zero everywhere else in that plane. But all of these conditions are violated, and so the flux is spread out over a region in the focal plane.</p>
<p>The best thing to do is measure it experimentally. If you have $1~\mathrm{kW}$ of power hitting a receiver that is $2~\mathrm{cm} \times 2~\mathrm{cm}$, then you know the average flux falling on the receiver is $2500~\mathrm{kW}/\mathrm{m}^2$.</p>
<p>One could work out the diffraction-limited value, but making a dish that large be that perfect for visible light is <em>very hard</em>, so I doubt your flux will reach this theoretical maximum.</p>
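The averaging in the example above can be written out directly, together with a simple upper bound on what the 1.5 m dish from the question can collect; the 1 kW/m² irradiance is the usual bright-day assumption, not a measured value:

```python
import math

# Average flux on a receiver, as in the example above: 1 kW of collected
# power hitting a 2 cm x 2 cm receiver.
def average_flux_kW_per_m2(power_kW, width_m, height_m):
    return power_kW / (width_m * height_m)

flux = average_flux_kW_per_m2(1.0, 0.02, 0.02)
print(f"average flux = {flux:.0f} kW/m^2")          # 2500 kW/m^2

# Upper bound on collected power for the 1.5 m dish, assuming 1 kW/m^2:
dish_power_kW = 1.0 * math.pi / 4.0 * 1.5**2        # ~1.8 kW before losses
print(f"dish collects at most ~{dish_power_kW:.2f} kW")
```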
| 725
|
optics
|
How does Telescope lens work?
|
https://physics.stackexchange.com/questions/8693/how-does-telescope-lens-work
|
<p>1. How does a telescope work?<br>
2. What factors increase the magnification of the lens?</p>
|
<p>It's not quite clear what you mean by "telescope lens" - do you mean the system of lenses that make up a telescope? If so, there are two basic types. The actual lenses in your telescope are probably more complicated and correct for all kinds of <a href="http://en.wikipedia.org/wiki/Spherical_aberration" rel="nofollow noreferrer">aberrations</a>, but they work like this.</p>
<p>The Keplerian telescope (top one in the diagram) consists of two positive lenses, with different focal distances, with their foci at the same point, in between the lenses. Imagine your eye on the left. Two parallel rays will converge to a point at the focus of the right-hand lens, and since this point is also at the focus of the left-hand lens, they will become parallel again, but inverted. Of course you are usually not looking at parallel rays with your telescope, but picturing it this way is what helped me to understand why we see a magnified image.</p>
<p>You might think at first glance from the diagram that this makes the image smaller; but what it does is take parallel rays traveling at different angles to the optical axis, and make them parallel again on the other side of the telescope, but traveling at a <em>larger</em> angle. This increases the apparent size of the object. (Google Docs isn't very good for drawing detailed diagrams - you could take a look at the more complicated ones on the <a href="http://en.wikipedia.org/wiki/Galilean_telescope#Refracting_telescope_designs" rel="nofollow noreferrer">Wikipedia page</a> on telescopes.)</p>
<p>The Galilean telescope (bottom one in the diagram) consists of a negative and a positive lens, again with their foci at the same point. This time the point is on the outside of the telescope, at where your eye is. The positive lens focuses the parallel rays to that point, and the negative lens takes the converging rays and makes them parallel again. This time, the image is not inverted.</p>
<p><img src="https://i.sstatic.net/NE1EW.png" alt="Illustration of Galilean and Keplerian telescopes"></p>
<p>Then the second part of your question is about magnification. The focal lengths of the lenses are the only factors that influence the magnification: it is equal to</p>
<p>$$M=-\frac{f_2}{f_1}$$</p>
<p>The minus sign seems counter-intuitive, but think about it - we fill in a negative focal length for the negative lens in the Galilean telescope, so the magnification comes out positive. For the Keplerian telescope, the magnification comes out negative - this indicates that the image is magnified, but also inverted.</p>
| 726
|
optics
|
Optical distortions and focus losses calculation
|
https://physics.stackexchange.com/questions/11875/optical-distortions-and-focus-losses-calculation
|
<p>I'm working with a megapixel camera and lens that need to be focused for an OCR application. To measure the focus quality during set-up, I've built a tool that gives the contrast value between two pixels. Put simply, the higher the contrast, the better the focus.
Due to optical distortions, the focus value at the sides of my field of view is not the same as in the center.
My question is: how can I calculate the distortion between center and sides, in %?
For example, fcenter = 62; fleftside = 42.
Is it correct to say Leftdistortion = (1-(42/62))*100 ?</p>
<p>Thanks,</p>
<p>Raphael</p>
|
<p>You cannot figure that out via pure math. It depends 100% on the specific lens design you have: you could have zero difference, you could have a sphere-like focal plane, you could have anything.</p>
<p>So your best bet is to measure the focal-plane differences across the frame, and interpolate.</p>
| 727
|
optics
|
Impact of covering glass on lens performance
|
https://physics.stackexchange.com/questions/11978/impact-of-covering-glass-on-lens-performance
|
<p>I've seen microscope lenses optimized for 0.17mm covering glass. I don't see what needs to be optimized here? As glass does not touch the lens (as in case of oil/water immersion) - it should just affect focal distance without introducing any aberrations. </p>
<p>Is that correct, or covering glass will cause aberrations imbalance, and will require recalculating the lens? (I can probably only think of very slight chromatic aberration, but it's not important in my case)</p>
<p>Same for water immersion: If we cover sample with 1mm of water, but water does not touch the lens, will it cause any additional aberrations and require recalculating the lens?</p>
<p>Same for diffraction-limited laser focusing optics: I've seen some of them are optimized for laser output windows - what is the nature of this optimization?</p>
|
<p>Any time you know there will be something in your optical system, it is generally good practice to include it in your model during the design process. Of course, in many applications these things will not make a difference. The image quality of your point-and-shoot camera will not become unacceptable if somebody chooses to take a picture through their bedroom window, for example. In the cases you cited, however, the extra aberrations produced by the cover glass or laser window can indeed be significant. I'll explain why, starting with some background.</p>
<h2>Aberration Theory</h2>
<p>Wavefront aberrations are usually expressed as a polynomial function <sup>1</sup>:</p>
<p>$$ W = \sum_{i,j,k} W_{i j k}\, h^i \rho^j \cos^k\phi $$</p>
<p>where $h$ is the object height (or angle, in the case of an object at infinity), and $(\rho, \phi)$ are polar coordinates in the pupil, normalized such that $h=1$ is the maximum object height and $\rho = 1$ is the edge of the pupil.</p>
<p>For example, $W_{040}$ is the coefficient of spherical aberration. It is often called "third-order" spherical, because lens designers are concerned with the ray angle errors, which are given by the first derivative of the wavefront error; this also distinguishes it from higher order aberrations, like 5th order spherical, $W_{060}$.</p>
<p>There are a number of ways that aberration coefficients can be computed. Continuing with spherical aberration as our example, the contribution from a single surface can be expressed:</p>
<p>$$ W_{040}= -\frac{1}{8} A^2 Y_a \Delta\left(\frac{u_a}{n}\right) \rho^4 $$</p>
<p>where $A \equiv n i_a$, $n$ is the refractive index, $i_a$ is the angle of incidence of the marginal ray (the "a-ray") on the surface, $Y_a$ is the marginal ray height on the surface, $u_a$ is the (paraxial) marginal ray angle, and $\Delta(x) \equiv x' - x$, where primed variables represent quantities after refraction at the surface, and un-primed variables are quantities immediately before refraction.<sup>2</sup></p>
<p>There are a number of techniques for computing aberration coefficients for complex optical systems, such as "Seidel Sums" or the more modern and efficient technique of "G-sums,"<sup>3</sup> but the important point here with respect to your question is this: <strong>the aberrations due to a surface depend just as much on the characteristics of the rays at the surface as they do on the shape of the surface.</strong></p>
<hr>
<p>So, you can see that the aberrations introduced by a flat plate can be very significant indeed. In the formalism I have presented here, a high NA microscope system would have very large values for $u_a$, and the design must absolutely include these in balancing aberrations if very high performance is to be achieved.</p>
<p>This should also help make clear why liquid immersion is helpful in getting to very high NA without aberrations becoming a problem. <strong>However</strong>, in your question you also ask about the case of a water droplet over the object, without being in contact with the microscope objective as in a true liquid immersion objective. In this case you must keep in mind that the curved surface of the droplet would be a refractive surface and <strong>must</strong> be included in the optical calculations if the resulting lens is to function at all!</p>
<p>One last note, on the brief discussion in comments about the reason for cover glass. Cover glass serves a variety of purposes, but isn't really intended as a beneficial component of the microscope imaging system. It is primarily used to hold the sample still; protect it from air; and most importantly, to hold it <em>flat</em>, because a high NA microscope system will have a <em>very</em> shallow depth of field. It would be quite inconvenient for the microscope user to have micron-scale surface variations in his sample produce focus variations between adjacent features in the image. Cover glass can be accommodated by the optical design, but if it weren't for these practical considerations the lens designer would likely not call for it.</p>
<hr>
<p>Footnotes:</p>
<p><sup>1</sup>: These are Seidel aberrations; more modern approaches use an orthogonal set of polynomials as their basis functions, such as Zernike polynomials, which are more appropriate for numerical techniques. Seidel aberrations are left over from the time when aberrations had to be calculated by hand, and are still useful and well understood by modern lens designers.</p>
<p><sup>2</sup>: The <em>marginal ray</em> is the ray from an object point on the optical axis, which goes through the edge of the pupil/aperture stop. Also important is the <em>chief ray</em>, which is a ray from the edge of the field of view through the center of the pupil/aperture stop. These rays are used heavily in the geometrical analysis of optical systems. For more on this, see "Modern Optical Engineering" by W.J. Smith.</p>
<p><sup>3</sup>: Again, these are covered by Smith.</p>
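The single-surface formula above can be turned into a few lines of code. The paraxial ray trace and the example numbers (air-to-glass surface, R = 50 mm, marginal ray height 10 mm, object at infinity) are illustrative assumptions of mine, using one common sign convention:

```python
# Single-surface spherical aberration, per the formula above:
#   W040 = -(1/8) * A^2 * Y_a * Delta(u_a / n),  with  A = n * i_a.
def w040_surface(n, n_prime, c, Y, u):
    """Wavefront spherical aberration (same length units as Y) at rho = 1.

    n, n_prime: indices before/after the surface; c: surface curvature;
    Y: marginal ray height; u: marginal ray angle before refraction.
    """
    i = u + Y * c                                          # paraxial incidence angle
    u_prime = (n * u - Y * c * (n_prime - n)) / n_prime    # paraxial refraction equation
    A = n * i
    delta = u_prime / n_prime - u / n
    return -0.125 * A**2 * Y * delta

# Assumed example: air-to-glass, R = 50 mm, ray height 10 mm, object at infinity.
W040 = w040_surface(n=1.0, n_prime=1.5, c=1.0 / 50.0, Y=10.0, u=0.0)
print(f"W040 = {W040 * 1000:.2f} um")   # a few microns of wavefront error
```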
| 728
|
optics
|
Magnification multiplication using telescope arrays?
|
https://physics.stackexchange.com/questions/12016/magnification-multiplication-using-telescope-arrays
|
<p>If we have an array of telescopes attached one after another, would the resultant magnification be multiplied?Also would such a contraption be feasible to make telescopes with amazing magnification?</p>
|
<p>If you head down to your local telescope shop and check out their inventory, you might see two telescopes. One is a refractive telescope with a 2 inch aperture, and it has its magnification listed as 1 million. Another is a reflective telescope with a 6 inch aperture and a magnification of 200. Which will allow you to see planets better?</p>
<p>The second one.</p>
<p>Telescopes do two things: collect light and spread it out (magnify it). How much light they collect is limited by the size of the aperture. The magnification is determined by the optics. Note that since magnification spreads light out, it makes the light dimmer. The reason things usually don't look dim in a telescope is because the light has been concentrated from an aperture that's several times larger than the pupil of a human eye.</p>
<p>Note that because of diffraction effects, the amount of detail you can resolve is also limited by the size of the aperture (sometimes; other things can obscure detail more than this, but this is a big one).</p>
<p>To answer your question, you would indeed magnify the images more and more with each telescope, but the reason telescopes are useful is not the magnification so much as the light collection, and that was accomplished by the first telescope. You would be better off spreading the telescopes out to get the full light collection power of all of their apertures, using cameras instead of eyes, and combining the pictures in a computer. This is what some actual observatories do.</p>
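The light-collection point can be sketched in two lines: the gain over the naked eye scales with the ratio of aperture areas, (D_scope / D_pupil)². The 7 mm dark-adapted pupil is an assumed typical value, not from the answer:

```python
# Light-gathering gain of a telescope over the dark-adapted eye:
# ratio of aperture areas, (D_scope / D_pupil)^2.
def light_gain(d_scope_mm, d_pupil_mm=7.0):   # 7 mm pupil: assumed typical value
    return (d_scope_mm / d_pupil_mm) ** 2

print(f"6-inch (152 mm) scope: ~{light_gain(152):.0f}x more light than the eye")
```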
| 729
|
optics
|
Projecting image without manual focussing
|
https://physics.stackexchange.com/questions/12563/projecting-image-without-manual-focussing
|
<p>I was wondering if it was possible to project a magnified image on a wall without the need of focusing, so just by dimensioning the lenses right.</p>
<p>I know I have to use the principle of the Maxwellian view for the illumination of the slide. However, there are a lot of parameters left, and I can design a system that requires only a very small (say 1 mm) displacement of the projection lens when projecting at 0.5 m and, say, 2.5 m. However, I was wondering if I am overlooking something here. I've seen that it is possible, but I was also wondering if some special kind of lens is needed here.</p>
|
<blockquote>
<p>I was wondering if it was possible to project a magnified image on a
wall without the need of focusing, so just by dimensioning the lenses
right.</p>
</blockquote>
<p>Of course you can!</p>
<p>The real problem is that you do not know the exact position of the slide
in its holder.
A second problem is that it is cheaper to use a standard commercial
projector with manual focus than a custom-made
projector without it.</p>
<p>I do not know about the "principle of the Maxwellian view", but I know why you
should not rely on the fixed-focus method often used in cheap cameras:
it would waste a lot of the light shone on the slide.</p>
| 730
|
optics
|
Diffraction pattern threshhold
|
https://physics.stackexchange.com/questions/12359/diffraction-pattern-threshhold
|
<p>What is the characteristic bump height of a periodic grating below which diffraction effects cease to be visible (let's assume a peak-to-valley intensity contrast of 20% as the minimum detectable by the human eye)? Is it significantly less than lambda?</p>
|
<p>Assume a sinusoidal grating -- <code>sin(x)</code> is the simplest possible periodic function. Direct the axis <code>x</code> horizontally and <code>y</code> vertically, and let's calculate the diffraction at the point <code>(x,y)</code>. According to Fraunhofer diffraction we must first calculate the distance from the observation point <code>(x,y)</code> to the source of the secondary wave <code>(x_1, h*sin(d*x_1))</code>, which is <code>r=sqrt((y-h*sin(d*x_1))^2+(x-x_1)^2)</code>. Here <code>h</code> is the height of the ripple, and <code>d</code> its frequency. This r is plugged into the amplitude integral <code>Integrate(i*exp(-2*pi*i*r/lambda)/(r*lambda),x1=-R..R)</code>. Mathematica expression:</p>
<pre><code>Integrate[I*Exp[-2*Pi*I*Sqrt[(y - h*Sin[d*x1])^2 + (x - x1)^2]/lambda]/(Sqrt[(y - h*Sin[d*x1])^2 + (x - x1)^2]*lambda), {x1, -R, R}]
</code></pre>
<p>Maple:</p>
<pre><code>Int(i*exp(-2*pi*i*sqrt((y-h*sin(d*x1))^2+(x-x1)^2)/lambda)/(sqrt((y-h*sin(d*x1))^2+(x-x1)^2)*lambda),x1=-R..R);
</code></pre>
<p>Neither succeeded in solving it analytically, so one has to resort to numerics. Therefore, I assigned the following numeric values:</p>
<p>The radius of the flat mirror:</p>
<pre><code>R=20 mm
</code></pre>
<p>Wavelength:</p>
<pre><code>lambda=0.0004 mm (=400 nm)
</code></pre>
<p>Defect density/frequency:</p>
<pre><code>d=10 (10 ripples/mm -- with higher values Wolfram alpha times out)
</code></pre>
<p>Defect height</p>
<pre><code>h=0.0001 mm (1/1 wave)
</code></pre>
<p>Location</p>
<pre><code>y=100 mm (10 cm above the mirror).
</code></pre>
<p>With all the assignments the only free variable remaining is x, so one should be able to plot an intensity graph. Unfortunately, Wolfram Alpha refuses to understand what <code>Plot[Integral[]]</code> is and suggests some stock market graphs instead. I had to calculate pointwise. Here is an expression which calculates the amplitude on the axis (<code>x=0</code>):</p>
<pre><code>Integral[i*exp(-2*pi*i*sqrt((100-0.0001*sin(10*x1))^2+(0.0-x1)^2)/0.0004)/(sqrt((100-0.0001*sin(10*x1))^2+(0.0-x1)^2)*0.0004),x1=-20..20]
</code></pre>
<p>which Wolfram Alpha evaluates to <code>0.62+3.81i</code>.</p>
<p>The characteristic angle of the periodic grating is <code>lambda*d</code>, so the linear dimension of the diffraction pattern is <code>lambda*d*y</code>, which in our case is of order <code>1</code>. Therefore, we can see the diffraction pattern by probing only three points: <code>x=0.0</code>, <code>x=0.5</code>, and <code>x=1.0</code>. Here are the complex amplitudes (the intensity is their squared magnitude):</p>
<pre><code>x=0.0 -> 0.62+3.81*i
x=0.5 -> 7.76+3.0*i
x=1.0 -> 3.51+1.8*i
</code></pre>
<p>In other words, the diffraction pattern is quite noticeable when the defect size is comparable to lambda. To double-check: what if we shrink the grating height tenfold? There:</p>
<pre><code>x=0.0 -> 3.4+3.5*i
x=0.5 -> 4.1+3.1*i
x=1.0 -> 3.88+3.33*i
</code></pre>
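For readers without a CAS at hand, the same amplitude integral can be evaluated with a plain Riemann sum. This is a sketch assuming NumPy; the sample count N is my choice, sized so that the fastest phase oscillation near the mirror edges is still resolved:

```python
import numpy as np

# Riemann-sum evaluation of the secondary-wave amplitude integral above.
def amplitude(x, y, h, d, lam, R, N=400001):
    x1 = np.linspace(-R, R, N)
    r = np.sqrt((y - h * np.sin(d * x1))**2 + (x - x1)**2)
    integrand = 1j * np.exp(-2j * np.pi * r / lam) / (r * lam)
    return integrand.sum() * (x1[1] - x1[0])

# Parameters from the answer (all lengths in mm)
R, lam, d, h, y = 20.0, 0.0004, 10.0, 0.0001, 100.0
for x in (0.0, 0.5, 1.0):
    A = amplitude(x, y, h, d, lam, R)
    print(f"x = {x}: |A|^2 = {abs(A)**2:.2f}")
```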
| 731
|
optics
|
Refractive index liquids: Why hard to buy?
|
https://physics.stackexchange.com/questions/12466/refractive-index-liquids-why-hard-to-buy
|
<p>Does anyone know of refractive-index liquid suppliers? I've found Cargille Labs (whose customer service is terrible so far, but the liquids may actually be OK), but nothing else comparable. I'd like a set of liquids with refractive indices in the range 1.3...1.4. Are they so hard to produce that almost nobody makes them, or am I missing something?</p>
| 732
|
|
optics
|
Is the Sun-set and the Sun-rise Symmetrical for the Observer?
|
https://physics.stackexchange.com/questions/15324/is-the-sun-set-and-the-sun-rise-symmetrical-for-the-observer
|
<p>Is the appearance of sunrise and sunset, in terms of Rayleigh scattering, the visual spectrum, and other factors, completely similar and symmetric? I mean, could one tell which is which from a picture of the sky?</p>
|
<p>The average air temperature is always lower at sunrise, which changes the atmospheric refraction infinitesimally. On the moon, you would only have the tiny difference from the doppler shift due to your motion relative to the sun, so that sunrise would be a teeny-weeny bit bluer than sunset.</p>
| 733
|
optics
|
Which device amplifies the optical signal in fiber optics?
|
https://physics.stackexchange.com/questions/15388/which-device-amplifies-the-optical-signal-in-fiber-optics
|
<p>My teacher asked me a question: how is the optical signal, or information, transferred in fiber-optic communication?</p>
<p>I explained it and used the term "optical amplifier". The teacher then asked what an optical amplifier is; I said it is something that amplifies an optical signal. Then came the follow-up: which device amplifies the optical signal in fiber-optic communication?</p>
<p>I was stumped at that moment and didn't have the answer, so I googled it and read about doped-fiber amplifiers (DFAs), but I didn't understand it. Can you suggest a way to understand it, such as a website, or explain it yourself?</p>
|
<p>An optical amplifier works just like a laser, but with a few small differences. In a laser, light is created, usually due to spontaneous emission from the gain medium. Light then circulates in an optical cavity while being further amplified in the gain medium. A single photon may traverse the laser cavity many times before being released.</p>
<p>In contrast, an amplifier does not (ideally) produce any laser output on its own. An amplifier consists of a gain medium, just like a laser, but has no optical cavity in which light circulates. Instead, light enters one end of the gain medium, traverses it only a few times (in the case of a fiber amplifier, only once) and then immediately exits.</p>
<p>In the case of telecommunication equipment, the input light is the data signal that needs to be amplified by the relay station. In many other applications, coupling a low power seed laser with an optical amplifier can be a convenient and efficient way to get a high power laser beam. A lot of laser cutting and machining is done this way, for example. In this case engineers may call the laser system a <em>Master Oscillator/Power Amplifier</em> or MOPA system.</p>
| 734
|
optics
|
What sets the resolution on analog film?
|
https://physics.stackexchange.com/questions/17245/what-sets-the-resolution-on-analog-film
|
<p>When taking a picture with old fashioned film what sets the resolution of the picture? Is it the wavelength, or the chemical makeup of the film?</p>
|
<p>Both the diffraction limit and the grain size could affect the resolution limit of analog pictures. Let's see how they compare.</p>
<p><strong>Diffraction limit</strong>
The Abbe diffraction limit states that the size of the spot is $d=\lambda f\#$. The $f$-number varies widely across cameras and illumination settings. If, for example, you use $f/10$, this gives about 5.5 μm for green light. See Edgar Bonet's comment below for more details on different lenses.</p>
<p><strong>Grain size</strong>
From <a href="http://www.tmax100.com/photo/pdf/film.pdf" rel="nofollow">this document</a>, it seems that the mean grain size can go from 500 nm up to 30 microns. It depends on the film and the amount of development. More details can be found in <a href="http://en.wikipedia.org/wiki/Photographic_film" rel="nofollow">this article</a></p>
<p>In short, both the diffraction circle and the grain size vary from a fraction of a micron to a few microns. The resolution will be determined by the larger of the two.</p>
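As a rough numeric sketch of this comparison (the 550 nm wavelength, f/10 aperture, and 1 μm grain are assumed example values, not measurements):

```python
# Compare the two resolution limits discussed above.
wavelength = 550e-9   # m, green light (assumed example value)
f_number = 10         # lens aperture setting (assumed example value)

# Abbe-type estimate of the diffraction-limited spot: d = lambda * f#
diffraction_spot = wavelength * f_number   # ~5.5 um

grain_size = 1e-6     # m, a fine-grained film (assumed example value)

# The coarser of the two features effectively sets the resolution.
resolution_limit = max(diffraction_spot, grain_size)
print(f"diffraction spot ~ {diffraction_spot * 1e6:.1f} um")
```

With these numbers the diffraction spot dominates; with a coarse-grained film at 30 μm the grain would dominate instead.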
<p>You can also check out <a href="http://en.wikipedia.org/wiki/Science_of_photography" rel="nofollow">http://en.wikipedia.org/wiki/Science_of_photography</a>. There's some further info on this topic.</p>
<p>EDITS: I made several edits thanks to the commenters.</p>
| 735
|
optics
|
Does a perfect mirror behave the same as a blackbody radiator?
|
https://physics.stackexchange.com/questions/19668/does-a-perfect-mirror-behave-the-same-as-a-blackbody-radiator
|
<p>If I put a perfect mirror (i.e. one that reflects with no attenuation) next to a blackbody radiator, its spectrum should be the same as the blackbody radiator's.</p>
<p>Looking only at the spectrum - is there any difference between a blackbody radiator and a perfect mirror? </p>
<p>For instance, suppose the mirror is accelerating as in the dynamic Casimir effect - does the spectrum change in the same way as for the blackbody radiator?</p>
|
<p>A mirror at rest (or moving at constant velocity) emits no thermal radiation whatsoever.</p>
<p>Detailed balance, i.e. 2nd law of thermodynamics, requires a relationship between absorbing incoming radiation and turning it into heat, versus emitting thermal radiation. A perfect mirror at rest does not absorb any incoming radiation, therefore it is a "whitebody", not a blackbody, and emits no thermal radiation.</p>
<p>By special relativity, a mirror moving at constant velocity should not emit radiation either.</p>
<p>An accelerating mirror, on the other hand, does emit radiation, at least according to the <a href="http://arxiv.org/abs/gr-qc/0110010" rel="nofollow">paper linked by @Mark Beadles</a>. Don't ask me why, or what spectrum, I don't know!</p>
| 736
|
optics
|
How it's possible to enhance the depth effect of 3d pictures without increasing the cameras distance?
|
https://physics.stackexchange.com/questions/20088/how-its-possible-to-enhance-the-depth-effect-of-3d-pictures-without-increasing
|
<p>I always thought that for a 3D picture, which is in fact two pictures displayed one for each eye, the farther apart the cameras are, the more "depth" you will see. And I think that's a fact.</p>
<p>So, for example, if you take a picture of a house with the cameras, let's say, 10 m apart, you'll have the impression that the house is a miniature (and you could also take the picture using a very large aperture to enhance the effect).</p>
<p>However, today a friend showed me the 3DS, a portable video game console that can also take 3D pictures, and the first thing I noticed is that the distance between its cameras is minimal. I thought it would only be capable of taking close photos of small objects, which would then look larger in the photo, so you'd get the same kind of impression as with an eagle, for example.</p>
<p>To my surprise, however, that's not what happens: the device can take pictures of even very far (and big) objects and their scale looks right, or even smaller by adjusting some options. You get the impression of an even larger separation between the captures than between the human eyes.</p>
<p>How can this be possible? I think it's "physically impossible". I mean, you could map the picture onto a 3D mesh and enhance the effect, but I think we would notice artifacts in that case. For example, take a box: there is a side on the right that the left camera can't see. At human eye separation the right camera would see that side, but if you move the right camera closer to the left one, it won't. So this is visual information that simply can't be captured when the distance between the cameras is small, and I can't see how it could be simulated.</p>
<p>I couldn't make extensive tests, but I want to know how that little device tricked my eyes so impressively. Because obviously it's a trick, I think...</p>
<p>Any clues? Thank you.</p>
|
<p>For the reasons that you already mentioned in the question, it would not be possible to modify stereoscopic depth without adding artifacts.</p>
<p>So instead I would question whether the depth is actually increased. It probably isn't.</p>
<p>We have gotten used to watching monoscopic (regular) photographs with both our eyes. Theoretically the perceived scale of the scenes in those photographs should be huge. But we have gotten used to it. </p>
<p>So the depth effect of a stereoscopic image captured with a camera separation anywhere between zero (monoscopic) and regular human eye distance will be perceived as natural. Even a small amount of depth is already obvious to the brain. </p>
<p>If you increased the camera distance beyond human eye distance, your brain would receive an unusual stimulus and hence might conclude that the scene is a miniature. </p>
<p>Also note that the viewing angle (field of view) of a photograph or mobile screen is also significantly smaller than the captured angle. Not that it would compensate for reduced depth, but just another example of our tolerance in accepting the illusion of a photograph. </p>
<p>If you are shooting images for a more immersive viewing experience (eg. 3D cinema), then the tolerances are probably much smaller. </p>
| 737
|
optics
|
How could I translate a field of view value into a magnification value?
|
https://physics.stackexchange.com/questions/26046/how-could-i-translate-a-field-of-view-value-into-a-magnification-value
|
<p>When I zoom in with <a href="http://www.stellarium.org/" rel="nofollow">Stellarium</a>, it indicates a field of view (FOV) value in degrees, but most binoculars and telescopes are advertised with a value like "nX magnification power."</p>
<p>How could I translate this value so I get an idea of what I will see with a telescope or binocular?</p>
<p>For example, if I got a 30X telescope, how much should I zoom to get a similar view?</p>
|
<p>Different telescope and binocular eyepieces have different fields of view, so that there is no direct relationship between magnification and field of view.</p>
<p>Eyepieces range in apparent field of view from 30° to 110°, typically being in the range of 50° to 70°. For any given eyepiece, you can calculate the actual field of view by dividing the apparent field of view by the magnification. Thus a 30x eyepiece with a 60° apparent field of view will show you an actual field of view of 60°÷ 30x = 2°.</p>
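The division above can be captured in a one-liner (a sketch; the 60° apparent field is just the typical eyepiece value used in the example):

```python
# True (actual) field of view = apparent field of view / magnification.
def true_field_of_view(apparent_fov_deg, magnification):
    return apparent_fov_deg / magnification

# The 30x eyepiece with a 60-degree apparent field from the example:
fov = true_field_of_view(60, 30)
print(fov)  # 2.0 degrees on the sky
```

To match that in Stellarium, you would zoom until the displayed FOV reads about 2°.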
| 738
|
optics
|
How much detail can telescopes actually provide?
|
https://physics.stackexchange.com/questions/26172/how-much-detail-can-telescopes-actually-provide
|
<p>For example, could the numbers / letters on a postage stamp in a randomly specified location be clearly visible from space?</p>
<p>This is to settle a discussion with a friend that piqued my curiosity. </p>
|
<p>Two major issues here. Well, maybe three...</p>
<ul>
<li>Optical limits of the instrument. Think <a href="http://en.wikipedia.org/wiki/Angular_resolution">Rayleigh Criterion</a>, but beware of the existence of interferometric methods (hard to do in the optical for now, but...). It's going to take a <em>big</em> lens to image a postage stamp even from low Earth orbit, and you might expect to find a spy satellite a bit higher up than that.</li>
<li>Stuff in the way. The "twinkle" you see in stars is related to atmospheric interference. There are methods to compensate. Trying to look through clouds in the optical is a lost cause.</li>
<li>Is the platform stable? If your optics are not well isolated from any vibration of your platform the camera will jump around.</li>
</ul>
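A back-of-the-envelope Rayleigh-criterion estimate shows why the lens would have to be absurdly big (the 550 nm light, 400 km orbit, and 1 mm lettering are assumed example values):

```python
import math

# Rayleigh criterion: smallest resolvable angle theta = 1.22 * lambda / D.
wavelength = 550e-9   # m, visible light (assumed example value)
altitude = 400e3      # m, roughly low Earth orbit (assumed example value)
detail = 1e-3         # m, size of a character on a stamp (assumed)

# Angular size of the detail as seen from orbit
theta = detail / altitude   # radians

# Aperture diameter needed to resolve that angle
D = 1.22 * wavelength / theta
print(f"required aperture ~ {D:.0f} m")  # hundreds of meters
```

No practical satellite carries a mirror that size, which is one reason reading stamps from orbit stays in the realm of fiction.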
| 739
|
optics
|
Is periscope window mathematically possible?
|
https://physics.stackexchange.com/questions/413909/is-periscope-window-mathematically-possible
|
<p>I was always wondering, why don't we have periscope windows?
What I imagine is a "light intake" on a roof, from which the light is concentrated into a straight long <strong>narrow</strong> tube that takes it to an underground flat where the light is dispersed into a fake window.</p>
<p>Is such design mathematically possible? What would determine the viewing angles of the fake window?</p>
<p>Maybe it could use Fresnel lenses to save on cost and weight. We would probably not get a clear picture of the outside world, but at least a lot of natural daylight.</p>
|
<p>It depends on how narrow this tube is. Without moving optics that can track the sun, the tube must be as large as the collection area, and no tricks with mirrors or lenses can do better. If sun-tracking optics are allowed you can do much better, provided it's not cloudy.</p>
<p>Let's start with a practical example: <a href="http://www.veluxusa.com/products/sun-tunnels" rel="nofollow noreferrer">sun tunnels</a> which transmit light from a skylight into a fixture below:</p>
<p><a href="https://i.sstatic.net/RZaLz.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RZaLz.jpg" alt="enter image description here"></a></p>
<p>The light is diffuse, so you get natural light, but no "picture".</p>
<p>The <a href="https://www.veluxusa.com/professional/tools/architects/skylight-testing-data" rel="nofollow noreferrer">technical data</a> gives a <a href="http://www.efficientwindows.org/vt.php" rel="nofollow noreferrer">visual transmittance</a> of 0.36 for the most efficient models, meaning 36% of the light hitting the collector on the roof ends up coming out the fixture at the bottom. That may not seem like much, but since our visual perception is logarithmic it's not nearly as bad as it seems.</p>
<p>As an example, the 14" model has a collection area of approximately 0.1 square meters. The illuminance outside on a sunny day is around 150,000 lux. That means the luminous flux at the collector is:</p>
<p>$$ 150000\:\mathrm{lx} \cdot 0.1 \:\mathrm{m^2} = 15000 \:\mathrm{lm} $$</p>
<p>15000 lumens. The visual transmittance coefficient of 0.36 means the luminous flux of the fixture, after the light lost in the optical system between the collector and fixture is:</p>
<p>$$ 15000 \:\mathrm{lm} \cdot 0.36 = 5400 \:\mathrm{lm} $$</p>
<p>That's a lot of light. For comparison, a typical "60 watt equivalent" LED is only about 800 lumens.</p>
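The arithmetic above, spelled out as a sketch (the lux, area, and transmittance numbers are the ones quoted in the text):

```python
# Luminous flux delivered by the sun tunnel, step by step.
illuminance = 150_000        # lux, sunny day outdoors
collector_area = 0.1         # m^2, the ~14" model
visual_transmittance = 0.36  # from the manufacturer's technical data

flux_collected = illuminance * collector_area           # 15000 lm
flux_delivered = flux_collected * visual_transmittance  # 5400 lm

led_60w_equiv = 800  # lm, a typical "60 watt equivalent" LED bulb
print(flux_delivered / led_60w_equiv)  # roughly 6.75 bulbs' worth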
<p>Could we improve on the performance of this sun tunnel with some optical system? Perhaps we'd like to collect more light from a larger area, while keeping the tube small. Can some arrangement of mirrors or lenses help?</p>
<p>There's an optical law called the conservation of étendue which is relevant. The best explanation I've found is in an <a href="http://what-if.xkcd.com/145/" rel="nofollow noreferrer">XKCD what-if on starting a fire with moonlight</a>:</p>
<blockquote>
<p>Maybe you can't overlay light rays, but can't you, you know, sort of smoosh them closer together, so you can fit more of them side-by-side? Then you could gather lots of smooshed beams and aim them at a target from slightly different angles.</p>
<p><a href="https://i.sstatic.net/1fIQv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1fIQv.png" alt="enter image description here"></a></p>
<p>Nope, you can't do this.</p>
<p>It turns out that any optical system follows a law called <em>conservation of étendue</em>. This law says that if you have light coming into a system from a bunch of different angles and over a large "input" area, then the input area times the input angle equals the output area times the output angle. If your light is concentrated to a smaller output area, then it must be "spread out" over a larger output angle.</p>
<p><a href="https://i.sstatic.net/1kAtm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1kAtm.png" alt="enter image description here"></a></p>
<p>In other words, you can't smoosh light beams together without also making them less parallel, which means you can't aim them at a faraway spot.</p>
</blockquote>
<p>So your first attempt at improving the sun tunnel might be to put a big lens at the top which focuses light on the smaller entrance to the tunnel.</p>
<p>If the sky is equally bright all over (it's cloudy), you have a problem: you're already collecting light from a 180 degree hemisphere. If you attempt to focus light down on a smaller point it must be spread out even more, beyond 180 degrees. But that means some of the light is turned around, going back at the sky, which doesn't help your objective of lighting the room below. So in this case, you simply can't cram any more light in the tube.</p>
<p>If it's sunny, maybe the optics should focus the brightest part of the sky, the sun's disk? This is a good idea, because now the input angle is only about 0.53 degrees. This means you have some margin to focus the incoming light without spreading it out beyond 180 degrees. But while physically possible, I'm not sure it's economically feasible since it would require expensive sun-tracking optics, and on a cloudy day it wouldn't work any better than the cheap variety with no optics at all.</p>
| 740
|
optics
|
Is a converging gaussian beam just another gaussian beam with a much larger Rayleigh length?
|
https://physics.stackexchange.com/questions/414673/is-a-converging-gaussian-beam-just-another-gaussian-beam-with-a-much-larger-rayl
|
<p>This link seems to imply it, but I'm confused. Also, I don't intuitively see how the general shape of the Gaussian remains the same. I'd expect it to be some kind of superposition between a normal Gaussian and converging lines, like an $e^{-x} \cdot \sin$ function. <a href="https://www.ophiropt.com/blog/laser-measurement/focusing-a-gaussian-laser-beamwhich-formula-to-use/" rel="nofollow noreferrer">https://www.ophiropt.com/blog/laser-measurement/focusing-a-gaussian-laser-beamwhich-formula-to-use/</a></p>
|
<p>Yes, all (theoretical) Gaussian beams are the same, no matter whether they are collimated or converging. There is always a waist (though it may not be in the portion of the beam that actually exists.)</p>
<p>A sharply converging Gaussian beam has a short Rayleigh length.</p>
<p>There is no such thing as a perfectly collimated Gaussian beam, only a beam with a very long Rayleigh length, since in reality the Rayleigh length cannot be infinite.</p>
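A minimal sketch of how the Rayleigh length $z_R = \pi w_0^2 / \lambda$ scales with the waist size (the HeNe wavelength and the two waist radii are assumed example values):

```python
import math

# Rayleigh length of a Gaussian beam: z_R = pi * w0^2 / lambda.
wavelength = 632.8e-9  # m, HeNe laser (assumed example value)

def rayleigh_length(waist_radius):
    return math.pi * waist_radius**2 / wavelength

# Sharply focused beam: tiny waist, short Rayleigh length (~0.5 mm)
z_tight = rayleigh_length(10e-6)
# "Collimated" beam: same formula, 1 mm waist, Rayleigh length ~5 m
z_loose = rayleigh_length(1e-3)

print(z_tight, z_loose)
```

Both cases are described by one and the same Gaussian-beam formula; only the waist, and hence the Rayleigh length, differs.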
| 741
|
optics
|
how does PBS(polarized beam splitter) work?
|
https://physics.stackexchange.com/questions/415758/how-does-pbspolarized-beam-splitter-work
|
<p>I know that beam splitters work with evanescent waves, and by adjusting the thickness between the two prisms you can control R and T to be 50%/50%.
But how do polarizing beam splitters work?</p>
|
<p>There are several different types of polarizing beam splitters. For example, for incident unpolarized light, a stack of plates tilted at the Brewster angle would do the trick. The main pillar of this theory is that s-polarized and p-polarized light have different reflectivities at different incidence angles. The Brewster angle is the angle at which incoming unpolarized light reflects only s-polarized light and transmits the rest. By using a number of these plates, the incoming ray is sequentially depleted of s-polarized light, so p-polarized light is what comes out the other end, while in the reflected direction you have nothing but s-polarized light.</p>
<p>EDIT: Thought I'd clarify that this is only ONE method of performing polarized beam splitting.</p>
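As a quick numeric illustration of the Brewster-angle idea (a sketch; air-to-glass with n ≈ 1.5 is an assumed example):

```python
import math

# Brewster's angle: tan(theta_B) = n2 / n1.
# At this incidence angle the reflected beam is purely s-polarized.
def brewster_angle_deg(n1, n2):
    return math.degrees(math.atan2(n2, n1))

# Assumed example: light going from air (n1 = 1.0) into glass (n2 = 1.5)
theta_b = brewster_angle_deg(1.0, 1.5)
print(f"{theta_b:.1f} deg")  # ~56.3 deg
```

Tilting each plate of the stack to this angle is what lets the reflected ray carry only s-polarization.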
| 742
|
optics
|
Evaluation of Fresnel coefficient for $\theta_{i}>\theta_{cr}$
|
https://physics.stackexchange.com/questions/450992/evaluation-of-fresnel-coefficient-for-theta-i-theta-cr
|
<p>For a TE wave the Fresnel coefficient <span class="math-container">$r$</span> is:
<span class="math-container">$$r=\frac{\cos(\theta_{i})-\sqrt{n^2-\sin^2(\theta_{i})}}{\cos(\theta_{i})+\sqrt{n^2-\sin^2(\theta_{i})}}$$</span>
if <span class="math-container">$\theta_{i}>\theta_{cr} \rightarrow (n^2-\sin^2(\theta_{i}))<0 \rightarrow $</span>
<span class="math-container">$$\sqrt{n^2-\sin^2(\theta_{i})}=\pm i\sqrt{-n^2+\sin^2(\theta_{i})}$$</span>
So I have four possibilities for <span class="math-container">$r$</span>, according to the signs you choose. You can verify that two of them give the real number 1, while the other two give complex conjugate numbers of modulus 1; that is, <span class="math-container">$r=[1;e^{ia};e^{-ia}]$</span>. So in every case the coefficient <span class="math-container">$R=|r|^2$</span> is 1, but the reflected wave can have 3 different phases, which doesn't make sense. Does someone know where I am wrong?</p>
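For what it's worth, a quick numeric check (a sketch; the glass-to-air ratio n = 1/1.5 and 60° incidence are assumed example values) confirms that any single, consistently applied branch of the square root gives |r| = 1 beyond the critical angle:

```python
import cmath
import math

# TE Fresnel coefficient beyond the critical angle.
n = 1.0 / 1.5                 # glass-to-air (assumed example)
theta_i = math.radians(60)    # > critical angle (~41.8 deg here)

# Use ONE branch of sqrt(n^2 - sin^2) in both numerator and denominator.
root = cmath.sqrt(n**2 - math.sin(theta_i)**2)  # purely imaginary here
r = (math.cos(theta_i) - root) / (math.cos(theta_i) + root)

print(abs(r))  # ~1.0: total internal reflection
```

Mixing branches between numerator and denominator is what produces the spurious extra solutions.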
| 743
|
|
optics
|
How do Raman pulses, transitions and diffraction processes work?
|
https://physics.stackexchange.com/questions/508675/how-do-raman-pulses-transitions-and-diffraction-processes-work
|
<p>I want to know how Raman pulses work. Also, can someone explain to me Raman transitions and Raman diffraction processes?</p>
<p>Additionally, how does phase get accumulated in the free evolution of any hyper-fine states in an atom? </p>
|
<p>Well, conventional lasers rely on electronic transitions for amplification of light. Raman lasers, on the other hand, make use of Raman scattering for light amplification. Raman lasers are optically pumped systems; this pumping does not produce a population inversion between electronic states as in electronically stimulated lasers. Also, take a look at Stokes photons. It might help. </p>
| 744
|
optics
|
How to derive equation (upper bound for the period to-wavelength, grating)
|
https://physics.stackexchange.com/questions/513667/how-to-derive-equation-upper-bound-for-the-period-to-wavelength-grating
|
<p>I am currently reading the paper entitled "Antireflection behavior of silicon subwavelength periodic structures for visible light" [P. Lalanne & G Michael Morris, <a href="https://doi.org/10.1088/0957-4484/8/2/002" rel="nofollow noreferrer"><em>Nanotechnology</em> <strong>8</strong>, 53 (1997)</a>].</p>
<p>On the first page, they write:</p>
<blockquote>
<p>To suppress Fresnel reflections for randomly polarized light, we
consider 2D SWS '(Subwavelength structured)' surfaces. The free parameters of the design are the
period <span class="math-container">$\Lambda$</span>, the grating profile and the depth. The period is related to
the smallest wavelength <span class="math-container">$\lambda_{min}$</span> so that, for a given angle of incidence
<span class="math-container">$\theta_i$</span>, the SWS surface acts as a zero<span class="math-container">$^\mathrm{th}$</span>-order filter by reflection. If the
reflected zero<span class="math-container">$^\mathrm{th}$</span> order alone is to propagate in the incident medium of
refractive index equal to 1, the upper bound for the
period-to-wavelength ratio <span class="math-container">$\Lambda$</span>/<span class="math-container">$\lambda_{min}$</span> is simply equal to 1/(1+sin(<span class="math-container">$\theta_i$</span>)).</p>
</blockquote>
<p>I would like to know how they derived this, more specifically, why for <span class="math-container">$m=0$</span> (zero<span class="math-container">$^\mathrm{th}$</span> order) and an incident medium equal to <span class="math-container">$n=1$</span> (refractive index), we have:</p>
<p><span class="math-container">$$\frac{\Lambda}{\lambda_{min}} = \frac{1}{1+\sin(\theta_i)}$$</span> </p>
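A numeric sketch of the underlying grating equation (the 30° incidence angle is an assumed example) shows that only the zeroth reflected order propagates precisely when Λ/λ stays below 1/(1+sin θi), because the m = −1 order is the first to become propagating:

```python
import math

# Grating equation in an incident medium of index 1:
#   sin(theta_m) = sin(theta_i) + m * lambda / Lambda
# Order m propagates only if |sin(theta_m)| <= 1.
def orders_propagating(period_over_lam, theta_i_deg, m_max=3):
    s0 = math.sin(math.radians(theta_i_deg))
    return [m for m in range(-m_max, m_max + 1)
            if abs(s0 + m / period_over_lam) <= 1]

theta = 30  # degrees, assumed example incidence angle
bound = 1 / (1 + math.sin(math.radians(theta)))  # = 2/3 here

# Just below the bound only m = 0 survives; just above, m = -1 appears.
print(orders_propagating(bound - 1e-6, theta))  # [0]
print(orders_propagating(bound + 1e-3, theta))  # [-1, 0]
```

Requiring the m = −1 order to be evanescent, sin θi − λ/Λ < −1, rearranges directly into Λ/λ < 1/(1+sin θi).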
| 745
|
|
optics
|
What causes a transmissive diffraction grating to be more effective than a reflective one?
|
https://physics.stackexchange.com/questions/607498/what-causes-a-transmissive-diffraction-grating-to-be-more-effective-than-a-refle
|
<p>I was curious as to what would make a transmissive grating more effective than a reflective one. I have been searching but am having trouble finding an answer.</p>
<p>In literature, some semitransparent biological materials are iridescent under (oblique) transmitted light. However, the same materials, under epi illumination with a black background, no matter the angle, are non-iridescent.</p>
<p>I was wondering why this might be the case. In literature, biological iridescence is explained as being a result of periodic nanostructures in the material that act as diffraction gratings (e.g. peacock feathers).
But shouldn't diffraction gratings act the same way in reflection and transmission (if they're semitransparent rather than opaque)?</p>
<p>Some relevant information:</p>
<ul>
<li>Many of these materials are not thin, so it can't be thin film interference causing the iridescence.</li>
<li>Is it possible that diffraction gratings at interfaces act differently (only transmissive) from those that are not at an interface?</li>
</ul>
|
<p>The efficiency of a diffraction grating, whether transmissive or reflective, depends on several factors. One of them is the shape and depth of a grating. The depth of the grating controls how much energy gets sent to each diffracted order. The ideal depth for a transmissive grating depends on the refractive index of the material, and is in general different than the ideal depth for a reflective grating. The technical term used to describe the depth is blaze height.</p>
<p>The other factor that affects how a grating appears is what coating is on it. For example, gratings used in laboratory instruments such as spectrometers, are typically reflective gratings. The grooves in the grating are made by a molding process from a master grating, and then the gratings are overcoated with gold or aluminum. (The molding process is actually called replication; it's a bit more complicated than regular plastic molding). This overcoating makes them much higher efficiency than they would be without the coating. So for your example of the biological material that is iridescent in transmission but not reflection, it may be that in reflection the diffraction efficiency is just so low as to not be easily visible.</p>
<p>====== New note added in response to comments below. ======
There are different processes that can produce these visual effects. I don't know what particular things you are looking at, but there may be a combination of effects. In the first part of my answer, I was talking about diffraction by a regular repeating structure, with the physical variation of the structure occurring along an axis perpendicular to the primary light direction.</p>
<p>However, multilayer thin films may also produce colorful effects. For example, gasoline spread on water produces a colorful effect. This is caused by the light reflections reinforcing themselves only for certain angles and wavelengths. This would generally be called "thin-film interference". The transmission or reflection of multilayer thin films can be predicted by theory. You can see more about iridescence in nature here: <a href="https://en.wikipedia.org/wiki/Iridescence" rel="nofollow noreferrer">Iridescence</a></p>
<p>Of course, one can have structures that combine both of these effects. A diffractive grating, used in transmission or reflection, can be used to direct light of certain wavelengths. The coating on the diffractive surface would be designed to help achieve the desired optical function.</p>
| 746
|
optics
|
Does no airy disk/pattern mean that a converging beam of light is not at its focal plane?
|
https://physics.stackexchange.com/questions/415042/does-no-airy-disk-pattern-mean-that-a-converging-beam-of-light-is-not-at-its-foc
|
<p>From wikipedia: "In optics, the Airy disk (or Airy disc) and Airy pattern are descriptions of the best focused spot of light that a perfect lens with a circular aperture can make, limited by the diffraction of light."</p>
<p>So, is this saying that if I have a camera looking at a converging beam laser, that I am going to see an airy pattern (concentric circles) if and only if the camera is at the focal plane?</p>
| 747
|
|
optics
|
Focal length problem
|
https://physics.stackexchange.com/questions/201776/focal-length-problem
|
<p>Let we have an biconvex lens with equal radii, that means the two radii of curvature are equal. We know that from lens equation, </p>
<p><a href="https://i.sstatic.net/jxvqA.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jxvqA.gif" alt="enter image description here"></a></p>
<p>For a biconvex lens, $r_1 = r$ and the opposite radius $r_2 = -r$, so we finally get </p>
<p>$$\frac{1}{f} = (n-1)\frac{2}{r}$$ and the focal length comes out positive ($n>1$), as we know the focal length of a biconvex lens is positive. </p>
<p>On the other hand, if another person assumes $r_1 = -r$ and the opposite radius $r_2 = r$, he gets
$$\frac{1}{f} = -(n-1)\frac{2}{r}$$</p>
<p>which is negative, but the focal length of a biconvex lens is always positive. What's wrong here?</p>
|
<p>Any formula in physics comes with a set of definitions of what each variable in the equation represents, and how to interpret positive or negative values. This is particularly true in the case of lens and mirror formulae. In each case, a different form of the equation, with a different set of definitions, will give the same correct result.</p>
<p>In this case, Wikipedia <a href="https://en.wikipedia.org/wiki/Lens_(optics)#Lensmaker.27s_equation" rel="nofollow">https://en.wikipedia.org/wiki/Lens_(optics)#Lensmaker.27s_equation</a> adds to the above equation:</p>
<blockquote>
<p>The signs of the lens' radii of curvature indicate whether the
corresponding surfaces are convex or concave. The sign convention used
to represent this varies, but in this article a positive R indicates a
surface's center of curvature is further along in the direction of the
ray travel (right, in the accompanying diagrams), while negative R
means that rays reaching the surface have already passed the center of
curvature. Consequently, for external lens surfaces as diagrammed
above, R1 > 0 and R2 < 0 indicate convex surfaces (used to converge
light in a positive lens), while R1 < 0 and R2 > 0 indicate concave
surfaces. The reciprocal of the radius of curvature is called the
curvature. A flat surface has zero curvature, and its radius of
curvature is infinity</p>
</blockquote>
<p>Thus, your "on the other hand" individual is not respecting the sign convention, and will not get the correct result...</p>
<p>EDIT to expand:
This source, <a href="http://hyperphysics.phy-astr.gsu.edu/hbase/geoopt/lenmak.html" rel="nofollow">http://hyperphysics.phy-astr.gsu.edu/hbase/geoopt/lenmak.html</a>, with its links, defines <strong><em>exactly</em></strong> how the direction of light travel, the sign of the radius of curvature and the position of the focal point are to be defined. If you fail to follow these conventions with this form of the equation, chaos results...</p>
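A minimal sketch of the lensmaker's equation under the sign convention quoted above (the n = 1.5 and 100 mm radii are assumed example values):

```python
# Lensmaker's equation: 1/f = (n - 1) * (1/R1 - 1/R2).
# Convention: R is positive when the center of curvature lies beyond
# the surface in the direction of ray travel.
def lensmaker_focal_length(n, r1, r2):
    return 1.0 / ((n - 1) * (1.0 / r1 - 1.0 / r2))

# Biconvex lens, light traveling left to right: the first surface
# bulges toward the light (R1 = +100 mm), the second away (R2 = -100 mm).
f = lensmaker_focal_length(1.5, 100.0, -100.0)
print(f)  # +100.0 mm: positive, as expected for a converging lens
```

Swapping the signs (R1 = −100, R2 = +100) describes a biconcave lens, not the same biconvex lens relabeled, which is exactly the mistake made by the "other person" in the question.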
| 748
|
optics
|
Difference in the grating spectrum with different sources
|
https://physics.stackexchange.com/questions/206214/difference-in-the-grating-spectrum-with-different-sources
|
<p>If we use a slit source [of monochromatic light] in a diffraction grating setup, we obtain parallel bands on the screen. If instead of a slit we use a point source, we don't get such bands. Can someone please explain why and how this happens?
<a href="https://i.sstatic.net/lp0sy.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lp0sy.jpg" alt="enter image description here"></a></p>
|
<p>The lenses project the image of the light source onto the screen. The grating does not affect the propagation in the z-direction, so in one case you have lines that are offset in the x-direction depending on diffraction order and wavelength; in the other case you have points that are offset.</p>
<p>So the grating does the same thing in both cases (in geometrical optics language split each beam into diffraction peaks, where the angle between peaks depends on the wavelength).</p>
| 749
|
optics
|
Incident angle and refracted angle
|
https://physics.stackexchange.com/questions/220606/incident-angle-and-refracted-angle
|
<blockquote>
<p><a href="https://i.sstatic.net/8EmUm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8EmUm.jpg" alt="enter image description here"></a></p>
</blockquote>
<p>I came across this picture while studying Huygens' principle and the laws of refraction. My book says that $\theta_1$ is the incident angle and $\theta_2$ is the refracted angle, but I could not understand why this should be. Maybe I could not understand this because I am familiar with ray diagrams where the normal is perpendicular to the surface, whereas here the green lines are at $90^0$ to the incident rays.</p>
<p>So I cannot tell which ray is the incident ray and which is the refracted ray, or why. Please explain which angle is the incident angle and which is the refracted angle, and why.</p>
<p>Thanks for any help!! </p>
<p>Note: the light rays are traveling from air into water. The two media are separated by the blue line. </p>
|
<p>I agree that's confusing, and that $\theta_2$ is just plain wrong. I've always seen it explained with the normal perpendicular to the surface, just like you say, and exactly as drawn in <a href="https://en.wikipedia.org/wiki/Snell%27s_law" rel="nofollow">https://en.wikipedia.org/wiki/Snell%27s_law</a>. Note that this gives the same $\theta_1$ as in your drawing. But as your figure is drawn, $\theta_2=\theta_1$, period, independent of the refracted ray. Check your textbook again. I think maybe you transcribed its illustration wrong. It's hard to believe such a blatant blunder slipped by the editors and made it into print.</p>
| 750
|
optics
|
3D glasses: how do we tell whether a doubly-rendered image is closer or farther than the screen?
|
https://physics.stackexchange.com/questions/228987/3d-glasses-how-do-we-tell-whether-a-doubly-rendered-image-is-closer-or-farther
|
<p>This is a followup to this question:
<a href="https://physics.stackexchange.com/questions/228887/3d-glasses-giving-the-opposite-effect-to-that-expected">3D glasses giving the opposite effect to that expected</a></p>
<p>The current top answer explains that objects perceived as beyond the plane of the screen, as well as object perceived as closer to the viewer than the screen, are displayed by rendering two 2d images slightly apart from each other on the screen. This diagram is used in the explanation.
<img src="https://i.sstatic.net/LRhwa.jpg" alt="diagram"></p>
<p>My question is: how do our brains decide whether an object is in front of or behind the screen, given two images that are apart from each other? How does it know the green dot is far away and the blue dot is close?</p>
|
<p>As you can see from the sketch you posted, there is exactly one point where the corresponding rays traveling to each eye cross (marked by the solid dots). Our brain sees this light as if it emerged out of this single point (while in fact it came from two spots on the screen).</p>
<p>The brain processes the images provided by both eyes into a perception of three dimensions. This happens because both our eyes see the same images <em>as if</em> we really were looking at a physical three dimensional scene.</p>
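The crossing point can be located with similar triangles (a sketch; the eye separation, viewing distance, and on-screen disparities are assumed example values, with the convention that positive separation means the left-eye image lies to the left of the right-eye image):

```python
# Perceived distance of a stereo point: Z = e * D / (e - s), from
# similar triangles between the eye baseline and the screen plane.
#   e = eye separation, D = viewer-to-screen distance,
#   s = on-screen separation between the two images (signed).
def perceived_distance(e, D, s):
    return e * D / (e - s)

e, D = 0.065, 2.0  # 65 mm eyes, 2 m from the screen (assumed values)

behind = perceived_distance(e, D, 0.02)     # uncrossed: behind screen
in_front = perceived_distance(e, D, -0.02)  # crossed: in front of screen
on_screen = perceived_distance(e, D, 0.0)   # zero disparity: on screen

print(behind, in_front, on_screen)
```

So the sign of the disparity (whether the rays cross in front of or behind the screen) is what tells the brain "near" versus "far", matching the green and blue dots in the diagram.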
| 751
|
optics
|
Why is there "ringing" at the violet end of a rainbow but not the red end?
|
https://physics.stackexchange.com/questions/230705/why-is-there-ringing-at-the-violet-end-of-a-rainbow-but-not-the-red-end
|
<p>I've recently noticed that if you look closely at the violet end of a rainbow, you can see a sort of "ringing" effect where there are alternating bands of color and lack of color. You can apparently photograph this:</p>
<p><a href="https://i.sstatic.net/Fa5au.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fa5au.jpg" alt="upload.wikimedia.org"></a> </p>
<p>Zoom in closely to the violet end of the rainbow to see it.</p>
<p>What causes this? And why doesn't the same effect produce "ringing" on the red side of the rainbow?</p>
|
<p>On rainbow formation: <a href="https://en.wikipedia.org/wiki/Rainbow#Supernumerary_rainbow" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Rainbow#Supernumerary_rainbow</a>. See a bit more about supernumeraries at <a href="http://www.atoptics.co.uk/rainbows/supers.htm" rel="nofollow noreferrer">http://www.atoptics.co.uk/rainbows/supers.htm</a>. </p>
<p>The classical rainbow is explained by geometric optics, which applies to big enough droplets. When droplets approach the scale of the light's wavelength, geometric optics becomes less and less valid and is replaced by wave optics. The smallest droplets lie at this limit, giving the quite complicated Mie scattering profile <a href="https://en.wikipedia.org/wiki/Mie_scattering" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Mie_scattering</a> </p>
<p>Look at this nasty profile - and note that it's a log plot! (The two lobes at 180° ± 40° are the classical geometric rainbow.)</p>
<p><a href="https://i.sstatic.net/QKIvl.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QKIvl.gif" alt=""></a></p>
<p>As a wave-diffraction effect it oscillates, with interference bands that are strongly wavelength-dependent and thus appear colored. In general, clouds are made of drops of different sizes, which tends to blur out these oscillations on averaging. But when the droplets do have nearly the same size, the oscillations survive and show up as the supernumerary bows.</p>
| 752
|
optics
|
Diffraction pattern in the image plane?
|
https://physics.stackexchange.com/questions/237147/diffraction-pattern-in-the-image-plane
|
<p>Consider the setup below:
<a href="https://i.sstatic.net/z4GxS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z4GxS.png" alt="enter image description here"></a></p>
<p>In all cases the relationship between $u_o(x_o)$ and $u_f(x_f)$ is given by a Fourier transform. My question is, when is the same true for the relationship between $u_f(x_f)$ and $u_i(x_i)$?</p>
|
<p>Strictly speaking, it's not true that the relationship is an exact Fourier transform in all cases. Since we're dealing with electric fields, unless the object is exactly one focal length away from the lens there will be a quadratic phase factor that needs to be dealt with - unless you're interested in the purely incoherent case.</p>
<p>Also, I'm not sure I understand your question about relating $u_f$ and $u_o$. </p>
| 753
|
optics
|
Why is the wavelength of light proportional to the minimum angle of resolution?
|
https://physics.stackexchange.com/questions/255421/why-is-the-wavelength-of-light-proportional-to-the-minimum-angle-of-resolution
|
<p>E.g. why does the minimum angle of resolution increase as wavelength increases?</p>
<p><a href="https://i.sstatic.net/yebCM.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yebCM.jpg" alt="Resolution and the Rayleigh Criterion"></a></p>
|
<p>I like to look at it the other way around: the minimum resolving angle depends on your wavelength. That's why people use X-rays to look at crystal structures, for instance. In imaging, where one typically uses circular apertures, the best focus that can be obtained is not a single spot but a so-called Airy pattern: a bright spot in the middle surrounded by fainter rings, due to the diffraction of the light. The light forms a pattern of maxima and minima; you can see the first few in the picture you posted. The Rayleigh criterion states that two peaks are resolved when the maximum of one peak coincides with the first minimum of the other. The angles at which the minima occur depend on the wavelength and the aperture</p>
<p>$\sin \theta_m=m\frac{\lambda}{D}$,</p>
<p>where $m=1.22, 2.233, 3.238,\ldots$ corresponds to the 1st, 2nd, 3rd, ... minimum. For small angles $\sin x \approx x$, and with the first minimum ($m=1.22$) you recover the equation that you mentioned. </p>
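A quick numerical check of the criterion (the 5 mm aperture and the two wavelengths below are illustrative values, not from the question):

```python
import math

def min_resolvable_angle(wavelength, aperture, m=1.22):
    """Rayleigh first-minimum angle (radians): sin(theta) = m * lambda / D."""
    return math.asin(m * wavelength / aperture)

# Illustrative numbers: a 5 mm aperture viewing green vs. red light.
theta_green = min_resolvable_angle(550e-9, 5e-3)
theta_red = min_resolvable_angle(700e-9, 5e-3)
# Longer wavelength -> larger minimum resolving angle:
assert theta_red > theta_green
```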
| 754
|
optics
|
Why don't windows and mirrors cancel light?
|
https://physics.stackexchange.com/questions/264465/why-dont-windows-and-mirrors-cancel-light
|
<p><a href="https://i.sstatic.net/I28Gl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I28Gl.png" alt="enter image description here"></a>I understand that when light goes from a material of low refractive index to one of higher index, the reflected wave picks up a phase change of 180°. Most glass has an index of around 1.5, so light coming from air would reflect off the glass with a 180° phase change, and destructive interference occurs at 180° of phase shift. Thinking about this, I wondered how we can have reflections off glass at all, and why anti-reflective coatings are needed. Is it because of polarization, where only light polarized in the plane of incidence interferes? </p>
|
<p>What you have not taken into account is that the reflected light wave is travelling in the opposite direction. The incoming and outgoing waves may interfere with each other, but they will not completely cancel each other out - in fact a standing wave may be formed, as described below.</p>
<p>Now - the way an antireflection coating works is to produce <strong>two reflected waves</strong> (one from each surface of the coating) which travel in the same direction and are 180 degrees out of phase, so that these two cancel each other out by destructive interference (this is the simplest form of antireflective coating).</p>
<p><strong>Standing wave formation</strong></p>
<p>There can be interference between an incoming and outgoing wave. In general if a wave hits a barrier and reflects back the way it came there may be a standing wave formed by the addition of the forward and backward waves. The waves appear to 'pass through' each other. </p>
<p>A nice description with pictures to show how standing waves are set up is <a href="http://www.s-cool.co.uk/a-level/physics/progressive-waves/revise-it/standing-waves" rel="nofollow noreferrer">here</a>. In brief the two waves going in opposite directions look like the picture below...</p>
<p><a href="https://i.sstatic.net/NmG1o.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NmG1o.jpg" alt="enter image description here"></a></p>
<p>combine by constructive and destructive interference to form a standing wave that looks like the picture below, where <strong>A</strong> is the amplitude of the wave and <strong>N</strong> are the null-points which are always at zero.</p>
<p><a href="https://i.sstatic.net/i0KPn.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i0KPn.jpg" alt="enter image description here"></a></p>
<p><strong>Antireflective coating</strong></p>
<p>For an antireflective coating the two reflected waves produced are out of phase by 180 degrees as in the picture below</p>
<p><a href="https://i.sstatic.net/sXqlp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sXqlp.png" alt="enter image description here"></a></p>
| 755
|
optics
|
A confusion about single slit diffraction
|
https://physics.stackexchange.com/questions/303688/a-confusion-about-single-slit-diffraction
|
<p>I am studying wave optics at home. Yesterday I came across a diagram depicting single slit diffraction experiment of light, where they placed a convex lens. I could not understand the purpose of this convex lens.<a href="https://i.sstatic.net/WUH8r.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WUH8r.jpg" alt="enter image description here"></a></p>
| 756
|
|
optics
|
3D glasses which bend light about 5-10 degrees - how do they work?
|
https://physics.stackexchange.com/questions/596903/3d-glasses-which-bend-light-about-5-10-degrees-how-do-they-work
|
<p>A child's jigsaw puzzle came with two pairs of cheap cardboard-framed 3D glasses.</p>
<p>The glasses had thin plastic film in them, as lenses.</p>
<p>These thin flat plastic lenses bend about 99% of visible light by around 5-10 degrees. You can see a very faint image of the unbent scene.</p>
<p>This produces a weak 3d effect when parts of an image have a black halo.</p>
<p>What is this thin film that bends light?</p>
<p>Why are the black halos required?</p>
| 757
|
|
optics
|
Geometrical paraxial optics
|
https://physics.stackexchange.com/questions/597607/geometrical-paraxial-optics
|
<p>Why is geometrical optics treated in 2d? I actually never thought about it and calculated all the problems straightforward with the ABCD matrices. And then this question came to my mind for which I have no answer so far.</p>
|
<p>Usually optical systems are axially symmetric, so you only need two dimensions to understand the paths of all the rays traversing the system.</p>
| 758
|
optics
|
How to choose the right focal length lens for my photodetector?
|
https://physics.stackexchange.com/questions/606315/how-to-choose-the-right-focal-length-lens-for-my-photodetector
|
<p>I wanted to know how to choose the right focal length lens for my silicon carbide photodetector which has a peak response of 315nm and an active area of 25mm^2 such that I can mount that lens on my photodetector to reject any ambient light and allow only UV light to pass through it. Kindly share your thoughts.</p>
| 759
|
|
optics
|
Is there a relationship between thickness of matter and luminance?
|
https://physics.stackexchange.com/questions/613530/is-there-a-relationship-between-thickness-of-matter-and-luminance
|
<p>I'm searching for a relationship between thickness of matter and luminance.</p>
<p>I'm a tribologist and I'm doing friction and wear experiments. During these experiments I'm illuminating the surface and capturing images during the whole test. The incoming light is a diffused light generated by a lighting dome. I'm observing how the pixels that are directed to the surface are getting darker over time due to wear debris deposition.</p>
<p>From what I understand, the pixel value of each image represents some kind of uncalibrated luminance value, and my surface is getting darker due to attenuation of radiation (Beer's law). Is it therefore possible to establish a relationship between Beer's law and luminance?</p>
<p>What I've found so far:</p>
<p>(1) Attenuation of radiation:
<span class="math-container">$$I(d) = I_0 e^{-\epsilon d},$$</span>
where <span class="math-container">$\epsilon$</span> refers to the absorption coefficient, <span class="math-container">$d$</span> to the material thickness, <span class="math-container">$I_0$</span> to the incoming luminous flux, and <em>I(d)</em> to the transmitted luminous flux.</p>
<p>(2) Lambertian reflection:
<span class="math-container">$$L = (E R)/\pi,$$</span>
where <span class="math-container">$L$</span> refers to the luminance, <span class="math-container">$E$</span> to the illuminance and <span class="math-container">$R$</span> to the reflectance.
As <span class="math-container">$E=I_0/A$</span>,
<span class="math-container">$$L = (I_0 R)/(A \pi),$$</span>
where <span class="math-container">$A$</span> refers to the surface area.</p>
<p>I believe there is some kind of a relationship that could link the thickness of the deposited materials to the luminance, maybe like this
<span class="math-container">$$L(d) = L_0 e^{-\varepsilon d} + L_1$$</span>.</p>
<p>So far, I could not find any link that comes close to this relationship. Am I on the wrong path? Thanks for any help!</p>
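A sanity check of the proposed model (all parameter values below are illustrative, not measured; note also that in reflection the light traverses the debris layer twice, so any fitted coefficient would be an effective one):

```python
import math

# Illustrative parameters only (not measured values): L0 is the clean-surface
# luminance above the background term L1, eps an effective coefficient.
L0, L1, eps = 100.0, 5.0, 0.8  # arbitrary units; eps in 1/um

def luminance(d_um):
    """Proposed model L(d) = L0 * exp(-eps*d) + L1 for debris thickness d."""
    return L0 * math.exp(-eps * d_um) + L1

# Thicker debris -> darker pixel, saturating at the background level L1:
values = [luminance(d) for d in (0.0, 1.0, 2.0, 5.0)]
assert all(a > b for a, b in zip(values, values[1:]))
```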
| 760
|
|
optics
|
Why does light refract if photons are not bound by an axle?
|
https://physics.stackexchange.com/questions/2169/why-does-light-refract-if-photons-are-not-bound-by-an-axle
|
<p>In the classic metaphor, a light beam bends for the same reason that a wagon getting one wheel stuck in the sand does...the wheels travel at uneven speeds, and the wheel on the smoother surface travels faster.</p>
<p>But the key to the wagon scenario is the axle - if the two wheels were not bound, the faster wheel would sail on, heedless of the other wheel's difficulty. So, if the metaphor is useful at all, there must be an axle-like force binding the photons in a beam of light, which causes it to turn when it hits a different medium non-perpendicularly.</p>
<p>Or is this just a bad metaphor for the rubes?</p>
|
<p>The first important point is that light always travels in straight lines in an environment of constant index of refraction:</p>
<p><img src="https://i.sstatic.net/eejRR.gif" alt="alt text"></p>
<p>What creates the dispersion is the fact that the index of refraction depends on the frequency of the light $n = n(f)$. This also means that phase velocity (defined as $v = {c \over n}$) depends on the frequency. You don't see this in the image but you can imagine that in usual materials (e.g. glass) a wavefront will propagate faster along red tracks than along the violet tracks (equivalently, red tracks are less bent than the violet tracks, as can be seen in the picture). So this will make the wavefront bulging up in the red part of the spectrum (the precise shape depending on the precise material and angle of incidence).</p>
<p>So it's probably just the wavefront the metaphor is talking about. But you can see that the wavefront will spread out as it propagates in the material (precisely because there is no "axle" between individual photons). In my opinion it's a pretty bad metaphor, except for giving a correct illustration of the relation "denser environment" $\leftrightarrow$ "slower speed".</p>
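To put numbers on "red tracks are faster": using the published Sellmeier coefficients for Schott BK7 glass (treat them as assumed inputs; any dispersive glass would do), violet light sees a higher index and hence a lower phase velocity $v = c/n$:

```python
import math

# Sellmeier coefficients for Schott BK7 glass (wavelength in micrometres).
# Assumed inputs taken from the commonly published data sheet.
B = (1.03961212, 0.231792344, 1.01046945)
C = (0.00600069867, 0.0200179144, 103.560653)

def n_bk7(lam_um):
    """Refractive index of BK7 at wavelength lam_um (micrometres)."""
    n2 = 1.0 + sum(b * lam_um**2 / (lam_um**2 - c) for b, c in zip(B, C))
    return math.sqrt(n2)

n_red, n_violet = n_bk7(0.656), n_bk7(0.405)
# Violet sees the higher index, hence the lower phase velocity v = c/n,
# and is therefore bent more strongly at each refraction:
assert n_violet > n_red
```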
| 761
|
optics
|
Will watering tea down make it clearer?
|
https://physics.stackexchange.com/questions/4709/will-watering-tea-down-make-it-clearer
|
<p>If I poured water into my tea, would I see more or less of the bottom of the tea-cup?</p>
<p>Intuitively, there would be as many particles blocking as many photons, and so I'd see the bottom just as clearly as before.</p>
|
<p>There is some misconception in Tim's question already: </p>
<blockquote>
<p>as many particles blocking as many photons, and so I'd see the bottom just as clearly as before</p>
</blockquote>
<p>Blocking of photons will not impair clarity at all! </p>
<p>One has to distinguish absorption from scattering! Optically, tea is a solution of dyestuffs in water. These dyestuffs absorb light of certain wavelengths (making the tea look yellow/brown), but the tea is not made turbid ("unclear") by that absorption. (Think of a color filter for cameras!)</p>
<p>The absorption is governed by the <a href="http://en.wikipedia.org/wiki/Beer%E2%80%93Lambert_law" rel="nofollow">Lambert-Beer law</a>. The law shows that doubling the length of the liquid column is "neutralized" by halving the concentration.</p>
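That trade-off is easy to verify numerically (the absorption coefficient, concentration, and path length below are arbitrary illustrative values):

```python
import math

def transmitted_fraction(eps, conc, path):
    """Lambert-Beer: I/I0 = exp(-eps * c * d)."""
    return math.exp(-eps * conc * path)

# Doubling the column length while halving the concentration is neutral:
t_original = transmitted_fraction(eps=0.5, conc=1.0, path=4.0)
t_diluted = transmitted_fraction(eps=0.5, conc=0.5, path=8.0)
assert abs(t_original - t_diluted) < 1e-12
```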
<p>Now to the question of turbidity. In practice, tea may contain some particles of lime from the water, often agglomerates of lime with tanning agents from the tea. On top there is some fat/oil which drifts as droplets or collects on the surface.</p>
<p>Any particle floating in the tea that is roughly 50 nm to a few micrometres in size, and whose index of refraction differs enough from that of water, will cause some scattering.</p>
<p>This scattering will blur the picture. </p>
<p>Most of the scattering in suspensions like tea or milk is governed by <a href="http://en.wikipedia.org/wiki/Mie_theory" rel="nofollow">Mie scattering</a>. The dependence on concentration and path length is not as simple as Lambert-Beer, but at low concentrations one can assume an analogous dependence.</p>
<p>A practical note: with good (low-lime) water, leaves in a bag with good filtering action, and a moment's wait until the oil accumulates at the surface, tea is a nearly perfectly clear liquid.</p>
<p>Georg </p>
| 762
|
optics
|
Will the size of the pinhole affect the size of the image produced?
|
https://physics.stackexchange.com/questions/582174/will-the-size-of-the-pinhole-affect-the-size-of-the-image-produced
|
<p>If we have a larger pinhole, would it produce a larger image? <br />
I know the size of the opening will affect the brightness and sharpness of the image, but what about its size? If you think about it, a wider opening will make the light rays more spread out and consequently produce a bigger (and also blurrier) image on the screen. I guess I'm just really confused.</p>
<p>Thanks a lot!</p>
|
<p>A larger pinhole in a pinhole camera does not make the image larger; it simply makes each "point" in the image blurrier. A way to think about it is to consider the image to be the superposition of a large number of essentially identical images, each produced by a different sub-pinhole within the large pinhole. The result is that each image "point" is blurred by an amount equal to the size of the large pinhole.</p>
| 763
|
optics
|
Why does incident ray, refracted ray and normal lie in the same plane? (looking for physical reasons)
|
https://physics.stackexchange.com/questions/585403/why-does-incident-ray-refracted-ray-and-normal-lie-in-the-same-plane-looking
|
<p>If we have a ray striking a plane then the unit vector in direction of the incident ray can be thought of as rotating in a plane that contains the normal to interface and the part of the unit vector along the plane of the interface.</p>
<p>However, I cannot understand why this unit vector is not rotated around lines parallel to the plane of the interface which it strikes. To summarize: why do the normal, the incident ray, and the refracted ray lie in the same plane? I am asking for a physical answer, not a mathematical one.</p>
<p>I have already seen a mathematical answer <a href="https://physics.stackexchange.com/questions/582972/when-visible-light-waves-get-reflected-why-are-the-incident-ray-reflected-ra">here in this post</a> but I don't think it really explained anything of 'why' it happens and, even if it does, uses some strange operations which I have not heard of it.</p>
<p>I seek a less mathematically sophisticated answer, one that directly explains why this physical phenomenon happens. Also, I already know Snell's law of refraction; I'm asking for the intuition for why it should be true.</p>
<p><strong>An afterthought:</strong> If light travels as a wave then shouldn't It technically refract in a lot of directions when it hits a surface?</p>
|
<p>Why? Because of the equations. That is the best answer there can be, IMHO; maths is pure logic.</p>
<p>One shorthand argument is to point out that Maxwell's equations are invariant under rotations, i.e. Maxwell's equations have no preferred direction. In the problem of refraction you have:</p>
<p>(1) Plane of your surface, which is fully characterized by the normal in 3d space</p>
<p>(2) Direction of the incident light</p>
<p>These two vectors fix the situation, i.e. I would expect all refraction-related phenomena to be describable in terms of these two vectors, since there is no third preferred vector in the setting of the problem.</p>
<p>You could also play it this way. Say the refracted ray were deflected out of the normal-incidence plane. Which way would it go, out of the page or into the page? There is no information in Maxwell's equations to give this answer.</p>
<p>Final note. This only applies to isotropic dielectrics, of course. Once you have crystalline solids, which do have special directions, the situation can change. I am sure one could then come up with a configuration where light is refracted in the off-plane direction.</p>
| 764
|
optics
|
Is there a limit to demagnification of an image as there is for magnification?
|
https://physics.stackexchange.com/questions/587416/is-there-a-limit-to-demagnification-of-an-image-as-there-is-for-magnification
|
<p>In making wafers for chips, when using masks and demagnifying to a smaller size, is there any limit to demagnification? For light, a microscope’s numerical aperture and wavelength limitation prevents the level of magnification from becoming higher than a certain level.</p>
|
<p>The optical resolution limit is <span class="math-container">$p = 0.5 \lambda / NA$</span> for off-axis illumination. The numerical aperture <span class="math-container">$NA=n \sin \theta$</span> cannot exceed the refractive index <span class="math-container">$n$</span> of the medium (vacuum or sometimes water) below the lens. In principle you can reduce lambda indefinitely.</p>
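A rough numerical illustration of the limit (the NA value sin θ = 0.93 and water with n ≈ 1.44 at 193 nm are assumptions typical of immersion lithography, not values from the answer):

```python
import math

def half_pitch(wavelength_nm, n=1.0, sin_theta=0.93):
    """Resolution limit p = 0.5 * lambda / NA, with NA = n * sin(theta)."""
    return 0.5 * wavelength_nm / (n * sin_theta)

# ArF lithography at 193 nm: dry (air, n = 1) vs. water immersion (n ~ 1.44).
p_dry = half_pitch(193.0)
p_wet = half_pitch(193.0, n=1.44)
# Immersion raises the attainable NA above 1 and shrinks the printable pitch:
assert p_wet < p_dry
```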
| 765
|
optics
|
Bulb inside the prism
|
https://physics.stackexchange.com/questions/591428/bulb-inside-the-prism
|
<p>What if a source of white light is within the prism itself? Let's say it emits a thin beam of white light. Now the speed of different wavelengths is different, but the beam is not being refracted at a surface (at least until it emerges from the other side), so what will happen to it? Will it stay together as white light? And what happens once it crosses the other side?</p>
<p>This just popped up in my mind.</p>
|
<p>Put simply: the white light, assuming it hits the face of the prism at an angle other than perpendicular, will be spread over exit angles by color (so long as the prism glass is dispersive, i.e. its index of refraction varies with wavelength).<br />
You are correct that there will be some phase lag inside the prism, so if you were to generate a very short pulse of "white" light and had sufficiently fast measurement tools you might see different color components exiting at slightly different times. When you have a continuous source, of course, there's no "time zero" to measure against.</p>
| 766
|
optics
|
A lens with a variable refractive index
|
https://physics.stackexchange.com/questions/173808/a-lens-with-a-variable-refractive-index
|
<p>(60th Polish Olympiad in Physics)</p>
<blockquote>
<p>A planoconvex lens of radius of curvature <span class="math-container">$R$</span> and thickness <span class="math-container">$d$</span> (on the optical axis) is made of a material, whose refractive index changes with the distance from the axis according to the following formula: <span class="math-container">$$n(r) = n_1 + a \cdot r^2$$</span> where <span class="math-container">$n_1$</span> and <span class="math-container">$a$</span> are constant. The refractive index outside the lens is <span class="math-container">$n_0$</span></p>
<p>Let's consider a ray which is parallel to the optical axis, falling from the flat side. Its distance from the axis is <span class="math-container">$r_1$</span>. Describe the course of the ray.</p>
</blockquote>
<p>The solution claims that the ray will be deviated inside the lens, to the axis if <span class="math-container">$a>0$</span> and apart from the axis if <span class="math-container">$a<0$</span>. But why? The ray is falling perpendicularly, so it's not refracted, and its distance from the axis inside the lens is constant too. So I'd say it should refract only on the other end of the lens, depending on the <span class="math-container">$n_1,a, n_0$</span> and <span class="math-container">$r$</span></p>
| 767
|
|
optics
|
Helical wavefronts of vortex beams
|
https://physics.stackexchange.com/questions/568765/helical-wavefronts-of-vortex-beams
|
<p>Vortex beams are characterized by an azimuthal angle phase dependence, basically <span class="math-container">$e^{i l\phi}$</span>. Why is this azimuthal angle phase dependence crucial and will I get a helix if I were to plot the surface of constant phase for Laguerre Gaussian beams?</p>
<p>NB: I have edited the question because of a confusion.</p>
|
<p>If the wave propagates in the <span class="math-container">$z$</span> direction and <span class="math-container">$\Phi(z,r, \phi,t)$</span> is the phase of the wave
<span class="math-container">$$
A(r)e^{i\Phi(z,r, \phi,t)}=
A(r)e^{ikz -i\omega t+il\phi},
$$</span>
then the surfaces of constant <span class="math-container">$\Phi$</span> are indeed helical.</p>
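One can check the helical shape numerically: at fixed time, solving Φ = const for z gives z varying linearly with φ, advancing by l·λ per full turn (the wavelength, frequency, and l below are arbitrary choices):

```python
import math

# Assumed parameters: wavelength 1 um, topological charge l = 2.
lam = 1.0e-6
k, omega, l = 2 * math.pi / lam, 2 * math.pi * 3.0e14, 2

def z_const_phase(phi, t=0.0, phase0=0.0):
    """Solve Phi = k*z - omega*t + l*phi = phase0 for z at fixed time t."""
    return (phase0 + omega * t - l * phi) / k

# Going once around in phi, the constant-phase surface advances by l * lambda:
dz = z_const_phase(0.0) - z_const_phase(2 * math.pi)
assert abs(dz - l * lam) < 1e-15
```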
| 768
|
optics
|
Why rainbow doesn't happen in each drop independently?
|
https://physics.stackexchange.com/questions/244199/why-rainbow-doesnt-happen-in-each-drop-independently
|
<p>When rainbow happens,we see the rainbow as a whole.</p>
<p>When the light enter each drop,the chromatic dispersion should happen in each drop independently and we should see multiple small rainbows.</p>
<p>So why do we see it as a big one?</p>
|
<p>What you see when you're viewing a rainbow at any particular point is the light from the sun reflecting off droplets at that particular point to hit your eye. The angle from which the light hits those droplets to get to your eye is always the same, hence all the droplets in that vicinity show one single color.</p>
<p>Next time you see a rainbow, notice how if you move your head, the rainbow moves as well. This is because the rainbow is always where the light hits the droplets at the right angle to reach your eye. Someone at a completely different location would not see the rainbow in the exact same spot, if at all. </p>
| 769
|
optics
|
rare occurence of diffraction in light
|
https://physics.stackexchange.com/questions/246335/rare-occurence-of-diffraction-in-light
|
<p>My question is: how is it that diffraction is not a common phenomenon with light? A lunar eclipse also rests on the same basis, yet diffraction seems to be only a feature of sound. How?</p>
|
<p>Noticeable diffraction occurs when the wavelength is similar in size to the obstacle in the path of the waves. Sound wavelengths are on the order of metres, so we observe diffraction of sound fairly routinely. Since the wavelength of light is so much smaller (and particularly tiny compared to the moon) we rarely see diffraction effects with light.</p>
<p>One place you do see diffraction effects with light is a diffraction grating for example with the separation of colours produced by a compact disc or the screen of a smartphone (turned off).</p>
<p>My personal favourite way to demonstrate diffraction is to have someone hold their two thumbs parallel and very close together - less than 1 mm - and then look between their thumbs at a light source (it could be just daylight through a window). If they look carefully they will see tiny light and dark bands parallel to the edges of their thumbs. </p>
| 770
|
optics
|
Spherical aberration in concave mirrors
|
https://physics.stackexchange.com/questions/249384/spherical-aberration-in-concave-mirrors
|
<p>I saw this posted on the forum with an answer:
<a href="https://physics.stackexchange.com/questions/209716/cause-of-spherical-aberrations">Cause of Spherical Aberrations</a></p>
<p>However, the answer helps explain aberration in lenses. Why is there spherical aberration in concave mirrors?</p>
<p>Thanks,
-Prasad</p>
|
<p>It's just geometry. If you want all incident rays parallel to the principal axis to reflect through a single point, the mirror needs to be parabolic in shape. For a focal length <em>f</em>, the equation of the parabola (opening upwards) would be $4fy=x^{2}$.</p>
<p>A spherical mirror will have approximately the same shape as a parabolic mirror near the vertex. The focal point will be at a distance of half the radius of curvature from the vertex. However, with a spherical mirror, as incident rays get farther from the principal axis they produce reflected rays that cross the principal axis at points nearer to the mirror than half the radius. This is the origin of the spherical aberration.</p>
<p><a href="https://i.sstatic.net/6NsyX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6NsyX.png" alt="Spherical aberration in concave mirror"></a></p>
<p><sup>Image source: <a href="https://commons.wikimedia.org/wiki/File:Aberration_de_sph%C3%A9ricit%C3%A9_d%27un_miroir_sph%C3%A9rique_concave.svg" rel="nofollow noreferrer"><br>
Aberration de sphéricité d'un miroir sphérique concave</a> by Jean-Jacques MILAN, GNU Free Documentation License.</sup></p>
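The focal shortening can be made quantitative with the standard geometry result x = R - R/(2 cos θ), sin θ = h/R (a sketch; the radius and ray heights below are arbitrary):

```python
import math

def axis_crossing(R, h):
    """Distance from the vertex of a concave spherical mirror (radius R)
    at which a ray, incident parallel to the axis at height h, recrosses
    the axis: x = R - R / (2*cos(theta)), with sin(theta) = h / R."""
    theta = math.asin(h / R)
    return R - R / (2.0 * math.cos(theta))

R = 1.0  # arbitrary radius of curvature
paraxial = axis_crossing(R, 0.001)  # essentially the nominal focus R/2
marginal = axis_crossing(R, 0.4)    # marginal ray crosses nearer the mirror
assert marginal < paraxial
```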
| 771
|
optics
|
How does total internal reflection cause sparkling of diamonds?
|
https://physics.stackexchange.com/questions/249767/how-does-total-internal-reflection-cause-sparkling-of-diamonds
|
<p>How does total internal reflection cause sparkling of diamonds? What is the problem with ordinary glass?</p>
|
<ol>
<li>The <em>index of refraction</em> for diamond is much larger than for ordinary glass which means the critical angle in air is much smaller. More rays inside the diamond will experience total internal reflection than would in glass.</li>
<li>The <em>dispersion</em> value for diamond is much higher than for glass. In both glass and diamond red and blue light will refract at slightly different angles for the same angle of incidence but in diamond this difference is much greater.</li>
</ol>
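Point 1 is easy to quantify (n = 1.5 and n = 2.42 are the usual textbook values for glass and diamond):

```python
import math

def critical_angle_deg(n_inside, n_outside=1.0):
    """Critical angle for total internal reflection, in degrees."""
    return math.degrees(math.asin(n_outside / n_inside))

glass = critical_angle_deg(1.5)     # about 41.8 degrees
diamond = critical_angle_deg(2.42)  # about 24.4 degrees
# Rays hitting a facet at more than the critical angle from the normal are
# totally reflected; diamond's much smaller angle traps far more rays.
assert diamond < glass
```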
| 772
|
optics
|
Michelson interferometer
|
https://physics.stackexchange.com/questions/228678/michelson-interferometer
|
<p>I'm reading about an experiment done with this piece of equipment. The aim is to measure the thickness of a piece of plastic. They use white light, so the central fringe in the interference pattern, corresponding to equal path lengths in the two beams, can be used as a reference. Why does that work? I don't understand why it can be used as a reference. </p>
<p>The central fringe corresponds to both beams having travelled equal distances... The displacement of one fringe has to be tracked when the plastic is placed. That's about as much as I understand. Why is the central fringe a reference? If it's the only one that doesn't move, why is that?</p>
|
<p>I'm not sure that I understand your set up, but it appears to refer to <a href="https://en.wikipedia.org/wiki/Optical_coherence_tomography" rel="nofollow">Optical Coherence Tomography</a>. There are various ways to do it, but in the simplest incarnation, light with a short coherence length is used, and one "mirror" of a Michelson is a semi-transparent interface (such as the surface of a piece of plastic). As you move the other mirror, fringes are observed when the optical path length of the first, and later the second, surface match the distance to the mirror. The optical distance between the surfaces is equal to the distance that the mirror was moved. The technique is used to produce 3D images of semitransparent objects.</p>
<p>This is now widely used in ophthalmology, but has other medical applications. In non-medical fields it's sometimes referred to as Optical Low Coherence Reflectometry, but given the success of the medical devices, one doesn't see that name much anymore.</p>
| 773
|
optics
|
Why we see more diverging light rays than converging light rays?
|
https://physics.stackexchange.com/questions/148154/why-we-see-more-diverging-light-rays-than-converging-light-rays
|
<p>While the apparent fact that there are more diverging light rays than converging ones seems intuitive, mathematically I can't find a reason for it to be so.</p>
<p>More specifically, given a vector field of light rays $\overrightarrow{v}\left(x,y,z,t\right)$, we want to find its overall divergence, i.e integrating it along the whole space:</p>
<p>$\int \nabla\cdot\overrightarrow{v} dV\, = \oint_S \overrightarrow{v}\cdot d\overrightarrow{n}\,$</p>
<p>by the divergence theorem. This seems to suggest that the overall divergence of light rays in some space depends on the net light rays going out and in this space, which should be 0 for a closed space, and always positive if the rays are always being more divenging than converging.</p>
<p>Is anything missing here? Or is there a better mathematical treatment of this matter?</p>
|
<p>Yes, you are missing the second law of thermodynamics. It is related to entropy, the arrow of time, etc. (search Wikipedia for the details). Basically, your scenario of equal diverging and converging rays will only happen at thermodynamic equilibrium. The laws of physics are invariant under time reversal, but particular solutions are sensitive to the initial conditions. Our universe started far from thermodynamic equilibrium and is evolving towards it. In the meantime, we will have a difference between diverging and converging rays, and between a lot of other things that should be symmetrical (such as anything that would look weird in a movie played backwards). </p>
| 774
|
optics
|
High power beam splitting
|
https://physics.stackexchange.com/questions/86398/high-power-beam-splitting
|
<p>I'm trying to split a high power laser (1064nm, 20W, beam diameter @ 1/e^2 intensity: 3.2mm) into two beams and couple them, after some additional optics, into two SMFs (single mode fibers). So far I have tried a PBS (polarizing beam splitter) cube and an NPBS (non polarizing beam splitter) cube. I can split the beam, but the cubes seem to destroy the mode in the reflected port at high power. I conclude this because I couple approx. 20% less efficiently into the SMF.
Does anybody know how to split (50:50 would be sufficient, although polarisation dependent would be preferred) a high power beam without degrading the mode?</p>
<p>Note 1:
Plate beam splitters are not preferable, as they usually have the same dielectric beam splitter coating as cubes, and the reflected port tends to interfere with reflections from the back side of the plate.
Note 2:
I'm aware of Glan-Laser Calcite, Glan-Thompson and Glan-Taylor polarizers, but since they are, similar to PBS/NPBS cubes, two cemented prisms, I personally suspect this cement layer to be the reason. I would like to know if anybody has experience with them at high power.</p>
|
<p>Well, do we know whether the high-powered laser is itself polarized or not? Using any ordinary 50-50 beam splitter cube at 20 Watts is likely to create problems with damage to the polarizing splitter medium of such a cube.</p>
<p>It is possible to make beam splitter cubes, especially at well-controlled wavelengths, using a frustrated-TIR (FTIR) cube, but they aren't cheap. The FTIR cube operates by means of the evanescent field leakage beyond the face of a TIR (45 deg.) mirror. The leakage field won't propagate beyond the order of a wavelength in air or vacuum, but if you bring another glass prism or medium with index higher than 1.0 near the first mirror face, such that there would not be TIR if they were in optical contact, then the field can enter the new medium, but at a reduced amplitude because of the gap. So the split ratio between transmitted and reflected rays is determined by the small spacing between the two separated glass faces and the wavelength of the laser, along with the index of the prisms at that wavelength. And since your 1064 nm laser wavelength is well known, the splitter prism can be designed for just that wavelength. In principle, you can use a spacing material with the right temperature coefficient of expansion, so that the 50-50 split is temperature compensated; but expect to pay well for such a prism, though people know how to make them.</p>
<p>And then good luck getting 10 Watts into the fiber. You will have to fully understand the laws of optics and coupling lens design to do it, and those lenses won't be cheap either.</p>
| 775
|
optics
|
Mirrors/optics and physical adaptation of the eye
|
https://physics.stackexchange.com/questions/675951/mirrors-optics-and-physical-adaptation-of-the-eye
|
<p>Is it possible to use a clever combination of lenses and mirrors placed between your eye and a screen 1m away from you to make the eye react to the screen as if it were 20m away from you?</p>
<p>Note that I'm not asking whether it's possible to make the light rays travel a total distance of 20m, but whether the eyes will react as if the screen were 20m away.</p>
<p>(Not sure if this is the right StackExchange for this. If it isn't, please let me know where else I should post this, thanks!)</p>
|
<p>You mean aside from using lots of mirrors so that the light actually travels 20 m before it hits your eye?</p>
<p>Yes you can do this, from a lens point of view you just want to change the divergence of the rays from a single point. If it is 1 m away they will be more divergent and require more power to focus by your eye. If it's further away they will be less divergent. From an image size point of view, think of looking through a set of binoculars the wrong way around.</p>
| 776
|
optics
|
Do the Fresnel equations of reflection apply to monochromatic light?
|
https://physics.stackexchange.com/questions/498946/do-the-fresnel-equations-of-reflection-apply-to-monochromatic-light
|
<p>I have a monochromatic light source (wavelength ~ 420 nm), which will be incident on the interface of two different media. Could someone please explain whether the Fresnel equations apply with monochromatic light when estimating the reflectance and transmittance?</p>
<p>Fresnel equation:</p>
<p><span class="math-container">\begin{align}
R_s
&=
\left|
\frac{n_1 \cos(\Theta_\mathrm{in}) - n_2 \cos(\Theta_\mathrm{out})}{n_1 \cos(\Theta_\mathrm{in}) + n_2 \cos(\Theta_\mathrm{out})}
\right|^2
=
\left|
\frac{n_1 \cos(\Theta_\mathrm{in}) - n_2\sqrt{1-\left(\frac{n_1}{n_2}\sin(\Theta_\mathrm{in})\right)^2}}{n_1 \cos(\Theta_\mathrm{in}) + n_2 \sqrt{1-\left(\frac{n_1}{n_2}\sin(\Theta_\mathrm{in})\right)^2}}
\right|^2
\\
R_p
&=
\left|
\frac{n_1 \cos(\Theta_\mathrm{out}) - n_2 \cos(\Theta_\mathrm{in})}{n_1 \cos(\Theta_\mathrm{out}) + n_2 \cos(\Theta_\mathrm{in})}
\right|^2
=
\left|
\frac{n_1 \sqrt{1-\left(\frac{n_1}{n_2}\sin(\Theta_\mathrm{in})\right)^2} - n_2 \cos(\Theta_\mathrm{in})}{n_1 \sqrt{1-\left(\frac{n_1}{n_2}\sin(\Theta_\mathrm{in})\right)^2} + n_2 \cos(\Theta_\mathrm{in})}
\right|^2
\end{align}</span></p>
|
<p>Monochromatic simply means that the light is of constant frequency. The frequency only affects the value of the refractive index for dispersive media; it has no effect on the formulae for the Fresnel coefficients. If your monochromatic light is unpolarized, then you need to average over all possible polarizations to properly estimate the reflectance.</p>
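As a numerical sketch of this, here is a short Python snippet evaluating the Fresnel reflectances with cos(Θ_out) eliminated via Snell's law, and averaging the two polarizations for unpolarized light as the answer suggests. The indices n1 = 1.0, n2 = 1.5 (air to glass) are illustrative values, not from the question.

```python
import math

def fresnel_reflectance(n1, n2, theta_in):
    """R_s and R_p at a planar interface; theta_in in radians.

    Snell's law gives cos(theta_out) = sqrt(1 - (n1/n2 * sin(theta_in))**2),
    valid below the critical angle.
    """
    cos_i = math.cos(theta_in)
    sin_t = (n1 / n2) * math.sin(theta_in)
    cos_t = math.sqrt(1.0 - sin_t ** 2)
    Rs = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    Rp = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return Rs, Rp

# Air -> glass (illustrative indices) at normal incidence:
# both polarizations give R = ((n1 - n2)/(n1 + n2))^2 = 4%.
Rs, Rp = fresnel_reflectance(1.0, 1.5, 0.0)
R_unpolarized = (Rs + Rp) / 2  # average over polarizations for unpolarized light
```

The wavelength never enters except through the values of n1 and n2, which is the point of the answer.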
| 777
|
optics
|
Optical Tweezers: Trap Stiffness for Calculating External Force
|
https://physics.stackexchange.com/questions/679449/optical-tweezers-trap-stiffness-for-calculating-external-force
|
<p>I've been reading <a href="https://www.osapublishing.org/aop/fulltext.cfm?uri=aop-13-1-74&id=448989" rel="nofollow noreferrer">this article</a> about calibrating optical tweezers (finding the trap stiffness <span class="math-container">$\kappa$</span>). Around the end of section 2.2 it says</p>
<blockquote>
<p>Once an optical tweezers is calibrated, a constant and homogeneous external force <span class="math-container">$F_{ext,x}$</span> shifts the equilibrium position of the trap. The value of the force can be obtained as
<span class="math-container">$$F_{ext,x}=\kappa_{x}\Delta x_{eq}$$</span>
where <span class="math-container">$\Delta x_{eq}$</span> is the average particle displacement from the original equilibrium position without the external force.</p>
</blockquote>
<p>I'm confused why there is a constant external force that's shifting the equilibrium position of the trap. As I understood, the trapped particle should be in an equilibrium position where the force (or rather momentum) of the laser beam is equal to the force of gravity acting on the particle.</p>
|
<p>Once you have your optical trap calibrated and stable, you want to do something useful with it. Maybe you wish to measure flow-induced drag; this is an external force. Maybe you wish to measure the stiffness of a molecule that you'll stretch between the particle and a rigid substrate; this is an external force. Maybe you wish to measure the pulling force of a transmembrane receptor of a biological cell; this is an external force. Maybe you wish to weigh the particle by its Brownian motion; this depends on the effective stiffness of its equilibrium position, and temperature acts as a driving force for motion around that position. As you note, gravity has already been corrected for in the calibration process, but there are many other forces of interest!</p>
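The force relation quoted in the question is just Hooke's law for the trap; a one-line numerical sketch, with an illustrative stiffness and displacement (these numbers are typical orders of magnitude, not taken from the article):

```python
# External force from a calibrated trap: F_ext = kappa * dx (Hooke's law).
# Illustrative numbers: kappa = 0.05 pN/nm is a typical trap stiffness,
# dx = 20 nm a typical shift of the equilibrium position under load.
kappa_pN_per_nm = 0.05
dx_nm = 20.0
F_ext_pN = kappa_pN_per_nm * dx_nm  # about 1 pN, a single-molecule-scale force
```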
| 778
|
optics
|
How would one characterize intensity of diffuse scattering as a function of roughness
|
https://physics.stackexchange.com/questions/208251/how-would-one-characterize-intensity-of-diffuse-scattering-as-a-function-of-roug
|
<p>How would one even characterize the roughness of a material experimentally, using statistical analysis or a set of equations?
And how would I relate the roughness to the intensity of light scattering?</p>
|
<p>To answer the second part: it's almost always done empirically by measuring a sample of the surface. The standard equation is known as the <a href="https://en.wikipedia.org/wiki/Bidirectional_reflectance_distribution_function" rel="nofollow">bidirectional reflectance distribution function</a>, which one uses to estimate the scattered power into a given angle based on the incoming angle of the light.</p>
<p>Surface roughness and its parametrization is a science unto itself. The behavior of the surface depends on the material type and on the "shape" of the roughness. For a couple examples, a perfectly regular set of grooves is known as a diffraction grating, while a randomized-polished surface is described by "Scratch and Dig," denoting length and depth parameters of the irregularities. But there are other surfaces, such as "gold black," where gold is deliberately deposited as microscopic spheres of random location and size so as to produce a surface with almost zero reflectivity.</p>
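To illustrate how a BRDF is used once measured, here is a minimal sketch with the simplest possible model, a perfectly diffuse (Lambertian) surface whose BRDF is the constant albedo/π; real rough surfaces require a measured, angle-dependent BRDF as the answer describes. The numbers are illustrative.

```python
import math

# Minimal BRDF usage: for a Lambertian surface f_r = albedo / pi (constant),
# and the reflected radiance is f_r * E * cos(theta_in), where E is the
# irradiance of the incoming beam and the cosine accounts for its projected area.
def lambertian_brdf(albedo):
    return albedo / math.pi

def reflected_radiance(albedo, irradiance, theta_in):
    return lambertian_brdf(albedo) * irradiance * math.cos(theta_in)

# Illustrative values: 50% albedo, unit-free irradiance 100, 60 deg incidence.
L = reflected_radiance(albedo=0.5, irradiance=100.0, theta_in=math.radians(60))
```

For a general surface one replaces the constant `f_r` with a tabulated function of both incoming and outgoing angles — that table is what is measured empirically.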
| 779
|
optics
|
Spherical wave solution for paraxial equation in far field region
|
https://physics.stackexchange.com/questions/685317/spherical-wave-solution-for-paraxial-equation-in-far-field-region
|
<p>I am reading this <a href="https://doi.org/10.1078/0030-4026-00177" rel="nofollow noreferrer">paper</a>. It considers the far-field region for the paraxial equation, <span class="math-container">$(\partial^2_x+\partial^2_y-2ik\partial_z)u(x,y,z)=0$</span>. Within the Rayleigh range, the solution will be described by Gaussian beams.</p>
<p>However, in the far-field region, the beams will diverge and will become spherical-like waves. It is assumed that the center of curvature of such a spherical wave is located at the origin (x,y,z)=(0,0,0). Then it reads the solutions may be factored as <span class="math-container">$$u(x,y,z)=\left \{\frac 1 z \exp \left [ -i \frac k {2z} (x^2+y^2)\right ] \right \} w(x,y,z) ,$$</span>where the factor in braces corresponds to the transverse distribution of a paraxial spherical wave whose
amplitude is given by the function <span class="math-container">$w(x,y,z)$</span>. See Eq.(2) in the above paper.</p>
<p>I know the spherical wave diminishes as <span class="math-container">$1/r$</span>, but I do not understand the exponential term <span class="math-container">$\exp \left [ -i \frac k {2z} (x^2+y^2)\right ] $</span>. What does it represent?</p>
|
<p>It's a binomial expansion approximation to the <span class="math-container">$r$</span> in <span class="math-container">$e^{ikr}$</span>.
<span class="math-container">$$
r= \sqrt{x^2+y^2+z^2} = z\sqrt{ 1+ (x^2+y^2)/z^2}\\
\approx z\left(1+ \frac 12 (x^2+y^2)/z^2+\dots\right)
= z+ \frac 1{2z}(x^2+y^2)+ \ldots
$$</span>
The approximation is acceptable as long as one is much closer to the <span class="math-container">$z$</span> axis than to the origin, i.e. <span class="math-container">$x^2+y^2\ll z^2$</span>. This is the essence of the paraxial approximation, after all.</p>
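A quick numerical check of the expansion, with illustrative numbers well inside the paraxial regime:

```python
import math

# Exact distance vs. its paraxial (binomial) approximation r ≈ z + (x²+y²)/(2z).
x, y, z = 1.0, 2.0, 100.0  # transverse offsets much smaller than z
r_exact = math.sqrt(x ** 2 + y ** 2 + z ** 2)
r_paraxial = z + (x ** 2 + y ** 2) / (2.0 * z)
error = abs(r_exact - r_paraxial)
# The first neglected term is of order (x² + y²)² / (8 z³) ≈ 3e-6 here,
# so the approximation is good to a few parts in 1e8 of r.
```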
| 780
|
optics
|
Can a light beam be used as a wire for telephony?
|
https://physics.stackexchange.com/questions/22382/can-a-light-beam-be-used-as-a-wire-for-telephony
|
<p>I've heard that Edison (maybe) invented a way to talk over a light beam. Is this true and how do you modulate the light beam?</p>
|
<p><a href="http://en.wikipedia.org/wiki/Heliograph" rel="nofollow">Heliographs</a> permit communication over large distances with visible light, albeit with a rather low modulation frequency. It seems fairly clear that if you simply increase the switching rate you can create enough bandwidth to carry voice. Radio is one option, as Ron has already pointed out; however, there are many extant technologies that use light around the visible region for communication, such as the through-space <a href="http://en.wikipedia.org/wiki/Infrared_Data_Association" rel="nofollow">infrared links</a> briefly popular on computers and PDAs, as well as <a href="http://en.wikipedia.org/wiki/Optical_fiber" rel="nofollow">fibre optics</a>, which are ubiquitous in long-range communication and can also be seen in the household in the form of <a href="http://en.wikipedia.org/wiki/TOSLINK" rel="nofollow">toslink connections</a> for high definition audio. In this case, the light is constrained within the fibre by total internal reflection.</p>
<p>High frequency modulation of light is simple with modern LEDs and laser diodes.</p>
| 781
|
optics
|
Moon landing: artificial generation of parallel beams of light
|
https://physics.stackexchange.com/questions/665059/moon-landing-artificial-generation-of-parallel-beams-of-light
|
<p>In <a href="https://youtu.be/dWBYAxhH3u4" rel="nofollow noreferrer">this video</a> of Adam Ruins Everything, Adam tries to debunk the claim that the moon landing was fake, using as a central argument that we couldn't have produced a parallel beam of white light to mimic sunlight with computer graphics or lasers.</p>
<p><strong>But can't we easily produce parallel beam of light using a parabolic mirror and placing the light source at the focal point?</strong></p>
<p><strong>Am I missing something? Or are the makers of this video and the scientist who features in it unaware of this?</strong></p>
<p>Note: I'm not arguing about moon landing being fake, I'm only arguing that this particular argument of parallel light beams is invalid.</p>
|
<p><strong>Tl/Dr:</strong> Adam's handling is certainly incomplete, but there are interesting threads to tug on which were not covered in the video -- particularly in dealing with just how <em>massive</em> such a light source must be.</p>
<p>This is an interesting one. I don't think Adam and his guest covered everything about the forensics. If you had a parabolic mirror the size of the set, you could indeed generate parallel light across the entire scene simply by putting a light source at the focus. But how large does it have to be?</p>
<p>The moon landing footage also demonstrates the expected parallax. Things in the background move slower than things in the foreground, and at the expected rate. This gives strong indications as to the size of the set needed. A matte painting behind the actors would not generate the expected parallax (you can see this in old movies if the camera moves in any scene with a matte).</p>
<p>Which means the set has to be pretty honking big. It has to be a pretty big parabolic reflector.</p>
<p>So where's the focus? We can put the light source really close (making a really strongly curved parabolic shape), or really far (making it more and more like a plain ol' sun). The sound stage can be smaller if you can put it really close. But there's a catch. Parabolic reflectors do not light a scene evenly. They light the centers much more strongly. The light source emits light evenly in all directions (spherically), but the parts of the mirror on the edge see less of that light. They are at a very oblique angle. You can observe this effect on your own with plain ol' sunlight and a flat surface (like a can lid). As you rotate the surface, you'll see it reflects more light when it is oriented such that the sunlight hits it dead on, and reflects almost no light when it's lit on-edge (you'll get some lighting from the sky or from the ceiling, if you do it indoors).</p>
<p>This means that there would be a very obvious uneven lighting. This would have to be addressed with a very special gel on the front of the light, which dims the light in the middle.</p>
<p>At this point I must admit that I have not done the equations, so I cannot prove that the lighting theory here is debunked. But I do point out that the closer the light is (shorter focal length on the parabolic mirror), the more raw light intensity is needed to sufficiently light the edges. This means you need a <em>very</em> bright light. The shorter the focal length, the worse the issue will be.</p>
<p>Go grab a flashlight with a standard parabolic reflector (one that can focus the light into a decently small spot). Look at the light pattern. Imagine lighting the entire scene with only the dimmest fringe of the light, perhaps by printing out a clever pattern on transparency paper and putting it in front of the light.</p>
<p>Now research just how bright studio lights had to be around 1970 to get a good picture quality. Think about how bright that light has to be.</p>
<p>It's not a complete debunking, but it is a direction you can take to analyze just what an astonishing lighting system it would have to be. Issues like cooling start to become complicated. You certainly couldn't make a gel, as lighting technicians think of it today. It'd burn to a crisp. You'd probably have to make it out of metal, and it's fun to think about the technical challenge of making nice smooth gradients using holes in metal.</p>
<p>You can also compare the brightness required to the beam that is currently considered the brightest light on the Earth: the <a href="https://en.wikipedia.org/wiki/Luxor_Las_Vegas#Luxor_Sky_Beam" rel="nofollow noreferrer">skybeam on the Luxor</a>. See how that compares to the lighting requirements for this set, and ask whether something that bright could be built in the 1970 timeframe (Xenon arc lamps have undergone continual improvement). Remember that the skybeam is solidly situated on an unmoving platform, while this "sunlight" would have to move from shot to shot.</p>
<p>Modern <a href="https://homestudioexpert.com/how-many-lumens-do-you-need-for-video-lighting/" rel="nofollow noreferrer">recommendations</a> are on the order of 3000 lumens per 100 square feet of stage. The Luxor skybeam is 13650000 lumens, so it could light 455,000 square feet. For scale, a Manhattan city block is about half of that. However, those calculations are done without accounting for any adjustment of intensity that is needed to get the light even. If you lose 90% of your light in this way, suddenly you're only covering 45,000 square feet. Now we're talking about lighting a football field -- and using a mobile-mounted Luxor skybeam to do it! And even worse, those were modern lighting recommendations. Professional 1970's cameras were extremely weak compared to the cameras available to the modern home videographer.</p>
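The arithmetic in the preceding paragraph can be reproduced in a couple of lines (same figures as above; the 90% loss is the answer's assumption about evening out the parabolic hot spot):

```python
# How much stage the Luxor skybeam could light at ~3000 lumens per 100 ft².
skybeam_lumens = 13_650_000
lumens_per_sqft = 3000 / 100                      # 30 lm per square foot
coverage_sqft = skybeam_lumens / lumens_per_sqft  # 455,000 ft² of stage
# Assume ~90% of the light is thrown away evening out the hot spot:
coverage_after_loss_sqft = 0.10 * coverage_sqft   # roughly 45,000 ft²
```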
<p>You can resolve some of these issues by using a light with a longer focus, but now you're increasing the size requirements of your set. Now we have to start questioning where such a building could have been made. Certainly not Hollywood. It would have to be elsewhere, which opens up a whole different thread of conspiracy. Remember that the lighting needs to be able to come from any direction the "sunlight" could have come from. It'd be a big building. Yes, we made Los Alamos in secret... but it's up to you to decide if this is the same thing, warranting the same level of secrecy.</p>
<p>So I think you're right, that Adam fails to properly debunk this myth. However, it is less clear that his guest, a forensic analyst, has failed to debunk the myth. There are several directions that could have been taken, they simply were not addressed in the 5 minute video. I suppose we could email him and ask him for more details.</p>
| 782
|
optics
|
Transmittance through a reversed one-way mirror
|
https://physics.stackexchange.com/questions/509209/transmittance-through-a-reversed-one-way-mirror
|
<p>I am looking to use a half-silvered one-way mirror to allow as much light and solar energy as possible into a room that has highly reflective surfaces inside, and I want to keep the energy from reflecting back outside: trap as much light/energy as I can in the room. That's why I am wondering what percentage of sunlight would be transmitted through a "reversed" one-way mirror, with the clear glass side of the mirror facing outside and the half-silvered side facing inside the room. And to what extent might this arrangement reflect solar light/energy back into the room?</p>
|
<p>There is <a href="https://physics.stackexchange.com/questions/101380/how-does-a-one-sided-glass-work/101387#101387">no such thing</a> as a <a href="https://physics.stackexchange.com/questions/498482/optical-systems-that-make-a-ray-of-light-travel-a-different-way-to-and-from/">one</a>-<a href="https://physics.stackexchange.com/questions/162678/is-there-something-equivalent-to-a-diode-for-light/">way</a> <a href="https://physics.stackexchange.com/questions/179171/can-a-one-way-mirror-effect-be-reversed">mirror</a>. </p>
<p>If you want a mirror to reflect 99% of the light that tries to get out of the cavity, then that mirror will also need to reject 99% of the light that tries to get in.</p>
| 783
|
optics
|
Focal length calculation not matching intuition
|
https://physics.stackexchange.com/questions/700213/focal-length-calculation-not-matching-intuition
|
<p>I'm a computer engineering student working on my senior project, which involves some optics. Ill-advised though it may be, I'm contemplating building a telescope based on a simple achromatic refractor design for use in an automated astrophotography rig. I'm trying to work out how to build a compact telescope such as commercial manufacturers create, and still get an image that fills the sensor of a digital camera.
The entire problem can be distilled to a simple example:</p>
<p>Suppose we're trying to get an image of the Orion Nebula, at a distance of 1.334e3 light years away. If we want to fill the entire sensor, what should our focal length be if the distance from the straightening lens to the sensor is 44mm?</p>
<p>From my EM course I recall that <span class="math-container">$\frac{1}{F} = \frac{1}{D} + \frac{1}{I}$</span> where <span class="math-container">$F$</span> = Focal Length, <span class="math-container">$D$</span> = Object Distance, and <span class="math-container">$I$</span> = Image Distance.</p>
<p>For large distances I can rewrite this as <span class="math-container">$\frac{1}{F} = \lim_{D\to\infty} \left(\frac{1}{D} + \frac{1}{I}\right) \Longrightarrow \frac{1}{F} = \frac{1}{I}$</span></p>
<p>This doesn't fit with my intuition. I'm an amateur photographer, and I don't know much about focal length except that if I want to take an image of something at a great distance I use a lens with a focal length in the 100s of mm, and that's for images of things within a mile or so, not light years. What am I missing in all of this?</p>
|
<p>The question lacks some information. Specifically, the angular size of the Orion nebula, and the linear diameter of the camera sensor.</p>
<p>The Orion nebula is about <span class="math-container">$1$</span> arc degree in size (Google). This means that the angular size of the image will also be about <span class="math-container">$1$</span> arc degree in size, as seen from the lens.</p>
<p>If the sensor is <span class="math-container">$44$</span> mm from the lens, then the focal length of the lens must be the same <span class="math-container">$44$</span> mm. An object at infinity will always form a real image at the focal distance.</p>
<p>Combining these two parameters, the size of a <span class="math-container">$1^o$</span> image at <span class="math-container">$44$</span> mm is <span class="math-container">$S$</span>:<span class="math-container">$$S=44 \times 1 \times \frac{\pi}{ 180}=0.768 \text{ mm}$$</span>
where the <span class="math-container">$\frac{\pi}{180} $</span> changes degrees to radians.</p>
<p>So, the linear size of the object is irrelevant (as long as it is essentially an infinite distance away). Angular size of the object, focal length of the lens and sensor size are all related: pick two, and the third is determined.</p>
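The answer's numbers in one line: the linear image size of an object at infinity is S = f·θ, with θ in radians.

```python
import math

# Image size of an object at infinity: S = f * theta (theta in radians).
f_mm = 44.0      # focal length / sensor distance from the answer
theta_deg = 1.0  # angular size of the Orion nebula, roughly
S_mm = f_mm * math.radians(theta_deg)  # about 0.77 mm on the sensor
# A 10x longer focal length gives a 10x larger image, which is why
# astrophotographers reach for focal lengths of hundreds of mm.
```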
| 784
|
optics
|
Why is distance to an object inversely proportional to image size when using perspective projection?
|
https://physics.stackexchange.com/questions/700720/why-is-distance-to-an-object-inversely-proportional-to-image-size-when-using-per
|
<p><a href="https://www.cse.unr.edu/%7Ebebis/CS791E/Notes/PerspectiveProjection.pdf" rel="nofollow noreferrer">In this PDF</a>, it says the following about perspective projection:</p>
<blockquote>
<p>The distance to an object is inversely proportional to its image size.</p>
</blockquote>
<p>What causes this equation? Why is it inversely proportional and not something else?</p>
|
<p>I think that the title of your post should be, <em>Why is <strong>distance to an object</strong> inversely proportional to image size when using perspective projection?</em></p>
<p><a href="https://i.sstatic.net/QtCKb.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QtCKb.jpg" alt="enter image description here" /></a></p>
<p>Triangle <span class="math-container">$ABC$</span> and <span class="math-container">$A'B'C$</span> are similar so <span class="math-container">$\dfrac zf = \dfrac{r}{r'} \Rightarrow z \propto \dfrac {1}{r'}\Rightarrow \text {object distance} \propto \dfrac {1} {\text{image size}}$</span> if <span class="math-container">$r$</span> and <span class="math-container">$f$</span> are kept constant.</p>
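The similar-triangles relation can be sketched numerically; the focal distance and object size below are arbitrary illustrative values.

```python
# Pinhole projection: z / f = r / r'  =>  r' = f * r / z,
# so image size is inversely proportional to object distance z.
f = 0.05  # distance from pinhole to image plane (illustrative)
r = 2.0   # object size (illustrative)
image_sizes = [f * r / z for z in (1.0, 2.0, 4.0)]
# Doubling the distance halves the image size.
```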
| 785
|
optics
|
"There aren’t seven colors in the rainbow" is this claim appropriate?
|
https://physics.stackexchange.com/questions/496318/there-aren-t-seven-colors-in-the-rainbow-is-this-claim-appropriate
|
<p>This <a href="https://www.quora.com/Why-are-there-7-colours-in-the-rainbow-while-there-are-only-3-or-4-colours-that-appear-to-us" rel="nofollow noreferrer">post</a> says</p>
<blockquote>
<p><code>There aren’t seven colors in the rainbow</code>, or any other specific number; Newton described seven colors in large part because of the supposed “special” qualities associated with that number. In reality, the spectrum of visible light is continuous, and the number of distinguishable shades within the total range depends on the individual’s sensitivities. Color is simply a perception, after all; in a very real sense, it’s “all in your head.”</p>
</blockquote>
<p>Actually, the colors could be defined by frequency and wavelength, for example, the <a href="https://en.wikipedia.org/wiki/Visible_spectrum#Spectral_colors" rel="nofollow noreferrer">wavelength</a> of red ranges 625–740 nm</p>
<p>So, "there aren’t seven colors in the rainbow", is this claim appropriate?</p>
<p>If yes, how does that fit with <a href="https://en.wikipedia.org/wiki/Dispersive_prism" rel="nofollow noreferrer">Sir Isaac Newton's prism experiment</a>, where the dispersion of white light into colors by a prism led Sir Isaac Newton to conclude that white light consists of a mixture of different colors?</p>
|
<p>Yes, the seven colours are an arbitrary classification. Our eyes are sensitive to three colour ranges, see <a href="https://en.wikipedia.org/wiki/Color_vision" rel="nofollow noreferrer">Color vision</a>.</p>
| 786
|
optics
|
Can electro-optic amplitude modulator generate sidebands?
|
https://physics.stackexchange.com/questions/572353/can-electro-optic-amplitude-modulator-generate-sidebands
|
<p>Like a phase modulator, can an electro-optic amplitude modulator generate sidebands?</p>
<p>I'm really confused now...</p>
<p>Please help me out.</p>
| 787
|
|
optics
|
Can single-mode fiber collect incoherent light?
|
https://physics.stackexchange.com/questions/704813/can-single-mode-fiber-collect-incoherent-light
|
<p>I am currently studying the rotational Doppler effect of partially coherent light. I have theoretically calculated that the cross-spectral density of completely incoherent light coupled into a fiber is zero, but my instructor doesn't think so.</p>
| 788
|
|
optics
|
I have been told that in an xyz plane.The normal on incident point is given,how do I visualise a normal which is like this
|
https://physics.stackexchange.com/questions/708118/i-have-been-told-that-in-an-xyz-plane-the-normal-on-incident-point-is-given-how
|
<p>Suppose I have a ray of light incident on a plane mirror in the general form of <span class="math-container">$$ai+bj-ck$$</span>. I have a normal that is along <span class="math-container">$$\frac{i+j}{\sqrt{2}}$$</span>. The question asks me to find the unit vector, but that is not what I'm asking. My question is: where should I visualise a vector like this? If it were, say, <span class="math-container">$$\frac{i+j}{2}$$</span>, or a vector in a single direction with no fractions, it would have been easier; but I'm finding it hard to visualise <span class="math-container">$$\frac{i+j}{\sqrt{2}}$$</span>. On a graph or a diagram, where would this lie?
(i, j and k are unit vectors)</p>
|
<p>I'm not sure what you're asking, but: <span class="math-container">$$\frac{\hat i + \hat j }{\sqrt{2}} = \frac{1}{\sqrt{2}} \hat i + \frac{1}{\sqrt{2}} \hat j$$</span></p>
<p><a href="https://i.sstatic.net/7kayY.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7kayY.jpg" alt="enter image description here" /></a></p>
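To make the answer's point concrete: (i + j)/√2 is just the vector (1, 1, 0) rescaled to unit length. A tiny numerical sketch:

```python
import math

# (i + j)/sqrt(2) is the vector (1, 1, 0) scaled to unit length: it lies in
# the xy-plane along the diagonal, at 45 degrees to both the x and y axes.
v = (1.0, 1.0, 0.0)
norm = math.sqrt(sum(c * c for c in v))        # sqrt(2)
n_hat = tuple(c / norm for c in v)             # (1/sqrt(2), 1/sqrt(2), 0)
length = math.sqrt(sum(c * c for c in n_hat))  # 1.0: why we divide by sqrt(2)
```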
| 789
|
optics
|
Can you see an image created by projector with bare eyes?
|
https://physics.stackexchange.com/questions/636236/can-you-see-an-image-created-by-projector-with-bare-eyes
|
<p>The question is as simple as I described in the title. Can you see an image created by a projector with your bare eyes? If so, where should you stand with respect to projector?</p>
|
<p><strong>Don't try this because the light will be too bright to be safe for your eyes.</strong></p>
<p>The projector works by focusing an image onto the screen. If you remove the screen, the image still exists, and you can view it if you are situated where the light can reach your eyes. The screen normally reflects that light to your eyes. <strong>If you remove the screen and stand behind where the screen normally sits you will be able to see the image, but you will only see a very small portion of the image.</strong> The only light visible to you would be coming directly from the projector lens, so you'd be viewing a very small portion of the total image. You could test this by putting a very, very small figure at the center of your slide, standing directly behind where the image would normally form on the screen, and focusing your eyes on where you would normally see it if a semi-translucent screen were between you and the projector. Again, take my word for this and don't try it unless you don't care about damaging your eyes. Alternatively, you could try using a magnifying lens and object situated as in the ray diagram below.</p>
<p><a href="https://i.sstatic.net/WSMJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WSMJj.png" alt="enter image description here" /></a></p>
| 790
|
optics
|
Is the dimming of a phone screen under bright sunlight an example of destructive interference of visible light?
|
https://physics.stackexchange.com/questions/699093/is-the-dimming-of-a-phone-screen-under-bright-sunlight-an-example-of-destructive
|
<p>I have heard things like polarized phone screens being attributed to this effect but this wouldn't explain this phenomenon for non polarized phone screens under bright sunlight. Am I missing something?</p>
|
<p>No. This is just because the difference between white pixels and black pixels is overwhelmed by the sunlight, which affects all pixels equally, so your eye can't tell the difference between white pixels and black pixels any more.</p>
<p>Interference of light waves (whether constructive or destructive) is not commonly experienced in everyday life. Perhaps the simplest way to see it is to shine a laser at a wall (or other surface) and observe the speckle pattern, or at a CD and observe diffraction.</p>
| 791
|
optics
|
Why does looking through an LCD panel make things blurry?
|
https://physics.stackexchange.com/questions/242551/why-does-looking-through-an-lcd-panel-make-things-blurry
|
<p>When you take the back off an LCD panel and remove the backlight, you are left with a translucent panel. But when I look <em>through</em> the panel, even in its "clear" state, objects look blurry. Why is this? Shouldn't they just be dimmer, as the front polarizer blocks half of the light? Is diffraction to blame?</p>
|
<p>You want the backlight to be diffuse and even. That is not simple. If you look at a lamp of any kind you don't just see an evenly bright area. A diffuser is needed to make a uniformly bright spot. An everyday example of such a diffuser is fog. Or your hand if you look at a red light through it. Another good example is a lightbox, a box with lamps in it and ground glass on it, that people used to look at negatives or photographic slides. </p>
| 792
|
optics
|
A mysterious phase difference
|
https://physics.stackexchange.com/questions/706047/a-mysterious-phase-difference
|
<p>Today my teacher was discussing the Poisson spot and gave a simple explanation for why there must be a bright spot on the axis of the disc when illuminated with parallel monochromatic light.</p>
<p>What he said was:</p>
<p>Say we instead have a circular aperture in an infinite plane, we know there must be a bright spot at the centre/axis (The Airy Disk pattern). Now, instead of making a circular hole in the plane, we remove the plane itself to get an opaque disk.</p>
<p>So, we have 2 systems one of an opaque disk and one of a circular aperture of the same dimensions in an infinite plane. If we superimpose both these systems, the light ceases to exist as there is no longer an opening. And since only bright can cancel bright, the center of our first system (opaque disk) must be brightly illuminated to cancel the airy disk pattern.</p>
<p>Now my issue is: how did a phase difference of <span class="math-container">$\pi$</span> come about between the two systems to induce destructive interference? Is it just because of complementarity, or is there something more fundamental going on?</p>
<p>There is certainly a path difference between them, but how can we be sure it is <span class="math-container">$(n+\frac{1}{2}) \lambda$</span>? Also, how are the intensities the same?</p>
|
<p>Your teacher was referring to Babinet's principle. It is often a good idea to fix your ideas with an actual computation. You have an incident light field <span class="math-container">$\phi_i$</span> on the plane. As it crosses the plane, it gets multiplied either by <span class="math-container">$h_a=\mathbb{1}[r\leq R]$</span> in the case of the aperture of radius <span class="math-container">$R$</span> (origin at the center of the aperture), or by <span class="math-container">$h_d=1-h_a$</span> in the case of the disk, due to the complementary nature of the two screens.</p>
<p>The resulting diffraction pattern is described mathematically by Fraunhofer diffraction, which amounts to a Fourier transform and is crucially linear. This means that your final field is <span class="math-container">$\phi_f=\mathcal F (h\phi_i)$</span>, so for the disk:
<span class="math-container">$$
\phi_f=\mathcal F (\phi_i)-\mathcal F (h_a\phi_i)
$$</span>
The first term is what you would get without the plane, the overall forward beam, while the second term is the field diffracted by the aperture with an opposing phase, which came from the complementarity. For example, in the case of a monochromatic plane wave of normal incidence, <span class="math-container">$\phi_i$</span> is constant, the first term would be a Dirac peak at the origin and the second term your usual Airy diffraction pattern.</p>
<p>Hope this helps and tell me if you need more details.</p>
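As a sanity check, the cancellation can be verified numerically by treating the Fraunhofer pattern as a 2-D Fourier transform, as above. This sketch uses NumPy's FFT; the grid size and disk radius are arbitrary choices.

```python
import numpy as np

# Numerical illustration of Babinet's principle in the Fraunhofer regime,
# where the far field is (up to constants) a 2-D Fourier transform of the
# field just after the screen. Grid size and radius are arbitrary choices.
N, R = 256, 20
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
h_a = (x**2 + y**2 <= R**2).astype(float)  # circular aperture
h_d = 1.0 - h_a                            # complementary opaque disk

far = lambda h: np.fft.fftshift(np.fft.fft2(h))
F_free, F_ap, F_disk = far(np.ones((N, N))), far(h_a), far(h_d)

# Linearity: disk field = free-space beam minus aperture field.
assert np.allclose(F_disk, F_free - F_ap)

# Away from the forward direction F_free is (numerically) zero, so the
# disk and aperture give equal intensities with opposite phase there.
assert np.allclose(abs(F_disk[0, 0]), abs(F_ap[0, 0]))
```

The second assertion answers the "same intensities" question: everywhere except the forward beam, the disk and aperture fields differ only by a sign.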
| 793
|
optics
|
Why the real image is always inverted while virtual image is always erect?
|
https://physics.stackexchange.com/questions/714724/why-the-real-image-is-always-inverted-while-virtual-image-is-always-erect
|
<p>While studying optics and the ray diagrams, I observed that the real images were always inverted and virtual were always erect. I have read several articles but haven't got any convincing answer. Someone please help !</p>
|
<p>For a counterexample, for a thin lens:</p>
<p><span class="math-container">$$\frac{1}{p'}-\frac{1}{p}=\frac{1}{f'}
\quad\text{with}\quad
\begin{cases}
p=\overline{OA} & \text{object}\\
p'=\overline{OA'} & \text{image}
\end{cases}$$</span></p>
<p>The magnification ("zoom factor") is:</p>
<p><span class="math-container">$$\gamma=\frac{p'}{p}$$</span></p>
<p>If you want to get a real image (<span class="math-container">$p'>0$</span>) but not inverted (<span class="math-container">$\gamma>0$</span>), then the object has to be virtual (<span class="math-container">$p>0$</span>). The first equation shows that it is possible. For example for a convergent lens (<span class="math-container">$f'>0$</span>), it can work with <span class="math-container">$p'<p$</span> (both object and image on the right-hand side, the image closer to the lens than the object).</p>
<p>Thick lenses, mirrors or more complex systems have their own laws similar to this one, so you have to study them on a case-by-case basis.</p>
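A quick numerical check of the counterexample, using the sign convention above (signed distances measured from the lens, positive to the right). The focal length and object distance are arbitrary illustrative numbers.

```python
# Thin-lens check using the sign convention from this answer:
# 1/p' - 1/p = 1/f', distances measured from the lens, positive to the right.
# The focal length and object distance are arbitrary illustrative numbers.
def image_distance(p, f_prime):
    """Signed image distance p' for signed object distance p."""
    return 1.0 / (1.0 / f_prime + 1.0 / p)

f_prime = 10.0  # convergent lens, f' > 0
p = 30.0        # virtual object: p > 0 (located to the right of the lens)

p_prime = image_distance(p, f_prime)  # ~7.5: real image (p' > 0)
gamma = p_prime / p                   # ~0.25: upright (gamma > 0), and p' < p
```

This realizes exactly the case described in the answer: a real, non-inverted image from a virtual object, with the image closer to the lens than the object.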
| 794
|
optics
|
Comparison of fabric bluing and blue glaciers
|
https://physics.stackexchange.com/questions/714740/comparison-of-fabric-bluing-and-blue-glaciers
|
<p>In learning about the effects of fabric bluing (it seems to mask yellowed fabric using a small amount of blue dye) I've wondered how this compares with the also-perceived blue of glaciers.</p>
|
<p>I think they are different phenomena. The blue of glaciers is due to scattering and absorption of light: the longer red wavelengths are absorbed, while the blue light, absorbed much less, continues to scatter forward.</p>
<p>For the fabric, bluing adds a small amount of blue dye, and blue is the complementary color of yellow. If a fabric is old or has yellowed because the blue portion of the spectrum is being absorbed, adding a small amount of blue dye increases the relative amount of blue light reflected, which helps the fabric appear white to the eye again.</p>
| 795
|
optics
|
Partial Internal Reflection
|
https://physics.stackexchange.com/questions/524870/partial-internal-reflection
|
<p>What is partial internal reflection? Please elaborate with examples.</p>
<p>Why does this phenomenon happen?
I searched on the internet but found nothing much. </p>
<p>Is this phenomenon involved in rainbow formation?
I think so because in the formation of a rainbow, TIR can't be involved. The geometry of a rain droplet doesn't allow it. </p>
|
<p>When light goes from medium A to medium B you get a reflection; if medium B is transparent, the reflection is partial (the other part of the light is transmitted into B).
Whether you call it internal or external is arbitrary.
Air is a medium, so ordinary reflection from a glass window is just another case of partial reflection.</p>
<p><a href="https://en.wikipedia.org/wiki/Reflection_(physics)#Reflection_of_light" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Reflection_(physics)#Reflection_of_light</a></p>
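The size of the partial reflection at normal incidence follows from the Fresnel equations; a minimal sketch of the normal-incidence special case, using typical refractive indices:

```python
# Fraction of intensity reflected at normal incidence between two media
# (Fresnel equations, normal-incidence special case). Indices are typical values.
def reflectance(n1, n2):
    """Fraction of intensity reflected going from index n1 into n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

air_to_glass = reflectance(1.0, 1.5)  # ~0.04: about 4% reflected
glass_to_air = reflectance(1.5, 1.0)  # same ~4%: "internal" vs "external"
                                      # only flips the sign inside the square
```

This makes the "arbitrary" point concrete: at normal incidence the internal and external partial reflections have exactly the same strength.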
| 796
|
optics
|
Are TN and IPS displays subject to diffraction?
|
https://physics.stackexchange.com/questions/145949/do-tn-and-ips-displays-are-subject-to-diffraction
|
<p>Sometimes displays with the same resolution, diagonal and aspect ratio appear to have different levels of sharpness.</p>
<p>In optics there is the concept of diffraction, and if you can lower the diffraction the image gets sharper.</p>
<p>Is it possible to get a simple explanation of why displays that appear so similar end up performing so differently?</p>
<p>It's also true that monitors usually have multiple layers on top of the panel, usually glass or plastic, but still, the final result is different, and I would like to know why, and which properties have the greatest impact on the final result.</p>
|
<p>If two displays have the same pixel sizes they will have the same amount of diffraction. The larger changes in image quality will come from a number of factors:</p>
<ol>
<li>The thickness of the display. The thicker the display, the lower the viewing angle</li>
<li>The LC material, this can have impact on the brightness dynamic range, among other factors.</li>
<li>The quality of the backlighting </li>
<li>The display electronics, many cheaper displays can only display a limited number of colours</li>
</ol>
<p>All of these combine to affect the quality of the display.</p>
| 797
|
optics
|
Illuminance Formula
|
https://physics.stackexchange.com/questions/725265/illuminance-formula
|
<p><a href="https://math24.net/optimization-problems-physics.html#example6" rel="nofollow noreferrer">This page</a> says illuminance is
<span class="math-container">$$E=\frac{I}{L^2} \cos \alpha$$</span></p>
<p><a href="https://www.toppr.com/ask/question/a-lamp-is-hanging-along-the-axis-of-a-circular-table-of-radius-r-at/" rel="nofollow noreferrer">This page</a> does something similar, but it ignores the <span class="math-container">$\cos \alpha$</span> factor. Which is the correct formula?</p>
<p>Note: I don't have a physics background. I was looking at optimization problems in Calculus (which is why I came across the first page).</p>
|
<p>A physical formula always applies to a given situation, and it is important to first check whether your situation matches the one this formula is meant for.</p>
<p>The formula without the <span class="math-container">$\cos \alpha$</span> term is meant for a situation <strong>where the light hits the surface in a right angle</strong>. It's a special case of the more general formula, using <span class="math-container">$\alpha = 0°$</span> (because then <span class="math-container">$\cos \alpha$</span> is 1).</p>
<p>Both pages you linked describe the same situation, one where this isn't true, so the <span class="math-container">$\cos \alpha$</span> term is needed.</p>
<p><strong>Nitpicking addition:</strong></p>
<p>The <span class="math-container">$E=\frac{I}{L^2}\cos\alpha$</span> formula applies to angles between -90° and 90°, otherwise you'd get negative illuminations (physical nonsense) out of the formula.</p>
<p>Instead the physically correct result would be 0, as a surface pointing away from a light source gets no light at all.</p>
<p>You see, even the "general" formula still has its limitations, being applicable only under specific circumstances.</p>
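The full behaviour, including the clamp to zero for surfaces facing away from the source, can be sketched as follows (the intensity and distance values are arbitrary illustrations):

```python
import math

# Illuminance on a surface from a source of intensity I at distance L,
# where alpha is the angle between the incoming light and the surface
# normal. Clamped to 0 for surfaces facing away, as discussed above.
# The numeric values are arbitrary illustrations.
def illuminance(I, L, alpha_deg):
    return max(0.0, I / L**2 * math.cos(math.radians(alpha_deg)))

print(illuminance(100, 2.0, 0))    # 25.0  (special case, cos 0 = 1)
print(illuminance(100, 2.0, 60))   # ~12.5 (cos 60 = 0.5)
print(illuminance(100, 2.0, 120))  # 0.0   (facing away from the source)
```

The first call is the special case the second linked page uses; the second is the general formula needed for the optimization problem.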
| 798
|
optics
|
Where did this equation come from ∠I+ ∠E = ∠A+ ∠D?
|
https://physics.stackexchange.com/questions/56624/where-did-this-equation-come-from-%e2%88%a0i-%e2%88%a0e-%e2%88%a0a-%e2%88%a0d
|
<p>$\angle I +\angle E=\angle A+\angle D$</p>
<p>Angle of incidence + angle of emergence = angle of prism (Normally $60^\circ$) + angle of deviation.</p>
<p>If their sum is not equal, we made a personal error in doing the experiment with the prism. Please make sense of this equation. </p>
|
<p>The following diagram shows the prism with the incoming and outgoing light rays.</p>
<p><img src="https://i.sstatic.net/E4rrg.gif" alt="Prism"></p>
<p>If you follow the incident light ray in, it gets bent by an angle $\theta_1 = i- r_1$. If you follow the light ray where it leaves the glass, it gets bent again by an angle $\theta_2 = e - r_2$, so the total deviation is:</p>
<p>$$ \begin{align}
D &= \theta_1 + \theta_2 \\
&= i + e - (r_1 + r_2)
\end{align} $$</p>
<p>For the next step look at the triangle formed by the top of the prism and the light ray, and note that the internal angles must add up to 180°. So:</p>
<p>$$ A + (90 - r_1) + (90 - r_2) = 180 $$</p>
<p>and a quick rearrangement gives:</p>
<p>$$ A = r_1 + r_2 $$</p>
<p>Now substitute for $r_1 + r_2$ in our first equation and we get:</p>
<p>$$ D = i + e - A $$</p>
<p>or:</p>
<p>$$ D + A = i + e $$</p>
| 799
|