url
stringlengths
14
1.76k
text
stringlengths
100
1.02M
metadata
stringlengths
1.06k
1.1k
https://www.physicsforums.com/threads/question-about-turning-a-hemisphere-into-an-equivalent-circle.712553/
Question about turning a hemisphere into an equivalent circle? 1. Sep 25, 2013 goldust I was thinking about this when I was trying to work out a simpler way of finding the volume of a sphere. Suppose we cover a hemisphere with a piece of pliable thin cover. Stretching the cover flat would make a circle. The circumference of the sphere is 2*pi*r. The distance along the surface of the hemisphere from the top to the bottom is a quarter of the circumference of the sphere, or 0.5*pi*r. Is it correct that the radius of the circle after stretching out the cover is also 0.5*pi*r? Many thanks in advance! Last edited: Sep 25, 2013 2. Sep 25, 2013 goldust My bad, I meant turning *the surface of* a hemisphere into an equivalent circle 3. Sep 25, 2013 Staff: Mentor And finding the area of the resulting circle? This won't work. The surface area of the hemisphere is $2\pi r^2$. If you take the thin cover off the hemisphere and stretch it out, it's true that you get a circle of radius $(1/2)\pi r$. The area of that circle is $\pi (\frac{\pi r}{2})^2 = \frac{\pi ^2}{4} \pi r^2 \approx. 2.47 \pi r^2$. I wrote the area in this form to make it easier to compare with the surface area of the hemisphere. The area of the flattened hemisphere skin is larger than the surface area of the hemisphere, because in flattening the skin, you would need to stretch it, which adds area. 4. Sep 25, 2013 goldust Very interesting. Many thanks! Similar Discussions: Question about turning a hemisphere into an equivalent circle?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8461060523986816, "perplexity": 425.8934499043974}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685129.23/warc/CC-MAIN-20170919112242-20170919132242-00266.warc.gz"}
https://blender.stackexchange.com/questions/28438/how-do-i-ensure-a-sequence-of-quaternions-from-matrix-decompose-is-continuous
# How do I ensure a sequence of quaternions from matrix.decompose() is continuous? It seems that blender's Matrix decompose() function can exhibit some numerical "instability". What I mean is that orientation matrices that are relatively close can have significantly different elements in their quaternion matrix that makes interpolated keyframing unusable. The following simple python script will calculate a sequence of matrices representing rotations around the Z axis in a smooth manner. It then decomposes the matrix into a quaternion, and inserts that as a keyframe. When you view the resulting fcurves in blender you can see that the Z value jumps from -1 to 1 about halfway through the animation. import bpy from math import * from mathutils import * def matrix_for_time(t): theta = 2 * pi * t mat = Matrix([[cos(theta), sin(theta), 0], [-sin(theta), cos(theta), 0], [0, 0, 1]]).to_4x4() # for illustration we use rotation about Z axis, # but in the arbitrary case, the orientation could be for a pine cone bouncing down a hill. return mat def mission(obj): res = 36 qs = QuaternionStabilizer() for z in range(res): mat = matrix_for_time(z/res) (loc,quat,scale) = mat.decompose() obj.rotation_quaternion = qs.stabilize(quat) for ai in range(len(quat)): obj.keyframe_insert(frame=z*5, data_path="rotation_quaternion", index=ai) mission(bpy.context.active_object) " How to make a true linear quaternion rotation? " has some screenshots that imply quaternions aren't necessarily discontinuous. Is there a technique for blender's python API to get a sequence of quaternions that can be keyframed and interpolated without discontinuity? For bonus points: link to an article that explains this mathematical oddity and why decomposition works this way. This technique should be usable with arbitrary orientation matrices, because they are calculated from something a little more complex than this simple Z rotation (in my specific case I'm flying along a bezier curve). After doing a little more research I think I have found a technique that will work. Examining the matrices that are computed from the quaternion led me to conclude that q and -q both represent the same orientation, so I compare both q and -q to the previous quaternion orientation and pick whichever one is closest in 4-space. The following udpated code sample illustrates the technique: import bpy from math import * from mathutils import * def mission(obj): res = 36 qs = QuaternionStabilizer() for z in range(res): theta = 2*pi *z / res mat = Matrix( [ [ cos(theta), sin(theta), 0], [ -sin(theta), cos(theta), 0], [ 0,0,1]]).to_4x4() (loc,quat,scale) = mat.decompose() obj.rotation_quaternion = qs.stabilize(quat) for ai in range(len(quat)): obj.keyframe_insert(frame=z*5, data_path="rotation_quaternion", index=ai) class QuaternionStabilizer: def __init__(self): self.old=None def stabilize(self, q): if self.old is None: rval = q else: d1 = (self.old-q).magnitude d2 = (self.old+q).magnitude if (d1<d2): rval = q else: rval = -q self.old = rval return rval mission(bpy.context.active_object) And now my fcurves don't look discontinuous: This will not give good results if your orientations are flailing about wildly, but that's a case of Garbage In Garbage Out. • I don't want to waste your time, but it would be nice if you could explain the math and the code a bit to learn from it :) – p2or Apr 14 '15 at 19:48 • Well, the most advanced math in there is where I compute the Z rotation matrix. That's covered by en.wikipedia.org/wiki/Rotation_matrix#Basic_rotations . The decomposition of quaternions is a pretty heavy subject and is covered by the link in my comment under the original question. I don't fully understand the math behind quaternions myself, but I know enough to recognize some situations where they are being abused. – Mutant Bob Apr 14 '15 at 19:58 • Since the rotated vector p' is q * p * q^-1 and the scalar multiplication by -1 is commutative, q and -q should represent the same rotation. – pink vertex Apr 15 '15 at 18:59 I fiddled around with the Quaternion(axis, angle) constructor and Quaternion.slerp but both gave non-continous results as well. (Quaternion.slerp worked using a third value inbetween though) cos(θ / 2) + sin(θ / 2) * (ux * i + uy * j + uz * k) where θ is the rotation angle and u = (ux, uy, uz) the unit vector for the rotation axis. import bpy import math from mathutils import Quaternion obj = bpy.context.active_object action = obj.animation_data.action def from_axis_angle(axis, angle): half_angle = angle / 2.0 scalar = math.cos(half_angle) factor = math.sin(half_angle) return Quaternion(( scalar, axis[0] * factor, axis[1] * factor, axis[2] * factor )) def set_rot_kf(frame, quat, action): for fcu in action.fcurves: if fcu.data_path == "rotation_quaternion": index = fcu.array_index fcu.keyframe_points.insert(frame, quat[index]) axis = (0.0, 0.0, 1.0) angle = 2.0 * math.pi frame_start = 0 frame_end = 60 dt = frame_end - frame_start frac = 1 / dt for frame in range(dt + 1): quat = from_axis_angle(axis, frame * frac * angle) set_rot_kf(frame_start + frame, quat, action) #or #quat *= dq #where #dq = from_axis_angle(axis, frac * angle) The result looked like this, where you can see the graphs of cos(θ / 2) and sin(θ / 2) from 0 to 2π: • While this technique seems appropriate for axis/angle rotations, I need a solution that works with arbitrary orientation matrices (as I stated in the question: I am driving along a bezier curve) – Mutant Bob Apr 14 '15 at 20:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6334409117698669, "perplexity": 3080.6202757813385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694071.63/warc/CC-MAIN-20200126230255-20200127020255-00170.warc.gz"}
https://geo.libretexts.org/Bookshelves/Oceanography/Coastal_Dynamics_(Bosboom_and_Stive)/05%3A_Coastal_hydrodynamics/5.05%3A_Wave-induced_set-up_and_currents/5.5.01%3A_Wave-induced_mass_flux_or_momentum
# 5.5.1: Wave-induced mass flux or momentum $$\newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} }$$ $$\newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}}$$$$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$ Propagating waves not only carry energy across the ocean surface but momentum as well. Momentum is defined as the product of mass and velocity. It can be thought of as mass in motion or a mass transport or flux: a water particle has a mass, and if the particle is moving it has momentum. Momentum per unit volume can thus be written as the product of the mass density $$\rho$$ and the velocity $$\vec{u} = (u_x, u_y, w)$$ of the water particles. Momentum (per unit volume) $$\rho \vec{u} = (\rho u_x, \rho u_y, \rho w)$$ is a vector quantity, a quantity which is fully described by both magnitude and direction. The direction of the momentum vector is the same as the direction of the velocity vector. The total amount of wave momentum per unit surface area in wave propagating direction is obtained by integration over the depth. Averaged over time this gives (with $$u$$ the horizontal orbital velocity in the wave propagation direction): $q = \overline{\int_{-h}^{\eta} \rho u dz}\label{eq5.5.1.1}$ There is only a contribution to the momentum from the wave trough level to the wave crest level, since below the wave trough the velocity varies harmonically in time (see Sect. 5.4.1), giving a zero time-averaged result. If we measure the velocity at some point above MSL we will only record velocities during part of the wave period and all of the recordings will be positive (and in the wave propagation direction). Between the wave trough level and MSL we will record velocities for a larger part of the wave period and although a part of the recording will be negative, the wave-averaged mean velocity will still be positive. The momentum $$q$$ can thus be interpreted as a net flux of mass between wave trough and wave crest associated with wave propagation. We can compute the integral of Eq. $$\ref{eq5.5.1.1}$$ for a single harmonic component (non-breaking) by substituting the velocity according to linear wave theory (see Eq. 5.4.1.1) at $$z = 0$$ and integrating from $$z = 0$$ to the instantaneous surface elevation $$\eta = a \cos \omega t$$. We then find for the mean momentum in a plane perpendicular to the wave propagation direction per unit surface area: $q_{\text{non-breaking}} = \overline{\int_0^{a \cos \omega t} \rho \dfrac{a \omega}{\tanh kh} \cos \omega t dz} = \overline{a \cos \omega t \rho \dfrac{a \omega}{\tanh kh} \cos \omega t} = \dfrac{\rho a^2 \omega}{2\tanh kh} = \dfrac{\rho g a^2}{2c} = \dfrac{E}{c}\label{eq5.5.1.2}$ This expression shows that $$q$$ is a non-linear quantity in the amplitude $$a$$. The result is valid to second-order accuracy in the amplitude (linear wave theory is first order). Apparently, wave-induced mass flux occurs even for a perfect sinusoidal orbital motion, but is a second-order effect. In the linear, small amplitude approximation, $$q$$ is zero and wave propagation is merely a matter of movement of the wave form, not of mass. In relation to net mass flux associated with wave propagation the term Stokes’ drift is often used (see Intermezzo 5.3). Equation $$\ref{eq5.5.1.2}$$ is valid outside the surf zone. In the surf zone the mass flux is substantially larger than outside the surf zone. It is assumed to consist of two parts, one due to the progressive character of the waves (Eq. $$\ref{eq5.5.1.2}$$) and the other due to the surface roller in breaking waves: $q_{\text{drift}} = q_{\text{non-breaking}} + q_{\text{roller}} = \dfrac{E}{c} + \dfrac{\alpha E_r}{c}\label{eq5.5.1.3}$ In this equation, $$E_r$$ is the roller energy. The first part of the right hand side is the mass flux for non-breaking waves, whereas the second part accounts for the contribution of the mass of the surface roller. Various authors have argued values for the factor $$\alpha$$ in Eq. $$\ref{eq5.5.1.3}$$ to be in the range of 0.22 to 2 (Nairn et al., 1990; Roelvink & Stive, 1989). Their arguments are too involved to treat in this course and here we assume that $$\alpha$$ is order 1. In the case of a closed boundary like a coastline, there is a zero net mass transport through the vertical as otherwise water will increasingly pile up against the coast. This means that there must be a net velocity below the wave trough level to compensate for the flux above the wave trough level: a return current. The cross-shore depth-mean velocity below the wave trough level must compensate for the mass flux perpendicular to the shore and is therefore given by: $U_{\text{below through}} = -\dfrac{q_{\text{drift}, x}}{\rho h} = -\dfrac{q_{\text{drift}} \cos \theta}{\rho h}$ In breaking waves, the mass transport towards the coast between wave crest and wave trough may be quite large, resulting in rather large seaward directed velocities under the wave trough level (see Fig. 5.25). The large return current in the surf zone is called undertow. Also in the two dimensional case of a laboratory wave flume, the same mass of water has to return to the ‘sea’ again. In the lower part of the water column this gives a return flow (see Fig. 5.26). The figure also shows the Longuet-Higgins streaming close to the bed (see Sect. 5.4.3). In Fig. 5.25, we have assumed that in the surf zone the steady Longuet-Higgins streaming may well be overridden by the undertow. The distribution over the depth of the return current or undertow (and the streaming) can be solved using a horizontal momentum equation (not depth-averaged!), see Sect. 5.5.6. Appendix B provides an example of wave flume experiments of periodic and random waves on a gently sloping beach (Stive, 1985). Figure B.2 shows the measurements of return currents in shoaling and breaking periodic waves. In non-breaking waves there is relatively small return current. In breaking waves, the mass transport towards the coast between wave crest and wave trough may be quite large, resulting in rather large seaward directed time-mean velocities under the wave trough level. ## Intermezzo 5.3 Stokes' drift We explained mass transport from an Eulerian point of view in the text above (by placing a measuring pole in a fixed cross-section and concluding that above the wave trough level the recorded velocity has a non-zero time-averaged result in the wave propagation direction). It can also be explained from a Lagrangian point of view, viz. by following a water particle moving in its orbital motion (see Fig. 5.20). Since the horizontal movement is in general smaller closer to the bed (see Eq. 5.22 and Fig. 5.21), the water particle moves faster in the wave propagation direction when it is located under the wave crest, than it runs backward when under the trough of the wave. As a result particle paths are not entirely closed orbits and there is a residual motion in the wave propagation direction over one wave period. This residual motion is referred to as Stokes’ drift and gives rise to a net mass transport in the direction of wave propagation. When we integrate the Lagrangian mass transport over the vertical we get the same result as the Eulerian mass transport above the wave trough level. The undertow is important for seaward sediment transport because of the relatively high offshore-directed velocity in the lower and middle part of the water column in a zone with relatively high sediment concentrations (due to wave breaking). The undertow is thought to be responsible for the severe beach erosion during heavy storms. Sediment transport due to return currents is also important for shallow areas that have a deeper area between them and the coast. Examples are shallow areas on the ebb-tidal deltas of coastal inlet systems, where the mass flux is not (entirely) compensated by undertow as the water can flow away at the back of these flats (where often tidal channels are present). This is further discussed in Sect. 9.4.1. 5.5.1: Wave-induced mass flux or momentum is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Judith Bosboom & Marcel J.F. Stive via source content that was edited to conform to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9852238297462463, "perplexity": 617.682715890131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558030.43/warc/CC-MAIN-20220523132100-20220523162100-00337.warc.gz"}
https://www.arxiv-vanity.com/papers/cond-mat/9901229/
# Orbital occupation, local spin and exchange interactions in V2O3 S. Yu. Ezhov, V. I. Anisimov, D. I. Khomskii and G. A. Sawatzky Institute of Metal Physics, Russian Academy of Sciences, 620219, Ekaterinburg, GSP-170, Russia Laboratory of Applied and Solid State Physics, Materials Science Centre, University of Groningen, Nijenborgh 4, 9747 AG Groningen, The Netherlands February 25, 2022 ###### Abstract We present the results of an LDA and LDA+U band structure study of the monoclinic and the corundum phases of VO and argue that the most prominent (spin ) models used to describe the semiconductor metal transition are not valid. Contrary to the generally accepted assumptions we find that the large on site Coulomb and exchange interactions result in a total local spin of 1 rather than and especially an orbital occupation which removes the orbital degeneracies and the freedom for orbital ordering. The calculated exchange interaction parameters lead to a magnetic structure consistent with experiment again without the need of orbital ordering. While the low-temperature monoclinic distortion of the corundum crystal structure produces a very small effect on electronic structure of VO, the change of magnetic order leads to drastic differences in band widths and band gaps. The low temperature monoclinic phase clearly favors the experimentally observed magnetic structure, but calculations for corundum crystal structure gave two consistent sets of exchange interaction parameters with nearly degenerate total energies suggesting a kind of frustration in the paramagnetic phase. These results strongly suggest that the phase transitions in VO which is so often quoted as the example of a S= Mott Hubbard system have a different origin. So back to the drawing board! ###### pacs: 71.27.+a, 71.20.-b, 75.30.Et [ ] The VO system has been a topic of intense study for over more than fifty years by both theoreticians and experimentalists because of it’s “rich” phase diagram. It undergoes a first order metal-insulator transition with a seven orders of the magnitude change in the electrical conductivity[1], which can be induced by temperature, pressure, alloying or by nonstoichiometry. It also exhibits an antiferromagnetic inslulator to paramagnetic insulator transition which also is first order and a first order paramagnetic insulator to paramagnetic metal transition. Many theoretical models have been put forward to understand the electronic and magnetic behavior of this compound. Goodenough proposed a model involving both itinerant and localized orbitals [2, 3] to describe the the VO electronic structure. Various models have been suggested and worked out in some detail using mainly the Mott-Hubbard picture[4, 5, 6]. In fact VO is nowadays used as the best studied example of a Mott Hubbard system with a semiconductor to metal transition. To explain the peculiar antiferromagnetic order (Fig.1) in the low temperature insulating phase (AFI) with pairs of parallel spins coupled antiferromagnetically in the basal plane the intriguing idea of orbital ordering in the presumably doubly degenerate orbitals was suggested[7, 8, 9] and revived recently[10] to explain new neutron scattering results[11]. Also within the context of infinite dimension calculations to describe strongly correlated systems VO is now used as a good example of the success of this approximation[12]. However, the character of the phase transition and the nature of the ground state is still very controversial. Recent photoemission and X-Ray absorption results[13] strongly suggest that the AFI ground state should not be described as a spin antiferromagnetic Mott-Hubbard insulator as assumed in the above theories and, related to this, that the orbital occupation is quite different from that conventionally assumed. In this letter we present the study of the electronic structure of VO in LDA[14, 15] and LDA+U[16, 17] approximations. In contrast to the standard LDA, in LDA+U method the influence of the on-site - Coulomb interaction is included in Hamiltonian, as an effective potential which is different for electron removal than for electron addition which has been shown to be crucial to describe strongly correlated materials. The crystal structure in the low temperature AFI phase is monoclinic [18], above this changes to the corundum structure. (T[19]). The calculated densities of states for both structures are very similar showing that the electronic structure itself is little influenced by the lattice distortions so that the strong change in properties must be a rather subtle effect with respect to the electronic structure. In Fig.2 the partial densities of states (DOS) obtained in the LDA calculation for the monoclinic crystal structure of VO are shown. They are very similar to those found by Mattheis’s [20] for the corundum lattice. As the monoclinic distortion of the lattice is not very strong, we can plot DOS’s assuming approximate trigonal symmetry. The V ions are somewhat off center in a slightly trigonally distorted octahedron of O ions. This distortion causes the otherwise 3 fold degenerate 3d orbitals to split into a non-degenerate and double degenerate levels. In this representation the orbital has symmetry in a hexagonal coordinate system, i.e. with z axis along direction (V-V pairs), and orbitals are directed more towards the V ions in the basal plane. At first glance the LDA picture looks very simple and one may conclude that the conventionally used ideas concerning the splitting of the orbitals into bonding and antibonding partners because of a hopping integral between the V ions in the pairs along the axis with two electrons per pair in the bonding orbital are confirmed. However upon closer examination a different picture emerges which is further supported by the LDA+U calculations below as well as the recent experimental XAS results[13]. First we see indeed a rather broad band with a total width of a little more than 2 eV and an about  eV narrower band both stradelling the Fermi energy and resulting in a metallic state. The band, although it shows some structure that might be interpreted as a bonding-antibonding pseudo gap, actually exhibits a strong peaking above and only relatively little weight below . This looks very different from a bonding-antibonding splitting which should have exhibited a more symmetric structure about the center of gravity if dimer hoping integrals were dominating the problem. We also note that the total band width is only 2.5 eV which is considerably smaller than the expected value of the Hubbard U of about 3 eV including the screening due to the strongly bonding electrons[21] or about 4-5 eV without this screening channel[22]. Such a small band width would invalidate a molecular orbital like approach also for the orbitals. So even these results already cast doubt on the validity of the most commonly used starting point with a strong bonding-antibonding splitting of the orbitals, with the orbitals in this gap ending up with two spin-antiparallel electrons per -axis pair in the bonding state leaving only one electron in an assumed narrow doubly degenerate band[7, 8, 9]. It was this starting point which led to the now very much used one electron per site Hubbard model for VO. The total occupation number of the band is only 0.5 electrons per V atom. Another very important point, which can be derived from the LDA result is that the - actual splitting due to the trigonal crystal field is as large as 0.4 eV with the band center of gravity lower than the band center of gravity. This large splitting makes the configuration for the localized AFI phase more favorable than the configuration removing the orbital degeneracy. We can clearly see at this point that if we switch on the on-site Coulomb interaction in our LDA+U calculations the states will be pushed up above the Fermi level and the resulting ground state will be . This implies that the ground state (AFI phase) will not be degenerate leaving no place for any orbital ordering in the usual sense. The above LDA results give us rough ideas which should be further checked. The LDA+U calculations can provide us with some more details about the electronic structure of VO. We performed LDA+U calculations for different magnetic structures, namely the “real” AF (Fig.1), a “simple” AF (all the nearest neighbors are antiferromagnetically coupled), a “layered” AF (all the nearest neighbors in basal plane are antiferromagnetically coupled and the neighbor along hexagonal axis is coupled ferromagnetically) and FM structure. In Fig. 3 the LDA+U partial DOS are plotted. Here the results for U=2.8 (eV) and J=0.93 (eV) are presented. These values of U and J were calculated by Solovyev et. al., taking into account the screening of interactions by electrons[21]. The striking point of the LDA+U result is that the electronic structure strongly depends on the magnetic structure. The band gap for instance in the real AF magnetic structure is 0.6 (eV) close to the experimental value [Fig.3(a)], while in the FM structure we find a half-metal[Fig.3(c)]. The nature of the DOS changes strongly even far below and and above Fermi level indicating the very strong sensitivity of the electronic structure to the spin structure. However, the occupation numbers for all magnetic structures are for each of orbital, and for , which formally corresponds to configuration. In the present calculation there is no sign of any orbital ordering at all. We have calculated quadrupole moment tensor for d-shell of V ion and have found that while 3-fold symmetry is broken for “real” AFM structure, the quadrupole moment tensor is the same on all V ions. We should mention that we did not include the spin orbit coupling in these calculations and these could have appreciable effects for V. For example the crystal structure allows for a Dzyaloshinsky-Moria coupling which could result in a small non colinear spin allignement. This could in the end increase the unit cell and could be visible in an annomalous scattering experiment. We should stress here that the total energy difference between different magnetic structures is really small. For the monoclinic crystal structure the “real” AF state has the lowest total energy followed by the simple AF state, which is higher by 80 K. In the case of the corundum crystal structure the simple AF state has the lowest energy, but the energy difference with the “real” AF state was only 5 K. Thus in the corundum phase these two magnetic structures (real and simple) turned out to be almost degenerate. In short, the LDA+U results show us how important the small monoclinic distortions are for the magnetic structure and how low energy scale excitations, involving only spin reorientation, can effectively change large energy scale values such as band gaps. There are two problems arising from the present results. First the spin should be considered to be 1 per V atom rather than 1/2 and secondly that the orbital occupation is consistent with a configuration of the ground state, which is in-plane symmetric and orbitally nondegenerate, but still with the complex ”Real” magnetic structure of the AFI phase. In this magnetic structure shown in Fig 1, every atom has three closest neighbors in the basal plane, one of which is ferromagnetically aligned () and the other two — antiferromagnetically ( and ), and also there is one neighbor along the hexagonal axis which is ferromagnetically aligned ()(see Fig.1 for definitions of exchange interaction parameters). To check the above mentioned consistency we calculated the exchange interaction parameters (EIP) using a well tested method described in[23]. We calculated EIP’s for both crystal structures: monoclinic and corundum. It was found that according to EIP calculation only one stable magnetic structure exists in the monoclinic phase. (”Stable” (or ”consistent”) means that if in the EIP calculations the certain pair of spin was parallel, the corresponding exchange parameter came out as ferromagnetic, and if antiparallel, as antiferromagnetic.) It is the “real” AF magnetic structure with the following values of EIP’s:  K (ferromagnetic),  K,  K,  K. For the corundum crystal structure we obtained two stable magnetic configurations from the EIP calculation point of view: “real” AF (, , ) and magnetic structure with uniform AF exchange in the basal plane ( ) and small frustrated exchange along the hexagonal axis (). From these results we can say, that indeed the monoclinic distortion of the crystal structure stabilizes the real AF magnetic structure, and the configuration of the electrons is consistent with this magnetic structure. The values of EIP’s depend strongly on the magnetic structure, which tells us that one cannot adequately model the magnetic interactions in VO as only nearest neighbor Heisenberg exchange. This is most probably connected with the fact that VO is close to being metallic which is also reflected in the strong dependence of the electronic structure on the magnetic one (and vice versa) which we saw in LDA+U calculation. The polarized neutron scattering experiments [24] show the qualitative change of magnetic interactions in the transition from the antiferromagnetic insulator with the monoclinic crystal structure to both the metallic phase and paramagnetic insulator with the corundum structure. Instead of a peak in reciprocal space corresponding to AFI magnetic structure (“real” AF) it has a peak corresponding to a magnetic structure with all three V-V interactions in basal plane antiferromagnetic (“layered” AF). Another peculiarity of the neutron scattering results is a very large width of the peak. Our results show that for the distorted monoclinic crystal structure the “real” AF magnetic structure is the lowest one and the only one which gives a consistent set of exchange interaction parameters. Transition to the corundum crystal structure leads to the coexistence of the two well defined sets of exchange interaction parameters with nearly degenerate values of total energy for those two magnetic configurations, the symmetric in basal plane “layered” AF structure being slightly lower in energy. That would result, in reciprocal space, in a peak centered at the corresponding Q-vector but this peak will be strongly broadened due to the transitions to the excited state and corresponding fluctuations in the magnetic structure. Our calculations also give a value of the magnetic moment per V , nearly the same in all the structures studied. This value is somewhat larger than the value of obtained from neutron scattering for the antiferromagnetic phase, but is consistent with the value (1.7 ) obtained from the high-temperature susceptibility. This relatively large value of is evidently a consequence of a strong Coulomb interaction on V which tends to destroy the formation of the molecular orbital singlet state on -orbitals of V-V pair, which was assumed in most previous studies. The fact that this is independent of the magnetic or crystal structure suggests that this should be treated as a high energy scale parameter in any model. Summarizing, we have carried out LDA+U calculations of the electronic structure and exchange constants of VO in both the monoclinic and corundum structures and obtained a consistent description of the main properties of the antiferromagnetic insulating phase and of the paramagnetic insulating one. In contrast to the previous assumptions, in both these phases the electronic configuration is predominantly one, i.e. two -electrons of occupy the doubly degenerate -orbitals. In addition the spins of the two electrons are parallel leading to a high spin S=1 local moment. As a result there is no orbital degeneracy left and correspondingly no orbital ordering of the kind which was invoked previously to explain magnetic properties of VO[7, 8, 9, 10]. Despite that we are able to obtain the correct magnetic structure of VO: the signs of the exchange constants in the monoclinic phase are consistent with the observed antiferromagnetic structure. The calculated values of the energy gap eV and the magnitude of the magnetic moment per V also agree with the experiment. Strong change of magnetic correlations through is ascribed to the near degeneracy of different magnetic structures in the corundum magnetic structure of VO, the state with antiferromagnetic correlations in all three directions in a basal plane having slightly lower energy. Thus orbital ordering is not required to explain the physical properties of VO in the antiferromagnetic and in the paramagnetic insulating phases. The LDA+U results also demonstrate that a spin 1/2 Hubbard model is not the correct starting point but that one should use a spin 1 model with very strong hunds rule exchange. Within such a model with two electrons S=1 in orbitals on each site the hopping would have to involve both the minority spin orbitals as well as the orbitals which are close in energy. Having found this it is also not so surprising any more that the electronic structure and especially the d band widths and splittings are strongly dependent on the nearest neighbor spin spin correlation functions and that there may be a redistribution of electrons between the and states on going into the PI or Metallic phase as found in recent electron spectroscopic studies[13]. This investigation was supported by the Russian Foundation for Fundamental Research (RFFI Grant No. 98-02-17275), by the Netherland Organization for Fundamental Research on Matter (FOM) with financial support by the Netherlands Organization for the Advance of Pure Science (NWO).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8880258798599243, "perplexity": 946.9262724976003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499890.39/warc/CC-MAIN-20230131190543-20230131220543-00874.warc.gz"}
https://itprospt.com/num/2106306/for-the-following-ordinary-annuity-determine-the-size-of
5 # For the following ordinary annuity; determine the size of the periodic payment: Present Payment Conversion Future Value Value Period Term of Annuity Interest Rate P... ## Question #### Similar Solved Questions ##### Tho probabllltv ot & person gettlng the Ilu thls year Is0.15.Mwe have & group (0f 20 compute the following: (10 polnts) Standard devlatlon Probability that 2 people have the Ilu Probabillty that Iess than 4 people have thc (lu Probability that at least one person has the flu Probability that at least 3 people have the flu A normal dlstributlon of has & mean of 20 and a standard deviation of 10. Find the z- scores corresponding to each of the following values: (10 points) Tho probabllltv ot & person gettlng the Ilu thls year Is0.15.Mwe have & group (0f 20 compute the following: (10 polnts) Standard devlatlon Probability that 2 people have the Ilu Probabillty that Iess than 4 people have thc (lu Probability that at least one person has the flu Probability that... ##### You are investigating # Dew Substance X. und you determine its hesting curve 4 Alm @AleTu[ presufe; +eatna Cune Loa mok d5811964Lcal CEyi Dclermnine Ihe normal mc ltng pnt and norual boiling point of Substance X Delerine Ihe enthalpy of fusion und the enthalpy o vaporation = Subslanc€ X Determine Ihe molar heat epacdty of the liqquid phase of Substance Determine the vapr pres sure of Substance 4IO € You are investigating # Dew Substance X. und you determine its hesting curve 4 Alm @AleTu[ presufe; +eatna Cune Loa mok d5811964 Lcal CEyi Dclermnine Ihe normal mc ltng pnt and norual boiling point of Substance X Delerine Ihe enthalpy of fusion und the enthalpy o vaporation = Subslanc€ X Deter... ##### Vibrating string fixed at € 0 and x L undergoes oscillations described by the wave equationOx2dt2where u(z,t) represents the displacement from equilibrium of the string: Initially the profile of the string is u(z,0) sin (4z) and its initial vertical velocity is du 8r sin (&2) Ot t=0 The four basic periodic solutions of the wave equation are cos(kx) cos(kct) B cos(kz) sin(kct) sin(kx) cos(kct) D sin(kx) sin(kct) where k is a real constant and A,B, C and D are arbitrary constants_ Use the bo vibrating string fixed at € 0 and x L undergoes oscillations described by the wave equation Ox2 dt2 where u(z,t) represents the displacement from equilibrium of the string: Initially the profile of the string is u(z,0) sin (4z) and its initial vertical velocity is du 8r sin (&2) Ot t=0 The... ##### Evaluate the definite integral_ Use graphing utility to verify your result: Evaluate the definite integral_ Use graphing utility to verify your result:... ##### In a double-slit arrangement the slits are separated by a distance equal to 82 times the wavelength of the light passing through the slits. (a) What is the angular separation in radians between the central maximum and an adjacent maximum? rad(b) What is the distance between these maxima on a screen 53.0 cm from the slits? mm In a double-slit arrangement the slits are separated by a distance equal to 82 times the wavelength of the light passing through the slits. (a) What is the angular separation in radians between the central maximum and an adjacent maximum? rad (b) What is the distance between these maxima on a screen...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9318271279335022, "perplexity": 5231.830200349486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571246.56/warc/CC-MAIN-20220811073058-20220811103058-00742.warc.gz"}
http://www.ck12.org/book/Texas-Instruments-Algebra-I-Student-Edition/r1/section/7.1/
<meta http-equiv="refresh" content="1; url=/nojavascript/"> Linear Inequalities | CK-12 Foundation You are reading an older version of this FlexBook® textbook: CK-12 Texas Instruments Algebra I Student Edition Go to the latest version. # 7.1: Linear Inequalities Created by: CK-12 0  0  0 This activity is intended to supplement Algebra I, Chapter 6, Lesson 3. ## Problem 1 – Table of Values In this problem, you will explore and graph a simple inequality: $x \ge 4$. Press PRGM to access the Program menu and choose the LINEQUA program. Enter the left side of the inequality, $X$, and press ENTER. Then enter the right side of the inequality, $4$, and press ENTER. Select an inequality symbol. Press $2$ to choose $\ge$. Choose 1:View Table to see a table of values. The calculator displays a table with several columns. The first column $X$ shows the values of the variable, $x$. The second column $Y1$ shows the value of the left side for each $x-$value. The third column $Y2$ shows the value of the right side for each $x-$value. 1. Describe the numbers in the $Y1$ column. How do they compare to the $x-$values? Explain. 2. Describe the numbers in the $Y2$ column. Are they affected by the $x-$values? Explain. Now look at the fourth column $Y3$. Each entry in this column is either a $1$ or a $0$. Examine this column. (Note: To scroll up or down, return to the $X$ column, scroll, and then return to the $Y3$ column.) 3. For what $x-$values is there a $1$ in the $Y3$ column? 4. Substitute one of these $x-$values into $x \ge 4$. Is the inequality true for this value of $x$? 5. For what $x-$values is there a $0$ in the $Y3$ column? 6. Substitute one of these $x-$values into $x \ge 4$. Is the inequality true for this value of $x$? Press ENTER to exit the table, then press ENTER again to select 1:Another Ineq. This time enter the inequality $x < -2$. Choose 1:View Table from the menu. Look at each column. Again, each entry in the $Y3$ column is either a $1$ or a $0$. 7. For what $x-$values is there a $1$ in the $Y3$ column? 8. Substitute one of these values in for $x$ in $x < -2$. Is the inequality true for this value of $x$? 9. For what $x-$values is there a $0$ in the $Y3$ column? 10. Substitute one of these values in for $x$ in $x < -2$. Is the inequality true for this value of $x$? 11. Complete each statement. a) If the $x-$value makes the inequality true, the entry in the $Y3$ column is _______. b) If the $x-$value makes the inequality false, the entry in the $Y3$ column is _______. ## Problem 2 – Graphing Now you are going to look at the graph of a simple inequality. Press ENTER to exit the table, and then press ENTER again to select 1:Another Ineq. Enter the inequality $x > 2$. Choose 2:View Graph from the menu. The calculator draws a line above the $x-$values on the number line where the inequality is true. The inequality is not true when $x = 2$, so an open circle is displayed there. Press ENTER to exit the graph screen, and then press ENTER again to select 1:Another Ineq. Graph $x \ge 2$. 12. Describe the difference between the graphs of $x > 2$ and $x \ge 2$. Exercises 13. Graph each inequality using your graphing calculator. Sketch the graphs here. a) $t > 5$ b) $p < -2$ c) $z \ge -2$ d) $y \le 0$ ## Problem 3 – Solving Inequalities using Addition and Subtraction You can use your calculator to check that two inequalities are equivalent. To see that $x - 3 > 5$ and $x > 8$ are equivalent, run the LINEQUA program. Enter the inequality $x - 3 > 5$. Choose Compare Ineq. to compare this inequality to another. Enter $x > 8$. The calculator displays the graphs of $x - 3 > 5$ and $x > 8$ on the same screen. The graphs are the same, so the inequalities are equivalent. • Caution: In some graphs, the open circle will appear to be filled in. This is because of the size of the pixels on the graph screen. For this reason, a “closed circle” is shown as a cross, and an “open circle” as a dot. Exercises 14. Solve each inequality. Use your calculator to compare the original inequality with the solution. Then sketch the graph of the solution. a) $f - 5 \ge 2$ b) $-4 > g - 2$ c) $u + 1 \le 5$ d) $1 < 8 + v$ e) $-5 > h - 1$ f) $-5 \le 1 + t$ ## Problem 4 – Solving Inequalities using Multiplication and Division Use your calculator to compare the graphs of the given inequalities. 15. $\frac{x}{5} \le - 1$ and $x \le -5$ a) Are these equivalent inequalities? Explain. b) Can you multiply both sides of an inequality by $5$ without changing its solutions? 16. $4x > 8$ and $x > 2$ a) Are these equivalent inequalities? Explain. b) Can you divide both sides of an inequality by $4$ without changing its solutions? 17. $-x > 4$ and $x > -4$ a) Are these equivalent inequalities? Explain. b) Can you multiply both sides of an inequality by $-1$ without changing its solutions? 18. $-x > 4$ and $x < -4$ a) Are these equivalent inequalities? Explain. Exercises 19. Compare graphs to find the inequality symbol that makes each pair of inequalities equivalent. a) $\frac{v}{-4} \ge 2 \quad v \underline{\;\;\;\;\;\;\;\;}-8$ b) $- \frac{d}{3} < -3 \quad d \underline{\;\;\;\;\;\;\;\;} 9$ c) $-2h > -2 \quad h \underline{\;\;\;\;\;\;\;\;} 1$ d) $-5r \le 10 \quad r \underline{\;\;\;\;\;\;\;\;} -2$ To solve an inequality using multiplication or division, multiply or divide both sides of the inequality by the same number. However, if you multiply or divide both sides by a negative number, you must reverse the inequality symbol to obtain an equivalent inequality. 20. Solve each inequality. Use your graphing calculator to compare the original inequality with the solution. Then sketch its graph. a) $\frac{c}{4} \ge 1$ b) $2 < - \frac{d}{4}$ c) $3w \le -9$ d) $20 > -5x$ e) $18d < -12$ f) $- \frac{5}{7} g > - 5$ Feb 22, 2012 Aug 19, 2014
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 95, "texerror": 0, "math_score": 0.7454479336738586, "perplexity": 663.628859715963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558066654.4/warc/CC-MAIN-20141017150106-00269-ip-10-16-133-185.ec2.internal.warc.gz"}
http://www.physicsforums.com/showthread.php?t=326647
Optics: Leveling Components by splitringtail Tags: components, leveling, optics P: 59 I have a apparatus with a thin electrode that is suppose to be parallel with surface beneath it. I can level the surface, but you cannot put a level on this electrode. I was thinking have like a block precisely made to the expected distance between the electrode and bottom to determine the alignment, but I was thinking if I was not careful I could damage the electrode. Another thought, maybe use a laser level and calibrated to the level surface, then some how move it up to the elevation of the electrode to check its alignment. However, I have heard that even the levels used by contractors are not very precise and can be quite cumbersome in those applications. I was wondering if there is any specialized instrumentation and/or produces used in the laboratory setting. I figure those who work in optics have developed something, but I cannot find any information on the issue. I would prefer to get some direction/recommendation for literature and documentation. Thank You HW Helper Sci Advisor P: 8,962 To get it parallel you can either measure the distance electrode-substrate at each end Using something like this http://www.sensorland.com/HowPage056.html is fairly cheap and accurate to <1um and non-contact. Or you can use an old fashioned travelling microscope to do the same thing. Another apporach if your electrode is reflective enough is to shine a laser at each end (eg slide the unit under a fixed laser) and measure the distance apart the reflected spots appear on the ceiling. With a bit of trig and assumign the position of the laser is fixed you can work out the slope of the electrode. Related Discussions General Physics 6 Materials & Chemical Engineering 0 Introductory Physics Homework 2
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8607396483421326, "perplexity": 686.8132899290409}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164554256/warc/CC-MAIN-20131204134234-00010-ip-10-33-133-15.ec2.internal.warc.gz"}
https://pos.sissa.it/363/071/
Volume 363 - 37th International Symposium on Lattice Field Theory (LATTICE2019) - Main session Determination of the endpoint of the first order deconfiniement phase transition in the heavy quark region of QCD S. Ejiri,* S. Itagaki, R. Iwami, K. Kanaya, M. Kitazawa, A. Kiyohara, M. Shirogane, Y. Taniguchi, T. Umeda on behalf of the WHOT-QCD Collaboration *corresponding author Full text: pdf Pre-published on: January 03, 2020 Published on: August 27, 2020 Abstract We study the endpoint of the first order deconfinement phase transition of 2 and 2+1 flavor QCD in the heavy quark region. We perform simulations of quenched QCD and apply the reweighting method to study the heavy quark region. The quark determinant for the reweighting is evaluated by a hopping parameter expansion. To reduce the overlap problem, we introduce an external source term of the Polyakov loop in the simulation. We study the location of critical point at which the first order phase transition changes to crossover by investigating the histogram of the Polyakov loop and applying the finite-size scaling analysis. We estimate the truncation error of the hopping parameter expansion, and discuss the lattice spacing dependence and the spatial volume dependence in the result of the critical point. DOI: https://doi.org/10.22323/1.363.0071 How to cite Metadata are provided both in "article" format (very similar to INSPIRE) as this helps creating very compact bibliographies which can be beneficial to authors and readers, and in "proceeding" format which is more detailed and complete. Open Access
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8778082728385925, "perplexity": 2518.2157311292826}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747774.97/warc/CC-MAIN-20201205104937-20201205134937-00475.warc.gz"}
https://www.physicsforums.com/threads/questions-on-kinematics-distance-velocity-acceleration.88808/
Questions on kinematics (distance/velocity/acceleration) 1. Sep 13, 2005 dnt these should be pretty simple but for some reason i cant get them. ok they give you a basic time vs distance table: • 0 sec - 0 m • 1 sec - 2 m • 2 sec - 8 m • 3 sec - 18 m • 4 sec - 32 m • 5 sec - 50 m its a ball rolling down a hill which makes sense for the data. now the question is what is the distance after 2.2 seconds. can you figure that out exactly or do you just look at a graph and estimate it? second question is what is the slope at 3 seconds? now i konw slope is change in y over change in x (meaning change in distance over change in time) which is also the velocity at 3 seconds but which two points do you use? do you use the 3 second data with the 2 second data or the 3 second data with the 4 second data? they give different answers. third, if you do look at all the differences from one second to the next, you get a patten of how much it jumps - 2, 6, 10, 14, 18 - those are the velocities at each second, right? well since it jumps by 4 each time can i then say the acceleration is 4 m/s^2? thanks. 2. Sep 13, 2005 amcavoy s = .5at2 + v0t + s0 Plug in! 3. Sep 13, 2005 dnt ok so i can do this: s = .5(4)(2.2)^2 + (0)(2.2) + 0 is that correct? am i right the a=4? 4. Sep 13, 2005 amcavoy Yes, because 2=.5a -> a=4 m/s2. 5. Sep 14, 2005 dnt anyone know this question? 6. Sep 15, 2005 mukundpa $$v_{f} = v_{i} + at$$ 7. Sep 15, 2005 dnt i dont think that answers my question. the question specifically asked for slope (which in turn, is velocity) at 3 seconds. to find slope you need two points, right? well do you use 2 and 3 seconds OR 3 and 4 seconds to find the slope at that point? 8. Sep 15, 2005 HallsofIvy You are asked, apparently, for an "instantaneous" speed and you don't have enough information to do that. To find instantaneous speed (unless the distance-time graph is a line and speed is constant), you would have to know the distance-time function and you only have it at specific times. What you can do is anyone of what you are suggesting- use the distances at 3 and 4 seconds to get the "average speed between t=3 and t= 4", (32-18)/1= 14 m/s, use the distances at 2 and 3 seconds to get the "average speed between t= 2 and t= 3,(18-8)/1= 10 m/s, or even average those two answers- (10+ 14)/2= 12. For this particular problem, that turns out to be the same as if you find the "average speed between t= 2 and t= 4", symmetric about t= 3, (38- 8)/2= 24/2= 12 m/s. If you want to do a lot of work to get a very accurate answer, you might find a polynomial function that fits all of the points and differentiate that! Actually, here, it is not a "lot of work". It look pretty obvious that each distance, in m, is equal to twice the time, in s, squared: d= 2t2. The instantaneous speed, at t= 3, is the derivative of that, 4t, evaluated at t= 3: 4(3)= 12 m/s. Of course, that was to be expected. Something moving under gravity has a constant acceleration and so its distance function is a quadratic. That was what apmcavoy was telling you in the first reply to your post. 9. Sep 15, 2005 dnt ah. thanks! 10. Sep 15, 2005 curly_ebhc Solving the slope I don't think that you need to take the derivative especially if you are not in a calculus based physics course. To find the slope at one point draw a tangent line. I promise there is an explantion of how to do this in your book. It should just touch the curve. The tangent line is straight so take any 2 points and find the slope. Last edited: Sep 16, 2005 Similar Discussions: Questions on kinematics (distance/velocity/acceleration)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7529245018959045, "perplexity": 817.2020952678782}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814393.4/warc/CC-MAIN-20180223035527-20180223055527-00036.warc.gz"}
http://wims.cse-institute.org/wims.cgi?lang=en&+module=U1%2Falgebra%2Frankmult.en
# Rankmult --- Introduction --- Rankmult is an exercise on the multiplication of matrices: one knows that if C is a matrix of size m×n and of rank r, then there exist two matrices A and B, of sizes m×r and r×n respectively, such that C=AB. The server will therefore give you such a matrix C, randomly generated, which is . And you are asked to find the matrices A and B. Remark. It's normal if you don't know a formula allowing you to compute'' A and B from C, as the solution is not unique (in fact there are infinitely many solutions for each C). What you need is a good dose of reflexion. Try first the case where rank=1, which is rather easy. For bigger ranks, computations of linear combinations of vectors will be necessary in general. Other exercises on: Rank   matrices   linear algebra In order to access WIMS services, you need a browser supporting forms. In order to test the browser you are using, please type the word wims here: and press Enter''.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5528855323791504, "perplexity": 831.7611187536497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645362.1/warc/CC-MAIN-20180317214032-20180317234032-00144.warc.gz"}
http://math.stackexchange.com/users/27197/scibuff?tab=reputation
# scibuff less info reputation 17 bio website location age member for 1 year, 11 months seen Feb 27 at 16:55 profile views 20 # 306 Reputation 5 Jan 28 +5 20:25 upvote Average bus waiting time 5 Oct 23 '13 +5 17:54 upvote Is there any pythagorean triple (a,b,c) such that $a^2 \equiv 1 \bmod b^{2}$ 5 Sep 10 '13 +5 20:54 upvote Is there any pythagorean triple (a,b,c) such that $a^2 \equiv 1 \bmod b^{2}$ 5 Aug 11 '13 5 Jul 18 '13 10 Jul 17 '13 25 Jul 16 '13 47 Jul 15 '13 10 Jul 14 '13 32 Jan 11 '13 27 Oct 28 '12 17 Apr 11 '12 12 Mar 30 '12 100 Mar 20 '12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5246298313140869, "perplexity": 14483.517966210373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999677515/warc/CC-MAIN-20140305060757-00094-ip-10-183-142-35.ec2.internal.warc.gz"}
https://hal-insu.archives-ouvertes.fr/insu-01246623
# On the decadal scale correlation between African Dust and Sahel rainfall: the role of Saharan heat Low-forced winds 2 TROPO - LATMOS LATMOS - Laboratoire Atmosphères, Milieux, Observations Spatiales Abstract : A large body of work has shown that year-to-year variations in North African dust emission are inversely proportional to previous year monsoon rainfall in the Sahel, implying that African dust emission is highly sensitive to vegetation changes in this narrow transitional zone. However, such a theory is not supported by field observations or modeling studies, as both suggest that interannual variability in dust is due to changes in wind speeds over the major emitting regions, which lie to the north of the Sahelian vegetated zone. Here we reconcile this contradiction showing that interannual variability in Sahelian rainfall, and surface wind speeds over the Sahara, are the result of changes in lower tropospheric air temperatures over the Saharan Heat Low (SHL). As the SHL warms an anomalous tropospheric circulation develops that reduces windspeeds over the Sahara and displaces the monsoonal rainfall northward, thus simultaneously increasing Sahelian rainfall and reducing dust emission from the major dust “hot-spots” in the Sahara. Our results shed light on why climate models are, to-date, unable to reproduce observed historical variability in dust emission and transport from this region. Document type : Poster communications Domain : https://hal-insu.archives-ouvertes.fr/insu-01246623 Contributor : Catherine Cardon <> Submitted on : Saturday, December 19, 2015 - 1:30:48 PM Last modification on : Friday, January 10, 2020 - 3:42:34 PM Document(s) archivé(s) le : Saturday, April 29, 2017 - 10:37:12 PM ### File Poster (1).pdf Files produced by the author(s) ### Identifiers • HAL Id : insu-01246623, version 1 ### Citation Weijie Wang, Amato Evan, Cyrille Flamant, Christophe Lavaysse. On the decadal scale correlation between African Dust and Sahel rainfall: the role of Saharan heat Low-forced winds. AGU Fall Meeting 2015, Dec 2015, San Francisco, United States. pp.A23C-0318, 2015. ⟨insu-01246623⟩ Record views
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.808574378490448, "perplexity": 17381.950788209648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897844.44/warc/CC-MAIN-20200709002952-20200709032952-00313.warc.gz"}
http://hal.in2p3.fr/view_by_stamp.php?label=CENBG&action_todo=view&langue=en&id=in2p3-00726669&version=1
160 articles – 2210 references  [version française] HAL: in2p3-00726669, version 1 arXiv: 1208.6115 Physical Review C 87 (2013) 034601 High-precision measurement of total fission cross sections in spallation reactions of 208Pb and 238U (2013) Total cross sections for proton- and deuteron-induced-fission of 208Pb and 238U have been determined in the energy range between 500 MeV and 1 GeV. The experiment has been performed in inverse kinematics at GSI Darmstadt, facilitating the counting of the projectiles and the identification of the reaction products. High precision between 5 and 7 percent has been achieved by individually counting the beam particles and by registering both fission fragments in coincidence with high efficiency and full Z resolution. Fission was clearly distinguished from other reaction channels. The results were found to deviate by up to 30 percent from Prokofiev's systematics on total fission cross sections. There is good agreement with an elaborate experiment performed in direct kinematics. Research team: ACENNEX Subject(s) : Physics/Nuclear Experiment
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8917949795722961, "perplexity": 2029.4108405421111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997881423.31/warc/CC-MAIN-20140722025801-00237-ip-10-33-131-23.ec2.internal.warc.gz"}
https://uia.brage.unit.no/uia-xmlui/handle/11250/139239/browse?type=dateissued
Now showing items 1-20 of 86 • #### A survey of the outside language pressure on modern faroese: with a view to assessing a possible increase in the use of English loanwords  (Master thesis, 2006) The focus of this work has been on the outside pressure on the Faroese language with an particular view to assessing a possible increase of new English Imports inot the language in view of the present-day technological ... • #### The influence of English on Norwegian morphology: aspects of a contemporary development  (Master thesis, 2006) Three tentative hypotheses are suggested and tested in this thesis:1. That compounds, clips words and the genitive case are areas of Norwegian morphology which in fact are realms of uncertainty for users of the Norwegian ... • #### Narrative entrapments in the novels of J.M.Coetzee: a postmodern feminist reading of his three female narrating personas  (Master thesis, 2006) • #### Root canals : identity in Zadie Smiths White teeth  (Master thesis, 2007) • #### From cohesion to thematics in the multimodal situation : the problem of the modal barrier  (Master thesis, 2007) • #### The emergence of christianity within the Anglo-Saxon kingdom of Northumbria : an exploration into its earlist roots and an investigation in to the influences it had on the culture in the region then and today  (Master thesis, 2007) This thesis commences with a description of Pre-Roman and Roman Britain, to identify the people who lived in Northumbria and their background. The description then progresses with an explanation about the withdrawal of ... • #### The silencing of women in westerns : a psychoanalytic, lacanian, and feminist approach  (Master thesis, 2007) • #### Between good end evil: on the moral ambiguity in "Buffy and the vampire slayer"  (Master thesis, 2007) Joss Whedon’s Buffy the Vampire Slayer aims to empower young women through a declared feminist agenda. The main body of this thesis explores what it is that makes Buffy a television show with a feminist agenda. This ... • #### Nineteen eighty-four´s dystopian vision: power and the individual  (Master thesis, 2007) Orwell’s Nineteen Eighty-Four is by many known for, and consequently discussed in terms of, its “predictions” of the future, and its political satire. This thesis does not aim at discussing Orwell’s political ambitions, ... • #### The new England code : controlling female agency in contemporary American tv drama  (Master thesis, 2008) In an attempt to contextualize the DC observations, the overarching super-genre of melo-drama has been given particular attention. The relevance of melodrama is evident as the main ‘corpus’ of the survey, Cold Case, embodies ... • #### Hunter S. Thompson and gonzo journalism  (Master thesis, 2008) • #### The life of Tom Marvolo Riddle aka Lord Voldemort : a study of the origin of evil and how it is portrayed in fantasy  (Master thesis, 2008) • #### The river potential and the river chronotope : reading rivers in Mark Twains the adventures of Huckleberry Finn and Cormac McCharthys Suttree  (Master thesis, 2008) • #### Utopian freedom : individual freedom and social order in Thomas Mores Utopia, Marge Piercys Woman on the edge of time and Ursula Le Guins The Dispossessed  (Master thesis, 2009) • #### Living in dangerous times : identity, volition and anxiety in Don DeLillos White noise and Falling man  (Master thesis, 2009) • #### Angry black males and the tales they tell : African-American autobiographies represented by Malcolm Xs The autobiography of Malcolm X and Barack Obama`s Dreams from my father  (Master thesis, 2010) • #### From noise to filter : cybernetics, information and communication in Thomas Pynchon´s The crying of lot 49 and William Gibson´s Neuromancer  (Master thesis, 2010) • #### South Park´s ambiguoues satiric expression  (Master thesis, 2010) • #### Liminal characters in the science fiction of Philip K. Dick  (Master thesis, 2011) • #### Negation in English : compared to Norwegian  (Master thesis, 2011)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23509222269058228, "perplexity": 22454.194968956956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107881551.11/warc/CC-MAIN-20201023234043-20201024024043-00292.warc.gz"}
https://github.com/ropensci/software-review/issues/213
/ software-review Public submission: outcomerate#213 Closed opened this issue Apr 28, 2018 · 26 comments Closed submission: outcomerate #213 opened this issue Apr 28, 2018 · 26 comments Assignees Labels Summary • What does this package do? (explain in 50 words or less): outcomerate is a lightweight R package that implements the standard outcome rates for surveys, as defined in the Standard Definitions of the American Association of Public Opinion Research (AAPOR). • Paste the full DESCRIPTION file inside a code block below: Package: outcomerate Version: 0.0.1 Title: AAPOR Survey Outcome Rates Description: Standardized survey outcome rate function, including the response rate, contact rate, cooperation rate, and refusal rate. Authors@R: person("Rafael", "Pilliard Hellwig", email = "rafael.taph@gmail.com", role = c("aut", "cre")) Encoding: UTF-8 LazyData: true ByteCompile: true RoxygenNote: 6.0.1 Depends: R (>= 2.10) Suggests: dplyr, forcats, knitr, survey, testthat, covr Imports: Rdpack (>= 0.7) RdMacros: Rdpack URL: https://github.com/rtaph/outcomerate BugReports: https://github.com/rtaph/outcomerate/issues • URL for the package (the development repository, not a stylized html page): https://github.com/rtaph/outcomerate • Please indicate which category or categories from our package fit policies this package falls under *and why(? (e.g., data retrieval, reproducibility. If you are unsure, we suggest you make a pre-submission inquiry.): • data munging: because it allows researchers to compute rates of interest given specific input data • reproducibility: because it implements standardized definitions so that the research community can better compare the quality of survey data •   Who is the target audience and what are scientific applications of this package? Survey analysts are the primary target. The package has applications for any survey that uses standard disposition codes and requires the calculation of AAPOR outcome rates. • Are there other R packages that accomplish the same thing? If so, how does yours differ or meet our criteria for best-in-category? Not to my knowledge. •   If you made a pre-submission enquiry, please paste the link to the corresponding issue, forum post, or other discussion, or @tag the editor you contacted. Requirements Confirm each of the following by checking the box. This package: • has a CRAN and OSI accepted license. • contains a README with instructions for installing the development version. • includes documentation with examples for all functions. • contains a vignette with examples of its essential functions and uses. • has a test suite. • has continuous integration, including reporting of test coverage, using services such as Travis CI, Coveralls and/or CodeCov. • I agree to abide by ROpenSci's Code of Conduct during the review process and in maintaining my package should it be accepted. Publication options • Do you intend for this package to go on CRAN? • Do you wish to automatically submit to the Journal of Open Source Software? If so: • The package has an obvious research application according to JOSS's definition. • The package contains a paper.md matching JOSS's requirements with a high-level description in the package root or in inst/. • The package is deposited in a long-term repository with the DOI: • (Do not submit your package separately to JOSS) • Do you wish to submit an Applications Article about your package to Methods in Ecology and Evolution? If so: • The package is novel and will be of interest to the broad readership of the journal. • The manuscript describing the package is no longer than 3000 words. • You intend to archive the code for the package in a long-term repository which meets the requirements of the journal (see MEE's Policy on Publishing Code) • (Scope: Do consider MEE's Aims and Scope for your manuscript. We make no gaurantee that your manuscript willl be within MEE scope.) • (Although not required, we strongly recommend having a full manuscript prepared when you submit here.) • (Please do not submit your package separately to Methods in Ecology and Evolution) Detail • Does R CMD check (or devtools::check()) succeed? Paste and describe any errors or warnings: • Does the package conform to rOpenSci packaging guidelines? Please describe any exceptions: • If this is a resubmission following rejection, please explain the change in circumstances: Not applicable. • If possible, please provide recommendations of reviewers - those with experience with similar packages and/or likely users of your package - and their GitHub user names: changed the title outcomerate submission submission: outcomerate Apr 28, 2018 Editor checks: • Fit: The package meets criteria for fit and overlap • Automated tests: Package has a testing suite and is tested via Travis-CI or another CI service. • Repository: The repository link resolves correctly • Archive (JOSS only, may be post-review): The repository DOI resolves correctly • Version (JOSS only, may be post-review): Does the release version given match the GitHub release (v1.0.0)? 👋 @rtaph. Thanks for this submission. I've run goodpractice::gp() on your package and everything comes out clean. The package meets all of our recommended guidelines really well so thanks! 🚀 I'll start working on finding two reviewers who are able to put the package through a bunch more testing, user interface review etc so you can get additional feedback. Reviewers: @carlganz, @nealrichardson Due date: May 22, 2018 karthik commented Apr 30, 2018 🙏@carlganz for agreeing to review. Note tentative due date above. karthik commented May 1, 2018 Thanks @nealrichardson for agreeing to review. I've also tagged you in the issue above. Note review date and let me know if you need more time. karthik commented May 3, 2018 • edited @rtaph You can add a badge to your readme right away. Badge location is: https://badges.ropensci.org/213_status.svg It will update as your review progresses and change to green once accepted. 🙏 Package Review • As the reviewer I confirm that there are no conflicts of interest for me to review this work Documentation The package includes all the following forms of documentation: • A statement of need clearly stating problems the software is designed to solve and its target audience in README • Installation instructions: for the development version of package and any non-standard dependencies in README • Vignette(s) demonstrating major functionality that runs successfully locally • Function Documentation: for all exported functions in R help • Examples for all exported functions in R Help that run successfully locally • Community guidelines including contribution guidelines in the README or CONTRIBUTING, and DESCRIPTION with URL, BugReports and Maintainer (which may be autogenerated via Authors@R). Functionality • Installation: Installation succeeds as documented. • Functionality: Any functional claims of the software been confirmed. • Performance: Any performance claims of the software been confirmed. • Automated tests: Unit tests cover essential functions of the package and a reasonable range of inputs and conditions. All tests pass on the local machine. • Packaging guidelines: The package conforms to the rOpenSci packaging guidelines Final approval (post-review) • The author has responded to my review and made changes to my satisfaction. I recommend approving this package. Estimated hours spent reviewing: 3 This package is very focused on a specific problem, and it solves it well. It wraps and automates what would otherwise be tedious arithmetic and formulas you'd have to keep looking up. Here are a few notes on ways I think you can improve the code and documentation of the package. Code functionality • It seems that I can't get "all the rates" unless I provide a value for e. I think I should be able to easily get all of the rates that don't depend on e just by calling outcomerate(my_data). Instead, I get the error "The parameter e must be provided for RR3, RR4, REF2, CON2". It would be nice if instead you excluded those 4 metrics and calculated the rest. • Why would I ever want return_nd = TRUE? I don't see any documentation or examples that do that. In my experience it's wise not to add extra arguments and features with no application or demand, even if it's trivial to do---they clutter your interface and create a liability for you to maintain them. So I'd advocate removing it. But, if you determine that it has value, consider returning numden before even bothering to calculate rates (outcomerate.R#L151). • In asserters.R, assert_weight: what if the weight vector contains missing values? (It errors, but not helpfully.) Code style • NULL is generally a better default value for function arguments than NA. You're less likely to accidentally provide NULL, NA is actually type logical and that can be confusing, and is.null() is a much more natural check than is.na(), which will break your if() statements if the input is length > 1. • outcomerate.factor <- outcomerate.character works--no need to define a different method and coerce the data as.character(). • Likewise, outcomerate.table <- outcomerate.numeric passes the tests too. • Doing that, then outcomerate.R#L108-L114 simplifies a bit too because you don't have to munge the table objects you get: freq <- table(x) (perhaps multiplied by weight) • Speaking of multiplying by weight, what's the point of a scalar (length 1) weight when you're calculating proportions? Doesn't it difference out? (Empirically, it does, just experimenting with altering a scalar weight in the tests.) • outcomerate.R#L116 might be more readable as freq <- xtabs(weight ~ x). This would require adding stats to Imports, though stats is a base package so it's "free" from a dependency perspective. Docs • I recommend the spelling package to automatically check your spelling. (It will catch for you a few misspellings in the readme and the vignette.) You can have it run every time you run CHECK. To do so, add this to your DESCRIPTION, and then a tests/spelling.R file with this. CHECK will then report all of your misspellings. Once you've fixed them, you can do spelling::update_wordlist() to whitelist the other words that you use (like "middleearth") that it will flag as misspelled but that are ok. Note that that test file I linked has a special extra check I like: by default, spelling errors won't fail CHECK, but I have it check to see if it's running on Travis-CI and fail the build there if there are misspellings. I find that to be a good way to get me to notice spelling errors without risking a CRAN check failure. • roxygen: I recommend using the markdown support added in roxygen2 v6.0 (see note in "DESCRIPTION" below for how to add). That way, your \item{}s can just become * lists. Will make the inline docs more readable. • Also would help to offer some more words to explain what the different rates are beyond just "RR1" etc. https://www.aapor.org/AAPOR_Main/media/publications/Standard-Definitions20169theditionfinal.pdf starting at p. 60 is very readable and helped me. (That might be a better link to use in your docs, too.) Some summary e.g. that the "RR" are "Response rates", which are (explain that in your words), and that the differences among RR1-6 are in whether partial interviews are included and how the unknown types are factored in. Something like that. • What is fmat and where does it come from? That seems to be where all of the real business is. It should probably be documented and more visible. Vignette • I'm a bit more cautious about adding package dependencies than some, and it's only in Suggests, but tidyverse is a lot, and you probably don't need all of that for your vignette. • Why do you assume e = 0.8 in this example? Seems that if the goal is to prevent "claims regarding survey quality to become opaque", this shouldn't be opaque either. Some discussion around this, around how one might estimate or reason about e and not just make up a number that suits their goals, would be helpful. • The "small wrapper function" or() has an ambiguous sounding name. Perhaps something like get_rates() instead? Tests • context() can be more useful as a descriptive string rather than the file name, which any test failure will report anyway • (test-params.R) It's good to assert the error message in expect_error() since helpful error messages are an important part of a user interface and can serve as a source of documentation, too. (cf. http://style.tidyverse.org/error-messages.html). As I mentioned above, I found at least one condition that errored but not in a way that would have helped me to understand what I had done wrong. DESCRIPTION • "Description" needs to be more than one complete sentence. This is a good place for CRAN human checks to hold up accepting your package submission, so you should try to give them the prose they want. Here's a few of the gotchas I remind myself of each time I submit a new package: https://github.com/nealrichardson/skeletor/blob/master/inst/pkg/DESCRIPTION#L4-L9 • add Roxygen: list(markdown = TRUE), as suggested above, so you can use the markdown syntax in your roxygen docs. • "Suggests" includes survey but I don't see it used anywhere, though perhaps I'm missing it. rtaph commented May 24, 2018 Thank you @karthik, @nealrichardson, and @carlganz for your support. I will review these comments over the weekend and get back to you shortly. carlganz commented May 24, 2018 Sorry that I have not written the review yet. I will get done by tomorrow I promise. Package Review Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide • As the reviewer I confirm that there are no conflicts of interest for me to review this work (If you are unsure whether you are in conflict, please speak to your editor before starting your review). Documentation The package includes all the following forms of documentation: • A statement of need clearly stating problems the software is designed to solve and its target audience in README • Installation instructions: for the development version of package and any non-standard dependencies in README • Vignette(s) demonstrating major functionality that runs successfully locally • Function Documentation: for all exported functions in R help • Examples for all exported functions in R Help that run successfully locally • Community guidelines including contribution guidelines in the README or CONTRIBUTING, and DESCRIPTION with URL, BugReports and Maintainer (which may be autogenerated via Authors@R). Functionality • Installation: Installation succeeds as documented. • Functionality: Any functional claims of the software been confirmed. • Performance: Any performance claims of the software been confirmed. • Automated tests: Unit tests cover essential functions of the package and a reasonable range of inputs and conditions. All tests pass on the local machine. • Packaging guidelines: The package conforms to the rOpenSci packaging guidelines Final approval (post-review) • The author has responded to my review and made changes to my satisfaction. I recommend approving this package. Estimated hours spent reviewing: 2 The goal of the outcomerate package is to simplify the calculation of AAPOR's different definitions for response rates of surveys. The package is lightweight, and accomplishes its primary goal. The vignette does a good job explaining response rates, and how to use the package. The package also makes good use of 3-dimensional matrices. There are a few areas where the package can be improved most of which @nealrichardson has already mentioned. Here is a list in no particular order: • Definitely use NULL instead of NA for the default parameters. • Similarly, for assert_weights consider giving a warning if any of the weights are NA • When I was reading source code I initially didn't understand what fmat was. Even though the data structure is not exported, I encourage you to include source code that generated fmat in the "data-raw" folder as Hadley suggests here. I think it makes reading the source code easier if the code for all the data structures used is available. • I don't think the apply used here is necessary. numden[,1]/numden[,2] gives you what you want but drops names. Up to you if you want to keep apply approach. • Replacing tidyverse with the specific packages used in the vignette (i.e. dplyr, tidyr, ggplot2) is a good idea. • In the documentation for the fake middleearth dataset you should include a description of each variable. • Please change name of or function in vignette to something more informative. • As Neal already said some of the outcome.* definitions can be simplified. Good work! karthik commented Jun 1, 2018 👋 @rtaph both reviews are now in. Please let us know when you're had a chance to read and respond to these comments. 🙏 rtaph commented Jun 3, 2018 Gentlemen-- thank you all for the feedback and sorry it has taken a while to respond. I am integrating your points and hope to be able to share a revised version soon. Aiming for end of week. Here are my replies to each point: It seems that I can't get "all the rates" unless I provide a value for e. I think I should be able to easily get all of the rates that don't depend on e just by calling outcomerate(my_data). Instead, I get the error "The parameter e must be provided for RR3, RR4, REF2, CON2". It would be nice if instead you excluded those 4 metrics and calculated the rest. Yes, I see where you are coming from and I agree in the case of the default where rate = NULL. I would like to further address this with an eligibility_rate() function to estimate ‘e’ based on ineligible respondents, “NE”, in the x vector. The reason not to integrate it into the other collection of rates is that if you calculate grouped outcome rates (one of the motivations for the package), ‘e’ should not be calculated using a split-apply-combine approach, but globally. Why would I ever want return_nd = TRUE? I don't see any documentation or examples that do that. In my experience it's wise not to add extra arguments and features with no application or demand, even if it's trivial to do---they clutter your interface and create a liability for you to maintain them. So I'd advocate removing it. But, if you determine that it has value, consider returning numden before even bothering to calculate rates (outcomerate.R#L151). I find it useful to calculate grouped outcome rates while fieldwork is happening and when counts are small. For example, computing outcome rates by researcher helps spot early problems with field work. However, when counts are low at the start of a project, the rates are unstable. Having numerators and denominators can be useful in combination with empirical Bayes methods to smooth out the raw estimates. I plan to write a vignette on this but have not had time yet. In asserters.R, assert_weight: what if the weight vector contains missing values? (It errors, but not helpfully.) Good catch. I will add a case for this. NULL is generally a better default value for function arguments than NA. You're less likely to accidentally provide NULL, NA is actually type logical and that can be confusing, and is.null() is a much more natural check than is.na(), which will break your if() statements if the input is length > 1. Great point. I will make the change. outcomerate.factor <- outcomerate.character works--no need to define a different method and coerce the data as.character(). Clever. I’ve made the change. Likewise, outcomerate.table <- outcomerate.numeric passes the tests too. One thing I am concerned about with this change is that the table would be coerced to an unnamed numeric vector, and that the order of the input then suddenly matters to the function output. A table object whose first element is the counts for “I” would not give the same results as a table whose first element is, say, counts for “NC”. Doing that, then outcomerate.R#L108-L114 simplifies a bit too because you don't have to munge the table objects you get: freq <- table(x) (perhaps multiplied by weight) So if I understand you, you are suggesting to have the different methods resolve to the table method. Is that correct? I will look into this. Speaking of multiplying by weight, what's the point of a scalar (length 1) weight when you're calculating proportions? Doesn't it difference out? (Empirically, it does, just experimenting with altering a scalar weight in the tests.) Yes, it does, unless you return the numerator and denominator. Otherwise it makes no difference. outcomerate.R#L116 might be more readable as freq <- xtabs(weight ~ x). This would require adding stats to Imports, though stats is a base package so it's "free" from a dependency perspective. Interesting—I am open to the idea. Will see how this works when I try to address the point about using the table method as the ultimate method called. I recommend the spelling package.... Great suggestion. roxygen: I recommend using the markdown support added in roxygen2 v6.0 (see note in "DESCRIPTION" below for how to add). That way, your \item{}s can just become * lists. Will make the inline docs more readable. Much better, yes. Thanks for the suggestion. Also would help to offer some more words to explain what the different rates are beyond just "RR1" etc. Good point. Will add more text to briefly explain what each rate represents. What is fmat and where does it come from? That seems to be where all of the real business is. It should probably be documented and more visible. I will write a separate doc for fmat. It is a formula matrix that defines all of the equations from the AAPOR definitions in one place (instead of writing them out separately as code). I'm a bit more cautious about adding package dependencies than some, and it's only in Suggests, but tidyverse is a lot, and you probably don't need all of that for your vignette. Agreed. Will simplify it down to only those packages used. Why do you assume e = 0.8 in this example? Seems that if the goal is to prevent "claims regarding survey quality to become opaque", this shouldn't be opaque either. Some discussion around this, around how one might estimate or reason about e and not just make up a number that suits their goals, would be helpful. You make a fair point. I think once I add the function eligibility_rate() and show how to calculate it based on ineligible respondents, this will no longer be an issue. The 0.8 in the example is purely illustrative for the fictional case. The "small wrapper function" or() has an ambiguous sounding name. Perhaps something like get_rates() instead? Yes, sure. context() can be more useful as a descriptive string rather than the file name, which any test failure will report anyway Yep, good point. (test-params.R) It's good to assert the error message in expect_error() since helpful error messages are an important part of a user interface and can serve as a source of documentation, too. (cf. http://style.tidyverse.org/error-messages.html). As I mentioned above, I found at least one condition that errored but not in a way that would have helped me to understand what I had done wrong. Yes, good suggestion. "Description" needs to be more than one complete sentence. This is a good place for CRAN human checks to hold up accepting your package submission, so you should try to give them the prose they want. Agreed. Thanks for the catch. add Roxygen: list(markdown = TRUE), as suggested above, so you can use the markdown syntax in your roxygen docs. Thanks. "Suggests" includes survey but I don't see it used anywhere, though perhaps I'm missing it. I was originally going to use a dataset from the package but then changed my mind and made my own middleearth dataset. I will remove it. Definitely use NULL instead of NA for the default parameters. Sure. Similarly, for assert_weights consider giving a warning if any of the weights are NA Sure. When I was reading source code I initially didn't understand what fmat was. Even though the data structure is not exported, I encourage you to include source code that generated fmat in the "data-raw" folder as Hadley suggests here. I think it makes reading the source code easier if the code for all the data structures used is available. Yes, sure. I will add a raw data folder and docs for the fmat object. I don't think the apply used here is necessary. numden[,1]/numden[,2] gives you what you want but drops names. Up to you if you want to keep apply approach. I think I had implemented it this way to keep the names. I'll take a look at it again. Replacing tidyverse with the specific packages used in the vignette (i.e. dplyr, tidyr, ggplot2) is a good idea. Yes, will do. In the documentation for the fake middleearth dataset you should include a description of each variable. Good suggestion. Please change name of or function in vignette to something more informative. Yes, will do. As Neal already said some of the outcome.* definitions can be simplified. Sure nealrichardson commented Jun 4, 2018 Likewise, outcomerate.table <- outcomerate.numeric passes the tests too. One thing I am concerned about with this change is that the table would be coerced to an unnamed numeric vector, and that the order of the input then suddenly matters to the function output. As far as I know, this method definition doesn't coerce anything--it just says "use this function". A table behaves like a numeric vector with names, so the code in outcomerate.numeric just worked when given a table. When I tried it, all of your tests passed, so if there's some behavior that this changes or alters, it's not anything you're asserting in tests. If it's important, I suggest writing a test and seeing if it fails with this change. rtaph commented Jun 7, 2018 Dear @nealrichardson, @carlganz, and @karthik, I have implemented all recommendations to the package (ropensci/outcomerate@2a178ba), save for writing another vignette to explain the use of return_nd = TRUE which I hope to do soon. In the mean time, please feel free to review the changes and let me know if you are happy with the current state of the package. Thanks, karthik commented Aug 28, 2018 Thanks @rtaph! @carlganz and @nealrichardson can you please respond and sign off on this or raise any other concerns? LGTM nealrichardson commented Aug 30, 2018 Thanks for the reminder. Looks good to me too; I checked the final approval box on the checklist above. karthik commented Aug 31, 2018 • edited Congrats @rtaph, your submission has been approved! 🎉 Thank you for submitting and @carlganz and @nealrichardson for thorough and timely reviews. To-dos: Transfer the repo to the rOpenSci organization under "Settings" in your repo. I have invited you to a team that should allow you to do so. You'll be made admin once you do. Add the rOpenSci footer to the bottom of your README [![ropensci_footer](https://ropensci.org/public_images/ropensci_footer.png)](https://ropensci.org) Fix any links in badges for CI and coverage to point to the ropensci URL. (We'll turn on the services on our end as needed) Welcome aboard! We'd also love a blog post about your package, either a short-form intro to it (https://ropensci.org/technotes/) or long-form post with more narrative about its development. ((https://ropensci.org/blog/). If you are, @stefaniebutland will be in touch about content and timing. rtaph commented Sep 1, 2018 Thank you @karthik, I will do so shortly. Regarding a blog post, I would be happy to participate. @nealrichardson and @carlganz, with your permission, I would like to add your names to the DESCRIPTION file as reviewers. Do you have an ORCID you would like me to include? nealrichardson commented Sep 4, 2018 Thanks @rtaph. Sure, you can add me to DESCRIPTION as a reviewer. I don't have an ORCID. rtaph commented Sep 8, 2018 Done. Thanks again, @karthik, @nealrichardson, and @carlganz! closed this as completed Sep 8, 2018 stefaniebutland commented Sep 10, 2018 @rtaph Glad to hear you're interested in contributing a blog post about outcomerate. (Thanks for your patience with my delayed response) This link will give you many examples of blog posts by authors of onboarded packages so you can get an idea of the style and length you prefer: https://ropensci.org/tags/review/. Here are some technical and editorial guidelines: https://github.com/ropensci/roweb2#contributing-a-blog-post. We ask that you submit your draft post via pull request a week before the planned publication date so we can give you some feedback. Deadline? We can publish 2018-10-02 if you submit your draft by 2018-09-25. Let me know if you prefer a later date. Happy to answer any questions. rtaph commented Sep 12, 2018 @stefaniebutland The 2018-09-25 sounds good. I'll get started on something and let you know if I have any questions. Thanks :) stefaniebutland commented Sep 25, 2018 Hi @rtaph. Still hoping to publish your post next Tuesday Oct 2nd. Will you be able to submit a draft by Thurs Sep 27? rtaph commented Sep 25, 2018 Hi @stefaniebutland ! Yes, sorry for being a bit late. Will try to submit by Thursday :) rtaph commented Sep 29, 2018 @karthik, can you help me initiate a gh-pages branch to host the \docs directory as a pkgdown site? I am writing the rOpenSci onboarding blog and would like to hyperlink examples in the vignette. karthik commented Oct 1, 2018 @rtaph Done. You also have admin now to take care of other housekeeping. 🙏 rtaph commented Oct 1, 2018 Much appreciated, @karthik !
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.301133394241333, "perplexity": 2280.867489791941}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710710.91/warc/CC-MAIN-20221129164449-20221129194449-00025.warc.gz"}
http://mathoverflow.net/questions/32442/how-to-generate-a-net-on-a-8-dimensional-sphere/33773
# How to generate a net on a 8-dimensional sphere Using Matlab, how to generate a net of 3^10 points that are evenly located (or distributed) on the 8-dimensional unit sphere? - How are you measuring evenness? In any case, you could just pick them uniformly at random... – Mariano Suárez-Alvarez Jul 19 '10 at 5:12 Mariano, Probably use the euclidean distance to measure the evenness. But how to pick the points uniformly at random in matlab? – user7738 Jul 19 '10 at 6:14 Google for "sample points uniformly on a sphere" and choose the link to Mathoverflow offered by google :P – Mariano Suárez-Alvarez Jul 19 '10 at 6:26 MathOverflow is not a place where it is appropriate to ask for MATLAB code, but we welcome interesting questions in applied mathematics. I don't know the precise meaning of "evenly located" as you use it, but if you search for "mesh-generation algorithms" you will find a lot of information. In particular, I think Ruppert's algorithm can be adapted to your purposes, and you can set lower bounds on the dihedral angles you get. – S. Carnahan Jul 19 '10 at 12:57 If it's really important for the points to be evenly distributed, and you don't mind doing a lot of calculation to get them that way, you can start with a randomly distributed set and then iterate over the entire set repeatedly, allowing each point in turn to make whatever small adjustment improves your chosen definition of uniformity, and repeat this until the set of points converges. If you're even pickier than that, and not satisfied by just a locally optimal arrangement, the canonical next thing to try is simulated annealing. For picking points at random, I agree with Peter Shor that taking the time to implement a one-to-one volume-preserving map from a product of intervals to a high-dimensional sphere would be much more wasteful (of time; you would learn a lot) than throwing away 98% of your random numbers. It's an interesting question, though, whether systematically chosen points in a product of intervals can be well-distributed under one of these volume-preserving (but distance-destroying) maps. The first interesting case of such a map is the axial projection from the curved surface of a cylinder of height 2 and radius 1 to the surface of the unit sphere it contains: projecting straight to the axis, one direction gets stretched out in exact counterbalance to the compression of the other direction. Call the coordinates of the cylinder surface z ∈ [$-1$, $1$] and θ ∈ [$0$, $2\pi$]. Choosing an ordinary regular rectangular grid in z and θ does terrible things to the projection. On the other hand, for any $N$, setting zi = $(-N+2 i - 1)$/$N$ and θi = $2\pi (\phi i$ mod $1$), where $\phi$ is the golden mean, actually gives a very nice distribution of points after projection. It's possible that in any dimension there is such a lattice in the cube that projects nicely, for any N, to the sphere. - If you're looking for points on the 8-dimensional sphere, another thing you could do is go to Neil Sloane's table of spherical codes, scroll down until you get to dimension 8, and obtain a sphere covering which has 2160 points fairly evenly distributed (obtained from second shell of vectors in the E8 lattice). Now, if you apply $3^{10}/2160 \approx 27$ random orthogonal matrices to this set, you'll get a set of points distributed on the 8-dimensional sphere which is presumably somewhat more uniform than a set of random points. Since I don't know why you want these vectors, I don't know how much an improvement this is over random points, and whether it's worth all the extra work. You can get random orthogonal transformations by starting with an $8 \times 8$ matrix with random Gaussian entries. First, normalize the top row to make it a length 1 vector. Next, subtract a multiple of the top row from the second row to make it perpendicular to the top row, and then normalize to make the second row a length 1 vector, and so on. - All of these operations should be straightforward using MATLAB. – Peter Shor Jul 19 '10 at 17:34 In case the goal is to draw points inside the sphere, the discussion Intuitive proof that the first (n-2) coordinates on a sphere are uniform in a ball seems relevant. In other words, one simply draws random points on the 10-dimensional sphere (by drawing a normal vector and normalizing it) and discards the last two coordinates. - Forgetting Matlab, the 'best' way to...hold on, do you mean -in- the sphere or -on- the sphere? For -in- the sphere, create 8-tuples where each element is from the uniform distribution from 0 to 1. Ignore those tuples whose Euclidean norm $\sqrt{\sum x_i^2}$ is greater than 1. Do this until you have $3^{10}$ points. This is a uniform distribution over the sphere. For -on- the sphere, create 8-tuples as before, but then divide each point by the norm (of course throw out $\langle 0,0,0,0,0,0,0,0\rangle$ ). This will place a point on the surface. Do this $3^{10}$ times. This is not an exact uniform distribution but is a very good approximation to one, and is very easy to do. - Because the volume of an $n$-sphere is small when $n$ is large, I believe there will be significant inefficiencies your first method: more than 97% of the volume of the cube is exterior to the enclosed 8-ball. – Joseph O'Rourke Jul 19 '10 at 17:27 Actually, 98%, as Peter's calculation shows! – Joseph O'Rourke Jul 19 '10 at 17:29 If I did the calculation right, only 1.6% of the points given by this method will be inside the sphere. However, in this case the overhead of choosing 60 times as many points as you need is pretty much negligible compared to the cost of programming a better method. – Peter Shor Jul 19 '10 at 17:30 I believe you meant to say draw 8-tuples from the uniform distribution from $-1$ to $1$, otherwise you are picking from just 1/256th of the the volume of the 8-sphere by limiting it to the region of space with all positive coordinates. Alternately, you could pick from the uniform distribution from 0 to 1 but change your selection criteria to accepting only those points where $$\sqrt{\sum (x_i-0.5)^2} \le 0.5$$. – sleepless in beantown Aug 29 '10 at 14:40 In general, for generating extra-regular but not-too-regular distributions of points (in a technical sense, "low discrepancy", meaning that the variance in the length of gaps between points is smaller than a uniform distribution), you can use a class of methods called quasi monte carlo methods. There are libraries in MATLAB. http://en.wikipedia.org/wiki/Quasi-Monte_Carlo_method http://www.mathworks.com/matlabcentral/fileexchange/17457-quasi-montecarlo-halton-sequence-generator Though if you want a totally uniform set of points, these won't help you. - Talking out of my depth so I am not sure this work, but here is a start of an idea... Say you were finding points uniformly on $S^3$, couldn't you use Hopf fibration? The idea is you select Hopf fiber uniformly from $S^2$ then it is easy to populate points uniformly on the Hopf fiber, which is a circle in this case. Unfortunately, there is no such relation for $S^8$. Instead one could potentially use the following $S^1\hookrightarrow S^3 \rightarrow S^2$ and then $S^3\hookrightarrow S^7 \rightarrow S^4$ but then one needs to construct uniformly spaced points on $S^4$ from uniformly spaced points in $S^3$ and same for $S^8$ and $S^7$, which doesn't seem that hard. - Shells of evenly spaced lattice points: To generate evenly spaced sets of non-random points on an n-sphere, start with the permutations of { 0 1 1 ... 2 2 ... }, then make 2^n flips of that. For example, in 4-space start with the 12 permutations of { 0 1 1 2 }. Each point is √6 from the origin, and each has 4 neighbours √2 away (+1 here, -1 there): 0 1 1 2 0 1 2 1 0 2 1 1 1 0 1 2 1 1 0 2 Make 2^4 sign-flipped copies of this, i.e. multiply by { 1 1 1 1 } .. { -1 -1 -1 -1 } except where there's a 0. This gives a shell of 96 points, 0 1 1 2 .. 0 -1 -1 -2. Each is √6 from the origin, and each now has 6 neighbours √2 away. For the 8-sphere, start with the 280 permutations of { 0 1 1 1 1 2 2 2 }. Each has of course the same distance from the origin, and each has 12 neighbours √2 away — a nice, regular graph. The shell of 280 * 2^7 = 35840 sign-flipped points is not quite 3^10, but. (I'd appreciate links to papers or programs on such graphs.) -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8052289485931396, "perplexity": 361.34522478082823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392069.78/warc/CC-MAIN-20160624154952-00122-ip-10-164-35-72.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/223583/sines-and-cosines-law
# sines and cosines law Can we apply the sines and cosines law on the external angles of triangle ? - This answer assumes a triangle with angles $A, B, C$ with sides $a,b,c$. Law of sines states that $$\frac{\sin A}{a} = \frac{\sin B}{b} = \frac{\sin C}{c}$$ Knowing the external angle is $\pi - \gamma$ if the angle is $\gamma$, $\sin(\pi-\gamma) = \sin \gamma$ because in the unit circle, you are merely reflecting the point on the circle over the y-axis and the sine value represents the $y$ value of the point, so the $y$ value will remain the same. (Also, $\sin(\pi-\gamma) = \sin\pi \cos \gamma - \cos \pi \sin \gamma = \sin \gamma$), so $$\frac{\sin A}{a} = \frac{\sin B}{b} = \frac{\sin C}{c} = \frac{\sin (\pi -A)}{a} = \frac{\sin (\pi -B)}{b} = \frac{\sin(\pi - C)}{c}$$ The law of cosines state that $$c^2=a^2+b^2-2ab\cos\gamma \Longrightarrow \cos \gamma = \frac{a^2 + b^2 - c^2}{2ab}$$ But $\cos(\pi-\gamma) = -\cos\gamma$ for pretty much the same reasons I used for the sine above. So if you are using external angles, the law of cosines will not work. However, if $\pi - \gamma$ is the external angle, then $$c^2=a^2+b^2+2ab\cos(\pi-\gamma) \Longrightarrow \cos(\pi- \gamma) = -\frac{a^2 + b^2 - c^2}{2ab}$$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9671720266342163, "perplexity": 189.26010566163725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645264370.66/warc/CC-MAIN-20150827031424-00171-ip-10-171-96-226.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/285934/orders-of-reductions-of-rational-points-on-elliptic-curves
Orders of reductions of rational points on elliptic curves I am looking for references where the following (or similar questions) have been studied: Let $K$ be a number field or a function field in one variable over a finite field and let $E$ be an elliptic curve (or more generally, an abelian variety) over $K$. If $x \in E(K)$ is a point of infinite order then the order of its reduction modulo a good prime tends to infinity with the order of the residue field. Are there any results that are known about the prime factorisation of the order of the reduction of $x$? For example, is it known that there is an infinite sequence of rational primes $p_i$ and primes $P_i$ of (the ring of integers of) $K$ such that $p_i$ divides the order of the reduction of $x$ modulo $P_i$? I would also be interested in similar statements for the order of the group of rational points on the reduction of any elliptic curve $E$ modulo primes of $K$. (I expect that much stronger results should be true, but don't know the literature in this area.) • Maybe the results on elliptic divisibility sequences could help en.wikipedia.org/wiki/Elliptic_divisibility_sequence. – François Brunault Nov 13 '17 at 8:13 • @FrançoisBrunault: They do indeed. More specifically, an application of Siegel's integrality finiteness theorem as in section 2 of Silverman's Wieferich's criterion and the $abc$ conjecture is enough to give infinitely many $p_i$ (ineffectively). Much more can be said, as I tried to indicate in my answer. – Vesselin Dimitrov Nov 13 '17 at 8:30 For example, is it known that there is an infinite sequence of rational primes $p_i$ and primes $P_i$ of (the ring of integers of) $K$ such that $p_i$ divides the order of the reduction of $x$ modulo $P_i$? A lot more can be said. Conditionally on the GRH for Dedekind zeta functions, Miri and Murty proved that $|E(\mathbb{F}_P)|$ has at most $16$ prime factors (counting multiplicities!) for $\gg_E X / (\log{X})^2$ of the primes $P$ of norm $N(P) \leq X$. This they did by adapting Chen's method for his almost twin primes theorem; note that the problem of getting infinitely many prime orders of $E(\mathbb{F}_P)$ could be regarded as an elliptic variant of the twin prime problem. See Theorem 5 with the original reference [25] in this paper of Cojocaru, as well as this paper of hers for an unconditional proof in the CM case that $|E(\mathbb{F}_P)|$ is essentially squarefree infinitely often (with the right density in fact). • Thanks a lot! The Miri--Murty result is exactly the kind of statement that I was hoping would be true. (In dynamical terms, I expect that if $f:X \to X$ is a selfmap of a variety over a number/function field $K$ and $x \in X(K)$ is a point of infinite order, then there exists an integer $n>0$ and infinitely many primes $P_i$ of $K$ such that the reduction of $f^n(x)$ modulo $P_i$ is a periodic point.) – ulrich Nov 13 '17 at 9:56 • I've been looking at the proof of the Miri--Murty theorem and have some doubts. They claim (bottom of p. 95) that there are many primes $p$ so that $|E(\mathbb{F}_p)|$ has only one "small" prime factor. However, $|E(\mathbb{Q})|$ could have more than one prime factor so I don't see how their claim could possibly be correct (unless they are assuming that $E(\mathbb{Q})$ is trivial, but this is not stated explicitly anywhere). – ulrich Nov 17 '17 at 4:50 • Sorry, $E(\mathbb{Q})$ should have been $E(\mathbb{Q})_{tors}$. – ulrich Nov 17 '17 at 5:21 • I don't have an access to their paper at the moment, but as $|E(\mathbb{Q})_{\mathrm{tors}}|$ is in any case uniformly bounded, surely such a condition can't make much of a difference? Could they intend e.g. only one small prime factor $q > 11$, or not dividing the order of the torsion subgroup? – Vesselin Dimitrov Nov 17 '17 at 5:28 • I do want a stronger statement than what I had written: what I really need is that (for fixed $E$) given any integer $m > 0$, there exists an infinite set of primes $T_m$ so that the $m$-primary part of the order of $E(\mathbb{F}_p)$ is bounded above by a constant independent of $p \in T_m$. This follows immediately from Miri--Murty, but is of course much weaker. – ulrich Nov 19 '17 at 10:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9303139448165894, "perplexity": 137.2636059818777}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517639.17/warc/CC-MAIN-20190418121317-20190418143317-00223.warc.gz"}
http://mathhelpforum.com/discrete-math/6625-asymptotic-help-needed.html
# Math Help - Asymptotic help needed 1. ## Asymptotic help needed Say you have a fuction for any non negative whole number, whereby it equals n + (n-1) + (n-2) + (n-3) + ... + (1) I'm trying to represent this as an asymptotic answer, is anyone able to point me in the right direction? 2. Originally Posted by Wiretron Say you have a fuction for any non negative whole number, whereby it equals n + (n-1) + (n-2) + (n-3) + ... + (1) I'm trying to represent this as an asymptotic answer, is anyone able to point me in the right direction? n + (n-1) + (n-2) + (n-3) + ... + (1) = n(n+1)/2 as n becomes large n becomes negligable compared to n^2 so: n + (n-1) + (n-2) + (n-3) + ... + (1) = n(n+1)/2 = (n^2)/2 +n/2 ~ (n^2)/2, or in big-O notations: ..................................................= O(n^2). RonL
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.911275327205658, "perplexity": 876.7163476904799}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736682773.27/warc/CC-MAIN-20151001215802-00000-ip-10-137-6-227.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/53673/why-is-it-ok-to-rely-on-the-fundamental-theorem-of-arithmetic-when-using-g%c3%b6del-n?answertab=active
Why is it OK to rely on the Fundamental Theorem of Arithmetic when using Gödel numbering? Gödel's original proof of the First Incompleteness theorem relies on Gödel numbering. Now, the use of Gödel numbering relies on the fact that the Fundamental Theorem of Arithmetic is true and thus the prime factorization of a number is unique and thus we can encode and decode any expression in Peano Arithmetic using natural numbers. My question is, how can we use a non-trivial result like the Fundamental Theorem of Arithmetic in the meta-language that describes Peano Arithmetic, when, the result actually requires a proof from within Peano Arithmetic itself, not like other trivial things we believe to be true (i.e. existence of natural numbers and the axioms for addition and multiplication which we want to interpret in the natural way - Platonism)? I understand we could instead use a different way of enumeration, e.g. a pairing function and the Chinese Remainder Theorem or simply string concatenation, but, then the need for a proof of uniqueness when encoding and decoding remains and in general, I am interested in the structure of Gödel's original proof. Basically, I have two ideas of how it might be possible to resolve this: 1. Prove the Fundamental Theorem of Arithmetic within a different sound (?) system. 2. Maybe there is nothing needed to be done to 'resolve' this, because I am just misinterpreting something and it is actually acceptable to use provable sentences of PA in the meta-language. EDIT: I have realized how to make my question less confusing: Say PA proves FTA. Then if we only assume PA is consistent, that does not rule out the possibility of FTA being false. Now, if FTA is false, then PA and the meta-language too includes a false statement and thus the whole proof is useless. How is this resolved? Is it maybe related to the fact the originally we actually assume $\omega$-consistency and obviously, for each natural number $n$ separately its unique prime factorization can simply be found algorithmically? - The theorem states that your theory cannot be both consistent and complete, so it's a win-win. I don't see the problem. –  Thierry Zell Jan 29 '11 at 4:28 It has been a while since I worked through that proof, but is the unique factorization really needed or just a canonical one (without the fact that as it happens it is unique)? What if you use a greedy factorization where you divide out all 2's you can then all 3's (from what is left) then all 4's etc. That gives a code. –  Aaron Meyerowitz Jan 29 '11 at 20:51 Why are number theorists allowed to use numbers? Shouldn't they first establish that it is safe to use them? –  Andrej Bauer Jan 30 '11 at 8:32 The point of my question is this: if all other mathematicians are allowed to take numbers for granted, why isn't Gödel? All he does is apply the mathematical method to the study of a certain system for manipulation of symbols known as "first order logic" and "Peano arithmetic". The subject of his study is no less mysterious, magical, or "fundamental" as all the other subjects of study in mathematics. –  Andrej Bauer Jan 30 '11 at 18:49 And I am sure Gödel knew how to prove the fundamental theorem of arithmetic. Perhaps I should have been more precise: when Gödel does mathematics is allowed to use whatever mathematics is considered "standard" by his peers. My pet peeve is this: why do "ordinary" mathematicians think that "logicians" must secure the ground on which they stand, when those same mathematicians stand on the same ground together with logicians? –  Andrej Bauer Jan 31 '11 at 13:40 Just like most mathematical theorems, you can formalize Godel's Theorems in some first order language (with some "standard" interpretation under which the formalization means what it's supposed to mean), turn the proof into a purely syntactic string of formulas, and figure out which formulas in that first order language are needed as axioms. I'm embarrassed to say I don't know exactly how strong the assumptions we need are to carry out the proof of Godel's Theorems, but there will be some weak fragment of ZFC probably not much stronger than PA which will do. So we would be using a theory slightly stronger than PA to establish the incompleteness of PA, but why should that be a problem? The axioms needed for the proofs of Godel's Theorems are probably pretty natural, probably pretty close to PA, and probably have a natural interpretation. If you believe these axioms have this interpretation, then you would have no problem with Godel's proofs or the interpretation of the theorems. If not, then you're probably pretty close to believing PA is inconsistent, in which case you would probably: 1. Accept that the formalized versions of Godel's Theorems follow from whatever axioms are used, but only because you believe those axioms are inconsistent. 2. Deny that the formalized versions of Godel's Theorems mean what they're supposed to mean, and just regard what's happening in point 1 as a valid string of symbolic manipulations. 3. Accept the natural language meaning of Godel's Theorems, in spite of point 2, for trivial reasons, since they say, "if PA is consistent, ..." EDIT: (in response to your edit) So we're assuming PA proves FTA, PA is consistent, and FTA might be false? What do you mean by "false," you mean false in the standard intepretation? In that case, PA would be false in the standard interpretation. Now if we take Godel's first theorem to say, "If PA is consistent, then there is a true formula in the standard interpretation which is not provable from PA," then this meta-theorem is certainly true. EDIT: Ignas requested an explanation of some of the basics to make sense of my claim, "If PA proves FTA, and FTA fails in the standard model, then so does PA." It's too big to fit in a comment so I'm adding it to my response: Let $\mathcal{L}$ denote the first order language of number theory, we'll have lower case Greek letters vary over sentences of $\mathcal{L}$, upper case Greek letters vary over sets of sentences of $\mathcal{L}$, and upper case Roman letters vary over $\mathcal{L}$ structures. We write $M \vDash \varphi$ to denote that $\varphi$ is true in the model $M$, i.e. when its symbols are interpreted according to $M$. Tarski's definition of truth for a sentence in a given model is by recursion on the complexity of the sentence. We write $M \vDash \Sigma$ if every member of $\Sigma$ is true in $M$. We write $\Sigma \vDash \varphi$ if for every $M$, $M \vDash \Sigma$ implies $M \vDash \varphi$, i.e. if every model of $\Sigma$ is also a model of $\varphi$. For provability, we write $\Sigma \vdash \varphi$ to say that there is a (finite) proof of $\varphi$ using (finitely many) sentences from $\Sigma$ as axioms. The Soundness Theorem states that for all $\Sigma ,\ \varphi$, if $\Sigma \vdash \varphi$ then $\Sigma \vDash \varphi$. It's this theorem, with PA in place of $\Sigma$ and FTA in place of $\varphi$, that I'm using to establish the claim you're asking about. The converse of this theorem is also true; it's Godel's Completeness Theorem. Putting these two theorems together, they say that the relations $\vdash$ and $\vDash$ are the same relation between sets of sentences and sentences. One (perhaps not immediately obvious) way to rephrase this is, "being true in every model is the same as being provable from no axioms." Contrast this Godel's Incompletness Theorem, which says that "being true in the standard model of number theory is not the same as being provable from PA." - I am pretty sure that the incompleteness theorems can be proved in PA. I.e., you can prove something like "if ZFC is consistent then so is ZFC+$\neg$Con(ZFC)" in PA (the same for ZFC replaced by weaker theories such as PA itself). –  Stefan Geschke Jan 29 '11 at 5:58 I guess it depends on how you word Godel's Theorem. Take the first one for instance. If it says something like, "if basic arithmetic is consistent, then there's a formula that's true in $\mathbb{N}$ but not provable from basic arithmetic" then to express this you need some theory capable of expressing "true in $\mathbb{N}$." PA can't even talk about $\mathbb{N}$. I suppose there may be equivalent formluations of Godel's Theorem which sidestep the need to talk about $\mathbb{N}$, but then the question becomes: What axioms do you need to prove that equivalence? –  Amit Kumar Gupta Jan 29 '11 at 6:09 I see. This is not a problem if you formulate the first incompleteness theorem as something like "if T is r.e. axiomatizable, consistent, and allows representations of recursive functions and relations, then T is incomplete". This gets you that PA is incomplete (if consistent, which we all believe to be the case) and once you are in a theory that allows it to talk about $\mathbb N$ in any reasonable way, then you get the theorem in the form that you quote. –  Stefan Geschke Jan 29 '11 at 6:23 @Amit Kumar Gupta: whatever metatheory you formalize the proof in can talk about $\mathbb{N}$, though. –  Carl Mummert Jan 30 '11 at 23:14 Regarding your edit: Yes, I meant false under standard interpretation. What do you mean by "PA would be false"? Inconsistent? Well, it does not have to be as long as PA does not prove $\neg$FTA. What am I misunderstanding here? –  Ignas Jan 31 '11 at 0:32 Here is a way to think about it. When you use something like Fund Theorem of Arithmetic to prove Godel numbering, you are referring to the abstract set $\mathbf N$, which "exists". The Fundamental Theorem of Arithmetic is simply true about this set, irrespective of any logical system which one is constructing. If you like, you can prove the Fundamental Theorem of Arithmetic, more or less rigorously, in a metalanguage. But the proof in this metalanguage is completely orthogonal to any proofs one might construct in the logical system, such as Peano axioms. - This may well be true, but it is not the issue at hand. The numbering of formulas and its reliance or not on the FTA happens inside the theory (as explained by the answers below). –  Andres Caicedo Jan 29 '11 at 22:14 Oh no no no, that would be pretty much the worst case, since claiming that FTA is "simply true" would render the whole proof rather vague and non-rigorous. And on the contrary, as Andres pointed out, the proofs in the meta-language are actually established within PA itself, since PA is the meta language too. –  Ignas Jan 30 '11 at 1:39 The Godel incompleteness theorem shows that $Th(\cal N)$ is undecidable. Any first-order statement, such as FTA, which is true on $\mathbf N$ is part of $Th(\cal N)$. Proving that it is part of $Th(\cal N)$ can be regarded as simply a fact (which one prove if one wishes). –  David Harris Jan 31 '11 at 23:24 Stefan has explained the essential points well, but let me add some details and a reference. For the first incompleteness theorem, one can work with a very weak theory of arithmetic. One needs to be able to define the relevant recursive predicates and functions (in particular those that describe the coding process) and prove that these represent, in the theory under consideration, the intended functions and relations. Here "represent" is a rather weak requirement. A formula $\phi(x,y)$ represents a function $f$ in a theory $T$ if, for each natural number $n$, $T$ proves the formula $(\forall y)(\phi([n],y)\iff y=[f(n)])$, where the square brackets mean the numeral for the number inside. (Note that $T$ need not prove $\forall x\exists y\phi(x,y)$.) A weak theory like Robinson's Q suffices to represent all recursive functions and predicates, so the first incompleteness theorem applies to all theories with at least the strength of Q. For the second incompleteness theorem, on the other hand, one needs to prove, in the theory under consideration, some basic properties of the coding, for example that concatenations can always be coded. PA suffices for this (and so do some weaker theories; $\Sigma^0_1$ induction seems to suffice). So the second incompleteness theorem is usually stated for theories that have at least the strewngth of PA. The details of what needs to be proved, as well as most of the details of the proofs, can be found in Shoenfield's book "Mathematical Logic"; I believe they're also in Goedel's original paper. (I'm away from home and writing this on an unfamiliar computer, which I can't persuade to preview any of my TeX code, so I apologize if it contains errors.) - @Andreas , do you know of a reference that pursues the question of how little/much of PA is actually required? I seem to recall "Metamathematics of first order arithmetic" treats some of this, but don't recall the explicit suggestion that $\Sigma^0_1$ induction seems enough (I do not have the book with me, though, so I cannot currently check). –  Andres Caicedo Jan 29 '11 at 20:59 I learned only recently that Pudlak proved in 1985, by model-theoretic techniques, that Q does not prove Con(Q), along with finer results. This is in the paper "Cuts, consistency statements and interpretations" from the JSL volume 50. I don't think that anyone has proved a sort of reverse-mathematics result that characterizes the amount of induction required to verify the Hilbert-Bernays conditions. $Sigma^0_1$ induction is certainly enough, but I don't remember about weaker theories at the moment. –  Carl Mummert Jan 30 '11 at 23:12 @Carl, many thanks! I'll check the reference. –  Andres Caicedo Jan 31 '11 at 1:03 Feferman proves a version of the second incompleteness theorem in his system $FS_0$: citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.6331 I don't think $FS_0$ is weaker than $\Sigma_1$ induction though. –  Timothy Chow Jan 31 '11 at 15:58 It's a trick of encoding. You encode the formulas of the theory into a fixed collection of integers which have a certain form. Now, you have defined somewhere in the background a natural notion of multiplication, addition, and equality of integers. So you are free to ask questions about whether or not two numbers are equal, and whether or not for some number $x$, there is a formula in the language of PA, for which $x$ is the encoding of, and this allows you to go backwards, in the metatheory. (that is take a number and see if it encodes a sentence) That being said, you are never starting with a number and going backwards in the formal theory (that is to say turning it into the sentence it encodes, as this would require FTA), you are always going forward, taking the Gödel encoding of a given sentence. It's this careful dance and intermingling of the syntax and semantics that makes the proof of this theorem so much fun to go over. Hope this clears it up a little bit. I know it probably doesn't completely answer your question. - I am not absolutely sure I understand your problems with the meta-language and such. You take your favorite base theory, let's say PA or something stronger like ZFC. But I am quite sure that PA is enough. You prove the fundamental theorem of arithmetic in PA (or rather, a la Goedel, you prove that the $\beta$-function does what it is supposed to do). This tells you that you can code finite sequences and hence that you can talk about strings, languages, and proofs. Now you continue and define a formula Con(PA) that expresses that PA is consistent. In the same way you can define a formula Con(PA+$\neg$Con(PA)), that expresses the obvious thing. Note that the PA that appears in these formulas is not the meta-mathematical PA but the internal PA of the model that you are in. Anyhow. This is just the usual confusion that arises with the incompleteness theorems. Now you prove, still only using PA, the statement "Con(PA) implies Con(PA+$\neg$Con(PA))" and you are done. This is a purely number theoretic proof. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9066017270088196, "perplexity": 310.7139392045057}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447558026.18/warc/CC-MAIN-20141224185918-00092-ip-10-231-17-201.ec2.internal.warc.gz"}
https://fi.episciences.org/9349
## Martin Bača ; Andrea Semaničová-Feňovčíková ; Ruei-Ting Lai ; Tao-Ming Wang - On local antimagic vertex coloring for complete full $t$-ary trees fi:9336 - Fundamenta Informaticae, May 6, 2022, Volume 185, Issue 2 On local antimagic vertex coloring for complete full $t$-ary trees Authors: Martin Bača ; Andrea Semaničová-Feňovčíková ; Ruei-Ting Lai ; Tao-Ming Wang Let $G = (V, E)$ be a finite simple undirected graph without $K_2$ components. A bijection $f : E \rightarrow \{1, 2,\cdots, |E|\}$ is called a local antimagic labeling if for any two adjacent vertices $u$ and $v$, they have different vertex sums, i.e., $w(u) \neq w(v)$, where the vertex sum $w(u) = \sum_{e \in E(u)} f(e)$, and $E(u)$ is the set of edges incident to $u$. Thus any local antimagic labeling induces a proper vertex coloring of $G$ where the vertex $v$ is assigned the color (vertex sum) $w(v)$. The local antimagic chromatic number $\chi_{la}(G)$ is the minimum number of colors taken over all colorings induced by local antimagic labelings of $G$. It was conjectured \cite{Aru-Wang} that for every tree $T$ the local antimagic chromatic number $l+ 1 \leq \chi_{la} ( T )\leq l+2$, where $l$ is the number of leaves of $T$. In this article we verify the above conjecture for complete full $t$-ary trees, for $t \geq 2$. A complete full $t$-ary tree is a rooted tree in which all nodes have exactly $t$ children except leaves and every leaf is of the same depth. In particular we obtain that the exact value for the local antimagic chromatic number of all complete full $t$-ary trees is $l+1$ for odd $t$. Volume: Volume 185, Issue 2 Published on: May 6, 2022 Accepted on: April 12, 2022 Submitted on: April 12, 2022 Keywords: Mathematics - Combinatorics
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7071276307106018, "perplexity": 474.58094841467005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337490.6/warc/CC-MAIN-20221004085909-20221004115909-00614.warc.gz"}
https://socratic.org/questions/what-is-first-order-half-life-derivation
Chemistry Topics # What is first order half life derivation? Jul 19, 2018 Well, let us begin from the rate law... Is the FIRST-order half-life dependent on concentration? The first-order rate law for the reaction $A \to B$ is $r \left(t\right) = k \left[A\right] = - \frac{d \left[A\right]}{\mathrm{dt}}$ where: • $r \left(t\right)$ is the rate as a function of time, which we take to be the initial rate in $\text{M/s}$ • $k$ is the rate constant. What are the units? You should know this. • $\left[A\right]$ is the concentration of a reactant $A$ in $\text{M}$. • $\frac{d \left[A\right]}{\mathrm{dt}}$ is the rate of disappearance of reactant $A$, a NEGATIVE value. But the negative sign forces the rate to be positive, fitting for a FORWARD reaction. By separation of variables: $- k \mathrm{dt} = \frac{1}{\left[A\right]} d \left[A\right]$ Integrate on the left from time zero to time $t$, and the right from initial concentration ${\left[A\right]}_{0}$ to current concentration $\left[A\right]$. We obtain: $- k {\int}_{0}^{t} \mathrm{dt} = {\int}_{{\left[A\right]}_{0}}^{\left[A\right]} \frac{1}{\left[A\right]} d \left[A\right]$ $- k t = \ln \left[A\right] - \ln {\left[A\right]}_{0}$ Therefore, the first-order integrated rate law is: $\textcolor{g r e e n}{\ln \left[A\right] = - k t + \ln {\left[A\right]}_{0}}$ As we should recognize, the half-life is when the concentration drops by half. Hence, we set ${\left[A\right]}_{1 / 2} \equiv 0.5 {\left[A\right]}_{0}$ to get: $\ln 0.5 {\left[A\right]}_{0} = - k {t}_{1 / 2} + \ln {\left[A\right]}_{0}$ $\ln 0.5 {\left[A\right]}_{0} - \ln {\left[A\right]}_{0} = - k {t}_{1 / 2}$ $\ln \setminus \frac{0.5 \cancel{{\left[A\right]}_{0}}}{\cancel{{\left[A\right]}_{0}}} = - k {t}_{1 / 2}$ $- \ln 0.5 = \ln 2 = k {t}_{1 / 2}$ As a result, the half-life is given by... $\textcolor{b l u e}{\overline{\underline{|}} \stackrel{\text{ ")(" "t_(1//2) = (ln2)/k" }}{|}}$ ##### Impact of this question 357 views around the world
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 23, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9829522371292114, "perplexity": 1061.851284490978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669276.41/warc/CC-MAIN-20191117192728-20191117220728-00039.warc.gz"}
http://ccms.claremont.edu/ancs-seminar/TBA-134
## Belyi maps on elliptic curves and dessin d'enfant on the torus 10/13/2015 - 12:15pm 10/13/2015 - 1:10pm Speaker: Edray Goins (Purdue University) Abstract: A Belyi map $\beta: P^1(C) \to P^1(C)$ is a rational function with at most three critical values; we may assume these values are ${0, 1, \infty }$. A Dessin d'Enfant is a planar bipartite graph obtained by considering the preimage of a path between two of these critical values, usually taken to be the line segment from 0 to 1. Such graphs can be drawn on the sphere by composing with stereographic projection: $\beta^{-1} ([0,1]) \subseteq P^1(C) \simeq S^2(R)$. Replacing $P^1$ with an elliptic curve $E$, there is a similar definition of a Belyi map $\beta: E(C) \to P^1(C)$. The corresponding Dessin d'Enfant can be drawn on the torus by composing with an elliptic logarithm: $\beta^{-1} ([0,1]) \subseteq E(C) \simeq T^2(R)$. In this talk, we discuss the problems of (1) constructing examples of Belyi maps for elliptic curves and (2) drawing Dessins d'Enfants on the torus. This work is part of PRiME (Purdue Research in Mathematics Experience) with Leonardo Azopardo, Sofia Lyrintzis, Bronz McDaniels, Maxim Millan, Yesid Sanchez Arias, Danny Sweeney, and Sarah Thomaz with assistance by Hongshan Li and Avi Steiner. Where: Millikan 2099, Pomona College
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8010345101356506, "perplexity": 804.8481536535219}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806421.84/warc/CC-MAIN-20171121170156-20171121190156-00437.warc.gz"}
https://www.physicsoverflow.org/20197/what-current-state-research-about-hayden-preskill-circuit
# What is the current state of research about the Hayden-Preskill circuit? + 1 like - 0 dislike 160 views Can someone summarize as to what are the problems and/or the open questions with the Hayden-Preskill circuit? (in the context of understanding black-holes or as a computer science question)It gives a framework to see how the black-hole can thermalize in a $log (entropy)$ time scale. What next? To start off, The sort of problem with the Hayden-Preskill circuit is that it works via random unitary transformations on disjoint pairs of qubits and hence doesn't have an Hamiltonian interpretation. I believe one has always believed that matrix models (of matrix size $\sqrt{entropy}$) can saturate this logarithmic thermalization bound via Hamiltonian evolution. • Has this above belief about the matrix models been proven yet? • Is there an intuitive explanation as to why a theory with finite dimensional matrices should behave like an infinite dimensional system? This post imported from StackExchange Physics at 2014-07-13 09:10 (UCT), posted by SE-user user6818 Please use answers only to (at least partly) answer questions. To comment, discuss, or ask for clarification, leave a comment instead. To mask links under text, please type your text, highlight it, and click the "link" button. You can then enter your link URL. Please consult the FAQ for as to how to format your post. This is the answer box; if you want to write a comment instead, please use the 'add comment' button. Live preview (may slow down editor)   Preview Your name to display (optional): Email me at this address if my answer is selected or commented on: Privacy: Your email address will only be used for sending these notifications. Anti-spam verification: If you are a human please identify the position of the character covered by the symbol $\varnothing$ in the following word:p$\hbar$ys$\varnothing$csOverflowThen drag the red bullet below over the corresponding character of our banner. When you drop it there, the bullet changes to green (on slow internet connections after a few seconds). To avoid this verification in future, please log in or register.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47012969851493835, "perplexity": 1558.3965123063733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743717.31/warc/CC-MAIN-20181117164722-20181117190722-00172.warc.gz"}
https://arxiv.org/abs/0906.4725v1
quant-ph (what is this?) (what is this?) # Title: Interacting Quantum Observables: Categorical Algebra and Diagrammatics Abstract: Within an intuitive diagrammatic calculus and corresponding high-level category-theoretic algebraic description we axiomatise complementary observables for quantum systems described in finite dimensional Hilbert spaces, and study their interaction. We also axiomatise the phase shifts relative to an observable. The resulting graphical language is expressive enough to denote any quantum physical state of an arbitrary number of qubits, and any processes thereof. The rules for manipulating these result in very concise and straightforward computations with elementary quantum gates, translations between distinct quantum computational models, and simulations of quantum algorithms such as the quantum Fourier transform. They enable the description of the interaction between classical and quantum data in quantum informatic protocols. More specifically, we rely on the previously established fact that in the symmetric monoidal category of Hilbert spaces and linear maps non-degenerate observables correspond to special commutative $\dag$-Frobenius algebras. This leads to a generalisation of the notion of observable that extends to arbitrary $\dag$-symmetric monoidal categories ($\dag$-SMC). We show that any observable in a $\dag$-SMC comes with an abelian group of phases. We define complementarity of observables in arbitrary $\dag$-SMCs and prove an elegant diagrammatic characterisation thereof. We show that an important class of complementary observables give rise to a Hopf-algebraic structure, and provide equivalent characterisations thereof. Comments: 58 pages, many figures Subjects: Quantum Physics (quant-ph); Logic in Computer Science (cs.LO); Category Theory (math.CT); Quantum Algebra (math.QA) Cite as: arXiv:0906.4725 [quant-ph] (or arXiv:0906.4725v1 [quant-ph] for this version) ## Submission history From: Ross Duncan [view email] [v1] Thu, 25 Jun 2009 15:58:11 GMT (2780kb,D) [v2] Mon, 31 Jan 2011 13:49:04 GMT (1158kb,D) [v3] Thu, 21 Apr 2011 14:18:07 GMT (2275kb,D)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.808012843132019, "perplexity": 1740.140125525659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320532.88/warc/CC-MAIN-20170625134002-20170625154002-00302.warc.gz"}
https://socratic.org/questions/how-do-you-graph-2tan3-x-30-1
Trigonometry Topics # How do you graph -2tan3(x - 30) + 1? Jul 29, 2018 See graph and details. #### Explanation: $y = - 2 \tan \left(3 \left(x - \frac{\pi}{6}\right)\right) + 1$, asymptotic $3 \left(x - \frac{\pi}{6}\right) \ne \left\{\left(2 k + 1\right) \frac{\pi}{2}\right\} , k = 0 , \pm 1 , \pm 2 , \pm 3 , \ldots$ $\Rightarrow x \ne \left\{\left(2 k + 2\right) \frac{\pi}{6}\right\} = \left\{2 k \left(\frac{\pi}{6}\right)\right\} = \left\{k \left(\frac{\pi}{3}\right)\right\}$ The period = the period of $\tan 3 x = \frac{\pi}{3}$. See graph, with asymptotes. graph{(1/2(y-1)cos(3(x+pi/4))+sin(3(x+pi/4)))(x-1/3pi+0.0001y)(x+1/3pi+0.0001y)(x-2/3pi+0.0001y)(x+2/3pi+0.0001y)(x-pi+0.0001y)(x+pi+0.0001y)=0[ -4 4 -2 2]} ##### Impact of this question 709 views around the world
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3755990266799927, "perplexity": 13101.057656567165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058263.20/warc/CC-MAIN-20210927030035-20210927060035-00270.warc.gz"}
https://the-learning-machine.com/article/ml/model-selection
# Model selection ## Introduction Having trained several predictive models on available datasets, how do we know which of the trained models is a better performer compared to the rest? The act of choosing better models is known as model selection in machine learning. Model selection can help in choosing better hyperparameters of the same modeling family. For example, choosing the value of number of neighbors, $K$, in $K$-nearest neighbors is essential for getting good predictive performance. Model selection is also useful in comparing across multiple model families. For example, whether a support vector machine or a decision tree is a better predictive model for a task can be addressed using model selection strategies. In this article, we will explore the recommended strategies for performing model selection in supervised learning settings. ## Prerequisites To understand model selection strategies, we recommend familiarity with the concepts in It would also help to have some familiarity with some machine learning models for classification and regression, such as Follow the above links to first get acquainted with the corresponding concepts. ## The model selection recipe Model selection is straightforward. • To choose suitable settings for hyperparameter, select those that can help the model achieve the best predictive performance. • When choosing among several model families, again, select the family that (after hyperparameter tuning) with the best predictive performance. Seems, the most important step in model selection is actually estimating predictive performance. We have explained an elaborate list of evaluation metrics for classification and similarly, for regression. So, we have metrics for predictive performance. But how do we measure it? A naive approach would be evaluating predictive performance on training data. The problem with this approach is that the model has already seen all examples from the training set. The model may have just memorized a direct mapping from input instance to its output target variable, without learning a general signature or pattern for this mapping. Such a model will have superb predictive performance on the training data, but miserable performance on future unseen examples. An alternative strategy might involve splitting the dataset into two parts — a training set and a testing set. As the name implies, we train the model on the training set and evaluate its predictive performance on the testing set. Although better than the previous naive approach, this train-test splitting strategy has a problem — the predictive performance is specific to the testing set. If the test set is not big enough, it may not represent the variety of data the model may encounter in the future. We need a better strategy that ensures that the estimated predictive performance is generalized to multiple testing sets, instead of a single test set. The strategy of cross-validation offers this better strategy. ## Cross-validation Cross-validation generalizes the idea of training/testing splits in a principled way. The available supervised data is shuffled and split into $K$-folds, each containing approximately same number of observations. This splitting automatically leads to $K$ training/testing splits — for the $i$-th training/testing split, consider the $i$-th fold as the testing set and the remaining folds as the training set. The model's predictive performance is then estimated as the average of the evaluations from these $K$ training/testing splits. This strategy is known as $K$-fold cross-validation. Why is this better than just randomly training/testing multiple times on the overall set? Theory aside, $K$-fold cross-validation guarantees that every observation in the available labeled set appears in some test set. Such a guarantee is not possible with multiple random splits. ## Leave-one-out cross-validation An extreme case of $K$-fold cross-validation uses $K = 1$. With only one observation in each fold, this approach to evaluation is known as leave-one-out cross-validation (LOOCV). The benefit of LOOCV over going with higher values of $K$ is the availability of a relatively larger dataset for training the model. For example, with 100 labeled examples, LOOCV will offer 99 examples for training, while $5$-fold cross-validation will offer only 80 examples for training. On the flipside, the performance evaluation in LOOCV will depend on a single example in each test split, while the $5$-fold cross-validation will have less variance from being evaluated over 20 examples. Therefore, the default recommendation is to avoid using LOOCV, unless there is a significant labeled data scarcity. ## Repeated cross-validation The default recommendation is to use repeated $K$-fold cross-validation CITE[kohavi-1995]. For example, repeat a $5$-fold cross-validation experiment a total of 10 times, each time creating the folds randomly. Such an experiment would offer two benefits: • The predictive performance is estimated over multiple $K$-folds, implying independence from the splits that created the folds, since the splits were randomized several times. • The same examples have appeared in multiple test sets, once in each of the $K$-fold evaluations, possibly tested against different instantiations of the training set. Compare that a single $K$-fold cross-validation. It may have the rare chance of creating splits that are bad for evaluation — all tough examples grouped in a single test set and all the weak examples in the corresponding training set. A repeated cross-validation ensures that the performance on any examples is also training set independent. ## Stratified cross-validation In classification scenarios, it is crucial to ensure that the relative proportion of categories in the training set is similar to that in the test set. When applying cross-validation, a stratified approach to creating the folds ensures that the relation proportion of examples from different classes is approximately the same across the folds. Such a strategy is known as stratified cross-validation and it may be applied in conjunction with repeated cross-validation that we studied earlier. ## Nested cross-validation At the onset of this article, we suggested the two ways model selection is useful — hyperparameter tuning and comparing model families. A typical strategy of achieving both these goals is to use the so-called nested cross-validation. In nested cross-validation, there are two levels of cross-validation, with one within another. 1. In the outer level, the training and test splits are used to estimate model predictive performance for comparison across model families. 2. In the inner level, the training and testing splits are used to estimate predictive performance for different settings of hyperparameters of the same model family. In this case, the testing set is actually known as the validation set. Using randomized search or grid search, the best performing hyperparameter settings are identified.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4346447288990021, "perplexity": 776.1914813379908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038075074.29/warc/CC-MAIN-20210413213655-20210414003655-00489.warc.gz"}
https://www.zappos.com/inserts-ugg-mens-slipper
# inserts ugg mens slipper 97 items found 97 items Do these items match what you were searching for? ## Search Results UGG Ascot \$109.95 5Rated 5 stars out of 5 UGG Olsen \$109.95 5Rated 5 stars out of 5 UGG Neuman \$119.95 5Rated 5 stars out of 5 UGG Tasman \$99.95 5Rated 5 stars out of 5 UGG Scuff Romeo II \$99.95 5Rated 5 stars out of 5 UGG Tasman \$99.95 5Rated 5 stars out of 5 UGG Scuff Romeo II \$99.95 5Rated 5 stars out of 5 UGG Olsen \$109.95 5Rated 5 stars out of 5 UGG Tasman \$99.95 5Rated 5 stars out of 5 UGG Olsen \$109.95 5Rated 5 stars out of 5 UGG Neuman \$119.95 5Rated 5 stars out of 5 UGG Ascot Leather \$119.95 5Rated 5 stars out of 5 UGG Scuff \$79.95 5Rated 5 stars out of 5 UGG Ascot Leather \$119.95 5Rated 5 stars out of 5 UGG Scuff \$79.95 5Rated 5 stars out of 5 UGG Scuff \$79.95 5Rated 5 stars out of 5 UGG Scuff \$79.95 5Rated 5 stars out of 5 UGG Tasman \$99.95 5Rated 5 stars out of 5 UGG Scuff \$89.95 5Rated 5 stars out of 5 UGG Ascot \$119.95 4Rated 4 stars out of 5 UGG Tasman \$99.95 5Rated 5 stars out of 5 UGG Kenton \$99.95 4Rated 4 stars out of 5 UGG Kenton \$99.95 4Rated 4 stars out of 5 UGG Tasman Wool \$109.95 5Rated 5 stars out of 5 UGG Ascot Wool \$119.95 1Rated 1 stars out of 5 UGG Kenton \$99.95 4Rated 4 stars out of 5 UGG Dex Lace Slip-On \$129.95 5Rated 5 stars out of 5 UGG Ascot \$119.95 4Rated 4 stars out of 5 UGG Tasman \$99.95 5Rated 5 stars out of 5 UGG Tasman Wool \$109.95 5Rated 5 stars out of 5 UGG Kenton \$99.95 4Rated 4 stars out of 5 UGG Tasman Wool \$109.95 5Rated 5 stars out of 5 UGG Ascot Wool \$119.95 1Rated 1 stars out of 5 Koolaburra by UGG Tipton \$54.99MSRP: \$59.99 Koolaburra by UGG Tipton \$49.99MSRP: \$59.99 Koolaburra by UGG Tipton \$49.99MSRP: \$59.99 Koolaburra by UGG Bordon \$44.99MSRP: \$54.99 Koolaburra by UGG Tipton \$49.99MSRP: \$59.99 Koolaburra by UGG Bordon \$44.99MSRP: \$54.99 Koolaburra by UGG Bordon \$44.99MSRP: \$54.99 Koolaburra by UGG Bordon \$44.99MSRP: \$54.99 UGG Ascot \$109.95 5Rated 5 stars out of 5 UGG Ascot \$109.95 5Rated 5 stars out of 5 UGG Ascot \$119.95 5Rated 5 stars out of 5 UGG Ascot \$109.95 5Rated 5 stars out of 5 UGG Ascot \$109.95 5Rated 5 stars out of 5 UGG Ascot \$109.95 5Rated 5 stars out of 5 UGG Tasman Slip-On \$89.95 5Rated 5 stars out of 5 UGG Scuff \$79.95 5Rated 5 stars out of 5 UGG Tasman Slip-On \$89.95 5Rated 5 stars out of 5 UGG Tasman Slip-On \$89.95 5Rated 5 stars out of 5 UGG Ascot \$109.95 5Rated 5 stars out of 5 UGG Tasman Slip-On \$89.95 5Rated 5 stars out of 5 UGG Scuff Wool \$89.95 5Rated 5 stars out of 5 UGG Scuff Wool \$89.95 5Rated 5 stars out of 5 UGG Chester \$109.95 5Rated 5 stars out of 5 UGG Dex \$59.95MSRP: \$89.95 5Rated 5 stars out of 5 UGG Dex \$64.95MSRP: \$99.95 5Rated 5 stars out of 5 UGG Tasman Sierra \$109.95 4Rated 4 stars out of 5 UGG Dex \$59.95MSRP: \$89.95 5Rated 5 stars out of 5 UGG Dex \$59.95MSRP: \$89.95 5Rated 5 stars out of 5 UGG Upshaw TS (Twinsole) \$119.95 5Rated 5 stars out of 5 UGG Upshaw TS (Twinsole) \$119.95 5Rated 5 stars out of 5 UGG Chester \$119.95 5Rated 5 stars out of 5 UGG Dex \$59.95MSRP: \$89.95 5Rated 5 stars out of 5 UGG Upshaw TS (Twinsole) \$119.95 5Rated 5 stars out of 5 UGG Cleaner & Conditioner \$9.95 5Rated 5 stars out of 5 Giesswein Vent \$147.95 4Rated 4 stars out of 5 Giesswein Vent \$147.95 4Rated 4 stars out of 5 Giesswein Vent \$147.95 4Rated 4 stars out of 5 Giesswein Camden \$141.95 4Rated 4 stars out of 5 Giesswein Vent \$147.95 4Rated 4 stars out of 5 Giesswein Camden \$141.95 4Rated 4 stars out of 5 Giesswein Vent \$147.95 4Rated 4 stars out of 5 Giesswein Vent \$147.95 4Rated 4 stars out of 5 Giesswein Vent \$147.95 4Rated 4 stars out of 5 Outdoor Research Tundra Slip-On Aerogel Booties \$69.00 5Rated 5 stars out of 5 Outdoor Research Tundra Aerogel Booties \$89.00 Outdoor Research Tundra Slip-On Aerogel Booties \$69.00 5Rated 5 stars out of 5 Outdoor Research Tundra Aerogel Booties \$89.00 UGG Ascot \$109.95 5Rated 5 stars out of 5 UGG Ascot \$109.95 5Rated 5 stars out of 5 UGG Ascot \$109.95 5Rated 5 stars out of 5 UGG Ascot \$109.95 5Rated 5 stars out of 5 UGG Scuff \$79.95 5Rated 5 stars out of 5 UGG Scuff \$79.95 5Rated 5 stars out of 5 UGG Scuff \$79.95 5Rated 5 stars out of 5 UGG Dex Lace Slip-On \$129.95 5Rated 5 stars out of 5 Koolaburra by UGG Tipton \$49.99MSRP: \$59.99 UGG Ascot - WIDE \$109.95 5Rated 5 stars out of 5 UGG Ascot - WIDE \$109.95 5Rated 5 stars out of 5 UGG Ascot - WIDE \$109.95 5Rated 5 stars out of 5 UGG Ascot - WIDE \$109.95 5Rated 5 stars out of 5 UGG Ascot - WIDE \$109.95 5Rated 5 stars out of 5 UGG Knox \$44.98MSRP: \$99.95 5Rated 5 stars out of 5 UGG Chester \$109.95 5Rated 5 stars out of 5 SKECHERS Expected X Verson \$40.57MSRP: \$50.00
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9232363700866699, "perplexity": 18636.366237569808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540543252.46/warc/CC-MAIN-20191212102302-20191212130302-00098.warc.gz"}
https://proofwiki.org/wiki/Partition_of_Indexing_Set_induces_Bijection_on_Family_of_Sets
# Partition of Indexing Set induces Bijection on Family of Sets ## Theorem Let $I$ be an indexing set. Let $\family {S_\alpha}_{\alpha \mathop \in I}$ be a family of sets indexed by $I$. Let $\family {I_\gamma}_{\gamma \mathop \in J}$ be a partitioning of $I$. Then there exists a bijection: $\ds \phi: \prod_{\gamma \mathop \in J} \paren {\prod_{\alpha \in \mathop I_\gamma} S_\alpha} \to \prod_{\alpha \mathop \in I} S_\alpha$ ## Proof First a lemma: Let $I$ be an indexing set. Let $\family {S_\alpha}_{\alpha \mathop \in I}$ be a family of sets indexed by $I$. Let $I = I_1 \cup I_2$ such that $I_1 \cap I_2 = \O$. Then there exists a bijection: $\ds \psi: \paren {\prod_{\alpha \mathop \in I_1} S_\alpha} \times \paren {\prod_{\alpha \mathop \in I_2} S_\alpha} \to \prod_{\alpha \mathop \in I} S_\alpha$ $\Box$ We can define a projection $\pr_\gamma$: $\ds \pr_\gamma: \prod_{\gamma \mathop \in J} \paren {\prod_{\alpha \in \mathop I_\gamma} S_\alpha} \to \prod_{\alpha \mathop \in I_\gamma} S_\alpha$ so that for $\ds X \in \prod_{\gamma \mathop \in J} \paren {\prod_{\alpha \in \mathop I_\gamma} S_\alpha}, X = \family {X_\gamma}_{\gamma \mathop \in J}$: $\map {\pr_\gamma} X = X_\gamma$ From the lemma we can build $\phi$ so that for $I_\gamma = \set {\alpha: \alpha \in I_\gamma}$: $\ds \map \phi {\mathbf x} = \prod_{\gamma \mathop \in J} \paren {\prod_{\alpha \mathop \in I_\gamma} \map {\psi_\alpha} {x_\alpha} }$ as $\phi$ uniquely sets all components of $\map {\pr_\gamma} X$ for all $\gamma$. The result follows. $\blacksquare$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9901502728462219, "perplexity": 182.75495387694534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572286.44/warc/CC-MAIN-20220816090541-20220816120541-00045.warc.gz"}
https://www.physicsforums.com/search/5539714/
# Search results 1. ### Magnetic field of a moving charge Homework Statement A negative charge q = −3.20×10-6C is located at the origin and has velocity υ⃗ =(7.50×104m/s)ι^+((−4.90)×104m/s)j^. At this instant what is the magnetic field produced by this charge at the point x = 0.230 m , y = -0.300 m , z= 0? Give the x, y and x components Homework... 2. ### Gauss' Law vector form problem Thanks! I'll try and do it that way as i think thats the way we're supposed to! Thank you so much for all of your help. 3. ### Gauss' Law vector form problem I often get the vector part wrong - so for when x = 4m would E due to q = kq/r2 where r = √(22+42) but that would be in-between the x/y directions. Would I then resolve this is the x direction by taking Esin(θ) where θ=arctan(4/2). And to this term i finally add the field gained in part 1). 4. ### Gauss' Law vector form problem So could i just find the electric field of the point charge using kq/r2 and add it to the electric field found by using Gauss law? 5. ### Gauss' Law vector form problem Homework Statement (a) A spherical insulating shell of radius R = 3.00 m has its centre at the origin and carries a surface charge density σ = 3.00 nC/m2. Use Gauss’s law to find the electric field on the x axis at (i) x = 2.00m and (ii) x = 4.00 m. Give you answers in the vector form. (b) A... 6. ### Orbiting satellite Thank you so much. That all makes sense now. 7. ### Orbiting satellite Homework Statement Compute the speed and the period of a 240 kg satellite in an approximately circular orbit 610 km above the surface of Earth. The radius and mass of Earth are RE = 6400 km and ME = 6.0 × 1024 kg respectively. [5 marks] Suppose the satellite loses mechanical energy at the... 8. ### Waves and Optics Homework Statement An unpolarised light beam is shone horizontally through a cubic tank filled with weakly scattering fluid. Can vertically polarized light leave through the sides that are parallel to the beam’s propagation direction? Homework Equations The Attempt at a Solution My thinking...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9517208933830261, "perplexity": 910.5861984068532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305266.34/warc/CC-MAIN-20220127133107-20220127163107-00696.warc.gz"}
https://chat.stackexchange.com/transcript/3740/2018/10/15
11:01 PM The tag has been created - surely this tag name was discussed before. Or am I confusing this with the tag of the same name on MathOverflow? 0 I am trying to understand the definition of the double handlebody that is given here: https://warwick.ac.uk/fac/sci/maths/people/staff/karen_vogtmann/tcc2016/lecture_1_oct_15_2014_edited.pdf page 9 and 10. That is, it is $\#_{n} S^1 \times S^2$ or two handlebodies ($\#_{n} S^1\times D^2$) identif... Since the list of new tags says (topology) x 2, I suppose that one question with that tag (perhaps the one where it was created) was deleted. I have simply removed the tag from this instance: math.stackexchange.com/posts/2957001/revisions This tag was mentioned, for instance, here: chat.stackexchange.com/transcript/3740/2017/10/12 4 Resolved (a first step, anyway). The topology⇒general-topology tag synonym has been removed. Problem: I feel that there are quite a number of questions tagged with both general-topology AND algebraic-topology where the former is not really fitting. My understanding being that General Topolo... SEDE shows one deleted question with the (topology) tag: data.stackexchange.com/math/query/883845/… - of course, that is the status from the last update of the data in SEDE. The deleted question is: Show that $\int {a_n}\ \in l^1$ is the sum $\sum_{n=1}^\infty a_n$ using Lebesgue Theory.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.679783046245575, "perplexity": 1176.1578170584003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488257796.77/warc/CC-MAIN-20210620205203-20210620235203-00151.warc.gz"}
https://socratic.org/questions/how-do-you-use-the-factor-theorem-to-determine-whether-x-1-is-a-factor-of-f-x-x-#209068
Precalculus Topics # How do you use the factor theorem to determine whether x-1 is a factor of f(x)=x^3+4x-5? Jan 6, 2016 Calculate $f \left(1\right)$. #### Explanation: If $f \left(1\right) = 0$, then $x - 1$ is a factor of $f$. So let's calculate $f \left(1\right)$. $f \left(1\right) = 1 + 4 - 5 = 5 - 5 = 0$. So $1$ is a root of this polynomial, and $x - 1$ is a factor of it. ##### Impact of this question 495 views around the world
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8410528302192688, "perplexity": 394.7072077410557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585405.74/warc/CC-MAIN-20211021102435-20211021132435-00154.warc.gz"}
http://math.stackexchange.com/questions/140174/linear-algebra-finding-eigenvalues-of-a-matrix
# Linear Algebra - Finding Eigenvalues of a Matrix $A=\begin{bmatrix}3 & -2 & 5\\ 1 & 0 & 7\\ 0 & 0 & 2\end{bmatrix}$, Find the eigenvalues of A. I realized that if I swap columns I and II then I can make it an upper triangular matrix. Then the detrminant would be the product of the elements of the main diagonal. And then I would just need to find the roots of that. However I know that swapping columns flips the sign of the determinant, but I don't know how that will effect finding the eigenvalues. So I tried it anyways and got determinant of $(x+2)(x-1)(x-2)$ which has roots -2, 1, and 2. But I know that this is incorrect because the answers are supposed to be $\lambda=1,2,2$. What did I do wrong? - To see that you can't swap columns, try finding the eigenvalues of $\pmatrix{1&0\cr0&1\cr}$ and $\pmatrix{0&1\cr1&0\cr}$, or of $\pmatrix{1&0\cr0&-1\cr}$ and $\pmatrix{0&1\cr-1&0\cr}$. –  Gerry Myerson May 3 '12 at 6:33 If you don't see the shortcut, you can at least compute the polynomial: $det(Ix-A)=det(\begin{bmatrix}x-3 & +2 & -5\\ -1 & x & -7\\ 0 & 0 & x-2\end{bmatrix})$ you can expand along the last row to save time and get: $(x-2)(x(x-3)+2)=(x-2)(x^2-3x+2)=(x-2)(x-2)(x-1)$ I would venture you might have made a sign error. This would be especially likely if you expanded the determinant any other way. - However you can think of your matrix as block upper triangular. The upper left $2 \times 2$ block will give you two eigenvalues, and the lower right entry gives the third eigenvalue, namely 2. Do you know how to find the eigenvalues of the $2 \times 2$ block? It is a theorem that eigenvalues of a block upper-triangular matrix are the combined eigenvalues of the blocks. It's easy to verify that $A$ has a corresponding left eigenvector, $(0~0~1)A = (0~0~2)$. It follows that there must be a right eigenvector as well, though it's not immediately evident what it might be. –  hardmath May 3 '12 at 4:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9162360429763794, "perplexity": 122.6707618287282}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021227262/warc/CC-MAIN-20140305120707-00077-ip-10-183-142-35.ec2.internal.warc.gz"}
https://future.futureverse.org/reference/futures.html
Gets all futures in an environment, a list, or a list environment and returns an object of the same class (and dimensions). Non-future elements are returned as is. futures(x, ...) ## Arguments x An environment, a list, or a list environment. ... Not used. ## Value An object of same type as x and with the same names and/or dimensions, if set. ## Details This function is useful for retrieve futures that were created via future assignments (%<-%) and therefore stored as promises. This function turns such promises into standard Future objects.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30318504571914673, "perplexity": 5848.338887072425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944996.49/warc/CC-MAIN-20230323034459-20230323064459-00623.warc.gz"}
https://www.king4quizzes.com/2020/02/are-you-smarter-than-3rd-grader.html
• Child says: “Venus” Q 2. Look at the answer choices below and either choose the child’s or the adult’s answer for this question: What is the hardest natural mineral? Q 3. How many bones are in the human body? Q 4. Which of these trees is deciduous? Q 5. What is the least common denominator of these fractions? Q 6. Which of the following do not have a mass: solids, gases, liquids? • Adult says: “All of them have mass” Q 7. Do you know on which layer of the Earth landforms are formed? • 3rd grader says: “Outer Core” • 3rd grader says: “Inner Core” Q 8. Do you know which country is outlined here? Q 9. Let’s see how well you know the natural world! What type of penguin is this? • 3rd grader says: “African Penguin” Q 10. What about anatomy? Which joint has the widest range of movement? • 3rd grader says: “Gliding joint” • 3rd grader says: “Pivot joint” • Adult says: “Ball and socket” ************************************************* • 3rd grader says: “None of them” Q 2. Can you name the first planet from the sun? Q 3. Look at the answer choices below and either choose the child’s or the adult’s answer for this question: What is the hardest natural mineral? Q 4. How many bones are in the human body? Q 5. Think you can answer this math question? Q 6. Which of the following do not have a mass: solids, gases, liquids? • Adult says: “All of them have mass” Q 7. Do you know on which layer of the Earth landforms are formed? • 3rd grader says: “Inner Core” • 3rd grader says: “Outer Core” Q 8. Do you know which country is outlined here? Q 9. Let’s see how well you know the natural world! What type of penguin is this? • 3rd grader says: “African Penguin” Q 10. What about anatomy? Which joint has the widest range of movement? • 3rd grader says: “Pivot joint” • 3rd grader says: “Gliding joint” • Adult says: “Ball and socket” Q 11. Can you identify all the instruments that belong to the woodwind group? • 3rd grader says: “C and D” • 3rd grader says: “All of them” • 3rd grader says: “A, C and D” • Adult says: “B, C and D” Q 12. What colors are formed by mixing the primary colors? • 3rd grader says: “B and E” • 3rd grader says: “A, D and E” • 3rd grader says: “B and F” • Adult says: “B, D and E” Q 13. Spelling test! How many words are spelled incorrectly? Q 14. Which triangle are these two statements describing? • 3rd grader says: “Right angle triangle” • 3rd grader says: “Acute angle triangle” • 3rd grader says: “Equilateral triangle” • 3rd grader says: “Obtuse triangle” • 3rd grader says: “Isosceles triangle” Q 15. Can you put these events in the correct order, oldest to most recent? • 3rd grader says: “A, C, B, D, E” • 3rd grader says: “A, C, E, D, B” • 3rd grader says: “C, A, E, B, D” • 3rd grader says: “B, C, E, A, D” • 3rd grader says: “C, B, E, A, D” • Adult says: “C, B, A, E, D” ********************************************************* Q 2. Can you name the first planet from the sun? Q 3. Look at the answer choices below and either choose the child’s or the adult’s answer for this question: What is the hardest natural mineral? Q 4. How many bones are in the human body? Q 5. Think you can answer this math question? Q 6. Which of the following do not have a mass: solids, gases, liquids? • Adult says: “All of them have mass” Q 7. Do you know on which layer of the Earth landforms are formed? Q 8. Do you know which country is outlined here? Q 9. Let’s see how well you know the natural world! What type of penguin is this? Q 10.Can you identify all the instruments that belong to the woodwind group? • Adult says: “B, C and D” Q 11.What part of speech is the word ‘Red’? Q 12.What colors are formed by mixing the primary colors? • Adult says: “B, D and E” Q 13.Which city is located closest to the equator? Q 14.What about anatomy? Which joint has the widest range of movement? • Adult says: “Ball and socket” Q 15.Find out which country Machu Picchu is located in. Then choose the flag of that country! Q 16.Which triangle are these two statements describing? • 3rd grader says: “Obtuse triangle” Q 17.Spelling test! How many words are spelled incorrectly?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8404901623725891, "perplexity": 8272.106219629191}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357641.32/warc/CC-MAIN-20210226115116-20210226145116-00385.warc.gz"}
http://mathhelpforum.com/statistics/218312-someone-explain.html
# Math Help - Someone explain this. 1. ## Someone explain this. Take a look at the attachment, what is that, I should of got it right Attached Thumbnails 2. ## Re: Someone explain this. Originally Posted by pieRsquared Take a look at the attachment, what is that, I should of got it right how many are contained in the both playground and restrooms circles? I see 10 and 10, it's just that another 10 have also something else, that's why is confusing
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9322484731674194, "perplexity": 3093.650780212578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011123461/warc/CC-MAIN-20140305091843-00076-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/using-power-to-find-velocity-a-car-meets-a-hill.263830/
# Homework Help: Using power to find velocity (a car meets a hill) 1. Oct 12, 2008 ### bopll 1. The problem statement, all variables and given/known data A car encounters an inclined plane Given- Weight of car in N (6500), velocity on the flat surface (22.5 m/s), power of the engine (78000), incline of the hill (8.1 degrees) Want to find- velocity on the hill (power and restistive forces remain constant) 2. Relevant equations P = Fvcos(theta) 3. The attempt at a solution I found F by plugging in the power and velocity. I then subtracted the gravitational force due to the hill from this number to get the resultant force. plugged this in to P = Fvcos(theta) and got a number bigger than the original... urgent help would be greatly appreciated since HW is due in 8 minutes, but i'm more worried about the concept for the test tomorrow. thanks. 2. Oct 12, 2008 ### Astronuc Staff Emeritus The engine power is constant so the car is traveling at constant speed so that resistance is constant. When going up hill - the car starts increasing its gravitational potential energy - and if the car's power output is constant, then the car's kinetic energy must be decreasing. 3. Oct 12, 2008 ### bopll okay, so if i have to use kinetic energy, does that need i need to find the mass of the car (1/2mv^2)? that doesnt seem right... 4. Oct 12, 2008 ### ShawnD edit: Ok I think I have it this time Your flat ground situation is just P = FV. Move it around to get F = P/V. This is the force coming from the engine; it does not change. When you get on the hill, gravity applies a force against the motor as Wsin(theta). With this new net force, you find the new velocity. Flat ground: F = P/V (solve for force) Hill: P = (F + gravity)V P is the same, F you find out, gravity is Wsin(theta), V is your answer. They are ADDED together because F and gravity represent DRAG as opposed to the force you are applying. Last edited: Oct 12, 2008 5. Oct 12, 2008 ### bopll i tried this also, maybe i made a calculation error...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8735907673835754, "perplexity": 1205.4793468193393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159744.52/warc/CC-MAIN-20180923193039-20180923213439-00187.warc.gz"}
https://arxiv.org/abs/1101.3884
quant-ph (what is this?) # Title: Approximation algorithms for QMA-complete problems Abstract: Approximation algorithms for classical constraint satisfaction problems are one of the main research areas in theoretical computer science. Here we define a natural approximation version of the QMA-complete local Hamiltonian problem and initiate its study. We present two main results. The first shows that a non-trivial approximation ratio can be obtained in the class NP using product states. The second result (which builds on the first one), gives a polynomial time (classical) algorithm providing a similar approximation ratio for dense instances of the problem. The latter result is based on an adaptation of the "exhaustive sampling method" by Arora et al. [J. Comp. Sys. Sci. 58, p.193 (1999)] to the quantum setting, and might be of independent interest. Comments: 22 pages, comments welcome Subjects: Quantum Physics (quant-ph); Computational Complexity (cs.CC) Journal reference: SIAM Journal on Computing 41(4): 1028-1050, 2012. Also in Proceedings of 26th IEEE Conference on Computational Complexity (CCC), 178-188, 2011 DOI: 10.1137/110842272 Cite as: arXiv:1101.3884 [quant-ph] (or arXiv:1101.3884v1 [quant-ph] for this version) ## Submission history From: Sevag Gharibian [view email] [v1] Thu, 20 Jan 2011 12:42:53 GMT (29kb)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8963870406150818, "perplexity": 1597.0636158779528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513804.32/warc/CC-MAIN-20181021073314-20181021094814-00076.warc.gz"}
http://mathhelpforum.com/advanced-statistics/133019-solved-moments-mme.html
# Thread: [SOLVED] Moments and MME 1. ## [SOLVED] Moments and MME Given pdf is $\displaystyle f_y(y)=\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}y^{a-1}(1-y)^{b-1}$ for $\displaystyle 0\le y\le1$ $\displaystyle a>0$ $\displaystyle b>0$ a) Compute the moments about the origin $\displaystyle E(Y^m)$ b) Find the method of moments estimators for a and b (Using a sample of size n). Attempt: a)$\displaystyle E(Y^m) = \displaystyle\int_0^1 y^m \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}y^{a-1}(1-y)^{b-1}$ $\displaystyle =\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}*\beta(m+a, b)$ $\displaystyle =\frac{\Gamma(a+b)\Gamma(a+m)}{\Gamma(a+b+m)\Gamma (a)}$ b) Am i supposed to use part a to find the first and second central moments? I am not sure how to do this. Even so, by looking them up I try the following: $\displaystyle E(Y) = \frac{a}{a+b} = m_1$ $\displaystyle E(Y^2) = \frac{a(a+1)}{(a+b)(a+b+1)} = m_2$ where m1 and m2 are the sample moments. Basically when I try to solve this system of equations I get gross quadratics that I don't know how to simplify. Is there an elegant way of solving these, or if not, what is the best strategy for attacking these? 2. Use the fact that $\displaystyle \Gamma(a+1)=a\Gamma(a)$ which comes from integration by parts Hence $\displaystyle E(X)=\frac{\Gamma(a+b)\Gamma(a+1)}{\Gamma(a+b+1)\G amma(a)}$ $\displaystyle =\left({\Gamma(a+b)\over \Gamma(a+b+1)}\right)\left({\Gamma(a+1)\over \Gamma(a)}\right)$ $\displaystyle =\left({\Gamma(a+b)\over (a+b)\Gamma(a+b)}\right)\left({a\Gamma(a)\over \Gamma(a)}\right)$ $\displaystyle {a\over a+b}$ Now do it when m=2. 3. Ah, I did not know that fact. Thank you. I'm still clueless on solving for the MME though. From $\displaystyle \frac{a}{a+b} = m_1$ I've got that $\displaystyle b=\frac{a-m_1a}{m_1}$ and From $\displaystyle \frac{a(a+1)}{(a+b)(a+b+1)}$ Assuming I did all my algebra right, I get $\displaystyle a=\frac{-1+m_2+2bm_2\pm\sqrt{(1-m_2-2bm_2)^2-4(1-m_2)(m_2b^2-m_2b)}}{2(1-m_2)}$ How should I approach this to get it simplified? I suppose this may belong in an algebra section. 4. just apply parts to the gamma function $\displaystyle u=x^{\alpha-1}$ I'd bet your teacher went over it. 5. IF your moments are correct I would use a little observation here... From $\displaystyle \frac{a}{a+b} = m_1$ and $\displaystyle \frac{a(a+1)}{(a+b)(a+b+1)}=m_2$ insert m1 and FLIP it and divide...... $\displaystyle \frac{m_1(a+1)}{(a+b+1)}=m_2$ that gives $\displaystyle {a+b+1\over a+1}={m_1\over m_2}$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8055590987205505, "perplexity": 470.16308125811094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863650.42/warc/CC-MAIN-20180620143814-20180620163814-00422.warc.gz"}
http://math.stackexchange.com/questions/150297/from-injective-map-to-continuous-map?answertab=oldest
From injective map to continuous map Let $X$ and $Y$ metric spaces, $f$ is an injective from $X$ to $Y$, and $f$ sets every compact set in $X$ to compact set in $Y$. How to prove $f$ is continuous map? Any comments and advice will be appreciated. - 1 Answer Since $X$ and $Y$ are metric spaces, it suffices to show that if $\langle x_n:n\in\Bbb N\rangle$ is a convergent sequence in $X$ with limit $x$, then $\langle f(x_n):n\in\Bbb N\rangle$ is a convergent sequence in $Y$ with limit $f(x)$; in words, f preserves convergent sequences. Suppose that $\langle x_n:n\in\Bbb N\rangle$ converges to $x$ in $X$. If there is an $n_0\in\Bbb N$ such that $x_n=x$ for all $n\ge n_0$, it’s trivially true that $\langle f(x_n):n\in\Bbb N\rangle\to f(x)$, so assume that $\langle x_n:n\in\Bbb N\rangle$ is a sequence of distinct points. For each $n\in\Bbb N$ set $K_n=\{x\}\cup\{x_k:k\ge n\}$; each $K_n$ is compact and infinite. (Why?) By hypothesis, therefore, each $f[K_n]$ is compact. For convenience let $y=f(x)$, and let $y_n=f(x_n)$ and $H_n=f[K_n]$ for $n\in\Bbb N$. By hypothesis each $H_n$ is compact and infinite, so each contains a limit point. Fix $n\in\Bbb N$. For each $k\ge n$, $Y\setminus H_{k+1}$ is an open nbhd of $y_k$ that contains only finitely many points of $H_n$ (why?), so $y_k$ can’t be a limit point of $H_n$. Thus, for each $n\in\Bbb N$ the only possible limit point of $H_n$ is $y$ itself. From here you should be able to prove without too much trouble that $\langle y_n:n\in\Bbb N\rangle\to y$ and hence that $f$ is continuous. - thank you. Brain M.Scott –  yaoxiao May 29 '12 at 12:28 @yaoxiao If the answer was useful to you, you should accept it by clicking the green checkmark on the left. This lets the rest of the community know that the question has an answer deemed as good by the person who asked and also awards the person who answered the deserved reputation points. –  user12014 May 30 '12 at 6:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9436322450637817, "perplexity": 91.80148939606984}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119645898.42/warc/CC-MAIN-20141024030045-00261-ip-10-16-133-185.ec2.internal.warc.gz"}
https://link.springer.com/article/10.1007%2Fs11868-017-0218-8
# Equivalence of K-functionals and modulus of smoothness generated by a Bessel type operator on the interval [0, 1] • R. Daher Article ## Abstract The purpose of this article is to establish the equivalence between a K-functional and a modulus of smoothness generated by a Bessel type operator on the interval [0, 1] in the metrics of $$\mathbb {L}_{2}$$ with a certain weight. ## Keywords Fourier–Bessel series Generalized translation operator K-Functionals Modulus of smoothness 41A36 44A20 ## References 1. 1. Bray, W.O., Pinsky, M.A.: Growth properties of the Fourier transform, arXiv:0910.1115v1 [math.CA] 6 Oct (2009) 2. 2. Peetre, J.: A theory of interpolation of normed spaces. Notes de Universidade de Brasilia, pp. 88, Brasilia (1963)Google Scholar 3. 3. Berens, H., Buter, P.L.: Semigroups of operators and approximation. In: Grundlehren der mathematischen Wissenschaften, vol. 145, pp. XII, 322. Springer, Berlin (1967)Google Scholar 4. 4. Vladimirov, V.S.: Equations of Mathematical Physics. Marcel Dekker, New York (1971) 5. 5. Levitan, B.M., Sargsjan, I.S.: Introduction to Spectral Theory: Self-Adjoint Ordinary Differential Operators. Am. Math. Soc, Providence, R.I. (1975) 6. 6. Nikol’skii, S.M.: Approximation of Functions in Several Variables and Embedding Theorems. Nauka, Moscow (1977). [in Russian]Google Scholar 7. 7. Levitan, B.M.: Expansion in Fourier series and integrals with Bessel functions. Usp. Mat. Nauk 6(2), 102–143 (1951) 8. 8. Löfstróom, J., Peetre, J.: Approximation theorems connected with generalized translations. Math. Ann. 181, 255–268 (1969) 9. 9. Ditzian, Z., Totik, V.: Moduli of smoothness. Springer-Verlag, UK (1987) 10. 10. Belkina, E.S., Platonov, S.S.: Equivalence of K-functionals and modulus of smoothness constructed by generalized dunkl translations. Izv. Vyssh. Uchebn. Zaved. Mat. 8, 315 (2008) 11. 11. Dai, F.: Some equivalence theorems with K-functionals. J. Appr. Theory 121, 143–157 (2003) 12. 12. Nikol’skii, S.M.: A generalization of an inequality of S. N. Bernstein. Dokl. Akad. Nauk. SSSR 60(9), 1507–1510 (1948). (Russian)Google Scholar 13. 13. Timan, A.F.: Theory of approximation of functions of a real variable, Fizmatgiz, Moscow, 1960. Pergamon Press, Oxford-New York, English transl (1963)Google Scholar
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9196524620056152, "perplexity": 8454.135735267042}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826530.72/warc/CC-MAIN-20181214232243-20181215014243-00502.warc.gz"}
http://upcommons.upc.edu/handle/2117/2355
Pàgina principal | Llistar continguts | Cerca avançada | Com participar-hi Català   Castellano   English Títol: The geometry of t-cliques in k-walk-regular graphs Autor: Dalfó Simó, CristinaFiol Mora, Miquel ÀngelGarriga Valle, Ernest Altres autors/autores: Universitat Politècnica de Catalunya. Departament de Matemàtica Aplicada IV Matèries: Graph theoryWalk-regular graphsk-walk-regular graphsSpectral regularityCrossel local multiplicities of eigenvaluesGrafs, Teoria deClassificació AMS::05 Combinatorics::05C Graph theory Tipus de document: Article Descripció: A graph is walk-regular if the number of cycles of length $\ell$ rooted at a given vertex is a constant through all the vertices. For a walk-regular graph $G$ with $d+1$ different eigenvalues and spectrally maximum diameter $D=d$, we study the geometry of its $d$-cliques, that is, the sets of vertices which are mutually at distance $d$. When these vertices are projected onto an eigenspace of its adjacency matrix, we show that they form a regular tetrahedron and we compute its parameters. Moreover, the results are generalized to the case of $k$-walk-regular graphs, a family which includes both walk-regular and distance-regular graphs, and their $t$-cliques or vertices at distance $t$ from each other. Altres identificadors i accés: http://hdl.handle.net/2117/2355 Disponible al dipòsit: E-prints UPC Comparteix: Tots els ítems dipositats a UPCommons estan protegits per drets d'autor. Programari DSpace Copyright © 2002-2004 MIT and Hewlett-Packard Comentaris Universitat Politècnica de Catalunya. Servei de Biblioteques, Publicacions i Arxius
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5221244096755981, "perplexity": 7782.066079516826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163052107/warc/CC-MAIN-20131204131732-00017-ip-10-33-133-15.ec2.internal.warc.gz"}
https://eprint.iacr.org/2019/1296
### FastSwap: Concretely Efficient Contingent Payments for Complex Predicates Mathias Hall-Andersen ##### Abstract FastSwap is a simple and concretely efficient contingent payment scheme for complex predicates, inspired by FairSwap. FastSwap only relies on symmetric primitives (in particular symmetric encryption and cryptographic hash functions) and avoids heavy-weight' primitives such as general ZKP systems. FastSwap is particularly well-suited for applications where the witness or predicate is large (on the order of MBs / GBs) or expensive to calculate. Additionally FastSwap allows predicates to be implemented using virtually any computational model (including branching execution), which e.g. enables practitioners to express the predicate in smart contract languages already familiar to them, without an expensive transformation to satisfiability of arithmetic circuits. The cost of this efficiency during honest execution is a logarithmic number of rounds during a dispute resolution in the presence of a corrupted party (compared to constant round complexity for existing schemes). Let the witness be of size $|w|$ and the predicate of size $|P|$, where computing $P(w)$ takes $n$ steps. In the honest case the off-chain communication complexity is $|w| + |P| + c$ for a small constant $c$, the on-chain communication complexity is $c'$ for a small constant $c'$. In the malicious case the on-chain communication complexity is $O(\log n)$ with small constants. Concretely with suitable optimizations the number of rounds (on-chain transactions) for a computation of $2^{30}$ steps can be brought to $2$ in the honest case with an estimated cost of $\approx 2$ USD on the Ethereum blockchain and to $14$ rounds with an estimated cost of $\approx 4$ USD in case of a dispute. Available format(s) Category Cryptographic protocols Publication info Preprint. MINOR revision. Keywords Contingent paymentsConcrete efficiencyFair exchangeSmart contractsProvable securityUniversal composabilityAuthenticated data structures. Contact author(s) mathias @ hall-andersen dk History 2020-01-05: revised See all versions Short URL https://ia.cr/2019/1296 CC BY BibTeX @misc{cryptoeprint:2019/1296, author = {Mathias Hall-Andersen}, title = {FastSwap: Concretely Efficient Contingent Payments for Complex Predicates}, howpublished = {Cryptology ePrint Archive, Paper 2019/1296}, year = {2019}, note = {\url{https://eprint.iacr.org/2019/1296}}, url = {https://eprint.iacr.org/2019/1296} } ` Note: In order to protect the privacy of readers, eprint.iacr.org does not use cookies or embedded third party content.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6546621322631836, "perplexity": 3966.9904148660125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00235.warc.gz"}
http://archived.moe/a/thread/11022361
709KiB, 895x500, Mizuki01.png No.11022361 And EF is done. I can't say there was really an episode that I was bored of. Actually, I'm finding that I don't have any real complaints about it.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8912821412086487, "perplexity": 1488.6496382439689}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542714.38/warc/CC-MAIN-20161202170902-00174-ip-10-31-129-80.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/337272/behaviour-of-global-clustering-for-common-random-graph-models
# Behaviour of global clustering for common random graph models In order to develop some intuition for some of the commonly used random graph models, I've been looking at the global clustering coefficient as a means of comparing them. In particular, for the general ER-Bernoulli disturbed random graph, the Watts-Strogatz random graph, and the Barabasi-Albert random graph, their global clustering distributions are as follows respectively: • For a Bernoulli graph distribution for 1000 vertices with edge probability $$p$$ • A 1000 vertex random graph made according to Watts-Strogatz model with probability $$p$$ • 1000 vertex random graph made according to Barabasi-Albert model, where vertex deg on x-axis shows the fixed degree of each added node (until 1000 nodes is reached). In contrast to the ER and Watts-Strogatz models, the Barabasi-Albert random graph has a power-law distributed degree distribution. Question: • Could we have expected these trends in the global clustering of these models? Namely, the linear increase in the 1st one, the rapid decaying one in the 2nd and the log(?) scaling one in the last one. On the one hand, the ER-Bernoulli one is understandable since the higher the probability the more edges are added but in an independent fashion. • But on the other hand, the Watts-Strogatz model AFAIK is one that was designed such that it leads to high clustering, but why do we see a decrease in the global clustering unlike the other two models? With the help of choosing clustering as a factor of comparison, I'm basically trying to learn how one reasons between these 3 models. The meaning of the probability parameter (denoted by $$\beta$$) in the Watts--Strogatz model at the link you provided is very different from the meaning of the probability parameter $$p$$ in the Erdős–Rényi model. Indeed, $$p$$ is the probability that an edge is included into the random graph, and hence greater values of $$p$$ result in greater clustering. On the other hand, $$\beta$$ is the probability that one of the end-nodes of each edge in the initial regular ring lattice, a graph with $$N$$ nodes each connected to $$K$$ neighbors, is replaced by an almost completely arbitrary node, thus destroying the comparatively high clustering of the initial regular ring lattice. This explains the decrease of clustering in $$\beta$$. • Thanks! Very good point, this indeed explains the different trend (in terms of clustering) between the two. I guess in other words, by increasing $\beta$ we're approaching towards the random graph (Bernoulli distributed edges) model, where we know the clustering vanishes for high large node numbers. In this comparison, does it make a difference if we talk about local as opposed to global clustering? (my plots show the global one). – user929304 Jul 31 at 14:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8638452291488647, "perplexity": 260.8323994796346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670987.78/warc/CC-MAIN-20191121204227-20191121232227-00085.warc.gz"}
https://muscimol.xyz/2858804
2858804 # GABA and the behavioral effects of anxiolytic drugs. Article date: 1985/4/22 PubMed ID: 2858804 Journal name: Life sciences (ISSN: 0024-3205) ABSTRACT Much recent research has shown that benzodiazepine binding sites in the central nervous system are associated with GABA receptors. It is therefore possible that the pharmacological and therapeutic effects of benzodiazepines and drugs with similar profiles are mediated through GABAergic mechanisms. In this paper the evidence is considered for a possible involvement of GABA in the behavioral effects of anxiolytic drugs. There are a number of reports that the behavioral actions of anxiolytics can be antagonised by GABA antagonists such as bicuculline or picrotoxin but there are many contradictory findings and these drugs are difficult to use effectively in behavioral studies. In general, GABA agonists do not exert anxiolytic-like behavioral effects after systemic injection but intracerebral administration of muscimol has been shown to produce benzodiazepine-like actions. Although a number of questions remain unanswered, current evidence does not provide strong support for a role for GABA in the behavioral effects of anxiolytic drugs. This document is available from: http://directlinks.cc/files/muscimol/2858804.pdf Author List: Sanger D J Publication Types: Journal Article; Review Substances mentioned in the article: Anti-Anxiety Agents; Benzodiazepines; gamma-Aminobutyric Acid; Mesh terms: Animals; Anti-Anxiety Agents/pharmacology; Benzodiazepines/administration & dosage; Brain/drug effects; Conditioning, Operant/drug effects; Discrimination Learning/drug effects; Drinking/drug effects; Eating/drug effects; Injections, Intraventricular; Models, Biological; Motor Activity/drug effects; Punishment; Reward; gamma-Aminobutyric Acid/physiology;
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8344067335128784, "perplexity": 23493.077930205218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529606.64/warc/CC-MAIN-20190420100901-20190420122901-00424.warc.gz"}
https://codereview.stackexchange.com/questions/239651/passing-argument-to-a-constructor-and-using-it-only-in-some-class-methods
# Passing argument to a constructor and using it only in some class methods I've created some controller for handling clicked links statistics. Does this class meet Single Responsibility Principle? class StatisticsController { protected $statisticsQuery; public function __construct(StatisticsQuery$statisticsQuery) { $this->statisticsQuery =$statisticsQuery; } public function recordClickedLink(array $request) { (...) if ($this->canRecordClick()) { } (...) } public function getStatistics(array $request) { (...)$someRequestParam = $request['id'];$statistics = $this->statisticsQuery->get($someRequestParam); (...) } protected function canRecordClick() { // returns true of false } } If has two public methods. The first is responsible for saving clicked link to a database by delegating it to a model class. The second public method is responsible for getting statistics and it is the only method which use \$statisticsQuery object passed to a class' constructor. Is it OK that recordClickedLink method doesn't need constructor's argument? I wonder also if it is OK that my class contain canRecordClick method with logic for checking if link click can be saved during particular request. If this class doesn't meet SRP, how can I refactor it? As far as I can see in the code example you've provided, it seems that the getStatistics and recordClickedLink methods are independent of each other. So it seems intuitive to have in this case two controllers: a StatisticsController and a something like a RecordController where the two methods recordClickedLink and canRecordClick live. In that way the StatisticsController is responsible for statisics and the RecordController for the record. It is ok that the class contains canRecordClick, but protected means that any class extending the StatisticsController will be able to call this method too. If it will only be used in the StatisticsController, private would be better. • Do you think that constructor's argument can be considered when telling if this class applies SRP? I wonder if argument or arguments passed to constructor should be obligatorily used by all class' public methods. – Szymon Czembor Mar 30 '20 at 11:47 • @SzymonCzembor The argument of a constructor or properties of a class can give a good indication but it is not a strict rule that each method should use all the properties of a class (there are probably some who disagree). Determining what falls under a single responsibility of a class can sometimes be difficult to determine and sometimes you also have to take into consideration how easy some code is to understand vs adding a new class. – pepijno Mar 30 '20 at 12:02 • And what do you think about this? If I have two classes (StatisticsController and RecordController) and then I need to change the way I record a link click (for example altering database table) in RecordController then I have to change also StatisticsController because data for calculating statistics come from the altered place. Does it imply that this two methods should be in one class, doesn't it? – Szymon Czembor Mar 30 '20 at 12:18 • If that is the case you could extract the code which alters and reads from the database to a separate repository class and pass that repository to both the StatisticsController and RecordController. Then when you change the way you record a link you only have to change the repository. – pepijno Mar 30 '20 at 13:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1812707483768463, "perplexity": 1512.9088274482845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989030.87/warc/CC-MAIN-20210510033850-20210510063850-00168.warc.gz"}
https://export.arxiv.org/abs/2012.12925
astro-ph.SR (what is this?) Title: Inverse analysis of asteroseismic data: a review Abstract: Asteroseismology has emerged as the best way to characterize the global and internal properties of nearby stars. Often, this characterization is achieved by fitting stellar evolution models to asteroseismic observations. The star under investigation is then assumed to have the properties of the best-fitting model, such as its age. However, the models do not fit the observations perfectly. This is due to incorrect or missing physics in stellar evolution calculations, resulting in predicted stellar structures that are discrepant with reality. Through an inverse analysis of the asteroseismic data, it is possible to go further than fitting stellar models, and instead infer details about the actual internal structure of the star at some locations in its interior. Comparing theoretical and observed stellar structures then enables the determination of the locations where the stellar models have discrepant structure, and illuminates a path for improvements to our understanding of stellar evolution. In this invited review, we describe the methods of asteroseismic inversions, and outline the progress that is being made towards measuring the interiors of stars. Comments: 12 pages, 1 figure. Invited review, Dynamics of the Sun and Stars Subjects: Solar and Stellar Astrophysics (astro-ph.SR) Journal reference: Astrophysics and Space Science Proceedings, vol. 57 (2020) 171-183 DOI: 10.1007/978-3-030-55336-4_25 Cite as: arXiv:2012.12925 [astro-ph.SR] (or arXiv:2012.12925v1 [astro-ph.SR] for this version) Submission history From: Earl Bellinger [view email] [v1] Wed, 23 Dec 2020 19:14:07 GMT (171kb,D) Link back to: arXiv, form interface, contact.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9100505709648132, "perplexity": 2444.104640955892}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991737.39/warc/CC-MAIN-20210514025740-20210514055740-00632.warc.gz"}
https://icml.cc/Conferences/2021/ScheduleMultitrack?event=9729
Timezone: » Poster Meta-Thompson Sampling Branislav Kveton · Mikhail Konobeev · Manzil Zaheer · Chih-wei Hsu · Martin Mladenov · Craig Boutilier · Csaba Szepesvari Thu Jul 22 09:00 PM -- 11:00 PM (PDT) @ Efficient exploration in bandits is a fundamental online learning problem. We propose a variant of Thompson sampling that learns to explore better as it interacts with bandit instances drawn from an unknown prior. The algorithm meta-learns the prior and thus we call it MetaTS. We propose several efficient implementations of MetaTS and analyze it in Gaussian bandits. Our analysis shows the benefit of meta-learning and is of a broader interest, because we derive a novel prior-dependent Bayes regret bound for Thompson sampling. Our theory is complemented by empirical evaluation, which shows that MetaTS quickly adapts to the unknown prior.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8364271521568298, "perplexity": 5029.593445658048}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499713.50/warc/CC-MAIN-20230129112153-20230129142153-00030.warc.gz"}
https://infoscience.epfl.ch/record/200495
Formats Format BibTeX MARC MARCXML DublinCore EndNote NLM RefWorks RIS ### Abstract In this work we present a stable proper orthogonal decomposition (POD)-Galerkin approximation for parametrized steady incompressible Navier-Stokes equations with low Reynolds number. Supremizers solutions are added to the reduced velocity space in order to obtain a stable reduced-order system, considering in particular the fulfillment of an inf-sup condition. The stability analysis is first carried out from a theoretical standpoint, then confirmed by numerical tests performed on a parametrized two-dimensional backward facing step flow.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9476509690284729, "perplexity": 683.5073522074588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488519183.85/warc/CC-MAIN-20210622155328-20210622185328-00135.warc.gz"}
https://math.stackexchange.com/questions/2482236/number-of-all-squares-placed-side-by-side-in-a-rectangle/2482283
# Number of all squares placed side by side in a rectangle. I've been struggling lately to find out a way to calculate the number of squares in a given rectangle. The thing is that I don't want to calculate the number of permutations of a NxN sized square in a rectangle but try to find out how many squares placed side by side of size NxN can fit into the rectangle. For example let's assume there's given a 3x2 rectangle. That means that there can fit 6 1x1 squares and only one 2x2 square(you cannot place 2 2x2 squares at the same time without having them overlap) so the total is 7 squares. Suppose you have a rectangle $w$ units wide and $h$ units high, and you place $2\times2$ squares in this rectangle. Wherever there is an edge of any of these squares parallel to the bottom edge of the rectangle, extend that edge into a line all the way across the rectangle. In this way you will divide the entire rectangle into horizontal strips of which parts are covered by parts of the $2\times2$ squares and parts are not. In the figure below, $2\times2$ squares have been placed in a $9\times7$ rectangle. The red lines cut the rectangle into strips of $9$ units from left to right and various distances from bottom to top. In this example the squares appear to be placed somewhat haphazardly, but no matter how you place the squares, you cannot make more than $4$ squares overlap any of the horizontal strips. In general, if the width of the rectangle is $w,$ you can make at most $\lfloor w/2 \rfloor$ of the $2\times2$ squares overlap each strip. This means that each horizontal strip contains at least $w_2\Delta h$ area that is not covered by the $2\times2$ squares, where $w_2 = (w - 2\lfloor w/2 \rfloor)$ and $\Delta h$ is the height of each strip. (Some of the strips have an even larger area uncovered.) If we add up the uncovered area over all the strips, we find that an area measuring at least $w_2 h$ is uncovered. Similarly, if we cut the rectangle into vertical strips by extending every vertical edge of every square, we find that each strip has at least $h_2 \Delta w$ uncovered area, where $h_2 = (h - 2\lfloor h/2 \rfloor)$ and $\Delta w$ is the width of the strip. Adding this up, we find that an area of at least $h_2 w$ is uncovered. These two uncovered regions overlap, but only to a limited extent, namely a total area of $(2\lfloor w/2 \rfloor)(2\lfloor h/2 \rfloor) = (w - w_2)(h - h_2).$ The total uncovered area comes out to $$w_2 h + h_2 w - (w - w_2)(h - h_2) = wh - w_2 h_2.$$ This is the same as you get if you simply arrange the $2\times2$ squares in an array $\lfloor w/2 \rfloor$ squares across and $\lfloor h/2 \rfloor$ squares high in the lower left corner of the rectangle. That is to say, we have just shown (in a rigorous, though long-winded way) that the best arrangement of the squares is just to stack them in continuous rows, leaving a gap (if necessary) along two edges of the rectangle. The total number of squares in the array fitted in this way is $\lfloor w/2 \rfloor \times \lfloor h/2 \rfloor.$ If we generalize this to $N\times N$ squares in a $w\times h$ rectangle, the maximum number of squares of that size is $\lfloor w/N \rfloor \times \lfloor h/N \rfloor.$ • You are justifying that the greedy algorithm works for packing a given size of square. You just pack the squares as tightly as possible into one corner and when you can't pack any more you are done for that size. Then you need to sum over sizes and you are done. +1 – Ross Millikan Oct 21 '17 at 2:31 For a rectangle of size $M \times N$, maximum number $\mathcal{N}(n;M \times N)$ of $n \times n$ sized squares placed adjacent to each other is can be arrived at as : $$\mathcal{N}(n;M \times N)=\mathcal{C}(M,n)\mathcal{C}(N,n),$$ where $\mathcal{C}(M,n)=\big[\frac{M}{n}\big]$ and $[]$ is the lower ceiling function. Then total number of squares that can be placed is : $$\sum_{n=1}^{\min\{M,N\}}\mathcal{N}(n;M \times N)=\sum_{n=1}^{\min\{M,N\}}\mathcal{C}(M,n)\mathcal{C}(N,n)=\sum_{n=1}^{\min\{M,N\}}\Bigg[\frac{M}{n}\Bigg]\Bigg[\frac{N}{n}\Bigg].$$ • Is it obvious that the greedy algorithm you describe packs as many squares into the rectangle as possible? I think it needs some support. Having done that, yours is the only answer that reflects the sum over sizes of squares. +1 – Ross Millikan Oct 21 '17 at 2:59 • @RossMillikan I had written it down based on intuitive observation. I don't know how to explain it. – Sunyam Oct 21 '17 at 3:06 $\require{begingroup} \begingroup \newcommand{idiv}[2]{\left\lfloor\frac{#1}{#2}\right\rfloor} \newcommand{R}{\mathcal R} \newcommand{L}{\mathcal L} \newcommand{S}{\mathcal S}$This answer is based on an answer to a related question, but generalized to account for uniform squares of any size within a rectangle of any size. The sides of the rectangle do not even need to be commensurate with the sides of the squares. It is assumed, however, that all squares placed in the rectangle are placed with their sides parallel to the sides of the rectangle. (It seems intuitively obvious that rotating the squares will not enable more squares to fit in the rectangle, but proving it is another matter.) Given a rectangle $\R$ of width $W$ and height $H,$ where $W$ and $H$ may be any real numbers, we first determine how many squares of side $N$ can fit in rectangle $\R$ without overlapping. That is, the sides of squares may touch other squares or the edges of the rectangle, but the interior of any square cannot intersect another square or the boundary of the rectangle. We can show that the maximum number of squares that can be arranged in rectangle $\R$ in this way is $\idiv WN \times \idiv HN.$ The following proof does this by constructing a rectangular lattice $\L$ of $\idiv WN \times \idiv HN$ points such that in any such arrangement of squares inside $\R,$ each square must contain at least one point of $\L.$ Proof. Choose a Cartesian coordinate system such that the vertices of rectangle $\R$ are at coordinates $(0,0),$ $(0,W),$ $(H,W),$ and $(0,H).$ Let \begin{align} w &= \frac{W}{\idiv WN + 1}, \\[0.7ex] h &= \frac{H}{\idiv HN + 1}, \end{align} and let $\L$ be the set of points $(jw, kh)$ where $j$ and $k$ are integers, $1 \leq j \leq \idiv WN,$ and $1 \leq k \leq \idiv HN.$ In other words, we can tile rectangle $\R$ completely with rectangles of width $w$ and height $h,$ and let the set $\L$ consist of all vertices of these rectangles that are in the interior of rectangle $\R.$ The points of $\L$ then form a rectangular lattice with $\idiv WN$ points in each row and $\idiv HN$ points in each column, a total of $\idiv WN \times \idiv HN$ points altogether. Since $\idiv WN + 1 > \frac WN,$ it follows that $w < N,$ and similarly $h < N.$ Therefore if we place a square $\S$ of side $N$ anywhere within rectangle $\R$ with sides parallel to the sides of $\R,$ at least one of the lines through the rows of points in $\L$ will pass through the interior of $\S,$ and at least one of the lines through the columns of points in $\L$ will pass through the interior of $\S;$ therefore $\S$ will contain the point of $\L$ at the intersection of those lines. That is, the interior of $\S$ must contain at least one point of the set $\L.$ Suppose now we have placed some number of squares of side $N$ inside rectangle $\R$ so that no two squares overlap (their boundaries may touch but their interiors must be disjoint). Then no two of these squares can both contain the same point of the set $\L.$ By the pigeonhole principle, we can place at most $\lvert\L\rvert = \idiv WN \times \idiv HN$ squares in this way. On the other hand, an array of squares with $\idiv HN$ rows and $\idiv WN$ columns fits inside rectangle $\R$ (using the "greedy algorithm"), so it is possible to achieve the upper bound of $\idiv WN \times \idiv HN$ squares. This completes the proof. $\square$ In the question, however, we are allowed to arrange squares of side $1$ inside the rectangle, then ignore them and arrange squares of side $2$ inside the rectangle, then ignore those squares and arrange squares of side $3,$ and so forth, as long as at least one square can fit inside the rectangle; and then the answer is the total number of squares of all sizes that were arranged in this way. The final answer therefore is $$\sum_{N=1}^\infty \left(\idiv WN \times \idiv HN\right).$$ Note that this is actually a finite sum, since for $N > W$ or $N > H$ all terms will be zero. The last non-zero term of the sum is the term for $N = \min\{\lfloor W\rfloor, \lfloor H\rfloor\}. \endgroup$ If the rectangle has size $m\times n$ then you can fit in $\lfloor m/N \rfloor × \lfloor n / N \rfloor$ squares of size $N\times N$. The idea: Fit as many squares as possible into the rectangle. Now look at the strip consisting of the top N rows. If a square meets this strip we can push it upwards so that it is fully contained in the strip. Therefore, by maximality, the strip contains $\lfloor n / N \rfloor$ squares. Now remove the top strip altogether and proceed via induction.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8276981711387634, "perplexity": 142.49275056510328}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316718.64/warc/CC-MAIN-20190822022401-20190822044401-00231.warc.gz"}
https://cms.math.ca/Events/winter16/abs/otq
2016 CMS Winter Meeting Niagara Falls, December 2 - 5, 2016 Optimization Techniques in Quantum Information Theory Org: Nathaniel Johnston (Mount Allison University), Rajesh Pereira (University of Guelph) and Sarah Plosker (Brandon University) [PDF] ERIC CHITAMBAR, Southern Illinois University A Classical Analog to Entanglement Reversibility  [PDF] In this talk I describe intriguing similarities between the quantum theory of entanglement and the classical theory of secret key. Just as entanglement can be shared by two or more quantum systems, secret correlations can be shared by two or more classical systems, whose states are described by probability distributions. Entanglement cannot be increased under local (quantum) operations and classical communication, and likewise secret correlations cannot be increased under local (classical) operations and public communication. Analogous to the tasks of entanglement distillation and formation are the classical tasks of secret key distillation and secrecy formation. An old open problem in entanglement theory involves characterizing the states that possess reversible entanglement; i.e. states whose rate of entanglement distillation equals its rate of entanglement cost. In this talk, I introduce a similar notion of reversible secrecy. When one of the honest parties holds a binary random variable, the structure of distributions possessing reversible secrecy can be identified exactly. An indispensable tool used in this analysis is a conditional form of the Gacs-Korner common information. Finally, I describe how the structure of distributions possessing reversible secrecy can be related to the structure of quantum states possessing reversible entanglement. RICHARD CLEVE, University of Waterloo Efficient Quantum Algorithms for Simulating Lindblad Evolution  [PDF] The Lindblad equation is the natural generalization to open systems of the Schroedinger equation. We give a quantum algorithm for simulating the evolution of an n-qubit system for time T under the Lindblad equation with local terms. The gate cost of the algorithm is optimal within polylogarithmic factors. A key component of our algorithm is a new "linear combinations of unitaries" construction that pertains to channels which we believe is of independent interest. This is joint work with Chunhao Wang. HOAN DANG, University of Calgary Galois-unitary symmetry of mutually unbiased bases as a toy model for SIC-POVMs  [PDF] Besides applications in quantum information, symmetric informationally-complete (SIC) POVMs have deeply interesting mathematical properties due to their high degree of symmetry. Here we focus on g-unitary symmetry, which is a generalized notion of anti-unitary symmetry. G-unitary operators are defined with respect to a number field extension. For the case of mutually unbiased bases (MUB) where the relevant field extension is simple, we find that g-unitaries help us to solve problems such as MUB-cycling and finding MUB-balanced states. DOUG FARENICK, University of Regina Some extremal properties of quantum probability measures  [PDF] A quantum probability measure is a positive operator-valued measure (POVM) whose value on the entire sample space is the identity operator acting on a Hilbert space H. In the event that H is 1-dimensional, then a quantum probability measure is simply a probability measure in the classical sense. Optimality questions are often linked to the issues of optimisation on a convex set, in which case knowledge of the extreme points of the convex set becomes important. In this lecture, I will discuss the structure of extreme points and C*-extreme points in the space of quantum probability measures, and explain how quantum probability measures that satisfy a certain norm-theoretic optimality condition are related to the C*-extreme points. In connection with this latter result, the role of operator systems in the analysis of quantum probability measures will be explained. This lecture is drawn from collaborative work with R. Floricel, S. Plosker, and J. Smith. SEVAG GHARIBIAN, Virginia Commonwealth University Classical approximation algorithms for quantum constraint satisfaction problems  [PDF] The study of approximation algorithms for Boolean satisfiability problems such as MAX-k-SAT is a well-established line of research. In the quantum setting, there is a physically motivated generalization of MAX-k-SAT known as the k-Local Hamiltonian problem (k-LH), which is of interest for two reasons: From a complexity theoretic perspective, k-LH is complete for the quantum analogue of NP, and from a physics perspective, k-LH asks one to estimate the energy of a quantum system when cooled to very low temperatures. For the latter reason in particular, the condensed matter physics community has devoted decades to developing heuristic algorithms for k-LH and related problems. However, recent years have witnessed the development of the first classical approximation algorithms for k-LH, which shall be the focus of this talk. We will begin with an introductory overview of some existing results, with no background in quantum computing assumed. We will then discuss recent work in progress on generalizing the celebrated Goemans-Williamson algorithm for MAX-CUT to approximate physically motivated special cases of k-LH. The latter is joint work with Yi-Kai Liu (NIST, USA). MARK GIRARD, University of Calgary Conversion witnesses for transforming quantum states under PPT-operations.  [PDF] The primary goal in entanglement theory is to find conditions that determine whether one quantum state can be converted into another under the restriction to local operations and classical communication (LOCC). This is typically done by considering entanglement monotones, but this analysis is difficult due to the fact that the LOCC operations are difficult to describe. It is common to instead consider the larger, yet easier to describe mathematically, class of operations that preserve positivity under partial transpose (PPT). I consider the problem of finding conditions for PPT-conversion of bipartite states in the single-shot regime. In this talk, I will present a family of PPT-conversion witnesses and a new entanglement monotone that are based on the binegativity. I'll finish by presenting a new complete witness for PPT-conversion that is derived using duality properties of semidefinite programs. Semidefinite Programming and Quantum Resource Theories  [PDF] One of the main goals of any resource theory such as entanglement, quantum thermodynamics, quantum coherence, and asymmetry, is to find necessary and sufficient conditions (NSC) that determine whether one resource can be converted to another by the set of free operations. In this talk I will present such NSC for a large class of quantum resource theories which we call affine resource theories (ARTs). ARTs include the resource theories of athermality, asymmetry, and coherence, but not entanglement. Remarkably, the NSC can be expressed as a family of inequalities between resource monotones (quantifiers) that are given in terms of the conditional min entropy. The set of free operations is taken to be (1) the maximal set (i.e. consists of all resource non-generating (RNG) quantum channels) or (2) the self-dual set of free operations (i.e. consists of all RNG maps for which the dual map is also RNG). As an example, I will discuss the applications of the results to quantum thermodynamics with Gibbs preserving operations, and several other ARTs. Finally, I will discuss the applications of these results to resource theories that are not affine. DAVID KRIBS, University of Guelph Private Algebras, Private Quantum Channels, etc  [PDF] In this talk, I will discuss my ongoing work with collaborators on the development of a structure theory for a fundamental notion in quantum privacy; known in different settings as private quantum channels, private quantum codes, quantum secret sharing, private subsystems, decoherence-full subsystems, and private algebras. I'll also discuss connections with quantum error correction. Based on joint works with Jason Crann, Tomas Jochym-O'Connor, Raymond Laflamme, Rupert Levene, Jeremy Levick, Rajesh Pereira, Sarah Plosker, and Ivan Todorov. JEREMY LEVICK, University of Guelph An Uncertainty Principle for Quantum Channels  [PDF] We present an "uncertainty principle" for quantum channels, showing a relationship between the dimension of the range of a channel and the dimension of the range of its complement. We examine some interesting specific cases, and discuss consequences for privacy of quantum channels. CHI-KWONG LI, College of William and Mary Quantum states with prescribed reduced states, and special Quantum channels  [PDF] We consider the set of quantum states with prescribed reduced states, and the set of quantum channels with special properties. In particular, we study the geometrical properties of these sets, and special elements attaining optimal values of certain functions. Symmetry Reduction in Multiparty Quantum States  [PDF] Symmetries are ubiquitous in natural phenomena and also in their mathematical descriptions and according to a general principle in Mathematics, one should exploit a symmetry to simplify a problem whenever possible. In this talk, we focus on elimination of continuous symmetries from multi-particle quantum systems and discuss that the existing methods equip us with a powerful set of tools to compute geometrical and topological invariants of the resulting reduced spaces. As an intermediate step, we consider the maximal torus subgroup T of the compact Lie group of Local Unitary operations K and elaborate on the symmetry reduction procedure and use methods from symplectic geometry and algebraic topology to obtain some of the topological invariants of these relatively well-behaving quotients for multi-particle systems containing $r$ qubits. We elaborate on an explicit example with two qubits and discuss further implications in quantum information theory. SARAH PLOSKER, Brandon University Optimal bounds on fidelity of quantum state transfer with respect to errors  [PDF] Quantum state transfer within a quantum computer can be achieved through the use of a quantum spin chain as a "data bus" for quantum states. More generic graphs besides a chain can also be used to perform this important task. The fidelity, which measures the closeness between two quantum states, is used to determine the accuracy of the state transfer. Ideally the fidelity is 1, representing perfect state transfer. We analyse the sensitivity of the fidelity of a graph exhibiting perfect state transfer to small perturbations in readout time and edge weight in order to obtain physically relevant bounds on the fidelity. Joint work with Whitney Gordon, Steve Kirkland, Chi-Kwong Li, and Xiaohong Zhang ZBIGNIEW PUCHALA, Institute of Theoretical and Applied Informatics, Polish Academy of Sciences Asymptotic properties of random quantum channels  [PDF] We consider random quantum channels, especially the limiting behaviour of the diamond norm for two independent random quantum operations. In order to define random channels we use the Choi-Jamiołkowski isomorphism and consider Wishart matrices with normalized partial trace. Next, we derive an asymptotic behaviour of empirical eigenvalue distribution for this ensemble and, using free probability theory, we derive the eigenvalue distribution for the difference of two random Choi-Jamiołkowski matrices. We show the concentration of measure occurs for the diamond norm. In the case of flat measure on random channels, the limiting value of the diamond norm is equal to $\frac{1}{2} + \frac{2}{\pi}$. DANIEL PUZZUOLI, University of Waterloo Ancilla dimension in quantum channel discrimination  [PDF] Single-shot quantum channel discrimination is a fundamental task in quantum information theory. It is well known that entanglement with an ancillary system can help in this task. Thus, a fundamental question is: For a given pair of channels with the same input and output spaces, how much entanglement is necessary to optimally discriminate them? I will present results on a specific formulation of this question: Given a pair of channels, what is the minimum ancilla dimension necessary for optimal discrimination? It is known that, in general, an ancilla with dimension equal to that of the input space of the channels is always sufficient (and is sometimes necessary) for optimal discrimination. A natural question to ask is whether the same holds true for the output dimension. That is, in cases when the output dimension of the channels is (possibly much) smaller than the input dimension, is an ancilla with dimension equal to the output dimension always sufficient for optimal discrimination? I will present a family of counterexamples which show that the answer to this question is no. This family contains instances with arbitrary finite gap between the input and output dimensions, and still has the property that in every case, for optimal discrimination, it is necessary to use an ancilla with dimension equal to that of the input. JAMIE SIKORA, Centre for Quantum Technologies, National University of Singapore Completely Positive Semidefinite Rank  [PDF] A matrix X is called completely positive semidefinite (cpsd) if it can be represented as a Gram matrix of positive semidefinite matrices of some finite size d. The cpsd-rank of a cpsd matrix is the smallest integer d for which such a representation is possible. In this work, we initiate the study of the cpsd-rank which we motivate twofold. First, the cpsd-rank is a natural non-commutative analogue of the completely positive rank of a completely positive matrix. Second, we show that the cpsd-rank is physically motivated as it can be used to upper and lower bound the size of a quantum system needed to generate a quantum behavior. Unlike the completely positive rank which is at most quadratic in the size of the matrix, no general upper bound is known on the cpsd-rank. In fact, we show that the cpsd-rank can be exponential in terms of the size. The proof relies crucially on the connection between the cpsd-rank and quantum behaviors. In particular, we use a known lower bound on the size of matrix representations of extremal quantum correlations which we apply to high-rank extreme points of the n-dimensional elliptope. This is joint work with Anupam Prakash, Antonios Varvitsiotis, and Zhaohui Wei (arXiv:1604.07199). JOHN WATROUS, University of Waterloo Semidefinite programming, cone programming, and quantum state discrimination  [PDF] Semidefinite programming has found many applications in the theory of quantum information. In this talk I will will give a brief introduction to semidefinite programming and its applications to the study of quantum information, and also discuss the more general setting of cone programming and its application to a problem relating to quantum state discrimination by spatially separated parties. The talk will include joint work with Somshubhro Bandyopadhyay, Alessandro Cosentino, Nathaniel Johnston, Vincent Russo, and Nengkun Yu. XIAOHONG ZHANG, University of Manitoba Our work focuses on Hadamard diagonalizable graphs. For integer-weighted Hadamard diagonalizable graphs, we give an eigenvalue characterization of when such a graph exhibits perfect state transfer (PST) at time $\pi/2$, and then generalize the result to rational-weighted Hadamard diagonalizable graphs. We also define a new binary graph operation, the merge, which keeps the property of being Hadamard diagonalizable, and can be used to produce a lot of PST graphs. We give conditions on two integer-weighted Hadamard diagonalizable graphs for their merge to have PST. Finally we show an intriguing result about the merge operation: if exactly one of the two weights on this operation is an integer, and the other one is an irrational number, then the merge exhibits pretty good state transfer (PGST) from one vertex to several other vertices under certain circumstances. Properties of random mixed states of dimension $N$ distributed uniformly with respect to the Hilbert-Schmidt measure are investigated. We show that for large $N$, due to the concentration of measure phenomenon, the trace distance between two random states tends to a fixed number $1/4+1/\pi$, which yields the Helstrom bound on their distinguishability. To arrive at this result we apply free random calculus and derive the symmetrized Marchenko--Pastur distribution. Asymptotic value for the root fidelity between two random states, $\sqrt{F}=3/4$, can serve as a universal reference value for further theoretical and experimental studies. Analogous results for quantum relative entropy and Chernoff quantity provide other bounds on the distinguishability of both states in a multiple measurement setup due to the quantum Sanov theorem. Entanglement of a generic mixed state of a bi--partite system is estimated.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8438215255737305, "perplexity": 544.605861798956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221218391.84/warc/CC-MAIN-20180821171406-20180821191406-00064.warc.gz"}
https://link.springer.com/article/10.1186/1556-276X-8-1
## Background Graphene is a single layer of carbon atoms ordered in a two-dimensional hexagonal lattice. In the literature, it is possible to find different experimental techniques in order to obtain graphene such as mechanical peeling, epitaxial growth or assembled by atomic manipulation of carbon monoxide molecules over a conventional two-dimensional electron system at a copper surface[14]. The physical properties of this crystal have been studied over the last 70 years; however, the recent experimental breakthroughs have revealed that there are still a lot of open questions, such as time-dependent transport properties of graphene-based heterostructures, the thermoelectric and thermal transport properties of graphene-based systems in the presence of external perturbations, the thermal transport properties of graphene under time-dependent gradients of temperatures, etc. On the other hand, graphene nanoribbons (GNRs) are quasi one-dimensional systems based on graphene which can be obtained by different experimental techniques[58]. The electronic behaviour of these nanostructures is determined by their geometric confinement which allows the observation of quantum effects. The controlled manipulation of these effects, by applying external perturbations to the nanostructures or by modifying the geometrical confinement[913], could be used to develop new technological applications, such as graphene-based composite materials[14], molecular sensor devices[1517] and nanotransistors[18]. One important aspect of the transport properties of these quasi one-dimensional systems is the resonant tunneling behaviour which, for certain configurations of conductors or external perturbations, appears into the system. It is has been reported that in S- and U-shaped ribbons, and due to quasi-bound states present in the heterostructure, it is possible to obtain a rich structure of resonant tunneling peaks by tuning through the modification of the geometrical confinement of the heterostructure[19]. Another way to obtain resonant tunneling in graphene is considering a nanoring structure in the presence of external magnetic field. It has been reported that these annular structures present resonance in the conductance at defined energies, which can be tuned by gate potentials, the intensity of the magnetic field or by modifying their geometry[20]. From the experimental side, the literature shows the possibility of modulating the transport response as a function of the intensity of the external magnetic field. In some configuration of gate potential applied to the rings, it has been observed that the Aharonov-Bohm oscillations have good resolution[2123]. In this context, in this work, we present a theoretical study of the transport properties of GNR-based conductors composed of two finite and parallel armchair nanoribbons (A-GNRs) of widths N d and N u , and length L (measured in unit cell units), connected to two semi-infinite contacts of width N made of the same material. We have thought this system as two parallel ‘wires’ connected to the same reservoirs, whether the the leads are made of graphene or another material. This consideration allows us to study the transport of a hypothetical circuit made of graphene ‘wires’ in different scenarios. A schematic view of a considered system is shown in Figure1. We have focused our analysis on the electronic transport modulations due to the geometric confinement and the presence of an external magnetic field. In this sense, we have studied the transport response due to variations of the length and widths of the central ribbons, considering symmetric and asymmetric configurations. We have obtained interference effects at low energies due to the extra spatial confinement, which is manifested by the apparition of resonant states at this energy range, and consequently, a resonant tunneling behaviour in the conductance curves. On the other hand, we have considered the interaction of electrons with a uniform external magnetic field applied perpendicular to the heterostructure. We have observed periodic modulations of the transport properties as function of the external field, obtaining metal-semiconductor transitions as function of the magnetic flux. ## Methods All considered systems have been described using a single Π-band tight binding Hamiltonian, taking into account only the nearest neighbour interactions with a hopping γ0 = 2.75eV[24]. We have described the heterostructures using surface Green’s function formalism within a renormalization scheme[16, 17, 25]. In the linear response approach, the conductance is calculated using the Landauer formula. In terms of the conductor Green’s function, it can be written as[26]: $G=\frac{2{e}^{2}}{h}\stackrel{̄}{T}\left(E\right)=\frac{2{e}^{2}}{h}\text{Tr}\left[{\mathrm{\Gamma }}_{L}{G}_{C}^{R}{\mathrm{\Gamma }}_{R}{G}_{C}^{A}\right],$ (1) where$\stackrel{̄}{T}\left(E\right)$, is the transmission function of an electron crossing the conductor region,${\mathrm{\Gamma }}_{L/R}=i\left[{\mathrm{\Sigma }}_{L/R}-{\mathrm{\Sigma }}_{L/R}^{†}\right]$ is the coupling between the conductor and the respective lead, given in terms of the self-energy of each lead: ΣL/R = VC,L/RgL/RVL/R,C. Here, VC,L/R are the coupling matrix elements and gL/Ris the surface Green’s function of the corresponding lead[16]. The retarded (advanced) conductor Green’s function is determined by[26]:${G}_{C}^{R,A}={\left[E-{H}_{C}-{\mathrm{\Sigma }}_{L}^{R,A}-{\mathrm{\Sigma }}_{R}^{R,A}\right]}^{-1}$, where H C is the hamiltonian of the conductor. Finally, the magnetic field is included by the Peierls phase approximation[2731]. In this scheme, the magnetic field changes the unperturbed hopping integral${\gamma }_{n,m}^{0}$ to${\gamma }_{n,m}^{B}={\gamma }_{n,m}^{0}{e}^{2\Pi i\mathrm{\Delta }{\mathrm{\Phi }}_{n,m}}$, where the phase factor is determined by a line integral of the vector potential A by: $\mathrm{\Delta }{\mathrm{\Phi }}_{n,m}=\frac{e}{h}\underset{{\mathbf{R}}_{n}}{\overset{{\mathbf{R}}_{m}}{\int }}d\mathbf{l}·\mathbf{A}.$ (2) Using the vectors exhibited in Figure1, R1 = (1, 0)a,${\mathbf{R}}_{2}=-\left(1,\sqrt{3}\right)a/2$ and${\mathbf{R}}_{3}=\left(-1,\sqrt{3}\right)a/2$, where a = |Rn,m| = 1.42 Å, the phase factors for the armchair configuration in the Landau gauge A = (0, Bx) are given by: $\begin{array}{l}\mathrm{\Delta }{\mathrm{\Phi }}_{n,m}\left({\mathbf{R}}_{n,m}={\mathbf{R}}_{1}\right)=-\mathit{\text{aB}}{y}_{n},\\ \mathrm{\Delta }{\mathrm{\Phi }}_{n,m}\left({\mathbf{R}}_{n,m}={\mathbf{R}}_{2}\right)=-\frac{\mathit{\text{aB}}}{2}\left({y}_{n}+\frac{a\sqrt{3}}{4}\right),\\ \mathrm{\Delta }{\mathrm{\Phi }}_{n,m}\left({\mathbf{R}}_{n,m}={\mathbf{R}}_{3}\right)=\frac{\mathit{\text{aB}}}{2}\left({y}_{n}+\frac{a\sqrt{3}}{4}\right),\end{array}$ (3) where y n is the carbon atom position in the transverse direction of the ribbons. In what follows, the Fermi energy is taken as the zero energy level, and all energies are written in units of γ0. ## Results and discussion ### Unperturbed systems Let us begin the analysis by considering the effects of the geometrical confinement. In Figure2, we present results of (a) Local density of sates (LDOS) and (b) conductance for a conductor composed of two A-GNRs of widths N d  = N u  = 5 connected to two leads of width N = 17 for different conductor lengths (L = 5, 10 and 20 unit cells). The most evident result is reflected in the LDOS curves at energies near the Fermi level. There are several sharp states at defined energies, which increase in number and intensity as the conductor length L is increased. These states that appear in the energy range corresponding to the gap of a pristine N = 5 A-GNRs[24, 32] correspond to a constructive interference of the electron wavefunctions inside the heterostructure, which can travel forth and back generating stationary (well-like) states. In this sense, the finite length of the central ribbons imposes an extra spatial confinement to electrons, as analogy of what happens in open quantum dot systems[16, 17, 19, 33, 34]. Independently of their sharp line shape, these discrete levels behave as resonances in the system allowing the conduction of electrons at these energies, as it is shown in the corresponding conductance curves of Figure2b. It is clear that as the conductor length is increased, the number of conductance peaks around the Fermi level is also increased, tending to form a plateau of one quantum of conductance (G0 = 2e2/h) at this energy range. These conductance peaks could be modulated by the external perturbations, as we will show further in this work. At higher energies, the conductance plateaus appear each as 2G0, which is explained by the definition of the transmission probability T(E) of an electron passing through the conductor. In these types of heterostructures, if the conductor is symmetric (N u  = N d ), the number of allowed transverse channels are duplicated; therefore, electrons can be conduced with the same probability through both finite ribbons. On the other hand, in Figure2c, we present results of conductance for a conductor of length L = 15 and composed of two A-GNRs of widths N d  = 5 and N u  = 7, connected to two leads of widths N = 17. As a comparison, we have included the corresponding pristine cases. As it is expected, the conductance for an asymmetric configuration (red curve) reflects the exact addition of the transverse channels of the constituent ribbons, with the consequent enhancement of the conductance of the systems. Nevertheless, there is still only one quantum of conductance near the Fermi energy due to the resonant states of the finite system, whether the constituent ribbons are semiconductor or semimetal. We have obtained these behaviours for different configurations of conductor, considering variations in length and widths of the finite ribbons and leads. ### Magnetic field effects In what follows, we will include the interaction of a uniform external magnetic field applied perpendicularly to the conductor region. We have considered in our calculations that the magnetic field could affect the ends of the leads, forming an effective ring of conductor. The results of LDOS and conductance as a function of the Fermi energy and the normalized magnetic flux (ϕ/ϕ0) for three different conductor configurations are displayed in the contour plots of Figure3. The left panels correspond to a symmetric system composed of two metallic A-GNRs of widths N u  = N d  = 5. The central panels correspond to an asymmetric conductor composed of two A-GNRs of widths N d  = 5 (metallic) and N u  = 7 (semiconductor). The right panels correspond to a symmetric system composed of two semiconductor A-GNRs of widths N u  = N d  = 7. All configurations have been considered of the same length L = 10 and connected to the same leads of widths N = 17. Finally, we have included as a reference, the plots of LDOS versus Fermi energy for the three configurations. From the observation of these plots, it is clear that the magnetic field strongly affects the electronic and transport properties of the considered heterostructures, defining and modelling the electrical response of the conductor. In this sense, we have observed that in all considered systems, periodic metal-semiconductor electronic transitions for different values of magnetic flux ratio ϕ/ϕ0, which are qualitatively in agreement with the experimental reports of similar heterosructures[2123]. Although the periodic electronic transitions are more evident in symmetric heterostructures (left and right panels), it is possible to obtain a similar effect in the asymmetric configurations. These behaviours are direct consequences of the quantum interference of the electronic wave function inside this kind of annular conductors, which in general present an Aharonov-Bohm period as a function of the magnetic flux. The evolution of the electronic levels of the system, depending of their energy, exhibits a rich variety of behaviours as a function of the external field. In all considered cases, the LDOS curves exhibit electronic states pinned at the Fermi Level, at certain magnetic flux values. This state corresponds to a non-dispersive band, equivalent with the supersymmetric Landau level of the infinite two-dimensional graphene crystal[30, 35]. At low energy region and for low magnetic field, it is possible to observe the typical square-root evolution of the relativistic Landau levels[36]. The electronic levels at highest energies of the system evolve linearly with the magnetic flux, like regular Landau levels. This kind of evolution is originated by the massive bands in graphene, which is expected for these kinds of states in graphene-based systems[37, 38]. By comparing the LDOS curves and the corresponding conductance curves, it is possible to understand and define which states contribute to the transport of the systems (resonant tunneling peaks), and which ones only evolve with the magnetic flux but remain as localized states (quasi-bond states) of the conductor. These kind of behaviour has been reported before in similar systems[19, 20]. This fact is more evident in the symmetric cases, where there are several states in the ranges ϕ/ϕ0 ∈ [0.1, 0.9] and E(γ0) ∈ [-1.0, 1.0] of the LDOS curves which evolve linearly with the magnetic flux, but are not reflected in the conductance curves. In fact, at these ranges, the conductance curves exhibit marked gaps with linear evolution as a function of the magnetic flux. For the asymmetric case, it is more difficult to define which states behave similarly; however, there are still some regions at which the conductance exhibits gaps with linear evolution as a function of the magnetic flux. All these electronic modulations could be useful to generate on/off switches in electronic devices, by changing in a controlled way the magnetic field intensity applied to the heterostructures. We have obtained these behaviours for different configurations of conductor, considering variations in length and widths of the finite ribbons and leads. ## Conclusions In this work, we have analysed the electronic and transport properties of a conductor composed of two parallel and finite A-GNRs, connected to two semi-infinite lead, in the presence of an external perturbation. We have thought these systems as two parallel wires of an hypothetical circuit made of graphene, and we have studied the transport properties as a function of the separation and the geometry of these ‘wires’, considering the isolated case and the presence of an external magnetic field applied to the system. We have observed resonant tunneling behaviour as a function of the geometrical confinement and a complete Aharonov-Bohm type of modulation as a function of the magnetic flux. These two behaviours are observed even when the two A-GNRs have different widths, and consequently, different transverse electronic states. Besides, the magnetic field generates a periodic metal-semiconductor transition of the conductor, which can be used in electronics applications. We want to note that our results are valid only in low temperature limits and in the absence of strong disorder into the systems. In the case of non-zero temperature, it is expected that the resonances in the conductance curves will become broad and will gradually vanish at room temperature[20]. ## Authors’ information LR is a professor at the Physics Department, Technical University Federico Santa Maria, Valparaiso, Chile. JWG is a postdoctoral researcher at the International Iberian Nanotechnology Laboratory, Braga, Portugal.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9083855152130127, "perplexity": 637.1403391481865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.76/warc/CC-MAIN-20230322114226-20230322144226-00485.warc.gz"}
http://en.wikipedia.org/wiki/Cycloid
# Cycloid For other uses, see Cycloid (disambiguation). A cycloid generated by a rolling circle A cycloid is the curve traced by a point on the rim of a circular wheel as the wheel rolls along a straight line without slippage. It is an example of a roulette, a curve generated by a curve rolling on another curve. The inverted cycloid (a cycloid rotated through 180°) is the solution to the brachistochrone problem (i.e., it is the curve of fastest descent under gravity) and the related tautochrone problem (i.e., the period of an object in descent without friction inside this curve does not depend on the object's starting position). ## History It was in the left hand try-pot of the Pequod, with the soapstone diligently circling round me, that I was first indirectly struck by the remarkable fact, that in geometry all bodies gliding along the cycloid, my soapstone for example, will descend from any point in precisely the same time. Moby Dick by Herman Melville, 1851 The cycloid has been called "The Helen of Geometers" as it caused frequent quarrels among 17th-century mathematicians.[1] Historians of mathematics have proposed several candidates for the discoverer of the cycloid. Mathematical historian Paul Tannery cited similar work by the Syrian philosopher Iamblichus as evidence that the curve was likely known in antiquity.[2] English mathematician John Wallis writing in 1679 attributed the discovery to Nicholas of Cusa,[3] but subsequent scholarship indicates Wallis was either mistaken or the evidence used by Wallis is now lost.[4] Galileo Galilei's name was put forward at the end of the 19th century[5] and at least one author reports credit being given to Marin Mersenne.[6] Beginning with the work of Moritz Cantor[7] and Siegmund Günther,[8] scholars now assign priority to French mathematician Charles de Bovelles[9][10][11] based on his description of the cycloid in his Introductio in geometriam, published in 1503.[12] In this work, Bovelles mistakes the arch traced by a rolling wheel as part of a larger circle with a radius 120% larger than the smaller wheel.[4] Galileo originated the term cycloid and was the first to make a serious study of the curve.[4] According to his student Evangelista Torricelli,[13] in 1599 Galileo attempted the quadrature of the cycloid (constructing a square with area equal to the area under the cycloid) with an unusually empirical approach that involved tracing both the generating circle and the resulting cycloid on sheet metal, cutting them out and weighing them. He discovered the ratio was roughly 3:1 but incorrectly concluded the ratio was an irrational fraction, which would have made quadrature impossible).[6] Around 1628, Gilles Persone de Roberval likely learned of the quadrature problem from Père Marin Mersenne and effected the quadrature in 1634 by using Cavalieri's Theorem.[4] However, this work was not published until 1693 (in his Traité des Indivisibles).[14] Constructing the tangent of the cycloid dates to August 1638 when Mersenne received unique methods from Roberval, Pierre de Fermat and René Descartes. Mersenne passed these results along to Galileo, who gave them to his students Torricelli and Viviana, who were able to produce a quadrature. This result and others were published by Torricelli in 1644,[13] which is also the first printed work on the cycloid. This led to Roberval charging Torricelli with plagiarism, with the controversy cut short by Torricelli's early death in 1647.[14] In 1658, Blaise Pascal had given up mathematics for theology but, while suffering from a toothache, began considering several problems concerning the cycloid. His toothache disappeared, and he took this as a heavenly sign to proceed with his research. Eight days later he had completed his essay and, to publicize the results, proposed a contest. Pascal proposed three questions relating to the center of gravity, area and volume of the cycloid, with the winner or winners to receive prizes of 20 and 40 Spanish doubloons. Pascal, Roberval and Senator Carcavy were the judges, and neither of the two submissions (by John Wallis and Antoine Lalouvère) were judged to be adequate.[15]:198 While the contest was ongoing, Christopher Wren sent Pascal a proposal for a proof of the rectification of the cycloid; Roberval claimed promptly that he had known of the proof for years. Wallis published Wren's proof (crediting Wren) in Wallis's Tractus Duo, giving Wren priority for the first published proof.[14] Fifteen years later, Christiaan Huygens had deployed the cycloidal pendulum to improve chronometers and had discovered that a particle would traverse an inverted cycloidal arch in the same amount of time, regardless of its starting point. In 1686, Gottfried Wilhelm Leibniz used analytic geometry to describe the curve with a single equation. In 1696, Johann Bernoulli posed the brachistochrone problem, the solution of which is a cycloid.[14] ## Equations A cycloid generated by a circle of radius r = 2 The cycloid through the origin, generated by a circle of radius r, consists of the points (x, y), with \begin{align} x &= r(t - \sin t) \\ y &= r(1 - \cos t) \end{align} where t is a real parameter, corresponding to the angle through which the rolling circle has rotated, measured in radians. For given t, the circle's centre lies at x = rt, y = r. Solving for t and replacing, the Cartesian equation is found to be: $x = r \cos^{-1} \left(1 - \frac{y}{r}\right) - \sqrt{y(2r - y)}.$ An expression of the equation in the form y = f(x) is not possible using standard functions. The first arch of the cycloid consists of points such that $0 \le t \le 2 \pi.$ When y is viewed as a function of x, the cycloid is differentiable everywhere except at the cusps where it hits the x-axis, with the derivative tending toward $\infty$ or $-\infty$ as one approaches a cusp. The map from t to (xy) is a differentiable curve or parametric curve of class C and the singularity where the derivative is 0 is an ordinary cusp. The cycloid satisfies the differential equation: $\left(\frac{dy}{dx}\right)^2 = \frac{2r}{y} - 1.$ ## Evolute Generation of the evolute of the cycloid unwrapping a tense wire placed on half cycloid arc (red marked) The evolute of the cycloid has the property of being exactly the same cycloid it originates from. This can otherwise be seen from the tip of a wire initially lying on a half arc of cycloid describing a cycloid arc equal to the one it was lying on once unwrapped (see also cycloidal pendulum and arc length). ### Demonstration Demonstration of the properties of the evolute of a cycloid There are several demonstrations of the assertion. The one presented here uses the physical definition of cycloid and the cinematic property that the instantaneous velocity of a point is tangent to its trajectory. Referring to the picture on the right, $P_1$ and $P_2$ are two tangent points belonging to two rolling circles. The two circles start to roll with same speed and same direction without skidding. $P_1$ and $P_2$ start to draw two cycloid arcs as in the picture. Considering the line connecting $P_1$ and $P_2$ at an arbitrary instant (red line), it is possible to prove that the line is anytime tangent in P2 to the lower arc and orthogonal to the tangent in P1 of the upper arc. One sees that: • P1, Q and P2 are aligned because $\widehat{P1O1Q}=\widehat{P2O2Q}$ (equal rolling speed) and therefore $\widehat{O1QP1}=\widehat{O2QP2}$. Since $\widehat{O1QP2}+\widehat{O2QP2}=\pi$ by construction, it follows $\widehat{P1QP2}=\pi$. • If A is the meeting point between the perpendicular from P1 to the straight of O1O2 and the tangent to the circle in P2, then the triangle P1AP2 is isosceles because $\widehat{QP2A}=\frac{1}{2}\widehat{P2O2Q}$ and $\widehat{QP1A}=\frac{1}{2}\widehat{QO1R}=$(easy to prove seen the construction)$=\frac{1}{2}\widehat{QO1P1}$. For the previous noted equality between $\widehat{P1O1Q}$ and $\widehat{QO2P2}$ then $\widehat{QP1A}=\widehat{QP2A}$ and P1AP2 is isosceles. • Conducting from P2 the orthogonal straight to O1O2, from P1 the straight line tangent to the upper circle and calling B the meeting point is now easy to see that P1AP2B is a rhombus, using the theorems concerning the angles between parallel lines • Now consider the speed V2 of P2. It can be seen as the sum of two components, the rolling speed Va and the drifting speed Vd. Both speeds are equal in modulus because the circles roll without skidding. Vd is parallel to P1A and Va is tangent to the lower circle in P2 therefore is parallel to P2A. The total speed of P2, V2, is then parallel to P2P1 because both are diagonals of two rhombuses with parallel sides and has in common with P1P2 the contact point P2. It follows that the speed vector V2 lies on the prolongation of P1P2. Because V2 is tangent to the arc of cycloid in P2, it follows that also P1P2 is tangent. • Analogously, it can be easily demonstrated that P1P2 is orthogonal to V1 (other diagonal of the rhombus). • The tip of an inextensible wire initially stretched on half arc of lower cycloid and bounded to the upper circle in P1 will then follow the point along its path without changing its length because the speed of the tip is at each moment orthogonal to the wire (no stretching or compression). The wire will be at the same time tangent in P2 to the lower arc because the tension and the demonstrated items. If it would not be tangent then there would be a discontinuity in P2 and consequently there would be unbalanced tension forces. ## Area One arch of a cycloid generated by a circle of radius r can be parameterized by \begin{align} x &= r(t - \sin t) \\ y &= r(1 - \cos t) \end{align} with $\frac{dx}{dt} = r(1 - \cos t)$ $0 \le t \le 2 \pi.$ Since $\frac{dx}{dt} = r(1 - \cos t)$ the area under the arch is \begin{align} A &= \int_{t=0}^{t=2 \pi} y \, dx = \int_{t=0}^{t=2 \pi} r^2(1 - \cos t)^2 dt \\ &= \left. r^2 \left(\frac{3}{2}t - 2\sin t + \frac{1}{2} \cos t \sin t\right) \right|_{t=0}^{t=2\pi} \\ &= 3 \pi r^2. \end{align} ## Arc length The length of the cycloid as consequence of the property of its evolute The arc length S of one arch is given by \begin{align} S &= \int_0^{2\pi} \left[\left(\frac{\operatorname d\!y}{\operatorname d\!t}\right)^2 + \left(\frac{\operatorname d\!x}{\operatorname d\!t}\right)^2\right]^\frac{1}{2} \operatorname d\!t \\ &= \int_0^{2\pi} r \sqrt{2 - 2\cos(t)}\, \operatorname d\!t \\ &= \int_0^{2\pi} 2\,r \sin \frac{t}{2}\, \operatorname d\!t \\ &= 8\,r. \end{align} Another immediate way to calculate the length of the cycloid given the properties of the Evolute is to notice that when a wire describing an evolute has been completely unwrapped it extends itself along two diameters, a length of 4r. Because the wire does not change length during the unwrapping it follows that the length of half an arc of cycloid is 4r and a complete arc is 8r. ## Cycloidal pendulum Schematic of a cycloidal pendulum. If a simple pendulum is suspended from the cusp of an inverted cycloid, such that the "string" is constrained between the adjacent arcs of the cycloid, and the pendulum's length is equal to that of half the arc length of the cycloid (i.e., twice the diameter of the generating circle), the bob of the pendulum also traces a cycloid path. Such a cycloidal pendulum is isochronous, regardless of amplitude. The equation of motion is given by: \begin{align} x &= r[\theta(t) - \sin \theta (t)] \\ y &= r[\cos \theta (t) - 1]. \end{align} The 17th-century Dutch mathematician Christiaan Huygens discovered and proved these properties of the cycloid while searching for more accurate pendulum clock designs to be used in navigation.[16] ## Related curves Several curves are related to the cycloid. • Curtate cycloid: Here the point tracing out the curve is inside the circle, which rolls on a line. • Prolate cycloid: Here the point tracing out the curve is outside the circle, which rolls on a line. • Trochoid: refers to any of the cycloid, the curtate cycloid and the prolate cycloid. • Hypocycloid: The point is on the edge of the circle, which rolls not on a line but on the inside of another circle. • Epicycloid: The point is on the edge of the circle, which rolls not on a line but on the outside of another circle. • Hypotrochoid: As hypocycloid but the point need not be on the edge of its circle. • Epitrochoid: As epicycloid but the point need not be on the edge of its circle. All these curves are roulettes with a circle rolled along a uniform curvature. The cycloid, epicycloids, and hypocycloids have the property that each is similar to its evolute. If q is the product of that curvature with the circle's radius, signed positive for epi- and negative for hypo-, then the curve:evolute similitude ratio is 1 + 2q. The classic Spirograph toy traces out hypotrochoid and epitrochoid curves. ## Use in architecture Cycloidal arches at the Kimbell Art Museum The cycloidal arch was used by architect Louis Kahn in his design for the Kimbell Art Museum in Fort Worth, Texas. It was also used in the design of the Hopkins Center in Hanover, New Hampshire. ## Use in violin plate arching Early research indicated that some transverse arching curves of the plates of golden age violins are closely modeled by curtate cycloid curves.[17] Later work indicates that curtate cycloids do not serve as general models for these curves,[18] which vary considerably. ## References 1. ^ Cajori, Florian (1999). A History of Mathematics. New York: Chelsea. p. 177. ISBN 978-0-8218-2102-2. 2. ^ Tannery, Paul (1883), "Pour l'histoire des lignes et surfaces courbes dans l'antiquité", Bulletin des sciences mathèmatique (Paris): 284 (cited in Whitman 1943); 3. ^ Wallis, D. (1695). "An Extract of a Letter from Dr. Wallis, of May 4. 1697, Concerning the Cycloeid Known to Cardinal Cusanus, about the Year 1450; and to Carolus Bovillus about the Year 1500". Philosophical Transactions of the Royal Society of London 19 (215–235): 561. doi:10.1098/rstl.1695.0098. edit (Cited in Günther, p. 5) 4. ^ a b c d Whitman, E. A. (May 1943), "Some historical notes on the cycloid", The American Mathematical Monthly 50 (5): 309–315, doi:10.2307/2302830 (subscription required) 5. ^ Cajori, Florian, A History of Mathematics (5th ed.), p. 162, ISBN 0-8218-2102-4(Note: The first (1893) edition and its reprints state that Galileo invented the cycloid. According to Phillips, this was corrected in the second (1919) edition and has remained through the most recent (fifth) edition.) 6. ^ a b Roidt, Tom (2011). Cycloids and Paths (MS). Portland State University. p. 4. 7. ^ Cantor, Moritz (1892), Vorlesungen über Geschichte der Mathematik, Bd. 2, Leipzig: B. G. Teubner, OCLC 25376971 8. ^ Günther, Siegmund (1876), Vermischte untersuchungen zur geschichte der mathematischen wissenschaften, Leipzig: Druck und Verlag Von B. G. Teubner, p. 352, OCLC 2060559 9. ^ Phillips, J. P. (May 1967), "Brachistochrone, Tautochrone, Cycloid—Apple of Discord", The Mathematics Teacher 60 (5): 506–508(subscription required) 10. ^ Victor, Joseph M. (1978), Charles de Bovelles, 1479-1553: An Intellectual Biography, p. 42, ISBN 978-2-600-03073-1 11. ^ Martin, J. (2010). "The Helen of Geometry". The College Mathematics Journal 41: 17–28. doi:10.4169/074683410X475083. edit 12. ^ de Bouelles, Charles (1503), Introductio in geometriam ... Liber de quadratura circuli. Liber de cubicatione sphere. Perspectiva introductio., OCLC 660960655 13. ^ a b Torricelli, Evangelista (1644), Opera geometrica, OCLC 55541940 14. ^ a b c d Walker, Evelyn (1932), A Study of Roberval's Traité des Indivisibles, Columbia University (cited in Whitman 1943); 15. ^ Conner, James A. (2006), Pascal's Wager: The Man Who Played Dice with God (1st ed.), HarperCollins, p. 224, ISBN 9780060766917 16. ^ C. Huygens, "The Pendulum Clock or Geometrical Demonstrations Concerning the Motion of Pendula (sic) as Applied to Clocks," Translated by R. J. Blackwell, Iowa State University Press (Ames, Iowa, USA, 1986). 17. ^ Playfair, Q. "Curtate Cycloid Arching in Golden Age Cremonese Violin Family Instruments". Catgut Acoustical Society Journal. II 4 (7): 48–58. 18. ^ Mottola, RM (2011). "Comparison of Arching Profiles of Golden Age Cremonese Violins and Some Mathematically Generated Curves". Savart Journal 1 (1).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 29, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9576874375343323, "perplexity": 2232.7702362963123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461988.0/warc/CC-MAIN-20150226074101-00302-ip-10-28-5-156.ec2.internal.warc.gz"}
https://byjus.com/questions/find-the-value-of-tan-240/
# Find the value of tan 2400. (a) √3 (b) -1 (c) -√3 (d) none of these Solution: tan 2400 = tan (180+60)0 180+θ is in the third quadrant. tan is positive in the 3rd quadrant. tan (180+60)0 = tan 600 = √3 Hence option a is the answer.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8933317065238953, "perplexity": 5783.102281403186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301670.75/warc/CC-MAIN-20220120005715-20220120035715-00468.warc.gz"}
http://mathhelpforum.com/math-topics/97526-maths-moments.html
Math Help - Maths by Moments 1. Maths by Moments Hi, i am still having a bit of trouble if some-one can please help me solve it. thanks IVA Attached Files
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9720795154571533, "perplexity": 2577.545146021422}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115927113.72/warc/CC-MAIN-20150124161207-00129-ip-10-180-212-252.ec2.internal.warc.gz"}
https://www.e-olymp.com/en/contests/21810/problems/240328
Competitions Server You are in charge of a server that needs to run some submitted tasks on a first-come, first-served basis. Each day, you can dedicate the server to run these tasks for at most t minutes. Given the time each task takes, you want to know how many of them will be finished today. Consider the following example. Assume t = 180 and the tasks take 45, 30, 55, 20, 80 and 20 minutes (in order they are submitted). Then, only four tasks can be completed. The first four tasks can be completed because they take 150 minutes, but not the first five, because they take 230 minutes which is greater than 180. Notice that although there is enough time to perform the sixth task (which takes 20 minutes) after completing the fourth task, you cannot do that because the fifth task is not done yet. Input The first line contains two integers n (1n50) and t (1t500) where n is the number of tasks. The next line contains n positive integers no more than 100 indicating how long each task takes in order they are submitted. Output Display the number of tasks that can be completed in t minutes on a first-come, first-served basis. Time limit 1 second Memory limit 64 MiB Input example #1 6 180 45 30 55 20 80 20 Output example #1 4 Input example #2 10 60 20 7 10 8 10 27 2 3 10 5 Output example #2 5 Source 2014 ACM North America - Rocky Mountain, Problem A
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47664493322372437, "perplexity": 733.2525952244738}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585199.76/warc/CC-MAIN-20211018062819-20211018092819-00073.warc.gz"}
https://www.nature.com/articles/s41598-019-48986-5?error=cookies_not_supported&code=c8bbd383-417b-4195-bc27-6d52b545698e
## Introduction Debris flows are destructive mass movement, causing extensive economic losses and casualties around the world1,2,3. China is one of the mostly affected countries by debris flows, and approximately 50 thousand debris-flow sites distributed over 48% of the territory area of China4,5. These debris-flow sites are concentrated in Southwest China, particularly in Sichuan Province, Yunnan Province, and Tibet Autonomous Region. During 2000–2016, debris flows caused 90 deaths annually in Sichuan Province, which was twice as many as that caused by landslides6. Many earthquakes have happened in Sichuan, leading to massive debris flows7. Moreover, the magnitude and frequency of debris flows showed an increasing trend due to intensified environmental change and human activity8,9,10,11. The uncertainty of debris flows restricts the land-use planning and results in devastating effects on downstream areas. Susceptibility modeling is considered as the initial step towards hazard and risk assessment of debris flows, and it can also be used for debris flows warning system and environmental impact assessment. Therefore, it is indispensable to assess the susceptibility and identify the important factors associated with occurrence of debris flows for refining disaster management practices. The methods for modeling susceptibility of debris flows vary widely among different countries or regions, which were generally categorized into physical models and statistical models4. The physical models simulate the dynamic process of landslides or debris flows based on physical mechanisms which consider hydrological conditions, slope stability, and soil strength decrease associated with landslide and debris-flow initiation12,13. The physical models are commonly employed for small-scale studies by using geographic information system and/or Monte Carlo simulations14,15,16. On the basis of the mechanisms of slope failure, physical models analyzed the dynamic process of debris flows and the hydrological conditions. To estimate the safety factor for a specific unit, the physical models require a wide range of small-scale data regarding mechanical soil characteristics and triggering factors, such as the potentially maximum volume of debris flows, 24-hour maximum rainfall, watershed cutting density, height difference, sediment concentration and population densities17. Due to the high data requirement, the physical models are not applicable for large-scale studies, but can be used to qualitatively validate statistical model results. On the other hand, statistical models are less data-intensive and more suitable for simulating regional debris-flow susceptibility18,19. Parametric statistical models are commonly employed, such as analytic hierarchy process, logistic regression, and information value method, to link the regional susceptibility of debris flows to the potentially influencing factors20. In general, the factors include topographic geology, hydrometeorology, and human activities21. While the data categories are diverse, the data are all relatively easily retrieved through the Geographic Information Systems (GIS) and/or Remote Sensing (RS). For instance, the power-function model generates accurate and feasible estimates of debris-flow susceptibility in Yunnan, Southwest China22. A model comparison study found that the logistic regression model performed better than the physical models at regional scale12. While the parametric statistical models played an important role in simulating debris flows, they were inadequate to capture complex relationships that were difficult to be specified23. As a result, the prediction accuracy would be restrained. Machine learning is a sophisticated statistical approach to modeling complex relationships between predictor and response variables, which is critical for assessing susceptibility of debris flows24,25. Machine learning approach, which pertains to the algorithmic modeling culture, learns model structures from training data and generally shows better predictive performance than parametric statistical models, such as logistic regression models26,27. Machine learning algorithms have shown great success in modeling disasters, such as landslides28,29, floods30,31, and debris flows24. Several popular machine learning algorithms, such as neural networks23,32, support vector machine (SVM)33, and naïve Bayes34, showed reliable performance in predicting occurrences of debris flows. Compared with the aforementioned machine learning algorithms, gradient boosting machines (GBM) generally showed better predictive performance in a series of model comparisons27. By utilizing the strengths of classification/regression trees and boosting, GBM grows a series of weak decision trees in a stage-wise fashion in order to slowly but steadily achieve optimization35,36,37. This study aims to model the susceptibility of debris flows by watersheds in Sichuan, Southwest China to advance the management of risks related to debris flows. We compiled the data of debris-flow events for almost 70 years (1949–2017) in Sichuan, as well as a comprehensive predictor dataset. A sophisticated GBM model was developed to predict the susceptibility of debris flows by watershed units. The predictive performance of GBM was compared with four benchmark models, including the Logistic Regression (LR), the K-Nearest Neighbor (KNN), the Support Vector Machine (SVM), and the Artificial Neural Network (ANN). On the basis of the finely trained GBM model, the important predictor variables were identified, and the spatial distributions of debris-flow susceptibility were mapped. The results of this study are expected to provide a solid basis for predicting debris-flow disasters in the future, early warning, and risk prevention. ## Results and Discussion ### Predictive performance In the cross-validation, the final GBM model showed good performance in predicting the susceptibility of debris flows, with the AUC of 0.88 and accuracy of 82.0% (Table 1). The prediction accuracy for the watersheds without debris-flow observations (85.4%) was relatively higher than that for the watersheds with debris flow observed (73.5%). It thus indicated that the prediction tended to biased towards the low susceptibility of debris flow. As the important hyperparameters in the GBM model, the number of trees and the tree depth were tuned to be 700 and 10, respectively. The hyper-parameter tuning process was essential for improving the predictive performance of the GBM model. The final GBM model retained 37 of the 72 predictors in the initial GBM model through the variable selection process, during which the prediction deviance initially fluctuated and then increased dramatically after 35 iterations (Fig. 1). The operation of variable selection reduced the data requirement and avoided spurious details in estimating the susceptibility of debris flow. Due to the difficulty in data collection, the debris-flow events were compiled from multiple sources over the long-term span (1949–2017). As this study focused on the spatial pattern of debris flow, the effects of data-source inconsistency were assumed to be negligible. The GBM model was superior to the benchmark models (i.e., LR, KNN, SVM, and ANN) in predicting the susceptibility of debris flow (Table 2). For the KNN model, the best predictive performance was achieved when the number of neighbors considered equaled to 15. For the SVM model, the kernel, gamma, and cost of constraints violation were tuned to radial, 0.01, and 10, respectively. For the ANN model, the number of units in the hidden layer was set to 3, and the decay was set to 0.1. The previous studies also found that GBM models exhibited better performance in simulate susceptibility of debris flows than SVM and mixture discriminant analysis did, although the research domains of these studies were distinctive33,34. In the future, more comprehensive model comparisons will be necessary to guide the model selection for simulating debris flow. ### Important predictor variables The elevation range was the most important predictor variable in the final GBM model, with the importance value of 13.3, and the associated predictor variable of channel gradient exhibited an importance value of 4.1 (Fig. 2). The elevation range plays a critical role in the formation of debris flow by determining the level of potential energy. Larger elevation difference leads to higher potential energy, creating favorable conditions for debris flows. The debris flow mainly occurred in the mountainous areas, as well as the surroundings of undulating plateau38,39. A previous study found that debris flow tended to happen when the height difference reached more than 300 m38. In our study area, more than 97% of the river basins in the valley where debris flow happened, had a height difference ranging from 400 to 4000 m. In addition, channel gradient provided the conditions for the conversion of loose material forces in the watershed into kinetic energy. It has been acknowledged that higher channel gradient favored occurrence of debris flow40. The maximum daily rainfall was the second most important predictor variable, with the importance value of 8.6, while the importance values of the annual rainfall and the maximum 3-day rainfall were 2.9 and 2.5, respectively (Fig. 2). Rainfall is one of the essential trigger factors of debris flow41,42. Heavy rainfall indicated by maximum daily rainfall tend to trigger debris flow when source materials are abundant. The maximum 3-day rainfall with a longer time span is supplementary to the maximum daily rainfall. The annual rainfall together with the aridity index reflect the dry-wet condition in the long term. The aridity index, with the importance value of 8.0, was the third most important predictor variable (Fig. 2). Extremely arid climates have been found to be highly associated with occurrences of debris flows, which are usually caused by extremely dry periods followed by wet seasons43,44. The drought background or the dry-wet alternating climate conditions aggravate soil cracks, change the structure/composition of soil, and lower the rainfall thresholds triggering debris flows. Drought degraded vegetation cover, weakens soil structure, and increases loose solid materials prone to debris flows due to their distribution of varied debris and disturbed soil45,46,47. Debris flows were found to occur on the sunny side more frequently than the shady side of a mountain, suggesting that the hydrothermal conditions, particularly droughts, influenced occurrences of debris flows47. The water erosion intensity and the negative effects of anthropogenic activity were also important factors to the susceptibility of debris flows. As indicated above, Sichuan lies in the transition area between the Qinghai-Tibet Plateau and the plain region. The previous studies showed that the soil erosion was 0.5–7 mm/y in the Qinghai-Tibet Plateau from 30 Ma (million anniversary) ago to the present47,48,49. While the rock/soil types play a critical role in the formation and accumulation of surface sediments, the rapid soil erosion provides massive unconsolidated materials which is source material for debris flows. Earthquakes induce secondary disasters such as landslides providing debris flows with source materials, and the impact was indicated by the seismic intensity. In addition, the anthropogenic activities such as road construction and land overexploitation accelerate soil erosion and consequently exacerbate debris flow50, which is reflected by the high importance of the national road length, the number of settlement sites, and the population density (Fig. 2). As the predictor variables with respect to soil types, the area proportions of clay, silt, and sand exhibited relatively negligible importance to the susceptibility of debris flow, with importance values of 2.3, 2.2, and 1.3, respectively. The soil types directly affected the sediment concentration of debris flow, which in turn influenced its size and flow state. The clay content influences the formation of debris flow by affecting the initiation of debris flow, especially for viscous debris flow51. A moderate amount of clay content was an essential precondition for forming large-scale debris flow with a high amount of sediment concentration. Under the effect of precipitation, the loose clay soil expands after water absorption, leading to an increase in pore pressure and failure of viscous resistance, which accelerated the formation of debris flow. The effects of factors influencing debris flow formation were complicated by non-linearity and interactions. It was therefore very important to identify the key controlling factors. According to the present study modeling the susceptibility of debris flows in Sichuan Province, topographic conditions, geological background, precipitation, and anthropogenic activities played an important role in the formation of debris flow. In addition, the susceptibility of debris flow was also associated with the drought conditions, road construction, soil types, and land use, which were indispensable factors in evaluating the susceptibility of debris flow at regional level. ### Susceptibility mapping As a result of the GBM model, the debris-flow map was constructed. It shows the spatial distribution of the susceptibility, which was classified into five categories, including very low, low, moderate, high, very high (Fig. 3a). Table 3 shows the areas and numbers of watersheds by susceptibility category. The watersheds of very low susceptibility occupy the largest area (226,600 km2), with the largest number of watersheds (1,342) accounting for the 47% of the study area. These watersheds were mainly distributed in western plateau and mountainous areas, as well as eastern plain and hilly areas. The number of the moderate-susceptibility watersheds is the smallest (212), and the area is the smallest (33,500 km2; 7% of the study area). The watersheds with high or very high debris-flow susceptibility (110,100 km2), accounting for 22% of the total areas, are mainly distributed in the central mountainous region across Sichuan from north to south. These areas are located in the lower reaches of the Yalong River and the Dadu River, and the upper reaches of the Minjiang River near the Wenchuan earthquake. The susceptibility map of watershed-based debris flow evaluated by GBM was considerably different from those evaluated by the benchmark models (Fig. 3). The number of watersheds with very-high susceptibility was largest as predicted by GBM (297), followed by ANN (243), SVM (234), LR (194), and KNN (130). The watersheds with high or very-high susceptibility predicted by GBM were more concentrated near the Wenchuan earthquake region compared with the predictions made by the other models. In addition, the areas with moderate susceptibility were overestimated by KNN, which also underestimated the areas with very-low or very-high susceptibility, manifesting that KNN did not perform well in mapping the susceptibility of debris flow. The map predicted by using the GBM model qualitatively and quantitatively characterized the spatial distribution of the debris-flow susceptibility for the watersheds. The western part of the study area was mainly located in the hinterland of the Qinghai-Tibet Plateau. The topography was dominated by plateaus and hill-shaped areas with gentle fluctuation. The environmental conditions in all areas, except for some deep-cut river valleys, were insufficient for development of debris flow, where the watersheds were dominated by the ones with very low susceptibility. The eastern part of the study area was mainly distributed in the Sichuan basin and hilly landforms, where the topography did not vary largely. Other than the watersheds of the moderate susceptibility in the Qujiang River basin, most of the watersheds in the eastern Sichuan were of low or very low susceptibility of debris flow. The watersheds of high debris-flow susceptibility were mainly concentrated in the western part of the study area. Topographically, the highly susceptible areas were located in the topographic belt transiting from the Tibetan Plateau to the Sichuan Basin. In the Hengduan Mountains lying from north to south, the terrain is fragmented and the hills are steep, creating adequate conditions for debris flows. The fault zones of Longmenshan, Xianshuihe, and Anninghe distributed in an “Y” shape (shaded area in Fig. 4), which was generally consistent with the seismic zones. In these zones, earthquakes and rock fractures occurred frequently, with a number of secondary mountain disasters, providing abundant source materials to debris flows. In addition, the high-susceptibility areas were coupled with the dry valley landscape in the study area. Among those areas, the Yalong River and its tributaries, including the Anning River Valley, the Dadu River, the upper reaches of Min River, middle and lower reaches of the Jinsha River, were the concentrated areas of debris flow, which were also identified as the areas with high or very high susceptibility of debris flow. The dry valleys with fragile ecosystem and severe soil erosion were found in all the rivers of Yalong, Dadu, Min and Jinsha. The dry valleys were affected by local circulation and forming activity. The evaporation in the valleys was far greater than the precipitation, where the vegetations were hard to grow and the soil erosion was severe. Moreover, in the dry valleys, inappropriate cultivation, such as steep slope reclamation and smooth slope cultivation, led to severe gravity erosion prone to formation of debris flow. Meteorologically, heavy rains tended to trigger debris flows in these areas. In addition, the construction of roads and hydropower stations was intensive in these areas and tended to aggravate the susceptibility of debris flows. In general, the spatial distribution of high susceptibility of debris flow in Sichuan Province had a degree of overlap with the topographical extreme belt, fault zone, seismic belt, and dry valleys. Prevention and control of debris-flow risk in the study should be focused on these four types of highly coupled areas for preventing or mitigating sudden mass deaths caused by debris flow. We studied the spatial distribution of debris flow for the watersheds in Sichuan Province, and clearly identified the critical areas for the monitoring and early warning of debris flow. The results had very important practical significance and social benefits for disaster prevention and reduction. ## Conclusions On the basis of the comprehensive dataset associated with debris flows, a GBM model was developed to simulate the susceptibility of debris flows in Sichuan, Southwest China. The GBM model showed highlighted predictive performance by adequately capturing the complex relationships between the predictor and response variables, which was superior to the benchmark models (i.e., LR, KNN, SVM, and ANN). The elevation range, maximum daily rainfall, and aridity index were identified as the most important predictor variables influencing the occurrences of debris flows, which provided invaluable information for management. In addition, the high intensity area of water erosion, length of national roads, channel gradient, and number of settlement sites also played an important role in the susceptibility of watershed-based debris flow. The susceptibility map was produced by using the GBM model. This map could facilitate initial hazard evaluation for development planning. The spatial distributions of the high-susceptibility watersheds were highly coupled with the locations of the topographical extreme belt, fault zone, seismic belt, and dry valleys. It is essential to conduct monitoring and risk prevention in the highly susceptible areas. ## Materials and Methods ### Study area The study area, i.e., Sichuan Province, is located in Southwest China (26°03′–34°20′N, 97° 22′–110°10′E), covering an area of approximately 485 thousand km2 (Fig. 4). The complex landform of Sichuan is dominated by mountainous and hilly lands which account for 85% of the total terrain. The main part of Sichuan lies in the geomorphological transition area between the Tibetan Plateau and the Middle-Upper Yangtze River Plain, with elevation differences larger than 4000 m. Sichuan is mainly of monsoon climate, and approximately 70% of the annual average rainfall (around 1000 mm) happens from June to September. The major rivers in Sichuan, including the Yalong River, the Minjiang River, the Tuojiang River, the Jialing River, and the Wujiang River, are tributaries of the Yangtze River. The stratums of Sichuan were well developed from the Upper Archean to the Quaternary. The species of magmatic rocks are abundant, and granites account for the major proportion of the rocks. Being divided by the Longmenshan fault zone, the western and eastern Sichuan show large differences in terrain, stratigraphic structure, and meteorological conditions. Sichuan Province is a highly active seismic zone, where three major earthquakes happened in the last ten years, including the Wenchuan Ms 8.0 earthquake in 2008, the Lushan Ms 7.0 earthquake in 2013, and Jiuzhaigou Ms 7.0 earthquake in 201752. Similar to the previous studies1,53, the susceptibility of debris flows was modeled by watersheds, which are basic units for the whole phenomenon of debris flows, which includes triggering, propagation, and stoppage38. On the basis of the digital elevation model (DEM), streamline map, and satellite images, we delineated 2474 watersheds by using both the automatic and manual vectorization methods (Fig. 4). ### Data preparation A total of 3839 debris-flow events were identified in 774 watersheds of Sichuan during 1949–2017. The debris-flow data from 1949 to 2004 were obtained from the Sichuan Geo-Environment Monitoring Program6, and the debris-flow events during 2005–2017 were compiled from news reports and literatures. The locations of the debris flows were concentrated in the mid-western Sichuan, where a considerable number of population dwell (Fig. 4). The spatial distributions of the debris-flow events generally coincided with the arid valley extending from the Hengduan Mountains in the Eastern Tibetan Plateau to the Yuannan-Guizhou Plateau. As debris flows were rarely observed in the plateau and plain areas, this study focused on the watersheds located in the mountainous and hilly areas. The watershed with/without debris flow occurred were labelled as presence/absence of debris flow for the subsequent modeling. According to the present knowledge on debris flows and data availability, 72 predictor variables were determined for modeling the susceptibility of debris flows by watersheds (Table 4). The geomorphological factors, including the area, perimeter, elevation difference, channel gradient, average slope, average aspect, and channel length were derived from the DEM dataset (30 m resolution) retrieved through the Advanced Spaceborne Thermal Emission and Reflection Radiometer54. The geological factors, including the length of active faults and the type of seismic intensity (at 1:4000000 scale), were obtained from the China Seismic Information55. The rock hardness was rasterized from the 1:200000 lithological composition map of Sichuan55,56. The meteorological conditions, including the annual average rainfall, annual average temperature, annual accumulated temperature above 10 °C, aridity index, and moisture index, were acquired from the corresponding raster files (500 m resolutions) published in the Data Center for Resources and Environmental Sciences (RESDC) of the Chinese Academy of Sciences57. The maximum daily rainfall and the maximum 3-day rainfall were derived from the daily observations at meteorology sites58. The Normalized Difference Vegetation Index (NDVI; 300 m spatial resolution) were derived from the Proba-V satellite retrievals59. The land use types, population densities, soil erosion intensity, and soil textures were obtained from the RESDC57. The lengths of county roads, highways, and railways were summarized from the OpenStreetMap for each watershed60. The locations of settlement sites were obtained from the Socioeconomic Data and Applications Center (SEDAC)61. The values of the following predictor variables are discretized: seismic intensity, rock hardness, soil texture, water erosion intensity, wind erosion intensity, freeze-thaw erosion intensity, land use, and road length. The raw data of the predictor variables were preprocessed to the delineated watersheds by using various tools in the ArcGIS, including Calculate Geometry, Zonal statistics as Table, Spatial Join, Tabulate Intersection, Raster Calculator, Surface, Reclassify, Buffer, and Kriging Interpolation. The correlations between the predictor variables were evaluated with the Spearman correlation coefficients (Fig. 5). ### Model description For simulating the susceptibility of debris flow (i.e., occurrence probability) by watersheds, a GBM model was trained to minimize the following loss or deviance function62: $$L(y,f(x))=\sum \{\,\mathrm{log}(1+\exp (f({x}_{i}))-{y}_{i}f({x}_{i})\}$$ (1) where x represents the predictor variables (Table 4), y is the observation of debris flow event (i.e., occurrence/non-occurrence), and f(x) is the GBM model parameterized through the following procedure35,36: $${\rm{Model}}\,{\rm{initialization}}:\,{f}_{0}(x)=\,\mathrm{log}\,\frac{\sum {y}_{i}}{\sum (1-{y}_{i})}$$ (2) For k = 1 to K, repeat the steps below in order to obtain fK(x): • Draw a subsample from the training dataset at random without replacement • Use the model updated at step k-1 to calculate the residuals ($${\tilde{y}}_{j}$$) for this sub-sample: $${\tilde{y}}_{j}={y}_{j}-\frac{1}{1+\exp (\,-\,{f}_{k-1}({x}_{j})}$$ (3) • Develop a new classification tree ρk to fit $${\tilde{y}}_{j}$$ • Update the model by adding the fitted tree with a shrinkage rate (default: λ = 0.05): $${f}_{k}(x)={f}_{k-1}(x)+\lambda {\rho }_{k}$$ (4) The model output was the occurrence probability or susceptibility of debris flow. Hyperparameter tuning and variable selection were performed to further refine the GBM model. The values of the hyperparameters, including the number of trees (K) and the tree depth, were determined when the associated prediction deviance reached the minimum in the 10-fold cross-validation (explained in the next subsection). Similarly, the predictor variables of the GBM model (initially 72 variables) were selected by using the backward selection strategy, where the least important variable (explained in the next subsection) was removed from the model one at a time. The set of predictor variables with the lowest prediction deviance in the cross-validation was selected to build the final GBM. The R packages of gbm and dismo were used for training the GBM model and making predictions62,63. R package doParallel was used to run the modeling process in a parallel manner for reducing the computing time64. With the same data and predictor variables, the GBM model was compared with four benchmark models, including LR, KNN, SVM, and ANN, to evaluate the performance in predicting the susceptibility of debris flow. LR is a generalized linear model for classification parameterized by the maximum likelihood. KNN, a non-parametric algorithm, groups K samples nearest to a particular sample into the same category, and the prediction is the mode in this category. SVM classifies samples in feature spaces by hyperplanes based on maximal margin classifiers, and kernels are applied to expand the feature spaces for accommodating non-linear boundaries. ANN, a kind of adaptive system with multi-layer neurons, learns from the pre-provided input and output data. The LR, KNN, SVM, and ANN models were implemented with R packages of stats, class, e1071, and nnet65,66,67, respectively. All the parameters in GBM, KNN, SVM, and ANN models were tuned through the grid search method. ### Model evaluation The model predictive performance was evaluated with the commonly used metrics, including the prediction accuracy and the area under curve (AUC) of the receiver operating characteristic (ROC). The AUC illustrated the changes of true positive rate and false positive rate when the discrimination threshold varied. The 10-fold cross-validation approach was employed to obtain model predictions, where the training and prediction data were separated in order to reflect more realistic performance. Specifically, the training dataset was randomly partitioned into 10 similarly-sized groups. At each of 10 rounds, 9 groups were used to train the model which made predictions for the remaining group. After 10 rounds, every observation was paired with a prediction value. In addition, the variable importance measure, which is valuable for interpreting and diagnosing the GBM model35, was used to evaluate the effects of the predictor variables on the susceptibility of debris flows. The variable importance was indicated by the mean decrease in deviance resulted from the splits on that variable. A partial dependence plot showed the effects of a predictor variable on the susceptibility of debris flows after subtracting the average effects of all the other predictor variables. ### Susceptibility mapping The susceptibility of debris flow for each watershed in Sichuan was estimated by using the final GBM model and the benchmark models. The levels of susceptibility were divided into five classes, including very high, high, moderate, low and very low, based on the equal-interval classification method. Arc GIS was used to map the watershed-based susceptibility for intuitive visualization.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6346865892410278, "perplexity": 3257.74992094556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710488.2/warc/CC-MAIN-20221128070816-20221128100816-00722.warc.gz"}
https://publications.aap.org/pediatrics/article/147/5/e20201634/180891/Diagnostic-Stewardship-of-Endotracheal-Aspirate?searchresult=1?autologincheck=redirected
BACKGROUND: Clinicians commonly obtain endotracheal aspirate cultures (EACs) in the evaluation of suspected ventilator-associated infections. However, bacterial growth in EACs does not distinguish bacterial colonization from infection and may lead to overtreatment with antibiotics. We describe the development and impact of a clinical decision support algorithm to standardize the use of EACs from ventilated PICU patients. METHODS: We monitored EAC use using a statistical process control chart. We compared the rate of EACs using Poisson regression and a quasi-experimental interrupted time series model and assessed clinical outcomes 1 year before and after introduction of the algorithm. RESULTS: In the preintervention year, there were 557 EACs over 5092 ventilator days; after introduction of the algorithm, there were 234 EACs over 3654 ventilator days (an incident rate of 10.9 vs 6.5 per 100 ventilator days). There was a 41% decrease in the monthly rate of EACs (incidence rate ratio [IRR]: 0.59; 95% confidence interval [CI] 0.51–0.67; P < .001). The interrupted time series model revealed a preexisting 2% decline in the monthly culture rate (IRR: 0.98; 95% CI 0.97–1.0; P = .01), immediate 44% drop (IRR: 0.56; 95% CI 0.45–0.70; P = .02), and stable rate in the postintervention year (IRR: 1.03; 95% CI 0.99–1.07; P = .09). In-hospital mortality, hospital length of stay, 7-day readmissions, and All Patients Refined Diagnosis Related Group severity and mortality scores were stable. The estimated direct cost savings was $26 000 per year. CONCLUSIONS: A clinical decision support algorithm standardizing EAC obtainment from ventilated PICU patients was associated with a sustained decline in the rate of EACs, without changes in mortality, readmissions, or length of stay. What’s Known on This Subject: Endotracheal aspirate cultures (EACs) are commonly obtained in evaluation of suspected ventilator-associated infection. Bacterial growth in EACs does not distinguish bacterial colonization from infection and may lead to overtreatment with antibiotics. Diagnostic stewardship has been used to improve the use of other microbiology testing. What This Study Adds: Diagnostic stewardship of EACs by using a clinical decision support tool led to a reduction in EAC use and cost savings in a PICU and was not associated with changes in mortality, length of stay, or readmissions. Clinicians commonly obtain endotracheal aspirate cultures (EACs) in the evaluation of suspected ventilator-associated infections (VAIs),1,2 a common hospital-acquired infection.3,4 Although EACs cannot distinguish between bacteria colonizing the respiratory tract from bacteria causing infection,57 positive EAC results prompt clinicians to treat with antibiotics.1,2,8 Therefore, overtesting may contribute to excessive treatment with antibiotics because treatment of VAI accounts for up to one-half of the antibiotic use in the PICU.9,10 There is not a gold standard definition of VAI, and the diagnosis is challenging. VAIs encompass either ventilator-associated pneumonia (VAP) or ventilator-associated tracheobronchitis because these entities are, often, difficult to distinguish and are treated interchangeably.1113 The Centers for Disease Control and Prevention includes respiratory secretion Gram-stain and cultures as a component of the surveillance definition of VAP.14 The Infectious Diseases Society of America and American Thoracic Society guidelines for management of adults with VAP support obtainment of EACs.15 However, there are not national consensus recommendations regarding specific clinical indications for which to obtain EACs in either children or adult patients. A survey of clinicians revealed that fever was the most frequent symptom triggering EACs collection and more than one-half of EACs were obtained for isolated clinical changes and were of little to no utility for patient management.16 In studies of ventilated pediatric patients, researchers suggest that antibiotic treatment of VAI is driven by the presence or absence of bacterial growth in EACs,1,2,8 yet antibiotic treatment was not associated with improvement in clinical outcomes.1 Diagnostic stewardship is the promotion of judicious use of diagnostic tests by either modifying the process of ordering the test or how results are reported to improve the accuracy of diagnosis and treatment.17 Diagnostic stewardship approaches have improved the use of Clostridium difficile testing,18,19 urine cultures,20,21 and blood cultures.22,23 Our objective was to describe the development and implementation of a clinical decision support tool to standardize the use of EACs from mechanically ventilated patients in the PICU and assess the impact of this intervention on EAC use and balancing measures. This study was conducted in the PICU of the Johns Hopkins Children’s Center in Baltimore, Maryland, a quaternary care academic PICU with 36 beds caring for medical, surgical, and cardiac patients from birth to 24 years of age, with 2000 yearly admissions. We performed an analysis of a quality improvement (QI) program to improve EAC use to measure associated outcomes and safety. In this study, we specifically reviewed all patients admitted to the unit who were mechanically ventilated via endotracheal tube or tracheostomy 1 year before implementation of the algorithm (preintervention: April 1, 2017, to March 31, 2018) and 1 year after (postintervention: April 1, 2018, to March 31, 2019). A time line of the QI initiative is summarized in Table 1. In 2016, PICU and infectious diseases physicians expressed concern for the frequently repeated and reflexive use of EACs in the Johns Hopkins Children’s Center PICU. A multidisciplinary QI team reviewed baseline EAC use and conducted clinician surveys, which supported that there was a low threshold to obtain EACs.16 We held focus groups with PICU nurse practitioners, physicians, and respiratory therapists, discussing EAC practices and potential barriers to change. The formative work supported that there were drivers of EAC use including (1) reflexive testing in response to isolated clinical changes (eg, fever) and (2) significance attributed to reported changes in respiratory secretions. With the support of PICU leadership, the QI team used a translating-evidence-into-practice model to improve EAC use.24 The local drivers were combined with existing literature, local data, and workgroup consensus to inform algorithm development and implementation (detailed in Table 2). A workgroup, including critical care attending physicians, fellows, nurse practitioners, and infectious disease physicians, drafted an algorithm, shown in Fig 1, to guide clinician decision-making around EACs. The team solicited input from PICU respiratory therapists and nurses and clinical microbiology laboratory faculty. The primary objective of the algorithm was to reduce reflexive culturing from mechanically ventilated patients without signs or symptoms of a respiratory infection by prompting clinicians to consider specific changes supporting a respiratory infection and timing since last EAC. The algorithm page included guidance for specimen collection to avoid the use of saline lavage because this practice can dilute specimens and is not recommended.25 TABLE 1 Time Line of the Endotracheal Culture QI Initiative Event Winter 2016 Initial concern for overuse of EACs Fall 2017 Review of 1 year of baseline EAC use November 2017 to January 2018 Surveys of PICU staff after EACs obtained January to March 2018 Focus groups with PICU staff and development of clinical decision support algorithm March 26, 2018 Decision support algorithm approved by unit director March 26–28, 2018 E-mail announcement and algorithm sent to all PICU staff; paper copies placed in work rooms; algorithm posted to intranet Web site accessible to staff April 1, 2018 Initiation date to start using algorithm April to May 2018 Paper checklist of algorithm signed by attending physicians and fellows when EACs obtained; Meetings with clinicians to discuss initiative and any safety concerns or observations; QI team walk rounds through the PICU soliciting feedback May 2018 to June 2019 Quarterly discussion at PICU safety meetings February 2020 Electronic monitoring dashboard became available Event Winter 2016 Initial concern for overuse of EACs Fall 2017 Review of 1 year of baseline EAC use November 2017 to January 2018 Surveys of PICU staff after EACs obtained January to March 2018 Focus groups with PICU staff and development of clinical decision support algorithm March 26, 2018 Decision support algorithm approved by unit director March 26–28, 2018 E-mail announcement and algorithm sent to all PICU staff; paper copies placed in work rooms; algorithm posted to intranet Web site accessible to staff April 1, 2018 Initiation date to start using algorithm April to May 2018 Paper checklist of algorithm signed by attending physicians and fellows when EACs obtained; Meetings with clinicians to discuss initiative and any safety concerns or observations; QI team walk rounds through the PICU soliciting feedback May 2018 to June 2019 Quarterly discussion at PICU safety meetings February 2020 Electronic monitoring dashboard became available TABLE 2 Local Drivers, Supporting Literature, and Workgroup Consensus That Informed Development and Implementation of an Algorithm to Reduce EAC Use Primary DriversSecondary DriversLiterature, Data, or Workgroup ConsensusAction Taken Reflexive testing for clinical changes in ventilated patient Absence of guidance when to obtain EAC Common triggers for EACs were fever, ventilator changes, secretions, WBCs, and CRP.16 The workgroup consensus was that infections would be associated with abnormal temperatures and WBC but did not specify cutoff values consistent with recent evidence.35 Addressed common triggers in the algorithm Misconception of EACs’ ability to distinguish colonization from infection Airways are nonsterile and become colonized with bacteria. EAC with bacterial growth is not specific to infection.57 Paired algorithm implementation with education Ordering clinicians are unaware of previous EAC results A review of EAC results from 2017 revealed 75% of EACs repeated within 3 d did not have new bacterial species (unpublished observations). Microbiology laboratory does not repeat antibiotic susceptibilities if repeated within 3 d. Incorporated step necessitating review of existence of previous EACs and avoiding cultures repeated within 3 d Competing demands reducing time to consider utility of EAC Ensured algorithm is brief and easily accessible by print or online Variable experience of members of care team with complex communication pattern of clinical information Involved input from all groups of health care workers in guideline development and ensured the algorithm was accessible to everyone Significance attributed to a reported change in respiratory secretions Variability in interpretation between providers of secretion description Increased secretion quantity supported as a common feature of patients with respiratory infections.36 Sputum color and purulence has not correlated with bacterial respiratory infections.3741 Algorithm included quantity of secretions instead of sputum color or thickness descriptions EACs ordered from patients without sufficient secretions, and RTs used saline lavage to obtain cultures. Saline lavage is not a recommended practice.25 Discouraged saline lavage and empowered RT to inform ordering clinician if insufficient quantity of secretions to send for culture Primary DriversSecondary DriversLiterature, Data, or Workgroup ConsensusAction Taken Reflexive testing for clinical changes in ventilated patient Absence of guidance when to obtain EAC Common triggers for EACs were fever, ventilator changes, secretions, WBCs, and CRP.16 The workgroup consensus was that infections would be associated with abnormal temperatures and WBC but did not specify cutoff values consistent with recent evidence.35 Addressed common triggers in the algorithm Misconception of EACs’ ability to distinguish colonization from infection Airways are nonsterile and become colonized with bacteria. EAC with bacterial growth is not specific to infection.57 Paired algorithm implementation with education Ordering clinicians are unaware of previous EAC results A review of EAC results from 2017 revealed 75% of EACs repeated within 3 d did not have new bacterial species (unpublished observations). Microbiology laboratory does not repeat antibiotic susceptibilities if repeated within 3 d. Incorporated step necessitating review of existence of previous EACs and avoiding cultures repeated within 3 d Competing demands reducing time to consider utility of EAC Ensured algorithm is brief and easily accessible by print or online Variable experience of members of care team with complex communication pattern of clinical information Involved input from all groups of health care workers in guideline development and ensured the algorithm was accessible to everyone Significance attributed to a reported change in respiratory secretions Variability in interpretation between providers of secretion description Increased secretion quantity supported as a common feature of patients with respiratory infections.36 Sputum color and purulence has not correlated with bacterial respiratory infections.3741 Algorithm included quantity of secretions instead of sputum color or thickness descriptions EACs ordered from patients without sufficient secretions, and RTs used saline lavage to obtain cultures. Saline lavage is not a recommended practice.25 Discouraged saline lavage and empowered RT to inform ordering clinician if insufficient quantity of secretions to send for culture CRP, C-reactive protein; ETA, endotracheal aspirate; RT, respiratory therapist; WBC, white blood cell. FIGURE 1 Algorithm for obtaining endotracheal cultures from mechanically ventilated patients. CRP, C-reactive protein; ETA, endotracheal aspirate. FIGURE 1 Algorithm for obtaining endotracheal cultures from mechanically ventilated patients. CRP, C-reactive protein; ETA, endotracheal aspirate. Close modal The algorithm was disseminated to all PICU staff preceding initiation on April 1, 2018. For 2 months, fellows or attending physicians were asked to sign off on a paper checklist of the algorithm when EACs were obtained. The QI team conducted in-person walk rounds to solicit bidirectional feedback from staff. Walk rounds occurred daily for a 1 week, weekly for 1 month, monthly for 3 months, and quarterly thereafter. The QI initiative was reviewed quarterly during quality and safety meetings to monitor trends and discuss any concerns. EAC data were originally presented to the unit as a control chart of the monthly rate of cultures per 100 ventilator days (Fig 2). As part of a sustainability plan in Winter 2020, an electronic dashboard was created tracking monthly EACs per 1000 patient days. No hard stops or electronic medical record changes were implemented. Acknowledging that there can be patients with unique needs and in support of clinician autonomy, clinicians could order an EAC incongruent with the algorithm or treat with antibiotics regardless of culture obtainment. The algorithm has not changed since introduction. FIGURE 2 Control chart of the monthly rate of EACs per 100 ventilator days before and after implementation of a decision support algorithm. This U-chart was followed over the first year after implementation of the QI initiative. FIGURE 2 Control chart of the monthly rate of EACs per 100 ventilator days before and after implementation of a decision support algorithm. This U-chart was followed over the first year after implementation of the QI initiative. Close modal The primary outcome was the monthly rate of EACs defined as the number of EACs obtained per 100 ventilator days. Ventilator days were measured by using National Healthcare Safety Network (NHSN) methodology.14 To facilitate future comparisons and as a sustainability metric, we examined the rate of EACs per 1000 PICU patient days as an alternative denominator that was more readily accessible than ventilator days. Process measures included the number of EACs repeated from the same patient within 3 days and days since a patient’s previous EAC. Objective monitoring of adherence with the full algorithm was not feasible because increased quantity of secretions was not reliably documented. To better understand the impact of the algorithm, secondary outcomes included whether the EACs were obtained reflexively with blood and urine cultures (ie, all cultures were ordered at the same time), whether the patient was ventilated via an endotracheal tube >48 hours or had a tracheostomy at the time of EAC, whether the patient received antibiotic treatment, and if the antibiotic treatment was specifically for a new episode of VAI. Antibiotic treatment included ongoing or newly initiated antibiotics for any condition within 2 days of EAC culture obtainment and excluded prophylactic antibiotics. A new episode of treated VAI was defined as initiation of antibiotics continued >2 days with clinician-documented indication for tracheitis, pneumonia, or treating a positive EAC result. For example, if a patient had an EAC followed by 5 days of cefepime for tracheitis and, then, a new EAC followed by 7 days of meropenem for tracheitis, it was considered 2 VAI episodes. Balancing measures included in-hospital mortality, hospital and PICU length of stay, hospital and PICU readmissions within 7 days, and All Patients Refined Diagnosis Related Groups (APR-DRGs) mortality and severity scores. Lastly, we estimated cost savings. First, we calculated charge savings by applying the microbiology laboratory’s average charge to process 1 EAC (ie, culture, identify bacteria, and perform antibiotic susceptibilities) during the baseline period and taking the difference between the 2 years. Then, we transformed charges into costs by multiplying the charge estimates by the median national cost-to-charge ratio (CCR) from the Kids’ Inpatient Database CCR database, the largest publicly available resource of all-payer inpatient pediatric hospitalizations.26,27 Patient demographics; admission, transfer, and discharge data, mortality; APR-DRG mortality and severity scores; ventilator days, patient days; and EACs were queried from the electronic medical system. Clinical data pertaining to individual EACs (eg, time since last EAC, type of ventilation when the EAC was obtained, and antibiotic treatment after EAC) were completed by manual chart review. The Johns Hopkins University School of Medicine Institutional Review Board acknowledged the QI project and approved the study with a waiver of informed consent. As part of the QI project, we monitored the monthly rate of EACs per ventilator days from April 2017 to March 2019 in a statistical process control chart (U-chart) created in Microsoft Excel. After the initial analysis, to monitor sustainability, we examined the rate of EACs per patient days in a control chart using the electronic dashboard that displayed data from July 2016 (when the current electronic medical record system was introduced) to March 2020. The control limits in these charts were set to 3 SDs above and below the mean monthly rate of EACs. The baseline period included through March 2018, and we adjusted the centerline after special cause variation was demonstrated.28 We analyzed the rate of EAC use per 100 ventilator days and per 1000 patient days among PICU patients in the preintervention period (April 2017 to March 2018) compared with the postintervention period (April 2018 to March 2019) using a Poisson regression model with an indicator variable for the postintervention period; and then, using a quasi-experimental interrupted time series (ITS) model with segmented regression of the log-transformed monthly EAC rates.22,29 The ITS model was used to estimate (1) the monthly rate of change in the preintervention period (April 2017 to March 2018), (2) the immediate effect in the month the intervention was introduced (April 2018), and (3) the rate of change in the postintervention period (April 2018 to March 2019) in comparison with the preintervention period. Patient demographic and other outcomes were evaluated with analysis of variance for normally distributed continuous variables, Wilcoxon rank tests for nonnormally distributed variables, χ2 tests for categorical variable, and a 2-sided Poisson test for comparing incidence rates. Stata version 14 (StataCorp, College Station, TX) was used to perform these statistical analyses. In Figure 2, we show the control chart of EAC use per 100 ventilator days. We shifted the centerline in May 2018, after special cause was demonstrated by 1 data point below the lower control limit (the centerline mean shifted from 10.9 to 6.4 EACs per 100 ventilator days). EAC use per ventilator-days and patient-days data are provided in Table 3. In the preintervention period, there was an average of 46 EACs per month and total of 557 cultures over 5092 ventilator days (an incident rate of 10.9 EACs per 100 ventilator days). After introduction of the algorithm, there was an average of 19 EACs per month and total of 234 EACs over 3654 ventilator days (an incident rate of 6.5 EACs per 100 ventilator days). The absolute reduction in EACs was 58%, and the overall decrease in the monthly rate of EACs per ventilator days was 41% (incidence rate ratio [IRR]: 0.59; 95% confidence interval [CI] 0.51–0.67; P < .001). The ITS model revealed a preexisting 2% decline in the monthly culture rate (IRR: 0.98; 95% CI 0.97–1.0; P = .01), immediate 44% drop (IRR: 0.56; 95% CI 0.45–0.70; P = .02), and stable rate in the postintervention period (IRR: 1.03; 95% CI 0.99–1.07; P = .09). A sensitivity analysis varying the start dates confirmed that the immediate drop in EAC rates corresponded with the intervention start date in April 2018. Over the 2-year study period, there were 402 and 398 mechanically ventilated patients in each year, with similar sex, race, and ethnicity (Supplemental Table 5). TABLE 3 EAC Use, Process Measures, and Secondary Outcomes 1 Year Before and After Initiation of a Diagnostic Stewardship Program in the PICU OutcomesYear PreinterventionYear PostinterventionP Primary outcome, culture rates EACs, n 557 234 — Monthly EACs, mean (SD) 46.4 (10.1) 19.5 (4.3) <.001 Ventilator days, n 5092 3654 — Monthly EAC rate per 100 ventilator days, mean (SD) 10.9 (1.4) 6.5 (1.5) <.001 Patient days, n 11 442 10 763 — Monthly EAC rate per 1000 patient days, mean (SD) 48.9 (11.4) 21.8 (5.0) <.001 Process measures EAC repeated within 3 d, n (%) 108 (19.4) 21 (9.0) <.001 Time since previous EAC, d, median (IQR) 7.0 (3.0–14.0) 13.0 (5.0–37.0) <.001 Secondary outcomes EAC collected reflexive with blood and urine cultures, n (%) 197 (35.4) 90 (38.5) .41 Ventilator support at time of EACa Intubated ≥48 h, n (%) 255 (46.6) 90 (40.9) .06 Tracheostomy, n (%) 207 (37.8) 64 (29.1) .008 Antibiotic treatment of any condition after EAC,bn (%) 465 (83.5) 199 (85.0) .59 Antibiotic treatment of a new episode of VAI after EAC,cn (%) 148 (26.6) 60 (25.6) .79 OutcomesYear PreinterventionYear PostinterventionP Primary outcome, culture rates EACs, n 557 234 — Monthly EACs, mean (SD) 46.4 (10.1) 19.5 (4.3) <.001 Ventilator days, n 5092 3654 — Monthly EAC rate per 100 ventilator days, mean (SD) 10.9 (1.4) 6.5 (1.5) <.001 Patient days, n 11 442 10 763 — Monthly EAC rate per 1000 patient days, mean (SD) 48.9 (11.4) 21.8 (5.0) <.001 Process measures EAC repeated within 3 d, n (%) 108 (19.4) 21 (9.0) <.001 Time since previous EAC, d, median (IQR) 7.0 (3.0–14.0) 13.0 (5.0–37.0) <.001 Secondary outcomes EAC collected reflexive with blood and urine cultures, n (%) 197 (35.4) 90 (38.5) .41 Ventilator support at time of EACa Intubated ≥48 h, n (%) 255 (46.6) 90 (40.9) .06 Tracheostomy, n (%) 207 (37.8) 64 (29.1) .008 Antibiotic treatment of any condition after EAC,bn (%) 465 (83.5) 199 (85.0) .59 Antibiotic treatment of a new episode of VAI after EAC,cn (%) 148 (26.6) 60 (25.6) .79 IQR, interquartile range; —, not applicable. a Proportions do not add up to 100 because if the patient was not intubated ≥48 h or with a tracheostomy, the remaining EACs were obtained from patients intubated <48 h (85 EACs in year 1 and 66 EACs in year 2). b Antibiotic treatment included ongoing or newly initiated antibiotics within 2 d of EAC obtainment. Antibiotics given for prophylaxis were excluded. c Newly treated VAI episodes included initiation of antibiotic treatment continued >2 d with clinician-documented indication for tracheitis, pneumonia, or treating a positive EAC result. When examining EAC rates per patient days, there were 11 442 patient days in the preintervention year and 10 763 patient days in the postintervention year (Table 3). There was an overall 55% decline in the monthly rate of EACs in the year after introduction of the algorithm, from 49 to 22 cultures per 1000 patient days (IRR: 0.45; 95% CI 0.37–0.54; P < .001). The ITS model revealed a stable rate of cultures per 1000 patient days during the preintervention year (IRR: 0.98; 95% CI 0.95–1.02; P = .32), followed by an immediate 45% drop with the introduction of the algorithm in April 2018 (IRR: 0.55; 95% CI 0.40–0.70; P < .001) and stable rate in the postintervention period (IRR: 0.98; 95% CI 0.95–1.02; P = .29). A control chart of EACs per 1000 patient days revealed a sustained reduction in EAC use through April 2020 (Fig 3). FIGURE 3 Control chart of the monthly rate of EACs per 1000 patient days before and after implementation of a decision support algorithm. This U-chart was created by using data from an electronic dashboard established in February 2020. The data begin in July 2016, with the introduction of the electronic medical record system, Epic, and depict the EAC rate ∼2 years before and after the introduction of the clinical decision support algorithm in April 2018. FIGURE 3 Control chart of the monthly rate of EACs per 1000 patient days before and after implementation of a decision support algorithm. This U-chart was created by using data from an electronic dashboard established in February 2020. The data begin in July 2016, with the introduction of the electronic medical record system, Epic, and depict the EAC rate ∼2 years before and after the introduction of the clinical decision support algorithm in April 2018. Close modal The process measures associated with EAC obtainment are presented in Table 3. The number of EACs repeated within 3 days fell from 19% to 9% (P < .001), with an 81% absolute reduction. The median time to repeated EAC from the same patient increased from 7 to 13 days (P < .001). The secondary outcomes are presented in Table 3. There was a 54% absolute reduction of EACs obtained reflexively with blood and urine cultures (from 197 to 90 EACs), although the proportion was similar over time. There was a relative decline (from 46.6% to 40.9%; P = .06) and large absolute reduction of 65% (from 255 to 90 EACs) in EACs obtained from patients intubated >48 hours. Similarly, there was a relative decline (from 37.8% to 29.1%, P = .008) and large absolute reduction of 69% (from 207 to 64 EACs) in EACs obtained from patients with tracheostomies. The majority of EACs were associated with the patient receiving antibiotic treatment across both years (83% vs 85%). Although the proportion of EACs associated with treatment of new episodes of VAI was stable (26.6% vs 25.6%), there was a 59% absolute reduction in treated VAI episodes (from 148 to 60 episodes). In-hospital mortality, hospital length of stay, hospital 7-day readmissions, PICU length of stay, PICU 7-day readmissions, or APR-DRG severity and mortality scores were stable across the 2 years (Table 4). We observed a decrease in average monthly ventilator days (from 424 to 305 days; P < .001) and monthly patient days (from 954 to 897; P = .01) between the first and second years. TABLE 4 Balancing Measures 1 Year Before and After Initiation of a Diagnostic Stewardship Program in the PICU Year PreinterventionYear PostinterventionP Hospital admissions with a PICU visit and mechanical ventilation, n 658 637 — In-hospital mortality, n (%) 38 (5.8) 44 (6.9) .40 APR-DRG severity score 1–4, median (IQR)a 3.0 (2.0–4.0) 3.0 (2.0–4.0) .70 APR-DRG mortality score 1–4, median (IQR)a 4.0 (3.0–4.0) 4.0 (3.0–4.0) .80 Hospital length of stay, d, median (IQR) 9 (4–19) 9 (4–22) .81 7-d hospital readmissions, n (%) 50 (7.6) 45 (7.1) .71 PICU visits with mechanical ventilation, n 697 735 — PICU length of stay, d, median (IQR) 3.8 (1.7–8.7) 3.9 (1.7–9.8) .64 7-d PICU readmissions, n (%) 29 (4.2) 26 (3.5) .54 Year PreinterventionYear PostinterventionP Hospital admissions with a PICU visit and mechanical ventilation, n 658 637 — In-hospital mortality, n (%) 38 (5.8) 44 (6.9) .40 APR-DRG severity score 1–4, median (IQR)a 3.0 (2.0–4.0) 3.0 (2.0–4.0) .70 APR-DRG mortality score 1–4, median (IQR)a 4.0 (3.0–4.0) 4.0 (3.0–4.0) .80 Hospital length of stay, d, median (IQR) 9 (4–19) 9 (4–22) .81 7-d hospital readmissions, n (%) 50 (7.6) 45 (7.1) .71 PICU visits with mechanical ventilation, n 697 735 — PICU length of stay, d, median (IQR) 3.8 (1.7–8.7) 3.9 (1.7–9.8) .64 7-d PICU readmissions, n (%) 29 (4.2) 26 (3.5) .54 IQR, interquartile range; —, not applicable. a Denominator for admissions with available APR-DRG data are 645 of 658 admissions in the first year and 617 of 637 admissions in the second year. The average charge to process 1 EAC during the baseline period was$220. The estimated charges were $122 504 and$51 480 in the preintervention and postintervention years, respectively. Applying the median national CCR of 0.363,26  the estimated direct costs were $44 482 and$18 687, respectively. The estimated direct cost savings was \$25 795 per year. The introduction of a clinical decision support algorithm was associated with a 41% reduction in the rate of EAC per ventilator days and was not associated with potential safety concerns, such as increased in-hospital mortality, length of stay, or hospital readmissions. The reduction in EAC use was demonstrated by using a statistical process control chart and by using ITS analysis, and the findings were robust when measured on the basis of ventilated patient days and PICU patient days. The final implemented algorithm was a consensus among multidisciplinary experts informed by the local assessment of drivers of EAC use, available literature, and collective clinical experience. The introduction of the algorithm reduced reflexive testing and promoted more judicious EAC ordering practices. The median time to repeated cultures from the same patient increased from 7 to 13 days, and there was a 54% reduction in EACs obtained reflexively with blood and urine cultures. Given the paucity of literature around diagnostic stewardship of EACs, it is possible alternative algorithms or approaches could be successful. We believe our approach conducting formative investigation of local practices and potential barriers, along with engaging multidisciplinary stakeholders, contributed to the success of our initiative and led to a sustained change in culturing practices. Importantly, this algorithm may not apply to all patients. For example, during routine safety and quality meetings, we discussed that patients on extracorporeal membrane oxygenation may not exhibit sufficient signs and symptoms to meet the algorithm. To accommodate differing patient scenarios, no hard stops in the electronic medical record or microbiology laboratory were put in place, and clinician autonomy and discretion was preserved. We did not evaluate the impact of our intervention on VAI rates because NHSN surveillance definitions include positive EAC results. Therefore, a reduction in EACs would confound the interpretation of VAI rates. Furthermore, alternative surveillance criteria, such as ventilator-associated events, may have low specificity for VAIs.30,31  VAI remains a potential complication of mechanical ventilation, and EACs may have an important role in clinical care, helping to identify causative organisms and tailor antibiotic management when patients do have a VAI. A risk of overuse of EACs is potentiating unnecessary antibiotic treatment. EACs sample a nonsterile site, yet bacterial growth prompts clinicians to treat with antibiotics.1,2,8  We observed that >80% of patients with EACs still were treated with antibiotics. However, given the reduction in the number of EACs, we saw a 59% reduction in treated episodes of VAI after EAC. It is possible that there were patients treated for VAI who did not have an EAC obtained and were not included. In the postintervention period, the overall rate of EACs was still 1 EAC per 15 ventilator days. Therefore, we postulate a reduction in treated VAI episodes reflects a true reduction in antibiotic use in association with a reduced frequency of EACs. Previous studies have also revealed a reduction in antibiotic treatment after diagnostic stewardship of urine cultures.32,33  We plan to further explore the impact of reduced EAC use on antibiotic prescribing and associated cost savings among mechanically ventilated PICU patients in future studies. The reduction in EAC use led to substantial direct cost savings, which likely underestimated the true cost savings if indirect measures, such as staff time, or additional resources were considered. Improving EAC practice and reducing unnecessary costs of care aligns with the national Choosing Wisely campaign focused on reducing medical overuse.34  There may be additional opportunities to improve EAC use in other settings, such as in ambulatory, acute care, or long-term care facilities caring for patients with tracheostomies. We recognize a few limitations of this work. First, this was a single-center QI project with retrospective review. It is possible there could be changes in the patient population over time that influenced EAC use. We observed a decline in ventilator days and patient days in the year after implementing the algorithm. There were 3 notable changes, primarily affecting the second year: (1) an extubation readiness test implemented in February 2018; (2) a temporary reduction in PICU beds, from 36 to 30 PICU beds in September 2018; and (3) reduced cardiac surgeries starting in October 2018. Although we did not collect data around indications for PICU admission, we did not observe a change in basic patient demographics, and, if APR-DRG values were considered as a surrogate for complexity, there was not a change in complex admissions in the postintervention year. The overall decline in ventilator and patient days highlights the importance of measuring EAC rates rather than just absolute reductions. Second, although there was not a statistically significant change in mortality, this study is underpowered to detect small changes. This PICU conducts a review of all in-hospital mortality cases, and, in the many active discussions with PICU clinicians after implementation of the algorithm, there were no reported safety concerns attributed to reduced EAC use leading to delayed treatment of infections or complications. Multicenter studies would be necessary to provide sufficient power to statistically assess mortality. Third, we did not assess ventilator-free days because the NHSN ventilator days did not distinguish the route of ventilation and it would not be an applicable metric for ventilator-dependent patients with tracheostomies. Finally, our center could have unique patient characteristics, workflow, or local culture that influenced both preexisting EAC use and the response to the algorithm. There are no previous data regarding optimal EAC use rates to compare our EAC rates with those of other facilities as a benchmark. We used local assessments and existing literature to inform the development and implementation of a clinical decision support algorithm to standardize EAC practice among ventilated PICU patients. The introduction of the algorithm was associated with a significant decline in the rate of EACs without changes in mortality, readmissions, or length of stay. Further studies are needed to understand the safety and reproducibility of decision support tools to improve EAC use in other settings and patient populations. We acknowledge Dr Kristin Voegtline from the Johns Hopkins Biostatistics, Epidemiology and Data Management Core for assistance with data management, Dr Eili Klein for creation of the electronic dashboard, and Avi Gadala and Guanyu Li for clinical data coordination and retrieval from the Core for Clinical Research Data Acquisition, supported in part by the Johns Hopkins Institute for Clinical and Translational Research (UL1TR001079). We thank the Johns Hopkins Microbiology Laboratory and the staff of the Johns Hopkins Children’s Center PICU. Dr Sick-Samuels conceptualized and designed the study, collected data, analyzed data, drafted the initial manuscript, and reviewed and revised the manuscript; Mr Linz and Dr Bergman designed data collection instruments, collected data, analyzed data, and reviewed and revised the manuscript; Dr Hoops and Mr Dwyer conceptualized and designed the study and critically reviewed the manuscript for important intellectual content; Drs Fackler, Ralston, Berenholtz, and Milstone conceptualized and designed the study and reviewed and revised the manuscript; Dr Colantuoni participated in study design, analyzed data, and reviewed and revised the manuscript; and all authors approved the final manuscript as submitted and agree to be accountable for all aspects of the work. FUNDING: Funded in part by the Johns Hopkins Eudowood Board Bauernschmidt Award to Dr Sick-Samuels; National Institutes of Health grant KL2TR003099 to Dr Sick-Samuels and grant K24AI141580 to Dr Milstone; and Agency for Healthcare Research and Quality grant R18HS025642 to Dr Milstone. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies. Funded by the National Institutes of Health (NIH). • APR-DRG All Patients Refined Diagnosis Related Group • • CCR cost-to-charge ratio • • CI confidence interval • • EAC endotracheal aspirate culture • • IRR incidence rate ratio • • ITS interrupted time series • • NHSN National Healthcare Safety Network • • QI quality improvement • • VAI ventilator-associated infection • • VAP ventilator-associated pneumonia 1 Willson DF , Hoot M , Khemani R , et al.; Ventilator-Associated INfection (VAIN) Investigators and the Pediatric Acute Lung Injury and Sepsis Investigator’s (PALISI) Network . Pediatric ventilator-associated infections: the Ventilator-Associated INfection Study . Pediatr Crit Care Med . 2017 ; 18 ( 1 ): e24 e34 2 Willson DF , Kirby A , Kicker JS . Respiratory secretion analyses in the evaluation of ventilator-associated pneumonia: a survey of current practice in pediatric critical care . Pediatr Crit Care Med . 2014 ; 15 ( 8 ): 715 719 3 Raymond J , Aujard Y ; European Study Group . Nosocomial infections in pediatric patients: a European, multicenter prospective study . Infect Control Hosp Epidemiol . 2000 ; 21 ( 4 ): 260 263 4 Grohskopf LA , Sinkowitz-Cochran RL , Garrett DO , et al.; Pediatric Prevention Network . A national point-prevalence survey of pediatric intensive care unit-acquired infections in the United States . J Pediatr . 2002 ; 140 ( 4 ): 432 438 5 Hill JD , Ratliff JL , Parrott JC , et al . Pulmonary pathology in acute respiratory insufficiency: lung biopsy as a diagnostic tool . J Thorac Cardiovasc Surg . 1976 ; 71 ( 1 ): 64 71 6 Durairaj L , Z , Launspach JL , et al . Patterns and density of early tracheal colonization in intensive care unit patients . J Crit Care . 2009 ; 24 ( 1 ): 114 121 7 Willson DF , Conaway M , Kelly R , Hendley JO . The lack of specificity of tracheal aspirates in the diagnosis of pulmonary infection in intubated children . Pediatr Crit Care Med . 2014 ; 15 ( 4 ): 299 305 8 Venkatachalam V , Hendley JO , Willson DF . The diagnostic dilemma of ventilator-associated pneumonia in critically ill children . Pediatr Crit Care Med . 2011 ; 12 ( 3 ): 286 296 9 Carcillo JA , Dean JM , Holubkov R , et al.; Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) Collaborative Pediatric Critical Care Research Network (CPCCRN) . The randomized comparative pediatric critical illness stress-induced immune suppression (CRISIS) prevention trial . Pediatr Crit Care Med . 2012 ; 13 ( 2 ): 165 173 10 Fischer JE , Ramser M , Fanconi S . Use of antibiotics in pediatric intensive care and potential savings . Intensive Care Med . 2000 ; 26 ( 7 ): 959 966 11 Fayon MJ , Tucci M , Lacroix J , et al . Nosocomial pneumonia and tracheitis in a pediatric intensive care unit: a prospective study . Am J Respir Crit Care Med . 1997 ; 155 ( 1 ): 162 169 12 Gauvin F , Dassa C , Chaïbou M , Proulx F , Farrell CA , Lacroix J . Ventilator-associated pneumonia in intubated children: comparison of different diagnostic methods . Pediatr Crit Care Med . 2003 ; 4 ( 4 ): 437 443 13 Craven DE , Chroneou A , Zias N , Hjalmarson KI . Ventilator-associated tracheobronchitis: the impact of targeted antibiotic therapy on patient outcomes . Chest . 2009 ; 135 ( 2 ): 521 528 14 Centers for Disease Control and Prevention . National Healthcare Safety Network (NHSN) Patient Safety Component Manual . Atlanta, GA : Centers for Disease Control and Prevention ; 2019 15 Kalil AC , Metersky ML , Klompas M , et al . Management of adults with hospital-acquired and ventilator-associated pneumonia: 2016 Clinical Practice Guidelines by the Infectious Diseases Society of America and the American Thoracic Society. [published corrections appear in Clin Infect Dis. 2017;64(9):1298; Clin Infect Dis. 2017;65(8):1435; and Clin Infect Dis. 2017;65(12):2161] . Clin Infect Dis . 2016 ; 63 ( 5 ): e61 e111 16 Sick-Samuels AC , Fackler JC , Berenholtz SM , Milstone AM . Understanding reasons clinicians obtained endotracheal aspirate cultures and impact on patient management to inform diagnostic stewardship initiatives . Infect Control Hosp Epidemiol . 2020 ; 41 ( 2 ): 240 242 17 Morgan DJ , Malani P , Diekema DJ . Diagnostic stewardship-leveraging the laboratory to improve antimicrobial use . JAMA . 2017 ; 318 ( 7 ): 607 608 18 Sperling K , Priddy A , Suntharam N , Feuerhake T . Optimizing testing for Clostridium difficile infection: a quality improvement project . Am J Infect Control . 2019 ; 47 ( 3 ): 340 342 19 Truong CY , Gombar S , Wilson R , et al . Real-time electronic tracking of diarrheal episodes and laxative therapy enables verification of Clostridium difficile clinical testing criteria and reduction of Clostridium difficile infection rates . J Clin Microbiol . 2017 ; 55 ( 5 ): 1276 1284 20 Epstein L , Edwards JR , Halpin AL , et al . Evaluation of a novel intervention to reduce unnecessary urine cultures in intensive care units at a tertiary care hospital in Maryland, 2011-2014 . Infect Control Hosp Epidemiol . 2016 ; 37 ( 5 ): 606 609 21 Mullin KM , Kovacs CS , Fatica C , et al . A multifaceted approach to reduction of catheter-associated urinary tract infections in the intensive care unit with an emphasis on “stewardship of culturing” . Infect Control Hosp Epidemiol . 2017 ; 38 ( 2 ): 186 188 22 Woods-Hill CZ , Fackler J , Nelson McMillan K , et al . Association of a clinical practice guideline with blood culture use in critically ill children . JAMA Pediatr . 2017 ; 171 ( 2 ): 157 164 23 Woods-Hill CZ , Lee L , Xie A , et al . Dissemination of a novel framework to improve blood culture use in pediatric critical care . Pediatr Qual Saf . 2018 ; 3 ( 5 ): e112 24 Pronovost PJ , Berenholtz SM , Needham DM . Translating evidence into practice: a model for large scale knowledge translation . BMJ . 2008 ; 337 : a1714 25 Branson RD . Secretion management in the mechanically ventilated patient . Respir Care . 2007 ; 52 ( 10 ): 1328 1342, NaN–1347 26 Agency for Healthcare Research and Quality ; Healthcare Cost and Utilization Project . HCUP Cost-to-Charge Ratio for Kids’ Inpatient Database (KID) . Rockville, MD : Agency for Healthcare Research and Quality ; 2016 27 Healthcare Cost and Utilization Project . Cost-to-charge ratio files: user guide for Kids’ Inpatient Database (KID) CCRs. Available at: https://www.hcup-us.ahrq.gov/db/state/CCR_KID_UserGuide_2003-2016.pdf. Accessed January 13, 2020 28 Provost LP , Murray SK . The Health Care Data Guide: Learning from Data for Improvement . San Francisco, CA : Jossey-Bass ; 2011 29 Schweizer ML , Braun BI , Milstone AM . Research methods in healthcare epidemiology and antimicrobial stewardship-quasi-experimental designs . Infect Control Hosp Epidemiol . 2016 ; 37 ( 10 ): 1135 1140 30 Ziegler KM , Haywood JD , Sontag MK , Mourani PM . Application of the new Centers for Disease Control and Prevention surveillance criteria for ventilator-associated events to a cohort of PICU patients identifies different patients compared with the previous definition and physician diagnosis . Crit Care Med . 2019 ; 47 ( 7 ): e547 e554 31 Willson DF , Hall M , Beardsley A , et al.; Pediatric Acute Lung Injury and Sepsis Investigators (PALISI) Network . Pediatric ventilator-associated events: analysis of the pediatric ventilator-associated infection data . Pediatr Crit Care Med . 2018 ; 19 ( 12 ): e631 e636 32 Hartley SE , Kuhn L , Valley S , et al . Evaluating a hospitalist-based intervention to decrease unnecessary antimicrobial use in patients with asymptomatic bacteriuria . Infect Control Hosp Epidemiol . 2016 ; 37 ( 9 ): 1044 1051 33 Stagg A , Lutz H , Kirpalaney S , et al . Impact of two-step urine culture ordering in the emergency department: a time series analysis . BMJ Qual Saf . 2018 ; 27 ( 2 ): 140 147 34 Morgan DJ , Croft LD , Deloney V , et al . Choosing wisely in healthcare epidemiology and antimicrobial stewardship . Infect Control Hosp Epidemiol . 2016 ; 37 ( 7 ): 755 760 35 Cocoros NM , Kleinman K , Priebe GP , et al.; Pediatric Ventilator-Associated Conditions Study Team . Ventilator-associated events in neonates and children–a new paradigm . Crit Care Med . 2016 ; 44 ( 1 ): 14 22 36 Pugin J , Auckenthaler R , Mili N , Janssens JP , Lew PD , Suter PM . Diagnosis of ventilator-associated pneumonia by bacteriologic analysis of bronchoscopic and nonbronchoscopic “blind” bronchoalveolar lavage fluid . Am Rev Respir Dis . 1991 ; 143 ( 5 ): 1121 1129 37 Altiner A , Wilm S , Däubener W , et al . Sputum colour for diagnosis of a bacterial infection in patients with acute cough . Scand J Prim Health Care . 2009 ; 27 ( 2 ): 70 73 38 Brusse-Keizer MGJ , Grotenhuis AJ , Kerstjens HAM , et al . Relation of sputum colour to bacterial load in acute exacerbations of COPD . Respir Med . 2009 ; 103 ( 4 ): 601 606 39 Daniels JMA , de Graaff CS , Vlaspolder F , Snijders D , Jansen HM , Boersma WG . Sputum colour reported by patients is not a reliable marker of the presence of bacteria in acute exacerbations of chronic obstructive pulmonary disease . Clin Microbiol Infect . 2010 ; 16 ( 6 ): 583 588 40 Reychler G , Andre E , Couturiaux L , et al . Reproducibility of the sputum color evaluation depends on the category of caregivers . Respir Care . 2016 ; 61 ( 7 ): 936 942 41 Yalamanchi S , Saiman L , Zachariah P . Decision-making around positive tracheal aspirate cultures: the role of neutrophil semiquantification in antibiotic prescribing . Pediatr Crit Care Med . 2019 ; 20 ( 8 ): e380 e385 ## Competing Interests POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose. FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24897298216819763, "perplexity": 14369.68521790355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104585887.84/warc/CC-MAIN-20220705144321-20220705174321-00797.warc.gz"}
https://chemistry.stackexchange.com/tags/adsorption/hot
# Tag Info 12 There are two common arguments presented as for why $\Delta H < 0$: Argument (1): Well, indeed I see nothing wrong with the argument presented by the textbook. If adsorption takes place spontaneously, then one can conclude that the change in Gibbs free energy of the process is indeed negative. Since, the entropy change associated with process is ... 11 Activated charcoal is a great adsorbent because of it's huge surface area. While it doesn't bind very many ions/atoms/molecules per surface area (which is the characteristic of a 'good' adsorbent), due to very big surface area per unit of mass it can adsorb a lot of particles. Actually, process of 'activating' charcoal is designed to maximize the surface ... 11 The second law of thermodynamics states that the entropy of the universe always increases. $$\mathrm{d}S > 0$$ In the case of adsorption the entropy of the system; the gas being adsorbed; decreases but the entropy of the surroundings;the rest of the gas and the surface (and everything else in the universe); increases and this outweighs the decrease in ... 9 OK, let's step through your questions one at a time. First off, what do they mean by steady-state number here? Intermediates formed in reactions usually exist in low concentration. The steady-state assumption assumes that the concentration of this (low-concentration) intermediate will not change appreciably over the course of the reaction once steady ... 9 Critical temperature is kind of measure of the strength of intermolecular van der Waals force. Comparison of actual and critical temperature is kind of comparison of average kinetic energy of molecules with energy needed to break the intermolecular bonds. So temperature itself does not say anything about the bonding strength. It only indirectly determine ... 8 First, a note on the Bell-Evans-Polanyi (BEP) equation: The form you've written it in disguises what the constant $\alpha$ tells you. From the wikipedia article, it's said that $\alpha$ is a number between 0 and 1 which tells you how close the transition state is to the reactant state. Thus, this constant actually tells you quite a lot about the shape of ... 8 The IUPAC defintion isn't more limited than the ones you've read. Instead, it is more general. Your confusion results from not being familiar with the term "condensed phase". You think a condensed phase is a 'more condensed liquid', and consequently think IUPAC is referring to a surface formed between a liquid and a 'more condensed liquid': "... 7 Charcoal isn't a particularly good adsorbent even though it is chemically very similar to activated carbon. Activated carbon is usually made by more specialised processes that guarantee the final product will have a very large surface area (often >1,000m2/g). Manufacture usually involves pyrolysation with hot gasses, but many forms are also further ... 6 Without knowing exactly what you were told its hard to be exact, but here is a description of the situation. The Langmuir model makes assumptions (a) adsorption is complete when the surface is filled with one gas molecule per site, (b) all adsorption sites are equivalent (i.e. the same) and (c) adsorption and desorption are separate processes, i.e. one ... 6 At hydrous equilibrium will the "stronger" desiccant contain virtually all of the water? Or is the equilibrium distribution of water a function of the desiccants' relative "hygroscopy," rates of sorption, or some other factor(s)? "Rates" aren't relevant for questions about equilibrium. I am trying to imagine a series of tests in which two different ... 6 When burning a blunt, joint, doobie or whatever, THC evaporates (bp of tetrahydrocannabinoles is around 155 °C). Depending on the length of the "object" , related to the temperature gradient between the front and the mouth piece, and the type of the "filling" and its adsorptive properties a part of the vaporized material might condense and/or get absorbed ... 6 I see three reasons: Activated carbon is commonly used to adsorb organics, that should bind well on coal. For many purposes, activated carbon is treated with potassium or iodine to provide ions for charged centers. Charring organic stuff is a cheap and easy way to create large surfaces for adsorption. 6 As acknowledged in your question, there are a large number of factors at play here. The most basic of these is that there are three fundamental mechanisms by which freezing of a droplet can initiate when in contact with a superhydrophobic surface: 1) heterogeneous nucleation where an ice freezing nucleus (IFN) is present in the water droplet, 2) ... 5 Cyclodextrin hydroxyls are the external surface (hydrophilic and hydrogen-bonding). They circle the larger open end. The carbon and hindered ether skeleton is internal (hydrophobic). A cyclodextrin looks like a bonded hollow detergent micelle in water. Before you believe advertising hype, remember that underivatized cyclodextrins are nephrotoxic, ... 5 Brunauer–Emmett–Teller (BET) explains the physical adsorption of gas molecules on a solid surface, and doesn't address chemisorption. Even the wikipedia link that you provided, makes that quite explicit. However, you are correct to think that it is in a extension of the Langmuir adsorption model, which only accounts for monolayers. The assumption that ... 5 Never forget to interpret in terms of physical/chemical forces, by order of energy: van der Waals, hydrophobic (pseudo forces), hydrogen bonds, disulfide bridges, covalent, electrostatic (for the most frequent). Indeed, in charcoal, there is a large number of $\pi$-electrons that can make bonds, in particular. And yes, the porosity is important. But ... 5 The ratio $$\frac{x}{m}= k P ^{1/n}$$ can be re-expressed in terms of the area $A$, surface-to-volume ratio $\lambda$, and density $\rho$ (note that $m=A\rho/\lambda)$: $$\frac{x}{A}= k' P ^{1/n}$$ where $$k'=k\rho/\lambda$$ The advantage of using $k$ as opposed to $k'$ is that you don't need to determine $\lambda$ to know how much (as in, what mass) ... 4 Charcoal (although activated carbon, a specially processed microporous charcoal product is better) is a material that is mostly carbon. The black allotrope of carbon is graphite, which is made of overlapping planar sheets of graphene. Graphite is very nonpolar (since it has only carbon atoms). Thus, nonpolar substances will adsorb. Most organic substances ... 4 Charcoal is a good absorbent because its quite selective and cheap. Its high surface area means nothing if it doesn't bond to the components that you want absorbed and not a lot else- witness a household sponge which has high surface area and mercury- the mercury just slides off. Charcoal is used in the gold mining/processing industry. The gold(and other ... 4 The author of the article has named the phenomenon incorrectly. Adsorption can indeed result in a lower volume than might be expected, for example when a gas interacts with a high surface area solid. In the case of hydrogen storage, for example, the adsorbed phase is dense enough at low temperatures that much more hydrogen can be stored in a container filled ... 4 Yes, it is meaningful, but often ignored for practical reasons. In the lab, e.g. in a desiccator, you would just use a large excess of the desiccant of choice which would always work. It’s often more meaningful to classify them as to whether they are acidic, basic or neutral. But let’s assume we wanted to create that scale. For simplicity reasons, let’s ... 4 For langmuir isotherm: Let say we are using linear langmuir isotherm. The formula is : $\frac{1}{q_e} = \frac{1}{q_m} + \frac{1}{q_mKL}$ find the value of $q_e$. Using the formula $q_e = \frac{(C_i-C_f)V}{\text{mass}}$ of adsorbent, and then plot the graph $\frac{1}{q_e}$ versus $\frac{1}{C_f}$. It would be easier by using excel. From the graph, we can ... 4 Why is this strange? Consider: $$\ce{2A <=>[k_1][k_{-1}] A_2}$$ Forward rate is second order: $k_1[\ce{A}]^{2}$. Reverse rate is first order $k_{-1}[\ce{A2}]$. 4 Short Answer The following equilibrium roughly represents what goes on during physisorption of a gases onto a a surface. $$\ce{Gas + Surface <=>[adsorption][desorption] Gas--Surface}$$ The effect of pressure can be understood on the basis of Le Chatelier's principle, which states that if a stress (such as a change in pressure) is applied to a ... 4 Well, it usually depends on the temperature: if temperature is too low, molecules cannot move so freely so they are localized on top of the solid without much movement. However, in general cases the adsorbed gas behaves like a liquid in two dimensions. Near the surface, the gas molecules are 'trapped' in the force field of the solid, so they don't escape so ... 4 Your problem is not the unit conversion, but the units you have used. The adsorption cross-section of $\ce{N2}$ ($s$) has been found using BET method is $\pu{0.162 nm^2}$ (Langmuir). Yet, you have used $\pu{0.162 nm^3}$, more than a factor of $1 \times 10^9~\mathrm{nm/m}$. 4 One version of Freundlich adsorption isotherm equation is: $$\frac xm = Kp^{\frac12},$$ which can also be written as: $$\log \frac xm = \log K + \frac12\log p$$ This is a straight line equation of type $( y = c + mx)$ as given in the question. However, I think the question has made a mistake saying the slope is $2$, but instead it should be $\frac12$. ... 3 On colliding with a surface the molecules do not necessarily bounce off because, depending on the collision energy, Van-der-Waals forces can act to hold the gas molecule close to the surface. These are typically London dispersion forces but depending on type of surface and molecule, any type of VDW force could occur. This process is called physisorption and ... Only top voted, non community-wiki answers of a minimum length are eligible
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7602678537368774, "perplexity": 925.6376691312037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487641593.43/warc/CC-MAIN-20210618200114-20210618230114-00474.warc.gz"}
https://www.physicsforums.com/threads/trying-to-understand-linear-frame-dragging.403787/
# Trying to understand linear frame dragging 1. May 17, 2010 ### jaketodd See "linear frame dragging" here: http://en.wikipedia.org/wiki/Frame-dragging Two small masses, initially the same distance from a large mass. One of the small masses has a propulsion system that keeps it at a constant distance from the large mass. The other small mass has no propulsion system and falls toward the oncoming large mass. Do both objects experience linear frame dragging? If not, which one does? What effect does the linear frame dragging have on the small mass(es)? Thanks, Jake 2. May 17, 2010 ### Jonathan Scott As far as I know, linear frame dragging is related to an accelerating source mass (with changing momentum), and produces a tiny component of acceleration in the same direction. As any gravitational source will produce a much larger acceleration due to the static field, it is very difficult to construct a set-up in which this component might be detectable. If you look at the approximate analogy with electromagnetism, the usual gravitational field corresponds to the grad phi component of the E field, the linear frame dragging corresponds to the dA/dt component of the E field and the rotational frame dragging corresponds to the curl A = B field. In this (somewhat misleadingly simplified) model the gravitational equivalent of the vector potential A is effectively Gmv/r where v is the velocity, so it is like the potential due to the momentum. 3. May 17, 2010 ### IcedEcliptic Rotational frame dragging is the one that is commonly worked with, although I'd guess that two hypothetical cosmic strings racing past one another would cause linear frame dragging. The thing is, you can model RFD using Kerr black holes and their ergosphere, modeling or observing the linear version in a measurable fashion seems unlikely. 4. May 18, 2010 ### jaketodd He said "as far as I know" so I'm wondering if anyone else agrees with this interpretation. Thanks all, Jake 5. May 18, 2010 ### Jonathan Scott Note that linear frame-dragging works like inertia, in that a test object experiences a force proportional to its mass m if nearby objects are accelerating relative to it. It would be very neat if this could be extended so that when everything in the universe is accelerating relative to it with average acceleration a, it experiences a force exactly equal to ma. From the point of view of the rest of the universe, that force would then appear to be due to the inertia of the test object opposing its acceleration (in the opposite direction), and requires an equal and opposite force to maintain the acceleration. This is pointed out in Dennis Sciama's 1953 paper "On the Origin of Inertia". This idea is one of the possible simplifications that would arise from a gravity theory that satisfies Mach's Principle. Unfortunately, it can be shown that in GR this "Sum for inertia" of the effects of the individual accelerations cannot exactly duplicate this effect, mainly because in GR the gravitational constant G is fixed, but the sum depends on the distribution of the masses of the universe and therefore cannot be fixed. This means that either this neat Mach's Principle model is wrong or GR is wrong. (I personally suspect that GR is an approximation which is very accurate at the solar system scale but very inaccurate at larger scales). 6. May 18, 2010 ### jaketodd So it's an effect that moves small-mass objects in the direction a large-mass object is traveling? 7. May 18, 2010 ### Jonathan Scott It accelerates small-mass objects in the direction a large-mass object is accelerating (which is not necessarily the same as the direction in which it is travelling).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9061245322227478, "perplexity": 566.4719413870866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948464.28/warc/CC-MAIN-20180426183626-20180426203626-00486.warc.gz"}
https://sdgpulse.unctad.org/green-economy/
# Make or break for green economy SDG indicators Target 9.4: By 2030, upgrade infrastructure and retrofit industries to make them sustainable, with increased resource-use efficiency and greater adoption of clean and environmentally sound technologies and industrial processes, with all countries taking action in accordance with their respective capabilities Indicator 9.4.1: CO2 emission per unit of value added (Tier I) Target 7.3: By 2030, double the global rate of improvement in energy efficiency Indicator 7.3.1: Energy intensity measured in terms of primary energy and GDP Target SDG 12.6: Encourage companies, especially large and transnational companies, to adopt sustainable practices and to integrate sustainability information into their reporting cycle Indicator SDG 12.6.1: Number of companies publishing sustainability reports (Tier III) In light of recent scientific research -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- , choices in climate policy taken now will be critical for our future and for the future of the ocean and cryosphere. According to the IPCC -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- , climate change has already “caused impacts on natural and human systems on all continents and across the oceans”. We are experiencing more frequent natural disasters and extreme weather events, rising sea levels and diminishing Arctic sea ice, among other changes -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . In August 2019, the United Nations Secretary General, António Guterres, named 2020 a make-or-break year for climate policy, not anticipating that the COVID-19 pandemic would bring societies and economies to an abrupt halt, cutting emissions by an amount impossible to imagine under normal conditions. ## Economic downturn in 2020 took pressure off the atmosphere A growing concentration of the 'critical' greenhouse gases, mainly CO2, CH4, N2O and F-gases, in the atmosphere has been identified as the main cause of increased temperatures on the planet -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . In 2019, greenhouse gas emissions reached a record high of 52.4 Gt of CO2e. Emissions increased by 1.1 per cent from the previous year after a period of little or no growth from 2015 to 2016, a 1.3 per cent increase in 2017 and a 2.0 per cent increase in 2018. Including emissions from land-use change, which are difficult to measure, total emissions amounted to 57.4 Gt in 2019, according to a report from -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . The report notes that this level is about 59 per cent higher than in 1990 and 44 per cent higher than in 2000 (see figure 1). The COVID-19 pandemic had a major impact on global emissions. Estimates by the Global Carbon Project, a global consortium of experts, indicate a decrease of 7 per cent in total carbon dioxide emissions in 2020 -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . This is the largest reduction ever recorded and will bring us back to levels last seen almost 10 years ago. The previous record fall, caused by the global financial crisis, was a reduction of 1.2 per cent in 2009. In December 2020, energy-related CO2 emissions have already rebounded, and they are expected to grow by almost 5 per cent in 2021 as demand for coal, oil and gas recovers with the economy -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . Figure 1. Greenhouse gas emissions and target reductions (SDG 9.4.1) (Gt of CO2e) Source: UNCTAD calculations based on -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- and -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . Notes: Intermediate goals are shown as released by -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . Emissions from land-use change are not included. The CO2 emission estimate for 2020 by -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- , and the estimate for 2021 by -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . The baseline for the path towards 2030 targets is set to 2016, when the Paris Agreement became effective. What do these developments imply for global warming? The year 2020 was one of the three warmest on record, despite the unprecedented drop in emissions seen that year. The annual global temperature was already 1.2°C warmer than pre-industrial conditions -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . The 2015 Paris Climate agreement aims, by 2100, “to limit the temperature increase from pre-industrial levels to 2°C and pursue efforts to remain below 1.5°C” -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . Even with a 1.5°C warming, climate scientists warn that the effects will be far greater than originally expected, including extinction of coral reefs, and many plants, insects and animals -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . According to simulations, reaching the Paris target of keeping global warming below 2°C required emissions of critical greenhouse gases to peak in 2020, and decline sharply thereafter. To remain below 2°C warming by 2100, global emissions should not exceed 40 Gt of CO2e in 2030, and to achieve the below 1.5°C warming target, total emissions should remain below 24 Gt of CO2e by 2030. Remaining below the 2°C target requires a reduction from 2018 levels of nearly 25 per cent and nearly 55 per cent to remain below 1.5°C -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . Thus, although record-breaking, the forecast reduction of CO2 emissions caused by the COVID-19 outbreak will not be enough to achieve even the weakest of the targets set out by the Paris Climate agreement. Global emissions should be cut by almost 8 per cent every year for the next decade to keep us within reach of the 1.5°C target of the Paris Climate agreement. According to -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- , the estimated 2020 fall in emissions translates to only a 0.01°C reduction of global warming by 2050, due to the increasing concentrations of greenhouse gases in the atmosphere. While emissions dropped in 2020, earlier emissions remain in the atmosphere for long periods. ## Most carbon dioxide emitted in Asia – per unit of GDP and in total The most prevalent greenhouse gas is CO2, as figure 1 reveals. It is a gas released through human activities, such as deforestation and burning of fossil fuels, and through natural processes, such as respiration and volcanic eruptions. Around 90 per cent of CO2 emissions are generated by burning of fossil fuels in the form of coal, oil and natural gas. However, CO2 concentrations in the atmosphere are also influenced by deforestation and other types of land-cover or land-use change, due to their impact on the land's potential to absorb or generate CO2. In recent years, CO2 has accounted for almost three quarters of total greenhouse gas emissions. Thus, by focusing on CO2, SDG indicator 9.4.1 helps monitor the largest part, although not the full amount of global greenhouse gas emissions. The regional concentration of CO2 emissions varies considerably across the globe. In 2019, many countries in Africa recorded emissions of less than 20 kg/km2. In Latin American countries and in Australia, emissions were mainly between 20-100 kg/km2. Much higher CO2 emissions, typically more than 200 kg/km2 and sometimes even higher than 2 000 kg/km2, were common for countries located in a band that ranges from the United States of America and Central America over to Europe, excluding Iceland and most of Scandinavia, and over the Near East, to Southern, Eastern and South-Eastern Asia. Within that band, particularly high emission levels were recorded in Central Europe and Eastern Asia. Farther to the North, in Canada, Northern Europe and in Northern and Central Asia, emission levels were lower, usually ranging between 50 and 200 kg/km2 on average per country. Map 1. Geographic concentration of carbon dioxide emissions (kg/km2 per year) – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- and -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . Notes: CO2 emissions from fossil fuel use (combustion, flaring), industrial processes (cement, steel, chemicals and urea) and product use are included. Emissions from fuels burned on ships and aircrafts in international transport are not included. As figure 2 shows, three regions of the world emitted most of the CO2 from fuel combustion, industrial processes and product use: Eastern and South-Eastern Asia (15.5 Gt in 2019), Northern America (5.7 Gt) and Europe (6.6 Gt). Together, they accounted for about 75 per cent of global CO2 emissions in 2019. In Europe, one tenth less emissions were associated with each unit of output in Europe than in Northern America. However, as the European economy is larger, measured in terms of GDP, it also accounts for a higher amount of CO2 emissions than the economy of Northern America. Eastern and South-Eastern Asia was characterized by both higher GDP and higher carbon intensity than the other world regions shown in figure 2. The region alone emitted 41 per cent of world’s emissions. The least CO2 emissions per unit of production were caused by the economies of Latin America and the Caribbean. The economies of Sub-Saharan Africa produced only slightly more CO2 emissions per unit of production than European economies. Sub-Saharan Africa and Latin America and the Caribbean together only contributed 9 per cent of global CO2 emissions, while Europe contributed 17 per cent. Fuels burned on ships and aircrafts involved in international transport, which cannot be allocated to economies, would add about 3 per cent to global CO2 emissions -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . Figure 2. CO2 emissions, emissions intensity and GDP, by region, 2019 (SDG 9.4.1) Source: UNCTAD calculations based on -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . Notes: The area of bars measures CO2 emissions. Regions are arranged by order of emissions amount. CO2 emissions from fossil fuel use (combustion, flaring), industrial processes (cement, steel, chemicals and urea) and product use are included. Emissions from fuels burned on ships and aircrafts in international transport are not included. US$values are in constant 2011 prices, adjusted to purchasing power parities based to the United States of America. Central and Southern Asia includes developing economies in Oceania. ## Population growth and rising prosperity drive carbon dioxide emissions Since 1990, global CO2 emissions have increased by two thirds: from 22.7 Gt in 1990 to 38.0 Gt in 2019. This translates to 1.8 per cent average annual growth. Between 2014 and 2016 CO2 emissions remained almost constant, partly due to a sluggish world economy and slowing construction and weak demand for steel. But from 2017 CO2 emissions’ growth resumed, and by 2018 the annual growth rate had returned to 2.3 per cent -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . In 2019, the growth in emissions slowed down, before turning into negative in the face of the outbreak of COVID-19 (see above). Much of the increase in CO2 emissions observed over the last decades relates to world population growth and increased consumption per capita, since consumption relies on the production of goods and services. In fact, CO2 emissions can be expressed as the product of population size, GDP per capita (GDP/population), and the carbon intensity of production (CO2/GDP): An increase in GDP, the product of the first two factors in the equation above, leads to rising CO2 emissions, unless carbon intensity, the third factor, decreases at a higher rate than the growth of GDP. Some studies suggest that carbon intensity decreases as a country's level of development rises, to the extent that GDP growth can be offset. This would result in a bell-shaped relationship between GDP and emissions – the so-called "environmental Kuznets curve". So far, research has provided mixed empirical evidence for the validity of this curve -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . At the global level, real GDP has more than doubled over the last quarter century – from US$49 trillion in 1990 to US$127 trillion in 2019.1 This is the result of a 45 per cent increase in the world population (1971: 5.3 billion, 2019: 7.7 billion) and a three-quarters' increase in real GDP per capita (1990: US$9 290, 2019: US$16 470) (see figure 3). ## Decreasing carbon intensity cannot offset GDP growth in the less developed regions Global carbon intensity reduced by over one third from 1990 (458 g/US$) to 2019 (299 g/US$). That means, CO2 emissions have grown at a slower pace than GDP. This decoupling of CO2 emissions from GDP has been most significant in Europe, where carbon intensity fell by 55 per cent since 1990, and almost as much in Northern America (-49 per cent). Over the past 29 years, carbon intensity has decreased less in regions consisting mainly of developing economies. Eastern and South-Eastern Asia released over three times more CO2 in 2019 than in 1990, reducing carbon intensity by only 27 per cent. Recently, their carbon intensity has been declining notably, from 2012 to 2017 at an annual rate well above 3 per cent. However, the reduction in carbon intensity over the last three decades did not compensate for the extraordinary increase in GDP per capita; it was just enough to offset the population growth. In Sub-Saharan Africa carbon intensity of the economy dropped by 38 per cent from 1990 to 2019, compared to 17 per cent in Latin America and the Caribbean. In Australia and New Zealand, carbon intensity decreased by 34 per cent. Figure 3. Population, GDP per capita and carbon intensity contributions to CO2 emissions growth, by region Growth contribution (per cent) Source: UNCTAD calculations based on -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . Notes: CO2 emissions from fossil fuel use (combustion, flaring), industrial processes (cement, steel, chemicals and urea) and product use are included. Rates based on US$ values are in constant 2011 prices, adjusted to purchasing power parities based to the United States of America. Central and Southern Asia includes developing economies in Oceania. Europe is the only region where the overall amount of CO2 emissions is lower today than in 1990, by almost 30 per cent. Northern America is close to 1990 levels, but the remaining regions are well above. As countries are connected by global value chains and trade relations, the observed growth in carbon intensity of GDP in developing regions may be driven by demand for carbon-intensive final products in other regions. In fact, studies based on inter-country input-output tables prepared by the -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- find that demand-based CO2 emissions of developed economies are generally higher than their production-based emissions, while most developing economies are net-exporters of CO2 emissions embodied in final products -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . As environmental policy is more stringent in some regions than in others, companies can save production costs by relocating carbon intensive production processes globally, a process described as "carbon leakage" -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . ## Energy demand dropped in early 2020, but rebounded at the end of the year Fuels are mostly burned to produce energy. For that reason, CO2 emissions and energy supply are closely interlinked. According to the -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- , this subcomponent of total CO2 emissions, i.e. energy-related CO2 emissions, accounts for two thirds of CO2 emissions globally. Despite an extraordinary decline in energy demand in 2020, energy-related CO2 emissions still reached 31.5 Gt, compared to 33.0 Gt in 2019, according to -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . They estimate that the global energy demand will increase by 4.6 per cent in 2021, due to the expected recovery from the COVID-19 pandemic. Energy is an indispensable input for most processes generating value added in an economy. This means that energy intensity (Energy/GDP) is an important determinant of the carbon intensity of GDP (CO2/GDP). The other determinant is the carbon intensity of energy supply (CO2/energy), as the decomposition below shows: Figure 4 demonstrates the important role of efficient energy use in reducing the carbon intensity of GDP. From 2008 to 2018, energy intensity reduced on average by 1.7 per cent each year. During that time, developing and developed economies in Asia and Oceania achieved significant reductions in energy intensity, by 16 and 17 per cent respectively. However, due to rising emissions per unit of energy supplied, the reduction of carbon intensity of GDP was smaller: 8 per cent in developing and 13 per cent in developed Asia and Oceania. In Latin America and the Caribbean and in Africa, carbon intensity of energy supply remained almost unchanged. Cuts in energy intensity, however, enabled reducing the carbon intensity of GDP also in those regions by 4 and 11 per cent, respectively. By contrast, In Northern America and Europe, both energy intensity and carbon intensity of energy supply were considerably reduced. Due to higher saving in energy per unit of GDP, the overall reduction in carbon intensity of GDP was slightly higher in Northern America (-27 per cent) than in Europe (-23 per cent). Thus, if GDP did not grow, CO2 emissions from fuel combustions would have declined in all regions of the world over the last decade. Figure 4. Changes in energy intensity (SDG 7.3.1) and carbon intensity, by region, 2008-2018 Growth rate (per cent) Source: UNCTAD calculations based on -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . Notes: Emissions not caused by fuel combustion are not included. US\$ values are in constant 2010 prices, adjusted to purchasing power parities based to the United States of America. Central and Southern Asia includes developing economies in Oceania. Soon after the start of 2020, demand for energy fell sharply due to the measures taken against the COVID-19 pandemic around the world. China, hit by the pandemic first, saw their weekly energy demand fall by 15 per cent, whereas in the Republic of Korea and Japan the estimated impact of COVID-19 measures on energy demand remains below 10 per cent. In Europe, the periods of partial lock down cut weekly energy demand by 17 per cent on average, while countries with a higher share of services and greater stringency of lockdowns saw their energy demand reduce by as much as 25 to 30 per cent. India’s full national lockdown reduced their weekly energy demand by almost 30 per cent. Overall, the IEA estimated that for each additional month of restrictions in place as of early April 2020, global annual energy demand would reduce by 1.5 per cent -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . In 2021, the pandemic continues to impact global energy demand. The falling demand has been reflected in sinking oil and gas prices. In March 2020, the UNCTAD free market commodity price index for fuels recorded a historic drop of 33.2 per cent month-on-month -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . The impact of COVID-19 has been especially pronounced on transport. Since the outbreak of the pandemic, people have not been travelling much, and the global number of flights collapsed from mid-March 2020. The number of weekly commercial flights available was about 75 per cent lower in the first half of May compared with the start of January 2020. In January 2021, the number of commercial flights remained almost 40 per cent lower than in January 2020. But May 2021, is clearly, by 140 per cent, above the low levels of May 2020 (see Tourism section of Developing economies in international trade). While air transportation generates about 2 per cent of global emissions, road transportation contributes almost 12 per cent -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- expects road transport activity to recover to pre-COVID-19 levels only in the last months of 2021, while air transport demand would remain markedly below 2019 levels for all of 2021. They expect a partial recovery: CO2 emissions from international aviation would remain one third below pre-pandemic levels in 2021, while emissions from road transport and domestic aviation would remain 5 per cent below 2019 levels. The impact of COVID-19 brought large changes to the global energy mix in spring 2020. While the share of coal declined to below 23 per cent, renewables jumped to almost 13 per cent. Regional differences were large with major geographic variations -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . IEA notes that, in 2021, coal demand has rebounded strongly, reversing all the declines in 2020. These developments led to notable short-term improvements in air quality, with NO2 levels, a gas emitted from burning fossil fuels for transportation and electricity generation, dropping recently. First, in some areas of China, NO2 concentrations dropped by 40 per cent from 2019 levels in January-February 2020. In March 2020, a 30 per cent drop was recorded in the North Eastern part of the United States of America, and the NO2 levels halved in Europe by April 2020 -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . After a decline in the COVID-19 measures, air pollution levels are bouncing back to their pre-pandemic levels, according to satellite imagery -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . ## A mixture of positive and negative trends – what will prevail? Climate change continues to be a development issue, demonstrated particularly by the trends in Asia, where CO2 emissions have dramatically increased in tandem with the rapid growth of GDP per capita over the last decades. Only decreasing energy intensity has limited the growth of CO2 emissions in that region. This is a sobering message, considering the urgent need to limit the concentration of greenhouse gases in the atmosphere. At the same time, some statistics give hope: in most developed regions, CO2 emissions have been diminishing for more than ten years, despite continuous GDP growth. This provides signs that a decoupling of emissions from the economic development is feasible. The prolonged outbreak of COVID-19 has brought about an unexpected deviation from many long-term trends, leading to an unprecedented fall of greenhouse gas emissions in early 2020 and a faster shift to renewable energy sources. However, in light of latest data these changes seem temporary. Even if the pandemic has induced historical reductions of CO2 emissions in 2020, it will not be enough in the fight against climate change, and a partial bounce back is expected in 2021. More effective and lasting efforts are needed to reduce CO2 emissions and other greenhouse gases to limit global warming to below 2°C or especially below the 1.5°C target by 2100. As populations and GDP per capita continue to grow, a drastic reduction in carbon intensity will be required. Rising energy efficiency serves as an important step in that direction, as well as renewable and cleaner energy. ## Involving the private sector in the sustainable development agenda Recent global trends, not the least of which is the COVID-19 pandemic, emphasize the role of sustainability reporting in transitioning to a more sustainable economy. The business sector is identified in the Addis Ababa Action Agenda as a significant player in the financing of sustainable development -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . Their actions contribute directly or indirectly to the attainment of all SDGs, including the state of the environment and greenhouse gas emissions. Nonetheless, the business sector is mostly absent from the SDG targets and is explicitly mentioned in only one of them: target 12.6, which calls for a greater integration of sustainability information in the regular reporting cycle of firms. This target and the related reporting are important for making companies’ contribution to the 2030 Agenda visible and for encouraging them to review how their operations affect their stakeholders and the environment. Sustainability reporting promotes transparency in the business sector and increases business accountability to society. SDG indicator 12.6.1 aims to measure the number of companies that publish sustainability reports. Developing consistent reporting on the indicator requires aligning multiple reporting frameworks, including the International Integrated Reporting Council -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- framework, the Global Reporting Initiative -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- standards, the standards proposed by the Sustainability Accounting Standards Board -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- , the Climate-related financial disclosure recommendations -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- , the EU non-financial reporting directive -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- , the Framework on environmental, social and governance factors -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- and the UNCTAD (2018) Guidance on Core Indicators.2 To this end, UNCTAD and UNEP, as joint custodians of SDG indicator 12.6.1, identified four dimensions for sustainability reporting: economic, environmental, social and institutional. As a “minimum reporting requirement”, only reports that cover certain elements in a meaningful way will be counted as sustainability reports for the SDG indicator. To further strengthen sustainable practices and accountability, the agencies also identified an “advanced reporting requirement” with more comprehensive reporting rules. In September 2019, the IAEG-SDGs approved the concepts and methods developed by UNCTAD and UNEP, and data collection for the indicator began. The framework does not add new reporting requirements, instead it suggests a way to reconcile the existing frameworks. ## Businesses striving to close large gaps in sustainability reporting UNCTAD regularly convenes a Group of Experts on ISAR to discuss international accounting and reporting standards in order to improve the availability, reliability and comparability of financial and non-financial enterprise reporting, and especially to integrate sustainability information into business reporting. Official statistics for SDG 12.6.1 are not yet available as companies are setting up the new sustainability reporting. However, an initial review is possible by looking at an unrepresentative sample of company sustainability reports as published by the United Nations Global Compact and GRI Sustainability Disclosure Database. In these samples, in 2020, 85 per cent of companies were reporting on the minimum requirements for SDG indicator 12.6.1 and 40 per cent on the advanced requirements the related UNCTAD Guidance -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . In March 2021, the preliminary review was based on a sample of almost 4 000 company reports in the two databases. Although this is a collection of voluntary reports and not representative of the world population of firms, the exercise still provides a first glimpse of current sustainability reporting practices and reveals some regional patterns. Studying every single report would be time consuming. Instead, machine learning and natural language processing techniques have been used to analyse text syntax structures in the CoPs and identify keywords based on the 33 core elements listed in the UNCTAD Guidance, organised according to the four themes.34 Figure 5 shows that most companies reporting in line with the minimum requirements cover three out of the four reporting dimensions, i.e., economic, environmental, institutional and social dimensions, with the institutional dimension as the least covered dimension. Among companies following the advanced reporting requirements, the environmental dimension has been the most under-reported area. The largest gaps in minimum reporting relate to indicators such as employee by contract type and gender; stakeholder engagement surrounding sustainability performance; materiality assessment, sustainability strategy and or principles related to sustainability; and employee training. While the largest gaps in advanced reporting include greenhouse gas emissions and waste intensity; material consumption, sourcing of materials and reclaimed or recycled materials used; biodiversity impacts; supplier and consumer engagement on sustainability issues; other local community impacts; supplier social assessment; details of remuneration; and supplier environmental assessment. Figure 5. Sustainability reporting by dimension (Number of reporting companies) Source: Global AI Corporation with data from United Nations Global Compact (2020) and UNCTAD (2018). Note: These are preliminary results from a non-representative sample. As much as the data on number of company reports reflect compliance with the minimum and advanced requirements, they also reflect current data gaps, Figure 6 reflects the availability of sustainability reports by reporting requirements and region. The figure also reflects the large data gaps in some regions. Still, these data can be taken as an indication of the regional differences in voluntary reporting. It appears that in certain regions, such as, the Americas, Asia and Europe, firms demonstrate a higher compliance with the UNCTAD Guidance than in others. Larger gaps in reporting of some regions are evident, especially in Africa, Central, Western and Southern Asia and the Oceania. Figure 6. Sustainability reporting, by region (Number of reporting companies) Source: Global AI Corporation with data from United Nations Global Compact (2020) and UNCTAD (2018). Note: These are preliminary results from a non-representative sample. As much as the data on number of company reports reflect compliance with the minimum and advanced requirements, they also reflect current data gaps, The overall quality of sustainability reports has improved across the world since the 2017 round of reports, especially in the environmental, social as well institutional and governance dimensions, where the ratio of reporting aligned with the minimum requirements almost doubled in these regions. All in all, the 2030 Agenda has increased sustainability reporting among businesses and led to closer engagement of international organizations and businesses to develop a commonly agreed upon and harmonized set of indicators. The coming years will show if sustainability reporting will be used by an increasing number of firms to demonstrate commitment to sustainable development. ## Notes 1. In constant 2017 prices adjusted to purchasing power parity based to the United States of America. 2. The Guidance on Core Indicators, developed by UNCTAD upon request by the 34th session of the Intergovernmental Working Group of Experts on ISAR, lists the main elements for entity reporting to monitor company-level contributions towards SDGs -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- -— – ‒ - – —- . 3. Additional complexity is caused by the fact that the CoPs are reported in over 20 different languages and in different formats. Therefore, the algorithms use multiple data cleaning, noise reduction and filtering methods to better identify relevant content for each indicator. 4. The calculations were performed by Global AI Corporation. ## References Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec tincidunt vel mauris a dignissim. Curabitur sodales nunc id vestibulum tempor. Nunc tortor orci, sodales nec eros eget. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec tincidunt vel mauris a dignissim. Curabitur sodales nunc id vestibulum tempor. Nunc tortor orci, sodales nec eros eget. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec tincidunt vel mauris a dignissim. Curabitur sodales nunc id vestibulum tempor. Nunc tortor orci, sodales nec eros eget. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec tincidunt vel mauris a dignissim. Curabitur sodales nunc id vestibulum tempor. Nunc tortor orci, sodales nec eros eget. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec tincidunt vel mauris a dignissim. Curabitur sodales nunc id vestibulum tempor. Nunc tortor orci, sodales nec eros eget.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2876758575439453, "perplexity": 2443.344794348606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153980.55/warc/CC-MAIN-20210730185206-20210730215206-00700.warc.gz"}
http://www.eskesthai.com/search/label/Octave
Showing posts with label Octave. Show all posts Showing posts with label Octave. Show all posts Wednesday, December 19, 2012 Binaural beats by Wiki Binaural beats or binaural tones are auditory processing artifacts, or apparent sounds, the perception of which arises in the brain for specific physical stimuli. This effect was discovered in 1839 by Heinrich Wilhelm Dove, and earned greater public awareness in the late 20th century based on claims that binaural beats could help induce relaxation, meditation, creativity and other desirable mental states. The effect on the brainwaves depends on the difference in frequencies of each tone: for example, if 300 Hz was played in one ear and 310 in the other, then the binaural beat would have a frequency of 10 Hz.[1][2] The brain produces a phenomenon resulting in low-frequency pulsations in the amplitude and sound localization of a perceived sound when two tones at slightly different frequencies are presented separately, one to each of a subject's ears, using stereo headphones. A beating tone will be perceived, as if the two tones mixed naturally, out of the brain. The frequencies of the tones must be below 1,000 hertz for the beating to be noticeable.[3] The difference between the two frequencies must be small (less than or equal to 30 Hz) for the effect to occur; otherwise, the two tones will be heard separately and no beat will be perceived. Binaural beats are of interest to neurophysiologists investigating the sense of hearing.[4][5][6][7] Binaural beats reportedly influence the brain in more subtle ways through the entrainment of brainwaves[3][8][9] and have been claimed to reduce anxiety[10] and to provide other health benefits such as control over pain.[11] Acoustical background Interaural time differences (ITD) of binaural beats For sound localization the human auditory system analyses interaural time differences between both ears inside small frequency ranges, called critical bands. For frequencies below 1000 to 1500 Hz interaural time differences are evaluated from interaural phase differences between both ear signals.[12] The perceived sound is also evaluated from the analysis of both ear signals. If different pure tones (sinusoidal signals with different frequencies) are presented to each ear, there will be time dependent phase and time differences between both ears (see figure). The perceived sound depends on the frequency difference between both ear signals: • If the frequency difference between the ear signals is lower than some hertz, the auditory system can follow the changes in the interaural time differences. As a result an auditory event is perceived, which is moving through the head. The perceived direction corresponds to the instantaneous interaural time difference. • For slightly bigger frequency differences between the ear signals (more than 10 Hz) the auditory system can no longer follow the changes in the interaural parameters. A diffuse auditory event appears. The sound corresponds to an overlay of both ear signals, which means amplitude and loudness are changing rapidly (see figure in the chapter above). • For frequency differences between the ear signals of above 30 Hz the cocktail party effect begins to work, and the auditory system is able to analyze the presented ear signals in terms of two different sound sources at two different locations, and two distinct signals are perceived. Binaural beats can also be experienced without headphones, they appear when playing two different pure tones through loudspeakers. The sound perceived is quite similar: with auditory events which move through the room, at low frequency differences, and diffuse sound at slightly bigger frequency differences. At bigger frequency differences apparent localized sound sources appear.[13] However, it is more effective to use headphones than loudspeakers. History Heinrich Wilhelm Dove discovered binaural beats in 1839. While research about them continued after that, the subject remained something of a scientific curiosity until 134 years later, with the publishing of Gerald Oster's article "Auditory Beats in the Brain" (Scientific American, 1973). Oster's article identified and assembled the scattered islands of relevant research since Dove, offering fresh insight (and new laboratory findings) to research on binaural beats. In particular,Oster saw binaural beats as a powerful tool for cognitive and neurological research, addressing questions such as how animals locate sounds in their three-dimensional environment, and also the remarkable ability of animals to pick out and focus on specific sounds in a sea of noise (which is known as the "cocktail party effect"). Oster also considered binaural beats to be a potentially useful medical diagnostic tool, not merely for finding and assessing auditory impairments, but also for more general neurological conditions. (Binaural beats involve different neurological pathways than ordinary auditory processing.) For example, Oster found that a number of his subjects that could not perceive binaural beats, suffered from Parkinson's disease. In one particular case, Oster was able to follow the subject through a week-long treatment of Parkinson's disease; at the outset the patient could not perceive binaural beats; but by the end of the week of treatment, the patient was able to hear them. In corroborating an earlier study, Oster also reported gender differences in the perception of beats. Specifically, women seemed to experience two separate peaks in their ability to perceive binaural beats—peaks possibly correlating with specific points in the menstrual cycle, onset of menstruation and during the luteal phase. This data led Oster to wonder if binaural beats could be used as a tool for measuring relative levels of estrogen.[3] The effects of binaural beats on consciousness were first examined by physicist Thomas Warren Campbell and electrical engineer Dennis Mennerich, who under the direction of Robert Monroe sought to reproduce a subjective impression of 4 Hz oscillation that they associated with out-of-body experience.[14] On the strength of their findings, Monroe created the binaural-beat technology self-development industry by forming The Monroe Institute, now a charitable binaural research and education organization. Unverified claims There have been a number of claims regarding binaural beats, among them that they may simulate the effect of recreational drugs, help people memorize and learn, stop smoking, help dieting, tackle erectile dysfunction and improve athletic performance. Scientific research into binaural beats is very limited. No conclusive studies have been released to support the wilder claims listed above. However, one uncontrolled pilot study[15] of 8 individuals indicates that binaural beats may have a relaxing effect. In absence of positive evidence for a specific effect, however, claimed effects may be attributed to the power of suggestion (the placebo effect). In a blind study (8 participants) of binaural beats' effects on meditation, 7 Hz frequencies were found to enhance meditative focus while 15 Hz frequencies harmed it.[16] Physiology The sensation of binaural beats is believed to originate in the superior olivary nucleus, a part of the brain stem. They appear to be related to the brain's ability to locate the sources of sounds in three dimensions and to track moving sounds, which also involves inferior colliculus (IC) neurons.[17] Regarding entrainment, the study of rhythmicity provides insights into the understanding of temporal information processing in the human brain. Auditory rhythms rapidly entrain motor responses into stable steady synchronization states below and above conscious perception thresholds. Activated regions include primary sensorimotor and cingulate areas, bilateral opercular premotor areas, bilateral SII, ventral prefrontal cortex, and, subcortically, anterior insula, putamen, and thalamus. Within the cerebellum, vermal regions and anterior hemispheres ipsilateral to the movement became significantly activated. Tracking temporal modulations additionally activated predominantly right prefrontal, anterior cingulate, and intraparietal regions as well as posterior cerebellar hemispheres.[18] A study of aphasic subjects who had a severe stroke versus normal subjects showed that the aphasic subject could not hear the binaural beats whereas the normal subjects could.[19] Hypothetical effects on brain function Overview Binaural beats may influence functions of the brain in ways besides those related to hearing. This phenomenon is called frequency following response. The concept is that if one receives a stimulus with a frequency in the range of brain waves, the predominant brain wave frequency is said to be likely to move towards the frequency of the stimulus (a process called entrainment).[20] In addition, binaural beats have been credibly documented to relate to both spatial perception & stereo auditory recognition, and, according to the frequency following response, activation of various sites in the brain.[21][22][23][24][25] The stimulus does not have to be aural; it can also be visual[26] or a combination of aural and visual[27] (one such example would be Dreamachine). Perceived human hearing is limited to the range of frequencies from 20 Hz to 20,000 Hz, but the frequencies of human brain waves are below about 40 Hz. To account for this lack of perception, binaural beat frequencies are used. Beat frequencies of 40 Hz have been produced in the brain with binaural sound and measured experimentally.[28] When the perceived beat frequency corresponds to the delta, theta, alpha, beta, or gamma range of brainwave frequencies, the brainwaves entrain to or move towards the beat frequency.[29] For example, if a 315 Hz sine wave is played into the right ear and a 325 Hz one into the left ear, the brain is entrained towards the beat frequency 10 Hz, in the alpha range. Since alpha range is associated with relaxation, this has a relaxing effect or if in the beta range, more alertness. An experiment with binaural sound stimulation using beat frequencies in the Beta range on some participants and Delta/Theta range in other participants, found better vigilance performance and mood in those on the awake alert state of Beta range stimulation.[30][31] Binaural beat stimulation has been used fairly extensively to induce a variety of states of consciousness, and there has been some work done in regards to the effects of these stimuli on relaxation, focus, attention, and states of consciousness.[8] Studies have shown that with repeated training to distinguish close frequency sounds that a plastic reorganization of the brain occurs for the trained frequencies[32] and is capable of asymmetric hemispheric balancing.[33] Brain waves Frequency range Name Usually associated with: > 40 Hz Gamma waves Higher mental activity, including perception, problem solving, fear, and consciousness 13–39 Hz Beta waves Active, busy or anxious thinking and active concentration, arousal, cognition, and or paranoia 7–13 Hz Alpha waves Relaxation (while awake), pre-sleep and pre-wake drowsiness, REM sleep, Dreams 8–12 Hz Mu waves Sensorimotor rhythm Mu_rhythm, Sensorimotor_rhythm 4–7 Hz Theta waves deep meditation/relaxation, NREM sleep < 4 Hz Delta waves Deep dreamless sleep, loss of body awareness (The precise boundaries between ranges vary among definitions, and there is no universally accepted standard.) The dominant frequency determines your current state. For example, if in someone's brain alpha waves are dominating, they are in the alpha state (this happens when one is relaxed but awake). However, other frequencies will also be present, albeit with smaller amplitudes. The brain entraining is more effective if the entraining frequency is close to the user's starting dominant frequency. Therefore, it is suggested to start with a frequency near to one's current dominant frequency (likely to be about 20 Hz or less for a waking person), and then slowly decreasing/increasing it towards the desired frequency. Some people find pure sine waves unpleasant, so a pink noise or another background (e.g. natural sounds such as river noises) can also be mixed with them. In addition to that, as long as the beat is audible, increasing the volume should not necessarily improve the effectiveness, therefore using a low volume is usually suggested. One theory is to reduce the volume so low that the beating should not even be clearly audible, but this does not seem to be the case (see the next paragraph). Other uses In addition to lowering the brain frequency to relax the listener, there are other controversial, alleged uses for binaural beats. For example, that by using specific frequencies an individual can stimulate certain glands to produce desired hormones. Beta-endorphin has been modulated in studies using alpha-theta brain wave training,[34] and dopamine with binaural beats.[1] Among other alleged uses, there are reducing learning time and sleeping needs (theta waves are thought to improve learning, since children, who have stronger theta waves, and remain in this state for a longer period of time than adults, usually learn faster than adults;[citation needed] and some people find that half an hour in the theta state can reduce sleeping needs up to four hours;[citation needed] similar to another method of achieving a theta state, e.g. meditation;[citation needed]) some use them for lucid dreaming and even for attempting out-of-body experiences, astral projection, telepathy and psychokinesis. However, the role of alpha-wave activity in lucid dreaming is subject to ongoing research).[35][36][37] Alpha-theta brainwave training has also been used successfully for the treatment of addictions.[34][38][39] It has been used for the recovery of repressed memories, but as with other techniques this can lead to false memories.[40] An uncontrolled pilot study of Delta binaural beat technology over 60 days has shown positive effect on self-reported psychologic measures, especially anxiety. There was significant decrease in trait anxiety, an increase in quality of life, and a decrease in insulin-like growth factor-1 and dopamine[1] and has been successfully shown to decrease mild anxiety.[41] A randomised, controlled study concluded that binaural beat audio could lessen hospital acute pre-operative anxiety.[42] Another claimed effect for sound induced brain synchronization is enhanced learning ability. It was proposed in the 1970s that induced alpha brain waves enabled students to assimilate more information with greater long term retention.[43] In more recent times has come more understanding of the role of theta brain waves in behavioural learning.[44] The presence of theta patterns in the brain has been associated with increased receptivity for learning and decreased filtering by the left hemisphere.[43][45][46] Based on the association between theta activity (4–7 Hz) and working memory performance, biofeedback training suggests that normal healthy individuals can learn to increase a specific component of their EEG activity, and that such enhanced activity may facilitate a working memory task and to a lesser extent focused attention.[47] A small media controversy was spawned in 2010 by an Oklahoma Bureau of Narcotics official comparing binaural beats to illegal narcotics, and warning that interest in websites offering binaural beats could lead to drug use.[48] References 1. ^ a b c Wahbeh H, Calabrese C, Zwickey H (2007). "Binaural beat technology in humans: a pilot study to assess psychologic and physiologic effects". Journal of alternative and complementary medicine 13 (1): 25–32. doi:10.1089/acm.2006.6196. PMID 17309374. 2. ^ Wahbeh H, Calabrese C, Zwickey H, Zajdel J (2007). "Binaural Beat Technology in Humans: A Pilot Study to Assess Neuropsychologic, Physiologic, And Electroencephalographic Effects". Journal of alternative and complementary medicine 13 (2): 199–206. doi:10.1089/acm.2006.6201. PMID 17388762. 3. ^ a b c Oster G (1973). "Auditory beats in the brain". Sci. Am. 229 (4): 94–102. doi:10.1038/scientificamerican1073-94. PMID 4727697. 4. ^ Fitzpatrick D, et al (2009). "Processing Temporal Modulations in Binaural and Monaural Auditory Stimuli by Neurons in the Inferior Colliculus and Auditory Cortex". JARO 10 (4): 579–593. doi:10.1007/s10162-009-0177-8. PMID 19506952. 5. ^ Gu X, Wright BA, Green DM (1995). "Failure to hear binaural beats below threshold". The Journal of the Acoustical Society of America 97 (1): 701–703. doi:10.1121/1.412294. PMID 7860843. 6. ^ Zeng F-G, et al (2005). "Perceptual Consequences of Disrupted Auditory Nerve Activity". Journal of Neurophysiology 93 (6): 3050–3063. doi:10.1152/jn.00985.2004. PMID 15615831. 7. ^ Jan Schnupp, Israel Nelken and Andrew King (2011). Auditory Neuroscience. MIT Press. ISBN 0-262-11318-X. 8. ^ a b Hutchison, Michael M. (1986). Megabrain: new tools and techniques for brain growth and mind expansion. New York: W. Morrow. ISBN 0-688-04880-3. 9. ^ Turmel, Ron. "Resonant Frequencies and the Human Brain". The Resonance Project. Retrieved 10 June 2011. 10. ^ 11. ^ Hemispheric-synchronisation during anaesthesia: a double-blind randomised trial using audiotapes for intra-operative nociception control, Jan 2000, Kliempt, Ruta, Ogston, Landeck & Martay 12. ^ Blauert, J.: Spatial hearing - the psychophysics of human sound localization; MIT Press; Cambridge, Massachusetts (1983), ch. 2.4 13. ^ Slatky, Harald (1992): Algorithms for direction specific Processing of Sound Signals - the Realization of a binaural Cocktail-Party-Processor-System, Dissertation, Ruhr-University Bochum, ch. 3 14. ^ "My Big TOE" book 1, Thomas Campbell, p79 ISBN 978-0-9725094-0-4 15. ^ Wahbeh H, Calabrese C, Zwickey H (2007). "Binaural beat technology in humans: a pilot study to assess psychologic and physiologic effects". J Altern Complement Med 13 (1): 25–32. doi:10.1089/acm.2006.6196. PMID 17309374. 16. ^ Lavallee, Christina F.; Koren, Persinger (7). "A Quantitative Electroencephalographic Study of Meditation and Binaural Beat Entrainment". Journal of Alternative and Complementary Medicine 17 (4): 351–355. doi:10.1089/acm.2009.0691. PMID 21480784. Retrieved 10 March 2012. 17. ^ Spitzer MW, Semple MN (1998). "Transformation of binaural response properties in the ascending auditory pathway: influence of time-varying interaural phase disparity". J. Neurophysiol. 80 (6): 3062–76. PMID 9862906. 18. ^ Thaut MH (2003). "Neural basis of rhythmic timing networks in the human brain". Ann. N. Y. Acad. Sci. 999 (1): 364–73. doi:10.1196/annals.1284.044. PMID 14681157. 19. ^ Barr DF, Mullin TA, Herbert PS. (1977). "Application of binaural beat phenomenon with aphasic patients". Arch Otolaryngol. 103 (4): 192–194. PMID 849195. 20. ^ Gerken GM, Moushegian G, Stillman RD, Rupert AL (1975). "Human frequency-following responses to monaural and binaural stimuli". Electroencephalography and clinical neurophysiology 38 (4): 379–86. doi:10.1016/0013-4694(75)90262-X. PMID 46818. 21. ^ Dobie RA, Norton SJ (1980). "Binaural interaction in human auditory evoked potentials". Electroencephalography and clinical neurophysiology 49 (3-4): 303–13. doi:10.1016/0013-4694(80)90224-2. PMID 6158406. 22. ^ Moushegian G, Rupert AL, Stillman RD (1978). "Evaluation of frequency-following potentials in man: masking and clinical studies". Electroencephalography and clinical neurophysiology 45 (6): 711–18. doi:10.1016/0013-4694(78)90139-6. PMID 84739. 23. ^ Smith JC, Marsh JT, Greenberg S, Brown WS (1978). "Human auditory frequency-following responses to a missing fundamental". Science 201 (4356): 639–41. doi:10.1126/science.675250. PMID 675250. 24. ^ Smith JC, Marsh JT, Brown WS (1975). "Far-field recorded frequency-following responses: evidence for the locus of brainstem sources". Electroencephalography and clinical neurophysiology 39 (5): 465–72. doi:10.1016/0013-4694(75)90047-4. PMID 52439. 25. ^ Yamada O, Yamane H, Kodera K (1977). "Simultaneous recordings of the brain stem response and the frequency-following response to low-frequency tone". Electroencephalography and clinical neurophysiology 43 (3): 362–70. doi:10.1016/0013-4694(77)90259-0. PMID 70337. 26. ^ Cvetkovic D, Simpson D, Cosic I (2006). "Influence of sinusoidally modulated visual stimuli at extremely low frequency range on the human EEG activity". Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Conference 1: 1311–4. doi:10.1109/IEMBS.2006.259565. PMID 17945633. 27. ^ "[Abstract The Induced Rhythmic Oscillations of Neural Activity in the Human Brain"]. Retrieved 2007-11-14. 28. ^ Schwarz DW, Taylor P (2005). "Human auditory steady state responses to binaural and monaural beats". Clinical Neurophysiology 116 (3): 658–68. doi:10.1016/j.clinph.2004.09.014. PMID 15721080. 29. ^ Rogers LJ, Walter DO (1981). "Methods for finding single generators, with application to auditory driving of the human EEG by complex stimuli". J. Neurosci. Methods 4 (3): 257–65. doi:10.1016/0165-0270(81)90037-6. PMID 7300432. 30. ^ Lane JD, Kasian SJ, Owens JE, Marsh GR (1998). "Binaural auditory beats affect vigilance performance and mood". Physiol. Behav. 63 (2): 249–52. doi:10.1016/S0031-9384(97)00436-8. PMID 9423966. 31. ^ Beatty J, Greenberg A, Deibler WP, O'Hanlon JF (1974). "Operant control of occipital theta rhythm affects performance in a radar monitoring task". Science 183 (4127): 871–3. doi:10.1126/science.183.4127.871. PMID 4810845. 32. ^ Menning H, Roberts LE, Pantev C (2000). "Plastic changes in the auditory cortex induced by intensive frequency discrimination training". Neuroreport 11 (4): 817–22. doi:10.1097/00001756-200003200-00032. PMID 10757526. 33. ^ Gottselig JM, Brandeis D, Hofer-Tinguely G, Borbély AA, Achermann P (2004). "Human central auditory plasticity associated with tone sequence learning". Learn. Mem. 11 (2): 162–71. doi:10.1101/lm.63304. PMC 379686. PMID 15054131. 34. ^ a b Peniston EG, Kulkosky PJ (1989). "Alpha-theta brainwave training and beta-endorphin levels in alcoholics". Alcohol. Clin. Exp. Res. 13 (2): 271–9. doi:10.1111/j.1530-0277.1989.tb00325.x. PMID 2524976. 35. ^ Ogilvie RD, Hunt HT, Tyson PD, Lucescu ML, Jeakins DB (1982). "Lucid dreaming and alpha activity: a preliminary report". Perceptual and motor skills 55 (3 Pt 1): 795–808. PMID 7162915. 36. ^ Korabel'nikova EA, Golubev VL (2001). "[Dreams and interhemispheric asymmetry]" (in Russian). Zhurnal nevrologii i psikhiatrii imeni S.S. Korsakova / Ministerstvo zdravookhraneniia i meditsinskoĭ promyshlennosti Rossiĭskoĭ Federatsii, Vserossiĭskoe obshchestvo nevrologov Vserossiĭskoe obshchestvo psikhiatrov 101 (12): 51–4. PMID 11811128. 37. ^ Spoormaker VI, van den Bout J (2006). "Lucid dreaming treatment for nightmares: a pilot study". Psychotherapy and psychosomatics 75 (6): 389–94. doi:10.1159/000095446. PMID 17053341. 38. ^ Saxby E, Peniston EG (1995). "Alpha-theta brainwave neurofeedback training: an effective treatment for male and female alcoholics with depressive symptoms". Journal of clinical psychology 51 (5): 685–93. doi:10.1002/1097-4679(199509)51:5<685::aid-jclp2270510514>3.0.CO;2-K. PMID 8801245. 39. ^ Watson CG, Herder J, Passini FT (1978). "Alpha biofeedback therapy in alcoholics: an 18-month follow-up". Journal of clinical psychology 34 (3): 765–9. doi:10.1002/1097-4679(197807)34:3<765::aid-jclp2270340339>3.0.CO;2-5. PMID 690224. 40. ^ Loftus EF, Davis D (2006). "Recovered memories". Annual review of clinical psychology 2 (1): 469–98. doi:10.1146/annurev.clinpsy.2.022305.095315. PMID 17716079. 41. ^ Le Scouarnec RP, Poirier RM, Owens JE, Gauthier J, Taylor AG, Foresman PA. (2001). "Use of binaural beat tapes for treatment of anxiety: a pilot study of tape preference and outcomes". Altern Ther Health Med. (Clinique Psych in Montreal, Quebec.) 7 (1): 58–63. PMID 11191043. 42. ^ Padmanabhan R, Hildreth AJ, Laws D (2005). "A prospective, randomised, controlled study examining binaural beat audio and pre-operative anxiety in patients undergoing general anaesthesia for day case surgery". Anaesthesia 60 (9): 874–7. doi:10.1111/j.1365-2044.2005.04287.x. PMID 16115248. 43. ^ a b Harris, Bill (2002). Thresholds of the Mind. Centerpointe Press. Appendix 1, pp151–178. ISBN 0-9721780-0-7. 44. ^ Berry SD, Seager MA (2001). "Hippocampal theta oscillations and classical conditioning". Neurobiol Learn Mem 76 (3): 298–313. doi:10.1006/nlme.2001.4025. PMID 11726239. 45. ^ Seager MA, Johnson LD, Chabot ES, Asaka Y, Berry SD (2002). "Oscillatory brain states and learning: Impact of hippocampal theta-contingent training". Proc. Natl. Acad. Sci. U.S.A. 99 (3): 1616–20. doi:10.1073/pnas.032662099. PMC 122239. PMID 11818559. 46. ^ Griffin AL, Asaka Y, Darling RD, Berry SD (2004). "Theta-contingent trial presentation accelerates learning rate and enhances hippocampal plasticity during trace eyeblink conditioning". Behav. Neurosci. 118 (2): 403–11. doi:10.1037/0735-7044.118.2.403. PMID 15113267. 47. ^ Vernon D, Egner T, Cooper N, et al. (2003). "The effect of training distinct neurofeedback protocols on aspects of cognitive performance". International journal of psychophysiology : official journal of the International Organization of Psychophysiology 47 (1): 75–85. doi:10.1016/S0167-8760(02)00091-0. PMID 12543448. 48. ^ "Report: Teens Using Digital Drugs to Get High". Wired. 14 July 2010. Retrieved 22 November 2012. Tuesday, February 23, 2010 Calorimetric Equivalence Principle Test With Stefan shutting down the blog temporary I thought to gather my thoughts here. Gravitomagnetism This approximate reformulation of gravitation as described by general relativity makes a "fictitious force" appear in a frame of reference different from a moving, gravitating body. By analogy with electromagnetism, this fictitious force is called the gravitomagnetic force, since it arises in the same way that a moving electric charge creates a magnetic field, the analogous "fictitious force" in special relativity. The main consequence of the gravitomagnetic force, or acceleration, is that a free-falling object near a massive rotating object will itself rotate. This prediction, often loosely referred to as a gravitomagnetic effect, is among the last basic predictions of general relativity yet to be directly tested. Indirect validations of gravitomagnetic effects have been derived from analyses of relativistic jets. Roger Penrose had proposed a frame dragging mechanism for extracting energy and momentum from rotating black holes.[2] Reva Kay Williams, University of Florida, developed a rigorous proof that validated Penrose's mechanism.[3] Her model showed how the Lense-Thirring effect could account for the observed high energies and luminosities of quasars and active galactic nuclei; the collimated jets about their polar axis; and the asymmetrical jets (relative to the orbital plane).[4] All of those observed properties could be explained in terms of gravitomagnetic effects.[5] Williams’ application of Penrose's mechanism can be applied to black holes of any size.[6] Relativistic jets can serve as the largest and brightest form of validations for gravitomagnetism. A group at Stanford University is currently analyzing data from the first direct test of GEM, the Gravity Probe B satellite experiment, to see if they are consistent with gravitomagnetism. A group at Stanford University is currently analyzing data from the first direct test of GEM, the Gravity Probe B satellite experiment, to see if they are consistent with gravitomagnetism. While I am not as progressed in terms of the organization of your thought process(inexperience in terms of the education) I am holding the ideas of Mendeleev in mind as I look at this topic you've gathered. And Newton as well, but not in the way one might have deferred to as the basis if gravity research. It is more on the idea of what we can create in reality given all the elements at our disposal. This is also the same idea in mathematics that all the information is there and only has t be discovered. Such a hierarchy in thinking is also the idea of geometrical presence stretched to higher dimensions, as one would point to mater assmptins as t a higher order preset in the development of the material of earth as to the planet. *** Uncle Al, Overview:A parity calorimetry test offers a 33,000-fold improvement in EP anomaly sensitivity in only two days of measurements. we are not so different....that this quest may not be apparent for many, yet it is a simple question about what is contracted to help understand "principals of formation" had been theoretically developed in terms of the genus figures(Stanley Mandelstam) that we understand that this progression mathematically has been slow. So we scientifically build this experimental progression. But indeed, it's a method in terms of moving from "the false vacuum to the true?" What is the momentum called toward materialization? Such an emergent feature while discussing some building block model gives some indication of a "higher order principal" that is not clearly understood, while from a condense matter theorist point of view, this is a emergent feature? Best, Bordeaux, France is 44.83 N http://www.mazepath.com/uncleal/lajos.htm#b7 *** According to general relativity, the gravitational field produced by a rotating object (or any rotating mass-energy) can, in a particular limiting case, be described by equations that have the same form as the magnetic field in classical electromagnetism. Starting from the basic equation of general relativity, the Einstein field equation, and assuming a weak gravitational field or reasonably flat spacetime, the gravitational analogs to Maxwell's equations for electromagnetism, called the "GEM equations", can be derived. GEM equations compared to Maxwell's equations in SI are:[7] [8][9][10] GEM equations Maxwell's equations $\nabla \cdot \mathbf{E}_\text{g} = -4 \pi G \rho \$ $\nabla \cdot \mathbf{E} = \frac{\rho_\text{em}}{\epsilon_0} \$ $\nabla \cdot \mathbf{B}_\text{g} = 0 \$ $\nabla \cdot \mathbf{B} = 0 \$ $\nabla \times \mathbf{E}_\text{g} = -\frac{\partial \mathbf{B}_\text{g} } {\partial t} \$ $\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B} } {\partial t} \$ $\nabla \times \mathbf{B}_\text{g} = -\frac{4 \pi G}{c^2} \mathbf{J} + \frac{1}{c^2} \frac{\partial \mathbf{E}_\text{g}} {\partial t}$ $\nabla \times \mathbf{B} = \frac{1}{\epsilon_0 c^2} \mathbf{J}_\text{em} + \frac{1}{c^2} \frac{\partial \mathbf{E}} {\partial t}$ where: Wednesday, April 11, 2007 Physical geodesy: A Condensative Result The cave was discovered in 2000The 120m-deep Cueva de las Espadas (Cave of Swords), discovered in 1912, is named for its metre-long shafts of gypsum (a calcium sulphate mineral that incorporates water molecules into its chemical formula). And although individually there are fewer crystals in the 290m-deep Cueva de los Cristales, its beams are considerably bigger. Professor Garcia-Ruiz and colleagues believe they can now show how these differences emerged. Clifford of Asymptotia wrote a post talking about Mexican Super lattices. Plato Apr 7th, 2007 at 7:30 am I tried to look for some comparative feature on a small scale that might be associated to the cave construction and immediately thought of the geoids and “the condition” that would have formed, while “the environment was trapped” in the earth, while cooling. Finding these kinds of stones and cutting them in half reveals some amazing crystalline structures. This used to be part of our “family outing” going through gravel pits looking for agates, and other stones. We would use the “sunlight for discovery” to capture them. Refractive indexes? I’ll have to show picture on my blog of the collection in the future, as well as other crystals that I had acquired. This does provide a further thoughts on Physical geodesy? Well I wanted to expand on this a bit here. Gems' color form from light - such as a ruby collects all the colors of the white light-(red, blue, green, etc) and reflects red back to the sun. Color is the most obvious and attractive feature of gemstones. The color of any material is due to the nature of light itself. Sunlight, often called white light, is actually a mixture of different colors of light. When light passes through a material, some of the light may be absorbed, while the rest passes through. The part that isn't absorbed reaches our eyes as white light minus the absorbed colors. A ruby appears red because it absorbs all the other colors of white light - blue, yellow, green, etc. - and reflects the red light to the viewer. A colorless stone absorbs none of the light, and so it allows the white light to emerge unchanged. A calcite crystal laid upon a paper with some letters showing birefringence If you wanted to know something about gems, when I mentioned "refractive index" is what was used in terms of how we would walk through the gravel pit at a time of day(preferably evening). This would allow the sun to shine through the agates and capture our attention, as they sat amongst all the other stones in the gravel pit. We would make a game of it, and who ever got three agates first would be a winner that day. Opticks is a book written by English physicist Isaac Newton that was released to the public in 1704. It is about optics and the refraction of light, and is considered one of the great works of science in history. Opticks was Newton's second major book on physical science. Even if he had not made his better-known discoveries concerning gravity and the invention of the calculus, Opticks would have given him the reputation as one of the greatest scientists of his time. This work represents a major contribution to science, different from—but in some ways rivaling—the Principia. The Opticks is largely a record of experiments and the deductions made from them, covering a wide range of topics in what was later to be known as physical optics. That is, this work is not a geometric discussion of catoptrics or dioptrics, the traditional subjects of reflection of light by mirrors of different shapes and the exploration of how light is "bent" as it passes from one medium, such as air, into another, such as water or glass. Rather, the Opticks is a study of the nature of light and colour and the various phenomena of diffraction, which Newton called the "inflexion" of light. In this book Newton sets forth in full his experiments, first reported in 1672, on dispersion, or the separation of light into a spectrum of its component colours. He shows how colours arise from selective absorption, reflection, or transmission of the various component parts of the incident light. His experiments on these subjects and on the problems of diffraction (which he never fully mastered) set the subject of optics on a new level. Sunday, February 11, 2007 Neutrino Mixing Explained in 60 seconds I added this post to demonstrate the connection to what is behind the investigation to "neutrino mixing" that needs further clarification. So I put this blog post together below. It "allows the sources" to consider the question of how we see the existing universe. How perspective has been focused toward the reductionist understanding while we ponder the very nature of the universe. For example, when neutrinos interact with matter they produce specific kinds of other particles. Catch the neutrino at one moment, and it will interact to produce an electron. A moment later, it might interact to produce a different particle. "Neutrino mixing" describes the original mixture of waves that produces this oscillation effect. By my very nature, I have adopted the views of the Pythagoreans in that, what I see of the universe has it's counter part as some feature within our determinations "as the background" to the "nature of all matter." It's effect, from understanding the very basis of "particle creation" has this factor to be included in our determinations of that particle in question. So, what views shall we assign to the Higg's Boson Field? The view of the cosmos at large? We needed to see that such events can and do happen within the universe. To see them at a level that had not been considered in terms of the microstate blackhole creation that is created from such particle collisions? One needed to identify where "these points" could exist not only in the collider, but in the cosmos at large. How else could you explain the division you have assigned the make up of the cosmos? Usually all physicists see are the remnants of a new particle decaying into other types of particles. From that, they infer the existence of the new species and can determine some of its characteristics. So we move from the limitations of the standard model? This is a fixture of what has been accomplished, yet, how could we see things as so different to include gravity as a feature and new force carrier? If we are to consider the energy of all these matters, then how else could you have included gravity? To slow them down, theorists proposed a mysterious, universe-filling, not-yet-seen "liquid" called the Higgs field. Also, physicists now understand that 96 percent of the universe is not made of matter as we know it, and thus it does not fit into the Standard Model. How to extend the Standard Model to account for these mysteries is an open question to be answered by current and future experiments. While it is some what mysterious, the applications as ancient as they may seem, they are not apart from our constitutions as we have applied our understanding of the universe it seems:)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5677452087402344, "perplexity": 5077.936984645678}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507442900.2/warc/CC-MAIN-20141017005722-00018-ip-10-16-133-185.ec2.internal.warc.gz"}
http://clay6.com/qa/50030/which-of-the-following-is-a-good-predictor-of-a-period-text-of-a-swinging-p
# Which of the following is a good predictor of a period $\text{t}$ of a swinging pendulum based on the factors such as Length (l), Angle ($\theta$), the mass (m) and acceleration due to gravity (g)? Given the four factors, we can assume the period of the pendulum to be $[t] = [kl^a m^b g^c \theta^d]$, where $a,b,c,d$ are unknown real numbers. However, $\theta$ is dimensionless, and $k$ is a constant that is assumed to be dimensionless as well, so we get: $T= L^a M^b (LT^{-1})^c$ Equating the powers of $M, L$ and $T$ on both sides of $[t] = T$, we get: $L: 0 = a+c, M:0=b, T:1=-2c \rightarrow b=0, c = \large\frac{-1}{2}$$, a = -c = \large\frac{1}{2}$ $\Rightarrow t = kl^{\large\frac{1}{2}} g^{\large\frac{-1}{2}} \theta^d$ (d is unresolved and can assume any value) $\Rightarrow$ The general result is: $t = f(\theta)l^{\large\frac{1}{2}} g^{\large\frac{-1}{2}}$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9867698550224304, "perplexity": 273.1104936782316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822747.22/warc/CC-MAIN-20171018051631-20171018071631-00691.warc.gz"}
https://profdoc.um.ac.ir/paper-abstract-1041235.html
Linear Algebra and its Applications, ( ISI ), Volume (447), No (4), Year (2014-4) , Pages (26-37) #### Title : ( Further refinements of the Heinz inequality ) Authors: Rupinderjit Kaur , Mohammad Sal Moslehian , Mandeep Singh , Cristian Conde , Access to full-text not allowed by authors Citation: BibTeX | EndNote #### Abstract The celebrated Heinz inequality asserts that $2|||A^{1/2}XB^{1/2}|||\leq |||A^{\nu}XB^{1-\nu}+A^{1-\nu}XB^{\nu}|||\leq |||AX+XB|||$ for $X \in \mathbb{B}(\mathscr{H})$, $A,B\in \+$, every unitarily invariant norm $|||\cdot|||$ and $\nu \in [0,1]$. In this paper, we present several improvement of the Heinz inequality by using the convexity of the function $F(\nu)=|||A^{\nu}XB^{1-\nu}+A^{1-\nu}XB^{\nu}|||$, some integration techniques and various refinements of the Hermite--Hadamard inequality. In the setting of matrices we prove that \begin{eqnarray*} &&\hspace{-0.5cm}\left|\left|\left|A^{\frac{\alpha+\beta}{2}}XB^{1-\frac{\alpha+\beta}{2}}+A^{1-\frac{\alpha+\beta}{2}}XB^{\frac{\alpha+\beta}{2}}\right|\right|\right|\leq\frac{1}{|\beta-\alpha|} \left|\left|\left|\int_{\alpha}^{\beta}\left(A^{\nu}XB^{1-\nu}+A^{1-\nu}XB^{\nu}\right)d\nu\right|\right|\right|\nonumber\\ &&\qquad\qquad\leq \frac{1}{2}\left|\left|\left|A^{\alpha}XB^{1-\alpha}+A^{1-\alpha}XB^{\alpha}+A^{\beta}XB^{1-\beta}+A^{1-\beta}XB^{\beta}\right|\right|\right|\,, \end{eqnarray*} for real numbers α, β. #### Keywords , Heinz inequality; convex function; Hermite, , Hadamard inequality; positive definite matrix; unitarily invariant norm برای دانلود از شناسه و رمز عبور پرتال پویا استفاده کنید. @article{paperid:1041235, author = {Rupinderjit Kaur and Sal Moslehian, Mohammad and Mandeep Singh and Cristian Conde}, title = {Further refinements of the Heinz inequality}, journal = {Linear Algebra and its Applications}, year = {2014}, volume = {447}, number = {4}, month = {April}, issn = {0024-3795}, pages = {26--37}, numpages = {11}, keywords = {Heinz inequality; convex function; Hermite--Hadamard inequality; positive definite matrix; unitarily invariant norm}, } [Download] %0 Journal Article %T Further refinements of the Heinz inequality %A Rupinderjit Kaur %A Sal Moslehian, Mohammad %A Mandeep Singh %A Cristian Conde %J Linear Algebra and its Applications %@ 0024-3795 %D 2014 [Download]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8735644817352295, "perplexity": 22813.88381816675}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027320734.85/warc/CC-MAIN-20190824105853-20190824131853-00090.warc.gz"}
https://brilliant.org/problems/4-equations-with-8-unknowns/
# 4 Equations With 8 Unknowns Logic Level 1 $\LARGE \begin{array} { l l l l l } \square & + & \square & = & \square \\ + & & \times & & \\ \square & \div & \square & = & \square \\ \hspace{1mm} {\small ||} & & \hspace{1mm} {\small ||} & & \\ \square & & \square \\ \end{array}$ Fill in the 8 boxes with distinct digits, such that all 4 equations are true. Clearly, we cannot use the digit 0. What other digit is not used? ×
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28494566679000854, "perplexity": 1003.132397181994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647146.41/warc/CC-MAIN-20180319194922-20180319214922-00754.warc.gz"}
http://null.zbr.pt/?p=2181
# Maigret in Montmartre This is the fourth episode, in case you consider this a series, or the third sequel (well, terminology), and the one I liked the least. I am not sure if because I knew the story (I’ve read the book) or if my predisposition was just different this time. The cast is great, the acting good, the atmosphere authentic. But I do not think that the story was well told. Although I’ve read the book, I did not recall completely the history, and I confess that, at the end, I did not really understand the history. I had to think about it, trying to remember the book, to be able to glue parts. This is the first time I needed such a thing (and in fact, the first time I could do it). Again, reasons can vary… In any case, if this year we have any other episode, I would watch it happily.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8549205660820007, "perplexity": 600.1798595351299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526401.41/warc/CC-MAIN-20190720004131-20190720030131-00435.warc.gz"}
https://www.homebuiltairplanes.com/forums/threads/when-to-use-a-castle-nut-and-cotter-pin.31228/
# When to use a Castle nut and cotter pin ? Discussion in 'Workshop Tips and Secrets / Tools' started by Pops, Feb 26, 2019. ### Help Support HomeBuiltAirplanes Forum by donating: 1. Feb 26, 2019 ### Pops #### Well-Known Member Joined: Jan 1, 2013 Messages: 6,910 5,825 Location: USA. I know you need to used a castle nut and cotter pin when bolting something when the parts move (rotate) as in the Cub type landing gear legs, etc. Control cable shackle, etc. Been looking for the answer , nothing yet. How about when using a rod end where the bolt is torqued down and the rod is turning on the rod end ball that is stationary ? 2. Feb 26, 2019 ### TFF #### Well-Known Member Joined: Apr 28, 2010 Messages: 11,415 3,164 Location: Memphis, TN Depends. Flight controls? It use to be ok by the FAA to just have a nylock nut. For flight controls on new aircraft the FAA requires two forms of safety, so many use castle nylock with the cotter pin as two. Also depends on if I can see it or not. Can’t see it, I’m sticking a cotter pin in it. Victor Bravo and don january like this. 3. Feb 26, 2019 ### Victor Bravo #### Well-Known Member Joined: Jul 30, 2014 Messages: 5,865 4,694 Location: KWHP, Los Angeles CA, USA A spherical ball joint usually qualifies as allowing movement without trying to loosen the nut/bolt. If the ball joint freezes or rusts or gets pitted, and it doesn't want to rotate easily... you could make a case that in an extreme situation it could try to impart a rotational force on the fastener, which could maaaay-beeee loosen it. Unlikely but certainly possible. HOWEVER, if you had a flight or engine control using a ball joint and a Castle Nut-Cotter pin, you also have the equally valid possibility of the end of the Cotter Pin catching on something (electrical wire, control cable, edge of a bulkhead) and jamming or impeding a flight control because of that. Because of this, in the "real world", I believe a good, new, tight locknut with 2 threads showing is likely as low-risk as a Cotter Pin. Many factors could slant the odds one way or another. 4. Feb 27, 2019 ### Pops #### Well-Known Member Joined: Jan 1, 2013 Messages: 6,910 5,825 Location: USA. Wish all the DAR's would agree on the answer one way or the other. I agree VB, using a new All Steel locknut with 2 threads showing and in an area that can be inspected, I believe its good. But some DAR's would disagree and that is who you have to satisfy. 5. Feb 27, 2019 ### Victor Bravo #### Well-Known Member Joined: Jul 30, 2014 Messages: 5,865 4,694 Location: KWHP, Los Angeles CA, USA Where is this attachment and what is it for? If it is a control linkage and there is something that a Cotter Pin can catch on, then you can make a case with the DAR that you wanted to eliminate that problem because it is more likely than any other type of problem. If it is in an area where there is nothing that it can catch on, then it is hard to make that same argument. If it is something you are never going to see again once the structure is closed, then the Cotter Pin can be explained as be ing more fail-safe over a longer period of time. So the question is what environment is this joint going to be in. 6. Feb 27, 2019 ### cvairwerks #### Well-Known Member Joined: May 12, 2010 Messages: 173 65 Location: North Texas In the production world, any rod end gets a castle nut and cotter key or safety cable. 7. Feb 27, 2019 ### Pops #### Well-Known Member Joined: Jan 1, 2013 Messages: 6,910 5,825 Location: USA. So to cover all bases and DAR's it looks like going to the castle nut and cotter key is the way to go. Thanks. 8. Mar 1, 2019 ### Marc Zeitlin #### Well-Known Member Joined: Dec 11, 2015 Messages: 447 613 Location: Tehachapi, CA I will respectfully disagree that this is the only way to go. While it is possible to use a castle nut and cotter pin to fasten a rod-end to <something> properly, I rarely see it DONE properly. See AC43.13-1B Section 7 paragraph 40(f), which discusses castle nut torquing. This does NOT address when to use them - just how to install them. It clearly indicates that there should be torque on the nut/bolt, but many times when castle nuts are used in control systems (I've seen them on rod ends and engine control pushrods, as well as other control system components) they are installed essentially loose or at best finger tight, with the cotter pin being the only thing keeping the nut from running off the bolt. In many of these cases, vibration has caused substantial motion and wear of the bolt holding the rod-end onto the <component>. Next, see Section 7, paragraph 64, which discusses nuts of various types. Section (b) says that castle nuts with cotter pins can be used in ANY system, but obviously the caveat is that they must have drilled bolts and must be torqued appropriately per 7-40(f). Section (a) says that self-locking nuts (and the first part of the paragraph states that they're referring to metal and/or fiber self-locking nuts here) shouldn't be used on parts subject to rotation. But the inside ball of a rod-end is NOT subject to rotation on its axis - it's SUPPOSED to be clamped tight against whatever it's connected to. So as an A&P, given the above, I'm perfectly happy to see either metal or fiber locknuts on rod-ends (with a large area washer, so that if the ball gets loose in the housing, it's still captured and can't slide over the nut) in the cabin of an aircraft, and in the engine compartment, only metal locking nuts are acceptable. I'm also perfectly OK with castle nuts and cotter pins, as long as they're installed and torqued correctly. If talking about parts that ARE subject to rotation by design, THEN I believe that AC43.13-1B implies that only castle nuts and cotter pins are acceptable, because they're not considered "self-locking" nuts, even though they're in the **** paragraph labeled "self-locking nuts". Sigh... Pops and wsimpso1 like this. 9. Mar 1, 2019 ### wsimpso1 #### Well-Known Member Joined: Oct 19, 2003 Messages: 5,712 2,952 Location: Saline Michigan I have used FMEA a bunch in my career since being trained by DuPont in 1980, so when I look at a system, I first marvel over how it was designed (sometimes the marveling is at the beauty, and sometimes at the lousiness) then at how well the failure modes are shoved out. I have always looked at rod ends and other linkage items for how it can mess up. It seems that with a spherical rod end, its intended function is covered wonderfully with a fiber locking nut. But how can it fail and do we worry over it? Well, if it is flight controls or engine controls, we definitely worry over it. So, failure modes: Containment of Ball in Housing is Lost - If it is sandwiched on both sides with the arm, we are done, but if it is open on one side, we apply a large washer to keep it from getting away; Ball Seizes within Housing - If the pilot continues to exercise the control, it will move and one or more of the surfaces may get loosened (whichever joint is slipping). Which options do we have? Ball Slips on the Arm - the control action will get sloppy, and it may exercise the bolt and nut; Ball Slips on the Washer (not sandwiched) - control action will get sloppy and the nut may become loosened, and; Washer slips on the Nut - Control action which apply torque to the nut. Now if the Ball is Seized in the Housing, I can easily feature that it is also Seized to the Bolt - If this joint is then exercised, the nut will have torque applied to it. To actually do an FMEA, we would need to assess failure mode severity and liklihood - I WILL not attempt that. But just looking at things that can happen gets my attention. So what to do about them? We already either sandwich the ball between arms or use a big washer to keep the ball somewhat contained, but the other modes all seem to include loosing the nut. We definitely do not want the nut coming off. Keeping it tight once the other modes commence is not going to be assured, so we have to be happy with retaining the nut. We all know that the cotter pin will resist a decent amount of torque, but only a little rotation of the nut relative to the bolt will shear the pin. If the pin shears, the nut may continue to rotate - there will be some drag from the remnants of the pin within the bolt, but I dislike counting on that. Use a self locking nut has its good point in that it will have a prevailing torque required to unwind it until the self locking feature comes off the bolt. In my mind, seized parts appear to be possible to loosen a self locking nut and to shear the cotter pin. The important thing would seem to be how much torque and how much energy is needed to commence loosening and then to unwind the nut in each case. Got me tempted to run a few tests out in the shop with a direct reading torque wrench... Once other thing to think about is that when we use the rod end with one side open, the bolt will wobble about, the hole in the arm will open with continued action, and we may well keep enough load and thus torque on the nut to continue loosening and wind the nut past the retaining features (sheared pin or self locking portion). But if instead, we sandwich the ball between paired arms, once the axial preload on the nut is lost, the torque to further unwind the nut is lost. From a failure mode management perspective, it appears to this engineer that the best scheme is probably to sandwich the rod end between two pieces of the same arm, as the rate of failure progression drops with it once the preload is lost. Once you are committed design-wise to single sided arms (it is done a lot and I do not like changing designs on known airplanes) I do not know which will stay put more often once bearing failures commence. Yeah, I am thinking about the eight places in my aileron circuit that are single shear on ball-type rod ends... Enough of the over-educated voice - Let's get the voice of experience on how this stuff all really runs out in the world. What failure modes do we actually see? Which schemes stay put better when the failure modes do occur? Billski BoKu likes this. 10. Mar 1, 2019 ### ScaleBirdsScott #### Well-Known Member Joined: Feb 10, 2015 Messages: 942 603 Location: Uncasville, CT Not to go off the trail too far but, what about Nord-lock type wedge bolts/washers/etc? I like the theory behind them, seen their videos comparing them to traditional self-locking nuts and lock washers and so on, but havn't bothered to use them personally, and so I'm wondering if they used in aviation to any degree? Are they just an expensive gimmick that doesn't truly solve a problem, or a pragmatic solution to certain cases? 11. Mar 1, 2019 ### Pops #### Well-Known Member Joined: Jan 1, 2013 Messages: 6,910 5,825 Location: USA. I'll still stand by my quote, if the DAR will not sign it off, you don't fly. So if you want a sign off, the DAR is the boss. But, I agree with you on the rest of your post. Castle nut just snugged up so the part can still rotate ( like the Cub landing gear leg) install cotter pin. On a rod-end the bolt and nut need to be torqued up and the rod end rotates on the captured ball. In this case, I always use an all metal self locking nut, because it seems to take more torque to turn it. The reason I ask the question. A friend of mine just had his third homebuilt turned down by a $800 DAR inspection because he didn't use castle nuts and cotter pins on the rod -ends. He told him to fix it and let me know when you wants another inspection and left. Will he have to pay another$800 ?, we well see. Almost forgot, The DAR also wanted to see that all the AD's were done on the Experimental Cont - C-85 engine. The owner is upset to say the least. 12. Mar 1, 2019 ### Hot Wings #### Well-Known MemberHBA Supporter Joined: Nov 14, 2009 Messages: 6,200 2,171 Location: Rocky Mountains Probably way off topic: This is just another example of why the FAA needs to change the way they approve DARs. It is perfectly with in the FAAs mandate to require persons providing this service to be qualified. IMHO for them to control the number and distribution of DARs is not. Safety is one thing. Market control is something best left to the market. gtae07 and Pops like this. 13. Mar 1, 2019 ### cvairwerks #### Well-Known Member Joined: May 12, 2010 Messages: 173 65 Location: North Texas Scott: Nords require the one washer face to impart damage to the structure as part of the locking mechanism. Same reason you don’t see star locks or split lock washers on aluminum structure. BoKu likes this. 14. Mar 1, 2019 ### ScaleBirdsScott #### Well-Known Member Joined: Feb 10, 2015 Messages: 942 603 Location: Uncasville, CT I did recall seeing that it left impressions on the part. Pretty sure I wouldn't use them on aluminum for that same reason as you mention. But as I havn't really used them personally I wasn't sure if they were just leaving some marks on a finish, or actually cutting into the parts. I suppose that's a matter of torque and material hardnesses. Whether on a steel structure if those marks are significant enough to compromise the part hard for me to know. But nevertheless that is a major downside for our applications in aviation. 15. Mar 1, 2019 ### Dan Thomas #### Well-Known Member Joined: Sep 18, 2008 Messages: 4,784 2,013 You wouldn't believe the number of times, and failed components, we've found due to incorrectly installed fasteners. It seems that far too many mechanics don't understand what the designed intended; the mechanics are just wrench-turners that didn't get the education required. I've seen control surface hinges destroyed by loose hardware, left like that by mechanics that think the bolt is the bearing surface. Apart from the aforementioned Cub landing gear and the like, which uses substantial lengths of tubing of doubled or tripled layers of thick 4130 sheet to bear the loads, many hinges involve light aluminum brackets with a bushing trapped between them by the bolt, and that bushing rotates in a bearing pressed into a lug on the mating surface. The bolt needs to be torqued up to clamp that bushing tightly so it can't have any relative movement against those light aluminum brackets. Cessna, where much of my maintenance experience lies, does this a lot, in numerous places, and they use nylocks or all-metal locknuts, not cotter-pinned castellated nuts and bolts. This is common in the light-aircraft production world and I have yet to see a locknut missing. In some places, like the aileron bellcrank inside the wing, a castellated nut with its cotter pin and longer, drilled bolt, can snag on adjacent structure and cause control problems. I've found that more than once. Pinned assemblies require longer bolts that often can't fit into confined places, and getting the pin in (or out) can be a nightmare. Loose bolts in such assemblies rapidly wear out expensive brackets. Oh, the bracket might only be $200, but its replacement requires removal of the control surface, drilling out of rivets, and installing the new part and then reassembling everything. Sometimes you might have to disassemble the surface to get at stuff. Ugh.$. In the 180/185, the stabilizer hinges are retained by bushings and bolts at the aft end of the tailcone, and those bolts pass through the ends of aluminum angles that run about five feet forward in the fuselage on both sides, and aluminum brackets picking up the inboard sides of the hinges. Those angles are VERY expensive and VERY difficult to replace, and it's all because some guy left the bolts loose so the stab could rotate for trim function. The bushings are supposed to be clamped by the 1/4" bolts at 70 in-lb, as specified by the manuals, which many mechanics don't read, maybe don't even have. And the bolts use MS21042 locknuts, which Cessna specifies in a lot of places. No pinning. Engine controls usually use all-steel locknuts or pinned nuts, with pinned nuts required on throttle controls for many aircraft. Nylocks ahead of the firewall are frowned upon. Yet I have found nylock nuts on the exhaust pipe clamp bolts! Duh. Nylon long gone. Nuts were still tight. Aircraft exhaust systems typically run very hot, often glowing red at full power. Seized rod-end balls and the like are a result of irresponsible maintenance, like never changing the oil in your car. There's no need for such stuff at all, but we see it anyway bceause too many owners are cheap, or own far more airplane than they can afford. They push the shop to keep the labor times down, and inspection detail suffers. In the end, the airplane is a worthless mess of junk once the guy goes to sell it. Last edited: Mar 1, 2019 D Hillberg, Marc Zeitlin and wsimpso1 like this. 16. Mar 1, 2019 ### Victor Bravo #### Well-Known Member Joined: Jul 30, 2014 Messages: 5,865 4,694 Location: KWHP, Los Angeles CA, USA Marc Zeitlin's post reminded me that you often cannot perfectly torque a nut when you are using a Cotter pin, because you usually have to loosen or tighten it a little to get the bolt holes to line up with the castellations in the nut. So if proper torque is important (bolt stretch, slack removal, clamping force, or pre-load on a structure, etc.) then you may be better off with a locknut. On a truly life-critical component where you have zero tolerance for the fastener coming apart, you can "peen" or "stake" the threads downstream of the nut, to providee a last line of defense. This makes it into a semi-permanent fastener instead of a removable fastener, usually requiring you to throw away the hardware if you ever have to disassemble the joint. 17. Mar 1, 2019 ### Pops #### Well-Known Member Joined: Jan 1, 2013 Messages: 6,910 5,825 Location: USA. When I am building, I temporary assemble with non-locking nuts. Then when assembling for the last time I put the proper nut on and torque and mark with a red paint mark. gtae07, wsimpso1 and BoKu like this. 18. Mar 1, 2019 ### Marc Zeitlin #### Well-Known Member Joined: Dec 11, 2015 Messages: 447 613 Location: Tehachapi, CA I see this on occasion and it's not something I like. I get concerned that the temporary non-locking nut will be forgotten or missed and stay on the assembly for flight. I've seen it happen - people THINK that they're not assembling something for the last time, but then don't check it or reassemble it. My position is, you either assemble the bolt and nut with the final hardware any time there's the slightest chance that the plane will fly, or else you don't assemble it at all. No partial measures that can set up a failure. Of course, if you're perfect, and NEVER forget anything or forget to check anything, then my point is invalid. But the people I've met that fall into that category are few and far between (me included)... 19. Mar 1, 2019 ### Pops #### Well-Known Member Joined: Jan 1, 2013 Messages: 6,910 5,825 Location: USA. For a simple airplane it easy to go over everything and check for the red paint mark. That is also why I check other peoples work and they check mine. MadRocketScientist and Hot Wings like this. 20. Mar 1, 2019 Joined: Apr 4, 2007 Messages: 8,506
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32266417145729065, "perplexity": 3226.4391549048687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572556.54/warc/CC-MAIN-20190916120037-20190916142037-00440.warc.gz"}
http://www.ck12.org/book/Probability-and-Statistics---Advanced-%2528Second-Edition%2529/r1/section/1.2/
<img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" /> # 1.2: An Overview of Data Difficulty Level: At Grade Created by: CK-12 ## Learning Objective • Understand the difference between the levels of measurement: nominal, ordinal, interval, and ratio. ## Introduction This lesson is an overview of the basic considerations involved with collecting and analyzing data. ## Levels of Measurement In the first lesson, you learned about the different types of variables that statisticians use to describe the characteristics of a population. Some researchers and social scientists use a more detailed distinction, called the levels of measurement, when examining the information that is collected for a variable. This widely accepted (though not universally used) theory was first proposed by the American psychologist Stanley Smith Stevens in 1946. According to Stevens’ theory, the four levels of measurement are nominal, ordinal, interval, and ratio. Each of these four levels refers to the relationship between the values of the variable. ### Nominal measurement A nominal measurement is one in which the values of the variable are names. The names of the different species of Galapagos tortoises are an example of a nominal measurement. ### Ordinal measurement An ordinal measurement involves collecting information of which the order is somehow significant. The name of this level is derived from the use of ordinal numbers for ranking (\begin{align*}1^{\text{st}}, \ 2^{\text{nd}}, \ 3^{\text{rd}}\end{align*}, etc.). If we measured the different species of tortoise from the largest population to the smallest, this would be an example of ordinal measurement. In ordinal measurement, the distance between two consecutive values does not have meaning. The \begin{align*}1^{\text{st}}\end{align*} and \begin{align*}2^{\text{nd}}\end{align*} largest tortoise populations by species may differ by a few thousand individuals, while the \begin{align*}7^{\text{th}}\end{align*} and \begin{align*}8^{\text{th}}\end{align*} may only differ by a few hundred. ### Interval measurement With interval measurement, there is significance to the distance between any two values. An example commonly cited for interval measurement is temperature (either degrees Celsius or degrees Fahrenheit). A change of 1 degree is the same if the temperature goes from \begin{align*}0^\circ\end{align*} C to \begin{align*}1^\circ\end{align*} C as it is when the temperature goes from \begin{align*}40^\circ\end{align*} C to \begin{align*}41^\circ\end{align*} C. In addition, there is meaning to the values between the ordinal numbers. That is, a half of a degree has meaning. ### Ratio measurement A ratio measurement is the estimation of the ratio between a magnitude of a continuous quantity and a unit magnitude of the same kind. A variable measured at this level not only includes the concepts of order and interval, but also adds the idea of 'nothingness', or absolute zero. With the temperature scale of the previous example, \begin{align*}0^\circ\end{align*} C is really an arbitrarily chosen number (the temperature at which water freezes) and does not represent the absence of temperature. As a result, the ratio between temperatures is relative, and \begin{align*}40^\circ\end{align*} C, for example, is not twice as hot as \begin{align*}20^\circ\end{align*} C. On the other hand, for the Galapagos tortoises, the idea of a species having a population of 0 individuals is all too real! As a result, the estimates of the populations are measured on a ratio level, and a species with a population of about 3,300 really is approximately three times as large as one with a population near 1,100. ## Comparing the Levels of Measurement Using Stevens’ theory can help make distinctions in the type of data that the numerical/categorical classification could not. Let’s use an example from the previous section to help show how you could collect data at different levels of measurement from the same population. Assume your school wants to collect data about all the students in the school. If we collect information about the students’ gender, race, political opinions, or the town or sub-division in which they live, we have a nominal measurement. If we collect data about the students’ year in school, we are now ordering that data numerically (\begin{align*}9^{\text{th}}, \ 10^{\text{th}}, 11^{\text{th}}\end{align*}, or \begin{align*}12^{\text{th}}\end{align*} grade), and thus, we have an ordinal measurement. If we gather data for students’ SAT math scores, we have an interval measurement. There is no absolute 0, as SAT scores are scaled. The ratio between two scores is also meaningless. A student who scored a 600 did not necessarily do twice as well as a student who scored a 300. Data collected on a student’s age, height, weight, and grades will be measured on the ratio level, so we have a ratio measurement. In each of these cases, there is an absolute zero that has real meaning. Someone who is 18 years old is twice as old as a 9-year-old. It is also helpful to think of the levels of measurement as building in complexity, from the most basic (nominal) to the most complex (ratio). Each higher level of measurement includes aspects of those before it. The diagram below is a useful way to visualize the different levels of measurement. ## Lesson Summary Data can be measured at different levels, depending on the type of variable and the amount of detail that is collected. A widely used method for categorizing the different types of measurement breaks them down into four groups. Nominal data is measured by classification or categories. Ordinal data uses numerical categories that convey a meaningful order. Interval measurements show order, and the spaces between the values also have significant meaning. In ratio measurement, the ratio between any two values has meaning, because the data include an absolute zero value. ## Point to Consider • How do we summarize, display, and compare data measured at different levels? ## Review Questions 1. In each of the following situations, identify the level(s) at which each of these measurements has been collected. 1. Lois surveys her classmates about their eating preferences by asking them to rank a list of foods from least favorite to most favorite. 2. Lois collects similar data, but asks each student what her favorite thing to eat is. 3. In math class, Noam collects data on the Celsius temperature of his cup of coffee over a period of several minutes. 4. Noam collects the same data, only this time using degrees Kelvin. 2. Which of the following statements is not true. 1. All ordinal measurements are also nominal. 2. All interval measurements are also ordinal. 3. All ratio measurements are also interval. 4. Steven’s levels of measurement is the one theory of measurement that all researchers agree on. 3. Look at Table 3 in Section 1. What is the highest level of measurement that could be correctly applied to the variable 'Population Density'? 1. Nominal 2. Ordinal 3. Interval 4. Ratio Note: If you are curious about the “does not apply” in the last row of Table 3, read on! There is only one known individual Pinta tortoise, and he lives at the Charles Darwin Research station. He is affectionately known as Lonesome George. He is probably well over 100 years old and will most likely signal the end of the species, as attempts to breed have been unsuccessful. On the Web Levels of Measurement: Peter and Rosemary Grant: http://en.wikipedia.org/wiki/Peter_and_Rosemary_Grant ### Notes/Highlights Having trouble? Report an issue. Color Highlighted Text Notes Show Hide Details Description Tags: Subjects:
{"extraction_info": {"found_math": true, "script_math_tex": 14, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7702414989471436, "perplexity": 1148.9451247687339}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00144-ip-10-171-10-70.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-algebra/136512-basis-s-can-someone-please-check-my-work-print.html
# Basis of S - can someone please check my work • Mar 30th 2010, 09:25 AM mybrohshi5 Basis of S - can someone please check my work Let S = $\begin{bmatrix}2\\1\\0\\4 \end{bmatrix} \begin{bmatrix}1\\3\\4\\-4 \end{bmatrix} \begin{bmatrix}3\\4\\4\\0 \end{bmatrix}\begin{bmatrix}4\\-3\\-3\\-2 \end{bmatrix}$ and let W be the subspace spanned by S. Find a basis for W and the dimension of W. I found the vectors $\begin{bmatrix}2\\1\\0\\4 \end{bmatrix} \begin{bmatrix}1\\3\\4\\-4 \end{bmatrix} \begin{bmatrix}4\\-3\\-3\\-2 \end{bmatrix}$ to be Linearly Independent so the basis of W would just be these $\begin{bmatrix}2\\1\\0\\4 \end{bmatrix} \begin{bmatrix}1\\3\\4\\-4 \end{bmatrix} \begin{bmatrix}4\\-3\\-3\\-2 \end{bmatrix}$ and the dimension of W would then be 3. Does that all look right? Thanks for checking :) • Apr 24th 2010, 06:21 PM dwsmith Looks fine.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9042716026306152, "perplexity": 1283.9859042310447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889660.55/warc/CC-MAIN-20180120142458-20180120162458-00416.warc.gz"}
https://www.physicsforums.com/threads/problem-1-please-help.160893/
1. Mar 15, 2007 blackout85 1. The problem statement, all variables and given/known data A 10-ohm resistor has a constant current. If 1200 C of charge flow through it in 4 minutes what is the value of the current? A) 3.0 A B) 5.0 A C) 11 A D) 15 A E) 20 A 3. The attempt at a solution I get B as an answer. I=(Q/t). I= 1200C/ (4 * 60) = 5.0 A The book says the answer is D. I thought the problem provided to much information. Can someone explain how D might be an answer. 2. Mar 15, 2007 Staff: Mentor Your method and answer are correct; answer D is not. The value of the resistance is irrevelant to the question asked, but don't let that distract you.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8816604614257812, "perplexity": 967.9047079306083}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825812.89/warc/CC-MAIN-20171023073607-20171023093607-00576.warc.gz"}
http://math.eretrandre.org/tetrationforum/showthread.php?tid=502&pid=5173&mode=threaded
• 0 Vote(s) - 0 Average • 1 • 2 • 3 • 4 • 5 closed form for regular superfunction expressed as a periodic function sheldonison Long Time Fellow Posts: 641 Threads: 22 Joined: Oct 2008 08/30/2010, 03:09 AM (This post was last modified: 08/30/2010, 03:33 AM by sheldonison.) (08/28/2010, 11:21 PM)tommy1729 Wrote: that seems efficient and intresting. in fact i doubt it hasnt been considered before ?Thanks Tommy! I assume it has been considered, and probably calculated before. I think Kneser developed the complex periodic regular tetration for base e, and probably would've generated the coefficients. But I haven't seen them before. Perhaps Henryk (or someone else) could comment??? I figured out the closed form equation for a couple more terms, and I have an equation that should generate the other terms, but I'm still working it, literally as I write this post! $a_2 = (1/2)/(L - 1)$ $a_3 = (1/6 + a_2)/(L*L - 1)$ $a_4 = (1/24 + (1/2)*a_2*a_2 + (1/2)*a_2 + a_3)/(L*L*L-1)$ What I did is start with the equation: $\text{RegularSuperf}(z) = \sum_{n=0}^{\infty}a_nL^{nz}$ and set it equal to the equation $\text{RegularSuperf}(z) = \exp{(\text{RegularSuperf}(z-1))}$ Continuing, there is a bit of trickery in this step to keep the equations in terms of $L^{nz}$, instead of in terms of $L^{n(z-1)}$. Notice that $L^{n(z-1)}=L^{(nz-n)}=L^{-n}L^{nz}$. $\text{RegularSuperf}(z) = \exp{(\text{RegularSuperf}(z-1))} = \exp{( \sum_{n=0}^{\infty}\exp^{(L^{-n}a_nL^{nz})})}$ This becomes a product, with $a_0=L$ and $a_1=1$ $\text{RegularSuperf}(z) = \prod_{n=0}^{\infty} \exp{(L^{-n}a_nL^{nz})}$ The goal is to get an equation in terms of $L^{nz}$ on both sides of the equation. Then I had a breakthrough, while I was typing this post!!!! The breakthrough is to set $y=L^z$, and rewrite all of the equations in terms of y! This wraps the 2Pi*I/L cyclic Fourier series around the unit circle, as an analytic function in terms of y, which greatly simplifies the equations, and also helps to justify the equations. $\text{RegularSuperf}(z) = \sum_{n=0}^{\infty}a_ny^n = \prod_{n=0}^{\infty} \exp{(L^{-n}a_ny^n)}$ The next step is to expand the individual Tayler series for the $\exp {(L^{-n}a_ny^n)}$, and multiply them all together (which gets a little messy, but remember a0=L and a1=1), and finally equate the terms in $y^n$ on the left hand side equation with those on the right hand side equation, and solve for the individual $a_n$ coefficients. Anyway, the equations match the numerical results. I'll fill in the Tayler series substitution next time; this post is already much more detailed then I thought it was going to be! I figured a lot of this out as I typed this post! - Sheldon « Next Oldest | Next Newest » Messages In This Thread closed form for regular superfunction expressed as a periodic function - by sheldonison - 08/27/2010, 02:09 PM RE: regular superfunction expressed as a periodic function - by sheldonison - 08/28/2010, 08:44 PM RE: regular superfunction expressed as a periodic function - by tommy1729 - 08/28/2010, 11:21 PM RE: regular superfunction expressed as a periodic function - by sheldonison - 08/30/2010, 03:09 AM RE: regular superfunction expressed as a periodic function - by Gottfried - 08/30/2010, 09:22 AM RE: regular superfunction expressed as a periodic function - by tommy1729 - 08/30/2010, 09:41 AM RE: regular superfunction expressed as a periodic function - by tommy1729 - 08/30/2010, 09:46 AM RE: regular superfunction expressed as a periodic function - by Gottfried - 08/31/2010, 08:34 PM RE: closed form for regular superfunction expressed as a periodic function - by tommy1729 - 09/03/2010, 08:19 PM RE: closed form for regular superfunction expressed as a periodic function - by sheldonison - 09/05/2010, 05:36 AM RE: closed form for regular superfunction expressed as a periodic function - by tommy1729 - 09/05/2010, 04:45 PM RE: closed form for regular superfunction expressed as a periodic function - by sheldonison - 09/07/2010, 03:54 PM RE: closed form for regular superfunction expressed as a periodic function - by tommy1729 - 09/07/2010, 07:46 PM RE: closed form for regular superfunction expressed as a periodic function - by bo198214 - 09/08/2010, 06:03 AM RE: closed form for regular superfunction expressed as a periodic function - by tommy1729 - 09/08/2010, 06:55 PM RE: closed form for regular superfunction expressed as a periodic function - by bo198214 - 09/09/2010, 10:12 AM RE: closed form for regular superfunction expressed as a periodic function - by tommy1729 - 09/09/2010, 10:18 PM Possibly Related Threads... Thread Author Replies Views Last Post New mathematical object - hyperanalytic function arybnikov 4 1,244 01/02/2020, 01:38 AM Last Post: arybnikov Half-iterates and periodic stuff , my mod method [2019] tommy1729 0 668 09/09/2019, 10:55 PM Last Post: tommy1729 Is there a function space for tetration? Chenjesu 0 722 06/23/2019, 08:24 PM Last Post: Chenjesu Degamma function Xorter 0 1,178 10/22/2018, 11:29 AM Last Post: Xorter Periodic analytic iterations by Riemann mapping tommy1729 1 2,577 03/05/2016, 10:07 PM Last Post: tommy1729 Should tetration be a multivalued function? marraco 17 19,207 01/14/2016, 04:24 AM Last Post: marraco Introducing new special function : Lambert_t(z,r) tommy1729 2 4,131 01/10/2016, 06:14 PM Last Post: tommy1729 Natural cyclic superfunction tommy1729 3 3,502 12/08/2015, 12:09 AM Last Post: tommy1729 Tommy-Mandelbrot function tommy1729 0 2,226 04/21/2015, 01:02 PM Last Post: tommy1729 Can sexp(z) be periodic ?? tommy1729 2 4,309 01/14/2015, 01:19 PM Last Post: tommy1729 Users browsing this thread: 1 Guest(s)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 18, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.790084958076477, "perplexity": 2831.379648248648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655881763.20/warc/CC-MAIN-20200706160424-20200706190424-00558.warc.gz"}
http://gravityandlevity.wordpress.com/2010/12/07/feynmans-ratchet-and-the-perpetual-motion-gambling-scheme/
Can you spot a perpetual motion machine when you see one? In physics, that question is equivalent to “can you spot a scam when you see one?”.  That’s because a perpetual motion machine is, by definition, a fraud.  It is a device that claims to generate useful work in a way that violates one of the most basic laws of physics: the laws of thermodynamics.  The laws of thermodynamics are extremely fundamental to physics; they belong to a set of five or so ideas that can really be called “laws”, upon which the rest of physics is built. So if you (portrayed below by Lisa Simpson) submit an idea or invention to the physics community (portrayed by Homer Simpson) that violates one of the laws of thermodynamics, you’re opening yourself up to a world of ridicule. If someone tells you “what you’re proposing is a perpetual motion machine” (they’ll say perpetuum mobile if they’re trying to sound snooty), they might as well be saying “you couldn’t tell a Lagrangian from a lawnmower”.  It’s a pretty strong rebuke. In my experience, though, most physics students have a false sense of confidence in their own ability to spot a perpetual motion machine.  They think that such a whimsical contraption will have an obvious, glaring flaw that’s easy to notice because it will violate energy conservation.  “Oh, you forgot to take into account friction,” they’ll say, and then they’ll give you a short lecture on the First Law of thermodynamics.  “Energy is neither created nor destroyed,” they’ll say. The truth, however, is that most perpetual motion machines that you are likely to encounter do not violate energy conservation.  Rather, the tricky and persistent scientific “scams” violate the much more nebulous Second Law of Thermodynamics, which says (in one of its formulations): It is impossible for a device to receive heat from a single reservoir and do a net amount of work. It is much easier to be fooled by proposals which violate this Second Law, which ultimately has its roots in probability rather than in the deterministic notions of energy conservation.  In my life I have been fooled on two noteworthy occasions by seemingly good ideas that violate the Second Law of Thermodynamics.  One idea was for a hypothetical machine to generate energy from thin air (molecules).  The other was a sure-fire gambling method.  In this post I’ll discuss both of these fraudulent schemes and why they fail, and I’ll try to explain why the Second Law of Thermodynamics can be stated like this: It is impossible to profit, in the long run, from a truly random process. $\hspace{10mm}$ The remainder of this post is organized thusly: First, I’ll introduce you to Feynman’s ratchet, a fairly popular thought experiment that seemingly yields a perpetual motion machine.  I won’t tell you why it fails, though, until later.  In the second section I’ll introduce you to an idea that I once thought could make me a rich gambler and I’ll explain why it doesn’t work.  Finally I’ll come back to Feynman’s ratchet and explain why it also must fail for a very similar reason. $\hspace{10mm}$ $\hspace{10mm}$ Feynman’s Ratchet Imagine that you manage to construct the following device.  You take a very small, very light-weight metal rod and attach some thin, paddle-like fins to one end.  Let’s say that the rod is held in place by some low-friction bearing which allows it to rotate on its axis.  If the rod/fins are sufficiently light-weight, then when they are exposed to randomly-moving air molecules some of these molecules can hit the fins and cause the rod to rotate in one direction or the other.  You, the inventor, are hoping to harness some of this rotation in a useful way, but you need the rod to rotate consistently in one direction before you can do anything with it.  So you attach the other end of the rod to a ratchet mechanism: a saw-toothed gear that interlocks with a spring-loaded lever (called a pawl).  Like this: The ratchet, according to your design, will allow the rod to rotate easily in one direction (counterclockwise) but will not let it rotate in the other direction (clockwise). So there you have it.  A simple perpetual motion machine.  As long as the surrounding air molecules continue to move randomly, the ratchet should continue to spin (perhaps sporadically) in the counterclockwise direction, driven by occasional collisions with high-energy air molecules.  You can even get useful work out of the ratchet if you want, for example by winding up a rope that lifts a small mass or by using the rod to drive a tiny electrical generator. This clever thought experiment is generally known as “Feynman’s Ratchet”.  It was popularized by Richard Feynman in his Lectures on Physics, although the original explanation belongs to Smoluchowski (of diffusion law fame) in 1912.  I first heard of it as a riddle passed around by undergraduate students. It’s not immediately obvious that such a machine should be impossible.  It certainly doesn’t violate energy conservation, nor does it rely on any “zero friction” assumptions.  Feynman’s ratchet gradually uses up the energy of the randomly-moving air molecules around it (cooling the air as it gains energy through collisions), but so long as the earth is heated by the sun it should continue to rotate and, seemingly, provide useful work.  It seemed to me, as an undergraduate, that this was a clever little device for converting solar energy to useful work. But, by decree of thermodynamics, Feynman’s ratchet cannot work as a heat engine.  It plainly violates the Second Law, which says that useful work can only be obtained by the flow of energy from high to low temperature.  This device purports to get energy from a single temperature reservoir: that of the air around it. Where does it go wrong? If you’re encountering this riddle for the first time, you can try and figure it out for yourself before I tell you the answer below.  But it may help you to first consider another bogus scheme, which I stumbled upon as a high school student and thought for sure could make someone a fortune. $\hspace{10mm}$ $\hspace{10mm}$ The perpetual motion gambling scheme It was during high school that my nerdy friends and I first discovered the joys of computer programming.  It seemed to me then (and still seems now) a remarkable form of instant gratification: if you want to see what happens in a particular hypothetical situation, you just ask the computer to work it out for you and you get to avoid a lot of tedious and questionable theorizing.  Of course, the marvelousness of the computer can quickly lead to the programmer developing an over-reliance on its powers, and from there it’s easy to fall into a kind of intellectual laziness that gets you into all kinds of (scientific) trouble.  It’s probably this computer-born laziness that first allowed me to be fooled by the “perpetual motion gambling scheme”. Back in 11th grade, the programming platform of choice for my friends and I was the TI-83 graphing calculator.  Our setting of choice was the back of physics class.  On one particular day, I was playing a simple blackjack program that my friend had made when I discovered that I could make money every single time I played.  What’s more, I could make an arbitrarily large amount of money, apparently only by judiciously deciding how much to bet at each hand.  I only learned much later in life that I had stumbled across a system called the “martingale strategy“.  And only very recently did I realize that hoping to profit from the martingale strategy amounts to a perpetual motion machine, and is in violation of the Second Law. If you’re unfamiliar with the martingale strategy, it goes as follows.  Consider the simplest possible gambling game (you can easily generalize to other games, like blackjack): you place a bet and then flip a coin.  If the coin comes up tails, then you lose all the money you bet.  If the coin comes up heads, then the money you bet is doubled and given back to you.  It’s a completely fair game which, on average, should give you zero net profit.  The martingale strategy is to place an initial bet (say, $1), and then double your bet each time you lose. In this way a victory at any given coin toss will completely compensate for all previous losses and give you a net profit of$1.  In flowchart form, it looks like this: Notice that there’s no exit to this flow chart except at “Congratulations”.  You can’t lose! Of course, it’s possible that you, the bettor, only have a finite amount of money to bet, which would imply another ignominious exit to this flow chart corresponding to “you have completely run out of money”.  (This was impossible in my friend’s TI-83 blackjack program, which allowed you to go into arbitrarily large amounts of debt).  But the finiteness of a person’s funds didn’t seem like an insurmountable problem to me. Here’s how the strategy played out in my high school student imagination.  Come to the gambling table with some unthinkably huge amount of money: say, $2^{10} = 1,024$ dollars.  Now follow the martingale system until you reach a profit.  The only way the system could fail is in the extremely unlikely event that the coin comes up tails ten consecutive times.  The probability of that happening is only $(1/2)^{10} = 0.097 \%$, so, I reasoned, it can be ignored.  Once you’ve followed the chart and won your $1, start over by resetting your bet to$1.  Repeat the system ad nauseum until you’ve made all the money you want.  Go home rich and happy. And, of course, the strategy is very flexible.  If you’re richer than my “unthinkable” thousandaire and you’re not content with a 1-in-1000 chance of losing, then you can start by coming to the table with $2^{15} = 32,768$ dollars, which would imply a tiny $0.003 \%$ chance of failure.  Or if you want to make money faster (with slightly higher risk), then at each coin toss you could bet (total amount of money lost) + $10 instead of +$1.  What could go wrong? $\hspace{10mm}$ $\hspace{10mm}$ What could go wrong, of course, is the Second Law of thermodynamics.  It says (in my formulation) “you cannot profit from a random process.”  Long-time readers of this blog (thanks!) may notice that the martingale system sounds suspiciously similar to Matt Ridley‘s strategy for biasing the gender distribution: keep having children until you have a boy, and then stop.  It didn’t work there for the same reason that it doesn’t work here: a truly random process cannot be used for directed motion. And, actually, the martingale system isn’t too hard to pick apart once you stop being analytically lazy (as I was in high school) and actually weigh the different outcomes.  Take the example where I come to the gaming table with $2^{10}$ dollars and follow the strategy from the flowchart above.  Then 1023 out of every 1024 games my strategy will succeed, and I’ll receive as my prize $1. However, once in every 1024 games the strategy will fail, and when it fails it will fail spectacularly: I’ll lose$1023.  So if I keep playing the game long enough, on the whole I will make zero profit. Just to make the point visually, here is a simulated string of “martingale” rounds, showing one possible evolution of the gambler’s net profit over time. Note that at a given round, your profit is almost certainly increasing (positive slope), which is why the martingale strategy is so alluring.  If you start from zero, then you will most likely earn some money in the short term.  But given enough time, those big drops will hit you and you will find the strategy unprofitable. $\hspace{10mm}$ Let me say this more again explicitly, as a hint to those still thinking about Feynman’s ratchet.  You cannot get directed motion out of a random process.  You can set up a system that makes a step in one direction (profit) more likely than a step in the other direction (loss), but it will always be accompanied by a change in the size of those steps so that on the whole you go nowhere. Got it? Feynman’s ratchet is explained after the jump $\hspace{10mm}$ $\hspace{10mm}$ The downfall of Feynman’s ratchet The problem with Feynman’s ratchet, as you’ve probably figured out by now, is that there is no such thing as a perfect ratchet mechanism.  What I drew above was a spring-loaded lever that is supposed to prevent the gear from rotating backward.  But in a thermal environment, where energy can be absorbed from randomly-moving air molecules, nothing is impossible.  Things only become improbable due to the high energy they require. So it must be possible for the gear to rotate backwards (clockwise).  In this case, it requires a strong collision from some air molecules against the lever, so that the lever gets pushed up and past the tooth of the gear and the gear can slip backward.  There is a corresponding small rate at which the gear skips backward by one tooth (so that the lever snaps into place in a new location). Of course, this backwards rotation is much less probable than a small forward rotation.  But consider that for the gear to rotate forward by one tooth, a whole bunch of small rotations must be chained together consecutively.  The net rate of all of those small rotations coming together is also be fairly small. And, in fact, the Second Law guarantees that the rates of a forward rotation and a backward rotation are the same.  It seems surprising that this should be the case, no matter how carefully the ratchet is designed and no matter what size/shape the various pieces are.  But it is.  In the Lectures on Physics, Feynman estimates the rates of these two processes and shows that they are, in fact, equal (Chapter 46). Of course, if you really wanted to make the machine work you could cool down the air on the ratchet side or heat up the air on the fin side, like this: But in this case, you’ve only managed to generate work in the same way as a common steam engine: by creating a temperature difference and then using some of the heat that flows from hot to cold.  (Here you’ll need a heat pump to prevent the temperature $T_1$ from equilibrating with $T_2$ by conduction along the metal rod). $\hspace{10mm}$ $\hspace{10mm}$ $\hspace{10mm}$ What did we learn? And now,  like a good episode of G. I. Joe, this post concludes with a recap of the morals to be taken from it.  The first moral is the Second Law itself: it is impossible to extract directed motion from a random process (a single heat reservoir).  Anyone who claims they can do so is either mistaken or a charlatan. A perhaps equally important lesson, though, is that it is easy to be fooled when it comes to the laws of thermodynamics.  In the last decade or two, for example, there was much controversy over the mechanism by which muscle fibers contracted, before someone realized that one of the leading proposals amounted to a perpetual motion machine. So be aware.  Because knowing is half the battle. December 7, 2010 10:24 pm Awesome post! I frickin’ love your blog :) December 7, 2010 11:12 pm Thanks, Erin. I wish the posts came more often, but they generally take me about 8 – 10 hours each to write so I tend to put them off. My “things to blog about” list is growing pretty long! December 8, 2010 8:59 pm Fantastic stuff, again! A university, and their students, is going to get very lucky when you graduate (or after a postdoc, as necessary). December 8, 2010 11:24 pm Thanks shane. I don’t think I have the ego to make a fan page for myself, but your other suggestions are probably a good idea. I think I just successfully added a twitter/facebook “share” link. Let me know if it doesn’t work. December 9, 2010 12:24 pm Yep, it works. Nice. I look forward to your next installment, as always. December 9, 2010 12:52 am How does a windmill work then? Does it work under the same principle as the ratchet? The windmill only pumps water upward out of the ground, not down, and relies upon air movement. Do windmills only work where the wind mostly blows in one direction and not randomly? What if a fluid flows randomly in opposite directions? Can energy by extracted to perform useful work, such as pumping water upward? Your post needs further explanation. December 9, 2010 9:35 am A windmill works, ultimately, by exploiting temperature differences that produce gradients in air pressure. Hot temperatures create higher air pressure while cold temperatures produce low air pressure. Wind is the process of moving the thermal energy of air molecules from high to low temperature/pressure. A windmill exploits this motion (which is not random, as you suggested, but has an overall drift) in a way that is allowed by the Second Law. December 9, 2010 8:04 am Well,the law of entropy says that temperaratures and pressures ,after doing work decay and increase the entropy of the system and tend to make them uniform or equal at everypoint.Consider the pressures in the atmosphere and the occeans which depend only on gravity which will never allow the pressures to become equal at every point on a vertical line.How to explain? December 9, 2010 10:42 am Hi antripathy. You have to be a little careful with how you’re stating the second law. The second law says that the entropy of a closed system will become uniform over time (i.e. the pressure and density will be equal everywhere). A box of gas molecules, for example, when isolated from the rest of the world will eventually equilibrate completely so that its pressure is uniform. If you have a column of water interacting with a gravitational field, however, that doesn’t constitute a closed system. There is still some large mass outside the system pulling on the water molecules and causing them to arrange in a way that prefers higher pressure at lower altitudes. A more correct principle for this system is that the chemical potential is uniform. This is the average free energy per water molecule: energy minus temperature times entropy. December 9, 2010 12:19 pm It is impossible to profit, in the long run, from a truly random process. I’m not sure this is the best way to summarize your observations. Say I invent a game where we flip a fair coin, and every time it comes up heads, I give you $5. It seems like you profit, in the long run, from a truly random process. Also, what about this article: (http://www.telegraph.co.uk/finance/personalfinance/consumertips/8185280/Is-this-a-bet-you-cant-lose.html). It says that some people are finding surefire ways to profit from betting on horses because sometimes bookies give away free bets. Are we to conclude from this that horse racing isn’t random (over the long term)? • gravityandlevity permalink* December 9, 2010 12:34 pm Good point. I guess a better statement would be “It is impossible to profit, in the long run, from an unbiased random process.” Here, “unbiased” means “zero expectation value. Your hypothetical game has an expectation value of$2.50, so of course you will profit in the long run. Coincidentally, this is why it is possible to make a profit from the stock market. There is a net upward drift in the total stock market value due to the world becoming more efficient at producing goods and services. Of course, there is the fact that that stock market isn’t “truly random”: you can make use of knowledge about the company to weigh the probabilities. As for your article, it seems to me that the crux of the “sure fire” method is in taking advantage of incentives offered to first-time bettors. It looks like certain bookies have found it profitable to unbalance the odds for first-time users (so that users win, on average) in order to get people hooked on using their gambling service. I don’t know if the scheme reported in the article works, but if it does then it’s based on quickly jumping from one service to another and taking advantage of their “one time” offers. In that sense it’s a little bit like signing up for one of those “get 12 cd’s for 1 cent!” subscriptions and then canceling immediately. Again, the Second Law remains intact. : ) December 9, 2010 9:41 pm Your restatement of the second law of thermodynamics as “impossible to profit, in the long run, from a truly random process” is incorrect. Blackjack is random and it is possible to profit in the long rug by card-counting. December 10, 2010 9:37 am By counting cards you’re removing the randomness. December 13, 2010 12:01 am You’re not removing the randomness. The cards are no more ordered than they were before simply because you made some predictions about the order in which they would turn up. Does making weather forecasts that turn out to be correct more often than not remove randomness from the weather? I don’t buy his reply about windmills either. Windmills I have seen are made so that they pivot vertically to always face toward the wind. Thus, windmills are able to generate work from wind that blows from random directions. December 13, 2010 12:09 am What if am in a sail boat and I want to cross a lake where the wind blows randomly? When the wind is blowing in the right direction I put up my sail. When the wind isn’t favorable, I reef my sail and anchor. Eventually, I will be able to cross the lake by exploiting the wind even though it blows in random directions. December 13, 2010 10:35 am I like your sailboat question a lot. It’s a tricky one! My guess at a solution is this: First let me imagine that the sailboat and the lake are in complete isolation (i.e. not sitting in the middle of a big, externally-imposed air temperature/pressure gradient), so that the wind truly is “blowing randomly”. It does seem possible that you could cross the lake by (very quickly) raising and lowering the sail at the appropriate moments. But then the problem is exactly like the problem of Maxwell’s demon ( http://en.wikipedia.org/wiki/Maxwell%27s_demon ), which says (in this case) that you might be able to move your sailboat from one side to the other, but you are going to expend a lot of metabolic energy raising and lowering the sail. So by the time you’ve managed to cross the lake, you’ve spent enough energy that you’ve essentially moved the boat by the process of transferring energy from a high-temperature source (yourself) to a low temperature source (the surrounding air). It’s the same as if you had just paddled the boat, which doesn’t violate any laws of thermodynamics. While you used the random air molecules to get across the lake, they’re not ultimately the driving force behind your motion. You might think that you could just concoct some automated (ratchet) system to raise and lower the sail by itself. But I can guarantee, by the second law, that this system will either require a fuel source or it will fail. The same way that your swiveling windmill example will fail unless it has some energy input (e.g. a gas-powered engine) or exploits external temperature gradients. December 13, 2010 7:33 pm In your Martingale example, what about the other party to the bets, the casino or the person with the deep pockets that accepts every bet? They make money from the random process. Maybe your restatement should be “[i]t is impossible to profit, in the long run, from a truly random process UNLESS YOU HAVE FIGURED OUT WAY TO PROFIT FROM THE RANDOM PROCESS.” December 13, 2010 8:24 pm The casino actually doesn’t make money in this example. If the game is fair (zero expectation value), then everyone gains nothing on average. December 14, 2010 3:31 am Wrong. The casino wins because it has a much larger bank roll. Review the part about “you only have a finite amount of money to bet”. December 21, 2010 5:16 pm I think that, in a “real” game in a casino, the casino actually makes money not so much because it has a bigger bank roll but because the game is biased in its favor. I’m afraid I’m not that much of a gambling expert but I am pretty sure that this is the case, in terms of odds versus payouts, for roulette, craps, etc. It is notably *not* the case for blackjack, because as you pointed out above, it is possible to skillfully count cards to make money versus the house when playing blackjack. In *that* case, I believe that the advantage you have is not that you have outwitted a random process, but that the house is required to play by a set of fixed rules (hit on 16, stay on 17, or whatever the actual requirement is), and is not allowed to *itself* count cards to follow an optimum strategy. December 16, 2010 7:07 pm “It is impossible to profit, in the long run, from a truly random process.” Seems applicable to explain the faulty premises that belied the trading practices of the “quant jocks” which led to the financial meltdown of 2008? They seemed to have not considered that 1 out of 1024 times their assumptions wouldn’t work! Perhaps they were with you in the back of that physics class playing blackjack on that old TI-83? Curious if think there’s any merit to my conjecture? Love the blog! 9. January 18, 2011 8:56 am Thing is — I can’t help thinking as the statistician that I am. Presumably all of this depends on the distribution of the random air movements. The reasoning seems to rely on a fat tail — i.e. a non-trivial probability of a large swing. Yet it seems equally possible that large swings are EXTREMELY unlikely. I find it hard to believe that there is no hidden assumption there. January 18, 2011 2:45 pm The kinetic energies of the air molecules follow the Boltzmann distribution: they are distributed according to $e^{-E/k_BT}$. So the “fatness” of the distribution depends only on the temperature, and the argument above works for all temperatures. Really, though, all you need to know is that the rate at which a given thing happens depends only on how much energy is required to make it happen. In this case, the ratchet swings backward and forward at equal rates because it takes the same amount of energy to move over one of the teeth forward as to move over it backward (you have to lift the lever one tooth-height either way). The only difference is that the gear can rotate a little bit forward (which requires a small amount of energy) without clearing the edge of one tooth, and then the spring-loaded lever will push the gear back until the lever is at the bottom of the tooth again. On the other hand, if the lever happens to jump up and the gear rotates by only a tiny amount, then the lever will push the gear backwards until the lever rests at the bottom of the preceding tooth. You should also remember that uniform temperature implies that everything in the environment follows the same Boltzmann distribution. So the air molecules and the atoms that make up the ratchet itself are all randomly kicking around. In this way all possible motions by all objects are being explored simultaneously, and any given motion occurs at a rate dependent only on how much energy it requires. February 8, 2011 3:50 am Thanks for the impressive post. It clarified many issues and, as it is probably supposed to be, suggest new ones…:-) Here’s my thought experiments that apparently violates the second law. Not a very original one, but still I cannot see where it breaks down. Imagine you have a very thin and tiny whisker (what in scanning probe microscopy is usually called a cantilever). It is clamped at one end and free to move at the other. The system is at room temperature. The whisker will vibrate at its resonance frequency around the equilibrium position just for the fact that it is at a finite temperature. Above the whisker you have some kind of traducing system: to simplify things just imagine that the vibrating sting hits something and then transfer to it some of its mechanical energy. This “hitting something” will damp a little the vibration, but the thermal bath is providing constantly thermal energy so we will still be in steady state. Maybe it’s a different steady state than if the sting would be freely vibrating, but it’s still a steady state. Am I not getting mechanical energy out of one single thermal bath or, if you prefer, out of a truly random process? 11. January 4, 2013 11:54 pm What if the saw-toothed gear and the ratchet are in a vacuum?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 21, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5893838405609131, "perplexity": 747.0363363287344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535924501.17/warc/CC-MAIN-20140909013106-00346-ip-10-180-136-8.ec2.internal.warc.gz"}
https://brilliant.org/discussions/thread/introduction-to-cryptography-part-2-diffie-hellman/
# This note has been used to help create the Diffie-Hellman wiki This the second installment in my series of post related to modern cryptography...Click here for part 1 on RSA ciphers Introduction $$\textbf{Introduction:-}$$Most ciphers require the sender and the receiver to exchange a key prior to exchanging coded messages.But the key which is exchanged may itself be vulnerable to attack by an unauthorized third party while it is being transmitted over a network.So serious effort is essential to ensure the safety the safety of the key so that it does not all into the wrong hands. key $$\textbf{An Analogy:-}$$Suppose two people Alice and Bob want to exchange a secret.So at first Alice puts the secret in a box and locks it up using a padlock(which only she can unlock) and sends the locked box to Bob.Bob who is unable to open the padlock puts another padlock on the box(which only he can open and sends the box back to Alice.Now Alice receives the box with two padlocks and she opens up her padlock and returns the box to Bob.Notice that now the box only contains Bob's padlock.After receiving the box Bob opens up his own padlock and hence opens the box containing the secret. Imd $$\textbf{ALGORITHM(Diffie-Hellman):-}$$ Step 1. Together Alice and Bob choose a 200-digit number p which is likely to be a prime and a number $$g$$ such that $$1 < g <p$$ $$\textit{Step 2.}$$Alice secretly chooses an integer $$n$$ $$\textit{Step 3.}$$Bob secretly chooses an integer $$m$$ $$\textit{Step 4.}$$Alice computes $$g^{n} \mod p$$ and tells that to Bob. $$\textit{Step 5.}$$Bob computes $$g^{m} \mod p$$ and tells that to Alice. $$\textit{Step 6.}$$The shared secret key is now $s \equiv (g^{n})^{m} \equiv (g^{m})^{n} \equiv g^{mn} \pmod{p}$ Notice that both Alice and Bob can easily compute it. Alice computes this as $s \equiv (g^{m})^{n} \equiv g^{mn} \pmod{p}$ Bob computes this as $s \equiv (g^{n})^{m} \equiv g^{mn} \pmod{p}$ Now they can communicate with each other using their agreed upon secret key by encrypting messages using a suitable cipher(like Arcfour,Bowfish,Cast128 etc). $$\textbf{Safety and the Discreet Log problem:-}$$ in order to understand the conversation between Alice and Bob the eavesdropper needs the shared secret k $$s$$.BUt it is extremely difficult to compute $$s$$ given only $$p,g,g^{n} \mod p,g^{m} \mod p$$.One way to compute would be to compute n from the knowledge of $$g$$ and $$g^{n}$$ bt for extremely large values of p this is computationally in-feasible.A polynomial time bound in a quantum computer was provided by Peter Shor.This is known as the $$\textit{Discreet Logarithm problem}$$. Part of the reason why this is difficult is because the logarithm of real numbers is continuous but (minimum) logarithm of a number $$\mod p$$ bounces around at random. Img The above plot shows this exotic behavior. $$\textbf{Man in the middle attack:-}$$ Suppose when Alice is sending a message to Bob announcing $$g^{n} \pmod{p}$$, a mamn(eavesdropper) intercepts the message and sends his own number $$g^{t} \pmod{p}$$ to Bob.Eventually Bob and the man agree on a secret key $$g^{tm} \pmod{p}$$ and Alice and the Man agree upon the key $$g^{tn} \pmod{p}$$. When Alice sens a message to Bob she unknowingly uses the secret key $$g^{tn} \mod p$$,the man intercepts it decrypts it,changes it and re-encrypts it using the key $$g^{tm} \mod n$$ and sends it to Bob.This is bad because the man can now read every message between Alice and Bob and moreover can change them in submission in subtle ways. hacker One way to get around this attack is to use the Digital Signature Scheme based on the RSA Cryptosystem. Please reshare as much as possible if you want more such posts on modern cryptography!!Feel free to ask any questions related to this in the comments ...and if you want to make another series of notes alongside this cryptography series then post your suggestions in the comment box. Have a nice day!! :) 4 years, 3 months ago MarkdownAppears as *italics* or _italics_ italics **bold** or __bold__ bold - bulleted- list • bulleted • list 1. numbered2. list 1. numbered 2. list Note: you must add a full line of space before and after lists for them to show up correctly paragraph 1paragraph 2 paragraph 1 paragraph 2 [example link](https://brilliant.org)example link > This is a quote This is a quote # I indented these lines # 4 spaces, and now they show # up as a code block. print "hello world" # I indented these lines # 4 spaces, and now they show # up as a code block. print "hello world" MathAppears as Remember to wrap math in $$...$$ or $...$ to ensure proper formatting. 2 \times 3 $$2 \times 3$$ 2^{34} $$2^{34}$$ a_{i-1} $$a_{i-1}$$ \frac{2}{3} $$\frac{2}{3}$$ \sqrt{2} $$\sqrt{2}$$ \sum_{i=1}^3 $$\sum_{i=1}^3$$ \sin \theta $$\sin \theta$$ \boxed{123} $$\boxed{123}$$ Sort by: Hey, I have programmed this! The Diffie-Hellman App To use it, you have to first create a thread that will contain the messages between you and your partner. The primes and the base used is the one which is used everywhere. It is inspired by this post - 4 years, 3 months ago After sharing the key by what cipher are they encoded? - 4 years, 3 months ago rc4 You might consider reading the source - 4 years, 3 months ago Nice job!!! You've used PHP right??? - 4 years, 3 months ago yes :) My webhost supports only that - 4 years, 3 months ago Cool beans! - 4 years, 3 months ago
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8271276354789734, "perplexity": 1852.9268341983513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863119.34/warc/CC-MAIN-20180619193031-20180619213031-00316.warc.gz"}
http://www.ni.com/documentation/en/labview-comms/2.0/node-ref/struve-function/
Struve Function (G Dataflow) Version: Computes the Struve function. x The input argument. Default: 0 v The index parameter. error in Error conditions that occur before this node runs. The node responds to this input according to standard error behavior. Default: No error Hv(x) Value of the Struve function. error out Error information. The node produces this output according to standard error behavior. Algorithm for Computing the Struve Function For the Struve function of order v, Hv(x) is a solution of the following differential equation. ${x}^{2}\frac{{d}^{2}w}{d{x}^{2}}+x\frac{dw}{dx}+\left({x}^{2}-{v}^{2}\right)w=\frac{{4\left(0.5x\right)}^{v+1}}{\sqrt{\pi }\mathrm{\Gamma }\left(v+0.5\right)}$ The function is defined according to the following intervals for the input values. $v\in \Im ⇒x\in \Re$ $v\in \Re \\Im ⇒x\in \left[0,\infty \right)$ This node supports the entire domain of this function that produces real-valued results. For any integer value of order v, the function is defined for all real values of x. Otherwise, the function is defined for nonnegative real values of x. Where This Node Can Run: Desktop OS: Windows FPGA: Not supported
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6365292072296143, "perplexity": 1850.8848724481152}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646178.24/warc/CC-MAIN-20180318224057-20180319004057-00270.warc.gz"}
https://www.physicsforums.com/threads/please-help-in-limits.111507/
1. Feb 20, 2006 ### mohlam12 hello everyone we've been doing some exercices of limits at class and there are many ones that i didn't understand... and since you don't get that chance to ask your teacher after class in Morcco, I came here for help! here are two of the tens that i didn't undersatnd: so to solve this limit: lim (x-1)/(sqrt(x²+1)) x-> +infinity you have to go from or each x Є ]-infinity,0[ U ]0, +infinity[ g(x)=(x-1)/(sqrt(x²+1)) .........x(1-1/x) g(x)=----------------- .........|x| sqrt(1+1/x²) i just want to understand how you go from that first line to th second line !? and also on this one: how to go from $sqrt(x^2+x+1)-x$ to: $x(sqrt(1+1/x+1/x^2)-1))$ for each x Є ]0,+infinity[ i really appreciate your help, and also if there is a website that gives you the trucks to solve these kind of limits...thanks again 2. Feb 20, 2006 ### Galileo It's just bringing the x outside of the brackets. Isn't is clear that x(1-1/x) is the same as (x-1) for x/=0? Just expand the brackets. Same thing with sqrt(x^2+1). You can bring out the x^2 in (x^2+1), giving x^2(1+1/x^2) (valid for x/=0) But you don't need it to solve the limit. Intuitively you can argue that the -1 in the numerator and the +1 in the denominator are pretty insignificant for large x, so ignoring those you get x/|x|, whose limit is clear. You can also simply divide top and bottom of the fraction by x.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7976638674736023, "perplexity": 1297.7786236363895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583867214.54/warc/CC-MAIN-20190122182019-20190122204019-00509.warc.gz"}
https://wtskills.com/right-triangle/
# Right Triangle Triangle whose one angle is exactly 90 degree is known as Right Triangle Why its called Right Triangle? The name is derived from right angle whose measure is 90 degree. Examples of Right angle triangle Given below are examples of right triangle. Note all of them contain angle measuring exactly 90 degree ## Structure of Right Triangle Right triangle contain following components: (A) Hypotenuse The side opposite to 90 degree angle is called Hypotenuse Its also the longest side of the triangle (B) Base The bottom side of the triangle is known as base (C) Height The line perpendicular to the base is called height of the right triangle. Note that base and height are perpendicular (i.e. 90 degree) to each other While naming the right triangle, you have to crucial to identify Hypotenuse correctly, the other arms “base” and “height” can be used interchangeably Let us observe some examples relate to Right Triangle structure Example 01 Above image is of right triangle ABC, where; Side BC = Base Side AB = Height Side AC = Hypotenuse Observe that: Height and Base are perpendicular to each other Hypotenuse is the longest side Example 02 The above image of right triangle is tricky because its tilted. Here the side BC is at the bottom but it cannot be called Base because it is opposite to the 90 degree angle and also the longest side of the triangle Hence, Side BC = Hypotenuse The other sides are base and height, both can be named interchangeably ## Angles of Right Triangle In right triangle, one angle measures exactly 90 degree while the other two angles are acute angle. In all the above examples, note one of the angle is measures 90 degree while other angles in blue color are acute angle. Figure (A) : Angle B is 90 degree Figure (B) : Angle A measures 90 degree Figure (C) : Angle C measures 90 degree ## Area of Right Triangle The formula for right triangle area calculation is very easy and straight forward Area=\ \frac{1}{2} \times \ base\ \times height ## Property of Right Triangle (01) Right triangle contain one 90 degree angle and two acute angle (02) Pythagoras Theorem It says that square of hypotenuse is equal to sum of square of other two sides Consider the below right triangle ABC Pythagoras Theorem states that: ( hypotenuse)^{2} =\ \ height^{2} +base^{2} ( hy)^{2} =\ \ h^{2} +b^{2} This is one of the most important concepts of right angle triangle. Please practice the formula on paper in order to remember it for your exams. Let us solve some problems related to the concept: Example 01 Below right angle triangle ABC, Base = 3 cm and Height = 4 cm. Find the length of hypotenuse ( hy)^{2} =\ \ h^{2} +b^{2}\\\ \\ ( hy)^{2} =\ \ 4^{2} +3^{2}\\\ \\ ( hy)^{2} =\ \ 16\ +\ 9\\\ \\ ( hy)^{2} =\ 25\\\ \\ hy\ =\ 5 Hence, length of Hypotenuse is 5 cm Example 02 Given below is right angled triangle with height 5 cm and hypotenuse 8 cm. Find the measurement of the third side Using Pythagoras Theorem (hy)^{2} =\ \ h^{2} +b^{2}\\\ \\ ( 8)^{2} =\ \ 5^{2} +b^{2}\\\ \\ 64-25=\ \ b^{2}\\\ \\ ( b)^{2} =\ 39\\\ \\ b\ =\ 6.24 (03) Circumcenter of Right Triangle The circumcenter of right triangle lies at the midpoint of the hypotenuse. What is Circumcenter? It is the center of circle which passes through all the three vertices of triangle. The circumcenter can be located by the intersection of perpendicular bisector of triangle sides Locating Circumcenter of Right Triangle Below is the right triangle of which we locate the circumcenter Step 01 Draw perpendicular bisector of side BC Step 02 Similarly draw perpendicular bisector of side BA The intersection point of the perpendicular bisector (point O) is the circumcenter of the triangle. Note the point O is just at the middle of hypotenuse. Below is how circumcircle looks like for the triangle The radius of circumcircle is half the hypotenuse Conclusion The circumcenter of right triangle lies at the midpoint of hypotenuse of the triangle (04) Centroid of the Right Triangle Centroid of right triangle lie inside the triangle What is centroid? Centroid is the middle point of any graphical figure. In triangle centroid is located at the intersection of medians. Note: Centroid divides median in the ration 2:1 Locating Centroid of Right Triangle Step 01 Find the midpoint of side BC and draw a line touching midpoint and opposite vertex of triangle This line MA is the median of side BC Step 02 Similarly draw the median from side AB The point of intersection of two median (i.e point O) is the centroid of the triangle ABC Median & Centroid Point Note that the centroid divides the median in the ratio of 2 : 1 Suppose the length of median CN be x cm Then length of CO = 2/3 x Length of ON = 1/3 x Similar is the case for other median lines. Conclusion The median of right triangle is inside the triangle ## Frequently asked Question – Right Triangle (01) Is the below image is of Right triangle? No, It is not a right triangle. For Right Triangle the angle measurement has to be exactly 90 degrees (02) Can Right Obtuse angle exists? No !! If one angle is 90 degrees, the other two angles should be acute angles, otherwise the following rule of triangle will not satisfy. Sum of three angle of triangle = 180 degree (03) Can a triangle have two 90 degree angle? Not Possible. We know that: sum of interior angle = 180 When there are two 90 degree angle, the above rule will not be satisfied as there will be no room left for third angle. ⟹ 90 + 90 + x = 180 ⟹ 180 + x = 180 No room for angle x !! (04) If length of two sides of right triangle is given. How to find the length of third side?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9871515035629272, "perplexity": 1791.5151039878258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501066.53/warc/CC-MAIN-20230209014102-20230209044102-00169.warc.gz"}
http://spmaddmaths.onlinetuition.com.my/2013/05/fractional-indices.html
# Fractional Indices ### Fractional Indices $${a^{\frac{1}{n}}}$$ is a nth root of a. ${a^{\frac{1}{n}}} = \sqrt[n]{a}$ $${a^{\frac{m}{n}}}$$ means the nth root of $${a^m}$$. $a m n = a m n$ Example: Find the value of the following: a. $${81^{\frac{1}{2}}}$$ b. $${64^{\frac{1}{3}}}$$ c. $${625^{\frac{1}{4}}}$$ Answer: a. ${81^{\frac{1}{2}}} = \sqrt {81} = 9$ b. ${64^{\frac{1}{3}}} = \sqrt[3]{{64}} = 4$ c. ${625^{\frac{1}{4}}} = \sqrt[4]{{625}} = 5$ Example: Find the value of the following: a. $${16^{\frac{3}{2}}}$$ b. $${\left( {\frac{{27}}{{64}}} \right)^{\frac{2}{3}}}$$ Answer: a. $16 3 2 = ( 16 ) 3 = 4 3 =64$ b. $( 27 64 ) 2 3 = ( 27 64 3 ) 2 = ( 3 4 ) 2 = 9 16$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5089690089225769, "perplexity": 1962.0802018028917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608633.44/warc/CC-MAIN-20170526032426-20170526052426-00459.warc.gz"}
https://math.stackexchange.com/questions/3174030/induction-proof-of-sum-i-0n5i3-fracn5n1123-for-every-natur
# Induction proof of $\sum_{i=0}^{n}(5i+3) = \frac{n(5n+11)}{2}+3$ for every natural $n$ $$S_{n} = \sum_{i=0}^{n}(5i+3)$$ I received a homework problem that instructed me to use induction to prove that for all natural numbers n $$S_{n} = \frac{n(5n+11)}{2}+3$$ First I proved that my base case of $$S_{0}$$ holds, because substituting $$0$$ for $$n$$ in both the top formula and the following formula makes both equal to $$3$$. The next step is to form my inductive hypothesis. My hypothesis is that $$\sum_{i=0}^{n}(5i+3) = \frac{n(5n+11)}{2}+3$$ for all natural numbers $$n$$. Then I'm assuming that $$\sum_{i=0}^{k}(5i+3) = \frac{k(5k+11)}{2}+3$$ holds when $$n$$ = some arbitrary natural number $$k$$ (I've since been told not to do $$n=k$$ for some reason). Next step is to prove that $$S_{k+1}$$ holds, because if it does, knowing that my base case holds will tell me that $$S_{1}$$ holds, telling me that $$S_{2}$$ holds, etc. To prove this, I took the equation from my assumption and substituted $$k+1$$ for $$k$$. Evaluating the left hand side of $$\frac{(k+1)(5(k+1)+11)}{2}+3$$ eventually yielded $$\frac{5k^2+21k+22}{2}$$, and solving the right hand side of $$\sum_{i=0}^{k+1}(5i+3)$$ using Gauss's(?) sum and splitting the terms of the sum (I don't know what to call it) to come to the same result. Since both sides of the equation reduced to the same expression, I reasoned that this proves that my original assumption holds, therefore the statement at the top has been proven. I've gone wrong somewhere above, since I was told that I proved the original assertion with a direct proof rather than by induction. Where did I go wrong? I thought that after making my assumption and learning the case that needs to hold to make such assumption true, all I need to do is see if both sides of the equation equal each other. Has doing a direct proof of the original statement caused me to make too many assumptions? Or have I done something else inappropriate? • You were told... by whom? Your proof seems to line up with induction nicely. – abiessu Apr 4 at 0:16 • @abiessu I was told this by my TA – user2709168 Apr 4 at 0:34 Typically, you want to remember that, for proof by induction, you have to make use of the induction assumption. You assume some case greater than your base case holds, and then show it implies the succeeding step - that gives you the whole "$$S_1 \implies S_2 \implies S_3 \implies ...$$" chain. So our assumption is $$\sum_{i=0}^{k}(5i+3) = \frac{k(5k+11)}{2}+3$$ We seek to show $$\sum_{i=0}^{k+1}(5i+3) = \frac{(k+1)(5(k+1)+11)}{2}+3 = \frac{(k+1)(5k+16)}{2}+3$$ Starting with the sum at the left, we can pull out the $$(k+1)^{th}$$ term: $$\sum_{i=0}^{k+1}(5i+3) = 5(k+1) + 3 + \sum_{i=0}^{k}(5i+3) = 5k+8 + \sum_{i=0}^{k}(5i+3)$$ As it happens, this new summation is precisely what we assume holds. So we substitute the corresponding expression and do some algebra: \begin{align} 5k+9 + \sum_{i=0}^{k}(5i+3) &= 5k+8 + \frac{k(5k+11)}{2}+3\\ &=\frac{10k+16 + 5k^2 + 11k}{2} + 3\\ &=\frac{5k^2+21k+16}{2} + 3\\ &= \frac{(k+1)(5k+16)}{2}+3 \end{align} Thus, the case for $$(k+1)$$ holds, completing the induction step. • I think I mixed up my expressions in the post, but my intention was to have what you had as your assumption(?) as my inductive hypothesis. Do I not use that hypothesis when proving that the k+1 substitution holds? – user2709168 Apr 4 at 0:48 • You used something you refer to as "Gauss's (?) sum" in that, so, no, you did not make use of your induction hypothesis. At least in any obvious way because I have no idea what this sum you refer to is. – Eevee Trainer Apr 4 at 0:54 • Are you saying my inductive hypothesis was Gauss's sum? Because that's not what I thought I was asserting Copy pasting a different comment of mine explaining what I meant: "I mentioned Gauss's sum because that's one of the things I used to evaluate the right side of my equation- through turning the sum of 5i into 5((k+1)(k+2))/2." – user2709168 Apr 4 at 0:56 • I'm saying that you're not making use of the inductive hypothesis. You verified the inductive step by another method, which makes no use of the inductive hypothesis. You have to assume the inductive hypothesis holds when you verify the inductive step: that's the whole point of the "this implies that implies that" domino effect. Alongside the base case and the fact that one implies the next - and you have to have a step implying the next, and have to show that implication holds - that gives us the domino effect. Verifying the induction step independently does not show $S_k\implies S_{k+1}$. – Eevee Trainer Apr 4 at 1:04 • What is my inductive hypothesis in this situation? I thought the hypothesis was that the statement holds when n=k, therefore by proving it holds for k+1 then it holds for all n – user2709168 Apr 4 at 1:08 I think the place where you say you used the "Gauss sum" is where your instructor says you just gave a direct proof. It's hard to tell, because you didn't show us your proof, you just said "and then I did and then ...". What's expected is that you write the result for a particular value of $$k$$ - the inductive hypothesis, then add the next term and do some algebra to show that you get the result for $$k+1$$. As an aside, I really don't like a question that asks you to prove something by induction when there is an easier straightforward way - in this case, Gauss's method. • What do you mean by a particular value of k? I mentioned Gauss's sum because that's one of the things I used to evaluate the right side of my equation- through turning the sum of 5i into 5((k+1)(k+2))/2. I thought the particular value was writing that the statement holds when the value of k is n (or the other way around? trying to figure out what supposedly went wrong is confusing me), and then I can prove I get the same result for k+1. – user2709168 Apr 4 at 0:53 • I can't explain any better what I mean than what is in @EeveeTrainer 's answer and comments. – Ethan Bolker Apr 4 at 1:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 29, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9511373043060303, "perplexity": 306.39232558522275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256184.17/warc/CC-MAIN-20190521002106-20190521024106-00184.warc.gz"}
https://brilliant.org/discussions/thread/magnetic-force-with-damping/?sort=new
# Magnetic Force with Damping Note by Steven Chase 1 year ago This discussion board is a place to discuss our Daily Challenges and the math and science related to those challenges. Explanations are more than just a solution — they should explain the steps and thinking strategies that you used to obtain the solution. Comments should further the discussion of math and science. When posting on Brilliant: • Use the emojis to react to an explanation, whether you're congratulating a job well done , or just really confused . • Ask specific questions about the challenge or the steps in somebody's explanation. Well-posed questions can add a lot to the discussion, but posting "I don't understand!" doesn't help anyone. • Try to contribute something new to the discussion, whether it is an extension, generalization or other idea related to the challenge. MarkdownAppears as *italics* or _italics_ italics **bold** or __bold__ bold - bulleted- list • bulleted • list 1. numbered2. list 1. numbered 2. list Note: you must add a full line of space before and after lists for them to show up correctly paragraph 1paragraph 2 paragraph 1 paragraph 2 [example link](https://brilliant.org)example link > This is a quote This is a quote # I indented these lines # 4 spaces, and now they show # up as a code block. print "hello world" # I indented these lines # 4 spaces, and now they show # up as a code block. print "hello world" MathAppears as Remember to wrap math in $$ ... $$ or $ ... $ to ensure proper formatting. 2 \times 3 $2 \times 3$ 2^{34} $2^{34}$ a_{i-1} $a_{i-1}$ \frac{2}{3} $\frac{2}{3}$ \sqrt{2} $\sqrt{2}$ \sum_{i=1}^3 $\sum_{i=1}^3$ \sin \theta $\sin \theta$ \boxed{123} $\boxed{123}$ Sort by: @Steven Chase are you comfortable with Fluid Mechanics? Can I ask doubt of that topic? - 1 year ago @Steven Chase hello sir ,can you help me in this problem Or at least give me a reply that “I am busy, sorry i can't help” It is a request - 1 year ago @Steven Chase sir i think you have solved my latest problem correctly. Foolish learner has posted a report on it. Can you post its solution, whenever you will be free. - 1 year ago @Steven Chase Sir the problem which you have reported ,only that much information is given in the question in book. I just notice now a solver. Is it you? - 1 year ago Ok, that was the problem. I deleted the report. Thanks - 1 year ago Oh, I think I missed that the inductors are supposed to be identical. Let me try again with that constraint added - 1 year ago Interesting. No, it's not me - 1 year ago Thank you for sharing this note - 1 year ago @Karan Chatrath can you see my note? - 1 year ago @Steven Chase Please take a look into this, and sorry for that if it hurt you. Hope I am not disturbing you - 1 year ago @Steven Chase Hello, can you post a analytical solution for this problem - 1 year ago @Steven Chase In the meantime have a look on this problem - 1 year ago @Steven Chase I am doing, adding the $(W)$work done by electric field and electrostatic force And then $\frac{dW}{dx}=0$ - 1 year ago @Steven Chase The answer given in back of the book is in 3 different cases Like for $l<, l=, l>$ Which is possible. So the author is. Careful while making them. - 1 year ago @Steven Chase I think except $E=0$ , there can also be some cases, when there will be $\infty$ separation. - 1 year ago @Steven Chase i don't know why but, I am not able to think properly what will happen in different cases. - 1 year ago For all we know, the solutions might have entirely different forms, depending on the relative strengths of different constants. - 1 year ago Neither am I. Many of these problems are like that. I wonder if the authors have been careful enough in preparing them - 1 year ago @Steven Chase we are probably neglecting gravitational force, If we take $G=1$ , the question will become more beautiful. - 1 year ago @Steven Chase in some condition there separation can be $\infty$ also may be. - 1 year ago Yes, if $E = 0$, there is no finite maximum separation - 1 year ago @Steven Chase Have a look on this problem - 1 year ago This would be a good problem for the E&M Section - 1 year ago @Steven Chase Yes I was thinking that only. But i want to do it with pen. Do you want to see my attempt.? - 1 year ago @Steven Chase Thanks - 1 year ago @Steven Chase It was solved. But thankyou so much for for still providing solution - 1 year ago Here is my take on the magnetic problem with damping - 1 year ago
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 16, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9035018086433411, "perplexity": 3448.6214645357727}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057018.8/warc/CC-MAIN-20210920040604-20210920070604-00104.warc.gz"}
http://www.ck12.org/geometry/Angle-Bisectors-in-Triangles/lesson/Angle-Bisectors-in-Triangles-Intermediate/r8/
<meta http-equiv="refresh" content="1; url=/nojavascript/"> Angle Bisectors in Triangles ( Read ) | Geometry | CK-12 Foundation You are viewing an older version of this Concept. Go to the latest version. # Angle Bisectors in Triangles % Progress Practice Angle Bisectors in Triangles Progress % Angle Bisectors in Triangles What if the cities of Verticville, Triopolis, and Angletown were joining their city budgets together to build a centrally located airport? There are freeways between the three cities and they want to have the freeway on the interior of these freeways. Where is the best location to put the airport so that they have to build the least amount of road? In the picture below, the blue lines are the proposed roads. After completing this Concept, you'll be able to use angle bisectors to help answer this question. ### Guidance Recall that an angle bisector cuts an angle exactly in half. Let’s analyze this figure. $\overrightarrow{BD}$ is the angle bisector of $\angle ABC$ . Looking at point $D$ , if we were to draw $\overline{ED}$ and $\overline{DF}$ , we would find that they are equal. Recall that the shortest distance from a point to a line is the perpendicular length between them. $ED$ and $DF$ are the shortest lengths between $D$ , which is on the angle bisector , and each side of the angle. Angle Bisector Theorem: If a point is on the bisector of an angle, then the point is equidistant from the sides of the angle. In other words, if $\overleftrightarrow{BD}$ bisects $\angle ABC, \overrightarrow{BE} \bot \overline{ED}$ , and $\overrightarrow{BF} \bot \overline{DF}$ , then $ED = DF$ . Proof of the Angle Bisector Theorem: Given : $\overrightarrow{BD}$ bisects $\angle ABC, \overrightarrow{BA} \bot \overline{AD}$ , and $\overrightarrow{BC} \bot \overline{DC}$ Prove : $\overline{AD} \cong \overline{DC}$ Statement Reason 1. $\overrightarrow{BD}$ bisects $\angle ABC, \overrightarrow{BA} \bot \overline{AD}, \overrightarrow{BC} \bot \overline{DC}$ Given 2. $\angle ABD \cong \angle DBC$ Definition of an angle bisector 3. $\angle DAB$ and $\angle DCB$ are right angles Definition of perpendicular lines 4. $\angle DAB \cong \angle DCB$ All right angles are congruent 5. $\overline{BD} \cong \overline{BD}$ Reflexive PoC 6. $\triangle ABD \cong \triangle CBD$ AAS 7. $\overline{AD} \cong \overline{DC}$ CPCTC The converse of this theorem is also true. Angle Bisector Theorem Converse: If a point is in the interior of an angle and equidistant from the sides, then it lies on the bisector of the angle. Because the Angle Bisector Theorem and its converse are both true we have a biconditional statement. We can put the two conditional statements together using if and only if. A point is on the angle bisector of an angle if and only if it is equidistant from the sides of the triangle. Like perpendicular bisectors, the point of concurrency for angle bisectors has interesting properties. ##### Investigation: Constructing Angle Bisectors in Triangles Tools Needed: compass, ruler, pencil, paper 1. Draw a scalene triangle. Construct the angle bisector of each angle. Use Investigation 1-4 and #1 from the Review Queue to help you. Incenter: The point of concurrency for the angle bisectors of a triangle. 2. Erase the arc marks and the angle bisectors after the incenter. Draw or construct the perpendicular lines to each side, through the incenter. 3. Erase the arc marks from #2 and the perpendicular lines beyond the sides of the triangle. Place the pointer of the compass on the incenter. Open the compass to intersect one of the three perpendicular lines drawn in #2. Draw a circle. Notice that the circle touches all three sides of the triangle. We say that this circle is inscribed in the triangle because it touches all three sides. The incenter is on all three angle bisectors, so the incenter is equidistant from all three sides of the triangle. Concurrency of Angle Bisectors Theorem: The angle bisectors of a triangle intersect in a point that is equidistant from the three sides of the triangle. If $\overline{AG}, \overline{BG}$ , and $\overline{GC}$ are the angle bisectors of the angles in the triangle, then $EG = GF = GD$ . In other words, $\overline{EG}, \overline{FG}$ , and $\overline{DG}$ are the radii of the inscribed circle. #### Example A Is $Y$ on the angle bisector of $\angle XWZ$ ? In order for $Y$ to be on the angle bisector $XY$ needs to be equal to $YZ$ and they both need to be perpendicular to the sides of the angle. From the markings we know $\overline{XY} \bot \overrightarrow{WX}$ and $\overline{ZY} \bot \overrightarrow{WZ}$ . Second, $XY = YZ = 6$ . From this we can conclude that $Y$ is on the angle bisector. #### Example B If $J, E$ , and $G$ are midpoints and $KA = AD = AH$ what are points $A$ and $B$ called? $A$ is the incenter because $KA = AD = AH$ , which means that it is equidistant to the sides. $B$ is the circumcenter because $\overline{JB}, \overline{BE}$ , and $\overline{BG}$ are the perpendicular bisectors to the sides. #### Example C $\overrightarrow{AB}$ is the angle bisector of $\angle CAD$ . Solve for the missing variable. $CB=BD$ by the Angle Bisector Theorem, so we can set up and solve an equation for $x$ . $x+7 &=2(3x-4)\\x+7 &=6x-8\\15 &=5x\\x &=3$ Watch this video for help with the Examples above. #### Concept Problem Revisited The airport needs to be equidistant to the three highways between the three cities. Therefore, the roads are all perpendicular to each side and congruent. The airport should be located at the incenter of the triangle. ### Vocabulary An angle bisector cuts an angle exactly in half. Equidistant means the same distance from. A point is equidistant from two lines if it is the same distance from both lines. When we construct angle bisectors for the angles of a triangle, they meet in one point. This point is called the incenter of the triangle. ### Guided Practice 1. Is there enough information to determine if $\overrightarrow{A B}$ is the angle bisector of $\angle CAD$ ? Why or why not? 2. $\overrightarrow{MO}$ is the angle bisector of $\angle LMN$ . Find the measure of $x$ . 3. A $100^\circ$ angle is bisected. What are the measures of the resulting angles? 1. No because $B$ is not necessarily equidistant from $\overline{AC}$ and $\overline{AD}$ . We do not know if the angles in the diagram are right angles. 2. $LO = ON$ by the Angle Bisector Theorem. $4x - 5 &= 23\\4x &= 28\\x &=7$ 3. We know that to bisect means to cut in half, so each of the resulting angles will be half of $100$ . The measure of each resulting angle is $50^\circ$ . ### Practice For questions 1-6, $\overrightarrow{AB}$ is the angle bisector of $\angle CAD$ . Solve for the missing variable. Is there enough information to determine if $\overrightarrow{AB}$ is the angle bisector of $\angle CAD$ ? Why or why not? 1. Fill in the blanks in the Angle Bisector Theorem Converse. Given : $\overline{AD} \cong \overline{DC}$ , such that $AD$ and $DC$ are the shortest distances to $\overrightarrow{BA}$ and $\overrightarrow{BC}$ Prove : $\overrightarrow{BD}$ bisects $\angle ABC$ Statement Reason 1. 2. The shortest distance from a point to a line is perpendicular. 3. $\angle DAB$ and $\angle DCB$ are right angles 4. $\angle DAB \cong \angle DCB$ 5. $\overline{BD} \cong \overline{BD}$ 6. $\triangle ABD \cong \triangle CBD$ 7. CPCTC 8. $\overrightarrow{BD}$ bisects $\angle ABC$ Determine if the following descriptions refer to the incenter or circumcenter of the triangle. 1. A lighthouse on a triangular island is equidistant to the three coastlines. 2. A hospital is equidistant to three cities. 3. A circular walking path passes through three historical landmarks. 4. A circular walking path connects three other straight paths. Multi- Step Problem 1. Draw $\angle ABC$ through $A(1, 3), B(3, -1)$ and $C(7, 1)$ . 2. Use slopes to show that $\angle ABC$ is a right angle. 3. Use the distance formula to find $AB$ and $BC$ . 4. Construct a line perpendicular to $AB$ through $A$ . 5. Construct a line perpendicular to $BC$ through $C$ . 6. These lines intersect in the interior of $\angle ABC$ . Label this point $D$ and draw $\overrightarrow{BD}$ . 7. Is $\overrightarrow{BD}$ the angle bisector of $\angle ABC$ ? Justify your answer.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 100, "texerror": 0, "math_score": 0.8016451001167297, "perplexity": 320.87722432703765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121899763.95/warc/CC-MAIN-20150124175139-00244-ip-10-180-212-252.ec2.internal.warc.gz"}
https://www.thejournal.club/c/paper/23678/
Novel Relations between the Ergodic Capacity and the Average Bit Error Rate Ferkan Yilmaz, Mohamed-Slim Alouini Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems. As such recent scientific research and studies revealed strong impact of designing and implementing wireless technologies based on these two performance indicators. However and to the best of our knowledge, the direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrated their usefulness by considering some classical examples. arrow_drop_up
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8373171091079712, "perplexity": 235.1904200518894}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500365.52/warc/CC-MAIN-20230206212647-20230207002647-00288.warc.gz"}
http://math.stackexchange.com/questions/114878/boundary-of-invertibles-in-a-normed-algebra
# Boundary of invertibles in a normed algebra A student and I are reading the book Introduction to Banach Spaces and Algebras, by Allan, and we're stuck. Exercise 4.5 says: Let $A$ be a normed algebra with unit sphere $S$. Let $a\in A$. Then $a$ is a topological divisor of 0 if $$\inf\{\|ab\|+\|ba\|:b\in S \}=0.$$ Prove that every element in the frontier of $G(A)$ is a topological divisor of $0$. Here $G(A)$ is the collection of invertible elements of $A$. I assume that the question really means to say that $A$ is a unital normed algebra. Then the book already essentially proves this result for Banach algebras (Corollary 4.13). So if $B$ is the completion of $A$, and if $a$ is still in the frontier of $G(B)$, then we're done (the infimum obviously doesn't change if we replace $S$ by the unit sphere of $B$). Conversely, if there is an example of $a\in\partial G(A)$ with $a\in G(B)$, then we have a counter-example to the exercise. So my question is: If $a\in\partial G(A)$ and $B$ is the completion of $A$, then is $a\in\partial G(B)$? Edit: Embarrassingly, I think I can now answer this! Let $A$ be the complex polynomials, interpreted as an algebra of continuous functions on the interval $[0,1]$. A little bit of algebra shows that $G(A)$ consists of just the constant polynomials. So $G(A)$ is actually closed (not open, which would be the case if $A$ were Banach). So being careful about what "frontier" means, I guess $G(A)$ is its own frontier. But then the exercise is trivially false, as the frontier of $G(A)$ contains invertibles. So the exercise seems wrong. But somehow my counter-example seems cheap. So a new question: Can the frontier of $G(A)$ contain a non-invertible element which is invertible in $B$? Are there examples where $G(A)$ is open? - It still feels to me that there should be a "simpler" or "more natural" commutative counterexample - some kind of explicitly defined algebra of functions on $[0,1]$, for instance... – user16299 Mar 1 '12 at 23:03 I think I can answer the first part of the revised question, though I could just be doing something stupid. Let $A=\ell^1(F_2)$ sitting inside $B=C_r^*(F_2)$. There exists $a\in A$ which is self-adjoint but whose spectrum in $A$, call it $S$, is not contained in ${\mathbb R}$. (We can take $a$ to have finite support: I forget the exact formula which works, but it can be found in Palmer Vol. II in one of the sections on "hermitian" groups, the point being that $F_2$ is not hermitian.) Take a sequence $a_n=a-\lambda_n I$ for scalars $\lambda_n\notin S$ with $\lambda_n\to \lambda$ for some $\lambda \in S \setminus {\mathbb R}$. Then we have a sequence of invertible elements in $A$, which converge in $B$ to an element that is invertible in $B$ but not in $A$. I guess that by taking bicommutants we can get a commutative example from this one, in particular $B=C(X)$ for some $X\subseteq {\mathbb R}$. Of course this doesn't answer your second question, where (if I understand correctly) you want $G(A)$ to be open in $A$ with respect to the norm from $B$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9328626990318298, "perplexity": 116.84203978438865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701174607.44/warc/CC-MAIN-20160205193934-00340-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.or-exchange.org/questions/7516/find-shortest-path-that-uses-exactly-one-arc-in-a-subset
# Find shortest path that uses exactly one arc in a subset 2 Let G(N,A) be a graph with nodes set N and arcs set A. Problem: find the shortest path from node i to node j such that the path contains exactly one arc from the set \$A', which is a subset of A. I am guessing this is a well-known problem. Can someone point me to a reference? Thanks! asked 28 Feb '13, 21:50 Hugh Medal 95●5 accept rate: 0% 3 How large is the set $$A'$$? If it's small, you can simply compute shortest paths from $$i$$ to each arc in $$A'$$, and shortest paths from the end of each arc to $$j$$ (noting that you'll have to remove arcs that were in the path from $$i$$ to the arc before doing the second shortest path problem.) Then pick the shortest of these paths. (01 Mar '13, 00:48) Brian Borchers A' is not going to be huge. Probably between 10x10=100 and 20x20=400. Thats a good idea. (01 Mar '13, 05:52) Hugh Medal 3 You can modify standard shortest path algorithms by keeping two labels at each node i: distance(i,TRUE) and distance(i,FALSE). The TRUE labels track paths that have contained A', the FALSE contains those with no A' edges. At each step, choose the node i with the smallest unhandled label (track the handling of the labels for TRUE and FALSE separately), and update accordingly if (i,j) in A' distance(j,TRUE) = min (distance(j,TRUE), distance(i,FALSE) + d(i,j)) if (i,j) notin A' distance(j,FALSE) = min (distance(j,FALSE),distance(i,FALSE)+d(i,j)) distance(j,TRUE) = min (distance(j,TRUE),distance(i,TRUE)+d(i,j)) At the end, you want distance(target,TRUE) answered 01 Mar '13, 07:26 Mike Trick ♦♦ 743●1●6 accept rate: 18% toggle preview community wiki By Email: Markdown Basics • *italic* or _italic_ • **bold** or __bold__ • image?![alt text](/path/img.jpg "Title") • numbered list: 1. Foo 2. Bar • to add a line break simply add two spaces to where you would like the new line to be. • basic HTML tags are also supported Tags: ×147 ×5 ×2
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8117760419845581, "perplexity": 2305.717816196581}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462710.27/warc/CC-MAIN-20150226074102-00208-ip-10-28-5-156.ec2.internal.warc.gz"}
https://thomaspowell.com/2010/10/18/everything-in-math-comes-down-to-calculus/
# Everything in math comes down to calculus Everything else is a generalization. Take, for example, the formula for the area of a rectangle: $Area=lw$ In reality, this is the result of the equation: $int_{0}^{w}l,dx=lw$ Where the length of the rectangle lies along the y-axis, and the width along the x-axis See? Isn’t that simple, and a much more accurate representation of the area of a rectangle? I thought so.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7671678066253662, "perplexity": 266.8161597529082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038059348.9/warc/CC-MAIN-20210410210053-20210411000053-00088.warc.gz"}
https://www.physicsforums.com/threads/understanding-core-loss-measurement.842813/
# Understanding Core Loss Measurement 1. Nov 12, 2015 ### reson8r Hello folks, I'm looking for a way to measure core loss in RF transformers and inductors. I found a PhD thesis here: https://vtechworks.lib.vt.edu/bitstream/handle/10919/19296/Mu_M_D_2013.pdf which looks like just the ticket, but I'm having trouble understanding key parts. I've tried contacting the author with no success. Chapter 3 introduces a good practical method, but I'm stumped at eqn 3.1 (pg 66). • Why does Pcore depend on capital Ipp and not lower-case ipp? Capital Ipp results from the magnetizing inductance (Lm in fig 3.1), but lower-case ipp is the current step due to core loss. • Why does Pcore depend on duty cycle D? Driving fig 3.1 with a rectangular wave would result in a rectangular voltage across Rcore, so the power loss in Rcore shouldn't depend on D at all. • Not only do I not understand why D is in there, I don't see how its use in eqn 3.1 is derived. In short, I pretty much don't get eqn 3.1 at all! If anyone can help, I would greatly appreciate it. Then perhaps eqn 3.2 will start making sense... Gerrit 2. Nov 17, 2015
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8101487755775452, "perplexity": 1825.927857589173}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648113.87/warc/CC-MAIN-20180323004957-20180323024957-00004.warc.gz"}
https://www.pythoninformer.com/python-language/intermediate-python/lists-vs-tuples/
# Python informer ## Lists vs tuples When you start learning Python, it isn’t too long before you encounter lists. Then at some point you will meet tuples. These are strange objects which are quite a lot like lists, and yet you will probably get the impression that they are not meant to be used in quite the same way. In this article, I hope to shed some light on how tuples are intended to be used. ## The technical differences There are very few actual differences between lists and tuples. One obvious difference is that they are declared in different ways: k = [1, 2, 3] #list t = (1, 2, 3) #tuple The main difference is that tuples are immutable – once you have created a tuple, you cannot change it. You can’t add elements, remove elements, reorder the elements, or change the value of any element. For example if you try to alter the value of an element in a tuple, it will give an error: t[1] = 5 # TypeError: 'tuple' object does not support item assignment If you look at built-in methods of lists, you will see that many of them don’t exist for tuples. Anything which doesn’t alter the tuple (such as find()) is OK, but methods such as sort() simply don’t exist for a tuple. A final difference is that tuples support packing and unpacking notation. You can create a tuple without the brackets (packing), and you can extract the elements to variables in one line (unpacking): t = 1, 2, 3 #packing a, b, c = t #unpacking You might also hear it said that tuples are more efficient, because they do not need to support mutability. This may be true in some limited cases, but it won’t usually make a noticeable difference to your code. If you scatter tuples around your code in the hope of making it run faster, you are going to be disappointed. That really isn’t the point of them ## The intent The original purpose of tuples was to be used to hold records. What is a record? Well if you look at a typical spreadsheet (or database) each row will probably be a record. For example, if the spread sheet held the details of the members of your local Badminton society, the columns might be title, first name, last name, phone number. One row would contain the details of a member, eg ('Mr', 'John', 'Smith' '0123 4567') This would be a prime candidate for using a tuple, rather than a list, because the items are related, that is they all refer to different properties the same person, object or whatever. On the other hand, if you took one of the columns, for example the list of everyone’s surnames, it would be exactly that – a list (not a tuple). The names don’t all relate to the same object. They are all names of members of the Badminton club, but they are not really properties of the club. Now quite often we might want to return a record from a function. Which is were the packing syntax comes in handy. Suppose we have a function mousepos(), which returns the x and y coordinates of the current mouse cursor. If the coordinates are held in variables x and y, we can create a tuple on the fly and return it using packing: def mousepos(): ... return x, y And when we call the function we can assign the result to two variables x and y using unpacking: x, y = mousepos() This gives the helpful illusion that our function has returned two values (which of course it has, but with the help of an invisible tuple along the way). ## Spotting a tuple So when exactly should you use a tuple rather than a list? Here are some indicators that you might need a tuple: • The elements each represent a different property of a single object • The number of elements if fixed (for example, an x, y coordinate always has two elements) • The elements may be different types – a heterogeneous list Here are cases where you might be better with a list: • The items do not form a record, ie they are not related to a single object • The number of elements can vary, for example the number of names in the list of badsoc members could be anything, it depends on the current size of the club. • The elements will often (but not always) be the same type – a homogeneous list ## Immutability Now to muddy the waters a little, we have the separate fact that tuples are immutable. This means that tuples can be used as an alternative to lists in a couple other circumstances: • If you pass a list into a function, the function might change the list. This might be the intended behaviour, which is fine, but if it happen unexpectedly it can cause bugs. Sometimes, if you have a list which should never change, creating it as a tuple rather than a list can be safer. • If you are using dictionaries, you can use tuples as keys, but you can’t use lists as keys. The reason is to do with creating hash values to search keys efficiently, but that is another topic. In summary, there are rules about when to use lists or tuples, but in many real life situations it is not completely clear cut and can be a judgement call. In fact, even if you get it completely wrong, your code will still work. Still, it is always nice to know the right way to do it!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2956429719924927, "perplexity": 561.6331122668637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250598800.30/warc/CC-MAIN-20200120135447-20200120164447-00468.warc.gz"}
https://intl.siyavula.com/read/science/grade-10/magnetism/15-magnetism-01
We think you are located in United States. Is this correct? # Do you need more Practice? Siyavula Practice gives you access to unlimited questions with answers that help you learn. Practise anywhere, anytime, and on any device! Sign up to practise now # Chapter 15: Magnetism ## 15.1 Introduction (ESAEF) Magnetism is an interaction that allows certain kinds of objects, which are called 'magnetic' objects, to exert forces on each other without physically touching. A magnetic object is surrounded by a magnetic 'field' that gets weaker as one moves further away from the object. A second object can feel a magnetic force from the first object because it feels the magnetic field of the first object. The further away the objects are the weaker the magnetic force will be. Humans have known about magnetism for many thousands of years. For example, lodestone is a magnetised form of the iron oxide mineral magnetite. It has the property of attracting iron objects. It is referred to in old European and Asian historical records; from around $$\text{800}$$ BCE in Europe and around $$\text{2 600}$$ BCE in Asia. Magnetic objects stuck to a magnet The root of the English word magnet is from the Greek word magnes, probably from Magnesia in Asia Minor, once an important source of lodestone.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2527664303779602, "perplexity": 1068.3571193989271}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150000.59/warc/CC-MAIN-20210723175111-20210723205111-00441.warc.gz"}
https://pure.royalholloway.ac.uk/portal/en/publications/universal-scaling-of-the-anomalous-hall-effect(3239de6b-c02f-45f1-a7cd-7aa354d49422).html
Universal scaling of the anomalous Hall effect. / Liu, Wenqing. In: Journal of Physics D: Applied Physics, Vol. 50, No. 15, 13.03.2017, p. 1-5. Research output: Contribution to journalLetterpeer-review Published ### Abstract We have undertaken a detailed study of the magneto-transport properties of ultra-thin Fe films epitaxially grown on GaAs (1 0 0). A metal–semiconductor transition has been observed with a critical thickness of 1.25 nm, which was thought to be related to the thermally activated tunneling between metallic clusters. By fitting ${{\rho}_{\text{AH}}}$ versus $\rho _{xx}^{2}$ with the TYJ equation (Tian et al 2009 Phys. Rev. Lett. 103 087206), we found that the magnetization is negligible for the scaling of the anomalous Hall effect in ultra-thin Fe films. Furthermore, the intrinsic term, which is acquired by the linear fitting of ${{\rho}_{\text{AH}}}$ versus $\rho _{xx}^{2}$ , shows an obvious decrease when the film thickness drops below 1.25 nm, which was thought to be related to the fading of the Berry curvature in the ultra-thin film limit. Original language English 1-5 5 Journal of Physics D: Applied Physics 50 15 20 Jan 2017 https://doi.org/10.1088/1361-6463/aa5b1c Published - 13 Mar 2017 This open access research output is licenced under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. ID: 28113974
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7702535390853882, "perplexity": 1748.0507036563636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585177.11/warc/CC-MAIN-20211017113503-20211017143503-00621.warc.gz"}
https://link.springer.com/chapter/10.1007%2F978-3-030-12815-9_9
# Wind-Tunnel Experiments on a Large-Scale Flettner Rotor • G. Bordogna • S. Muggiasca • S. Giappino • M. Belloli • J. A. Keuning • R. H. M. Huijsmans • A. P. van‘t Veer Conference paper Part of the Lecture Notes in Civil Engineering book series (LNCE, volume 27) ## Abstract Experiments on a large-scale Flettner rotor were carried out in the boundary-layer test section of Politecnico di Milano wind tunnel. The rotating cylinder used in the experimental campaign (referred to as Delft Rotor) had a diameter of 1.0 m and span of 3.73 m. The Delft Rotor was equipped with two purpose-built force balances and two different systems to measure the pressure on the rotor’s outer skin. The goal of the experiments was to study the influence of different Reynolds numbers on the aerodynamic forces generated by the spinning cylinder. The highest Reynolds number achieved during the experiments was $${\text{Re}} = 1.0 \cdot 10^{6}$$. ## Keywords Flettner rotor Rotating cylinder Magnus effect Wind power Wind assisted ship propulsion Green shipping ## Notes ### Acknowledgements This research was supported by the Sea Axe Fund. The author would like to thank the research sponsor as well as all the staff at Politecnico di Milano wind tunnel for their kind help. ## References 1. Badalamenti C, Prince SA (2008) The effects of endplates on a rotating cylinder in crossflow. In: 26th AIAA applied aerodynamics conference, Honolulu, HI, USA, 18–21 AugustGoogle Scholar 2. Bergeson L, Greenwald CK (1985) Sail assist developments 1979–1985. J Wind Eng Ind Aerodyn 19:45–114 3. Clayton BR (1985) BWEA initiative on wind assisted ship propulsion (WASP). J Wind Eng Ind Aerodyn 19:251–276 4. Craft TJ, Iacovides H, Johnson N, Launder BE (2012) Back to the future: Flettner-Thom rotors for maritime propulsion? In: 7th international symposium on turbulence heat and mass transfer, Palermo, Italy, 24–27 SeptemberGoogle Scholar 5. Everts M, Ebrahim R, Kruger JP, Miles E, Sharifpur M, Meyer JP (2014) Turbulent flow across a rotating cylinder with surface roughness. In: 10th international conference on heat transfer, fluid mechanics and thermodynamics, Orlando, FL, USA, 14–16 JulyGoogle Scholar 6. Kayser LD, Clay WH, Damico Jr WP (1986) Surface pressure measurements on a 155 mm projectile in free-flight at transonic speed. In: 14th aerodynamic testing conference, West Palm Beach, FL, USA, 5–7 MarchGoogle Scholar 7. Kurtz AD, Ainsworth RW, Thorpe SJ, Ned A (2003) Further work on acceleration insensitive semiconductor pressure sensor for high bandwidth measurements on rotating turbine blades. In: NASA propulsion measurement sensor development workshop, Huntsville, AL, USA, 13–15 MayGoogle Scholar 8. Lafay A (1912) Contribution Experimentale a l’Aerodynamique du Cylindre. Reveus Mechanique 30:417–442Google Scholar 9. McLaughlin TE, Stephen EJ, Robinson MC (1991) Pressure measurements on a rotating circular cylinder. AIAA-91-3265-CPGoogle Scholar 10. Miller MC (1976) Surface pressure measurements on a spinning wind tunnel model. AIAA J 14:1669–1670 11. Pollack FG, Liebert CH, Peterson VS (1972) Rotating pressure measuring system for turbine cooling investigations. Technical report TM X-2621, NASAGoogle Scholar 12. Reid EG (1924) Tests of rotating cylinders. Technical report TN-209, NACAGoogle Scholar 13. Rollstin LR (1990) Measurement of in-flight base pressure on an artillery-fired projectile. J Spacecraft Rockets 27:5–6 14. Swanson WM (1961) The Magnus effect: a summary of investigations to date. J Basic Eng 83:461–470 15. Thom A (1926) The aerodynamics of a rotating cylinder. PhD thesis, University of Glasgow, UKGoogle Scholar 16. Thom A (1931) Experiments on the flow past a rotating cylinder. Technical report R&M No. 1410, Aeronautical Research CouncilGoogle Scholar 17. Thom A, Sengupta SR (1932) Air torque on a cylinder rotating in an air stream. Technical report R&M No. 1520, Aeronautical Research CouncilGoogle Scholar 18. Thom A (1934) Effects of discs on the air forces on a rotating cylinder. Technical report R&M No. 1623, Aeronautical Research CouncilGoogle Scholar 19. Zdravkovich MM (2003) Flow around circular cylinders, volume 2: applications. Oxford University Press Inc., New YorkGoogle Scholar © Springer Nature Switzerland AG 2019 ## Authors and Affiliations • G. Bordogna • 1 Email author • S. Muggiasca • 2 • S. Giappino • 2 • M. Belloli • 2 • J. A. Keuning • 1 • R. H. M. Huijsmans • 1 • A. P. van‘t Veer • 1 1. 1.Section of Ship HydromechanicsDelft University of TechnologyDelftThe Netherlands 2. 2.Department of Mechanical EngineeringPolitecnico di MilanoMilanItaly
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8603627681732178, "perplexity": 19727.424252963283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592636.25/warc/CC-MAIN-20200118135205-20200118163205-00064.warc.gz"}
http://billyfung.com/blog/2017/01/pg-phrase-search/
# tie fighter ## Tie fighter operator One of the new features in Postgres 9.6 is the ability to do phrase text searching. There has always been full text search available in previous versions, but if you wanted to look for specific phrases, that required more of an involved query. But now with version 9.6, we are able to search for phrases where the words are grouped together. Say if we had the text My weekly summary text, previous versions could allow you to search for weekly summary and find it, but then you might also find a text that had weekly and summary separated by 100 words. With 9.6, there is a ‘tie fighter’ operator <-> that allows for specifying the number of words in between. This allows for grouping of words, known as phrases. Test table looking like: postgres=# select * from test; id | sample_text ----+------------------------------------- 1 | test string number 1 2 | test daily string number 2 3 | test weekly string number 3 4 | nothing to be worried about 5 | something not to be me caring about ### 9.4 postgres=# select * from test where sample_text::tsvector @@ 'test & number'::tsquery; id | sample_text ----+----------------------------- 1 | test string number 1 2 | test daily string number 2 3 | test weekly string number 3 This shows searching for text test and number within the table. ### 9.6 postgres=# select * from test where sample_text @@ to_tsquery('test & number') ; id | sample_text ----+----------------------------- 1 | test string number 1 2 | test daily string number 2 3 | test weekly string number 3 (3 rows) postgres=# select * from test where sample_text @@ phraseto_tsquery('test number'); id | sample_text ----+------------- (0 rows) postgres=# select * from test where sample_text @@ to_tsquery('test <2> number') ; id | sample_text ----+---------------------- 1 | test string number 1 (1 row) postgres=# select * from test where sample_text @@ to_tsquery('test <3> number') ; id | sample_text ----+----------------------------- 2 | test daily string number 2 3 | test weekly string number 3 (2 rows) Here you can see the tie fighter operator in use, being able to specify the number of words between the two specified words. That’s a very quick and short summary of phrase searching for now, I haven’t been using it extensively yet but there is also tsvector editing functions to fine tune editing features.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22862392663955688, "perplexity": 7047.444966416834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156460.64/warc/CC-MAIN-20180920101233-20180920121604-00028.warc.gz"}
https://www.scienceforums.net/topic/40919-scalar-product-of-different-sized-vectors/?tab=comments
# Scalar product of different sized vectors ## Recommended Posts The dot product of two i-sized vectors $\vec{A}$, $\vec{B}$ is defined as $\sum_i A_iB_i$, What if the vectors aren't equal in dimension? What is the scalar product of $\vec{A} = (1, 3, 5)$ and $\vec{B} = (2, 4)$? ##### Share on other sites It's not defined. ##### Share on other sites Which means that the cross product isn't either? But the tensor product should be, right? Speaking of tensors, what is $\mathbf{\bar{C}}$ ? In the context I've seen it, it refers to a 3rd degree tensor, but I'm not sure if it has other meanings (related to tensors) ##### Share on other sites The cross product is defined for 3-vectors period. It does not generalize directly. The tensor product has the same constraints as the inner product. Dimensions have to match. ##### Share on other sites The cross product is defined for 3-vectors period. It does not generalize directly. The generalisation to arbitrary dimensions is the wedge product and the Lie algebra of mulitivector fields. ##### Share on other sites The dot product of two i-sized vectors $\vec{A}$, $\vec{B}$ is defined as $\sum_i A_iB_i$, What if the vectors aren't equal in dimension? What is the scalar product of $\vec{A} = (1, 3, 5)$ and $\vec{B} = (2, 4)$? Doesn't $\vec{B} = (2, 4)$ imply $\vec{B} = (2, 4, 0)$? ##### Share on other sites The generalisation to arbitrary dimensions is the wedge product and the Lie algebra of mulitivector fields. Not quite the same thing, though. The cross product is a vector (well, almost; it's a pseudo-vector); the wedge product is not a vector, period. The cross product as such can be defined in R3 and R7 only. Think of it as an offshoot of the imaginary part of the quaternion and octonion product. Why not R15, R31, ...? The sedenions aren't even alternative. While the Cayley–Dickson construction goes on forever, after a few iterations (reals, complex numbers, quaternions, octonions) its pretty much useless. ##### Share on other sites The cross product as such can be defined in R3 and R7 only. Shoot, forgot about that. But thanks for the reminder. Why isn't the outer product defined for arbitrary dimensions? I always thought of the result as a sort of multiplication table... If we had $\vec{A}=(2, 4)$ and $\mathbf{B}= \left( \begin{array}{ccc} 1 & 3 & 5 \\ 2 & 5 & 7 \\ 4 & 7 & 9 \end{array} \right)$ couldn't $\vec{A}\mathbf{B} = \mathbf{\bar{C}}$, where $\mathbf{C}_{1ij} = \left( \begin{array}{ccc} 2 & 6 & 10 \\ 4 & 10 & 14 \\ 8 & 14 & 18 \end{array} \right)$ and $\mathbf{C}_{2ij} = \left( \begin{array}{ccc} 8 & 24 & 40 \\ 16 & 40 & 56 \\ 32 & 56 & 72 \end{array} \right)$ ##### Share on other sites That's an outer product. Now what are you going to do with it? (In other words, what are you really asking in this thread?) ##### Share on other sites Not quite the same thing, though. The cross product is a vector (well, almost; it's a pseudo-vector); the wedge product is not a vector, period. The cross product as such can be defined in R3 and R7 only. Think of it as an offshoot of the imaginary part of the quaternion and octonion product. Why not R15, R31, ...? The sedenions aren't even alternative. While the Cayley–Dickson construction goes on forever, after a few iterations (reals, complex numbers, quaternions, octonions) its pretty much useless. I did not say the same, I said a generalisation. What is true is that the collection of multivector fields over a smooth manifold is a vector space (or if we do not include zero vectors, a module over smooth functions). (Same holds for all tensors.) The fact that the wedge product holds in any dimensions makes it far more useful than the cross product. Edited by ajb ##### Share on other sites $\vec{B} = (2, 4, 0)$? If you specify that B is a vector on the (x,y,0) plane, then yes. Which you might want to do if you were discussing, I dunno, the something on a flat surface and something airbourne. But exactly which subspace B occupies has to be specified - I think. ##### Share on other sites If you specify that B is a vector on the (x,y,0) plane, then yes. Which you might want to do if you were discussing, I dunno, the something on a flat surface and something airbourne. But exactly which subspace B occupies has to be specified - I think. Why would you be trying to multiply vectors in different co-ordinate systems? ##### Share on other sites Why would you be trying to multiply vectors in different co-ordinate systems? Unless one of the vectors is in a subspace, which as the tree said need to be specified it is confusing. I am not sure what the original question is pointing to. However, we can form the tensor product of two arbitrary vector spaces and ask questions there. ##### Share on other sites Why would you be trying to multiply vectors in different co-ordinate systems? Well, that much would be silly, but finding a problem where the 2d vector isn't on the (x,y,0) plane* seems quite natural (the first thing that comes to mind is an aircraft flying with a fixed (non-zero) altitude versus a surface to air missile). Which is why it'd need to be specified to avoid confusion. *osculating plane? Is that the word? ##### Share on other sites The tensor product has the same constraints as the inner product. Dimensions have to match. In my example dimensions didn't match...one was a 2x1 matrix, the other a 3x3 matrix. My original question being answered, I was asking if the tensor product was defined for matrices (or any tensors) with different dimensions. Your first answer was no, yet what I did was an outer product, which leaves me confused... ##### Share on other sites In my example dimensions didn't match...one was a 2x1 matrix, the other a 3x3 matrix. My original question being answered, I was asking if the tensor product was defined for matrices (or any tensors) with different dimensions. Your first answer was no, yet what I did was an outer product, which leaves me confused... There is a bit of confusion here. On a manifold, lets say Euclidean space of a given dimension tensors are sections of various natural bundles. Lets look at vector fields, which are sections of the tangent bundle $TM$. The point is that the dimensions of the fibres are the same as the manifold. So, naturally vector fields are of the same "size". Tensors are sections of tensor products of the tangent and cotangent bundle. So, for example matrix should be thought of as a section of $TM \otimes T^{*}M$, which thus has fibre dimension $n^{2}$ if the manifold is of dimension $n$. So again, matrices are of the same "size". But what you can do is consider more general vector bundles over the manifold and these can have fibre dimensions different to the base manifold. A good example is Lie algebra valued tensors. You can then consider tensor products (and other constructions) of what ever vector bundles you like. The tensor product of two sections is locally just the standard product of the components. So, you can multiply two vectors of different sizes. Similar things hold for more general tensors. Edited by ajb ## Create an account or sign in to comment You need to be a member in order to leave a comment ## Create an account Sign up for a new account in our community. It's easy! Register a new account
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8335479497909546, "perplexity": 637.6741939738631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738819.78/warc/CC-MAIN-20200811180239-20200811210239-00462.warc.gz"}
http://www.aimsciences.org/article/doi/10.3934/jmd.2015.9.67
# American Institute of Mathematical Sciences 2015, 9: 67-80. doi: 10.3934/jmd.2015.9.67 ## Topological full groups of minimal subshifts with subgroups of intermediate growth 1 Laboratoire de Mathémathiques d’Orsay, Université Paris-Sud, F-91405 Orsay Cedex & DMA, École Normale Supérieure, 45 Rue d’Ulm, 75005, Paris, France Received  August 2014 Revised  January 2015 Published  May 2015 This work is partially supported by the ERC starting grant GA 257110 “RaWG”. We show that every Grigorchuk group $G_\omega$ embeds in (the commutator subgroup of) the topological full group of a minimal subshift. In particular, the topological full group of a Cantor minimal system can have subgroups of intermediate growth, a question raised by Grigorchuk; moreover it can have finitely generated infinite torsion subgroups, answering a question of Cornulier. By estimating the word-complexity of this subshift, we deduce that every Grigorchuk group $G_\omega$ can be embedded in a finitely generated simple group that has trivial Poisson boundary for every simple random walk. This work is partially supported by the ERC starting grant GA 257110 “RaWG”. Citation: Nicolás Matte Bon. Topological full groups of minimal subshifts with subgroups of intermediate growth. Journal of Modern Dynamics, 2015, 9: 67-80. doi: 10.3934/jmd.2015.9.67 ##### References: [1] A. Avez, Théorème de Choquet-Deny pour les groupes à croissance non exponentielle,, C. R. Acad. Sci. Paris Sér. A, 279 (1974), 25. [2] L. Bartholdi and R. I. Grigorchuk, On the spectrum of Hecke type operators related to some fractal groups,, Tr. Mat. Inst. Steklova (Din. Sist., 231 (2000), 5. [3] L. Bartholdi, R. I. Grigorchuk and Z. Šuniḱ, Branch groups,, in Handbook of Algebra, (2003), 989. doi: 10.1016/S1570-7954(03)80078-5. [4] J. Cassaigne and F. Nicolas, Factor complexity,, in Combinatorics, (2010), 163. [5] Y. Cornulier, Groupes pleins-topologiques [d'après Matui, Juschenko, Monod,...],, Astérisque, (2012). [6] G. Elek and N. Monod, On the topological full group of a minimal Cantor $\mathbbZ^2$-system,, Proc. Amer. Math. Soc., 141 (2013), 3549. doi: 10.1090/S0002-9939-2013-11654-0. [7] R. Grigorchuk, D. Lenz, and T. Nagnibeda, Spectra of Schreier graphs of Grigorchuk's group and Schroedinger operators with aperiodic order,, preprint, (2014). [8] A. P. Gorjuškin, Imbedding of countable groups in $2$-generator simple groups,, Mat. Zametki, 16 (1974), 231. [9] W. H. Gottschalk, Almost period points with respect to transformation semi-groups,, Ann. of Math. (2), 47 (1946), 762. doi: 10.2307/1969233. [10] T. Giordano, I. F. Putnam and C. F. Skau, Full groups of Cantor minimal systems,, Israel J. Math., 111 (1999), 285. doi: 10.1007/BF02810689. [11] R. I. Grigorchuk, Degrees of growth of finitely generated groups and the theory of invariant means,, Izv. Akad. Nauk SSSR Ser. Mat., 48 (1984), 939. [12] P. Hall, On the embedding of a group in a join of given groups,, Collection of articles dedicated to the memory of Hanna Neumann, 17 (1974), 434. doi: 10.1017/S1446788700018073. [13] K. Juschenko and N. Monod, Cantor systems, piecewise translations and simple amenable groups,, Ann. of Math. (2), 178 (2013), 775. doi: 10.4007/annals.2013.178.2.7. [14] V. A. Kaĭmanovich and A. M. Vershik, Random walks on discrete groups: Boundary and entropy,, Ann. Probab., 11 (1983), 457. doi: 10.1214/aop/1176993497. [15] H. Matui, Some remarks on topological full groups of Cantor minimal systems,, Internat. J. Math., 17 (2006), 231. doi: 10.1142/S0129167X06003448. [16] H. Matui, Some remarks on topological full groups of Cantor minimal systems II,, Ergodic Theory Dynam. Systems, 33 (2013), 1542. doi: 10.1017/S0143385712000399. [17] N. Matte Bon, Subshifts with slow complexity and simple groups with the Liouville property,, Geom. Funct. Anal., 24 (2014), 1637. doi: 10.1007/s00039-014-0293-4. [18] M. Queffélec, Substitution Dynamical Systems-Spectral Analysis,, Lecture Notes in Mathematics, (1294). [19] P. E. Schupp, Embeddings into simple groups,, J. London Math. Soc. (2), 13 (1976), 90. [20] E. K. van Douwen, Measures invariant under actions of $F_2$,, Topology Appl., 34 (1990), 53. doi: 10.1016/0166-8641(90)90089-K. [21] Ya. Vorobets, On a substitution subshift related to the Grigorchuk group,, Tr. Mat. Inst. Steklova, 271 (2010), 319. doi: 10.1134/S0081543810040218. [22] Ya. Vorobets, Notes on the Schreier graphs of the Grigorchuk group,, in Dynamical Systems and Group Actions, (2012), 221. doi: 10.1090/conm/567/11250. show all references ##### References: [1] A. Avez, Théorème de Choquet-Deny pour les groupes à croissance non exponentielle,, C. R. Acad. Sci. Paris Sér. A, 279 (1974), 25. [2] L. Bartholdi and R. I. Grigorchuk, On the spectrum of Hecke type operators related to some fractal groups,, Tr. Mat. Inst. Steklova (Din. Sist., 231 (2000), 5. [3] L. Bartholdi, R. I. Grigorchuk and Z. Šuniḱ, Branch groups,, in Handbook of Algebra, (2003), 989. doi: 10.1016/S1570-7954(03)80078-5. [4] J. Cassaigne and F. Nicolas, Factor complexity,, in Combinatorics, (2010), 163. [5] Y. Cornulier, Groupes pleins-topologiques [d'après Matui, Juschenko, Monod,...],, Astérisque, (2012). [6] G. Elek and N. Monod, On the topological full group of a minimal Cantor $\mathbbZ^2$-system,, Proc. Amer. Math. Soc., 141 (2013), 3549. doi: 10.1090/S0002-9939-2013-11654-0. [7] R. Grigorchuk, D. Lenz, and T. Nagnibeda, Spectra of Schreier graphs of Grigorchuk's group and Schroedinger operators with aperiodic order,, preprint, (2014). [8] A. P. Gorjuškin, Imbedding of countable groups in $2$-generator simple groups,, Mat. Zametki, 16 (1974), 231. [9] W. H. Gottschalk, Almost period points with respect to transformation semi-groups,, Ann. of Math. (2), 47 (1946), 762. doi: 10.2307/1969233. [10] T. Giordano, I. F. Putnam and C. F. Skau, Full groups of Cantor minimal systems,, Israel J. Math., 111 (1999), 285. doi: 10.1007/BF02810689. [11] R. I. Grigorchuk, Degrees of growth of finitely generated groups and the theory of invariant means,, Izv. Akad. Nauk SSSR Ser. Mat., 48 (1984), 939. [12] P. Hall, On the embedding of a group in a join of given groups,, Collection of articles dedicated to the memory of Hanna Neumann, 17 (1974), 434. doi: 10.1017/S1446788700018073. [13] K. Juschenko and N. Monod, Cantor systems, piecewise translations and simple amenable groups,, Ann. of Math. (2), 178 (2013), 775. doi: 10.4007/annals.2013.178.2.7. [14] V. A. Kaĭmanovich and A. M. Vershik, Random walks on discrete groups: Boundary and entropy,, Ann. Probab., 11 (1983), 457. doi: 10.1214/aop/1176993497. [15] H. Matui, Some remarks on topological full groups of Cantor minimal systems,, Internat. J. Math., 17 (2006), 231. doi: 10.1142/S0129167X06003448. [16] H. Matui, Some remarks on topological full groups of Cantor minimal systems II,, Ergodic Theory Dynam. Systems, 33 (2013), 1542. doi: 10.1017/S0143385712000399. [17] N. Matte Bon, Subshifts with slow complexity and simple groups with the Liouville property,, Geom. Funct. Anal., 24 (2014), 1637. doi: 10.1007/s00039-014-0293-4. [18] M. Queffélec, Substitution Dynamical Systems-Spectral Analysis,, Lecture Notes in Mathematics, (1294). [19] P. E. Schupp, Embeddings into simple groups,, J. London Math. Soc. (2), 13 (1976), 90. [20] E. K. van Douwen, Measures invariant under actions of $F_2$,, Topology Appl., 34 (1990), 53. doi: 10.1016/0166-8641(90)90089-K. [21] Ya. Vorobets, On a substitution subshift related to the Grigorchuk group,, Tr. Mat. Inst. Steklova, 271 (2010), 319. doi: 10.1134/S0081543810040218. [22] Ya. Vorobets, Notes on the Schreier graphs of the Grigorchuk group,, in Dynamical Systems and Group Actions, (2012), 221. doi: 10.1090/conm/567/11250. [1] Fabien Durand, Alejandro Maass. A note on limit laws for minimal Cantor systems with infinite periodic spectrum. Discrete & Continuous Dynamical Systems - A, 2003, 9 (3) : 745-750. doi: 10.3934/dcds.2003.9.745 [2] Fabio Augusto Milner. How Do Nonreproductive Groups Affect Population Growth?. Mathematical Biosciences & Engineering, 2005, 2 (3) : 579-590. doi: 10.3934/mbe.2005.2.579 [3] Chungen Liu, Xiaofei Zhang. Subharmonic solutions and minimal periodic solutions of first-order Hamiltonian systems with anisotropic growth. Discrete & Continuous Dynamical Systems - A, 2017, 37 (3) : 1559-1574. doi: 10.3934/dcds.2017064 [4] L. Yu. Glebsky and E. I. Gordon. On approximation of locally compact groups by finite algebraic systems. Electronic Research Announcements, 2004, 10: 21-28. [5] Adriano Da Silva, Alexandre J. Santana, Simão N. Stelmastchuk. Topological conjugacy of linear systems on Lie groups. Discrete & Continuous Dynamical Systems - A, 2017, 37 (6) : 3411-3421. doi: 10.3934/dcds.2017144 [6] Stéphane Sabourau. Growth of quotients of groups acting by isometries on Gromov-hyperbolic spaces. Journal of Modern Dynamics, 2013, 7 (2) : 269-290. doi: 10.3934/jmd.2013.7.269 [7] Sonia Martínez, Jorge Cortés, Francesco Bullo. A catalog of inverse-kinematics planners for underactuated systems on matrix groups. Journal of Geometric Mechanics, 2009, 1 (4) : 445-460. doi: 10.3934/jgm.2009.1.445 [8] Shuhong Chen, Zhong Tan. Optimal partial regularity results for nonlinear elliptic systems in Carnot groups. Discrete & Continuous Dynamical Systems - A, 2013, 33 (8) : 3391-3405. doi: 10.3934/dcds.2013.33.3391 [9] Paweł G. Walczak. Expansion growth, entropy and invariant measures of distal groups and pseudogroups of homeo- and diffeomorphisms. Discrete & Continuous Dynamical Systems - A, 2013, 33 (10) : 4731-4742. doi: 10.3934/dcds.2013.33.4731 [10] Neal Koblitz, Alfred Menezes. Another look at generic groups. Advances in Mathematics of Communications, 2007, 1 (1) : 13-28. doi: 10.3934/amc.2007.1.13 [11] Sergei V. Ivanov. On aspherical presentations of groups. Electronic Research Announcements, 1998, 4: 109-114. [12] Emmanuel Breuillard, Ben Green, Terence Tao. Linear approximate groups. Electronic Research Announcements, 2010, 17: 57-67. doi: 10.3934/era.2010.17.57 [13] Benjamin Weiss. Entropy and actions of sofic groups. Discrete & Continuous Dynamical Systems - B, 2015, 20 (10) : 3375-3383. doi: 10.3934/dcdsb.2015.20.3375 [14] Robert McOwen, Peter Topalov. Groups of asymptotic diffeomorphisms. Discrete & Continuous Dynamical Systems - A, 2016, 36 (11) : 6331-6377. doi: 10.3934/dcds.2016075 [15] Steven T. Piantadosi. Symbolic dynamics on free groups. Discrete & Continuous Dynamical Systems - A, 2008, 20 (3) : 725-738. doi: 10.3934/dcds.2008.20.725 [16] Elon Lindenstrauss. Pointwise theorems for amenable groups. Electronic Research Announcements, 1999, 5: 82-90. [17] Hans Ulrich Besche, Bettina Eick and E. A. O'Brien. The groups of order at most 2000. Electronic Research Announcements, 2001, 7: 1-4. [18] Światosław R. Gal, Jarek Kędra. On distortion in groups of homeomorphisms. Journal of Modern Dynamics, 2011, 5 (3) : 609-622. doi: 10.3934/jmd.2011.5.609 [19] Marc Peigné. On some exotic Schottky groups. Discrete & Continuous Dynamical Systems - A, 2011, 31 (2) : 559-579. doi: 10.3934/dcds.2011.31.559 [20] Uri Bader, Alex Furman. Boundaries, Weyl groups, and Superrigidity. Electronic Research Announcements, 2012, 19: 41-48. doi: 10.3934/era.2012.19.41 2016 Impact Factor: 0.706
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6451452374458313, "perplexity": 4381.935643233626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863570.21/warc/CC-MAIN-20180520131723-20180520151723-00133.warc.gz"}
https://imathworks.com/tex/tex-latex-create-a-local-texmf-tree-in-mactex/
# [Tex/LaTex] Create a local texmf tree in MacTeX mactextexmf I know that TeX Users Group recommends putting your local texmf tree at ~/Library/texmf (see link). However, I want MacTeX to see my local texmf tree in a folder in my Dropbox. This allows me to refer to the same local texmf tree on multiple computers, which makes updating my local texmf tree easier (just update the version in the Dropbox and all the computers see the update). How do I do this? Note: I found some similar tex.stackexchange questions, but they don't answer my question. Here is how I arrived at the solution. First, I tried to run tlmgr but I got an error: my-iMac:~ myname\$ tlmgr Based on Joseph Wright's comment, I realized that tlmgr must be installed. The question then was: where is it located in the file system? Based on tlmgr is not accessible after installing TeX Live 2011 on a Ubuntu system, I found that on my system, tlmgr is located at /usr/local/texlive/2013/bin/x86_64-darwin/tlmgr Therefore, to add the folder ~/Dropbox/computer/localtexmf to the LaTeX search path, I ran the command: sudo /usr/local/texlive/2013/bin/x86_64-darwin/tlmgr conf texmf TEXMFHOME "~/Library/texmf:~/Dropbox/computer/localtexmf"
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9538173675537109, "perplexity": 4734.013801946818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00630.warc.gz"}
http://math.hawaii.edu/wordpress/bjoern/author/admin/
# The maximum probability of writing 001 in a two-state Hidden Markov Model is $8/27$ The aim of this note is to give the simplest possible non-trivial calculation of the parameters of a HMM that maximize the probability of emitting a certain string. Let $\{0,1\}$ be our alphabet. Let $p$ be the probability of emitting 1 in state $s_0$. Let $q$ be the probability of emitting 1 in state $s_1$. Let $\epsilon$ be the probability of transitioning from $s_0$ to $s_1$. Let $\delta$ be the probability of transitioning from $s_1$ to $s_0$. Let $S(t)$ be the state after transitioning $t$ times, a random variable. The probability of emitting the string 001 when starting in state $s_0$ is then $$f(p,q,\epsilon,\delta)=\Pr(001; S(1)=s_0=S(2))+\Pr(001; S(1)=s_0, S(2)=s_1)$$ $$+\Pr(001; S(1)=s_1, S(2)=s_0)+\Pr(001; S(1)=s_1=S(2))$$ $$=\overline p^2 p \overline\epsilon^2 + \overline p^2q\overline\epsilon\epsilon + \overline p\overline q p\epsilon\delta + \overline p\overline q q \epsilon\overline\delta.$$ Which choice of parameters $p, q, \epsilon, \delta$ will maximize this probability? To answer this we first compute $\partial f/\partial\delta$ and set it equal to 0. We find the solution: $p=1$ or $q=1$ or $\epsilon=0$ or $p=q$. Going through these possibilities we keep finding values of $f$ bounded above by $1/4$. The boundary value choice $\delta=0$ (and hence we also assume $p=0$, since there is no use in considering a positive probability of emitting a 1 in state $s_0$ if there is no chance of ever returning to that state), however, gives upon calculation of $\partial f/\partial q$ that $\epsilon=2\overline q$, which gives $f=2q^2\overline q$. This is maximized at $q=2/3$ which corresponds to $\epsilon=2/3$ as well, and gives a value $f=8/27>1/4$. This $8/27$ is decomposable as a sum of two disjoint scenarios of probability $4/27$: 1. One is that after writing the first 0 we stay in state $s_0$, write another 0, and then transition to state $s_1$ to write a 1. 2. The other is that after writing the first 0 we move to state $s_1$, write the 2nd zero there, and stay there to write the 3rd letter, 1. # Computability Theory List Server As of June 15, 2006, we are not posting emails for ANY third party. To post one must be a subscriber to the list. If you are having problems first confirm yourself as subscriber (directions are below) and if that does not work please remove yourself from the list and resubscribe. Directions on how to do this can be found by following the links below. To use the list just send email to comp-thy@lists.hawaii.edu, the list server will take care of the rest. You must be a member of the list to send mail to the list. Anyone is free to join the list. Use the list just as you would a normal email address expect for the fact that everyone subscribed to the list will receive a copy of your email. It may take some time before your message reaches everyone on the list. You may use the list as you see fit. Although it would be best if it were used for short announcements of interest to all computability theorists. A WORD OF CAUTION: Large files cause problems for many mailers. ## Using the list server The list server at University of Hawaii maintains the mailing list. It can do many things. For example, it can be used to subscribe, unsubscribe, or look at the archive for the list. These and others tasks are completed by issuing commands to the list server. The easiest way to do this is do use the WWW interface at listserv.hawaii.edu. # Superposition as memory: unlocking quantum automatic complexity Imagine a lock with two states, locked and unlocked, which may be manipulated using two operations, called 0 and 1. Moreover, the only way to (with certainty) unlock using four operations is to do them in the sequence 0011, i.e., $0^n1^n$ where $n=2$. In this scenario one might think that the lock needs to be in certain further states after each operation, so that there is some memory of what has been done so far. Here we show that this memory can be entirely encoded in superpositions of the two basic states locked and unlocked, where, as dictated by quantum mechanics, the operations are given by unitary matrices. Moreover, we show using the Jordan–Schur lemma that a similar lock is not possible for $n=60$. Details in the paper: Superposition as memory: unlocking quantum automatic complexity which is to appear in the Lecture Notes in Computer Science volume of the conference Unconventional Computation and Natural Computation (UCNC) 2017. # Few paths, fewer words: model selection with automatic structure functions The paper “Kolmogorov structure functions for automatic complexity in computational statistics” appeared in the Lecture Notes in Computer Science proceedings of the conference COCOA 2014, Maui, Hawaii. The paper then appeared in the journal Theoretical Computer Science 2015. The ideas are implemented in the Structure function calculator. A new paper Few paths, fewer words: model selection with automatic structure functions has been conditionally accepted for publication in Experimental Mathematics. Some slides # Covering the computable sets I participated in the workshop on Algorithmic Randomness in Singapore and the conference on Computability, Complexity and Randomness. With host Frank Stephan and fellow participant Sebastiaan Terwijn we wrote a paper entitled Covering the recursive sets which appeared in Lecture Notes in Computer Science, Conference on Computability in Europe, 2015, and has now been published in Annals of Pure and Applied Logic. # A Conflict Between Some Semantic Conditions Damir Dzhafarov, Stefan Kaufmann, Bjørn Kjos-Hanssen, Dave Ripley, et al., at the 2016 ASL Annual Meeting at UConn. Slides José Carmo and Andrew J.I. Jones have studied contrary-to-duties obligations in a series of papers. They develop a logical framework for scenarios such as the following: 1. There ought to be no dog. 2. If there is a dog, there ought to be a fence. One conjecture from Carmo and Jones 1997 was refuted in a rather technical way in my 1996 term paper at University of Oslo. The conjecture stated that one could simply add the condition $\DeclareMathOperator{\pii}{ob}$ $$(Z \in \pii(X)) \land (Y \subseteq X) \land (Y \cap Z \ne \emptyset ) \rightarrow (Z \in \pii(Y )) \tag{5e}$$ for the conditional obligation operator ob. In a follow-up paper (2001) they argued that (5e) could be added by weakening some other conditions. In a new paper, to appear in Studia Logica, and presented at the Association for Symbolic Logic Annual Meeting 2016 at UConn, I argue that (5d) and (5e) are in conflict with each other. The argument is a generalization and strengthening of the 1996 argument.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7748567461967468, "perplexity": 594.9041175097193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189313.82/warc/CC-MAIN-20170322212949-00263-ip-10-233-31-227.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/180804/how-to-get-the-aspect-ratio-of-an-image
# How to get the aspect ratio of an image? I have an image that is: 320 original width 407 original height I want to let users resize the image via a form I am building on a webpage. They can adjust either the width or height. When they do the other dimension should auto adjust to maintain the aspect ratio. When a user updates the width field I get the adjusted height like so: aspect ratio = original width ÷ original height adjusted height = <user chosen width> ÷ aspect ratio Is this correct? Also how to get the adjusted width when the user changes the height field? I know it should be simple enough but I just can't figure it out. - Yes, this is correct – though it unnecessarily uses two divisions instead of one division and a multiplication, adjusted height = <user-chosen width> * original height / original width. The corresponding formula for the adjusted width is adjusted width = <user-chosen height> * original width / original height, or if you want to do it your way, adjusted width = <user-chosen height> * aspect ratio, with aspect ratio calculated as before. - perfect, thanks. –  JakeRow123 Aug 9 '12 at 19:43 Yes. You said you want the aspect ratio of the adjusted image to be the same as the aspect ratio of the original equation, so you want $\frac{\text{adjusted width}}{\text{adjusted height}} = \frac{\text{original width}}{\text{original height}}$ Multiplying both sides of the equation by the adjusted height, you get $\text{adjusted width} = \frac{\text{original width}}{\text{original height}} \cdot \text{adjusted height}$ Then, dividing both sides of the equation by $\frac{\text{original width}}{\text{original height}}$, you get $\text{adjusted width } / \frac{\text{original width}}{\text{original height}} = \text{adjusted height},$ which is the formula you suggested! So, how do you get the adjusted width when the user adjusts the height field? The answer is hidden in the reasoning above! Can you find it? -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9450674653053284, "perplexity": 513.3191930864795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826679.55/warc/CC-MAIN-20140820021346-00062-ip-10-180-136-8.ec2.internal.warc.gz"}
http://pyxplot.org.uk/examples/03td/06surface_cosine/index.html
# Pyxplot ## Examples - Cosine amplitude An example of the surface plotting style, with variable color Pyxplot's surface plotting style evaluates a function at a grid of points in the x-y plane, and draws a 3D surface showing how the function varies across the plane. For added prettiness, an expression is also given for the color of the line, which varies from point to point. As in expressions passed to the using modifier, the columns of data are referred to as $1 for the first column, i.e. x;$2 for the second column, y; etc. The expression given here uses the built-in function hsb() to produce a color object with the specified hue, saturation and brightness. ### Script set numerics complex set xlabel r"$x$" set ylabel r"$y$" set zlabel r"$\left|\cos(x+iy)\right|$" set xformat r"%s$\pi$"%(x/pi) set key below set size 6 ratio 1 zratio 0.5 set grid plot 3d [-pi:pi][-1:1] abs(cos(x+i*y)) with surface \ fillc hsb(\$1/pi/2+0.5,0.9,0.8)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5093007683753967, "perplexity": 4297.907403458283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590074.12/warc/CC-MAIN-20180718080513-20180718100513-00303.warc.gz"}
https://www.analyzemath.com/calculus/Integrals/volume_square_pyramid.html
# Find The Volume of a Square Pyramid Using Integrals Find the formula for the volume of a square pyramid using integrals in calculus. Problem : A pyramid is shown in the figure below. Its base is a square of side a and is orthogonal to the y axis. The height of the pyramid is H. Use integrals and their properties to find the volume of the square pyramid in terms of a and H. Solution to the problem: Let us first position the pyramid so that two opposite sides of the square base are perpendicular to the x axis and the center of its base is at the origin of the x-y system of axes. If we look at the pyramid in a direction orthogonal to the x-y plane, it will look like a two dimensional shape as shown below. AC is the slant height. Let x = A'B' be the length of half of the side of the square at height y. The area A of the square at height y is given by: A(x) = (2x)2 The volume is found by adding all the volumes A dy that make the pyramid from y = 0 to y = H. Hence Volume = 0H A 2 dy = 4 0H x 2 dy We now use the fact that triangles ABC and AB'C' are similar and therefore the lengths of their corresponding sides are proportional to write: (a/2) / x = H / (H - y) We now solve the above for x to obtain x = a (H - y) / (2 H) We now substitute x in the integral that gives the volume to obtain Volume = 4 (a/2H)20H(H - y) 2 dy Let us define t by t = H - y and dt = - dy The volume is now given by Volume = 4 (a/2H)2H0t 2 (- dt) Evaluate the integral and simplify Volume = 4 (a/2H)2 [H3 / 3] Volume = a2 H / 3 The volume of a square pyramid is given by the area of the base times the third of the height of the pyramid. More references on integrals and their applications in calculus. Area under a curve. Area between two curves. Find The Volume of a Solid of Revolution. Volume by Cylindrical Shells Method.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9422419667243958, "perplexity": 204.32586382235647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039568689.89/warc/CC-MAIN-20210423070953-20210423100953-00452.warc.gz"}
https://enacademic.com/dic.nsf/enwiki/4712487/Capital%2C_Volume_I
# Capital, Volume I "Capital, Volume I" is the first of three volumes in Karl Marx's monumental work, "Das Kapital," and the only volume to be published during his lifetime. Originally published in 1867, Marx's aim in "Capital, Volume I" is to uncover and explain the laws specific to the capitalist mode of production and of the class struggles rooted in these capitalist social relations of production. Part One: Commodities and Money Chapters 1-3 begin with a dense theoretical discussion of the commodity, value, exchange, and the genesis of money. As Marx writes, "Beginnings are always difficult in all sciences ... the section that contains the analysis of commodities, will therefore present the greatest difficulty." [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 89.] Chapter 1: The Commodity ection 1. The Two Factors Of The Commodity: Use-Value And Value (Substance Of Value, Magnitude Of Value) Marx begins the chapter with this: “The wealth of societies in which the capitalist mode of production prevails appears as an ‘immense collection of commodities’…” [ibid. 125. ] A commodity, Marx states, is an external object that satisfies any human need; he adds that it does not matter how the commodity meets those needs. To be a commodity, something must have both a use-value and a value. A use-value is another way of knowing how useful a commodity is, and it cannot be separated from the “physical body” of the commodity. Value is the thing that is common to different commodities, and a commodity's value is found in the labor that goes into producing it. When Marx says "substance of value,” he is equating it with “labor in the abstract”; abstract labor is not a strict definition of labor, but simply labor of some form or another. When Marx speaks of a commodity’s “magnitude of value,” he is referring to “socially necessary labor time.” This is, in other words, the way in which a commodity’s value is measured. “Socially necessary labor time” is defined as “the labor-time required to produce any use-value under the conditions of production normal for a given society and with the average degree of skill and intensity of labor prevalent in that society.” [ibid. 129. ] And so the value of a commodity would rise, for instance, if the productivity of labor is low, but the labor time to produce it is high. (Think of a diamond; this is a rare commodity, and to find one usually requires a great deal of time.) Marx goes on to say that just because something is a use-value does not mean it has value. To understand this, think of such things as air, virgin soil, etc.; these are use-values, but they do not require labor to produce them, so therefore they have no value. One must think of a commodity going through the social function of exchange. If something is just a use-value for oneself, then it is not a commodity; something only becomes a commodity when it is produced for others. Finally, it is important to note that if labor is producing something that is useless, then, as Marx says, “the labor does not count as labor, and therefore creates no value." [ibid. 131. ] ection 2. The Dual Character Of The Labour Embodied In Commodities Marx discusses the relationship between labor and value. Marx states if there is a change in the quantity of labor expended to produce an article, the value of the article will change. This is, in fact, a direct correlation. Marx gives an example of the value of linen versus cloth to explain the worth of each commodity in a capitalist society. Linen is hypothetically twice as valuable as thread because more socially necessary labor time was used to create it. Use-value of every commodity is produced by useful labor. Use-value measures the actual usefulness of a commodity, whereas value is a measurement of exchange value. The source of value is labor-power. Objectively speaking, linen and thread have some value. Different forms of labor create different kinds of use-values. The value of the different use-values created by different types of labor can be compared because both are expenditures of human labor. One coat and ten yards of linen take the same amount of socially necessary labor time to make, so they have the same value. ection 3. The Value-Form or Exchange-Value (a) The Simple, Isolated, or Accidental Form of Value In this chapter Marx explains that commodities come in double form. Commodities come in natural form and value form. We don't know commodities' value until we know how much human labor was put in it. Commodities are traded between each other, after their value is decided socially. Then there is value-relation, which lets us trade between different kind of commodities. Marx explains value without using money. Marx uses 20 yards of linen and a coat to show the value of each other. (20 yards of linen = 1 coat, or: 20 yards of linen are worth 1 coat) pg. 139. Marx calls this an equivalent form. He adds that the value 20 yards of linen is just 20 yards of linen, there is no expression of value. Linen is an object of utility and that we cannot tell its value until we compare it to another commodity. Figuring out the value of commodity depends on its position in the expression value. Also commodities value depends in its being expressed or a commodity which the value is being expressed. (b) The Total or Expanded Form of Value Marx begins this section with an equation for the expanded form of value: "z commodity A = u commodity B or = v commodity C or = w commodity D or = x commodity E or = etc." where the lower case letters (z, u,v, w, and x) represent quantities of a commodity and upper case letters (A, B,C,D, and E) represent specific commodities so that an example of this could be: "20 yards of linen = 1 coat or = 10 lb. tea or = 40 lb. coffee or = 1 quarter of corn or = 2 ounces of gold or = ½ ton of iron or = etc." [ibid. 155. ] Marx explains that with this example of the expanded form of value the linen “is now expressed in terms of innumerable other members of the world of commodities. Every other commodity now becomes a mirror of linen’s value.” [ibid. 155. ] At this point, the particular use-value of linen becomes unimportant, but rather it is the magnitude of value (determined by socially necessary labor time) possessed in a quantity of linen which determines its exchange with other commodities. This chain of particular kinds (different commodites) of values is endless in that it contains every commodity and is constantly changing as new commodities come into being. (c) The General Form of Value Marx Begins this section with the table:: [ibid. 157. ] In such a table, the value of specific quantities of various commodities are now expressed in relation to a specific quantity of one single commodity, so that the form of value is now both simple and unified. Marx writes, “through its equation with linen, the value of every commodity is now not only differentiated from its own use-value, but from all use-values, and is, by that very fact expressed as that which is common to all commodities. By this form commodities are, for the first time, really brought into relation with each other as values, or permitted to appear to each other as exchange-values”. [ibid. 158.] At this juncture, one commodity (here linen) becomes a universal equivalent. It establishes the exchange between all other commodities and loses its ability to function in the relative form of value because it cannot function as its own equivalent (20 yds. of linen = 20 yds. of linen is not a meaningful expression of value). (d) The Money Form When in the course of history, one commodity, through social custom takes the form of the universal equivalent that commodity becomes the money commodity and begins to serve as money. Marx claims that in European history that commodity is gold. Thus, in the first table gold and linen switch places creating: : [ibid. 162. ] When the relative value of a commodity is expressed in terms of a commodity serving as the money commodity that is considered the price form. ection 4. The Fetishism of the Commodity and Its Secret Marx's inquiry in this section focuses on the nature of the commodity, apart from its basic use-value. In other words, why does the commodity in its value-form (exchange) appear to be something other than the aggregation of homogenous human labor? Marx contends that due to the historical circumstances of capitalist society, the values of commodities are usually studied by political economists in their most advanced form: money. These economists see the value of the commodity as something metaphysically autonomous from the social labor that is the actual determinant of value. Marx calls this fetishism - the process whereby the society that originally generated an idea eventually, through the distance of time, forgets that the idea is actually a social and therefore all-too-human product. What this means is that this society will not look beneath the veneer of the idea (in this case the value of commodities) as it currently exists. They will simply take the idea as a natural and/or God-given inevitability that they are powerless to alter. Marx compares this fetishism to the manufacturing of religious belief; people initially create a deity to fulfill whatever desire or need they have in present circumstances, but then these products of the human brain appear as autonomous figures endowed with a life of their own, which enter into a relation both with each other and with the human race (165). Similarly, commodities only enter into relation with each other through exchange, which is a purely social phenomenon. Before that, they are simply useful items, but not commodities. Value itself cannot come from use-value because there is no way to compare or contrast the usefulness of an item; there are simply too many potential functions. So once in exchange, their value is determined by the amount of socially useful labor-time put into them because labor can be generalized. It takes longer to mine diamonds than it does to dig for quartz, thus diamonds are worth more. Fetishism within capitalism occurs once labor has been socially divided and centrally coordinated, and the worker no longer owns the means of production. They no longer have access to the knowledge of how much labor went into a product because they no longer control its distribution. From there, the only obvious determinant of value to the mass of people is the value that has been assigned in the past. Thus the value of a commodity seems to arise from a mystical property inherent to it, rather than from the labor-time, the actual determinant of value. Chapter 2: The Process of Exchange Marx explains commodity exchange and the fact that commodities need assistance to be exchanged. Owners enable commodities to be exchanged. Commodities cannot hide from humans. Trading commodities requires relating the commodities in the sense of how much of commodity x equals how much of commodity y. Marx states that "humans are made for each other to be holders or representatives of commodities." [ ibid. 179.] Commodities have no direct use-value to their owners. Owners’ commodities have exchange-value which must be realized “before they can be realized as use-values.” [ibid. 179.] As far as the value of owned commodities, the labor expended on them only counts insofar as it is expended in a form which is useful for others.” [ibid. 180.] Since commodities values must be related to other commodities and society must accept the equal value. Exchange requires a “universal equivalent” –- money -- “the money-form is merely the reflection thrown upon a single commodity by the relations between all other commodities.” [ibid. 184.] Money is a commodity and ultimately a symbol of human labor. Closing the chapter, Marx states that "Men are henceforth related to each other in their social process of production in a purely atomistic way. ... because the products of men's labour universally take on the form of commodities." [ibid. 187.] Chapter 3: Money, or the Circulation of Commodities 1. The Measure of Values Functions of Metallic Money In chapter 3 section I Marx examines the functions of money commodities. According to Marx the main function of money is to provide commodities with the medium for the expression of their values, i.e. labor time. The function of money as a measure of value serves only in an imaginary or ideal capacity. That is, the money that performs the functions of a measure of value is only imaginary because it is society that has given the money its value. The value that is contained in one ton of iron for example, is expressed by an imaginary quantity of the money commodity, which contains the same amount of labor as the gold. Multiple Forms of Metallic Money As a measure of value and a standard of price money performs two functions. First it is the measure of value as the social incarnation of human labor and secondly it serves as a standard of price as a quantity of metal with a fixed weight. As in any case where quantities of the same denomination are to be measured the stability of the measurement is of the utmost importance. Hence the less the unit of measurement is subject to variations the better it fulfills its role. Metallic currency may only serve as a measure of value because it is itself a product of human labor. Commodities with definite prices appear in this form: a commodity A= x gold; b commodity B= y gold; c commodity C= z gold, etc where a, b,c represent definite quantities of the commodities A, B,C and x, y,z definite quantities of gold.In spite of the varieties of commodities their values become magnitudes of the same denomination, gold-magnitudes. Since these commodities are all magnitudes of gold they are comparable and interchangeable. Price Price is the money-name of the labor objectified in a commodity. Like the relative form of value in general, price expresses the value of a commodity by asserting that a given quantity of the equivalent is directly interchangeable. The price form implies both the exchangeability of commodities for money and the necessity of exchange. Gold serves as an ideal measure of value only because it has already established itself as the money commodity in the process of exchange. 2. The Means of Circulation (a) The Metamorphosis of Commodities In this section Marx further examines the paradoxical nature of the exchange of commodities. The contradictions that exist within the process of exchange provide the structure for “social metabolism”. The process of social metabolism “transfers commodities from hands in which they are non-use-values to hands in which they are use-values…” (198). Commodities can only exist as “values” for a seller and “use-values” for a buyer. In order for a commodity to be both a “value” and a “use-value” it must be produced for exchange. The process of exchange alienates the ordinary commodity when its antithesis, the “money commodity” becomes involved. During exchange, the money commodity confronts the ordinary commodity disguising the true form of the ordinary commodity. Commodities and money are at opposites spectrums, and exist as separate entities. In the process of exchange, gold or money, functions as “exchange-value” while commodities function as “use-values”. A commodity’s existence is only validated through the form of money and money is only validated through the form of a commodity. This dualistic phenomenon involving money and commodities is directly related to Marx’s concept of “use-value” and “value”. Commodity-Money-Commodity $C->M->C$Marx examines the two metamorphoses of the commodity through sale and purchase. In this process, “as far as concerns its material content, the movement is C-C, the exchange of one commodity for another, the metabolic interaction of social labor, in whose result the process itself becomes extinguished” (200). $C->M$ First metamorphosis of the commodity, or sale. In the process of sale, the value of a commodity, which is measured by socially necessary labor-time, is then measured by the universal equivalent, gold. $M->C$ The second or concluding metamorphosis of the commodity: purchase. Through the process of purchase all commodities lose their form by the universal alienator, money. “Since every commodity disappears when it becomes money it is impossible to tell from the money itself how it got into the hands of its possessor, or what article has been changed into it” (205). $M->C$ "=" $C->M$ A purchase represents a sale although they are two separate transformations. This process allows for the movement of commodities and the circulation of money. (b) The Circulation of Money The circulation of money is first initiated by the transformation of a commodity into money. The commodity is taken from its natural state and transformed into its monetary state. When this happens the commodity “falls out of circulation into consumption”. The previous commodity now in its monetary form replaces a new and different commodity continuing the circulation of money. In this process, money is the means for the movement and circulation of commodities. Money assumes the measure of value of a commodity (i.e.) the socially necessary labor-time. The repetition of this process constantly removes commodities from their starting places, taking them out of the sphere of circulation. Money circulates in the sphere and fluctuates with the sum of all the commodities that co-exist within the sphere. The price of commodities varies by three factors. “…the movement of prices, the quantity of commodities in circulation, and the velocity of circulation of money” (218). (c) Coin. The Symbol of Value Money takes the shape of a coin because of how it behaves in the sphere of circulation. Gold became the universal equivalent by the measurement of its weight in relation to commodities. This process was a job that belonged to the state. The problem with gold is that it wore down as it circulated from hand to hand. The introduction of paper money as a representation of gold arose from the state as a new circulating medium. This form of imaginary expression continues to mystify and intrigue. Marx views money as a “symbolic existence” which haunts the sphere of circulation and arbitrarily measures the product of labor. 3. Money (a) Hoarding The exchange of money is a continuous flow of sales and purchase. Marx goes on to say, “In order to be able to buy without selling, he must have previously sold without buying.” This simple illustration demonstrates the essence of hoarding. In order to potentially buy, without selling a commodity in your possession, you must have hoarded some degree of money in the past. Money becomes greatly desired due to potential purchasing power. If you have money, you can exchange this for commodities and vice versa. However, while satisfying this newly arisen fetish for gold, the hoard causes the hoarder to make personal sacrifices. (b) Means of Payment In this section Marx analyzes the relationship between debtor and creditor and exemplifies the idea of the transfer of debt. In relation to this, Marx discusses how the money-form has become a means of incremental payment for a service or purchase. He states that the “function of money as means of payment begins to spread out beyond the sphere of circulation of commodities. It becomes the universal material of contracts.” Due to fixed payments and the like, debtors are forced to hoard money in preparation for these dates.“While hoarding, as a distinct mode of acquiring riches, vanishes with the progress of civil society, the formation of reserves of the means of payment grows with that progress.” (c) World Money Countries have reserves of gold and silver for two purposes: (1) Home Circulation; and (2) External Circulation in World Markets. Marx says that it is essential for countries to hoard, as it is needed “as the medium of the home circulation and home payments, and in part out of its function of money of the world.” With all of this discussed hoarding and the aforementioned idea of hoarded money’s inability to contribute to the growth of a capitalist society, Marx states that banks are the relief to this problem.“Countries in which the bourgeois form of production is developed to a certain extent, limit the hoards concentrated in the strong rooms of the banks to the minimum required for the proper performance of their peculiar functions. Whenever these hoards are strikingly above their average level, it is, with some exceptions, an indication of stagnation in the circulation of commodities, of an interruption in the even flow of their metamorphoses.” Part Two: The Transformation of Money into Capital Chapters 4-6 connect the abstract discussion of commodities, money, and value begun in Part I with their role in the formation of class relations under capitalism. Chapter 4: The General Formula For Capital In this chapter, Marx explains what capital is and how it is produced. The form of capital is money, yet not all money is capital. In Marx’s words there is “money as money" and "money as capital" (247). For money to be converted into the form of capital, it must undergo a deliberate process based on the circulation of commodities in an exchange market. There are two different forms of commodity circulation: the direct or simple form of circulation (C-M-C) and the capital-generating form (M-C-M). Two common elements can easily be identified: the sale of a commodity (C-M) and the purchase of a commodity (M-C). Clearly the two contrast simply by their order in which the sale and purchase occur. In the first form (C-M-C), a commodity is produced in order that one may acquire the means to purchase another commodity i.e. “selling in order to buy” (247). This particular form of commodity circulation is essentially a closed system. Once a commodity has been obtained by the purchase (M-C), the commodity exits the exchange-market thereby achieving its aim as a use-value, consumed according to necessity. The money is simply expended achieving its objective as a medium of exchange between two qualitatively different commodities. The second, capital-generating form of commodity production (M-C-M) is simply an inversion of the previous form. In this instance, money purchases a commodity (M-C) in order that it be sold for money i.e. "buying in order to sell" (248). In contrast to the simple form of circulation, the capital-generating form culminates in the reflux, or return of money back to the capitalist as the desired result. Essentially money is exchanged for money (M-M). Since money is the starting point and conclusion of the capital-generating form of circulation "its determining purpose, is therefore exchange-value" (250). Consequently, use-values of commodities become negligible while the exchange-value of a commodity acquires meaning solely in the context of exchange. Therefore, in order for money to retain its form as capital it must remain in circulation otherwise it would simply be expended. Finally, we must consider the role and motivations of the capitalist in the capital-generating form of circulation. If one expends money for the sake of receiving its return, then the capitalist necessarily desires its augmentation known as surplus-value. Since capital must remain in circulation, it manifests itself as ongoing process without end. The new value (original + surplus) becomes the starting point of a new cycle for the capitalist to exchange his money for more money. Therefore, the general formula for capital becomes M-C-M' in which M'=M+ΔM or the original value of money plus some increment of money, or surplus-value. Chapter 5: Contradictions in the General Formula After distinguishing money as capital (M-C-M) from money in general, Marx raises the question of how the capitalist accrues surplus-value by buying a commodity and then selling it. The key problem is how the exchange of equivalent values can produce more value. He rejects 5 possibilities: *First: Marx recognizes that use-value changes among buyers. So, one could just sell the commodity to the buyer with the most use-value and charge more to them. However, use-value does not determine value which is what is being created as a surplus. Since money is the universal equivalent of "value", “the circulation of commodities involves a change only in the form of their values”. [ibid. 260] As above, one can exchange 2oz of gold for only 1 coat and get only 2oz of gold for that coat, so long as the socially necessary labor time remains constant. Use-value plays no role in these formulas. *Second: In principle, “commodities may be sold at prices which diverge from their values, but this divergence appears as an infringement of the laws governing the exchange of commodities”. [ibid. 261] Yet, the capitalist may possess some privilege by which s/he can sell above value. However, other capitalists have this privilege and overcharge the first capitalist who then loses his/her profit. The result is nominal price inflation which only changes the purchasing power of money, but not the real value. [ibid. 263] *Third: There could be a class of people who consume but do not produce. They pay for commodities, and thus have money, without having produced and sold some commodity beforehand. The “free” money they thus put into circulation can be used to artificially alter exchange-values. However, since the consumption class has money without producing anything, the money must come from those outside of the consumption class. (Marx gives the example of Roman taxes.) Thus this class uses others' money to buy those people's products. No actual surplus-value is being created. [ibid. 264-5] *Fourth: It may be that the capitalist overcharges a buyer as in the second rejection, but the buyer cannot recover the lost value and the capitalist will not be overcharged him/herself. However, there is still no surplus-value. If the buyer has \$40 and the capitalist a \$10 shirt, there is a total value of \$50 in circulation. If the buyer pays \$20 for the \$10 shirt and has \$20 left over, there is still a total value of \$50. The only change is distribution of wealth. [ibid. 265] *Fifth: Since the circulation of commodities is between equivalent values, any surplus-value must emerge outside of this circulation. However, outside of circulation, the value of a commodity is only the labor put into it. The producer “cannot create values which can valorize themselves.” Surplus-value thus cannot emerge outside of circulation. [ibid. 268] Conclusion: “Circulation, or the exchange of commodities, creates no value”. [ibid. 266] Only labor does. And, since nothing external to circulation creates value, something must be operating within circulation which is not circulation itself. Marx closes with the contradiction: "The transformation of money into capital has to be developed on the basis of the immanent laws of the exchange of commodities, in such a way that the starting-point is the exchange of equivalents". [ibid. 268-9] Chapter 6: The Sale and Purchase of Labour-Power Marx begins this chapter by explaining how a change in the value of money must take place in the first act of commodity circulation, M-C, where the commodity is bought, but must originate in the commodities' use-value. Such a change requires a commodity whose use-value possesses the property of being a source of value, and thus, inherently creates value through its consumption. Labour-power embodies everything mentally and physically capable by labour, but is only realized through the production of use-values. Thus, labour-power becomes the commodity that inherently creates value. However, its possessor must be the free proprietor of their labour and be willing to sell it as a commodity. The sale of labour simultaneously alienates labour-power from the means of production while allowing them to maintain ownership rights. Marx even goes on to say, “the owner of money must find the free worker available on the commodity market; and this worker . . . can dispose of his labour-power as his own commodity, and . . . is free of all objects needed for the realization of his labour-power.” [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 272. ] This realization of labour-power can occur only when labour is used in conjunction with the means of production; hence, a separation from the means of production compels individuals’ to put their labour on the market because of an inability to sell commodities. The value of labour-power is determined by the value of the means of subsistence necessary for the maintenance of the owner. These means of subsistence must be sufficient to maintain a “normal state” as a working individual, and, in order to perpetuate a presence in the marketplace, the means necessary for subsistence of the worker’s replacements. Marx continues with an example on the value of labour-power, which states that, “half a day of labour is required for the daily production of labour-power,” and that, “half a day of average social labour is present in 3 shillings, . . . the price corresponding to the value of a day’s labour-power”. [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 276. ] Subsequently, three shillings would now represent a buying and selling price for labour-power because the value of the necessary means of subsistence equals three shillings. The creation of an obscure commodity, labour-power, isn’t without its consequences; for instance, the value of labour-power is determined prior to the sale of said labour-power. Coincidentally, the owners of labour-power will not realize their use-value until after production has occurred, and in most cases production must occur for an extended period before pay is obtained. Therefore, labour is forced to produce on credit because the money-owner consumes labour prior to payment. Furthermore, labour’s use-value can only manifest through the consumption of labour-power, and this consumption becomes the commodity production process as well as the creator of surplus value. In the end Marx shifts from a transparent realm, “where everything takes place on the surface and in full view” to the reality of commodity production where, “the money-owner now strides out in front as a capitalist; the possessor of labour-power follows as his worker”. [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 280. ] Part Three: The Production of Absolute-Surplus Value In chapters 7-11, Marx elaborates his analysis of exploitation and the extraction of surplus-value. He highlights both the fundamental way in which capital seeks to increase the rate of surplus-value through lengthening the working day, as well as the variety of ways workers strive to resist this increased exploitation. Chapter 7: The Labour Process and the Valorization Process 1. The Labor Process The utilization of a worker’s labor-power is the labor process. The process changes the objects being worked upon—which was the intention of the worker at the start of the process—and creates use-values which can be called products or commodities. Labor will ultimately be objectified in the final product produced by this process.Marx points out that this production of use-values (the labor process) is the same under capitalism or any other economic system, and that the simple elements of this process are: 1. Purposeful activity—the work itself 2. An object on which work is carried out (object of labor) a. Marx calls the object of labor a raw material when it has undergone some alteration by means of a previous labor process. 3. Instruments of labor a. A thing or complex of things which the worker uses in acting on the object of labor b. Marx considers all objective conditions necessary for carrying out the labor process to be instruments of labor. The means of production are made up of the instruments and object(s) of labor. These instruments and objects are often the use-values of previous labor. A hammer (instrument) used to nail up sheetrock (object) and the sheetrock itself are both use-values (products of earlier labor) incorporated into the labor process of building a wall as well as means of producing a wall. If a use-value involved in the labor process is an object of labor, instrument of labor or product of labor depends on that use-values’ role in the process. The fact that use-values produced by prior labor can become means of production for other labor illustrates that labor itself is in part a process of consumption. Marx calls this consumption productive consumption. The persons consuming in the course of action of productive consumption are the workers whose labor-power is the force that gives the labor process its life. Capitalism and the labor process: To the capitalist labor-power is a necessary use-value or product purchased to animate the labor process. Without labor-power, the capitalist cannot be successful—his/her objects and instruments of labor would be wasted raw materials with no one to create the use-value or commodity the capitalist wishes to sale on the market. Marx points out that labor-power is a commodity which the capitalist consumes by causing the worker (who is selling the labor-power) to consume the means of production by his/her labor.Marx points out two phenomena of the capitalist’s consumption of labor-power: 1. The worker works under the control of the capitalist to whom his labor belongs. 2. The product is the property of the capitalist and not that of the worker—the person who made the product. 2. The Valorization Process The capitalist strives to produce a use-value with exchange-value (a commodity) that has a greater value than the sum of its parts (the means of production and labor-power). The process of production must therefore be understood as the synthesis of the labor process and the process of creating value. Marx uses as an example of valorization the spinning of cotton into yarn. When the capitalist purchases 10 lb. of cotton at its full value of 10 shillings, the price conveys the labor objectified in the cotton. In addition, all of the socially necessary means of production used up in the cotton’s production, which Marx represents in the wear and tear of the spindle, have a value of 2 shillings. To determine the value of the yarn, then, all of the successive processes necessary to produce the cotton, manufacture the spindle, and spin the yarn must be taken into account. The values of the means of production, the 12 shillings, are a part of the total value of the product. There are two conditions that must be met for this process to produce value. First, a use-value must have been produced. Second, the labor-time used to produce the use-value (in this case yarn) must be no more than that which is socially-necessary. Yet these factors still only amount to a portion of the total value of the yarn; labor constitutes the remainder. Since labor-power is “absorbed” by the raw material in the form of spinning, the resulting yarn “is now nothing more than a measure of the labor absorbed by the cotton.” [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 297. ] Value is added to the yarn by the labor objectified in it. The value of a day’s labor-power (six hours of labor) is 3 shillings. Assuming the spinner can turn 10 lb. of cotton into 10 lb. of yarn in six hours, the cotton contains six hours of labor, the same amount contained by 3 shillings of gold. Spinning, then, has added a value of 3 shillings to the cotton. The total value of the 10 lb. of yarn is calculated by adding all of the socially necessary means of production used up and all of the socially necessary labor absorbed. Two days of labor were absorbed by the cotton and spindle, a half a day by the process of spinning. The two and a half days of labor are represented by a piece of gold valued at 15 shillings. This is thus the price of the 10 lb. of yarn. In this process, though, the capitalist has not created surplus-value, even though value has been added to the product (10 shillings to produce the cotton, 2 shillings for the worn spindle, and 3 shillings to purchase the labor-power). The capitalist has broken even, which is not why they entered business. The capitalist realizes that the worker spins for only six hours to survive for twenty-four hours, and could remain effective for twelve hours. This increases the value created twofold. The two and a half days of labor become five, the 10 lb. of yarn becomes 20, and the value created becomes 30 shillings. The price of the 20 lb. of cotton is thus 30 shillings, but the total value of the labor and means of production is only 27 shillings. A surplus-value of three shillings has been created, and money has been transformed into capital. Everything is exchanged equally and the capitalist pays the full value for each commodity consumed. Chapter 8: Constant Capital and Variable Capital This chapter is all about Marx’s break down of value. Marx starts out by explaining that a worker adds value to the capitalist product in two different ways. The first way is by investing the workers own socially necessary labor time into making the product and the second way the workers also adds value to the product is by transferring the value of the means of production into the product. To explain this further Marx breaks down the transference of value from the machinery to the product. Marx explains this with “It is known by experience how long on the average a machine of a particular kind will last. Suppose its use-value in the labor-process to last only six days. Then, on the average, it loses each day one-sixth of its use-value, and therefore parts with one-sixth of its value to the daily product” [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 317. ] Marx then notes that “the means of production can never add more value to the product than they themselves possess independently of the process in which they assist”. [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 312. ] Marx renames the twofold purposes of a workers labor as constant capital and variable capital. Constant capital is represented by the means of production or those materials that do not undergo any quantitative alterations of vlaue. Variable capital does undergo an alteration of value. It both reproduces the equivalent of its ovn value, and also produces an excess, a surplus-value which may itself vary. To explain all of this Marx uses the example of converting cotton into yarn. So if the worker can not spin the yarn productively then he will not be able to make the cotton into yarn, which means that he can not shift the values of the cotton and spindle to the yarn. [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 312. ] Thus, the value of cotton has a particular amount of socially necessary labor, and the amount of value of cotton that it can attribute to something else is the value it has. Marx therefore highlights a characteristic of what he terms constant capital. Chapter 9: The Rate of Surplus Value 1. The Degree of Exploitation of Labor-Power Capital advanced is the sum of constant capital and variable capital. C = c + v For example \$500 (advanced capital) = \$410 (constant) + \$90 (labor) Through the process of production a surplus, s, is created. The resulting sum is C’ constituting the total of constant capital, variable capital, and surplus value. Thus C’ = (c + v) + s For example \$590 = \$410 + \$90 + \$90 (surplus) Surplus value comprises the excess of the value of the commodity produced over the capital advanced for production. Constant capital transfers only part of its value during the process of production, as the remainder of its value continues to reside in the machinery. This does not alter the calculation because the constant value incorporated in the equation is only concerned with the materials actually utilized in production. For example, if the constant capital equals \$410, it could be broken down as such \$ 312 raw materials \$ 44 auxiliary materials \$ 54 value of the machine worn away during productionc = \$ 410 Let’s say the total value of the machinery itself costs \$1054. The only value imparted to the product is the \$54 of wear, because the machinery retains \$1000 of value. If this value is entered into the equation, it must be entered on both sides. C = \$500, + \$1000 = \$1500C’ = \$590, + \$1000 = \$1590 Either considering or neglecting the value retained by the machinery results in the same difference, or surplus value, of \$90. Constant capital, c, will therefore refer solely to the value actually consumed during production. The constant capital advanced transfers all of its value to the product and therefore cannot produce surplus value. Thus we can equate constant capital with 0. The new value produced is therefore not the total value of the product, (c + v) + s or \$590, but the sum of variable capital and surplus value, v + s or \$180. The absolute quantity of surplus value is \$90. The relative quantity of surplus value, or rate of exploitation, can be determined by s/v. In our example this equates to 90/90, or % 100. 2. The Representation of the Components of the Value of the Product by Corresponding Proportional Parts of the Product Itself In order to explain the conversion of money into capital, Marx offers an example of yarn production. A 12-hour workday produces 20 lbs of yarn valued at a total of 30s. 24s of value are the result of constant capital (20 lbs of cotton at 20s, and 4s for repair of the machine). The remaining 6s includes the variable capital (the worker’s wage) and the surplus value (profit). The workers wage is 3s and 3s of surplus are produced, equaling a % 100 rate of exploitation (3s/3s). This exploitation occurs not only during the process of spinning the yarn, but during the production of all its constituents (i.e. picking the cotton, building the spindle etc.) 3. Senior’s “Last Hour” In section 3 of chapter 9, Marx condemns the economic analysis of cotton mills conducted by Professor Nassau W. Senior in his pamphlet “Letters on the Factory Act, as it affects the cotton manufacture.” The pamphlet suggest that the entirety of the surplus value is produced during the last hour of a 12 hour workday, with the remainder contributing to the wage of the worker. Marx disputes Senior’s analysis on the grounds that Senior failed to separate constant and variable capital, instead including the mill and machinery as one factor and the wages and raw material as a second factor. This failure to discern between variable and constant capital was considered by Marx to be a crucial blunder. Marx also notes that the value produced during each hour is likely to be equal, and if 11 hours were contributing to wages, with 1 hour producing net profit, that proportion should be reflected in the ratio of wages paid to profits. Senior’s calculations do not take into account the value embedded in the constant capital and takes for granted their contribution of value to the finished product, instead of acknowledging that the method of transferring that value is through the use of labor-power. 4. Surplus Produce Surplus-produce refers to the portion of the product that represents surplus-value. Like the ratio of surplus value to variable capital, the relative quantity of surplus production is determined by the ratio of surplus-produce to the part of the total product in which necessary labor time is incorporated. The sum of necessary and surplus labor constitutes the working day. Chapter 10: The Working Day 1. The Limits of the Working Day. Value of labor power is the necessary labor time required for a worker to produce an amount equal to his means of subsistence. Any amount of time worked beyond the necessary labor time is called surplus labor. The Working Day = A-------B----------C. The line AC represents the working day. AB is the necessary labor time and BC is surplus labor. Under the Capitalist system, Marx says that a working day can never be reduced to necessary labor time only. This would not profit the capitalist. Marx then describes the two factors that limit the maximum length of the working day. First, there are physical limits to labor power. A worker can only work for so long before he must rest, eat, sleep, etc. In addition to physical limitations, there are moral obstacles that limit the working day. “The worker needs time in which to satisfy his intellectual and social requirements, and the extent and the number of these requirements is conditioned by the general level of civilization.” The capitalist seeks to create the most surplus value he can by extending the working day to absorb the greatest possible amount of surplus labor. In other words, the capitalist is like any other buyer of commodities; he wants to extract the maximum use value of his commodity, which in this case is labor power. The worker who sells his labor power must be able to reproduce it every day in order to resell it. If the working day is extended too far, the capitalist may extract a greater quantity of labor power than the worker is able to restore before his next shift begins. This will cause deterioration in the health of the worker. Marx goes on to describe how the process of overwork leads to the worker being unfairly compensated for his labor power in his daily wages. He gives this example: If the average length of time a worker can live and do a reasonable amount of work is 30 years, the value of his labor power for which he is paid from day to day is: 1 /( 365 x 30) or 1 / 10,950 of its total value. If worked too hard and the value of his labor power is consumed in 10 years, he is still paid the same amount. That is 1 / 10,950 of his labor power daily instead of 1 / 3650. Therefore he is only paid 1 / 3 the value of his labor power daily, and is robbed of 2 / 3. Marx concludes this section saying that the establishment of a norm for the working day presents its self as a struggle between collective capital (class of capitalists) and collective labor (working classes). 2. The Voracious Appetite for Surplus Labor. Manufacturer and Boyar. “Capital did not invent surplus labor”, begins Marx. This section focuses of the historical tendency for the owners of the means of production to attempt to extract greater and greater amounts of surplus labor out of workers, often in an unscrupulous manner. He gives two main examples. The first is the system of the “corvée” in the Danubian provinces of precapitalist Russia. The second example is that of a fraudulent capitalist factory owner. Under the corvée system of the early 19th century, the peasant owed a specific quantity of labor to the landlord annually. This labor consisted of 12 general labor days, one day of field labor, and one day of wood carrying. This amounted to a total of 14 days a year. These days were not average work days, however. They amounted to the total time necessary to produce an average daily product. Russian leaders took such great liberties in deciding what an average daily product was that it turned out to be equal to three days of actual work. The 14 days of corvée became 42 days of required labor. Marx then gives a few more examples of ways the landlords were able to tack on more and more days to the corvée. The second main example for the strong appetite for surplus labor is that of a fraudulent factory owner that illegally extends the working day. He does this by starting work 10 or 15 minutes early or by shaving a few minutes off of the beginning and ending of breaks and lunch hours. These small additions of work add up to hours of uncompensated surplus labor over the course of a week, and weeks of surplus labor over the course of a year. 3. Branches of English Industry without Legal Limits to Exploitation In this section, Marx presents the historical debate regarding the work day and the exploitation of children. Using examples of match manufacturing and bread-making, Marx describes the issues children faced by working long hours in poor conditions at extremely young ages. In many cases, these children died or faced long-term ailments. Marx also covers similar issues of exploitation of children in both Ireland and Scotland. 4. Day-Work and Night-Work. The Shift-System Means of production (constant capital) exists to be used to produce commodities. When not in use, no surplus-labor is occurring. This unproductive time can only be seen as good for the capitalist if the time the constant capital lies dormant is used so that the worker can rest and be more productive when again using the means of production to create a commodity. Although is it ideal for the capitalist to have people who could work constantly (24 hours a day), the workers would suffer and thus profits could not continue to increase. In order to produce 24 hours a day, the capitalist realized that by instituting shift-work, he could both maximize his profits while allowing the individual worker to get his necessary rest. 5. The Struggle for a Normal Working Day. Laws for the Compulsory Extension of the Working Day, from the Middle of the Fourteenth to the End of the Seventeenth Century Ideally a capitalist would have an individual worker producing 24 hours a day with only so much rest time as the worker needed to continue to function. The worker is nothing more to the capitalist than labor-power and a means to accumulate surplus-value and so the worker’s personal time means nothing to the capitalist. Yet according to Marx, having this attitude is a double-edge sword in that if the capitalist had his way and exploited the worker to this extent, he would in essence be killing his own workforce. However the value of labor-power (the worker) includes all of the commodities that go into keeping the worker alive. In the end, it is more expensive for the capitalist to replace a worker than to keep one sustained adequately. Ultimately it is in capital’s best interest to not have an individual work 24 hours a day with only necessary breaks but to have a work day that is short enough to sustain healthy strong labor-power. Marx summarizes this process in saying, “Establishment of a normal working day is the result of centuries of struggle between the capitalist and the worker” (382). Chapter 11: The Rate and Mass of Surplus-Value Having explained the rate of surplus-value earlier in Ch. 9, Marx focuses this chapter on the mass of surplus-value. The mass of surplus-value is completely dependent on the number of workers under the capitalist's control. We know that the rate of surplus value is equal to $frac\left\{S\right\}\left\{V\right\}$ or the $frac\left\{mbox\left\{Surplus Labor Time\left\{mbox\left\{Necessary Labor Time$ or the $frac\left\{mbox\left\{Rate of Surplus Value\left\{mbox\left\{Rate of Exploitation$. To increase this rate, the capitalist simply has to work his workers longer hours. This is simply a rate dependent on the degree of exploitation of the workers. The mass of surplus-value is determined by the rate of surplus value (S/V) multiplied by the number of workers. The more exploitation that exists, the more money the capitalist will possess. This brief chapter is filled with typical Marxist theoretical situations to explain the mass of surplus-value but the core of the chapter lies in the three laws that Marx describes dealing with the rate of surplus-value. Marx explains as his first and most fundamental law that, "the mass of surplus-value produced is equal to the amount of the variable capital advanced multiplied by the rate of surplus-value". [Marx, Karl. Capital, Volume I. Trans. Ben Fowkes. London: Penguin, 1990.418] Basically, as stated above, the mass of surplus-value is totally dependent on the total numbers of workers being exploited by the same capitalist and how much exploitation is going on. The formula $S = V\left(frac\left\{s\right\}\left\{v\right\}\right) , P\left(frac\left\{a\text{'}\right\}\left\{a\right\}\right)$ comes about, where $s$ is equal to surplus value produced by the individual worker, $v$ being equal to the variable capital advanced in the purchase of an individual labor power, $V$ being the total amount of variable capital, $P$ being the value of an average labor-power, and $a\text{'}/a$ represents the degree of exploitation (surplus labor/necessary labor). [Marx, Karl. Capital, Volume I. Trans. Ben Fowkes. London: Penguin, 1990.418] Marx lets it be known that under this law, that the mass of surplus-value is not constant in nature, but that a decrease in one factor can be made up by an increase in another factor, such as variable capital dropping but the exploitation of the labor-power increasing to make up for this change. Marx's second law deals with the limitation of compensating for lacking factors. He says that the "compensation for a decrease in the number of workers employed cannot be overcome" [Marx, Karl. Capital, Volume I. Trans. Ben Fowkes. London: Penguin, 1990.419] What we see is the natural tendency of capital automatically reducing the number of workers employed by the capitalist. It is important to note the average working day in terms of compensating for fluctuating factors, because it sets an absolute limit on such compensations. Marx's third law states that "The rate of surplus-value and the value of labor time, being given, it is self-evident that the greater the variable capital, the greater would be the mass of the value produced and of the surplus-value". [Marx, Karl. Capital, Volume I. Trans. Ben Fowkes. London: Penguin, 1990. 420] These two factors explained are completely dependent on the mass of labor performed by the worker, or better yet how heavily the worker is exploited. Marx states that this law, or the numerical value of the factors involved here, are determined by how much variable capital is advanced from the capitalist. He further suggests that we now know the capitalist divides his capital into two parts; one part on the means of production (a constant factor) and the other on the living labor-power, which is heavily exploited and forms his variable capital. Part Four: The Production of Relative-Surplus Value Chapters 12-15 focus on the ways in which capital seeks to increase worker productivity as a means of increasing the rate of workers' exploitation. Chapter 12: The Concept of Relative Surplus-Value In the beginning of this chapter, Marx provides an illustration of the working day whose length is defined, and its division between necessary labor and surplus labor is marked as well. The line, AC, looks like this: A - - - - - - - - - - B - - CThe section AB represents necessary labor, and the section BC represents surplus labor. He then poses the question, “How can the production of surplus-value be increased, i.e. how can surplus labor be prolonged, without any prolongation, or independently of any prolongation, of the line AC?” [ibid. 429] . Marx proposes that it is in the best interest of the capitalist to divide the working day like this: A - - - - - - - - - B’ - B - - CThis is showing that the amount of surplus labor is increased, while the amount of necessary labor is decreased. Through this, part of the labor-time that was used by the worker for the worker is lost, and the lost time there would be used as labor-time for the benefit of the capitalist. When there is a change in the amount of necessary labor-time, and therefore an increase in surplus-value, Marx calls this relative surplus-value. (Whereas when there is an actual lengthening in the working day and surplus value is produced, this is called absolute surplus-value.)Marx then goes onto discuss what can decrease the value of labor-power. First, remember that the value of labor-power is “the labor-time necessary to produce labor-power” [ibid. 430] . With this in mind, Marx says that the value of labor-power can be decreased if there is an increase in the productivity of labor. But productivity of labor cannot be increased without there first being a change in the mode of production, i.e. there must be innovations in both the technical and social conditions of the process of labor. And when the value of labor-power falls alongside an increase in the productivity of labor, commodities become cheaper. Along with this, Marx states that, as the productivity of labor increases, so, too, does the relative surplus-value; on the other hand, when there is a decrease in the productivity of labor, the relative surplus-value decreases as well. In other words, there is a direct proportion between the two things. The perpetual drive of capital according to Marx is to increase the productivity of labor, so that commodities can become cheaper. Through this process, the worker himself becomes cheaper. The reader is reminded that the capitalist is not interested in the absolute value of a commodity; instead, he is concerned with the surplus-value that is there in it, a value that is recognized through the sale of that commodity. Marx concludes that, through the increase of the productivity of labor, the aim of capitalist production “is the shortening of that part of the working day in which the worker must work for himself, and the lengthening, thereby, of the other part of the day, in which he is free to work for nothing for the capitalist” [ibid. 438] . Chapter 13: Co-operation A group working under a capitalist does as much work as another group of the same amount of people working under the same capitalist. Their skills and shortcomings balance each other out to make groups of the same number equatable. Upon dividing up groups into smaller subgroups, changes can be noticed. Marx states, "Of the six small masters, then, one would squeeze out more than the average rate of surplus-value, another less. The inequalities would cancel out for the society as a whole, but not for the individual masters." (p 441). To simplify, among individual groups, there will be stronger, more productive groups and weaker, less productive groups. However, when reexamining the subgroups as a whole, the strong balance out the weak and balance is restored. With this information Marx defines co-operation as, "When numerous workers work together side by side in accordance with a plan, whether in the same process, or in different but connected processes." (p 443). This works well for capitalists in that, social contact brings out a natural competitive nature in people which in turn produces more commodities. Co-operation also shortens the time needed to complete a certain task. Marx says, "If the labour process is complicated, then the sheer number of the co-operators permits the apportionment of various operations to different hands, and consequently their simultaneous performance. The time necessary for the completion of the whole work is thereby shortened." (p 445). The only problem for capitalists comes with payment. It is easier for a capitalist to hire fewer people and pay them for a longer period of time than to pay many workers for a short amount of time. In essence, the amount of capital a capitalist has to spare for payment affects how many laborers he can hire at any given time. With co-operation also comes resistance. The larger a group, the more likely they are to resist conditions implemented by the capitalist and so the more the capitalist must overcome their resistance. Marx also makes the point to say, "It is not because he is a leader of industry that a man is a capitalist; on the contrary, he is a leader of industry because he is a capitalist." (p 450). Marx concludes by showing an example of co-operation that many are familiar with: the creation of the pyramids. As the food grown in the Nile valley belonged to the king, he was able to commission a large number of people to work in co-operation with one another to create the pyramids in a very short amount of time. Chapter 14: The Division of Labour and Manufacture In section I, “The Dual Origin of Manufacture” Marx identifies two ways in which manufacture originates. The first method occurs when a series of workers with different trades are brought together to work for one capitalist under the same roof, in such a way that a single product passes from one worker to the next. Under this method tradesmen find themselves making only one type of product: “so that a locksmith working for a carriage company would make locks only for carriages when he used to make locks for a variety of different products ”. [ibid. 455] . The second form occurs when a capitalist hires a number of workers, each worker making an entire product himself. Under the external circumstance of requiring a need to speed up production this method changes so that each worker is given a specific task within the making of a product”. [ibid. 456] . Isolated jobs on each commodity can start to be assigned to one worker and a division of labour can be created in this manner. In section 2, “The Specialized Worker and His Tools” Marx argues that a worker who performs only one task throughout his life will perform his job at a faster and more productive rate, forcing Capital to favor the specialized worker to the traditional craftsmen”. [ibid. 458] . In this section Marx also demonstrates that a specialized worker doing only one task can use a more specialized tool, which cannot do many jobs but can do the one job well, in a more efficient manner than a traditional craftsman using a multi-purpose tool on any specific task ”. [ibid. 460] . Marx considers this a basic element of manufacture. In Section 3, “The Two Fundamental Forms of Manufacture- Heterogeneous and Organic” Marx argues that the production of various commodities produces a hierarchy of skilled and unskilled labor. Skilled labor requires large amounts of training or skill and tends to command a higher value of labor-power, while unskilled labor, which any man can do, takes little to no training and commands a lower value of labor-power”. [ibid. 470] . Keeping these highly specialized workers focused on keeping there highly valued job skills along with keeping them divided from their trade as a whole making of one commodity further devalues there labor power to each of them. Also one item with several menial processes (each assigned to one worker) helps to divide the workers from the value of their own labor power. In section 4, “The Division of Labour in Manufacture and the Division of Labour in Society” Marx argues that the division of labor in society has existed long before capitalism. However, Marx sees the division of labour within a factory or workshop as something totally unique to the capitalist mode of production”. [ibid. 480] . While physiological and social circumstances may mediate the division of labour in society, it is the need to produce surplus value which creates the need for a division of labour within manufacture. In section 5, “The Capitalist Character of Manufacture” Marx considers the way in which a division of labour within manufacture limits the mind and education of a worker. Marx also points to the revolution of machinery as a way to increase surplus-value by increasing the productivity of each worker thereby reducing the number of unskilled workers necessary. Chapter 15: Machinery and Large-Scale Industry 1. Development of Machinery Marx explains the Development of Machinery. “The machine is a means for producing surplus-value” (Marx 492). Machines shorten the part of the working day that the worker works for his means of subsistence, in turn lengthening the work they do contributing to the capitalists Surplus Value. Marx explains the three parts of machinery. The motor mechanism, the transmitting mechanism, and the working machine. :*1. The motor mechanism powers the mechanism. Be it a steam engine, water wheel or a person’s caloric engine. :*2. The transmitting mechanism, wheels, screws, and ramps and pulleys. These are the moving parts of the machine. :*3. The working machine uses itself to sculpt whatever it was built to do. Machines do tasks that workers formerly did with hand tools, but a bit more efficiently. Workers still must run these machines and maybe even power them. This is where animal power comes into play. Marx says that if a man is found operating a machine where an animal could do just as well, “it is purely accidental that the motive power seems to be clothed in the form of a human” (Marx 497). Marx states obviously that the bigger the size of the machine, the more motive power it will need to be run. As far as the development of machinery, necessity is the mother of invention. “Men wore clothes before there were any tailors” (Marx 503). Inventors started inventing machines to complete necessary tasks, the machine making industry grew larger and workers efforts started going toward building machines. Many machines being made, spawned need for new machines. For example, the spinning machine started a need for printing and dying, and the designing of the cotton gin. Here, started the building of machines, by machines. “Without steam engines, the hydraulic press could not have been made.” Along with the press, came the mechanical lathe and an iron cutting machine. Labor assumes a material mode of existence which necessitates the replacement of human force by natural forces” (Marx 508). Human labor is often taken over by practical/natural forces, saving the laborer work time done for his means of subsistence and increasing his Surplus Value. 2. The Value Transferred by Machinery to the Product Machines have been introduced to capital. Workers now go to work, not to handle tools, but to operate machines that handle tools. These machines ‘raise the productivity of labor’ without an increase in labor expended. “Machinery, like every other component of constant capital (C), creates no new value, but yields up its own value to the product it serves to beget. Insofar as the machine has value and, as a result, transfers value to the product” (Marx 509). Since machines have high cost to make, the product that they make will have a higher value. Machine made goods have a higher value than a craft made by any given person. Machines give value to products as well as making products, the longer a machine operates and depreciates, the greater the difference is of the two. The value given to a product by means of the machine differs due to the size of the product. The value given to the product also depends on the value of the machinery, and the labor used to make the machine. As far as the use of machines, ‘if the machine costs as much labor to make it, as is saved by the use of it, labor was only displaced. Labor to make a commodity has not been lessened.’ “The productivity of the machine is therefore measured by the human labor-power it replaces” (Marx 513). In addition to this, “The use of machinery is limited by the requirement that less labor must be expended in producing the machinery than is displaced by the employment of that machinery” (Marx 515). Marx goes on explaining how machines took the place of girls and women working in mines. Women were also used to haul barges in England, “The land of machinery” (Marx 517). 4. The Factory Marx begins this section with two descriptions of the factory as a whole. “Combined co-operation of many orders of workpeople, adult and young, in tending with assiduous skill, a system of productive machines, continuously impelled by a central power” (the prime mover); on the other hand, as “a vast automaton, composed of various mechanical and intellectual organs, acting in uninterrupted concert for the production of a common object, all of them being subordinate to a self-regulated moving force.” (544-545) This twofold description shows the characteristics of the relationship between the collective body of labor power and the machine. In the first description, the workers, or collective labor power, are viewed as separate entities from the machine. In the second description, the machine is the dominant force, with the collective labor acting as mere appendages of the self operating machine. Marx uses the latter description to display the characteristics of the modern factory system under capitalism. In the factory, the tools of the worker disappear, and the worker’s skill is passed on to the machine. The division of labor and specialization of skills re-appear in the factory, only now as a more exploitative form of capitalist production. Work is still organized into co-operative groups. Work in the factory usually consists of two groups, people who are employed on the machines and those who attend to the machines. The third group, outside of the factory, is a superior class of workers, trained in the maintenance and repair of the machines. Factory work begins at childhood to ensure that a person may adapt to the systematic movements of the automated machine, therefore increasing productivity for the capitalist. Marx describes this work as being extremely exhausting to the nervous system and void of intellectual activity. Factory work robs workers of basic working conditions like clean air, light, space, and protection. Marx ends the section by asking if jourier was wrong when he called factories ‘mitigated jails’? 5. The Struggle between Worker and Machine In the beginning of this section Marx recounts the introduction of machinery and the resistance among workers that followed it. Marx does not criticize the machines themselves or technology, but the capitalist system that envelopes the machines. He states that, “It took both time and experience before workers learned to distinguish between machinery and their employment by capital, and therefore to transfer their attacks from the material instruments of production to the form of society which utilizes those instruments.” (554) Marx describes the machine as the instrument of labor for the capitalists’ material mode of existence. The machine competes with the worker, diminishing the use-value of the worker’s labor-power. Marx also points out that with the advance in technology of machines led to the substitution of less skilled work for more skilled work which ultimately led to a change in wages. During the progression of machinery the numbers of skilled workers decreased, while child labor flourished, increasing profits for the capitalist. 6. The Compensation Theory, With Regard to the Workers Displaced by Machinery In this section, Marx sets forth to illuminate the error within the compensation theory of the political economists. According to this theory, the displacement of workers by machinery will necessarily “set free” an equal stable, amount of variable capital previously used for the purchase of labor-power and remains available for the same purpose. However, Marx argues that the introduction of machinery is simply a shift of variable capital to constant capital. The capital “set free” cannot be used for compensation since the displacement of variable capital available becomes embodied in the machinery purchased [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 565] .The capital that may become available for the compensation will always be less than the total amount of capital previously used to purchase labor-power before the addition of machinery. Furthermore, the remainder of variable capital available is directed towards hiring workers with the expertise skills to operate new machinery. Therefore the conversion of the greater part of the total capital is now used as constant capital, a reduction of variable capital necessarily follows. As a result of machinery, displacedworkers are not so quickly compensated by employment in other industries but are forced into an expanding labor-market at a disadvantage and available for greater capitalist exploitation without the ability to procure the means of subsistence for survival [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 566-568] . Furthermore, Marx argues that the introduction of machinery may increase employment in other industries, yet this expansion “has nothing in common with the so-called theory of compensation” [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 570] . Greater productivity will necessarily generate an expansion of production into peripheral fields that provide raw materials. Conversely machinery introduced to industries that produce raw materials will lead to an increase in those industries that consume them. The production of greater surplus-value leads to greater wealth of the ruling classes, an increase in the labor-market, and consequently the establishment of new industries. As such Marx cites the growth of the domestic service industry equated to greater servitude by the exploited classes [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 570-575] . 7. Repulsion and Attraction of Workers Through The Development of Machine Production, Crises in the Cotton Industry The political economist apology for the displacement of workers by machinery asserts that there is a corresponding increase in employment. Marx is quick to cite the example of the silk industry in which an actual decrease of employment appears simultaneously with an increase of existing machinery. On the other hand an increase in the number of factory workers employed is the result of “the gradual annexation of neighboring branches of industry” and “the building of more factories or the extension of old factories in a given industry.” [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 576-577] Furthermore, Marx argues that an increase in factory workers is relative since the displacement of workers creates a proportionately wider gap between the increase of machinery and a proportionate decrease of labor required to operate that machinery. [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 578] The constant expansion of capitalism and ensuing technical advances leads to extension of markets until it reaches all corners of the globe thus creating cycles of economic prosperity and crisis. [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 580] Finally, the “repulsion and attraction” of workers therefore results as a cycle in which there is a constant displacement of workers by machinery which necessarily leads to increased productivity followed by a relative expansion of industry and higher employment of labor. This sequence renews itself as all components of the cycle lead to novel technological innovation for "replacing labor-power." [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 582-583] Part Five: The Production of Absolute and Relative Surplus-Value Chapters 16-18 examine how the capitalist strategies for the production of both absolute and relative surplus-value are combined and can function simultaneously. Chapter 17 Changes of Magnitude in the price of Labor-Power and in Surplus-Value The Value of Labor power, also known as wage, is the first thing that Marx begins to re-explain in the opening of the chapter stressing that it is equal to the quantity of the “necessaries of life habitually required by the average laborer.” By re-stressing the importance of this concept he is building a foundation on which he can begin to elaborate his argument on the changing price of labor. In order to make his argument, Marx states that he will leave out two certain factors of change (the expenses of labor power that differ with each mode of production and the diversity of labor power between men and women, children and adults) and that he will also be making 2 assumptions. The two assumptions made are first, the commodities are sold at their values, and second, the price of labor-power occasionally rises above its value, but never falls beneath it. Given these assumptions Marx begins to formulate his argument by first establishing the three determinants of the price of labor power. These three determinants, or circumstances as Marx calls them, are: the length of the working day, the normal intensity of labor, and the productiveness of labor. Formulating these three circumstances into different combinations of variables and constants Marx begins to clarify the changes in Magnitude in the price of labor-power. The majority of Chapter XVII is dedicated to the chief combinations of these three factors. “I. Length of the working day and Intensity of labor constant; Productiveness of labor variable.” Starting out with these assumptions Marx explains that there are three laws that determine the value of labor-power. The first of these three laws states that a working day of given amount of hours will always produce the same amount of value. This value will always be a constant, no matter the productiveness of labor, or the price of the commodity produced. The second states that the surplus-value and labor-power are negatively correlated or that when surplus-value increases a unit and value stays the same labor-power must decrease one unit also. The third of these laws is that a change in surplus-value presupposes a change in that of the labor-power. Given these three laws Marx explains how the productiveness of labor, being the variable, changes the magnitude of labor-value. Marx explains saying “a change in the magnitude of surplus-value, presupposes a movement in the value of labour-power, which movement is brought about by a variation in the productiveness of labour.” This variation in the productiveness of labor is what eventually leads to the developing change in value, which is then divided by either the laborers, through extra labor-value, or the capitalist, through extra surplus value. “II. Working-day constant; Productiveness of labor constant; Intensity of labor.” The Intensity of labor is the expenditure that the laborer puts into a commodity. The increase in the intensity of labor results in the increase of value that the labor is producing. This increase that the laborer is producing is again divided amongst the capitalist and laborer in the form of either surplus-value or an increase in the value of labor power. Though they may both increase simultaneously the addition to the labor may not be an addition if the extra payment received from his increase in intensity does not cover the wear and tear it has on the laborer. “III. Productivity and Intensity of Labor Constant; Length of Working Day Variable.”In this example it is possible to change the length of the working day by either lengthening of shortening the time spent at work. Leaving the other two variables constant, reducing the length of the work day leaves the labor-power’s value the same as it was before. This reducing of the length of the work day will reduce the surplus labor and surplus value dropping it below its value. The other option in changing the workday is to lengthen it. If the labor-power stays the same with a longer workday then the surplus-value will increase relatively and absolutely. The relative value of labor-power will fall even though it will not absolutely. With the lengthening of the workday and the nominal price staying the same, the price of labor-power possibly could fall below its value. The value is estimated to be what is produced by the worker and a longer workday will affect the production and therefore the value.It is fine to assume the other variables stay constant, but a change in the work day with the others constant will not result in the outcomes supposed here. A change in the work day by the capitalists will most definitely affect the productivity and intensity of the labor. “IV. Simultaneous Variations in the Duration, Productivity and Intensity of Labor.” In the real world it is almost never possible to isolate each of the aspects of labor. Two or even three of the variables may vary and in different aspects. One may move up while another moves down, or in the same direction. The combinations are endless, but may be characterized by the first three examples. However, Marx limits his analysis to two cases. “(1) Diminishing productivity of labor with a simultaneous lengthening of the workday.” This example is one where workers are working longer hours paying less attention or dedication on the job and productivity is in turn reduced; or productivity decreases, increasing the workday to achieve the same output. Therefore, the magnitude of these changes will continue on its path causing longer and longer workdays with lower productivity until the system can sustain no more. “(2) Increasing intensity and productivity of labor with simultaneous shortening of the working day.” Productivity and intensity are closely related and offer similar outcomes. Higher productivity and intensity will increase the workers output allowing for the workday to be shorter as they will achieve their necessary subsistence. The working day can shrink multiple times so long as the other elements live up to their sides of the bargain. The price of labor-power is affected by many things that can be broken down. The three main elements of intensity, productivity and length of workday were broken down and analyzed separately and then together. From the examples presented it is possible to see what would happen in any and all situations. Part Six: Wages Chapters 19-22 examine the ways in which capital manipulates the money wage as ways of both concealing exploitation and of extorting increased amounts of unpaid labor from workers. Chapter 19: The Transformation of the Value (and Respective Price) of Labour-Power into Wages In this chapter Marx brings into perspective how wages fit into the picture of capitalism. Marx begins by noticing how oblivious society is when it speaks "of the value of labour and call its expression in money its necessary or natural price"(675). A laborer is interested primarily in meeting his means of subsistence; therefore, he is easily exploited by the capitalist, who, is more interested in paying him as little as possible than endowing him with an equal exchange for the value he creates. As selling this labor-power is a way of ensuring the means of subsistence, laborers are willing to sell their labor power to a capitalist, for whom he will serve a specific function. Marx compares this version of surviving to that of a slave, who similarly gives functions of labor power to his master to ensure his subsistence. Though the labor of a slave would appear unpaid to most, he too is in fact ensuring his subsistence. Chapter 20: Time-Wages In the first part of his analysis of the forms wages take, Marx presents time-wages; whereby, a worker is paid a certain amount per certain period of time. An example of a time-wage is working for \$10.00/hr. The time-wage and actual cost of labor are independent of one another. The cost of labor for Marx is the average daily value of labor-power divided by the hours in the average working day. [ibid. 684] It is the actual value of a worker over a day's labor. Time-wages are the compensation workers get per unit of time. Thus, if the value of labor-power falls, the cost of labor will fall though the actual wage a worker receives, the time-wage, may not. This independence allows the capitalist to pay the worker less than the value of their labor-power while still appearing to offer fair compensation. [ibid. 686] For the worker to survive then, she must work more hours or get a second job. To both the worker and capitalist, time-wages seem to demonstrate the connection between working harder and being more successful. To Marx, this appearance only hides the fact that time-wages are ways of increasing the rate of surplus-value by reducing variable costs and the length of the working day. Chapter 21: Piece-Wages Marx explains the exploitative nature of the piece-wage system. Under this system workers are paid a pre-determined amount for each piece they produce, creating a modified form of the time-wage system. A key difference is in the fact that the piece-wage system provides an exact measure of the intensity of labor. Meaning that the capitalists’ know about how long it takes to produce one piece of finished product. Those who cannot meet these standards of production will not be allowed to keep their jobs. This system also allows for middlemen to usurp positions between the capitalists and laborers. These middlemen make their money solely from paying labor less than capitalists are actually allotting, thus, bringing about worker on worker exploitation. Logic would lead a laborer to believe that straining one’s labor power “as intensely as possible” works in one’s own interests because the more efficiently they produce the more they will be paid. Therefore, the workday will lengthen to the extent that worker’s allow and necessitate. However, prolongation in the workday requires the price of labor to fall. Marx elucidates that, “The piece-wage therefore has a tendency, while raising the wages of individuals above the average, to lower this average itself”, and “it is apparent that the piece-wage is the form of wage most appropriate to the capitalist mode of production.” Marx gives examples of the Weaving Industry around the time of the Anti-Jacobin War where "piece-wages had fallen so low that in spite of the very great lengthening of the working day, the daily wage was then lower than it had been before." So in this example we are able to see how piece-wages do nothing but decrease the value of labor and better disguise the true way the workers are exploited. [Marx, Karl. Capital, Volume I. Trans. Ben Fowkes. London: Penguin, 1990. 697-698] . Part Seven: The Process of Accumulation of Capital Chapters 23-25 explore the ways in which profits are used to recreate capitalist class relations on an ever expanding scale and the ways in which this expansion of capitalism creates periodic crises for capitalist accumulation. For Marx, these crises in accumulation are also always crises in the perpetuation of the class relations necessary for capitalist production and so are also opportunities for revolutionary change. Chapter 23: Simple Reproduction Just as a society cannot stop consuming, it cannot stop producing, either. “Every social process of production,” writes Marx, “is at the same time a process of reproduction.” [ibid. 711] This is one of Marx’s most important points, for capital must be seen as a forever developing value. Since labor power and the means of production are constantly consumed in the process of production, they must be reproduced for production to continue. Simple reproduction refers to a capitalist consuming all of the surplus value created and reinvesting the same amount of capital during each cycle. This causes production levels to remain constant. Marx pauses here to clarify two points. First, though workers are seemingly paid in money, in actuality they are paid in wages. Off the clock, in order for workers to obtain part of their means of subsistence, they must give these wages back to the capitalist class. “The transaction,” Marx writes, “is veiled by the commodity-form of the product and the money-form of the commodity.” [ibid. 713] Second, Marx points out that the capitalist must produce surplus value in order for production to continue. If surplus value is not created, and the capitalist keeps advancing capital (and consuming) from his own pocket, he will eventually go broke. Simple reproduction therefore “converts all capital into accumulated capital.” [ibid. 715] Part of the cycle of simple reproduction is the replication of class relations. Workers receive enough to keep them at work and purchase their means of subsistence. “The worker always leaves the process in the same state as he entered it – a personal source of wealth, but deprived of any means of making that wealth a reality for himself.” [ibid. 716] Since there is nothing left over after purchasing their means of subsistence, they must sell their labor power again. In this way workers remain poor and remain at work. Meanwhile, the capitalists advance capital, create surplus value, and are able to profit and reinvest. Chapter 24: The Transformation of Surplus-Value into Capital In Chapter 24, Marx explains how capitalist are able to transform surplus value into more capital. Marx begins by expounding upon the accumulation of capital, which he defines as “the employment of surplus-value as capital, or its reconversion into capital.” [ibid. 725] Marx uses the illustration of a Master yarn Spinner to demonstrate how capitalist use more money to invest in more means of production and labor-power. Thus, Marx is able to further reiterate that, “accumulation requires to the transformation of a portion of the surplus product into capital.” [ibid. 726] Marx further elaborates that the reason why surplus-value can be transformed into capital is because the surplus product “already comprises the material components of a new quantity of capital." [ibid. 727] Capitalist expansion, according to Marx, requires additional labor-power. Marx explains that the “mechanism of capitalist production” is constantly producing and reproducing a working class that depends on wages to survive, thus replenishing the capitalist need for labor-power and thereby aiding capitalist expansion. [ibid. 727] Marx states that what is true of all accumulated capital in comparison to the addition of capital made by it is that “the original capital continues to reproduce itself and to produce surplus-value alongside the newly formed capital.” [ibid. 728] Marx lists three results of the original transformation of money: (1) that the product belongs to the capitalist and not the worker; (2) that the value of this product includes, apart from the value of the capital advance; a surplus-value which costs the worker labour but the capitalist nothing, and which nonetheless becomes the legitimate property of the capitalist; (3) that the worker has retained his labour-power and can sell it anew if he finds another buyer. [ibid. 731] Marx, therefore concludes, that even in simple reproduction all capital is made into accumulated capital; despite the fact that the capital originally advanced begins to diminish when compared to the directly accumulated capital. [ibid. 734] Chapter 25, Section 3 & 4: The General Law of Capitalist Accumulation The composition of capital undergoes a qualitative change when the total social capital of a society grows, or accumulates. This accumulation presupposes an increase in productivity and efficiency in the affected industries and consequently produces a decreased need for labor in general. If productivity increases (i.e. if there are more machines that take the place of human labor) then there is less employment; for even if the total quantity of labor employed increases, it is in a “constantly diminishing proportion” to the average amount needed by capital for the valorization process. Thus accumulation creates an industrial reserve army of labor that is available for hire. Conversely, if productivity and accumulation is stagnant, or the cost-benefit ratio of machine power to labor power is unfavorable, there exists a greater need for labor to create surplus value. But this is not what a capitalist wants and is antithetical to the ethos of capitalist production.What a capitalist wants is increased productivity and thus the increased production of relative surplus value. By making fewer workers (in proportion to the total population and need for labor) work more productively, and thus put more of their labor time into producing surplus value, there is less need to employ more workers, and the superfluous workers already employed can be discarded. Marx states that this creates a division in the working class of a nation; the forcibly unemployed industrial reserve army already mentioned, versus an employed class of workers who are chronically underpaid and overworked.This affects wages in two ways. If there this a high level of industrial reserve workers in proportion to a low level of employed workers, then obviously demand for labor is low and thus wages are low. Conversely, if there are few in the industrial reserve and many people employed, thus accelerating accumulation, then demand for labor is high and wages are high. However, this upward trend in employment always reaches a critical mass when too much is produced and there are not enough consumers to absorb it, and thus products (and therefore surplus value) go to waste. Then workers are “set free,” wages drop for those still employed, and the cycle begins anew. This is the workers paradox; work harder, produce more, but get fired in the end because they produced too much. As the total social wealth of a nation grows, so does its population, and as its population suffers through the abovementioned cycle, the more people become unemployed due to their own productivity. That is essentially the absolute general law of capitalist accumulation. Part Eight: So-Called Primitive Accumulation Chapters 26-33 concern the history and origins of capitalism and of capitalist class relations. Chapter 26: The Secret of Primitive Accumulation Primitive accumulation is the accumulation of wealth that takes place prior to the capitalist era of production and provides the starting point that makes possible subsequent capitalist accumulation. This chapter describes the creation of capitalist society in terms of the creation of two classes: workers and capitalists. Marx argues that it is not just the accumulation of money that is required in order for capitalism to begin, but also the creation and accumulation of large numbers of “free” workers who are willing, able, and needing to work for a monetary wage. The “secret” of this primitive accumulation lies in the fact that it emerges from a history of systematic violence and brutality, rather than from the simple hard work and thrift of a few would-be capitalists. The “freeing” of serfs and slaves from their feudal lords at the same time “frees” them of their land and homes i.e. their means of production and means of subsistence. “So-called primitive accumulation, therefore, is nothing else than the historical process of divorcing the producer from the means of production.” [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 874-875. ] This process dates from about the sixteenth century and is far from idyllic and peaceful. The history of this primitive accumulation of both capital and workers as it was experienced in England is continued in the following chapters. Chapter 27: The Expropriation of the Agricultural Population from the Land The last third of the 15th Century marked the beginning of the rise of the capitalist mode of production and the initial creation of the working class. At this time, England served as the leading nation in pre-capitalist endeavors as the process of primitive accumulation took place via the usurpation of communal lands owned by peasants and feudal lords alike. The peasants were forced to migrate to barren, coarse lands while their former communal lands became the private property of a few soon-to-be capitalist landowners. The technological advances in the manufacture of textiles and the accompanying higher prices for wool served as the motivation for feudal lords to drive peasantry from communal property. Feudal tenure was abolished as estates were seized and farmlands were liquidated into the hands of a few landowners. Marx refers to this process as a time of "turning arable lands into sheepwalks," and thus, usurping and using the land strictly for the purpose of industry, only to later be turned into deer preserves still keeping peasants from their native soil. Similarly, the forced dissolution of the property associated with the Catholic church by the state added to the bulk of land that transferred hands. Though the peasant farmers were initially driven off the land by brute force, the ‘Bills for Inclosure of Commons’ was later sanctified by the Parliament in the 18th Century, legally enforcing the ability of landowners to claim peoples’ land as private property. No longer able to provide their own means of subsistence without arable land or property of their own, a body of wage-laborers was created in need of employment for survival. The seized, liquidated and privately owned land then provided the means for large-scale agricultural production, which provided the market and employment for the new "free and rightless" proletariat class who now depended on its industry. Chapter 28: Bloody Legislation against the Expropriated since the End of the Fifteenth Century. The Forcing Down of Wages by Act of Parliament The creation of the bourgeoisie, fueled by the enclosure of common lands that once sustained a vast peasant population, left these peasants and serfs, the new proletariat, to adapt to a new way of life. The old subsistent mode of production no longer existed leaving these people, as Marx puts it, “free and rightless”. Though at this point in history we see the beginnings of a bourgeoisie class, there is not yet an efficient capitalist mode of production to absorb the newly ‘created’ proletariat. Because it was not yet natural to sell ones labor for profit many became beggars and thieves. This is why at the end of the fifteenth century there is a sudden increase in the number of harsh and violent laws against vagabonds. From the early 1500s on, harsh punishment ranged from whipping and mortification to forced slavery and even beheading. Thomas More indicates that as many as 72,000 were put to death under the pretext of theft. Over generations, these violent laws made people begin to look “upon the requirements of that mode of production(Capitalism) as self evident natural laws”. And thus, the once self sufficient peasants and serfs were forced to accept giving their labor up as pure commodity in order to buy products they once produced themselves. Chapter 29: The Genesis of the Capitalist Farmer In this chapter Marx discusses how the class of the capitalist farmer originated. This class was made up by farmers who employed wage-laborers, and then paid rent to a landlord; they became wealthy through a relatively simple series of events. In the sixteenth century the value of precious metals decreased; the farmers could pay wage-laborers and landlords less because contracts were based on old money values. The Agricultural Revolution had already increased the productivity of their farms, and the value of agricultural products rose netting the farmer relatively huge profits. Chapter 30: Impact of the Agricultural Revolution on Industry. The Creation of a Home Market for Industrial Capital As the self supporting proletariats were forced from their rural homes to urban areas they dramatically speed up the process to full scale capitalism. Working long hours for a determined wage to be able to acquire living necessities was a completely different concept than providing for oneself. This massive change was absorbed by waiting agricultural capitalists who began purchasing and operating large scale agricultural ventures. The demands of the once farmer turn wage laborer now had to be met by the same hand they are employed by. Since there was no way to create ones means of subsistence in a large urban area the newly arrived working class provided the steady market necessary for capitalism to secure control of commerce. "And only the destruction of rural domestic industry can give the home market of a country that extension and stability which the capitalist mode of production requires." (p.911) Chapter 31: The Genesis of the Industrial Capitalist It is out of the ashes of feudalism, via widespread expropriation of the agricultural peoples that Industrial Capitalism springs. Deprived of their traditional means of subsistence, many of these displaced farmers and artisans found themselves faced with few options for survival, and therefore they themselves became capitalists. In this sense, workers parted with tangible property ownership in exchange for capital with no legal right to the property used for its production. The worker is deprived power over the land and goods that they are expected to maintain through forced, and waged-labor. The capitalists then required great labor forces to manage their newly amassed resources. To solve this problem, the capitalists then began a long process of enslaving people to provide this labor force. The implementation of government systems in England such as national debt, taxes, and the military assured the continuation of capitalist demand (915). This colonial system became the structure necessary for the execution of commercial wars to accumulate resources for the generation of capital via the exploitation of waged, and slave laborers. The employment of child labor and exploitation became common, as it was maximally profitable for the capitalist objective. The market for capital was of course then able to be controlled for the benefit of the capitalist. Even famine became a tool for capitalist generation in 1769-1770 when England bought up rice for the purpose of selling it at a disproportionately larger profit (917). The workers under the capitalist society are by definition vital proponents of the capitalist market. Encouraged to participate in the creation of debt, each worker participates in the creation of “joint-stock companies, the stock-exchange and modern bankocracy” (919). The international credit system conceals the source of its generation, the exploitation of slave and wage laborers. Taxpayers keep buying into these credit systems and paying taxes, but are not able to escape either system. In fact, for any person involved in this system they themselves cannot escape capital’s bloody roots (926). Chapter 32: The Historical Tendency of Capitalist Accumulation In this chapter, Marx is explaining the direction that Capitalist Accumulation is going, which is ultimately the downfall of capitalism through a revolution of the mass workers. Marx begins this chapter with a question, "What does the primitive accumulation of capital resolve itself into?" The answer to that question is "the dissolution of private property based on the labour of its owners, i.e expropriation of the immediate producers". [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 927] The private property of the worker is essential to establish small-scale industries, and furthermore, small-scale industry is a necessary condition for the development of social production and of the free individuality of the worker. The worker is his own boss, whether if that is through cultivating their own land, or the artisan owning the tools with which he is an accomplished performer. However, we now see that a mass expropriation of the worker lays the foundation for capitalist history. Private property is now replaced with capitalist private property, by the highest form of exploitation, and we now see the shift from the days of free labour to immigrant/alien labour. Workers are turned into proletarians and their means of labour are transformed into capital by the capitalist who is exploiting the workers on a large scale. Capitalist private property is formed from the capital mode of appropriation, which has dwindled away the once existent private property that was founded on personal labour of workers. [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 929] The nature of capitalism, though, brings about its own demise. In a system that "exlcudes co-operation, the social control and regulation of forces of nature, and the free development of the productive forces of society" it allows no room other for modes of production outside of its own. [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 927] As capitalists begin to strike down one another they eventually are faced with being expropriated by the workers through a "centralization of capitals" [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990.929] With a working class that is strong in number and fully united, this group soon becomes "incompatible with their capitalist integument" and "the expropriators are expropriated" and the we know have a situation in which "the expropriation of a few usurpers by the mass of people". [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 929] Of course, Marx stresses that the demise of capitalism does not necessarily mean the return of private property. "It does not re-establish private property, but it does indeed establish individual property on the basis of the achievements of the capitalist era: namely co-operation and the possession in common of the land and the means of production produced by labour itself." That is to say that the transformation of capitalist private property, which already sustains itself by society, into social property. Chapter 33: The Modern Theory of Colonization The chapter begins with the explanation of two different kinds of private property. The first "rests on the labour of the producer himself", while the second, "(rests) on the exploitation of the labour of others". [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 931] Marx says that the second type was a result of the first and only grows off the first type. [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 931] With the advent of colonies, capitalist nature has to take a different structure than that of "Homeland Capitalism". E.G. Wakefield said, "in the colonies capital is not a thing but a social relation between persons which is mediated through things". [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 932] The wage labourer of colonies is not willing to sell himself of his own free will, as many do back in the mother country, as land is plentiful and cheap if not free in the colonies. Capitalism has to turn the means of production from the individual producers into capital. This can be achieved by expropriation and heavy exploitation of the worker. If domination over the workers free will cannot be achieved, Marx then asks, "how did capital and wage-labour come into existence?" [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 933] This comes about through the division of workers into owners of capital and owners of labour and the workers have essentially expropriated themselves in order to accumulate capital. [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 934] This self-expropriation served as primitive accumulation and therefore the catalyst for capitalism in the colonies. The worker is socially dependent on the capitalist, by expropriating himself and selling himself to the capitalist in return for capital(through some artificial means<. Once thref>Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 934] ) and the workers continue to increase in number, the mode of production will generate capital. But it is important to realize the difference in the dependence of the worker to the mother land compared to the colony. We have seen that the capitalist practice of primitive accumulation has been prescribed for use in the colonies now and also the continual expropriation of the worker. [Marx, Karl. "Capital, Volume I". Trans. Ben Fowkes. London: Penguin, 1990. 939] Though the chapter is heavy on the establishment of colonies and the implementation of capitalist methods in the new areas, Marx makes his point fairly clear in the last paragraph of Chapter 33. He is letting the reader know that capitalist private prosperity can only exist once the private property of the individual producer has been expropriated and annihilated. References Notes External links * [http://www.marxists.org/archive/marx/works/1867-c1/index.htm "Capital, Volume I"] , by Karl Marx. * [http://www.marxists.org/archive/marx/works/1867-c1/1868-syn/index.htm "Synopsis of Capital, Volume I"] , by Friedrich Engels. * [http://www.graphicwitness.org/contemp/marxtitle.htm "Capital in Lithographs"] , by Hugo Gellert. * [http://www.eco.utexas.edu/faculty/Cleaver/357ksg.html Study Guide to "Capital, Volume I"] , by Harry Cleaver. * [http://www.duke.edu/~hardt/Capital.html Reading Notes on Marx's "Capital"] , by Michael Hardt. * [http://www.appstate.edu/~stanovskydj/marxfiles.html The MarX-Files: Resources on Karl Marx and Friedrich Engels] * [http://davidharvey.org Reading Marx’s Capital] An open course consisting of a close reading of the text of Marx's Capital Volume I in 13 two-hour video lectures with David Harvey Wikimedia Foundation. 2010. ### Look at other dictionaries: • Capital, Volume II — Capital , Volume 2, subtitled The Process of Circulation of Capital was prepared by Friedrich Engels from notes left by Karl Marx and published in 1893. It is divided into three parts [cite book|title=Capital Vol2.|isbn=0 7178 0483… …   Wikipedia • Capital, Volume III — Capital , Volume 3, subtitled The Process of Captialist Production as a Whole was prepared by Friedrich Engels from notes left by Karl Marx and published in 1894. It is in seven parts [cite book|title=Capital Vol3.|isbn=0 7178 0490… …   Wikipedia • Capital —    1) The Marxist understanding of the meaning of capital is connected to the everyday and non Marxist economist use of the term to refer to an asset owned by an individual which is capable of generating income. For Karl Marx, capital is better… …   Historical dictionary of Marxism • Capital and Interest — is a three volume work on finance published by Austrian economist Eugen von Böhm Bawerk.The first two volumes were published in the 1880s when he was teaching at the University of Innsbruck.The first volume of Capital and Interest, titled History …   Wikipedia • Capital punishment in Maryland — Capital punishment is a legal form of judicial punishment in the U.S. state of Maryland. It has been in use in the state since June 20, 1638, when two men were hanged for piracy in St. Mary s County. A total of 309 people were executed by a… …   Wikipedia • Capital Punishment (Death Penalty) —     Capital Punishment     † Catholic Encyclopedia ► Capital Punishment     The infliction by due legal process of the penalty of death as a punishment for crime.     The Latins use the word capitalis (from caput, head) to describe that which… …   Catholic encyclopedia • Capital Research Center — (CRC) is a conservative non profit organization that was founded in 1984 by Willa Johnson to study non profit organizations, with a special focus on reviving the American traditions of charity, philanthropy, and voluntarism. They also investigate …   Wikipedia • Capital.fr — Capital (mensuel) Pour les articles homonymes, voir Capital (homonymie).   …   Wikipédia en Français • Capital (Mensuel) — Pour les articles homonymes, voir Capital (homonymie).   …   Wikipédia en Français • Volume 2 (Chuck Berry album) — Volume 2 is a vinyl anthology LP audio record of hit Chuck Berry recordings, made and printed in France on the impact Records label. In the 1970s, it was available for purchase in U.S. music stores, with a small adhesive sticker on the reverse of …   Wikipedia ### Share the article and excerpts ##### Direct link Do a right-click on the link above and select “Copy Link” We are using cookies for the best presentation of our site. Continuing to use this site, you agree with this.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 14, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5288796424865723, "perplexity": 2564.324546626577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738950.61/warc/CC-MAIN-20200813014639-20200813044639-00305.warc.gz"}
https://physics.stackexchange.com/questions/464144/electrostatics-and-gauss-law
# Electrostatics and Gauss law [duplicate] How can we prove that surface integral of the electric fieldfor a point charge that is outside a gaussian surface, $$\int\mathbf E\cdot\,\mathrm d\mathbf S,$$ without actually using the concept of flux? • You can prove this with a fairly difficult intigration – Sourabh Mar 3 '19 at 14:20 • From what assumptions do you want to prove this claim? – ACuriousMind Mar 3 '19 at 14:24 • $\int E\cdot dS$ is the definition of flux. How can you prove it is zero without using its definition? – garyp Mar 3 '19 at 16:37 • But how can you even talk about flux at all without referencing its definition? I can talk about an automobile because I have a definition of "automobile" implicit in my memory. How can I talk about a "Xxfghold" without defining what a "Xxfghold" is? How can I prove flux is zero without knowing what flux is? – garyp Mar 3 '19 at 16:41 Let us start from Coulomb's law: $$\vec{\mathbf{E}}(\vec{\mathbf{r}}) = \frac{1}{4\pi\varepsilon_0}\frac{q}{r^2}\hat{\mathbf{r}}$$ where we take $$\vec{\mathbf{r}}$$ to be our coordinate, and $$\varepsilon_0$$ to be the vacuum permittivity. Of course, you should be able to see how we can express this same law in terms of the charge density, instead of the charge itself, through integrating over space: $$\vec{\mathbf{E}}(\vec{\mathbf{r}}) = \frac{1}{4\pi\varepsilon_0}\int\frac{\rho(\vec{\mathbf{r}}_0)}{||\vec{\mathbf{r}}_0-\vec{\mathbf{r}}||^2}\hat{\mathbf{r}}~\mathrm{d}^3\vec{\mathbf{r}}$$ where $$\rho$$ is the aforementioned charge density. Here, we can take as a theorem the divergence of an inverse-square vector field to be $$\vec{\nabla}\cdot\frac{1}{r^2}\vec{\mathbf{r}} = 4\pi\delta(\vec{\mathbf{r}})$$ Using this, we now can take the divergence of our charge density-defined electric field equation: $$\vec{\nabla}\cdot\vec{\mathbf{E}}(\vec{\mathbf{r}}) = \frac{1}{\varepsilon_0}\int\rho(\vec{\mathbf{r}})\delta(\vec{\mathbf{r}}_0-\vec{\mathbf{r}})~\mathrm{d}^3\vec{\mathbf{r}}$$ which is obviously $$\vec{\nabla}\cdot\vec{\mathbf{E}}(\vec{\mathbf{r}}) = \frac{\rho(\vec{\mathbf{r}})}{\varepsilon_0}$$ By the divergence theorem, we can see that this is equivalent to $$\oint \vec{\mathbf{E}}\cdot\mathrm{d}\vec{\mathbf{S}} = \frac{Q}{\varepsilon_0}$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9730048775672913, "perplexity": 305.3792556319883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151531.67/warc/CC-MAIN-20210724223025-20210725013025-00156.warc.gz"}