https://amrex-codes.github.io/IAMR/docs_html/Tutorials.html
# Tutorials These tutorials provide problem setups to help familiarize you with running IAMR. First we provide some useful pointers to relevant sections of this guide where you can get more detailed information on how IAMR works, then we give problem descriptions. There are a large number of options which can be specified from the inputs. Most of the options which can be specified from the inputs files are left to their default values in the sample calculations. For more thorough information on available options see Runtime Options. As a starting point for code changes: the initial data are specified in /Source/prob/prob_init.cpp. We have included several different subroutines for defining different initial data. The variable “probtype” set in the inputs file selects between these subroutines. You may also, of course, write your own by modifying prob_init.cpp. For information on how to set up your own problem, see Problem Setup. For AMR, the criteria used for error estimation used in tagging can be specified in the inputs file for some of the most common choices or more specialized choices can be defined in NS_error.cpp (see Tagging for Refinement Section for more information). The estimation can depend on any or all of the state variables or derived quantities, which are specified in NS_setup.cpp and defined in NS_derive.cpp. This code is a research code, and is continually being modified and improved as our needs evolve. Because the list of options is so extensive, and the updates relatively frequent, we heartily encourage you to contact us directly by opening an issue on github if you would like help modifying the code supplied here for your own calculations. There is extensive but undocumented capability. That said, we welcome your comments, suggestions, and other feedback. Again, please open an issue on github. ## Problem Descriptions Each problem is its own directory within /Tutorials. Many contain both 2D and 3D inputs files. 
Note that running a 2D inputs file requires building and using a 2D executable; similarly, 3D inputs files require a 3D executable. Dimensionality is set in the GNUmakefile.

### Embedded Boundaries:

- **DoubleShearLayer**: A fully periodic double shear layer in a constant density fluid. A blob of tracer is passively advected with the flow. Contains an embedded boundary, with AMR around the tracer and EB.
- **FlowPastCylinder**: Constant density flow around a cylinder. A blob of tracer is passively advected with the flow. inputs.2d.flow_past_cylinder-x and inputs.3d.flow_past_cylinder-x use AMR around the tracer and EB.

### Non-EB:

- **Bubble**: A falling drop in a closed box, with a density twice that of the surrounding medium. The calculation allows two levels of factor-of-2 refinement. Here the refinement criteria are vorticity and the presence of the tracer, which initially coincides with the heavy drop. Also contains an inputs file for using RZ coordinates.
- **Hotspot**: A hot bubble rising in a closed box. Evolves a temperature field and uses a low Mach number constraint in place of the incompressible one. The AMR refinement criterion is based on temperature, but only the 2D setup uses AMR by default. The 3D setup features an open top (outflow BC) and demonstrates how to allow refinement at the outflow (which is turned off in IAMR by default).
- **RayleighTaylor**: Rayleigh-Taylor instability: heavy fluid on top of light fluid with gravity. AMR refinement is based on vorticity.
- **ConvectedVortex**: Euler vortex in isentropic flow. The analytic solution is a translation of the initial conditions based on propagation speed and simulation time. There are several references, for example Spiegel, Huynh & DeBonis (2015), "A Survey of the Isentropic Euler Vortex Problem using High-Order Methods", doi:10.2514/6.2015-2444.
- **LidDrivenCavity**: The lid-driven cavity is a popular test case for incompressible, viscous flow. No-slip conditions are enforced on all walls, but the top ("lid") has a prescribed, constant velocity. Velocity and density are normalised so that changing the viscosity coefficient $$\mu$$ alters the Reynolds number according to $$Re = 1 / \mu.$$ [Reference: Ghia et al., "High-Re solutions for incompressible flow using the Navier-Stokes equations and a multigrid method", J. Comput. Phys. (1982)]
- **Poiseuille**: Simple Poiseuille flow in a square duct. The constant pressure gradient $$p_0$$ is enforced by setting the gravity parameter. The analytical solution is $$u = p_0 y (L - y) / (2 \mu).$$ We use $$p_0 = \mu = L = 1$$.
- **Euler**: A "vortex tube" in a constant density fluid in a triply periodic geometry. The refinement criteria are the presence of a tracer and the magnitude of vorticity.
- **TaylorGreen**: An unsteady viscous benchmark for which the exact solution in 2D is
  $\begin{split}u(x,y,t) &= && V_0 \sin(2\pi x) \cos(2\pi y) \exp(-2 (2\pi)^2 \nu t) \\ v(x,y,t) &= -&& V_0 \cos(2\pi x) \sin(2\pi y) \exp(-2 (2\pi)^2 \nu t) \\ p(x,y,t) &= -&& \rho_0 V_0^2 \{\cos(4 \pi x) + \cos(4 \pi y)\} \exp(-4 (2\pi)^2 \nu t) / 4\end{split}$
  In TaylorGreen/benchmarks there is a tool, ViscBench2d.cpp, that reads a plot file and compares the solution against this exact solution. This benchmark was originally derived by G. I. Taylor (Phil. Mag., Vol. 46, No. 274, pp. 671-674, 1923); Ethier & Steinman (Intl. J. Num. Meth. Fluids, Vol. 19, pp. 369-375, 1994) give the pressure field. In 3D, the problem is initialized with
  $\begin{split}u(x,y,z) &= && V_0 \sin(2\pi x) \cos(2\pi y) \cos(2\pi z) \\ v(x,y,z) &= -&& V_0 \cos(2\pi x) \sin(2\pi y) \cos(2\pi z) \\ w(x,y,z) &= && 0 \\ p(x,y,t) &= -&& \rho_0 V_0^2 \{2 + \cos(4 \pi z)\}\{\cos(4 \pi x) + \cos(4 \pi y)\} \exp(-4 (2\pi)^2 \nu t) / 16\end{split}$
- **HIT**: Homogeneous isentropic forced turbulence with constant density. This demonstrates defining a new forcing function by using a local, edited version of NS_getForce.cpp; IAMR's make system is automatically configured to select any local versions of files and ignore the corresponding versions in IAMR/Source. This problem is 3D only.
- **Particles**: Particles in a double shear layer. Uses 2 levels of refinement and fixed grids. With fixed grids, a grid file (called fixed_grids_ml here) is used to define the grids for levels >= 1.
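As a quick illustration of the Poiseuille entry above, here is a minimal sketch (plain Python, not part of IAMR; the function name is hypothetical) that evaluates the quoted analytic profile $$u = p_0 y (L - y) / (2 \mu)$$ with the default parameters $$p_0 = \mu = L = 1$$, the kind of check a benchmark tool performs against a plot file:

```python
def poiseuille_u(y, p0=1.0, mu=1.0, L=1.0):
    """Analytic streamwise velocity for Poiseuille flow across a channel of width L."""
    return p0 * y * (L - y) / (2.0 * mu)

# No-slip walls: the velocity vanishes at y = 0 and y = L.
assert poiseuille_u(0.0) == 0.0
assert poiseuille_u(1.0) == 0.0

# The profile is parabolic, so the peak sits at mid-channel:
# u_max = p0 * L**2 / (8 * mu) = 0.125 for unit parameters.
assert abs(poiseuille_u(0.5) - 0.125) < 1e-12
```

A simulation result can then be compared pointwise against this profile to measure the discretization error.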
https://gamedev.stackexchange.com/questions/30990/bejewelled-next-best-jewel-selection
# Bejewelled: next best jewel selection

Is there a game design technique I can use to completely remove "no more moves left" situations, i.e. so that the game contains no impossible scenarios? As far as I can guess, it all depends on what jewel you give the user, and where, after a jewel group of 3 or 4 dissolves. Is it possible? An always infinitely solvable Bejewelled game?

- make all the jewels blue – amb Jun 22, 2012 at 13:07
- +1 great question. There should be a fairly non-convoluted solution for this, depending on how many new gems you spawn. Jun 22, 2012 at 23:14
- @ashes999: Thank you. The only two ideas I have so far for removing illegal situations are: 1) a brute-force check, adding jewels based on the foreseen calculation; 2) introducing things like bombs or the hypercube, which interact with any jewel around them, creating a vast disruption of the board pattern. Jul 6, 2012 at 5:43

It's certainly possible to create an endless Bejeweled game. PopCap have done so themselves with the latest Bejeweled 3 (the mode is called "Zen Mode"). First of all, make sure there's at least one valid move when you first generate the board. Whenever a player makes a move, calculate the resulting board and search for valid moves. If there are none to be found, control the gems that will be spawned so as to restore a valid board. Since at least 3 gems will be removed by one move and you'll have to spawn 3 replacement gems, you can ensure that these 3 replacement gems form another valid move with the current board. Endless mode achieved. Of course it's not ideal that the new move appears with new gems, but it's a cheap way to always ensure a playable board. And since creating valid moves actually means swapping positions of gems, it won't be long before other moves become possible.
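The check this answer relies on, scanning the board after each move for at least one valid swap, can be sketched roughly as follows (plain Python; the board is a rectangular grid of gem labels, and the function names are illustrative, not from any Bejeweled codebase):

```python
from itertools import product

def has_match(board):
    """True if any row or column already contains three equal gems in a row."""
    rows, cols = len(board), len(board[0])
    for r, c in product(range(rows), range(cols)):
        if c + 2 < cols and board[r][c] == board[r][c + 1] == board[r][c + 2]:
            return True
        if r + 2 < rows and board[r][c] == board[r + 1][c] == board[r + 2][c]:
            return True
    return False

def has_valid_move(board):
    """True if some adjacent swap creates a match, i.e. the board is playable."""
    rows, cols = len(board), len(board[0])
    for r, c in product(range(rows), range(cols)):
        for dr, dc in ((0, 1), (1, 0)):                # try swapping right and down
            r2, c2 = r + dr, c + dc
            if r2 < rows and c2 < cols:
                board[r][c], board[r2][c2] = board[r2][c2], board[r][c]
                playable = has_match(board)
                board[r][c], board[r2][c2] = board[r2][c2], board[r][c]  # undo swap
                if playable:
                    return True
    return False

playable = [["A", "A", "B"],
            ["C", "C", "A"],
            ["B", "B", "C"]]
stuck = [["A", "B", "C"],          # nine distinct gems: no swap can ever match
         ["D", "E", "F"],
         ["G", "H", "I"]]
assert not has_match(playable)     # no match yet...
assert has_valid_move(playable)    # ...but one swap away
assert not has_valid_move(stuck)   # here the spawner must supply a controlled set
```

When `has_valid_move` returns False after a cascade, the spawner chooses the replacement gems (e.g. two of type A split by one of type B, as the next answer describes) instead of drawing them randomly.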
As already mentioned, bombs and other means to clear large parts of the board will add more variety to the gameplay, but they are not needed to ensure an endless mode.

Yes, this would in fact be possible. This is not a case of the halting problem, as the case here is defined, not arbitrary. Two parts must be answered: first, if a solution exists, can it be found; and second, will there always be a valid solution to find.

The first part is how to find a set of replacement tiles (gems) which would produce a playable board. This can be achieved via brute force: just check every possible replacement set until a playable one is encountered (more optimal non-brute-force methods exist as well).

The second part is to determine whether there will always be a replacement set which produces a playable board. Any set of tiles removed in a single move is a superset of sets of three tiles, so if a playable set can always be found in the minimal case of only three being removed, then a playable set exists for all possible patterns of removed tiles, since it contains the solutions for each removed set of three tiles which is a subset of the removed tiles.

In the minimal case of clearing only three tiles in a row/column, consider a replacement set containing two tiles of type A separated by a tile of type B, where A is the type of a tile above or below the cleared set of three (in the case of a column of three), or to its left or right (in the case of a row of three). This yields a move where swapping the center of these three tiles with the appropriate A tile alongside it produces a set of three. This shows that a set of tiles can always be found which produces a valid move along the column/row where the original tiles were cleared. Restricting future moves to that column or row, while a valid solution for an infinitely playable game, would not be very fun.
But using all the rules of common Bejeweled-style games, it's easy to show that there will always exist a solution which allows for moves outside of that row/column as well. Assume we drop in three A-type tiles, where A is one of the tiles above or below / left or right of the removed set of three. This produces a "bomb"-style tile which clears an area when removed. If we then drop in another replacement set of tiles which results in a match being made with that bomb, an area of tiles is cleared. This area contains a number of 3-tile subsets within other rows, which means that future moves are not necessarily limited to a single row/column.

- It only rotates 120 degrees at a time, right? So what happens if you display a board that (due to previous moves) has scoreable distributions elsewhere on the board, requiring 5 moves from a newly placed block, and the player clicks something wrong first? Jun 22, 2012 at 18:41
- Rotates 120 degrees? Bejewelled doesn't involve rotations. Are you thinking of Bejewelled Twist? Jun 22, 2012 at 18:45
- Sorry, no, I was thinking of Hexic. But with sufficiently many different gems, it would be possible to enter a situation where the engine has to generate a matching trio every single time to allow continuous play (because nothing else would score). Which might be interesting to see, but not very playable. Jun 22, 2012 at 20:55
- You would always be able to generate a replacement set of three identical tiles of the same type as a tile to the left/right or top/bottom of that set, which would create a larger set which would then be removed. This means that all tiles in the row or column of the original tile can ultimately be removed. In most Bejeweled-style games, larger sets lead to special tiles which clear areas or all of a given tile type. This could be cascaded as necessary to clear enough tiles that a playable board can be generated regardless of the initial state of the board. Jun 23, 2012 at 8:00

You have touched upon the halting problem in computer science: given a description of an arbitrary computer program, can we deduce whether it will stop at some point or run forever? There is a reason this is called a "problem". The short answer is no: you cannot guarantee that a Bejeweled game will never have any illegal moves, because guaranteeing it would take infinite computing time.

- Just FYI, the halting problem states that there are impossible problems to solve, not that all of them are. For this specific problem I think you just can't do it (or the game will be ridiculously simple, like 2 colours and a 3x3 grid, for example); there are too many possible paths in a 'normal sized' game like this. Jun 22, 2012 at 13:44
- Also FYI: the current generation of Bejeweled games (from PopCap) have an endless mode... so they seem to have solved the problem successfully :) They ensure there's always a valid move by spawning new gems that guarantee a valid move (this only applies when there's currently none available). Jun 23, 2012 at 9:11
- @bummzack: sorry to say you are wrong; in the PopCap Bejewelled (in which I take pride in saying I am the top scorer in both classic and speed), the classic version halts saying "NO MOVES LEFT" and gives you a game over. Jul 6, 2012 at 5:38
- @knight666: Well, I do not think that it is impossible; like the other post by Mathew R, he gives a good idea of using a bomb when the AI foresees that an illegal situation might arise. Jul 6, 2012 at 5:40
- @Vishnu Well, I wrote current generation, which would be Bejeweled 3, and this only applies to the endless mode (or Zen mode or whatever). I'm aware that this was not the case in the classic version... Jul 6, 2012 at 6:30
https://www.yaclass.in/p/science-state-board/class-10/thermal-physics-draft-11507/re-1849d30f-21e4-4bdc-bc4c-7c3962d42dec
### Theory:

In this lesson, we are going to learn about gases and their fundamental laws.

Gas: Gas is a state of matter that has no fixed shape and no fixed volume. Gases have a lower density than other states of matter, such as solids and liquids.

Arrangement of molecules in different states of matter

There is a wide range of space between the particles, and their kinetic energy is higher than the attractive forces between them. The particles move very fast and interact with one another, causing them to diffuse, or spread out, until they are uniformly distributed throughout the volume of the container. When more gas particles enter a container, there is significantly less space for the particles to spread out, and they become compressed.

Gas particles in a container

The particles exert more force on the interior of the container. This force is called pressure. There are several units used to express pressure. Apart from pressure, denoted in equations as P, gases have other measurable properties such as temperature (T), volume (V) and mole number (n or mol).

Gases have three important characteristic properties:
1. Gases are easy to compress,
2. Gases expand to fill their containers, and
3. Gases occupy far more space than the liquids or solids from which they form.

Since the early 17th century, the gas laws have helped scientists determine volumes, pressures, and temperatures. The gas laws consist of three primary laws:
1. Charles' law
2. Boyle's law and
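As a small worked example of the kind of relation these laws express, here is a sketch of Boyle's law (pressure times volume is constant at fixed temperature and amount of gas). The numbers are made-up illustration values, not from the lesson:

```python
# Boyle's law: P1 * V1 == P2 * V2 at constant temperature for a fixed amount of gas.
P1, V1 = 100.0, 2.0   # initial pressure (kPa) and volume (litres): example values
V2 = 1.0              # compress the gas to half its original volume

P2 = P1 * V1 / V2     # halving the volume doubles the pressure
assert P2 == 200.0
assert P1 * V1 == P2 * V2   # the product P*V is unchanged
```

This also illustrates the first characteristic property above: gases are easy to compress, and the pressure responds to the change in volume.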
http://www.noahbprince.com/doing-math/553/
# Maximizing the Product: An Explanation In my last two posts, I asked what the maximum product of integers adding up to 2017 was and then showed that the answer was $2^2\cdot 3^{671}$. It was not clear at the beginning that the largest possible product would only use copies of two consecutive integers (2 and 3). In retrospect, we can see why that should have been the case. Suppose that our product used two integers, $A$ and $B$, that were not consecutive, so that $B-A\geq 2$. Then we could replace the $A$ and $B$ with $A+1$ and $B-1$, which have the same sum but a bigger product: $$(A+1)(B-1) = (A+1)B+(A+1)\cdot (-1) = AB\underbrace{+B-A}_{\text{at least }2}-1 > AB.$$ So, if our product had used non-consecutive integers, we could have improved upon it. Therefore, since the best product cannot be improved upon, its factors can only differ by 1. If we weren’t limited to integers, we could push this idea even further. Suppose now that we want to maximize the product of real numbers that sum to 2017. I claim that we ought to use copies of a single number. To that end, consider a situation in which our product used two different numbers, $A$ and $B$. Then we could replace $A$ and $B$ with two copies of $\dfrac{A+B}{2}$ to get a larger product, because there is a theorem in mathematics called the AM-GM Inequality$^{[1]}$ that tells us that $$AB < \left(\dfrac{A+B}{2}\right)^2 \text{ if }A\neq B .$$ Let’s stop and interpret what this inequality tells us: if we were to use two different integers in our product, then we could not possibly have found the largest one. Therefore, the biggest possible product must consist of copies of a single number. Which single number should we use? Let’s call it $x$ for now, and we’ll figure out what $x$ ought to be. If all our copies of $x$ sum to 2017, then there must be $\dfrac{2017}{x}$ copies of $x$. (Ignore any issues about whether that division comes out cleanly.) The product we get is $x^{2017/x}$, or $(x^{1/x})^{2017}$. 
We can see when that product is largest by looking at the graph of the function $y=x^{1/x}$ and finding its highest point. With the help of some calculus, we see that the highest point on this graph is when $x$ is equal to the mathematical constant $e$, which is approximately equal to $2.718$. The diagram below shows the graph from above zoomed in on the hump. What this graph tells us is that if we weren’t restricted to integers, we would form the largest product by only using copies of $e$. But it also tells us how to handle the integer version of the problem as well. The graph of $y=x^{1/x}$ increased all the way up until $x=e$ and then decreased after that. So, the largest value of $x^{1/x}$ for integer values of $x$ has to happen at one of the integers on either side of $e$, $2$ and $3$. Of those two integers, we see that the graph is higher at $x=3$ than at $x=2$. So, if we can only use integers in our product, we should try to stick to copies of 3, resorting to 2s as necessary to round out the sum. This solution is not quite as simple as the last one, since it relied on some tools of calculus, whereas the last one was strictly elementary. But it is an example of a fascinating problem-solving technique: to solve a problem about one kind of numbers, we first solve it in a much larger system and then use those insights to get the answer we originally wanted. Mathematicians have learned over the years that one of their most valuable tools is this kind of audacity. $^{[1]}$ If you’ve never seen the AM-GM Inequality before, you can get a feel for it with a few examples. If $A=1$ and $B=5$, then the inequality it claims is $1\cdot 5<3^2$. If $A=3$ and $B=4$, then the inequality becomes $3\cdot 4<3.5^2$, or $12<12.25$. Both of these inequalities are true, which makes the AM-GM Inequality at least plausible. Mathematicians have proven that it always holds, no matter what $A$ and $B$ are, by using the tools of algebra and calculus.
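The argument above can be checked by brute force for small sums. This sketch (plain Python; the helper name is mine, not from the post) confirms both that $3^{1/3} > 2^{1/2}$, so 3s beat 2s, and that the maximum products for small $n$ are exactly the ones built from 2s and 3s:

```python
from functools import lru_cache

# Among integers, x**(1/x) is largest at x = 3 (since the peak of the curve is at e):
assert 3 ** (1 / 3) > 2 ** (1 / 2)

@lru_cache(maxsize=None)
def max_product(n):
    """Largest product of positive integers summing to n (a one-part sum counts as n)."""
    best = n  # the trivial "product" with a single part
    for k in range(1, n):
        best = max(best, k * max_product(n - k))
    return best

# 4 = 2+2 -> 4;  5 = 2+3 -> 6;  6 = 3+3 -> 9;  10 = 3+3+4 (or 2+2+3+3) -> 36.
assert [max_product(n) for n in (4, 5, 6, 10)] == [4, 6, 9, 36]
```

Every maximizing partition here uses only 2s and 3s (with 4 = 2+2 tying a pair of 2s), matching the pattern behind the answer $2^2 \cdot 3^{671}$ for 2017.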
http://www.physicsforums.com/showthread.php?t=442578
# Homomorphism and Subrings

by samtiro. Tags: homomorphism, subrings

P: 3

1. The problem statement, all variables and given/known data

Let f: R -> S be a homomorphism of rings and T a subring of S. Let P = { r in R | f(r) in T }. Prove P is a subring of R.

2. Relevant equations

Theorems used:
If S is a nonempty subset of R such that S is closed under multiplication and addition, then S is a subring of R.
If f : R -> S is a homomorphism of rings, then f(0_R) = 0_S (0_R is the zero element of R, and similarly for 0_S).

3. The attempt at a solution

First I showed P is nonempty. R is a ring, so 0_R belongs to R. Then f(0_R) = 0_S because f is a homomorphism and maps the zero element to the zero element (previous result). But T is a subring of S, so 0_S belongs to T; thus 0_R belongs to P and P is nonempty. (There is a theorem that says once I show P is nonempty, I just need to show closure under subtraction and multiplication to show P is a subring.)

So let x and y belong to P. Now f(x - y) = f(x) - f(y). Doesn't this have to belong to T? Both f(x) and f(y) are in T, since x and y belong to P. And because T is a subring of S, it is closed under subtraction, so f(x) - f(y) belongs to T. Then f(xy) = f(x)f(y), and a similar argument holds?

Mentor, P: 16,698: This is entirely correct. And the multiplication is indeed analogous.
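A concrete instance of the preimage construction may make the argument easier to picture. This sketch (my choice of example, not from the thread) uses the reduction map f : Z -> Z_6, f(r) = r mod 6, and the subring T = {0, 2, 4} of Z_6; the preimage P is then exactly the even integers, and the closure properties the proof establishes can be spot-checked:

```python
def f(r):
    """Ring homomorphism Z -> Z_6, reduction mod 6."""
    return r % 6

T = {0, 2, 4}  # a subring of Z_6: closed under addition and multiplication mod 6

def in_P(r):
    """Membership in P = f^{-1}(T); here P is exactly the even integers."""
    return f(r) in T

sample = range(-20, 21)
# 0 belongs to P, and P is closed under subtraction and multiplication,
# just as the general proof above shows.
assert in_P(0)
assert all(in_P(x - y) and in_P(x * y)
           for x in sample if in_P(x)
           for y in sample if in_P(y))
assert not in_P(7)  # odd integers map to {1, 3, 5}, outside T
```

Of course, the point of the proof is that this works for any homomorphism and any subring T, not just this example.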
https://www.physicsforums.com/threads/double-integral-question.401271/
# Double integral question

1. May 6, 2010

### Refraction

1. The problem statement, all variables and given/known data
Looks like I'm back with another question already. I need to change the order of integration for this double integral and then evaluate it, but I get to a point where I'm not sure what to do.

2. Relevant equations
$$\int^3_{0} \int^9_{y} \sqrt{x}\cos(x)\, dx\, dy$$

3. The attempt at a solution
With the changed order of integration it needs two integrals added together; this is what I came up with:
$$\int^3_{0} \int^x_{0} \sqrt{x}\cos(x)\, dy\, dx + \int^9_{3} \int^3_{0} \sqrt{x}\cos(x)\, dy\, dx$$
I planned to work them both out separately, but didn't get too far with the first one:
$$= \int^3_{0} \left[y\sqrt{x}\cos(x)\right]^{x}_{0} dx = \int^3_{0} x\sqrt{x}\cos(x)\, dx$$
I'm not sure if I've made a mistake getting here, but it looks like I need to integrate $$x\sqrt{x}\cos(x)$$, and there doesn't seem to be an easy way to do that at all.

2. May 6, 2010

### tiny-tim

Hi Refraction! Your change of order looks fine. The only problem is how to integrate $$x^{1/2}\cos x$$ or $$x^{3/2}\cos x$$ … I don't know any way of doing that (other than using power series).

3. May 6, 2010

### Refraction

That's what I was thinking as well; we've never done anything like that in this class before, and it's only supposed to be a small question, so I'm not sure why it's like that. The only thing I can think of is that it maybe means to change the order and just leave it like that, but then it's worded a bit strangely. Thanks anyway!

Last edited: May 6, 2010

4. May 6, 2010

### penguin007

Hi Refraction, how did you change the order? (There was a y?) For the computation of the integral, the only way I can see is with power series too...

5. May 6, 2010

### Refraction

The line was x = y in the original question; I just used it as y = x for when the order is reversed (so it's in the first half of the reversed-order integral now).

6. May 6, 2010

### penguin007

I still don't understand...

7. May 6, 2010

### Refraction

Well, the area bounded by the lines looks something like this: [diagram omitted] So with the reversed order of integration (dy dx) for the first double integral, R1, the inner integral is from y = 0 to y = x, and the outer integral is from x = 0 to x = 3.

8. May 6, 2010

### penguin007

I got it, thank you very much!

9. May 6, 2010

### The Chaz

Ah, Grasshopper. The student has become the master!

10. May 6, 2010

Woohoo!
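One way to convince yourself that the change of order in this thread is correct, even though neither form has an elementary antiderivative, is a crude numerical check. This sketch (plain Python midpoint sums; the helper name is my own) compares the original iterated integral with the swapped form derived above:

```python
import math

def midpoint(f, a, b, n=2000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

g = lambda x: math.sqrt(x) * math.cos(x)

# Original order: inner integral over x in [y, 9], outer over y in [0, 3].
original = midpoint(lambda y: midpoint(g, y, 9.0), 0.0, 3.0, 400)

# Swapped order, as in the thread:
#   int_0^3 x*sqrt(x)*cos(x) dx  +  int_3^9 3*sqrt(x)*cos(x) dx
swapped = (midpoint(lambda x: x * g(x), 0.0, 3.0)
           + midpoint(lambda x: 3.0 * g(x), 3.0, 9.0))

assert abs(original - swapped) < 1e-3  # the two orders agree numerically
```

This only verifies the region was split correctly; evaluating the result in closed form still requires power series or special functions, as the posters conclude.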
https://www.physicsforums.com/threads/maximum-speed-attained.41742/
# Maximum speed attained A spaceship ferrying workers to Moon Base I takes a straight-line path from the earth to the moon, a distance of 384,000 km. Suppose it accelerates at an acceleration 19.8 $$m/s^2$$ for the first time interval 15.9 min of the trip, then travels at constant speed until the last time interval 15.9 min, when it accelerates at -19.8 $$m/s^2$$, just coming to rest as it reaches the moon. there are three questions to this problem, but i will ask the first one first. 1.)What is the maximum speed attained? using the formula d1 = x(0) +v(0)*t + 1/2at^2 x(0) = 0 because the velocity is zero d1 = 1/2(19.8) * (15.9 *60)^2 <--- converted to secs. well that's the speed for distance #1, but since it's constant just before it reaches distance #3, shouldnt that be the maximum speed? Related Introductory Physics Homework Help News on Phys.org I believe you only need to use V(final) = V(intial) + at. Since intial is 0? then your maximum speed would be acceleration * time interval of 15.9min? So converting that to seconds gives (19.8 m/s^2) * (15.9 min * 60 s/m). is this even a valid college level question? Yes, that is first year college mechanics. dink said: I believe you only need to use V(final) = V(intial) + at. Since intial is 0? then your maximum speed would be acceleration * time interval of 15.9min? So converting that to seconds gives (19.8 m/s^2) * (15.9 min * 60 s/m). (19.8 m/s^2) * (15.9 min * 60 s/m) = 18889.2 and it's the wrong answer and i dont know what you mean by if it's a college question, but i am in college. it may seem easy because it's only been the first week of school. Your asking the maximum speed attained, which is the magnitude of the velocity. From what I gather of the problem, the ship has positive acceleration for a period of 15.9 minutes, a constant velocity, then a negative acceleration for a period of 15.9 minutes. Your equation is the distance equation which, if you look at the units, leaves you an answer in meters. 
Looking for a maximum velocity will have the units m/s. Regardless of the results the equation is most assuredly Vi = Vf + AT. Tide Homework Helper There's a serious flaw in the problem. If the rocket is travelling a straight line path to the moon then part of that acceleration is required to keep it on the straight line path to compensate for the varying angular momentum on its way to the moon. I think the problem needs to be restated! dink said: Your asking the maximum speed attained, which is the magnitude of the velocity. From what I gather of the problem, the ship has positive acceleration for a period of 15.9 minutes, a constant velocity, then a negative acceleration for a period of 15.9 minutes. Your equation is the distance equation which, if you look at the units, leaves you an answer in meters. Looking for a maximum velocity will have the units m/s. Regardless of the results the equation is most assuredly Vi = Vf + AT. answer is suppose to be in km/s. sorry, forgot to state that. so am i suppose to times it by 1000, cause the original answer of 18889.2 is in meters right? and Tide, it's a copy and paste from my homework(i didnt type it). Tide Homework Helper Whatupdoc said: and Tide, it's a copy and paste from my homework(i didnt type it). I was only suggesting that whoever made up the problem was somewhat sloppy! Chronos Gold Member The formula Dink gave is correct and 18889 is the right answer. Given, the deceleration phase is of the same magnitude and duration as the launch, there can be no other answer [unless the ship crashes]. is 18889 in meters? the answer is suppose to be in km/s. so do i convert 18889 meters to ____km/s? .....sigh..... HallsofIvy Homework Helper The formula Dink gave is correct and 18889 is the right answer. Given, the deceleration phase is of the same magnitude and duration as the launch, there can be no other answer [unless the ship crashes]. The formula is correct but not for this question! 
1.)What is the maximum speed attained? using the formula d1 = x(0) +v(0)*t + 1/2at^2 This is the formula for distance, not speed! The formula for speed is simply v(0)t. 19.8*15.9= 314.82 m/s for the maximum speed. HallsofIvy said: The formula is correct but not for this question! This is the formula for distance, not speed! The formula for speed is simply v(0)t. 19.8*15.9= 314.82 m/s for the maximum speed. 314.82 m/s is also the wrong answer Whatupdoc said: A spaceship ferrying workers to Moon Base I takes a straight-line path from the earth to the moon, a distance of 384,000 km. Suppose it accelerates at an acceleration 19.8 $$m/s^2$$ for the first time interval 15.9 min of the trip, then travels at constant speed until the last time interval 15.9 min, when it accelerates at -19.8 $$m/s^2$$, just coming to rest as it reaches the moon. there are three questions to this problem, but i will ask the first one first. 1.)What is the maximum speed attained? using the formula d1 = x(0) +v(0)*t + 1/2at^2 x(0) = 0 because the velocity is zero d1 = 1/2(19.8) * (15.9 *60)^2 <--- converted to secs. well that's the speed for distance #1, but since it's constant just before it reaches distance #3, shouldnt that be the maximum speed? vf = 19.8*15.9*60 = 18889.2m/s -> DIVIDE BY 1000!!!! TO GET KM/S. And don't tell me that its your first week of college, which is why even this simple conversion is too hard for you. Whatupdoc said: answer is suppose to be in km/s. sorry, forgot to state that. so am i suppose to times it by 1000, cause the original answer of 18889.2 is in meters right? and Tide, it's a copy and paste from my homework(i didnt type it). 18.889.2 km/s Last edited: i never took physics in high school, im just trying to learn. all of this is so new and hard. in college, alot of stuff is given to you at once and they all go so fast. classes are huge, so it's hard to ask questions(around 250 or more students in my physics class). Last edited: Chronos
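Summing up the thread, the accepted reasoning can be checked numerically. This short script is my own illustration (not posted in the thread): the maximum speed is reached at the end of the first acceleration phase, v = a·t, and the answer in km/s only needs a division by 1000.

```python
a = 19.8        # acceleration, m/s^2
t = 15.9 * 60   # duration of the acceleration phase, s (15.9 min)

v_max = a * t               # v = v0 + a*t with v0 = 0, in m/s
d_accel = 0.5 * a * t**2    # distance covered while accelerating, m

print(v_max)           # 18889.2 m/s
print(v_max / 1000)    # 18.8892 km/s, the requested units
print(d_accel / 1000)  # ~9010 km of the 384,000 km trip, so a cruise phase exists
```

The distance formula d = ½at² that started the confusion is still useful as a sanity check: two such phases (acceleration and deceleration) use only about 18,000 km, leaving most of the trip at constant speed.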
https://www.quizover.com/course/section/working-with-negative-exponents-by-openstax
# 3.6 Negative exponents Page 1 / 2 This module is from Elementary Algebra by Denny Burzynski and Wade Ellis, Jr. The basic operations with real numbers are presented in this chapter. The concept of absolute value is discussed both geometrically and symbolically. The geometric presentation offers a visual understanding of the meaning of |x|. The symbolic presentation includes a literal explanation of how to use the definition. Negative exponents are developed, using reciprocals and the rules of exponents the student has already learned. Scientific notation is also included, using unique and real-life examples.Objectives of this module: understand the concepts of reciprocals and negative exponents, be able to work with negative exponents. ## Overview • Reciprocals • Negative Exponents • Working with Negative Exponents ## Reciprocals Two real numbers are said to be reciprocals of each other if their product is 1. Every nonzero real number has exactly one reciprocal, as shown in the examples below. Zero has no reciprocal. 
$\begin{array}{ll}4\cdot \frac{1}{4}=1.\hfill & \text{This}\text{\hspace{0.17em}}\text{means}\text{\hspace{0.17em}}\text{that}\text{\hspace{0.17em}}4\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\frac{1}{4}\text{\hspace{0.17em}}\text{are}\text{\hspace{0.17em}}\text{reciprocals}.\hfill \end{array}$ $\begin{array}{ll}6\cdot \frac{1}{6}=1.\hfill & \text{Hence,}\text{\hspace{0.17em}}6\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\frac{1}{6}\text{\hspace{0.17em}}\text{are}\text{\hspace{0.17em}}\text{reciprocals}.\hfill \end{array}$ $\begin{array}{ll}-2\cdot \frac{-1}{2}=1.\hfill & \text{Hence,}\text{\hspace{0.17em}}-2\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}-\frac{1}{2}\text{\hspace{0.17em}}\text{are}\text{\hspace{0.17em}}\text{reciprocals}.\hfill \end{array}$ $\begin{array}{ll}a\cdot \frac{1}{a}=1.\hfill & \text{Hence,}\text{\hspace{0.17em}}a\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\frac{1}{a}\text{\hspace{0.17em}}\text{are}\text{\hspace{0.17em}}\text{reciprocals}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}a\ne 0.\hfill \end{array}$ $\begin{array}{ll}x\cdot \frac{1}{x}=1.\hfill & \text{Hence,}\text{\hspace{0.17em}}x\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\frac{1}{x}\text{\hspace{0.17em}}\text{are}\text{\hspace{0.17em}}\text{reciprocals}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}x\ne 0.\hfill \end{array}$ $\begin{array}{ll}{x}^{3}\cdot \frac{1}{{x}^{3}}=1.\hfill & \text{Hence,}\text{\hspace{0.17em}}{x}^{3}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\frac{1}{{x}^{3}}\text{\hspace{0.17em}}\text{are}\text{\hspace{0.17em}}\text{reciprocals}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}x\ne 0.\hfill \end{array}$ ## Negative exponents We can use the idea of reciprocals to find a meaning for negative exponents. Consider the product of ${x}^{3}$ and ${x}^{-3}$ . Assume $x\ne 0$ . 
${x}^{3}\cdot {x}^{-3}={x}^{3+\left(-3\right)}={x}^{0}=1$ Thus, since the product of ${x}^{3}$ and ${x}^{-3}$ is 1, ${x}^{3}$ and ${x}^{-3}$ must be reciprocals. We also know that ${x}^{3}\cdot \frac{1}{{x}^{3}}=1$ . (See problem 6 above.) Thus, ${x}^{3}$ and $\frac{1}{{x}^{3}}$ are also reciprocals. Then, since ${x}^{-3}$ and $\frac{1}{{x}^{3}}$ are both reciprocals of ${x}^{3}$ and a real number can have only one reciprocal, it must be that ${x}^{-3}=\frac{1}{{x}^{3}}$ . We have used $-3$ as the exponent, but the process works as well for all other negative integers. We make the following definition. If $n$ is any natural number and $x$ is any nonzero real number, then ${x}^{-n}=\frac{1}{{x}^{n}}$ ## Sample set a Write each of the following so that only positive exponents appear. ${x}^{-6}=\frac{1}{{x}^{6}}$ ${a}^{-1}=\frac{1}{{a}^{1}}=\frac{1}{a}$ ${7}^{-2}=\frac{1}{{7}^{2}}=\frac{1}{49}$ ${\left(3a\right)}^{-6}=\frac{1}{{\left(3a\right)}^{6}}$ ${\left(5x-1\right)}^{-24}=\frac{1}{{\left(5x-1\right)}^{24}}$ ${\left(k+2z\right)}^{-\left(-8\right)}={\left(k+2z\right)}^{8}$ ## Practice set a Write each of the following using only positive exponents. ${y}^{-5}$ $\frac{1}{{y}^{5}}$ ${m}^{-2}$ $\frac{1}{{m}^{2}}$ ${3}^{-2}$ $\frac{1}{9}$ ${5}^{-1}$ $\frac{1}{5}$ ${2}^{-4}$ $\frac{1}{16}$ ${\left(xy\right)}^{-4}$ $\frac{1}{{\left(xy\right)}^{4}}$ ${\left(a+2b\right)}^{-12}$ $\frac{1}{{\left(a+2b\right)}^{12}}$ ${\left(m-n\right)}^{-\left(-4\right)}$ ${\left(m-n\right)}^{4}$ ## Caution It is important to note that ${a}^{-n}$ is not necessarily a negative number. For example, $\begin{array}{ll}{3}^{-2}=\frac{1}{{3}^{2}}=\frac{1}{9}\hfill & {3}^{-2}\ne -9\hfill \end{array}$ ## Working with negative exponents The problems of Sample Set A suggest the following rule for working with exponents: ## Moving factors up and down In a fraction, a factor can be moved from the numerator to the denominator or from the denominator to the numerator by changing the sign of the exponent. 
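The definition and the moving rule can be verified numerically. The following script is my own check (not part of the module); it uses exact rational arithmetic to confirm a few of the sample problems.

```python
from fractions import Fraction

# Definition: x^-n = 1/x^n for nonzero x
assert Fraction(7) ** -2 == Fraction(1, 49)   # 7^-2 = 1/49
assert Fraction(3) ** -2 == Fraction(1, 9)    # 3^-2 = 1/9, not -9 (see the Caution)
assert Fraction(2) ** -4 == Fraction(1, 16)   # 2^-4 = 1/16

# Reciprocals: a * (1/a) = 1 for every nonzero a
assert Fraction(-2) * Fraction(-1, 2) == 1

# Moving a factor across the fraction bar flips the sign of its exponent:
# x^-2 * y^5 == y^5 / x^2, checked at x = 3, y = 2
x, y = Fraction(3), Fraction(2)
assert x**-2 * y**5 == y**5 / x**2
```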
## Sample set b Write each of the following so that only positive exponents appear. $\begin{array}{ll}{x}^{-2}{y}^{5}.\hfill & \text{The}\text{\hspace{0.17em}}factor\text{\hspace{0.17em}}{x}^{-2}\text{\hspace{0.17em}}\text{can}\text{\hspace{0.17em}}\text{be}\text{\hspace{0.17em}}\text{moved}\text{\hspace{0.17em}}\text{from}\text{\hspace{0.17em}}\text{the}\text{\hspace{0.17em}}\text{numerator}\text{\hspace{0.17em}}\text{to}\text{\hspace{0.17em}}\text{the}\hfill \\ \hfill & \text{denominator}\text{\hspace{0.17em}}\text{by}\text{\hspace{0.17em}}\text{changing}\text{\hspace{0.17em}}\text{the}\text{\hspace{0.17em}}\text{exponent}\text{\hspace{0.17em}}-2\text{\hspace{0.17em}}\text{to}\text{\hspace{0.17em}}+2.\hfill \\ {x}^{-2}{y}^{5}=\frac{{y}^{5}}{{x}^{2}}\hfill & \hfill \end{array}$ $\begin{array}{ll}{a}^{9}{b}^{-3}.\hfill & \text{The}\text{\hspace{0.17em}}factor\text{\hspace{0.17em}}{b}^{-3}\text{\hspace{0.17em}}\text{can}\text{\hspace{0.17em}}\text{be}\text{\hspace{0.17em}}\text{moved}\text{\hspace{0.17em}}\text{from}\text{\hspace{0.17em}}\text{the}\text{\hspace{0.17em}}\text{numerator}\text{\hspace{0.17em}}\text{to}\text{\hspace{0.17em}}\text{the}\hfill \\ \hfill & \text{denominator}\text{\hspace{0.17em}}\text{by}\text{\hspace{0.17em}}\text{changing}\text{\hspace{0.17em}}\text{the}\text{\hspace{0.17em}}\text{exponent}\text{\hspace{0.17em}}-3\text{\hspace{0.17em}}\text{to}\text{\hspace{0.17em}}+3.\hfill \\ {a}^{9}{b}^{-3}=\frac{{a}^{9}}{{b}^{3}}\hfill & \hfill \end{array}$ $\begin{array}{ll}\frac{{a}^{4}{b}^{2}}{{c}^{-6}}.\hfill & \text{This}\text{\hspace{0.17em}}\text{fraction}\text{\hspace{0.17em}}\text{can}\text{\hspace{0.17em}}\text{be}\text{\hspace{0.17em}}\text{written}\text{\hspace{0.17em}}\text{without}\text{\hspace{0.17em}}\text{any}\text{\hspace{0.17em}}\text{negative}\text{\hspace{0.17em}}\text{exponents}\hfill \\ \hfill & 
\text{by}\text{\hspace{0.17em}}\text{moving}\text{\hspace{0.17em}}\text{the}\text{\hspace{0.17em}}factor\text{\hspace{0.17em}}{c}^{-6}\text{\hspace{0.17em}}\text{into}\text{\hspace{0.17em}}\text{the}\text{\hspace{0.17em}}\text{numerator}\text{.}\hfill \\ \hfill & \text{We}\text{\hspace{0.17em}}\text{must}\text{\hspace{0.17em}}\text{change}\text{\hspace{0.17em}}\text{the}\text{\hspace{0.17em}}-6\text{\hspace{0.17em}}\text{to}\text{\hspace{0.17em}}+6\text{\hspace{0.17em}}\text{to}\text{\hspace{0.17em}}\text{make}\text{\hspace{0.17em}}\text{the}\text{\hspace{0.17em}}\text{move}\text{\hspace{0.17em}}\text{legitimate}\text{.}\hfill \\ \frac{{a}^{4}{b}^{2}}{{c}^{-6}}={a}^{4}{b}^{2}{c}^{6}\hfill & \hfill \end{array}$
http://math.stackexchange.com/questions/165612/generalized-nakayama-over-a-local-ring-with-an-almost-nilpotent-ideal
# Generalized Nakayama over a local ring with an almost nilpotent ideal Let $(A,\mathfrak{m})$ be a local ring. Let us call $\mathfrak{m}$ almost nilpotent if for every sequence $a_1,a_2,\dotsc$ in $\mathfrak{m}$ there is some $n \geq 1$ such that $a_1 \cdot \dotsc \cdot a_n = 0$. Let $M$ be an $A$-module with $M = \mathfrak{m} M$. I would like to prove $M = 0$. This is an exercise in Lenstra's Galois Theory for Schemes I've been struggling with quite some time. If $M$ is finitely generated, it is trivial (Nakayama). Now let's say $M$ is countably generated, by $m_1,m_2,m_3,\dotsc$. By elimination, we may then assume that $m_i \in \langle m_{i+1},m_{i+2},\dotsc \rangle$. If we had $m_i \in \langle m_{i+1} \rangle$, it would be easy to conclude: Choose $a_i \in \mathfrak{m}$ with $m_i = a_i m_{i+1}$. Now apply the assumption to the sequence $(a_i)$, this shows $m_1 = 0$. Since we could choose $m_1$ arbitrary, $M=0$. However, in general case, the equations become quite horrible and sums, not just products, are involved. If you draw a tree representing the linear combinations, I know that every path must end in a zero eventually, but not that the whole tree must end eventually. My gut feeling is that there may be universal counterexamples ... - You possibly already know this, but for the benefit of other readers almost nilpotence is also known as T-nilpotence (T for transfinite). It's an important condition in the discussion of perfect rings. –  rschwieb Jul 2 '12 at 11:03 I didn't know that, thank you. I already wondered why "almost nilpotent" didn't give appropriate search results. –  Martin Brandenburg Jul 2 '12 at 17:54 Fantastic :) I'm glad to be alerted to this connection with Nakayama's lemma. I never use T-nilpotence, I just know that a ring is right perfect iff $R/rad(R)$ is semisimple and $rad(R)$ is right T-nilpotent. So here we are talking about perfect local ring. 
–  rschwieb Jul 2 '12 at 17:56 In the tree that represents the linear combinations: cut off every branch as soon as the corresponding product of coefficients is zero. As the tree splits only finitely in every node, the tree must by König's Lemma be finite if $\mathfrak{m}$ is almost nilpotent.
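For readers who want the pruning argument spelled out, here is a slightly expanded paraphrase of the answer above (my own wording, not part of the original post):

```latex
Write $m = \sum_{i} a_i m_i$ with $a_i \in \mathfrak{m}$ and $m_i \in M$,
using $M = \mathfrak{m}M$, and expand each $m_i$ the same way, recursively.
This produces a finitely branching tree in which a node at depth $n$ carries
a coefficient product $a_{i_1} a_{i_2} \cdots a_{i_n}$. Prune each branch at
the first depth where its product vanishes; almost nilpotence guarantees that
every branch is eventually pruned, so the pruned tree has no infinite path.
Since every node has only finitely many children, K\"onig's Lemma forces the
whole tree to be finite. Hence $m$ is a finite sum of terms whose coefficient
products are all zero, and $m = 0$.
```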
https://www.physicsforums.com/threads/best-grocery-items.415176/
# Best Grocery Items? 1. Jul 10, 2010 ### Leptos What are some good brands for pre-made meals? I know they're not too healthy, but I'm sure some are better than others. I have the money to buy expensive grocery products so I want to keep some food in the fridge/freezer/cabinets that don't require any cooking. I normally eat out for at least 2 meals of the day(I typically eat breakfast, 1 big lunch, 1 medium lunch, and a tiny dinner) and although I haven't been eating too well, I've actually lost weight since I started college. I live with my parents and I usually spend an average of $30 a day on eating outside food. Anyway, what are some good grocery items I can pick up that don't need to be cooked? It's fine if they're a little bit on the expensive side. I know grocery stores sell things like cooked chicken, but what else is there? 2. Jul 10, 2010 ### TheStatutoryApe You mean microwavable? Or ready to eat? 3. Jul 10, 2010 ### Leptos Microwavable. Or anything that can be prepared in a microwave. Basically anything where I don't have to touch a stove/oven. Snack food suggestions are also more than welcome. I've often had meals where I eat tortilla scoops with Vienna sausage, a fruit, and yogurt but at least it's not as bad as some Chinese food options. I also eat Japanese takeout quite often, but not everyday since it can easily cost over$20 per meal. I feel like I'm not eating very nutritious foods, but for some reason I feel perfectly healthy and my body functions as well as it did when I ate completely raw foods. I suppose my best eating habit is that I eat around 3-5 servings of fruit daily and at least 2-3 Activia yogurts daily, but as long as it doesn't negatively affect my studying, I'll eat anything. Last edited: Jul 10, 2010 4. Jul 10, 2010 ### TheStatutoryApe I'm rather partial to Marie Calender's frozen pot pies. Their meals aren't bad either, I often take one of their frozen lasagnas to work with me. 
If you have an oven Digiorno pizzas can be fairly in expensive, I often see them on sale for 4-5 dollars. Unfortunately I think that they have stopped their spinach and garlic pizzas and everything else but plain cheese has crappy processed meat on it. They also have small microwavable pizzas but I have never had those before. For snacks lately I have been eating celery and peanut butter and tortilla chips with salsa. I also have been keeping a lot of cereal on hand for a quick easy meal. I've been meaning to start buying yogurt but never get to it, I have to carry all my groceries on the bus with me. Personally I do not mind cooking so I try to steer clear of premade foods. Mostly I eat frozen meals for work and there are not many that are very good. 5. Jul 10, 2010 ### Topher925 I'm a big fan of oatmeal. Cooks in 1 minute and all you have to do is add a little bit of fruit and soy milk for a healthy breakfast. Veggie burgers also are also good. Often I'll also go and buy some frozen vegetables and a bottle of some kind of chinese sauce (like general Tso's) and through the veggies in a steamer and add some sauce for a nearly instant healthy meal. 6. Jul 10, 2010 ### TubbaBlubba Oats, oats, oats all the way, bro. Every nutritient you need save for Vitamin C. 7. Jul 10, 2010 ### lisab Staff Emeritus If money isn't an issue, I recommend Kashi. Fantastic and somewhat exotic dishes, no preservatives, additives, or "flavor enhancers". But \$pendy. Oh and they frequently aren't with the other frozen foods - they're so special they won't even socialize with lower brands. Look in the frozen organic section. 8. Jul 10, 2010 ### Staff: Mentor Michelina's are very tasty and not expensive. Boston Market has some nice frozen dinners. My all time favorites are Banquet. For under a dollar each, they have some of the tastiest frozen dinners. (except their new mexican dinners with soy are not that good). My favorite is the spaghetti and meatballs. 9. 
Jul 10, 2010 ### Andre Hmmm not a new member showing up this time, having linx to perfect solutions? 10. Jul 10, 2010 ### Leptos I looked up the reviews for it and it seems nothing short of excellent. I'll have to find out the best places to buy this stuff. If spambots and etc. are a problem, perhaps the site staff should implement a new user filtration section(one where new users must post and be verified by staff before being allowed to post in other sections). This may seem rather obvious for large forums, but as you can see, PF is a large forum without a new user filter. 11. Jul 10, 2010 ### TubbaBlubba I know the JREF forum don't allow you to post links with less than 15 posts. 12. Jul 10, 2010 ### Staff: Mentor If you want high end frozen dinners, Michael Angelos is excellent. Their eggplant parmesan tastes like my home made. Trader Joes (if you have one in your area), and Amy's if you are vegetarian. 13. Jul 10, 2010 ### TheStatutoryApe I've had their dinners a couple times. Definitely very good though a bit too pricey for me. 14. Jul 10, 2010 ### Staff: Mentor For me too. That 98 cent Banquet spaghetti and meatballs is better than a lot of restaurants. People equate cheap with bad and expensive with better, there are exceptions. I think if I had to choose a last meal before I die, that would definitely be on the plate. 15. Jul 10, 2010 ### Leptos The local supermarket has Kashi foods, but it doesn't have any of the other stuff you mentioned, at least, not on its online directory. 16. Jul 10, 2010 ### Staff: Mentor Trader Joe's is not available outside of it's own chain, but Amy's is everywhere, and I have found Micheal Angelo's at every store I've been to. I've not tried the Kashi frozen dinners, their cereal was disapointing (goes stale almost overnight) so I haven't been tempted to try anything else by them. But if lisab says it's good, I'd trust her, (but beware, she eats beans in chili). <Runs and hides> :tongue2: 17. 
Jul 11, 2010 ### BobG I like the Marie Callender's frozen dinners, especially the pot pies. Bertolli's is great, too. You do have to cook them on a stove, but you basically dump the bag in a skillet and heat them up for about 10 minutes. 18. Jul 11, 2010 ### KalamMekhar I hear Ramen noodles are pretty good. Cheap as heck too, I remember getting 5 24packs whenever I went to sams club. 19. Jul 13, 2010 ### lisab Staff Emeritus <sneaks up to Evo's door, leaves a can of beans, rings bell, runs away fast> 20. Jul 13, 2010
https://icme.hpc.msstate.edu/mediawiki/index.php/Surface_formation_energy_calculation_for_fcc_(111),_(110)_and_(100)
# Surface formation energy calculation for fcc (111), (110) and (100)

Author(s): Laalitha Liyanage

## Contents

By using the codes provided here you accept the Mississippi State University's license agreement. Please read the agreement carefully before usage.

## Overview

To calculate surface formation energy using density functional theory, one should do a highly accurate bulk calculation (accurate to 1 meV) and obtain the bulk energy per atom (Ebulk). Then the simulation box should be extended in the required direction to make a surface. This is equivalent to placing a vacuum over the surface. Due to periodic boundary conditions, two identical surfaces will be simulated by this simulation box. Therefore we only consider half of the energy required to make such a surface as the surface formation energy.

## Guidelines for high accuracy bulk calculation

In VASP the global precision control tag is called PREC. By setting PREC to 'Accurate' the accuracy of the calculation can be increased dramatically. Other options that control the precision in VASP include:

• ENCUT - Controls the completeness of the basis set. Do a convergence test to determine the optimal value for your calculation. By default it will be set to the ENMAX parameter of the pseudo-potential file POTCAR.
• KPOINT grid - Always do a k-point convergence test to determine the best k-point grid. Since the simulation box is elongated in one direction, only one k-point is needed in the direction perpendicular to the surface. For the other two directions, k-point convergence must be done.

## Guidelines for accurate surface energy calculation

The system should be relaxed (ISIF=2) with ISMEAR = 1 (Methfessel-Paxton). At the end of the relaxation run VASP will generate new positions in the CONTCAR file. Copy the CONTCAR as POSCAR. Then run a static calculation (no relaxations, NSW = 0) with the tetrahedron method (ISMEAR = -5). At the end of the static run the total energy (E0) of the system will be accurately determined.
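As an illustration of the k-point guidance above, a KPOINTS file for a slab elongated along z might look like the following. The 12×12 in-plane mesh is only a placeholder — the converged value must come from your own convergence test; the single subdivision along z reflects the vacuum direction.

```
Slab k-mesh (illustrative values)
0
Gamma
12 12 1
0 0 0
```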
To do an accurate surface energy calculation using VASP, one has to consider several variables. To make a simulation box (POSCAR) with a surface, a vacuum needs to be inserted. For convergence the following should be considered:

• The number of atomic layers
• The height of the vacuum

Using the following script you can generate fcc surfaces (100), (110) and (111) with your choice of the number of atomic layers and the height of the vacuum.

Use the following equation to calculate the surface formation energy:

$\gamma = \frac{E_{surf} - N\epsilon}{2A}$

Where Esurf is the total energy of the simulation box with the surface, N is the number of atoms in the simulation box, $\epsilon$ is the cohesive energy per atom of the bulk structure, and A is the area of the surface. The factor of 2 accounts for the two identical surfaces created by periodic boundary conditions.

## Relaxation

For relaxation (geometric or ionic optimizations) always use ISMEAR = 1 and SIGMA = 0.1, or ISMEAR = 2 and SIGMA = 0.2. It is always recommended to do ionic relaxations at fixed volumes and plot the energy vs volume graph to determine the equilibrium volume. To get accurate energies, obtain the structure after relaxation and run a static calculation (no relaxations, NSW = 0) with ISMEAR = -5.

• Proposed method to relax and get accurate energies:
The system should be relaxed (ISIF=2) with ISMEAR = 1 (Methfessel-Paxton). At the end of the relaxation run VASP will generate new positions in the CONTCAR file. Copy the CONTCAR as POSCAR. Then run a static calculation (no relaxations, NSW = 0) with the tetrahedron method (ISMEAR = -5). Example INCAR files are given below.

### INCAR file for relaxation

LWAVE = .FALSE.
LCHARG = .FALSE.
LREAL = Auto
ISMEAR = 1
ENCUT = 240.3
EDIFF = 1e-6
NSW=100
ISIF=2
IBRION=2

### INCAR file for static calculation

LWAVE = .FALSE.
LCHARG = .FALSE.
LREAL = Auto
ISMEAR = -5
ENCUT = 240.3
EDIFF = 1e-6
#NSW=100
ISIF=2
IBRION=2

## FCC surface generation script

The following python script allows you to generate super-cells with surfaces (111), (110) and (100).
The input arguments are: the equilibrium lattice parameter of the fcc bulk structure (a = lattice constant in angstroms), the type of surface (surf = 100, 110 or 111), the length of the vacuum (vacuum = length in angstroms) to be inserted above the surface of the super-cell, the periodicity of the super-cell (nx, ny, nz = integers), and whether or not to include an adatom (adatom = 1/0; true or false).

```python
#!/usr/bin/env python
# Purpose: Generate FCC (111), (110), and (100) surface supercells as POSCAR files
# Author: Sungho Kim and Laalitha Liyanage
import sys
import math

usage = """
Usage: ./gen_fcc_surface.py a surf [vacuum nx ny nz adatom]

Mandatory arguments
-------------------
a        - equilibrium lattice constant
surf     - type of surface: 100, 110 or 111

Optional arguments
-------------------
vacuum   - length of vacuum; DEFAULT = 8.0 angstroms
nx,ny,nz - periodicity of supercell; DEFAULT (1,1,1)
adatom   - 1/0 (True/False); DEFAULT = 0 (False)
"""

# Default setting
vacuum = 8.0

# -------------------------- Surface (100) --------------------------
def gen_data_for_fcc(a, nx=2, ny=2, nz=4, adatom=0):
    """Generate an FCC (100) slab with lattice constant a; vacuum along z."""
    xa = []; ya = []; za = []
    bx, by, bz = a * nx, a * ny, a * nz + vacuum
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                # the four atoms of the conventional FCC cell
                xa.append(i * a);         ya.append(j * a);         za.append(k * a)
                xa.append(i * a);         ya.append(a / 2 + j * a); za.append(a / 2 + k * a)
                xa.append(a / 2 + i * a); ya.append(j * a);         za.append(a / 2 + k * a)
                xa.append(a / 2 + i * a); ya.append(a / 2 + j * a); za.append(k * a)
    if adatom:
        # single adatom centered above the top surface
        xa.append(bx / 2.); ya.append(by / 2.); za.append(nz * a)
    return xa, ya, za, bx, by, bz

# -------------------------- Surface (110) --------------------------
def gen_data_for_110_fcc(a, nx=4, ny=2, nz=1, adatom=0):
    """Generate an FCC slab oriented 110:x, 112:y, 111:z; vacuum along x."""
    xa = []; ya = []; za = []
    ax = a * math.sqrt(2) / 2
    ay = a * math.sqrt(6) / 2
    az = a * math.sqrt(3)
    x2 = math.sqrt(2) / 4. * a
    y2 = math.sqrt(6) / 4. * a
    y3 = math.sqrt(6) / 6. * a
    y4 = math.sqrt(6) * 5. / 12. * a
    y5 = math.sqrt(6) * 2. / 6. * a
    y6 = math.sqrt(6) / 12. * a
    z3 = math.sqrt(3) / 3. * a
    z5 = math.sqrt(3) * 2. / 3. * a
    bx, by, bz = ax * nx + vacuum, ay * ny, az * nz
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                xa.append(i * ax);      ya.append(j * ay);      za.append(k * az)
                xa.append(x2 + i * ax); ya.append(y2 + j * ay); za.append(k * az)
                xa.append(i * ax);      ya.append(y3 + j * ay); za.append(z3 + k * az)
                xa.append(x2 + i * ax); ya.append(y4 + j * ay); za.append(z3 + k * az)
                xa.append(i * ax);      ya.append(y5 + j * ay); za.append(z5 + k * az)
                xa.append(x2 + i * ax); ya.append(y6 + j * ay); za.append(z5 + k * az)
    if adatom:
        xa.append(nx * ax); ya.append(by / 2.); za.append(bz / 2.)
    return xa, ya, za, bx, by, bz

# -------------------------- Surface (111) --------------------------
def gen_data_for_111_fcc(a, nx=2, ny=2, nz=4, adatom=0):
    """Generate an FCC slab oriented 110:x, 112:y, 111:z; vacuum along z."""
    xa = []; ya = []; za = []
    ax = a * math.sqrt(2) / 2
    ay = a * math.sqrt(6) / 2
    az = a * math.sqrt(3)
    x2 = math.sqrt(2) / 4 * a
    y2 = math.sqrt(6) / 4 * a
    y3 = math.sqrt(6) / 6 * a
    y4 = math.sqrt(6) * 5 / 12 * a
    y5 = math.sqrt(6) * 2 / 6 * a
    y6 = math.sqrt(6) / 12 * a
    bx, by, bz = ax * nx, ay * ny, az * nz + vacuum
    for i in range(nx):
        for j in range(ny):
            layer = 0
            for k in range(nz):
                # ABC stacking: two atoms per (111) layer, three layers per period
                xa.append(i * ax);      ya.append(j * ay);      za.append(layer / 3.0 * az)
                xa.append(x2 + i * ax); ya.append(y2 + j * ay); za.append(layer / 3.0 * az)
                layer += 1
                xa.append(i * ax);      ya.append(y3 + j * ay); za.append(layer / 3.0 * az)
                xa.append(x2 + i * ax); ya.append(y4 + j * ay); za.append(layer / 3.0 * az)
                layer += 1
                xa.append(i * ax);      ya.append(y5 + j * ay); za.append(layer / 3.0 * az)
                xa.append(x2 + i * ax); ya.append(y6 + j * ay); za.append(layer / 3.0 * az)
                layer += 1
    if adatom:
        xa.append(bx / 2.); ya.append(by / 2.); za.append(nz * az)
    return xa, ya, za, bx, by, bz

# ------------------------- POSCAR generation ------------------------
def gen_poscar(xa, ya, za, bx, by, bz):
    fout = open("POSCAR", "w")
    fout.write("Fe\n")
    fout.write("1.0\n")
    fout.write(" %22.16f %22.16f %22.16f\n" % (bx, 0, 0))
    fout.write(" %22.16f %22.16f %22.16f\n" % (0, by, 0))
    fout.write(" %22.16f %22.16f %22.16f\n" % (0, 0, bz))
    fout.write("%d\n" % len(xa))
    # fout.write("Selective Dynamics\n")   # uncomment to freeze selected atoms
    fout.write("Cart\n")
    for i in range(len(xa)):
        fout.write("%22.16f %22.16f %22.16f\n" % (xa[i], ya[i], za[i]))
        # fout.write("%22.16f %22.16f %22.16f F F T\n" % (xa[i], ya[i], za[i]))
    fout.close()
    return len(xa)

# --------------------------- Main program ---------------------------
if __name__ == "__main__":
    if len(sys.argv) == 3:
        a_latt = float(sys.argv[1])
        surf = sys.argv[2]
        args = {}
    elif len(sys.argv) == 8:
        a_latt = float(sys.argv[1])
        surf = sys.argv[2]
        vacuum = float(sys.argv[3])
        args = {'nx': int(sys.argv[4]), 'ny': int(sys.argv[5]),
                'nz': int(sys.argv[6]), 'adatom': int(sys.argv[7])}
    else:
        print("Error: wrong number of arguments!!!")
        print(usage)
        sys.exit(1)

    generators = {'100': gen_data_for_fcc,
                  '110': gen_data_for_110_fcc,
                  '111': gen_data_for_111_fcc}
    if surf not in generators:
        print("Error: unknown surface type '%s'" % surf)
        print(usage)
        sys.exit(1)

    xa, ya, za, bx, by, bz = generators[surf](a_latt, **args)
    gen_poscar(xa, ya, za, bx, by, bz)
```

## Running the calculation

Once you generate a POSCAR file with the script above, create an INCAR and a KPOINTS file and copy the POTCAR file relevant to your group into the same directory. The POSCAR should contain more than 5 atomic layers parallel to the surface of interest, which typically means on the order of 100 atoms. For a calculation of this size, raise the stack limit before launching VASP:

```
ulimit -s unlimited
```

and then execute VASP with

```
mpirun -np <no. of processors> <path of executable>
```

To submit to a cluster, use the following method.
## Submitting job to cluster

Two files are needed to submit to the cluster: a PBS command script, and a job.sh shell script that invokes the ulimit command on all allocated processors. Both files should be in your work directory with the rest of the VASP input files. The PBS command script is shown below.

### PBS command script

```
#PBS -N <name of output files>
#PBS -l nodes=4:ppn=4
#PBS -l walltime=48:00:00
#PBS -q q64p48h@raptor
#PBS -mea
#PBS -r n
#PBS -V
```

## Convergence

• The surface energy should be converged with respect to the k-point grid, so try different grids. Remember that the number of k-points along each direction should be inversely proportional to the length of the corresponding lattice vector (in this case, the edges of the simulation box). Since the supercell is elongated, only one k-point is needed in the elongated direction.

### KPOINTS

```
Auto        # header file
0
Monkhorst   # style of k-points
9 9 1       # numbers
0 0 0
```
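Once the bulk and slab runs are converged, the surface-energy formula given earlier can be evaluated directly, including the conversion from eV/Å² to J/m². A minimal sketch — all numbers below are placeholders, not real VASP output:

```python
# Placeholder inputs -- substitute values from your own converged runs.
E_surf = -412.35   # total energy of the slab supercell (eV)
N = 48             # number of atoms in the slab
eps = -8.70        # bulk energy per atom from the accurate bulk run (eV/atom)
A = 66.2           # in-plane area of the supercell (Angstrom^2)

EV_PER_ANG2_TO_J_PER_M2 = 16.0218  # 1 eV/Angstrom^2 in J/m^2

# Factor of 2: the periodic slab exposes two identical surfaces.
gamma_ev = (E_surf - N * eps) / (2 * A)
gamma_si = gamma_ev * EV_PER_ANG2_TO_J_PER_M2
print("gamma = %.4f eV/Ang^2 = %.3f J/m^2" % (gamma_ev, gamma_si))
```

For the slabs written by the generation script, A is the product of the two box edges perpendicular to the vacuum direction (bx·by for (100) and (111), ay·ny times az·nz for (110)).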
http://gmatclub.com/forum/gmatprep-challengeq-comparable-worth-as-a-standard-applied-176243.html
# GMATPREP ChallengeQ - Comparable worth, as a standard applied

Posted by PiyushK (Verbal Forum Moderator), 12 Aug 2014

Passage-14, GMATPrep RCs Collection

Comparable worth, as a standard applied to eliminate inequities in pay, insists that the values of certain tasks performed in dissimilar jobs can be compared. In the last decade, this approach has become a critical social policy issue, as large numbers of private-sector firms and industries as well as federal, state, and local governmental entities have adopted comparable worth policies or begun to consider doing so. This widespread institutional awareness of comparable worth indicates increased public awareness that pay inequities--that is, situations in which pay is not "fair" because it does not reflect the true value of a job--exist in the labor market.
However, the question still remains: have the gains already made in pay equity under comparable worth principles been of a precedent-setting nature, or are they mostly transitory, a function of concessions made by employers to mislead female employees into believing that they have made long-term pay equity gains? Comparable worth pay adjustments are indeed precedent-setting. Because of the principles driving them, other mandates that can be applied to reduce or eliminate unjustified pay gaps between male and female workers have not remedied perceived pay inequities satisfactorily for the litigants in cases in which men and women hold different jobs. But whenever comparable worth principles are applied to pay schedules, perceived unjustified pay differences are eliminated. In this sense, then, comparable worth is more comprehensive than other mandates, such as the Equal Pay Act of 1963 and Title VII of the Civil Rights Act of 1964. Neither compares tasks in dissimilar jobs (that is, jobs across occupational categories) in an effort to determine whether or not what is necessary to perform these tasks--know-how, problem-solving, and accountability--can be quantified in terms of its dollar value to the employer. Comparable worth, on the other hand, takes as its premise that certain tasks in dissimilar jobs may require a similar amount of training, effort, and skill; may carry similar responsibility; may be carried on in an environment having a similar impact upon the worker; and may have a similar dollar value to the employer.

1. Which of the following most accurately states the central purpose of the passage?

A. To criticize the implementation of a new procedure
B. To assess the significance of a change in policy
C. To illustrate how a new standard alters procedures
D. To explain how a new policy is applied in specific cases
E. To summarize the changes made to date as a result of social policy

Spoiler: B

2. According to the passage, which of the following is true of comparable worth as a policy?

A. Comparable worth policy decisions in pay-inequity cases have often failed to satisfy the complainants.
B. Comparable worth policies have been applied to both public-sector and private-sector employee pay schedules.
C. Comparable worth as a policy has come to be widely criticized in the past decade.
D. Many employers have considered comparable worth as a policy but very few have actually adopted it.
E. Early implementations of comparable worth policies resulted in only transitory gains in pay equity.

Spoiler: B

3. Which of the following best describes an application of the principles of comparable worth as they are described in the passage?

A. The current pay, rates of increase, and rates of promotion for female mechanics are compared with those of male mechanics.
B. The training, skills, and job experience of computer programmers in one division of a corporation are compared to those of programmers making more money in another division.
C. The number of women holding top executive positions in a corporation is compared to the number of women available for promotion to those positions, and both tallies are matched to the tallies for men in the same corporation.
D. The skills, training, and job responsibilities of the clerks in the township tax assessor's office are compared to those of the much better-paid township engineers.
E. The working conditions of female workers in a hazardous-materials environment are reviewed and their pay schedules compared to those of all workers in similar environments across the nation.

Spoiler: D

4. It can be inferred from the passage that application of the "other mandates" (see highlighted text) would be unlikely to result in an outcome satisfactory to the female employees in which of the following situations?

I. Males employed as long-distance truck drivers for a furniture company make $3.50 more per hour than do females with comparable job experience employed in the same capacity.
II. Women working in the office of a cement company contend that their jobs are as demanding and valuable as those of the men working outside in the cement factory, but the women are paid much less per hour.
III. A law firm employs both male and female paralegals with the same educational and career backgrounds, but the salary for male paralegals is $5,000 more than that for female paralegals.

A. I only
B. II only
C. III only
D. I and II only
E. I and III only

Spoiler: B

5. According to the passage, comparable worth principles differ in which of the following ways from other mandates intended to reduce or eliminate pay inequities?

A. Comparable worth principles address changes in the pay schedules of male as well as female workers
B. Comparable worth principles can be applied to employees in both the public and the private sector
C. Comparable worth principles emphasize the training and skill of workers
D. Comparable worth principles require changes in the employer's resource allocation
E. Comparable worth principles can be used to quantify the value of elements of dissimilar jobs

Spoiler: E
Re: GMATPREP ChallengeQ - Comparable worth, as a standard applied (reply, 12 Aug 2014):

Answers: 1 - B, 2 - B, 3 - D, 4 - B

Re: GMATPREP ChallengeQ - Comparable worth, as a standard applied (reply, 22 Aug 2014; 11 mins 5 secs):

1. B, 2. B, 3. D, 4. B, 5. E

Re: GMATPREP ChallengeQ - Comparable worth, as a standard applied (reply, 25 Jan 2015, with explanations):

1.
Which of the following most accurately states the central purpose of the passage?

A. To criticize the implementation of a new procedure (nothing is criticized)
B. To assess the significance of a change in policy (correct)
C. To illustrate how a new standard alters procedures (no)
D. To explain how a new policy is applied in specific cases (too narrow)
E. To summarize the changes made to date as a result of social policy (changes are compared, not summarized)

2. According to the passage, which of the following is true of comparable worth as a policy?

A. Comparable worth policy decisions in pay-inequity cases have often failed to satisfy the complainants. (nowhere mentioned)
B. Comparable worth policies have been applied to both public-sector and private-sector employee pay schedules. (stated in the first paragraph)
C. Comparable worth as a policy has come to be widely criticized in the past decade. (no)
D. Many employers have considered comparable worth as a policy but very few have actually adopted it. (wrong information)
E. Early implementations of comparable worth policies resulted in only transitory gains in pay equity. (not stated)

3. Which of the following best describes an application of the principles of comparable worth as they are described in the passage?

A. The current pay, rates of increase, and rates of promotion for female mechanics are compared with those of male mechanics.
B. The training, skills, and job experience of computer programmers in one division of a corporation are compared to those of programmers making more money in another division.
C. The number of women holding top executive positions in a corporation is compared to the number of women available for promotion to those positions, and both tallies are matched to the tallies for men in the same corporation.
D. The skills, training, and job responsibilities of the clerks in the township tax assessor's office are compared to those of the much better-paid township engineers. (correct — mentioned in the last paragraph: different jobs may require the same effort and skills; the other choices describe the same job)
E. The working conditions of female workers in a hazardous-materials environment are reviewed and their pay schedules compared to those of all workers in similar environments across the nation.

4. It can be inferred from the passage that application of the "other mandates" (see highlighted text) would be unlikely to result in an outcome satisfactory to the female employees in which of the following situations?

I. Males employed as long-distance truck drivers for a furniture company make $3.50 more per hour than do females with comparable job experience employed in the same capacity. (the other mandates cover identical jobs, so they would apply here)
II. Women working in the office of a cement company contend that their jobs are as demanding and valuable as those of the men working outside in the cement factory, but the women are paid much less per hour. (different jobs — the other mandates would not apply here)
III. A law firm employs both male and female paralegals with the same educational and career backgrounds, but the salary for male paralegals is $5,000 more than that for female paralegals. (the other mandates cover identical jobs, so they would apply here)

A. I only
B. II only (correct)
C. III only
D. I and II only
E. I and III only

5. According to the passage, comparable worth principles differ in which of the following ways from other mandates intended to reduce or eliminate pay inequities?

A. Comparable worth principles address changes in the pay schedules of male as well as female workers (both do this)
B. Comparable worth principles can be applied to employees in both the public and the private sector (both)
C. Comparable worth principles emphasize the training and skill of workers (not what we are looking for)
D. Comparable worth principles require changes in the employer's resource allocation (no)
E. Comparable worth principles can be used to quantify the value of elements of dissimilar jobs (correct — comparable worth holds that different jobs may require the same effort or skill)
https://developers.monogatari.io/documentation/v/develop/script-actions/play-sound
Play Sound

Play a sound effect

# Description

```
'play sound <sound_id> [with [properties]]'
```

The play sound action lets you, as its name says, play sound effects in your game. You can play as many sound effects as you want simultaneously. To stop a sound, check out the Stop Sound documentation.

Action ID: Sound
Reversible: Yes
Requires User Interaction: No

# Parameters

| Name | Type | Description |
| --- | --- | --- |
| sound_id | string | The name of the sound you want to play. These assets must be declared beforehand. |
| properties | string | Optional. A list of comma-separated properties with their respective values. |

## Properties

The following is a comprehensive list of the properties available for you to modify certain behaviors of this action.

| Property Name | Type | Description |
| --- | --- | --- |
| fade | string | The fade property lets you add a fade-in effect to the sound. It accepts a time in seconds, representing how long it takes until the sound reaches its maximum volume. |
| volume | number | The volume property lets you define how loud the sound will be played. |
| loop | none | Make the sound loop. This property does not require any value. |

# Assets Declarations

To play a sound, you must first add the file to your assets/sound/ directory and then declare it. To do so, Monogatari has a function that lets you declare all kinds of assets for your game.

```javascript
Monogatari.assets ('sound', {
    '<sound_id>': 'soundFileName'
});
```

## Supported Formats

Each browser has its own format compatibility. MP3, however, is the format supported by every browser. If you wish to use other formats, you can check a compatibility table to discover which browsers will be able to play them.

# Examples

## Play Sound

The following will play the sound, and once the sound ends, it will simply stop.

Script:

```javascript
Monogatari.script ({
    'Start': [
        'play sound riverFlow',
        'end'
    ]
});
```

Sound Assets:

```javascript
Monogatari.assets ('sound', {
    'riverFlow': 'river_water_flowing.mp3'
});
```
## Loop Sound

The following will play the sound, and once it ends, it will start over in an infinite loop until it is stopped using the Stop Sound action.

Script:

```javascript
Monogatari.script ({
    'Start': [
        'play sound riverFlow with loop',
        'end'
    ]
});
```

Sound Assets:

```javascript
Monogatari.assets ('sound', {
    'riverFlow': 'river_water_flowing.mp3'
});
```

## Fade

The following will play the sound with a fade-in effect.

Script:

```javascript
Monogatari.script ({
    'Start': [
        'play sound riverFlow with fade 3',
        'end'
    ]
});
```

Sound Assets:

```javascript
Monogatari.assets ('sound', {
    'riverFlow': 'river_water_flowing.mp3'
});
```

## Custom Volume

The following will set the volume of this sound to 73%.

Script:

```javascript
Monogatari.script ({
    'Start': [
        'play sound riverFlow with volume 73',
        'end'
    ]
});
```

Sound Assets:

```javascript
Monogatari.assets ('sound', {
    'riverFlow': 'river_water_flowing.mp3'
});
```

Please note, however, that the player's volume preferences are always respected, which means this percentage is applied on top of the current player preference. If the player has set the volume to 50%, the actual volume for the sound will be the result of:

$50\% \times 0.73 = 36.5\%$

## All Together

Of course, you can combine all of these properties, and the order doesn't really matter; write the properties in whatever order feels most natural to you.

Script:

```javascript
Monogatari.script ({
    'Start': [
        'play sound riverFlow with volume 100 loop fade 20',
        'end'
    ]
});
```

Sound Assets:

```javascript
Monogatari.assets ('sound', {
    'riverFlow': 'river_water_flowing.mp3'
});
```
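The preference scaling described in the Custom Volume example is just a product of two fractions. A quick sketch of the arithmetic (written in Python purely for illustration; Monogatari itself is JavaScript):

```python
def effective_volume(player_preference_percent, action_volume_percent):
    """Scale the action's requested volume by the player's volume preference."""
    return player_preference_percent * (action_volume_percent / 100.0)

# Player preference 50%, action volume 73 -> effective volume 36.5%.
print(effective_volume(50, 73))  # -> 36.5
```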
https://www.coursehero.com/file/61058351/91-92-95-Two-Independent-Means-Variancespptx/
# 9.1-9.2, 9.5 - Two Independent Means, Variances (STAT 312)

This preview shows page 1 - 8 out of 24 pages.

STAT 312, Chapter 9 - Inferences Based on Two Samples:

- 9.1 - Z-Tests and Confidence Intervals for a Difference Between Two Population Means
- 9.2 - The Two-Sample T-Test and Confidence Interval
- 9.3 - Analysis of Paired Data
- 9.4 - Inferences Concerning a Difference Between Population Proportions
- 9.5 - Inferences Concerning Two Population Variances

Consider two independent populations and a random variable $X$, normally distributed in each: $X_1 \sim N(\mu_1, \sigma_1)$ in Population 1 and $X_2 \sim N(\mu_2, \sigma_2)$ in Population 2. We test the null hypothesis $H_0: \mu_1 = \mu_2$, i.e., $\mu_1 - \mu_2 = 0$ ("no mean difference"), at significance level $\alpha$. Classic example: a randomized clinical trial, with Pop 1 = Treatment and Pop 2 = Control. Draw a random sample of size $n_1$ from Population 1 and an independent random sample of size $n_2$ from Population 2.

The sampling distributions of the sample means are

$$\bar{X}_1 \sim N\left(\mu_1, \frac{\sigma_1}{\sqrt{n_1}}\right), \qquad \bar{X}_2 \sim N\left(\mu_2, \frac{\sigma_2}{\sqrt{n_2}}\right)$$

Recall from Stat 311 (§3.3, slide 28): Mean($X - Y$) = Mean($X$) - Mean($Y$), and if $X$ and $Y$ are independent, Var($X - Y$) = Var($X$) + Var($Y$). Therefore

$$\bar{X}_1 - \bar{X}_2 \sim N\left(\mu_1 - \mu_2, \sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}\right)$$
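The sampling-distribution result for $\bar{X}_1 - \bar{X}_2$ is easy to check by simulation. The following Python sketch is my own illustration, not part of the slides: it draws repeated pairs of samples and compares the empirical mean and variance of $\bar{X}_1 - \bar{X}_2$ against the theoretical values $\mu_1 - \mu_2$ and $\sigma_1^2/n_1 + \sigma_2^2/n_2$.

```python
import random
import statistics

def simulate_diff_of_means(mu1, sigma1, n1, mu2, sigma2, n2,
                           reps=5000, seed=0):
    """Simulate the sampling distribution of Xbar1 - Xbar2 for two
    independent normal populations; return its mean and variance."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(reps):
        xbar1 = statistics.fmean(rng.gauss(mu1, sigma1) for _ in range(n1))
        xbar2 = statistics.fmean(rng.gauss(mu2, sigma2) for _ in range(n2))
        diffs.append(xbar1 - xbar2)
    return statistics.fmean(diffs), statistics.pvariance(diffs)

mean_d, var_d = simulate_diff_of_means(0.0, 1.0, 25, 0.0, 2.0, 50)
# Theory: mean = mu1 - mu2 = 0, variance = 1^2/25 + 2^2/50 = 0.12
```

The simulated variance lands close to 0.12, matching the Var($X - Y$) = Var($X$) + Var($Y$) rule applied to the two sample means.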
https://math.stackexchange.com/questions/3264618/prove-that-the-set-a-k-frac1k-k-neq-0-k-in-mathbbz-is-counta?noredirect=1
Prove that the set $A = \{k^\frac{1}{k}| k \neq 0, k \in \mathbb{Z}\}$ is countably infinite

In a practice test for my algorithms class I was asked to prove that the set $$A = \{k^\frac{1}{k}| k \neq 0, k \in \mathbb{Z}\}$$ is countably infinite. My professor provided me with the answer that we can create a mapping like the following $$1 \rightarrow 1^\frac{1}{1}, 2 \rightarrow (-1)^{-\frac{1}{1}}, 3 \rightarrow 2^\frac{1}{2}, 4 \rightarrow (-2)^{-\frac{1}{2}} ..., k \rightarrow ((-1)^{k+1}\lfloor\frac{k-1}{2} + 1\rfloor)^{(-1)^{k+1}\cdot\frac{1}{\lfloor\frac{k-1}{2} + 1\rfloor}}$$ which shows that for every $$k$$ there is a unique value of $$((-1)^{k+1}\lfloor\frac{k-1}{2} + 1\rfloor)^{(-1)^{k+1}\cdot \frac{1}{\lfloor\frac{k-1}{2} + 1\rfloor}}$$, and thus the mapping is a bijection and the set is countably infinite. I am familiar with the concept that there has to be a bijection with the natural numbers for a set to be countably infinite, but I have no idea how my professor got to the expression after '$$k \rightarrow$$'. Can someone enlighten me on this? Or, perhaps, show a different way to prove that the set is countably infinite?

• Welcome to Math Stack Exchange. I'd find it easier to find a bijection for $k>0$ and $k<0$ separately and then use the fact that the union of two countably infinite sets ($k>0$ and $k<0$) is countably infinite – J. W. Tanner Jun 16 '19 at 19:27
• If I were you I would ask my professor first. – uniquesolution Jun 16 '19 at 19:28
• Do you know that the set of algebraic numbers is countably infinite? – J. W. Tanner Jun 16 '19 at 19:38
• @J.W.Tanner I was not aware that the set of algebraic numbers is countably infinite. The answer on that question seems like useful information. – Yousousen Jun 16 '19 at 20:04
• On the other hand, if the values in $A$ are all supposed to be real, then $k^{1/k}$ doesn't mean anything when $k=-2$, so the professor's claimed bijection fails to even be a function in that case.
– hmakholm left over Monica Jun 16 '19 at 20:06

Your professor's expression is a (complicated way) of (attempting to) explicitly writing down a bijection. The strategy really has two pieces: firstly it shows that the positive integers are in bijection with all the non-zero integers, and then in turn that the non-zero integers are in bijection with the set $$A$$. The formula $$\left\lfloor \frac{k-1}{2}+1\right\rfloor = \left\lfloor \frac{k+1}{2}\right\rfloor = \left\lfloor \frac{k}{2}+\frac{1}{2}\right\rfloor$$ is a somewhat unhelpful way to say "divide by two, and round up". To see this, first suppose $$k$$ is even. Then $$k/2$$ is an integer, so adding a half to it and then taking the floor does not change its value. Hence the value of the expression for even $$k$$ is just $$k/2$$. On the other hand, if $$k$$ is odd, then $$k/2$$ is an integer plus $$1/2$$. Adding another half to this and then taking the floor is the same as just adding the $$1/2$$, i.e., rounding up. What this function does for us is match each positive integer with two other positive integers, e.g. $$1$$ is matched with $$1$$ and $$2$$, $$2$$ is matched with $$3$$ and $$4$$, and so on. Then the $$(-1)^{k+1}$$ part makes the sign of the output alternate, so now each positive integer is matched with one non-zero integer (as $$2$$ now goes to $$-1$$ instead of $$1$$, and $$4$$ goes to $$-2$$ instead of $$2$$, and so on). Once we have a function that does this, we raise it to the power one over itself, which matches it up with elements of the set $$A$$. Edit: It was pointed out to me that I have not actually shown that $$A$$ is infinite. What the above shows is that the map is surjective (or onto), but we would need to show that $$A$$ has infinitely many elements. This is slightly complicated by the fact that $$A \subset \mathbb{C}$$ but not $$\mathbb{R}$$. 
The first thing that occurs to me is to show that the function $$f(x) = x^{1/x}$$ for $$x>0,x\in\mathbb{R}$$ is decreasing for $$x>e$$ (as its derivative is negative there). Edit: Note that in the definition of $$A$$, $$k=2$$ and $$k=4$$ give the same value: $$\sqrt{2}$$. Consequently, any map that tries to enumerate $$A$$ in the "obvious" way will not be injective. • I think we're missing an argument here that the $k^{1/k}$s are all different -- or at least different enough that $A$ avoids being finite. – hmakholm left over Monica Jun 16 '19 at 19:52 • For $k=2$ and $4$ they're equal – J. W. Tanner Jun 16 '19 at 19:56 • @James Thank you very much! This cleared up a lot. – Yousousen Jun 16 '19 at 20:07 • @HenningMakholm You are correct, let me fix that. – James Jun 16 '19 at 20:09
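The "divide by two and round up, with alternating sign" enumeration described in the answer is easy to check numerically. A small Python sketch (my own illustration, not from the post — the function name is mine):

```python
# Sketch of the enumeration above: positive integer k is paired with
# the non-zero integer m = (-1)^(k+1) * floor((k-1)/2 + 1),
# which is then sent to the element m^(1/m) of A.
def nth_base(k):
    m = (k + 1) // 2          # floor((k-1)/2 + 1): divide by two, round up
    return m if k % 2 == 1 else -m

bases = [nth_base(k) for k in range(1, 9)]
# bases == [1, -1, 2, -2, 3, -3, 4, -4]: every non-zero integer appears
# exactly once, so k -> nth_base(k) is a bijection from the positive
# integers onto the non-zero integers.
```

As the comments point out, the induced map onto $A$ itself need not be injective (e.g. $2^{1/2} = 4^{1/4}$), so by itself this only establishes surjectivity onto $A$.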
http://docmadhattan.fieldofscience.com/2011/10/super-nobel-in-physics-2011.html
### Super-Nobel in Physics 2011

The first observation of a supernova is dated 1572, by Tycho Brahe, but the historically most important supernova observation is Galileo's, in 1604:

The supernova of 1604 caused even more excitement than Tycho's because its appearance happened to coincide with a so-called Great Conjunction or close approach of Jupiter, Mars and Saturn.(1)

Galileo's discovery was revolutionary for one important reason:

Galileo's observations and those made elsewhere in Italy and in Northern Europe indicated that it was beyond the Moon, in the region where the new star of 1572 had appeared. The appearance of a new body outside the Earth-Moon system had challenged the traditional belief, embodied in Aristotle's Cosmology, that the material of planets was unalterable and that nothing new could occur in the heavens.(1)

About the new star Galileo states that it

was initially small but grew rapidly in size such as to appear bigger than all the stars, and all planets with the exception of Venus.(1)

We can compare the observation with modern definitions:

Novae are the result of explosions on the surface of faint white dwarfs, caused by matter falling on their surfaces from the atmosphere of larger binary companions. A supernova is also a star that suddenly increases dramatically in brightness, then slowly dims again, eventually fading from view, but it is much brighter, about ten thousand times more than a nova.(1)

These dramatic events soon became good tools for observing the expansion of the universe:

Type Ia supernovae are empirical tools whose precision and intrinsic brightness make them sensitive probes of the cosmological expansion.(5)

And by observing a series of supernovae, the team of Brian Schmidt (1967) and Adam Riess (1969) in 1998(3) and the team of Saul Perlmutter (1959) in 1999(4) arrived at an important cosmological observation: the expansion of the Universe is accelerating! 
Until then the picture of the Universe was quite simple: a rapid, enormous expansion from a high-density quark-gluon plasma (or something else); a cooling of the Universe, with particles aggregating to form stars, planets, and galaxies; a deceleration of the expansion due to the gravitational field; and an unknown future, with the Universe balanced between eternal expansion and gravitational collapse. The observations of Schmidt's and Perlmutter's teams changed this decelerating scenario, replacing it with an accelerating one, which is compatible with a nonzero cosmological constant (in the plot, the cosmological constant is $\Omega_\Lambda$, which is also the vacuum energy density). We can describe the expansion of the Universe using the following formula, an experimentally verifiable version of Einstein's equation(2): $\left ( \frac{\text{d} a}{\text{d} \tau} \right )^2 = 1 + \Omega_M \left ( \frac{1}{a} - 1 \right ) + \Omega_\Lambda (a^2 - 1)$ where $a$ is the expansion factor, a function of the redshift $z$(6), $\tau = H_0 t$, with $H_0$ the Hubble constant, and $\Omega_M$ the matter density of the universe. This is the brief story of the Nobel Prize in Physics assigned today to Schmidt, Riess and Perlmutter, a discovery that has also brought to the attention of cosmology the dark matter introduced by Fritz Zwicky. But that's another story!

(1) Shea, W. Galileo and the Supernova of 1604. 1604-2004: Supernovae as Cosmological Lighthouses, ASP Conference Series, Vol. 342, Proceedings of the conference held 15-19 June, 2004 in Padua, Italy.
(2) Carroll, Sean M.; Press, William H.; Turner, Edwin L. The cosmological constant. Annual Review of Astronomy and Astrophysics, Vol. 30, p. 499-542.
(3) Riess, Adam G. et al. Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant. The Astronomical Journal, Volume 116, Issue 3, pp. 1009-1038. (arXiv)
(4) S. Perlmutter et al.
Measurements of $\Omega$ and $\Lambda$ from 42 High-Redshift Supernovae. Astrophysical Journal, 517, 565-586. (arXiv)
(5) Perlmutter, S. and Schmidt, B.P. Measuring Cosmology with Supernovae. Lecture Notes in Physics, 2003, Volume 598/2003, 195-217. (arXiv)
(6) $a = \frac{1}{1+z}$

Nobel Prize official resources: Press release, Useful links and further readings
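As a rough illustration of the expansion formula above, one can integrate it numerically and compare an accelerating model ($\Omega_\Lambda > 0$) with a purely decelerating, matter-only one ($\Omega_\Lambda = 0$). This is my own sketch, not from the post; the parameter values (0.3, 0.7) are illustrative, and a simple Euler step stands in for a proper ODE solver:

```python
import math

# (da/dtau)^2 = 1 + Omega_M*(1/a - 1) + Omega_L*(a^2 - 1), with a = 1 today
def expansion_rate(a, omega_m, omega_l):
    return math.sqrt(1.0 + omega_m * (1.0 / a - 1.0) + omega_l * (a * a - 1.0))

def evolve(tau_end=1.0, steps=10000, omega_m=0.3, omega_l=0.7):
    """Euler-integrate the expansion factor a forward from a = 1
    over a time interval tau_end (in units of 1/H0)."""
    a, dtau = 1.0, tau_end / steps
    for _ in range(steps):
        a += expansion_rate(a, omega_m, omega_l) * dtau
    return a

a_accelerating = evolve(omega_m=0.3, omega_l=0.7)  # nonzero cosmological constant
a_decelerating = evolve(omega_m=1.0, omega_l=0.0)  # matter only
# With Omega_Lambda > 0 the rate da/dtau grows with a, so the
# accelerating model reaches a larger expansion factor at the same tau.
```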
https://proofwiki.org/wiki/Definition:Divisor_(Algebra)
# Definition:Divisor (Algebra)

## Definition

### Ring with Unity

Let $\struct {R, +, \circ}$ be a ring with unity whose zero is $0_R$ and whose unity is $1_R$. Let $x, y \in R$. We define the term $x$ divides $y$ in $R$ as follows: $x \mathrel {\divides_R} y \iff \exists t \in R: y = t \circ x$ When no ambiguity results, the subscript is usually dropped, and $x$ divides $y$ in $R$ is just written $x \divides y$.

### Natural Numbers

Let $\N$ be the natural numbers. Let $n \in \N$ and $m \in \N_{>0}$. Then $m$ divides $n$ is defined as: $m \divides n \iff \exists p \in \N: m \times p = n$

### Integers

As the set of integers forms an integral domain, the concept divides is fully applicable to the integers. Let $\struct {\Z, +, \times}$ be the ring of integers. Let $x, y \in \Z$. Then $x$ divides $y$ is defined as: $x \divides y \iff \exists t \in \Z: y = t \times x$

### Gaussian Integers

As the set of Gaussian integers forms an integral domain, the concept divides is also fully applicable to the Gaussian integers. Let $\struct {\Z \left[{i}\right], +, \times}$ be the ring of Gaussian integers. Let $x, y \in \Z \left[{i}\right]$. Then $x$ divides $y$ is defined as: $x \divides y \iff \exists t \in \Z \left[{i}\right]: y = t \times x$

### Real Numbers

The concept of divisibility can also be applied to the real numbers $\R$. Let $\R$ be the set of real numbers. Let $x, y \in \R$. Then $x$ divides $y$ is defined as: $x \divides y \iff \exists t \in \Z: y = t \times x$ where $\Z$ is the set of integers.

## Terminology

Let $x \divides y$ denote that $x$ divides $y$. Then the following terminology can be used: $x$ is a divisor of $y$; $y$ is a multiple of $x$; $y$ is divisible by $x$. In the field of Euclidean geometry, in particular: $x$ measures $y$. To indicate that $x$ does not divide $y$, we write $x \nmid y$.

## Factorization

Let $x, y \in D$ where $\struct {D, +, \times}$ is an integral domain. Let $x$ be a divisor of $y$. 
Then by definition it is possible to find some $t \in D$ such that $y = t \times x$. The act of breaking down such a $y$ into the product $t \times x$ is called factorization.

## Also known as

A divisor can also be referred to as a factor. If $x \divides y$, then $x$ may also be referred to as an aliquot part of $y$. Some sources insist that $x$ must be a proper divisor of $y$ for this term to apply. If $x \nmid y$, then $x$ may be referred to as an aliquant part.

## Notation

The conventional notation for $x$ is a divisor of $y$ is "$x \mid y$", but there is a growing trend to follow the notation "$x \divides y$", as espoused by Knuth etc.

The notation '$m \mid n$' is actually much more common than '$m \divides n$' in current mathematics literature. But vertical lines are overused -- for absolute values, set delimiters, conditional probabilities, etc. -- and backward slashes are underused. Moreover, '$m \divides n$' gives an impression that $m$ is the denominator of an implied ratio. So we shall boldly let our divisibility symbol lean leftward.

An unfortunate side-effect of this notational convention is that to indicate non-divisibility, the conventional technique of implementing $/$ through the notation looks awkward with $\divides$, so $\not \! \backslash$ is eschewed in favour of $\nmid$. Some sources use $\ \vert \mkern -10mu {\raise 3pt -} \$ or similar to denote non-divisibility.

## Also see

• Results about divisibility can be found here.
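For the integer case, the defining condition $\exists t \in \Z: y = t \times x$ is straightforward to test directly. A short Python sketch (illustrative only, not part of the definition; the function name is mine):

```python
def divides(x, y):
    """Return True iff x divides y in the integers,
    i.e. there exists an integer t with y = t * x."""
    if x == 0:
        return y == 0   # 0 divides only 0, since t * 0 = 0 for every t
    return y % x == 0

# e.g. divides(3, 12) and divides(-2, 12) hold, while divides(5, 12) does not
```

Note the zero case: under the ring definition, $0 \divides y$ holds only for $y = 0$, which the sketch handles explicitly before taking the remainder.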
https://m3g.github.io/CellListMap.jl/stable/PeriodicSystems/
PeriodicSystems interface The PeriodicSystems interface facilitates the use of CellListMap for the majority of cases. To use it, load the PeriodicSystems module directly, with: using CellListMap.PeriodicSystems Note • This interface requires CellListMap.jl version 0.7.22 or greater. • The complete code of the examples is given at the end of this page. The mapped function The function to be mapped for every pair of particles within the cutoff follows the same interface as the standard interface. It must be of the form function f(x, y, i, j, d2, output) # update output variable return output end where x and y are the positions of the particles, already wrapped relative to each other according to the periodic boundary conditions (a minimum-image set of positions), i and j are the indexes of the particles in the arrays of coordinates, d2 is the squared distance between the particles, and output is the variable to be computed. For example, computing the energy, as the sum of the inverse of the distance between particles, can be done with a function like: function energy(d2,u) u += 1 / sqrt(d2) return u end and the additional parameters required by the interface can be eliminated by the use of an anonymous function, directly on the call to the map_pairwise! function: u = map_pairwise((x,y,i,j,d2,u) -> energy(d2,u), system) (what system is will be explained in the examples below). Alternatively, the function might require additional parameters, such as the masses of the particles. In this case, we can use a closure to provide such data: function energy(i,j,d2,u,masses) u += masses[i]*masses[j] / sqrt(d2) return u end const masses = # ... some masses u = map_pairwise((x,y,i,j,d2,u) -> energy(i,j,d2,u,masses), system) Potential energy example Note The output of the CellListMap computation may be of any kind. 
Most commonly, it is an energy, a set of forces, or another data type that can be represented either as a number, an array of numbers, or an array of vectors (SVectors in particular), such as arrays of forces. Additionally, the properties are frequently additive (the energy is the sum of the energy of the particles, or the forces are added by summation). For these types of output data the usage of CellListMap.PeriodicSystems is the simplest, and does not require the implementation of any data-type dependent function. For example, let us build a system of random particles in a cubic box, and compute an "energy", which in this case is simply the sum of 1/d over all pairs of particles, within a cutoff. The PeriodicSystem constructor receives the properties of the system and automatically sets up the most commonly used data structures. julia> using CellListMap.PeriodicSystems, StaticArrays julia> system = PeriodicSystem( xpositions = rand(SVector{3,Float64},1000), unitcell=[1.0,1.0,1.0], cutoff = 0.1, output = 0.0, output_name = :energy ); Now, directly, let us compute a putative energy of the particles, assuming a simple formula which depends on the inverse of the distance between pairs: julia> map_pairwise!((x,y,i,j,d2,energy) -> energy += 1 / sqrt(d2), system) 30679.386366872823 Because the output_name field was provided, the system.energy field accesses the resulting value of the computation: julia> system.energy 30679.386366872823 If it is not provided, you can access the output value from the system.output field. Note • Systems can be 2 or 3-dimensional. • The unitcell parameter may be either a vector, as in the example, or a unit cell matrix, for general boundary conditions. • Unitful quantities can be provided, given appropriate types for all input parameters. Computing forces Following the example above, let us compute the forces between the particles. 
We have to define the function that computes the force between a pair of particles and updates the array of forces: function update_forces!(x,y,i,j,d2,forces) d = sqrt(d2) df = (1/d2)*(1/d)*(y - x) forces[i] += df forces[j] -= df return forces end Importantly, the function must return the forces array to follow the API. Now, let us set up the system with the new type of output variable, which will now be an array of forces with the same type as the positions: julia> positions = rand(SVector{3,Float64},1000); julia> system = PeriodicSystem( xpositions = positions, unitcell=[1,1,1], cutoff = 0.1, output = similar(positions), output_name = :forces ); Let us note that the forces were reset upon the construction of the system: julia> system.forces 1000-element Vector{SVector{3, Float64}}: [0.0, 0.0, 0.0] [0.0, 0.0, 0.0] ⋮ [0.0, 0.0, 0.0] A call to map_pairwise! with the appropriate function definition will update the forces: julia> map_pairwise!((x,y,i,j,d2,forces) -> update_forces!(x,y,i,j,d2,forces), system) 1000-element Vector{SVector{3, Float64}}: [-151.19529230407284, 159.33819000196905, -261.3055111242796] [-173.02442398784672, -178.782819965489, 4.570607952876692] ⋮ [-722.5400961501635, 182.65287417718935, 380.0394926753039] Computing both energy and forces In this example we define a general type of output variable, for which custom copy, reset, and reduction functions must be defined. It can be followed for the computation of other general properties from the particle positions. Note Interface to be implemented:

Method | Returns | What it does
--- | --- | ---
copy_output(x::T) | new instance of type T | Copies an element of the output type T.
reset_output!(x::T) | mutated x | Resets (usually zeroes) the value of x to the initial value it must assume before mapping. If x is immutable, the function can return a new instance of T.
reducer(x::T,y::T) | mutated x | Reduces x and y into x (for example x = x + y). If x is immutable, returns a new instance of type T.
Remark: if the output is an array of an immutable type T, the methods above can be defined for single instances of T, which is simpler than for the arrays. using CellListMap.PeriodicSystems, StaticArrays The computation of energies and forces in a single call is an interesting example for the definition of a custom output type and the required interface functions. Let us first define an output variable containing both quantities: mutable struct EnergyAndForces energy::Float64 forces::Vector{SVector{3,Float64}} end Now we need to define what it means to copy, reset, and reduce this new type of output. We overload the default corresponding functions, for our new output type: The copy method creates a new instance of the EnergyAndForces type, with copied data: import CellListMap.PeriodicSystems: copy_output copy_output(x::EnergyAndForces) = EnergyAndForces(copy(x.energy), copy(x.forces)) The reset method will zero both the energy and all forces: import CellListMap.PeriodicSystems: reset_output! function reset_output!(output::EnergyAndForces) output.energy = 0.0 for i in eachindex(output.forces) output.forces[i] = SVector(0.0, 0.0, 0.0) end return output end The reduction function defines what it means to combine two output variables obtained on independent threads. In this case, we sum the energies and forces. Different reduction functions might be necessary for other custom types (for example if computing minimum distances). import CellListMap.PeriodicSystems: reducer function reducer(x::EnergyAndForces, y::EnergyAndForces) e_tot = x.energy + y.energy x.forces .+= y.forces return EnergyAndForces(e_tot, x.forces) end Note that in the above example, we reuse the x.forces array in the return instance of EnergyAndForces. You must always reduce from right to left, and reuse the possible buffers of the first argument of the reducer (in this case, x). Warning • All these functions must return the modified output variable, to adhere to the interface. 
• The proper definition of a reduction function is crucial for correctness. Please verify your results if using the default reducer function, which sums the elements. Now we can proceed as before, defining a function that updates the output variable appropriately: function energy_and_forces!(x,y,i,j,d2,output::EnergyAndForces) d = sqrt(d2) output.energy += 1/d df = (1/d2)*(1/d)*(y - x) output.forces[i] += df output.forces[j] -= df return output end To finally define the system and compute the properties: positions = rand(SVector{3,Float64},1000); system = PeriodicSystem( xpositions = positions, unitcell=[1.0,1.0,1.0], cutoff = 0.1, output = EnergyAndForces(0.0, similar(positions)), output_name = :energy_and_forces ); map_pairwise((x,y,i,j,d2,output) -> energy_and_forces!(x,y,i,j,d2,output), system); The output can be seen with the aliases of the system.output variable: julia> system.energy_and_forces.energy 31696.94766439311 julia> system.energy_and_forces.forces 1000-element Vector{SVector{3, Float64}}: [-338.1909601911842, 7.7663690656924445, 202.25889647151405] [33.67299655756128, 282.7581453168999, -79.09639223837306] ⋮ [38.83014327604529, -204.45236278342745, 249.307871211616] Updating coordinates, unit cell, and cutoff If the map_pairwise! function will compute energy and/or forces in an iterative procedure (a simulation, for instance), we need to update the coordinates, and perhaps the unit cell and the cutoff. Updating coordinates The coordinates can be updated (mutated, or the array of coordinates can change in size by pushing or deleting particles), simply by directly accessing the xpositions field of the system. 
Let us exemplify the interface with the computation of forces: julia> using CellListMap.PeriodicSystems, StaticArrays julia> positions = rand(SVector{3,Float64}, 1000); julia> system = PeriodicSystem( xpositions = positions, unitcell=[1,1,1], cutoff = 0.1, output = similar(positions), output_name = :forces ); julia> system.xpositions[1] 3-element SVector{3, Float64} with indices SOneTo(3): 0.6391290709055079 0.43679325975360894 0.8231829019768698 julia> system.xpositions[1] = zeros(SVector{3,Float64}) 3-element SVector{3, Float64} with indices SOneTo(3): 0.0 0.0 0.0 julia> push!(system.xpositions, SVector(0.5, 0.5, 0.5)) 1001-element Vector{SVector{3, Float64}}: [0.0, 0.0, 0.0] [0.5491373098208292, 0.23899915605319244, 0.49058287555218516] ⋮ [0.4700394061063937, 0.5440026379397457, 0.7411235688716618] [0.5, 0.5, 0.5] Warning The output variable may have to be resized accordingly, depending on the calculation being performed. Use the resize_output! function (do not use Base.resize! on your output array directly). If the output array has to be resized, that has to be done with the resize_output! function, which will keep the consistency of the auxiliary multi-threading buffers. This is, for instance, the case in the example of computation of forces, as the forces array must be of the same length as the array of positions: julia> resize_output!(system, length(system.xpositions)); julia> map_pairwise!((x,y,i,j,d2,forces) -> update_forces!(x,y,i,j,d2,forces), system) 1001-element Vector{SVector{3, Float64}}: [756.2076075886971, -335.1637545330828, 541.8627090466914] [-173.02442398784672, -178.782819965489, 4.570607952876692] ⋮ [-722.5400961501635, 182.65287417718935, 380.0394926753039] [20.27985502389337, -193.77607810950286, -155.28968519541544] In this case, if the output is not resized, a BoundsError is thrown, because updates of forces at unavailable positions will be attempted. 
Updating the unit cell The unit cell can be updated to new dimensions at any moment, with the update_unitcell! function: julia> update_unitcell!(system, SVector(1.2, 1.2, 1.2)) PeriodicSystem1 of dimension 3, composed of: Box{CellListMap.OrthorhombicCell, 3} unit cell matrix = [ 1.2, 0.0, 0.0; 0.0, 1.2, 0.0; 0.0, 0.0, 1.2 ] cutoff = 0.1 number of computing cells on each dimension = [13, 13, 13] computing cell sizes = [0.11, 0.11, 0.11] (lcell: 1) Total number of cells = 2197 CellListMap.CellList{3, Float64} 1000 real particles. 623 cells with real particles. 1719 particles in computing box, including images. Parallelization auxiliary data set for: Number of batches for cell list construction: 8 Number of batches for function mapping: 12 Type of output variable (forces): Vector{SVector{3, Float64}} Note • The unit cell can be set initially using a vector or a unit cell matrix. If a vector is provided, the system is considered Orthorhombic; if a matrix is provided, a Triclinic system is built. Unit cell updates must preserve the system type. • It is recommended (but not mandatory) to use static arrays (or Tuples) to update the unitcell, as in this case the update will be non-allocating. Updating the cutoff The cutoff can also be updated, using the update_cutoff! function: julia> update_cutoff!(system, 0.2) PeriodicSystem1 of dimension 3, composed of: Box{CellListMap.OrthorhombicCell, 3} unit cell matrix = [ 1.0, 0.0, 0.0; 0.0, 1.0, 0.0; 0.0, 0.0, 1.0 ] cutoff = 0.2 number of computing cells on each dimension = [7, 7, 7] computing cell sizes = [0.2, 0.2, 0.2] (lcell: 1) Total number of cells = 343 CellListMap.CellList{3, Float64} 1000 real particles. 125 cells with real particles. 2792 particles in computing box, including images. 
Parallelization auxiliary data set for: Number of batches for cell list construction: 8 Number of batches for function mapping: 8 Type of output variable (forces): Vector{SVector{3, Float64}} julia> map_pairwise!((x,y,i,j,d2,forces) -> update_forces!(x,y,i,j,d2,forces), system) 1000-element Vector{SVector{3, Float64}}: [306.9612911344924, -618.7375562535321, -607.1449767066479] [224.0803003775478, -241.05319348787023, 67.53780411933884] ⋮ [2114.4873184508524, -3186.265279868732, -6777.748445712408] [-25.306486853608945, 119.69319481834582, 104.1501577339471] Computations for two sets of particles If the computation involves two sets of particles, a similar interface is available. The only difference is that the coordinates of the two sets must be provided to the PeriodicSystem constructor as the xpositions and ypositions arrays. We will illustrate this interface by computing the minimum distance between two sets of particles, which allows us to showcase further the definition of custom type interfaces: First, we define a variable type that will carry the indexes and the distance of the closest pair of particles: julia> struct MinimumDistance i::Int j::Int d::Float64 end The function that, given two particles, retains the minimum distance, is: julia> function minimum_distance(i, j, d2, md) d = sqrt(d2) if d < md.d md = MinimumDistance(i, j, d) end return md end minimum_distance (generic function with 1 method) We overload the copy, reset, and reduce functions accordingly: julia> import CellListMap.PeriodicSystems: copy_output, reset_output!, reducer! julia> copy_output(md::MinimumDistance) = md copy_output (generic function with 5 methods) julia> reset_output!(md::MinimumDistance) = MinimumDistance(0, 0, +Inf) reset_output! (generic function with 5 methods) julia> reducer!(md1::MinimumDistance, md2::MinimumDistance) = md1.d < md2.d ? md1 : md2 reducer! 
(generic function with 2 methods) Note that since MinimumDistance is immutable, copying it is the same as returning the value. Also, resetting the minimum distance consists of setting its d field to +Inf. And, finally, reducing the threaded distances consists of keeping the pair with the shortest distance. Next, we build the system julia> xpositions = rand(SVector{3,Float64},1000); julia> ypositions = rand(SVector{3,Float64},1000); julia> system = PeriodicSystem( xpositions = xpositions, ypositions = ypositions, unitcell=[1.0,1.0,1.0], cutoff = 0.1, output = MinimumDistance(0,0,+Inf), output_name = :minimum_distance, ) And finally we can obtain the minimum distance between the sets: julia> map_pairwise((x,y,i,j,d2,md) -> minimum_distance(i,j,d2,md), system) MinimumDistance(276, 617, 0.006009804808785543) Turn parallelization on and off The use of parallel computations can be tunned on and of by the system.parallel boolean flag. For example, using 6 cores (12 threads) for the calculation of the minimum-distance example: julia> f(system) = map_pairwise((x,y,i,j,d2,md) -> minimum_distance(i,j,d2,md), system) f (generic function with 1 method) 8 julia> system.parallel = true true julia> @btime f($system) 268.265 μs (144 allocations: 16.91 KiB) MinimumDistance(783, 497, 0.007213710914619913) julia> system.parallel = false false julia> @btime f($system) 720.304 μs (0 allocations: 0 bytes) MinimumDistance(783, 497, 0.007213710914619913) Displaying a progress bar Displaying a progress bar: for very long runs, the user might want to see the progress of the computation. Use the show_progress keyword parameter of the map_pairwise! function for that. 
For example, we execute the computation above, but with many more particles:

```julia-repl
julia> xpositions = rand(SVector{3,Float64},10^6);

julia> ypositions = rand(SVector{3,Float64},10^6);

julia> system = PeriodicSystem(
           xpositions = xpositions,
           ypositions = ypositions,
           unitcell = [1.0,1.0,1.0],
           cutoff = 0.1,
           output = MinimumDistance(0,0,+Inf),
           output_name = :minimum_distance,
       );

julia> map_pairwise(
           (x,y,i,j,d2,md) -> minimum_distance(i,j,d2,md), system;
           show_progress = true
       )
Progress:  24%|██████████▏                               |  ETA: 0:00:29
```

By activating the `show_progress` flag, a nice progress bar is shown.

## Fine control of the parallelization

The number of batches launched in parallel runs can be tuned with the `nbatches` keyword parameter of the `PeriodicSystem` constructor. By default, the number of batches is defined by a heuristic function that depends on the number of particles, and it returns near-optimal values in most cases. For a detailed discussion of this parameter, see Number of batches.

For example, to set the number of batches for the cell list calculation to 4 and the number of batches for mapping to 8, we can do:

```julia-repl
julia> system = PeriodicSystem(
           xpositions = rand(SVector{3,Float64},1000),
           unitcell = [1,1,1],
           cutoff = 0.1,
           output = 0.0,
           output_name = :energy,
           nbatches = (4,8), # use this keyword
       );
```

Most of the time the default parameters are expected to be optimal. But particularly for inhomogeneous systems, increasing the number of batches of the mapping phase (the second element of the tuple) may improve the performance by reducing the idle time of threads.
## Avoid cell list updating

To compute different properties without recomputing the cell lists, use `update_lists=false` in the call to the `map_pairwise` methods. For example:

```julia
using CellListMap.PeriodicSystems, StaticArrays
system = PeriodicSystem(
    xpositions = rand(SVector{3,Float64},1000),
    output = 0.0,
    cutoff = 0.1,
    unitcell = [1,1,1],
)
# First call will compute the cell lists
map_pairwise((x,y,i,j,d2,u) -> u += d2, system)
# Second run: do not update the cell lists, but compute a different property
map_pairwise((x,y,i,j,d2,u) -> u += sqrt(d2), system; update_lists = false)
```

Here the second call computes the sum of distances from the same cell lists used to compute the energy in the previous call (this requires version 0.8.9 or later). Specifically, this skips the updating of the cell lists entirely, so be careful not to use this option if the cutoff, unit cell, or any other property of the system has changed.

For systems with two sets of particles, the coordinates of the `xpositions` set can be updated while preserving the cell lists computed for `ypositions`, but this requires setting `autoswap=false` in the construction of the `PeriodicSystem`:

```julia
using CellListMap.PeriodicSystems, StaticArrays
system = PeriodicSystem(
    xpositions = rand(SVector{3,Float64},1000),
    ypositions = rand(SVector{3,Float64},2000),
    output = 0.0,
    cutoff = 0.1,
    unitcell = [1,1,1],
    autoswap = false, # Cell lists are constructed for ypositions
)
map_pairwise((x,y,i,j,d2,u) -> u += d2, system)
# Second run: preserve the cell lists, but compute a different property
map_pairwise((x,y,i,j,d2,u) -> u += sqrt(d2), system; update_lists = false)
```

## Control CellList cell size

The cell size used in the construction of the cell lists can be controlled with the `lcell` keyword of the `PeriodicSystem` constructor. For example:

```julia-repl
julia> system = PeriodicSystem(
           xpositions = rand(SVector{3,Float64},1000),
           unitcell = [1,1,1],
           cutoff = 0.1,
           output = 0.0,
           output_name = :energy,
           lcell = 2,
       );
```

Most of the time, `lcell=1` (the default) or `lcell=2` will provide the optimal performance.
For very dense systems, or systems for which the number of particles within the cutoff is very large, larger values of `lcell` may improve the performance. This has to be tested by the user.

## Complete example codes

### Simple energy computation

In this example, a simple potential energy, defined as the sum of the inverses of the distances between the particles, is computed:

```julia
using CellListMap.PeriodicSystems
using StaticArrays
system = PeriodicSystem(
    xpositions = rand(SVector{3,Float64},1000),
    unitcell = [1.0,1.0,1.0],
    cutoff = 0.1,
    output = 0.0,
    output_name = :energy,
)
map_pairwise!((x,y,i,j,d2,energy) -> energy += 1 / sqrt(d2), system)
```

### Force computation

Here we compute the force vector associated with the potential energy function of the previous example:

```julia
using CellListMap.PeriodicSystems
using StaticArrays
positions = rand(SVector{3,Float64},1000)
system = PeriodicSystem(
    xpositions = positions,
    unitcell = [1.0,1.0,1.0],
    cutoff = 0.1,
    output = similar(positions),
    output_name = :forces,
)
function update_forces!(x,y,i,j,d2,forces)
    d = sqrt(d2)
    df = (1/d2)*(1/d)*(y - x)
    forces[i] += df
    forces[j] -= df
    return forces
end
map_pairwise!((x,y,i,j,d2,forces) -> update_forces!(x,y,i,j,d2,forces), system)
```

### Energy and forces

In this example, the potential energy and the forces are computed in a single run, and a custom data structure is defined to store both values:

```julia
using CellListMap.PeriodicSystems
using StaticArrays

# Define custom type
mutable struct EnergyAndForces
    energy::Float64
    forces::Vector{SVector{3,Float64}}
end

# Custom copy, reset and reducer functions
import CellListMap.PeriodicSystems: copy_output, reset_output!, reducer
copy_output(x::EnergyAndForces) = EnergyAndForces(copy(x.energy), copy(x.forces))
function reset_output!(output::EnergyAndForces)
    output.energy = 0.0
    for i in eachindex(output.forces)
        output.forces[i] = SVector(0.0, 0.0, 0.0)
    end
    return output
end
function reducer(x::EnergyAndForces, y::EnergyAndForces)
    e_tot = x.energy + y.energy
    x.forces .+= y.forces
    return EnergyAndForces(e_tot, x.forces)
end

# Function that updates energy and forces for each pair
function energy_and_forces!(x,y,i,j,d2,output::EnergyAndForces)
    d = sqrt(d2)
    output.energy += 1/d
    df = (1/d2)*(1/d)*(y - x)
    output.forces[i] += df
    output.forces[j] -= df
    return output
end

# Initialize system
positions = rand(SVector{3,Float64},1000);
system = PeriodicSystem(
    xpositions = positions,
    unitcell = [1.0,1.0,1.0],
    cutoff = 0.1,
    output = EnergyAndForces(0.0, similar(positions)),
    output_name = :energy_and_forces,
)

# Compute energy and forces
map_pairwise((x,y,i,j,d2,output) -> energy_and_forces!(x,y,i,j,d2,output), system)
```

### Two sets of particles

In this example we illustrate the interface for the computation of properties of two sets of particles, by computing the minimum distance between the two sets:

```julia
using CellListMap.PeriodicSystems
using StaticArrays

# Custom structure to store the minimum distance pair
struct MinimumDistance
    i::Int
    j::Int
    d::Float64
end

# Function that updates the minimum distance found
function minimum_distance(i, j, d2, md)
    d = sqrt(d2)
    if d < md.d
        md = MinimumDistance(i, j, d)
    end
    return md
end

# Define appropriate methods for copy, reset and reduce
import CellListMap.PeriodicSystems: copy_output, reset_output!, reducer!
copy_output(md::MinimumDistance) = md
reset_output!(md::MinimumDistance) = MinimumDistance(0, 0, +Inf)
reducer!(md1::MinimumDistance, md2::MinimumDistance) = md1.d < md2.d ? md1 : md2

# Build system
xpositions = rand(SVector{3,Float64},1000);
ypositions = rand(SVector{3,Float64},1000);
system = PeriodicSystem(
    xpositions = xpositions,
    ypositions = ypositions,
    unitcell = [1.0,1.0,1.0],
    cutoff = 0.1,
    output = MinimumDistance(0,0,+Inf),
    output_name = :minimum_distance,
)

# Compute the minimum distance
map_pairwise((x,y,i,j,d2,md) -> minimum_distance(i,j,d2,md), system)
```

### Particle simulation

In this example, a complete particle simulation is illustrated, with a simple potential. The example shows how particle positions and forces can be updated. Run the simulation with:

```julia-repl
julia> system = init_system(N=200); # number of particles

julia> trajectory = simulate(system);

julia> animate(trajectory)
```

One important characteristic of this example is that the system is built outside the function that performs the simulation. This is done because the construction of the system is type-unstable (its type depends on the dimension, geometry, and output type). Adding a function barrier prevents the type instability from propagating into the simulation, where it could cause performance problems.

```julia
using StaticArrays
using CellListMap.PeriodicSystems
import CellListMap.wrap_relative_to

# Function that updates the forces, for a potential of the form:
# if d < cutoff: k*(d^2 - cutoff^2)^2, else: 0.0, with k = 10^6
function update_forces!(x, y, i, j, d2, forces, cutoff)
    r = y - x
    dudr = 10^6 * 4 * r * (d2 - cutoff^2)
    forces[i] += dudr
    forces[j] -= dudr
    return forces
end

# Function that initializes the system: it is preferable to initialize
# the system outside the function that performs the simulation, because
# the system (data)type is defined on initialization. Initializing it outside
# the simulation function avoids possible type-instabilities.
function init_system(;N::Int=200)
    Vec2D = SVector{2,Float64}
    positions = rand(Vec2D, N)
    unitcell = [1.0, 1.0]
    cutoff = 0.1
    system = PeriodicSystem(
        positions = positions,
        cutoff = cutoff,
        unitcell = unitcell,
        output = similar(positions),
        output_name = :forces,
    )
    return system
end

function simulate(system=init_system(); nsteps::Int=100, isave=1)
    # initial velocities
    velocities = [randn(eltype(system.positions)) for _ in 1:length(system.positions)]
    dt = 1e-3
    trajectory = typeof(system.positions)[]
    for step in 1:nsteps
        # compute forces at this step
        map_pairwise!(
            (x,y,i,j,d2,forces) -> update_forces!(x,y,i,j,d2,forces,system.cutoff),
            system
        )
        # Update positions and velocities
        for i in eachindex(system.positions, system.forces)
            f = system.forces[i]
            x = system.positions[i]
            v = velocities[i]
            x = x + v * dt + (f / 2) * dt^2
            v = v + f * dt
            # wrapping to origin for obtaining a pretty animation
            x = wrap_relative_to(x, SVector(0.0, 0.0), system.unitcell)
            # !!! IMPORTANT: Update arrays of positions and velocities
            system.positions[i] = x
            velocities[i] = v
        end
        # Save step for printing
        if step % isave == 0
            push!(trajectory, copy(system.positions))
        end
    end
    return trajectory
end

using Plots
function animate(trajectory)
    anim = @animate for step in trajectory
        scatter(
            Tuple.(step),
            label = nothing,
            lims = (-0.5, 0.5),
            aspect_ratio = 1,
            framestyle = :box,
        )
    end
    gif(anim, "simulation.gif", fps=10)
end
```
http://math.stackexchange.com/questions/366263/explanation-on-step-rho-of-the-sha-3-algorithm
# Explanation on step $\rho$ of the SHA-3 algorithm

I'm working on implementing SHA-3 on a PIC microcontroller. In the block permutation, I don't quite understand step $\rho$:

Bitwise rotate each of the 25 words by a different triangular number 0, 1, 3, 6, 10, 15, …. To be precise, $a[0][0]$ is not rotated, and for all $0 \le t \le 24$, $a[i][j][k] = a[i][j][k−\frac{(t+1)(t+2)}{2}]$, where $${i \choose j} = \left(\begin{matrix}3&2\\1&0\end{matrix}\right)^t {0 \choose 1}$$

I understand the textual explanation and can write the needed code for that, if I assume that $i$ and $j$ are iterated as if they form a two-digit number in base 5 that equals $t$. I think that's what the formula below the text says, but I am not sure whether that is so and whether my assumption is correct. I am familiar with the $n\choose{}m$ notation, but have never worked with matrices before. It came to my mind that $i\choose{}j$ and $0\choose1$ might be matrices as well, instead of "$n$ choose $m$", like $\left(\begin{matrix}3&2\\1&0\end{matrix}\right)$ is. Can someone explain the formula to me?

---

They are column vectors. This is an iterative formula. It means that:

- For $t=0$, you have $i=0$ and $j=1$.
- For $t=1$, you have $\left(\begin{matrix}3&2\\1&0\end{matrix}\right)\left(\begin{matrix}0\\1\end{matrix}\right) = \left(\begin{matrix}2\\0\end{matrix}\right)$, so $i=2$ and $j=0$.
- For $t=2$, you have $\left(\begin{matrix}3&2\\1&0\end{matrix}\right)^2\left(\begin{matrix}0\\1\end{matrix}\right) = \left(\begin{matrix}3&2\\1&0\end{matrix}\right)\left(\begin{matrix}2\\0\end{matrix}\right) =\left(\begin{matrix}6\\2\end{matrix}\right)$, so $i=6$ and $j=2$.
- And so on.

As you see above, if you iterate over $t$ then you don't have to implement a matrix power operation, since each step is just one multiplication: the matrix times the previous result vector.
You can look at the Wikipedia article Matrix multiplication for the general case, but for a 2×2 matrix multiplied by a 2×1 vector, the formula is $\left(\begin{matrix}a&b\\c&d\end{matrix}\right)\left(\begin{matrix}e\\f\end{matrix}\right) =\left(\begin{matrix}ae+bf\\ce+df\end{matrix}\right)$.

Edit: In fact, this is a mathematician's way to write the following code:

$i_{\text{new}} := 3i_{\text{prev}} + 2j_{\text{prev}}$
$j_{\text{new}} := i_{\text{prev}}$

Since you have only 25 different values of $t$, you essentially have constant values of $i$ and $j$ for each iteration. So, from the point of view of coding on a PIC, it might be faster to skip the matrix multiplication entirely: just calculate the values once (i.e. with a quick Python program) and write the constant lookup tables into your code. But maybe the above iterative formula is also fast. Up to you.

---

I wish I could give you an extra +1 for the link with writing code ;) thanks a lot, this is a very clear explanation! – Camil Staps Apr 19 '13 at 8:59
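Following the suggestion in the answer, the $(i, j)$ walk and the rotation offsets can be precomputed with a quick Python script and baked into the microcontroller code as a constant table. This is a sketch: it reduces the lane indices mod 5 (the Keccak state is a 5×5 grid of lanes) and the rotation counts mod the lane width; the names are illustrative.

```python
# Precompute the rho step's rotation offsets, following the iterative
# formula explained above.  Indices are taken mod 5, rotations mod the
# lane width (64 bits for SHA-3).
def rho_offsets(lane_bits=64):
    offsets = {(0, 0): 0}              # a[0][0] is not rotated
    i, j = 0, 1                        # the walk starts at (i, j) = (0, 1)
    for t in range(24):
        # rotate lane (i, j) by the triangular number (t+1)(t+2)/2
        offsets[(i, j)] = ((t + 1) * (t + 2) // 2) % lane_bits
        # one application of the matrix [[3, 2], [1, 0]]:
        # (i, j) <- (3i + 2j, i), taken mod 5
        i, j = (3 * i + 2 * j) % 5, i % 5
    return offsets

# The 24 steps visit every lane except (0, 0) exactly once, so the
# resulting table covers the whole state.
table = rho_offsets()
```

Printing `table` once and pasting the constants into the C source avoids doing the matrix iteration on the microcontroller at all.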
https://quant.stackexchange.com/questions/30805/chain-rule-for-itos-lemma
# Chain rule for Ito's Lemma

The CIR short rate model is $$dr_t=k(\theta-r_t)dt+\sigma\sqrt{r_t}dW_t$$ under the risk-neutral measure. The bond price is of the form $$P(t,T)=A(t,T)e^{-B(t,T)r_t}$$ where the continuously compounded spot rate is an affine function of the short rate $r_t$.

My question is, how should Ito's Lemma be applied to find $dP(t,T)$? Here is my attempt:
$$\ln P(t,T)=\ln A(t,T)-B(t,T)r_t$$
$$d\ln P(t,T)=d\ln A(t,T)-r_t\,dB(t,T)-B(t,T)\,dr_t$$
$$(d\ln P(t,T))^2=B(t,T)^2\sigma^2r_t\,dt$$
\begin{align} d(e^{\ln P(t,T)})&=P(t,T)\bigg(d\ln P(t,T)+\frac{1}{2}(d\ln P(t,T))^2\bigg)\\ &=P(t,T)\bigg(d\ln A(t,T)-r_t\,dB(t,T)-B(t,T)\,dr_t+\frac{1}{2}B(t,T)^2\sigma^2r_t\,dt\bigg)\\ &=\ldots\\ &=r_tP(t,T)\,dt-B(t,T)P(t,T)\sigma\sqrt{r_t}\,dW_t \end{align}

Although I have followed the steps for Ito's Lemma, I seem to be missing a detail that will allow some terms to cancel out to produce the final line. Moreover, the functions $A(t,T)$ and $B(t,T)$ are quite complex and I don't think differentiating them would be a good idea.
$$A(t,T)=\bigg[\frac{2h\exp{\{(k+h)(T-t)/2\}}}{2h+(k+h)(\exp{\{(T-t)h\}}-1)}\bigg]^{2k\theta/\sigma^2}$$
$$B(t,T)=\frac{2(\exp{\{(T-t)h\}}-1)}{2h+(k+h)(\exp{\{(T-t)h\}}-1)}$$
$$h=\sqrt{k^2+2\sigma^2}$$
Source: Brigo & Mercurio, Interest Rate Models, 3.2.3

---

You have to start from the original expression for $P$:
$$P(t,T) = \mathbb{E}[e^{-\int_t^T r_s\, ds}\,|\,\mathcal{F}_t]$$
So if you define
$$M_t = e^{-\int_0^t r_s\, ds}P(t,T)$$
this is a martingale. So, since you are in a Brownian filtration,
$$dM_t = \sigma_t\, dW_t$$
It remains to find $\sigma_t$, which will be done by noticing that
$$\sigma^2_t\, dt = d\langle M\rangle_t = \big(e^{-\int_0^t r_s ds}\big)^2 d\langle P(\cdot,T)\rangle_t$$
Using your expression for $d\langle \ln P(t,T)\rangle$, and since $d\langle P(t,T)\rangle=P(t,T)^2\, d\langle \ln P(t,T)\rangle$, you get (using that $P(t,T)>0$)
$$dP(t,T) = r_t P(t,T)\, dt + P(t,T)\sqrt{\frac{d\langle \ln P(t,T)\rangle}{dt}}\,dW_t$$
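To connect this back to the final line of the attempted derivation: substituting the quadratic variation computed in the question into the last formula gives the volatility term explicitly. The sign of the Brownian term is a matter of convention, since $-W$ is also a Brownian motion.

```latex
% From the question, d< ln P(t,T) > = B(t,T)^2 \sigma^2 r_t \, dt, so
\sqrt{\frac{d\langle \ln P(t,T)\rangle}{dt}} = B(t,T)\,\sigma\sqrt{r_t},
\qquad\text{hence}\qquad
dP(t,T) = r_t\,P(t,T)\,dt \;-\; B(t,T)\,\sigma\sqrt{r_t}\,P(t,T)\,dW_t,
% which is exactly the final line of the attempted derivation (the minus
% sign amounts to replacing W by the Brownian motion -W).
```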
http://www.last.fm/music/Flipsyde+&+Piper+feat.+T.A.T.U./+similar
# Similar Artists

8. Yulia Olegovna Volkova (Russian: Юлия Олеговна Волкова born 20 February 1985), better known by the alternative spelling of Julia, is a Russian…
https://www.physicsforums.com/threads/network-detection-anomaly.428724/
# Network Detection Anomaly 1. Sep 13, 2010 ### cvandolson I wanted to get some insight about which direction I should look in about Network Detection anomalies. I have been doing some research and I have found CLAD (Clustering and Anomaly Detection) which is an algorithm, however, I'm trying to find something simpler to understand to implement. Basically, I have a large set of data and I need to find outliers from the normal data. With the set of data I'm working with I'm assuming using just a basic outlier statement wouldn't work very well and was wondering if there are alternatives to something that will produce less errors in the final outcome. I just need an idea to do research and figure out if that is where I need to start.
https://www.gamedev.net/forums/topic/703941-why-have-embedded-scripting-language/
# Why have an embedded scripting language?

## Recommended Posts

Note: this question is along the lines of "if we had our own fresh game engine".

Are scripting languages for a game engine still relevant with hot-reload? What about embedding wasm? Why still have a scripting language for your engine?

##### Share on other sites

Extremely.

On 9/6/2019 at 5:59 PM, Plotnus said:

> Are scripting languages for a game engine still relevant with hot-reload? What about embedding wasm? Why still have a scripting language for your engine?

A problem I'm starting to think about: how do I formally describe all those things that are there to load, display, make sound, their properties, positions, dependencies, interactions, etc.? That'll probably be some sort of markup language, or an integrated scripting language: something the computer can easily be convinced to understand, and that an average human finds easy to type in. But that is only one aspect, I think...

##### Share on other sites

The purpose of game scripting is not to be lazy about compiling when making core components, because that only leaves a slow and buggy hard-coded mess without type-safety, where people can commit code that won't even compile. Scripting is for the tiny pieces of code that are loaded dynamically with a level or asset when needed, to save instruction cache and provide a safe plug-in system for rarely executed code, like dialogues and cinematic events, where text makes more sense than signals being sent through nodes (doors and elevators).

##### Share on other sites

I think scripting languages tend to be easier to learn than full-blown C++ (for example). So they can be used not only by programmers, but by artists and designers as well. That will lighten the load on programmers significantly. Of course, that is only when your artists and designers are willing to do so (you can convince them by adding the word 'technical' to their title, I guess :D).
##### Share on other sites

Example code using a scripting language from my previous project:

```
npc "A good day to you.";
player "A good day to you too."
{
    "How are you this morning?":
        npc "I am very well, thanks for asking.";
    "I like swords.":
        npc "What are you, an npc or something?";
}
```

First, note the simplicity of the syntax. That's a nice benefit of using a scripting language, but no big deal. Also note that the script runs in its own "thread" that runs parallel with the game. This is actually more like a coroutine, which means that I can do without thread synchronization. That's another benefit.

But the main benefit of using this scripting language is this: if the player quits the game at any time, the game is auto-saved. And if the game is auto-saved while a script is running, the state of the script is auto-saved along with the rest of the game state. In other words, if the player is in the middle of a conversation when the game is saved, then the state of the conversation is also saved. That's very powerful, and there's no way to get a similar effect in a traditional programming language. The equivalent in C++ would be an unreadable mess like this:

```cpp
class this_one_conversation : public serializable
{
public:
    bool run(int notification)
    {
        switch (this->state++)
        {
        case 0:
            find_entity("npc")->say("A good day to you.", this);
            return false;
        case 1:
            find_entity("player")->ask("A good day to you too.", this,
                "How are you?", "I like swords.");
            return false;
        case 2:
            if (notification == 0) {
                find_entity("npc")->say("I am fine, thanks for asking.", this);
            } else {
                find_entity("npc")->say("What are you, an npc or something?", this);
            }
            // fall through
        case 3:
            return true; // End conversation by returning true.
        }
    }

    void serialize(serializer& ser)
    {
        ser(this->state);
    }

private:
    int state = 0;
};
```

##### Share on other sites

A scripting language is there to separate data from the code.
If you have, let's say, a conversation like the one above, you can hardcode all the dialogue in the code, and that will work. But then, any time you want to change a line of dialogue, you have to rebuild the whole engine/game. To avoid that, you make a separate file with all the text, which the engine loads at runtime. That way you have separated your data from your code, and you can change your data without touching the code (and without rebuilding the engine).

It's similar with scripts. This way, your scripts become data instead of being code. So if you have a scripted conversation, you can change the rules that are in the script without needing to rebuild the entire engine. You can even completely swap it for another script without touching the engine at all.

There are also all the other benefits, like the scripts being easier to edit by team members who are not programmers, and also easier to find than being "buried somewhere in the code". And if the team is working on more than one project at the same time, they can use the same build of the engine for all projects, with separate scripts and data, which makes engine maintenance a bit easier.
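The "scripts are data" idea from this thread can be sketched in a few lines. This is a hypothetical toy host, not from any particular engine: the host exposes a tiny command API, and the behaviour lives in a text "script" that can be edited or swapped without rebuilding the host.

```python
# Minimal sketch of a host that treats scripts as data.  Each script
# line is "command argument"; the host only knows the command table.
def run_script(source, api):
    for line in source.strip().splitlines():
        command, _, argument = line.partition(" ")
        api[command](argument)   # dispatch to the host-provided handler

lines_spoken = []
api = {"say": lines_spoken.append}

# This string stands in for a script file shipped alongside the binary;
# changing it requires no recompilation of the host.
script = """say A good day to you.
say A good day to you too."""

run_script(script, api)
```

Swapping `script` for a different file changes the conversation without touching `run_script` or the `api` table, which is exactly the data/code separation the post describes.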
https://mathhelpboards.com/threads/upper-semi-continuous-collections.25625/
# Upper Semi-Continuous Collections

#### joypav

##### Active member

$G$ is an upper semi-continuous collection of sets filling up the Hausdorff space $X$.

Problem: No point of $X$ belongs to two elements of $G$.

Proof: What we know:

- $G$ is upper semi-continuous $\implies$ if $g \in G$ and $U$ is an open set containing $g$, then there is an open set $V$ containing $g$ such that each member of $G$ which intersects $V$ lies in $U$
- $G$ "fills up" $X \implies X=\cup G$
- $X$ is Hausdorff $\implies \forall x_1, x_2 \in X, \exists U_1, U_2$ open such that $x_1 \in U_1, x_2 \in U_2, U_1 \cap U_2=\emptyset$

Here's my question... Why can't an element of $X$ be in two $g$'s, say $g_1$ and $g_2$, of the collection $G$? I know that's the question I am being asked, but what I mean is: say $g_1 \subset g_2$. Then $x$ could be in both, and the same open sets could satisfy the requirements for upper semi-continuity for both.

#### joypav

##### Active member

Aha! I was thinking too deeply about it... we only need to show the existence of ONE open set that contains one and not the other. I will post my proof for completeness.

Proof: BWOC, assume $\exists g_1, g_2 \in G$ distinct such that $g_1 \cap g_2 \neq \emptyset$. Consider $y \in g_2$ such that $y \notin g_1$. (WLOG, assume $g_2 \not\subset g_1$, so that such a $y$ exists. If instead $g_2 \subset g_1$, we could just swap $g_1$ and $g_2$ in the proof.)

$X$ Hausdorff $\implies$ for each $x \in g_1$, $\exists U_x, V_x$ open in $X$ such that $x \in U_x, y \in V_x, U_x \cap V_x = \emptyset$.

Let $U = \cup_{x \in g_1} U_x$. Then, obviously, $g_1 \subset U$, where $U$ is an open set of $X$ (it is a union of open sets).

$G$ is upper semi-continuous $\implies \exists V$ open in $X$ such that $g_1 \subset V$ and if $g' \in G$ with $g' \cap V \neq \emptyset$, then $g' \subset U$.

Claim: $g_2 \cap V \neq \emptyset$.

By assumption, $g_1 \cap g_2 \neq \emptyset$, and we know $g_1 \subset V \implies V \cap g_2 \neq \emptyset \implies$ our Claim is true.
So we've shown that $g_2$ does in fact intersect $V$ (hoorah!). $\implies g_2 \subset U$ (by definition of upper semi-continuity) Go back to the element we considered in $g_2$. We know $y \in g_2$. Then, $y \in g_2 \implies y \in \cup_{x \in g_1} U_x \implies y \in U_x$ for some $x \in g_1$. But this is a contradiction, because $y \in V_x$ for every $x$ and $U_x \cap V_x = \emptyset$.
https://www.learncram.com/ml-aggarwal/ml-aggarwal-class-6-solutions-for-icse-maths-chapter-10-ex-10-1/
# ML Aggarwal Class 6 Solutions for ICSE Maths Chapter 10 Basic Geometrical Concept Ex 10.1

Question 1. How many lines can be drawn through a given point?
Solution:

Question 2. How many lines can be drawn through two distinct given points?
Solution:

Question 3. How many lines can be drawn through three collinear points?
Solution:

Question 4. Mark three non-collinear points A, B and C in your notebook. Draw lines through these points taking two at a time and name these lines. How many such different lines can be drawn?
Solution:

Question 5. Use the figure to name:
(i) Five points
(ii) A line
(iii) Four rays
(iv) Five line segments
Solution:

Question 6. Use the figure to name:
(i) Line containing point E.
(ii) Line passing through A.
(iii) Line on which point O lies.
(iv) Two pairs of intersecting lines.
Solution:

Question 7. From the given figure, write
(i) collinear points
(ii) concurrent lines and their points of concurrence.
Solution:

Question 8. In the given figure, write
(i) all pairs of parallel lines.
(ii) all pairs of intersecting lines.
(iii) concurrent lines.
(iv) collinear points.
Solution:

Question 9. Count the number of line segments drawn in each of the following figures and name them:
Solution:

Question 10.
(i) Name all the rays shown in the following figure whose initial points are A, B and C respectively.
(ii) Is ray AB different from ray AD?
(iii) Is ray CA different from ray CE?
(iv) Is ray BA different from ray CA?
(v) Is ray ED different from ray DE?
Solution:

Question 11. Consider the following figure of line $$\overleftrightarrow { MN }$$. Say whether the following statements are true or false in the context of the given figure.
(i) Q, M, O, N and P are points on the line $$\overleftrightarrow { MN }$$.
(ii) M, O and N are points on a line segment $$\overline{\mathrm{MN}}$$.
(iii) M and N are end points of line segment $$\overline{\mathrm{MN}}$$.
(iv) O and N are end points of line segment $$\overline{\mathrm{OP}}$$.
(v) M is a point on the ray $$\overrightarrow{\mathrm{OP}}$$.
(vi) M is one of the end points of line segment $$\overline{\mathrm{QO}}$$.
(vii) Ray $$\overrightarrow { OP }$$ is the same as ray $$\overrightarrow { OM }$$.
(viii) Ray $$\overrightarrow { OM }$$ is not opposite to ray $$\overrightarrow { OP }$$.
(ix) Ray $$\overrightarrow { OP }$$ is different from ray $$\overrightarrow { QP }$$.
(x) O is not an initial point of ray $$\overrightarrow { OP }$$.
(xi) N is the initial point of $$\overrightarrow { NP }$$ and $$\overrightarrow { NM }$$.
Solution:
https://www.gamedev.net/forums/topic/417825-compiling--linking-too-long/
# Compiling + Linking too long...

## Recommended Posts

Hi GDNet, We're experiencing time issues in our project due to very long compiling times. The fact is that on a single core/dual core machine, it takes about 20 min to compile, 2 min to link. On a quad core (in fact, a hyper-threaded dual core CPU), it takes about 30 minutes. We suspect that disk access is what might be slowing us down in our process, though our machines are equipped with RAID arrays of 10k RPM hard drives. My question is: does anyone know a tool that might help us to monitor system activity during our build process (which BTW uses makefiles)? I'd like to have a report on CPU activity, disk activity, how much time is spent seeking, reading, writing and so on. Any idea would be appreciated. Thank you.

##### Share on other sites

What does it matter? You can't reduce the disk usage of your compiler. Better to focus on how to reduce interdependencies between your modules, as that reduces average compile times (although not full rebuild compile times). Precompiled headers can, occasionally, give you a benefit if used properly. You can also look into distributed compiling tools like Incredibuild. But, to be honest, thirty minutes is nothing.

##### Share on other sites

I understand that disk usage can't be reduced, but the point is that the fastest machine we have (that is, an overclocked hyper-threaded dual core) is compiling/linking with 50% more time than the other machines, which led me to think that CPU is not the limiting factor. And I need to find out what it is. I agree too that 30 min is not that much, but the nature of our activity (that is, port programming, not original development) requires our developers to make tiny modifications/hacks and test them very frequently. Due to indeed nasty dependencies between files, testing even a slight modification sometimes requires a full rebuild of the project and wastes precious time. In that context, 30 minutes is a lot.
Unfortunately, we have no control over the original project architecture and cannot reduce those dependencies. Thus our only hope is to speed things up as much as we can, and to do that optimization we need to "profile" what our bottleneck is. Also, we've tried Incredibuild, but since that tool is not able to determine file dependencies from a makefile, parallel compilation does not occur (all machines but one are ignored).

##### Share on other sites

Quote: Original post by janta: Hi GDNet, we're experiencing time issues in our project due to very long compiling times. [...]

In order to enable us to truly help you, you'll have to provide some further background information about your project's structure and the build system you are using right now, as well as the build parameters/settings you are using. You said, for example, that you were using makefiles. If you are referring to GNU makefiles, there is the possibility of using parallel builds ("make -j x", where 'x' is the number of parallel build processes you want make to spawn), provided the makefiles (and the project itself) are structured accordingly. This can significantly reduce build times, as several processes will be started, each building individual targets, rather than one single build process building all targets sequentially.
Likewise, you should be able to verify whether I/O activity is truly a limiting factor by running the whole build process on a memory disk and comparing the resulting times. Apart from that, you really didn't provide much info about the platform the build system is running on right now. Under Linux/Unix, for example, you could easily retrieve all (and much more) of the required data by simply running "top" in a separate console/window. In general, however, it is crucial to realize that compilation is usually an inherently sequential process, so it rarely lends itself to parallelization on its own. Any parallelization you want to achieve on SMP platforms should first be reflected in the project's source code structure and in the usage/configuration of the build system: compilers themselves are usually entirely single-threaded and by design only rarely able to make use of more than one core. Thus, corresponding refactoring (source code restructuring) is essential to ensure that build targets are kept independent from each other, reducing inter-dependencies so that the build system can build as many individual targets simultaneously as possible. Even if you are using Linux/(g)make, you may find that you are not yet using these tools optimally. You may want to do a Google search for "parallel builds" with make for further pointers on how to achieve this.

##### Share on other sites

Quote: Original post by janta: I understand that disk usage can't be reduced, but the point is that the fastest machine we have (that is, an overclocked hyper-threaded dual core) is compiling/linking with 50% more time than the other machines, which led me to think that CPU is not the limiting factor.

or that the multi-core CPU platform simply isn't being used optimally?

Quote: And I need to find out what it is.

How large is the source tree to be built? What are the build/compiler settings you are using?
Are you using precompiled headers where appropriate?

Quote: I agree too that 30 min is not that much, but the nature of our activity (that is, port programming, not original development) requires our developers to make tiny modifications/hacks and test them very frequently. Due to indeed nasty dependencies between files, testing even a slight modification sometimes requires a full rebuild of the project and wastes precious time. In that context, 30 minutes is a lot. Unfortunately, we have no control over the original project architecture and cannot reduce those dependencies. Thus our only hope is to speed things up as much as we can, and to do that optimization we need to "profile" what our bottleneck is.

If you can't do that, then running the whole build in-memory might really be a viable option to improve build times significantly, provided of course that your build machine has sufficient RAM. Using a cron job, you could still synchronize everything regularly.

Quote: Also, we've tried Incredibuild, but since that tool is not able to determine file dependencies from a makefile, parallel compilation does not occur (all machines but one are ignored).

Well, it seems as though you are already using parallel compilation? If that's the case, depending on the parameters you use, THIS could actually be your limiting factor: if you allow make to start too many jobs for your particular platform, this can also significantly increase overall build times. Depending on your platform (RAM/CPU usage), you may want to use a more appropriate number of jobs; the GNU make tutorial has more info on this, too.

##### Share on other sites

Build process bottleneck monitoring is probably going to be platform dependent, so it'd help if you name one. My advice on optimizing compile times:

1) You need a system that will only recompile affected files.
I didn't like any of the ones out there, but I considered it important enough that I built my own build system by hand (using Ruby scripts). Automake may have something like this (I forget); SCons also comes to mind. Presuming you don't want to manually hand-manage your Makefile for every dependency update, and don't want to do a full rebuild every time (you don't), this is a must. IDEs like VS2k5 and Eclipse's CDT plugin have options for this in their automatic project management systems. Far fewer forced full recompiles as a result. Like, basically none.

WRT building your own: I use hidden files associated with every source file (including headers) to keep track of what they #include. Rebuilds of a given source file only happen if the source file or any of the files listed have a more recent modification timestamp than this hidden file (which is then touched at the end of a successful build).

2) If the above isn't enough, using Pimpl may help (or other methods of decoupling implementation/behavior updates from header file changes, minimizing rebuilds as per above)

3) If your compiler has an option for incremental linking, turn it on if it isn't already!

4) If you're using boost::spirit grammars in many places in your code (or other insanely templated amounts of code), use explicit template instantiation instead of leaving everything to implicit processes.

5) ...

6) $Profit$

##### Share on other sites

Quote: Original post by janta: We suspect that disk access is what might be slowing us down in our process, though our machines are equipped with RAID arrays of 10k RPM hard drives.

Just to be sure: hardware or software RAID? You'll usually want HARDWARE RAID and a reasonable controller with lots of on-board RAM to provide swap RAM directly on the controller. Also, your RAID setup (0, 1, 5, etc.?) could theoretically also be a limiting factor. Likewise, if you are remotely building via console/network or even network storage (NFS), this may be a factor, too.
In general, if I were in a situation where I had to regularly modify/rebuild a complex source tree with lots of interdependencies that may/can not be resolved, I'd probably at least take the "RAMDISK" approach for starters and make sure I use the build system optimally.

##### Share on other sites

You don't mention how much memory you have. You could have 128 100-petahertz processors, but your performance will be pretty poor if all they do is wait for the swapper to thrash. Many compilers, including and especially certain versions of GCC, are memory pigs. More memory means faster compiles. Also, if you're using GCC, try the '-pipe' switch to avoid writing temporary files (it'll use pipes between stages instead). This will not help if you're memory bound (page thrashing) but will help if you're disk bound. Honestly, though, properly factored source files, good dependency checking (automake works wonders), and precompiled headers will make palpable differences.

##### Share on other sites

Just FYI: some people who contribute on forums such as these actually find it discouraging when the OP they tried to help doesn't come back to provide the requested feedback.
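The timestamp-based rebuild check described a few posts up (a hidden stamp file per source, rebuild when any tracked file is newer) can be sketched as follows. This is a minimal Python illustration by this editor, not the poster's actual Ruby implementation:

```python
import os

def needs_rebuild(stamp_path, tracked_files):
    """Decide whether a source file must be recompiled.

    `stamp_path` is the hidden stamp file touched at the end of the last
    successful build; `tracked_files` is the source file plus every header
    it was seen to #include. Rebuild if the stamp is missing, or if any
    tracked file was modified after the stamp was last touched.
    """
    if not os.path.exists(stamp_path):
        return True  # never built successfully
    stamp_mtime = os.path.getmtime(stamp_path)
    return any(os.path.getmtime(f) > stamp_mtime for f in tracked_files)
```

A real tool would also regenerate the tracked-file list whenever the #include graph changes, and touch the stamp only after a successful compile.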
http://clay6.com/qa/2849/find-the-equation-of-the-plane-which-is-perpendicular-to-the-plane-and-whic
# Find the equation of the plane which is perpendicular to the plane $5x+3y+6z+8=0$ and which contains the line of intersection of the planes $x+2y+3z-4=0 \: and \: 2x+y-z+3=0.$
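A worked sketch using the standard family-of-planes method (the derivation below is this editor's addition, not part of the original page):

```latex
% Any plane through the line of intersection can be written as
(x + 2y + 3z - 4) + \lambda (2x + y - z + 3) = 0,
% with normal vector (1 + 2\lambda,\; 2 + \lambda,\; 3 - \lambda).
% Perpendicularity to 5x + 3y + 6z + 8 = 0 means the normals are orthogonal:
5(1 + 2\lambda) + 3(2 + \lambda) + 6(3 - \lambda) = 0
\;\Longrightarrow\; 29 + 7\lambda = 0
\;\Longrightarrow\; \lambda = -\tfrac{29}{7}.
% Substituting back and multiplying through by 7:
7(x + 2y + 3z - 4) - 29(2x + y - z + 3) = 0
\;\Longrightarrow\; 51x + 15y - 50z + 115 = 0.
% Check: (51, 15, -50)\cdot(5, 3, 6) = 255 + 45 - 300 = 0.
```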
https://plainmath.net/26358/do-the-laplace-transform-of-l-left-t-e-3t-right
Question

Find the Laplace transform $$L\left\{t-e^{3t}\right\}$$.

Laplace transform

2021-09-20

Step 1

We know the Laplace transforms
$$L\{t\}=\int_{0}^{\infty}e^{-st}\,t\,dt=\frac{1}{s^{2}}$$
$$L\{e^{at}\}=\int_{0}^{\infty}e^{-st}\,e^{at}\,dt=\frac{1}{s-a},\quad s>a$$

So, by linearity of the Laplace transform,
$$L\{t-e^{3t}\}=L\{t\}-L\{e^{3t}\}=\frac{1}{s^{2}}-\frac{1}{s-3}=\frac{(s-3)-s^{2}}{s^{2}(s-3)}$$

Hence
$$L\{t-e^{3t}\}=\frac{(s-3)-s^{2}}{s^{2}(s-3)},\quad s>3$$
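As a quick numerical sanity check on the closed form (this editor's addition; the sample value $s=5$ and the integration cutoff are arbitrary choices):

```python
import math

def laplace_numeric(f, s, upper=40.0, n=200_000):
    """Trapezoid-rule approximation of integral_0^upper e^{-s t} f(t) dt.
    For s > 3 the integrand of f(t) = t - e^{3t} decays like e^{-(s-3)t},
    so a finite upper limit is a good stand-in for infinity."""
    h = upper / n
    total = 0.5 * (f(0.0) + math.exp(-s * upper) * f(upper))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

s = 5.0
numeric = laplace_numeric(lambda t: t - math.exp(3.0 * t), s)
closed_form = 1.0 / s**2 - 1.0 / (s - 3.0)  # 1/25 - 1/2 = -0.46
print(numeric, closed_form)
```

The two values agree to several decimal places, as expected from the formula $\frac{1}{s^2}-\frac{1}{s-3}$ at $s=5$.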
http://scikit-bio.org/docs/0.5.0/generated/skbio.sequence.DNA.complement_map.html
# skbio.sequence.DNA.complement_map¶ DNA.complement_map[source] Return mapping of nucleotide characters to their complements. State: Stable as of 0.4.0. Returns: dict Mapping of each character to its complement. Notes Complements cannot be defined for a generic nucleotide sequence because the complement of A is ambiguous. Thanks, nature...
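As a usage illustration, here is a minimal hand-rolled sketch of such a mapping, restricted to the four unambiguous DNA bases. This is this editor's toy dict, not skbio's actual table, which also covers degenerate IUPAC characters:

```python
# Toy complement map: unambiguous DNA bases only.
# (skbio's real complement_map also includes degenerate codes
# such as R, Y, N, and gap characters.)
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(seq: str) -> str:
    """Complement each base; str.translate applies the map per character."""
    return seq.translate(str.maketrans(COMPLEMENT))

def reverse_complement(seq: str) -> str:
    """Complement, then reverse: the usual convention for the opposite strand."""
    return complement(seq)[::-1]

print(complement("ATGC"))          # TACG
print(reverse_complement("ATGC"))  # GCAT
```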
https://uwaterloo.ca/pure-mathematics/events/geometry-working-seminar-55?mini=2021-11
# Geometry Working Seminar

Tuesday, January 21, 2020 — 11:00 AM EST

Aidan Patterson, Department of Pure Mathematics, University of Waterloo

"Thimm’s Trick and the Gelfand-Zeitlin System"

Thimm’s trick provides a method by which integrable systems constructed on the dual of a Lie algebra $\mathfrak{g}^{*}$ may be pulled back to integrable systems on a Hamiltonian G-space $(M,\omega, G, \Phi)$ using the moment map $\Phi$. The Gelfand-Zeitlin system represents “the best we can hope for” in terms of the integrable system that we may construct on $\mathfrak{g}^{*}$, and is itself a completely integrable system constructed on $\mathfrak{u}(n)^{*}$. In this talk we will prove Thimm’s trick, and construct the Gelfand-Zeitlin system. Time permitting, we will talk about some generalizations of Thimm’s trick to Hamiltonian G-spaces with Lu-moment maps.

MC 5413
https://www.pendiktercume.com.tr/furtive-cautious-mkqhxq/difference-between-ketones-and-aldehydes-9b899f
Difference Between Aldehydes and Ketones

Aldehydes and ketones constitute an important class of organic compounds containing the carbonyl functional group, C=O, in which a carbon atom is double-bonded to an oxygen atom. The carbonyl carbon is sp²-hybridized, and it is the carbonyl group that largely determines the chemistry of both families.

Chemical structure. The difference between an aldehyde and a ketone is the presence of a hydrogen atom attached to the carbon-oxygen double bond in the aldehyde. An aldehyde is an organic compound with the general formula R-CHO: its carbonyl group occurs at the end of the carbon chain, so the carbon at the end of the chain carries the double bond to oxygen. A ketone has the general formula R-CO-R': its carbonyl group occurs between two carbon atoms, somewhere in the middle of the molecule, never at the end. Formaldehyde (H₂C=O), the simplest and most common aldehyde, deviates from the general formula by having a hydrogen atom in place of the R group.

Reactivity and oxidation. The presence of that hydrogen atom makes aldehydes very easy to oxidise: even mild oxidising agents convert them to carboxylic acids. Ketones can only be oxidised further if a C-C bond is broken, a fairly energetic process, so they resist oxidation unless strong oxidising agents such as potassium manganate(VII) solution are used. Aldehydes are therefore more reactive than ketones, and this difference in reactivity is the basis of the classic qualitative tests: by selecting a sufficiently weak oxidising agent, we can oxidise the aldehyde but not the ketone.

1. Tollens' test: the reagent comprises complex silver(I) ions made from silver nitrate solution. With an aldehyde, the colourless solution yields a grey precipitate of silver, often deposited as a coating on the glass, hence the name "silver mirror test". Ketones give no reaction.

2. Fehling's test: the reagent contains complex copper(II) ions in an alkaline solution, which gives it a blue colour. Aliphatic aldehydes on treatment with Fehling's solution give a reddish-brown precipitate of copper(I) oxide: $RCHO + 2Cu^{2+}(aq) + 2H_{2}O \rightarrow RCOOH + Cu_{2}O(s) + 4H^{+}(aq)$. Aromatic aldehydes and ketones do not.

Other tests that help distinguish them include Schiff's test and the sodium hydroxide test. Carbonyl compounds can also be identified by preparing solid derivatives; this works well because the derivatives of different aldehydes and ketones have melting points that are many degrees apart.

Nomenclature. In the IUPAC system the carbonyl group must be part of the parent chain, which is numbered from the end nearer the group so that the carbonyl carbon gets the lowest possible number. Aldehydes replace the final "-e" of the corresponding alkane with the suffix "-al" (ethanal, propanal); since the carbonyl carbon of an aldehyde is always in position 1, its position is not specified in the name. Ketones take the suffix "-one" (propanone). The compound C₆H₅CHO is commonly named benzaldehyde rather than benzenecarbaldehyde.

Preparation and behaviour. Aldehydes are produced by oxidation of primary alcohols, and ketones by oxidation of secondary alcohols. Both undergo nucleophilic addition reactions at the carbonyl carbon, and carbonyl compounds with an α-hydrogen undergo keto-enol tautomerism: a strong base removes the α-hydrogen, the hydrogen attached to the carbon next to the carbonyl group. Aldehydes are usually found in volatile compounds such as fragrances, and occur in plants, animals, microorganisms and the human body.
An aldehyde combines to an alkyl on one side and a Hydrogen atom on the other, while the ketones are known for their double alkyl bonds on both sides. This difference in reactivity is the basis for the distinction of aldehydes and ketones. Cinnemaldehyde (present in cinnamon), citral (in lemongrass) are some of the naturally-occurring aldehydes. Why are ketones not easily oxidised? Aldehyde has a carbonyl group. Therefore, it's better to have a hot bath before you start experimenting on different organic compounds of aldehydes and ketones for faster, effective results. Ketosis is when our body burns ketones for fuel, and these ketones are produced through a process called ketogenesis. Aldehydes upon reacting with the Tollen's reagent gives: RCHO + $2Ag^{+} (aq) + H_{2}O \rightarrow RCOOH + 2Ag(s) + 2H^{+}$. 3. They both contain the C=O double bond, they both are polarized and have a δ+ charge on carbon and a δ- charge on oxygen. However, it can only take place when there's a breaking down of the Carbon bonds present in the ketones, destroying their shape completely. The ketones are less reactive to the oxidation process since it lacks the Hydrogen atom, unlike the aldehydes. For ketones, carvone (present in spearmint and caraway), cortisone (adrenal hormone), are some naturally found ketones. For Ketones, there's no change observed in the natural blue solution of the reagents. Tollen's Test: Aldehydes gives positive Tollen's test to give silver mirror while ketones do not give any reaction. Ketones don't have that hydrogen. The ketones, on the other hand, are found in sugar and get produced by our liver. Both can be done artificially although there are many natural sources like. Fehling's Test. Despite both having a carbon atom at the centre, the fundamental difference between an aldehyde and ketone lies in their distinct chemical structure. What type of compound is produced when an aldehyde is oxidised? 2. 
Both classes are prepared by oxidation of alcohols. Aldehydes form when primary alcohols are oxidized and, because they oxidize further to carboxylic acids, the aldehyde is usually distilled out of the reaction mixture as it forms. Ketones are obtained by direct oxidation of secondary alcohols with a suitable oxidizing agent:

RR'CH-OH → R-CO-R' + 2e- + 2H+

Mild oxidizing agents are enough for aldehydes, for example alkaline solutions of Cu2+ (Fehling's solution) or Ag+ (Tollens' reagent); ketones are only attacked by conditions harsh enough to break the carbon chain.

In IUPAC nomenclature, the "e" of the corresponding alkane is replaced with "al" for aldehydes (ethanal, propanal) and with "one" for ketones (propanone). The stem name comes from the longest continuous carbon chain that contains the functional group, numbered so that the carbonyl carbon gets the lowest possible number; since the carbonyl carbon of an aldehyde is always in position 1, its position is not specified in the name. Where the aldehyde group attaches directly to a ring, the suffix "carbaldehyde" is used, although C6H5CHO is commonly called benzaldehyde rather than benzenecarbaldehyde. Aromatic aldehydes have the aldehyde group attached to an aromatic ring; aliphatic aldehydes do not.

Because the carbonyl oxygen can form hydrogen bonds with water, low-molecular-weight aldehydes and ketones are soluble in water. They cannot hydrogen-bond with each other as strongly as alcohols can, however, so their boiling points are lower than those of the corresponding alcohols.
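The structural rule above, that a carbonyl carbon bonded to at least one hydrogen gives an aldehyde while one bonded to two carbons gives a ketone, can be sketched as a tiny classifier. This is an illustration only; the function name and the string encoding of substituents are mine, not anything from the article:

```python
# Sketch only (names and encoding are mine): classify a carbonyl compound
# from the two substituents on its carbonyl carbon. "H" stands for a
# hydrogen substituent; anything else is read as an R group (a carbon chain).
def classify_carbonyl(sub1: str, sub2: str) -> str:
    """Return 'aldehyde' if at least one substituent is hydrogen, else 'ketone'."""
    return "aldehyde" if "H" in (sub1, sub2) else "ketone"

print(classify_carbonyl("H", "CH3"))    # ethanal (acetaldehyde): aldehyde
print(classify_carbonyl("CH3", "CH3"))  # propanone (acetone): ketone
```

Formaldehyde, with two hydrogens on the carbonyl carbon, is still classified as an aldehyde by this rule, which matches the chemistry.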
Infrared spectroscopy also separates the two. Both show a very prominent C=O stretch in the 1700 cm-1 region, but aldehydes additionally show a pair of C-H stretching bands near 2820 and 2720 cm-1 that ketones lack. Because the carbonyl-stretch values reported for aldehydes and ketones overlap, a measured peak can sit right between the two literature values; a traditional alternative is to prepare a solid derivative, which works well because the derivatives of different carbonyl compounds have melting points that are many degrees apart. There is also a difference in inherent reactivity: alkyl groups are electron-donating (+I inductive effect), so the two alkyl groups of a ketone make its carbonyl carbon less electron-poor than an aldehyde's, which is another reason aldehydes are the more reactive class. Finally, water solubility falls off as the hydrocarbon chain grows and the molecules become increasingly hydrophobic.

The word "ketone" also appears in biochemistry and industry. Ketone bodies are produced in the liver by a process called ketogenesis during periods of starvation or carbohydrate restriction, and ketosis is the state in which the body burns these ketones for fuel. Industrially, ketones such as acetone are widely used as solvents across many manufacturing processes.
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.dmj/1121448866
### On a question of Erdős and Moser

B. Sudakov, E. Szemerédi, and V. H. Vu

Source: Duke Math. J. Volume 129, Number 1 (2005), 129-155.

#### Abstract

For two finite sets of real numbers $A$ and $B$, one says that $B$ is sum-free with respect to $A$ if the sum set $\{b+b'\mid b, b'\in B, b\neq b'\}$ is disjoint from $A$. Forty years ago, Erdős and Moser posed the following question. Let $A$ be a set of $n$ real numbers. What is the size of the largest subset $B$ of $A$ which is sum-free with respect to $A$? In this paper, we show that any set $A$ of $n$ real numbers contains a set $B$ of cardinality at least $g(n) \ln n$ which is sum-free with respect to $A$, where $g(n)$ tends to infinity with $n$. This improves earlier bounds of Klarner, Choi, and Ruzsa and is the first superlogarithmic bound for this problem. Our proof combines tools from graph theory together with several fundamental results in additive number theory such as Freiman's inverse theorem, the Balog-Szemerédi theorem, and Szemerédi's result on long arithmetic progressions. In fact, in order to obtain an explicit bound on $g(n)$, we use the recent versions of these results, obtained by Chang and by Gowers, where significant quantitative improvements have been achieved.

Primary Subjects: 11P70
Secondary Subjects: 11B75

Permanent link to this document: http://projecteuclid.org/euclid.dmj/1121448866
Digital Object Identifier: doi:10.1215/S0012-7094-04-12915-X
Mathematical Reviews number (MathSciNet): MR2155059

### References

A. Balog and E. Szemerédi, A statistical theorem of set addition, Combinatorica 14 (1994), 263--268.
A. Baltz, T. Schoen, and A. Srivastav, Probabilistic construction of small strongly sum-free sets via large Sidon sets, Colloq. Math. 86 (2000), 171--176.

M.-C. Chang, A polynomial bound in Freiman's theorem, Duke Math. J. 113 (2002), 399--419.

S. L. G. Choi, On a combinatorial problem in number theory, Proc. London Math. Soc. (3) 23 (1971), 629--642.

P. Erdős, "Extremal problems in number theory" in Proceedings of Symposia in Pure Mathematics, Vol. VIII, Amer. Math. Soc., Providence, 1965, 181--189.

P. Erdős and E. Szemerédi, "On sums and products of integers" in Studies in Pure Mathematics, Birkhäuser, Basel, 1983, 213--218.

G. A. Freiman, Foundations of a Structural Theory of Set Addition, Transl. Math. Monogr. 37, Amer. Math. Soc., Providence, 1973.

W. T. Gowers, A new proof of Szemerédi's theorem for arithmetic progressions of length four, Geom. Funct. Anal. 8 (1998), 529--551.

W. T. Gowers, A new proof of Szemerédi's theorem, Geom. Funct. Anal. 11 (2001), 465--588; Erratum, Geom. Funct. Anal. 11 (2001), 869.

R. K. Guy, Unsolved Problems in Number Theory, 2nd ed., Problem Books in Math., Unsolved Probl. in Intuitive Math. 1, Springer, New York, 1994.

M. B. Nathanson, Additive Number Theory: Inverse Problems and the Geometry of Sumsets, Grad. Texts in Math. 165, Springer, New York, 1996.

I. Z. Ruzsa, Generalized arithmetical progressions and sumsets, Acta Math. Hungar. 65 (1994), 379--388.

I. Z. Ruzsa, Sum-avoiding subsets, to appear in Ramanujan J.

E. Szemerédi, On sets of integers containing no $k$ elements in arithmetic progression, Acta Arith. 27 (1975), 199--245.

E. Szemerédi and V. H. Vu, Finite and infinite arithmetic progressions in sumsets, to appear in Ann. of Math. (2), preprint, 2003.
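The notion defined in the abstract above, a subset $B$ of $A$ being sum-free with respect to $A$, can be explored by brute force for very small sets. This is a sketch with helper names of my own, exponential in $|A|$ and useful only as an illustration of the definition:

```python
from itertools import combinations

# B is sum-free with respect to A when {b + b' : b, b' in B, b != b'}
# is disjoint from A (sums of two *distinct* elements of B).
def is_sum_free_wrt(B, A):
    A = set(A)
    return all(b1 + b2 not in A for b1, b2 in combinations(B, 2))

def largest_sum_free_subset(A):
    """Largest subset B of A that is sum-free w.r.t. A (brute force)."""
    for r in range(len(A), 0, -1):          # try sizes from large to small
        for B in combinations(A, r):
            if is_sum_free_wrt(B, A):
                return list(B)
    return []

print(largest_sum_free_subset([1, 2, 3, 4, 5]))  # one maximal answer, of size 3
```

For $A = \{1, \dots, 5\}$ no 4-element subset works (each contains two elements summing into $A$), so the answer has size 3, consistent with the logarithmic-type lower bounds discussed in the paper being asymptotic statements.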
https://mail.astarmathsandphysics.com/o-level-additional-maths/307-differentiation-the-product-rule.html
## Differentiation - The Product Rule

You may know how to differentiate simple functions. Generally, functions are built out of these simple functions to make more complicated functions, and we must learn to differentiate these more complicated functions too. The simplest way two functions can be combined to make a more complicated function is to multiply them. The result can then be differentiated using the product rule.

The Product Rule: if a function $h$ consists of two simpler functions $f$ and $g$ multiplied together, so that $h(x) = f(x)g(x)$, then

$$h'(x) = f'(x)g(x) + f(x)g'(x)$$

It is a good habit to write down $f$, $f'$, $g$ and $g'$ first; then you can just substitute them into the expression for $h'(x)$.

The product rule can be used repeatedly with any number of products. If a function $h$ consists of three simpler functions $f$, $g$ and $k$ multiplied together, then

$$h'(x) = f'(x)g(x)k(x) + f(x)g'(x)k(x) + f(x)g(x)k'(x)$$
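As a quick numerical sanity check of the product rule, one can compare the rule's answer against a central-difference estimate of the derivative. This sketch is mine and the chosen functions are arbitrary:

```python
import math

# For h(x) = f(x) g(x), the product rule says h'(x) = f'(x) g(x) + f(x) g'(x).
# Compare that against a numerical central-difference estimate of h'(x).
def product_rule(f, fp, g, gp, x):
    return fp(x) * g(x) + f(x) * gp(x)

def central_diff(h, x, eps=1e-6):
    return (h(x + eps) - h(x - eps)) / (2 * eps)

f, fp = math.sin, math.cos                     # f(x) = sin x,  f'(x) = cos x
g, gp = (lambda x: x ** 2), (lambda x: 2 * x)  # g(x) = x^2,    g'(x) = 2x

x = 1.3
h = lambda t: f(t) * g(t)
assert abs(product_rule(f, fp, g, gp, x) - central_diff(h, x)) < 1e-5
```

The two estimates agree to well within the tolerance, as the rule guarantees for any differentiable pair of factors.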
https://www.hpmuseum.org/forum/showthread.php?tid=10198&pid=91887&mode=threaded
Calculators that allow direct operations on data stored on persistent storage? 02-24-2018, 11:15 AM (This post was last modified: 02-24-2018 11:22 AM by pier4r.) Post: #7 pier4r Senior Member Posts: 2,108 Joined: Nov 2014

RE: Calculators that allow direct operations on data stored on persistent storage?

(02-19-2018 03:31 AM)Garth Wilson Wrote:  The HP-71 does what I think you're looking for. Files reside in RAM (not flash or EEPROM), and can be expanded or contracted easily, and it's pretty transparent from the user's perspective.

Yes, exactly, thanks for the info. Anyway, some limitations on my side: no 71B (and no budget for one), the RAM in the 50g is already the same size, and no "modern" connection to a computer as far as I know. Anyway, the concept is exactly that, more or less.

(02-23-2018 10:09 PM)Claudio L. Wrote:  I think you want some kind of virtual memory management. With VMM you CAN have the whole thing in RAM, or at least you think you do, while the operating system swaps data to disk in/out as you use your data.

Yes, thanks for finding the technical term. Also thanks for your post in general.

Quote: You should probably start thinking of your "paging" solution before you run out of space.

Yes, from the info that I have so far, yes. In my case performance is not that much of a hassle: as long as I can store the data in the object within 30 seconds, I am willing to wait; other people maybe not.

If I understood correctly, newRPL has a sort of virtual memory management, or anyway I can work on data that is stored directly on the SD card without it being completely recalled into RAM, right? And sure, the format is also important, as in every case the solution has to fit the available resources (deadline, time, skill, willpower, hardware, etc.).

I guess I will slowly refactor from using a directory of lists in RAM to a directory of incremental lists on the SD card, with the last increment in RAM. Similar to what Claudio suggested.
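A rough sketch of that incremental-list refactor (newest increment kept in RAM, full increments flushed to files on persistent storage) might look like the following Python; the class name, chunk size, and file layout are all illustrative assumptions, not actual RPL code:

```python
import os
import tempfile

class PagedLog:
    """Append-only log: only the newest chunk lives in RAM; full chunks
    are flushed to their own files on persistent storage."""

    def __init__(self, directory, chunk_limit=64):
        self.directory = directory
        self.chunk_limit = chunk_limit  # entries per chunk (illustrative)
        self.current = []               # the in-RAM increment
        self.chunks_written = 0

    def append(self, entry):
        self.current.append(entry)
        if len(self.current) >= self.chunk_limit:
            self.flush()

    def flush(self):
        """Write the in-RAM increment to its own file and free the RAM."""
        path = os.path.join(self.directory,
                            "chunk_%04d.txt" % self.chunks_written)
        with open(path, "w") as f:
            for entry in self.current:
                f.write("%s\n" % entry)
        self.chunks_written += 1
        self.current = []

    def scan(self):
        """Yield all entries, oldest chunk first, one file at a time,
        so the full data set is never held in RAM at once."""
        for i in range(self.chunks_written):
            path = os.path.join(self.directory, "chunk_%04d.txt" % i)
            with open(path) as f:
                for line in f:
                    yield line.rstrip("\n")
        for entry in self.current:
            yield entry

# Small demo: 25 entries with a chunk limit of 10 leave two chunks on
# "disk" and five entries in RAM, but scan() still sees all 25.
log = PagedLog(tempfile.mkdtemp(), chunk_limit=10)
for i in range(25):
    log.append("entry %d" % i)
entries = list(log.scan())
print(log.chunks_written, len(log.current), len(entries))  # 2 5 25
```

Analysis passes then become sequential `scan()` traversals, which trades speed for a bounded RAM footprint, which seems to be the acceptable trade-off here.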
Strings & co would maybe be better (see space optimization), but then the helper routines may grow larger and I am not sure whether I will have the time to maintain them every time I find a bug. For example, my data collection started in June 2017 (I mentioned it sometimes in other topics) and the directory was like 2-3 KB. I predicted 80-100 bytes more every day, but my prediction is not really holding, or rather it is more complicated. For the moment I have collected about 250 points in every category. The directory (including backup routines, which changed over time as I fixed them) is 22 kilobytes. The average is 88 bytes added per day, but the data added recently is bigger than before because I added some new categories. The helper routines for the moment require at least 2 times the space free to be sure that no conflicts arise (because the code is not extra refined). So I need 66 KB in total. Trying to overestimate (but not too much), with 250 bytes added per day I will have 113 KB at the end of the year and then game over, since I have around 200 KB of RAM free for this task. Therefore I asked about calculators with this sort of "transparent" storage. If I use an optimized format like strings, I may delay the problem, but it will happen eventually. I guess I need to start on the paging solution, to keep the data in RAM under 30 KB. I also thought to just save the data on the PC every now and then and reset the counters on the 50g, but in theory I'd like to analyze the data on the 50g itself too; so while I back it up on the PC, I don't want to move it off the calculator. Another thing I discovered is that the backup on the SD card, in the form of HPDIR, is not easy to read on the PC. So I need to convert all the full backups of the directory (one for every day, so I have 250 of them) to string format to be able to read them easily on the PC. The data is like the following (yes, somewhere I stored integers instead of reals, I was in the wrong mode).
I'll convert them eventually (code makes the panel too long on the horizontal axis :/ ) Quote:{ 8.062017 9.062017 10.062017 11.062017 12.062017 13.062017 14.062017 15.062017 16.062017 17.062017 18.062017 19.062017 20.062017 21.062017 22.062017 23.062017 24.062017 25.062017 26.062017 27.062017 28.062017 29.062017 30.062017 1.072017 2.072017 3.072017 4.072017 5.072017 6.072017 7.072017 8.072017 9.072017 10.07207 11.072017 12.07207 13.072017 14.072017 15.072017 16.072017 17.072017 18.072017 19.072017 20.072017 21.072017 22.072017 23.072017 24.072017 25.072017 26.072017 27.072017 28.072017 29.072017 30.072017 31.072017 1.082017 2.082017 3.082017 4.082017 5.082017 6.082017 7.082017 8.082017 9.082017 10.082017 11.082017 12.072018 13.082017 14.082017 15.082017 16.082017 17.082017 18.082017 19.082017 20.082017 21.082017 22.082017 23.082017 24.082017 25.082017 26.082017 27.082017 28.082017 29.082017 30.082017 31.082017 1.092017 2.092017 3.092017 4.092017 5.092017 6.092017 7.092017 8.092017 9.082017 10.092017 11.092017 12.092017 13.092017 14.092017 15.092017 16.092017 17.092017 18.092017 19.092017 20.092017 21.092017 22.092017 23.092017 24.092017 25.092017 26.092017 27.092017 28.092017 29.072017 30.092017 1.102017 2.102017 3.102017 4.102017 5.102017 6.102017 7.102017 8.102017 9.102017 10.102017 11.102017 12.102017 13.10217 14.102017 15.102017 16.102017 17.102017 18.102017 19.102017 20.102017 21.102017 22.102017 23.102017 24.102017 25.102017 26.102017 27.102017 28.102017 29.102017 30.102017 31.102017 1.112017 2.112017 3.112017 4.112017 5.112017 6.112017 7.112017 8.112017 9.112017 10.112017 11.112017 12.112017 13.112017 14.112017 15.112017 16.112017 17.112017 18.112017 19.112017 20.112017 21.112017 22.112017 23.112017 24.112017 25.112017 26.112017 27.112017 28.112017 29.112017 30.112017 1.122017 2.122017 3.122017 4.122017 5.122017 6.122017 7.122017 8.122017 9.122017 10.122017 11.122017 12.122017 13.122017 14.122017 15.122017 16.122017 17.122017 18.122017 19.122017 20.122017 
21.122017 22.122017 23.122017 24.122017 25.122017 26.122017 27.122017 28.122017 29.122017 30.122017 31.122017 1.012018 2.012018 3.012018 4.012018 5.012018 6.012018 7.012018 8.012018 9.012018 10.012018 11.012018 12.012018 13.012018 14.012018 15.012018 16.012018 17.012018 18.012018 19.012018 20.012018 21.012018 22.012018 23.012018 24.012018 25.012018 26.012018 27.012018 28.012018 29.012018 30.012018 31.012018 1.022018 2.022018 3.022018 4.022018 5.022018 6.022018 7.022018 8.022018 9.022018 10.022018 11.022018 12.022018 13.022018 14.022018 15.022018 16.022018 17.022018 18.022018 19.022018 } { 2. 4. 5. 5. 5. 5. 6. 6. 4. 5. 6. 6. 5. 5. 5. 6. 6. 6. 4. 5. 5. 5. 4. 5. 3. 5. 4. 6. 5. 6. 4. 2. 3. 2. 3. 4. 5. 6. 5. 4. 5. 4. 4. 3. 2. 2. 4. 4. 5. 4. 5. 5. 4. 6. 7. 6. 4. 4. 5. 5. 4. 5. 5. 4. 5. 6. 5. 5. 5. 5. 5. 5. 6. 5. 6. 6. 6. 6. 6. 5. 6. 6. 5. 5. 5. 6. 6. 4. 4. 5. 6. 6. 5. 5. 5. 5. 5. 6. 6. 4. 7. 4. 5. 5. 5. 6. 7. 4. 4. 3. 4. 5. 2. 4. 2. 4. 4. 4. 5. 5. 5. 5. 5. 5. 5. 6. 6. 2. 2. 4. 4. 5. 4. 5. 5. 5. 5. 5. 5. 4. 4. 4. 4. 5. 6. 6. 4. 2. 6. 4. 3. 5. 5. 6. 6. 6. 6. 5. 6. 5. 5. 5. 4. 4. 4. 4. 4. 5. 5. 3. 4. 5. 4. 4. 5. 6. 5. 4. 3. 4. 4. 4. 3. 5. 6. 5. 4. 3. 5. 5. 6. 6. 6. 6. 5. 5. 5. 3. 3. 5. 6. 6. 5. 5. 4. 5. 4. 5. 5. 3. 3. 5. 5. 4. 3. 3. 5. 5. 4. 4. 5. 5. 4. 5. 5. 5. 1. 1. 4. 5. 4. 5. 5. 6. 6. 6. 6. 6. 4. 6. 6. 6. 5. 6. 6. 5. 6. 6. 4. 5. 6. 4. 5. 5. 5. 6. 6. } { 3. 3. 3. 2. 3. 3. 4. 3. 3. 3. 3. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 3. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 3. 3. 3. 4. 4. 4. 3. 3. 3. 3. 3. 4. 3. 4. 3. 4. 3. 4. 3. 4. 4. 3. 4. 4. 4. 4. 3. 4. 3. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 4. 3. 4. 3. 4. 4. 4. 3. 2. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 3. 4. 2. 3. 4. 4. 4. 4. 3. 4. 4. 4. 3. 4. 3. 4. 3. 4. 4. 4. 4. 4. 4. 3. 3. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 3. 3. 3. 4. 4. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 3. 4. 4. 4. 4. 3. 4. 4. 2. 3. 3. 2. 3. 3. 3. 2. 4. 3. 3. 3. 4. 3. 4. 4. 4. 4. 4. 3. 3. 4. 4. 3. 4. 4. 3. 3. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 
4. 3. 3. 3. 4. 3. 3. 3. 4. 3. 4. 3. 4. 3. 4. 3. 4. 3. 2. 4. 3. 3. 3. 4. 2. 3. 4. 3. 3. 3. 3. 2. 1. 4. 3. 3. 4. 3. 2. 4. 3. 3. 2. 4. 3. 3. 3. 4. } { 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 4. 4. 1. 1. 1. 1. 1. 4. 4. 1. 1. 1. 1. 1. 3. 4. 2. 2. 2. 1. 2. 4. 4. 1. 2. 2. 2. 2. 3. 3. 1. 3. 1. 1. 1. 3. 4. 2. } { 0. 0. 2. 2. 2. 2. 2. 2. 2. 3. 2. 2. 2. 2. 3. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 3. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3. 2. 2. 3. 3. 2. 3. 2. 3. 3. 2. 3. 3. 2. 3. 2. 3. 3. 2. 3. 3. 2. 3. 3. 3. 3. 3. 3. 3. 2. 3. 3. 3. 3. 2. 2. 2. 1. 3. 3. 3. 3. 3. 3. 3. 3. 2. 2. 1. 2. 2. 2. 2. 3. 3. 3. 2. 3. 3. 3. 3. 3. 3. 2. 2. 2. 3. 2. 2. 3. 3. 2. 3. 3. 3. 2. 2. 2. 2. 2. 2. 3. 3. 3. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 3. 2. 3. 3. 2. 2. 2. 2. 2. 2. 3. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 3. 3. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 3. 2. 2. 2. 3. 3. 3. 3. 3. 3. 3. 3. 2. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 2. 3. 2. 3. 3. 3. 3. 4. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 2. 2. } { 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 4. 3. 3. 4. 3. 4. 4. 4. 4. 3. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 3. 3. 3. 3. 4. 4. 3. 3. 3. 3. 3. 3. 3. 4. 3. 3. 3. 3. 3. 4. 3. 4. 4. 3. 3. 3. 2. 3. 3. 3. 3. 4. 4. 4. 3. 4. 3. 3. 4. 4. 4. 4. 4. 4. 3. 3. 4. 4. 3. 4. 3. 3. 4. 4. 4. 3. 4. 4. 4. 3. 4. 
3. 4. 3. 3. 4. 3. 2. 3. 3. 3. 3. 4. 4. 3. 4. 3. 3. 3. 4. 3. 3. 4. 4. 4. 4. 3. 4. 3. 3. 4. 3. 4. 4. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 3. 3. 4. 4. 4. 3. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 3. 3. 3. 3. 3. 4. 4. 4. 3. 3. 3. 3. 3. 4. 3. 4. 3. 3. 3. 3. 4. 4. 3. 3. 4. 4. 3. 4. 4. 3. 2. 4. 4. 3. 4. 4. 4. 4. 4. 4. 4. 4. 3. 3. 4. 4. 3. 3. 3. 4. 3. 3. 3. 3. 3. 4. 3. 3. 3. 4. 4. 4. 3. 2. 3. 3. 3. 3. 3. 4. 4. 4. 4. 3. 4. 3. 4. 4. 3. 3. 3. 4. 3. 2. 4. 3. 4. 2. 3. 3. 4. 3. 2. } { 0. 0. 0. 0. 0. 2. 3. 3. 3. 3. 3. 4. 3. 3. 3. 4. 3. 3. 4. 3. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 3. 0. 4. 3. 3. 3. 3. 3. 3. 3. 3. 3. 4. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 4. 4. 3. 4. 3. 3. 4. 4. 3. 4. 3. 3. 3. 3. 3. 3. 3. 4. 4. 3. 3. 3. 3. 3. 3. 3. 3. 4. 3. 3. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 3. 4. 3. 4. 4. 4. 3. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 4. 4. 3. 4. 3. 3. 4. 4. 4. 4. 4. 4. 4. 4. 3. 4. 3. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 3. 4. 3. 4. 4. 4. 4. 3. 3. 3. 3. 3. 4. 4. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 4. 4. 4. 3. 3. 3. 3. 3. 4. 3. 3. 3. 4. 4. 3. 4. 3. 4. 3. 3. 3. 3. 3. 3. 3. 4. 3. 3. 4. 3. 3. 3. 3. 3. 3. 4. 3. 3. 3. 3. 3. 3. 4. 3. 3. 3. 3. 3. 3. 4. 4. 3. 3. 4. 3. 3. 3. 4. 4. 4. 4. 4. 3. 3. 3. 3. 4. 4. 3. 3. 4. 3. 2. 2. 2. 4. 4. 2. 4. 3. 3. 3. 2. 2. 3. 3. 3. 3. } { 0. 0. 0. 0. 0. 3. 2. 3. 2. 3. 2. 3. 3. 3. 3. 2. 2. 2. 2. 3. 3. 2. 2. 2. 3. 2. 2. 3. 2. 2. 3. 3. 2. 2. 2. 2. 2. 3. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2. 0. 2. 3. 3. 3. 2. 2. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 2. 3. 3. 3. 3. 3. 2. 3. 3. 3. 2. 3. 3. 2. 2. 3. 2. 1. 3. 3. 3. 3. 3. 3. 2. 2. 2. 1. 1. 3. 2. 1. 1. 2. 1. 3. 1. 2. 3. 3. 2. 2. 2. 3. 3. 3. 3. 1. 3. 2. 3. 3. 2. 3. 3. 2. 2. 2. 2. 1. 2. 3. 2. 3. 3. 3. 2. 2. 2. 2. 2. 2. 3. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 3. 2. 2. 2. 2. 2. 2. 2. 2. 2. 3. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 3. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 1. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 
2. 2. 2. 2. 2. 2. 2. } { 3. 3. 3. 3. 3. 3. 3. 3. 4. 4. 3. 3. 4. 3. 2. 3. 4. 3. 4. 4. 4. 4. 4. 4. 3. 2. 4. 4. 4. 3. 4. 4. 4. 4. 3. 3. 2. 3. 4. 4. 4. 4. 4. 4. 3. 4. 3. 3. 0. 4. 4. 3. 3. 1. 1. 1. 4. 4. 3. 2. 4. 3. 4. 4. 3. 3. 4. 4. 2. 4. 4. 2. 2. 2. 4. 3. 4. 3. 2. 3. 4. 3. 4. 4. 3. 3. 4. 4. 4. 3. 4. 3. 4. 4. 3. 3. 3. 1. 1. 4. 3. 4. 3. 1. 1. 4. 2. 4. 4. 3. 1. 3. 4. 1. 4. 4. 4. 4. 2. 3. 3. 2. 4. 3. 3. 3. 1. 4. 4. 4. 3. 4. 4. 3. 4. 4. 3. 3. 2. 3. 4. 4. 4. 4. 4. 3. 4. 4. 2. 3. 4. 3. 3. 3. 3. 3. 4. 3. 3. 4. 3. 2. 3. 3. 3. 4. 4. 3. 4. 3. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 3. 4. 4. 4. 4. 3. 3. 4. 4. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 2. 3. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 2. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. } { 0. 0. 0. 0. 0. 2. 3. 2. 2. 2. 3. 2. 2. 3. 2. 2. 3. 3. 2. 2. 2. 2. 2. 2. 2. 2. 2. 3. 2. 3. 2. 2. 2. 1. 2. 2. 2. 3. 0. 2. 2. 2. 2. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2. 2. 1. 2. 3. 2. 3. 2. 2. 2. 2. 2. 2. 2. 3. 2. 3. 3. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 3. 1. 1. 2. 3. 2. 2. 2. 2. 2. 2. 3. 3. 1. 3. 1. 1. 1. 1. 2. 3. 1. 1. 1. 2. 1. 1. 1. 1. 1. 1. 2. 2. 1. 2. 2. 2. 2. 2. 2. 2. 1. 1. 1. 1. 3. 2. 2. 2. 2. 2. 2. 2. 1. 1. 1. 3. 2. 3. 2. 1. 1. 2. 1. 1. 1. 2. 2. 3. 3. 2. 2. 2. 1. 2. 2. 1. 1. 1. 1. 1. 1. 1. 1. 1. 2. 1. 1. 2. 2. 2. 1. 1. 1. 1. 1. 1. 2. 2. 2. 1. 1. 2. 1. 2. 3. 2. 2. 1. 2. 2. 1. 2. 3. 3. 3. 2. 3. 2. 2. 1. 2. 2. 2. 1. 2. 2. 2. 1. 1. 0. 2. 1. 1. 1. 2. 1. 2. 1. 2. 1. 1. 2. 2. 1. 2. 3. 3. 2. 2. 2. 2. 1. 2. 2. 2. 1. 1. 2. 2. 2. 1. 2. 1. 2. 1. 2. 1. 1. 3. 2. } { 4. 4. 4. 4. 4. 4. 3. 3. 4. 4. 3. 3. 3. 3. 3. 3. 3. 4. 4. 3. 3. 3. 3. 4. 2. 2. 3. 3. 4. 3. 4. 4. 4. 3. 3. 2. 2. 3. 4. 4. 4. 4. 4. 4. 3. 4. 0. 0. 0. 2. 4. 4. 0. 0. 0. 0. 4. 0. 2. 3. 4. 2. 3. 3. 3. 1. 4. 4. 4. 4. 4. 3. 2. 2. 3. 1. 4. 4. 4. 4. 3. 4. 4. 4. 4. 3. 4. 3. 3. 4. 3. 3. 3. 4. 4. 3. 4. 4. 3. 4. 4. 4. 3. 3. 4. 3. 3. 4. 4. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 3. 3. 3. 4. 3. 3. 2. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 3. 4. 3. 4. 
4. 4. 4. 4. 4. 4. 4. 3. 4. 4. 3. 3. 2. 3. 4. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 2. 3. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. } { 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 3. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 3. 4. } { 0. 0. 0. 0. 2. 2. 4. 4. 3. 3. 4. 4. 4. 4. 3. 3. 3. 2. 2. 4. 4. 1. 2. 2. 2. 4. 4. 3. 3. 3. 3. 4. 3. 4. 2. 4. 4. 2. 0. 4. 4. 4. 4. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3. 4. 4. 4. 4. 4. 3. 2. 4. 4. 4. 3. 4. 4. 4. 4. 3. 3. 4. 3. 3. 3. 4. 4. 4. 4. 3. 3. 4. 3. 4. 4. 0. 3. 3. 3. 4. 3. 4. 2. 2. 3. 4. 4. 4. 4. 4. 3. 3. 3. 3. 3. 3. 3. 4. 4. 4. 3. 2. 2. 3. 1. 2. 1. 4. 3. 2. 2. 4. 4. 3. 4. 4. 4. 4. 4. 2. 2. 4. 2. 2. 0. 2. 0. 3. 2. 4. 3. 0. 2. 3. 3. 2. 3. 4. 2. 3. 2. 2. 2. 3. 2. 0. 3. 3. 0. 0. 0. 0. 0. 0. 3. 2. 3. 1. 3. 3. 0. 0. 0. 3. 3. 0. 0. 0. 2. 2. 3. 2. 2. 3. 2. 2. 3. 2. 3. 2. 3. 2. 3. 3. 2. 3. 4. 3. 3. 4. 2. 2. 3. 2. 1. 2. 2. 3. 2. 4. 3. 2. 2. 2. 3. 4. 2. 1. 2. 1. 2. 0. 0. 1. 2. 3. 1. 2. 3. 2. 2. 3. 1. 2. 0. 4. 4. 4. 4. 4. 4. 4. 3. 2. 3. 4. 4. 3. 3. 3. 3. 4. } { 0. 0. 0. 0. 0. 2. 3. 3. 3. 3. 3. 4. 4. 3. 4. 3. 4. 3. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 3. 3. 
3. 4. 3. 4. 3. 3. 0. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 4. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 4. 4. 3. 4. 3. 4. 3. 3. 3. 3. 3. 3. 2. 3. 3. 3. 3. 3. 4. 3. 3. 3. 3. 3. 3. 3. 3. 4. 4. 3. 3. 3. 3. 3. 3. 2. 3. 3. 3. 4. 3. 4. 3. 3. 4. 2. 2. 2. 4. 2. 3. 4. 3. 3. 4. 3. 3. 3. 2. 3. 3. 3. 4. 2. 2. 4. 4. 2. 2. 3. 2. 4. 4. 4. 4. 3. 4. 4. 4. 2. 4. 3. 4. 4. 4. 4. 2. 3. 4. 4. 4. 4. 3. 4. 3. 3. 3. 2. 4. 3. 3. 4. 4. 3. 3. 3. 3. 3. 4. 2. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 4. 4. 3. 4. 4. 3. 2. 3. 2. 4. 4. 3. 2. 2. 0. 2. 1. 3. 4. 4. 4. 4. 2. 3. 4. 3. 4. 3. 3. 3. 4. 3. 3. 3. 2. 3. 4. 3. 3. 2. 3. 2. 2. 3. 2. 1. 2. 0. 2. 2. 2. 3. 3. 3. 2. 2. 2. 2. 2. 1. 1. 1. 2. 3. 2. 2. 2. 3. 2. 2. 2. 2. 2. 2. 3. 3. } { 2. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 4. 3. 3. 3. 4. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 2. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2. 2. 2. 3. 2. 3. 2. 2. 2. 2. 2. 3. 2. 2. 2. 2. 3. 2. 3. 3. 3. 2. 3. 2. 3. 3. 4. 4. 3. 2. 4. 3. 3. 3. 3. 1. 1. 3. 2. 3. 3. 3. 2. 3. 4. 3. 3. 3. 4. 3. 2. 4. 4. 3. 3. 3. 2. 2. 4. 2. 4. 2. 3. 2. 2. 3. 4. 3. 4. 3. 2. 2. 4. 2. 3. 2. 2. 2. 3. 4. 2. 4. 3. 4. 2. 2. 2. 2. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 2. 2. 4. 4. 3. 3. 3. 0. 0. 3. 3. 3. 3. 3. 2. 2. 3. 3. 3. 2. 0. 2. 3. 3. 3. 3. 0. 2. 2. 2. 3. 3. 3. 3. 3. 2. 2. 3. 3. 3. 3. 3. 2. 3. 3. 3. 3. 2. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 2. 2. 3. 3. 3. 3. 3. 2. 2. 3. 3. 3. 3. 3. 2. 2. 3. 3. 3. 3. 3. 2. 2. 3. 3. 3. 3. 3. 2. 2. 3. } { 4. 2. 2. 4. 2. 2. 3. 4. 4. 3. 3. 4. 4. 4. 4. 4. 2. 4. 4. 4. 4. 4. 3. 3. 4. 4. 4. 3. 4. 3. 4. 4. 4. 3. 3. 3. 4. 2. 4. 3. 3. 4. 4. 4. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 3. 2. 3. 3. 3. 3. 3. 4. 4. 4. 3. 2. 4. 3. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 3. 4. 4. 4. 3. 3. 3. 3. 4. 4. 4. 4. 4. 3. 4. 3. 4. 4. 4. 4. 4. 3. 3. 4. 4. 4. 4. 4. 4. 3. 4. 3. 4. 4. 4. 3. 3. 4. 4. 4. 4. 4. 3. 2. 4. 1. 3. 3. 3. 3. 1. 4. 4. 3. 3. 3. 3. 3. 3. 3. 4. 4. 4. 2. 1. 4. 4. 4. 4. 4. 3. 2. 4. 4. 4. 4. 4. 0. 3. 4. 4. 4. 4. 4. 1. 4. 4. 4. 3. 1. 1. 3. 3. 4. 4. 4. 
4. 4. 4. 0. 4. 4. 3. 3. 3. 3. 3. 3. 3. 3. 4. 4. 3. 4. 4. 4. 1. 4. 4. 1. 1. 1. 4. 4. 4. 1. 1. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 1. 1. 4. 4. 4. 4. 4. 2. 4. 4. 4. 4. 4. 4. 3. 4. 4. 4. 4. 4. 4. 2. 3. 4. 4. 4. 4. 4. 3. 3. 4. } { 2. 3. 3. 3. 4. 3. 2. 2. 2. 3. 2. 2. 2. 2. 3. 3. 2. 3. 2. 2. 2. 2. 2. 2. 3. 2. 2. 2. 2. 3. 2. 2. 2. 2. 2. 2. 3. 3. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3. 2. 1. 3. 4. 2. 1. 2. 2. 1. 2. 2. 2. 2. 3. 2. 4. 3. 4. 4. 2. 2. 4. 4. 4. 4. 4. 2. 2. 3. 3. 3. 3. 3. 3. 1. 3. 3. 3. 3. 3. 2. 1. 3. 3. 3. 3. 3. 1. 1. 3. 3. 3. 3. 3. 1. 1. 3. 1. 3. 3. 3. 3. 2. 3. 3. 3. 3. 3. 1. 1. 3. 1. 2. 2. 2. 2. 1. 3. 3. 3. 3. 3. 3. 2. 2. 2. 3. 3. 3. 2. 2. 3. 3. 3. 3. 3. 1. 1. 3. 3. 3. 3. 3. 0. 0. 3. 3. 3. 3. 3. 1. 1. 3. 3. 1. 1. 1. 1. 2. 3. 3. 3. 3. 3. 2. 1. 3. 3. 3. 3. 3. 1. 1. 3. 3. 3. 3. 3. 1. 1. 1. 1. 1. 1. 1. 1. 2. 2. 3. 3. 2. 1. 1. 1. 2. 3. 3. 3. 3. 1. 1. 3. 2. 3. 3. 3. 2. 2. 3. 3. 3. 3. 3. 2. 2. 3. 3. 3. 3. 3. 2. 2. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 2. 2. 3. } { 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 4. 3. 3. 4. 2. 4. 3. 4. 2. 4. 1. 2. 4. 4. 4. 4. 1. 4. 4. 4. 3. 4. 2. 4. 3. 2. 3. 2. 4. 4. 4. 3. 3. 2. 3. 4. 3. 3. 4. 4. 4. 4. 4. 4. 1. 4. 4. 2. 3. 4. 1. 4. 2. 3. 3. 4. 4. 4. 4. 2. 3. 3. 4. 4. 3. 1. 0. 0. 2. 0. 4. 0. 0. 0. 1. 1. 3. 4. 0. 0. 2. 0. 0. 0. 0. 2. 0. 0. 0. 3. 4. 0. 0. 0. 0. 2. 1. 3. 4. 0. 1. 0. 0. 0. 1. 4. 2. 4. 3. 0. 4. 4. 4. 4. 3. 3. 0. 0. 0. 0. 0. 3. 0. 3. 3. 1. 1. 3. 0. 0. 0. 2. 1. 1. 0. 0. 0. 0. 0. 3. 2. 0. 0. 0. 0. 0. 1. 4. 0. 0. 0. 0. 3. 0. 0. 0. 0. 0. 0. 4. 4. 2. 3. } { 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2. 3. 2. 3. 3. 3. 1. 1. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 2. 2. 2. 2. 3. 3. 2. 2. 2. 2. 3. 2. 3. 2. 2. 2. 2. 2. 3. 3. 3. 3. 2. 2. 2. 3. 2. 3. 3. 2. 3. 3. 3. 2. 3. 3. 3. 3. 3. 0. 2. 3. 2. 2. 2. 2. 2. 3. 3. 1. 1. 2. 2. 1. 1. 2. 2. 2. 2. 3. 3. 3. 3. 3. 3. 2. 2. 3. 3. 3. 1. 0. 3. 2. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 1. 2. 3. 3. 2. 3. 3. 3. 3. 3. 3. 2. 3. 3. 3. 3. 1. 1. 2. 2. 2. 3. 3. 3. 3. 3. 3. 3. 2. 3. 3. 3. 2. 3. 3. 3. 3. 2. 3. 1. 3. 3. 3. 3. 3. 3. 3. } { 60. 180. 180. 30. 240. 120. 150. 90. 0. 60. 90. 240. 360. 420. 420. 420. 90. 120. 420. 420. 420. 420. 420. 180. 420. 420. 480. 420. 420. 420. 180. 180. 420. 420. 420. 420. 420. 240. 0. 360. 360. 360. 360. 360. 360. 360. 360. 360. 360. 360. 360. 360. 360. 360. 360. 360. 360. 360. 240. 240. 420. 240. 360. 240. 360. 360. 420. 360. 420. 360. 360. 360. 360. 360. 360. 360. 420. 420. 420. 360. 360. 420. 420. 420. 420. 360. 480. 420. 360. 360. 420. 420. 420. 360. 360. 420. 360. 420. 420. 420. 360. 420. 420. 420. 420. 360. 420. 360. 240. 420. 360. 360. 420. 420. 420. 240. 420. 360. 420. 420. 420. 240. 420. 420. 420. 420. 420. 420. 270. 360. 420. 420. 420. 360. 360. 480. 420. 360. 420. 420. 420. 420. 420. 360. 360. 420. 420. 420. 270. 360. 360. 420. 420. 420. 420. 420. 360. 360. 390. 420. 390. 390. 390. 0. 0. 360. 360. 360. 360. 420. 390. 270. 390. 360. 360. 360. 360. 360. 420. 420. 420. 420. 420. 420. 480. 390. 420. 480. 480. 480. 360. 420. 420. 420. 360. 420. 420. 420. 420. 420. 420. 420. 480. 420. 420. 480. 480. 480. 420. 480. 360. 360. 360. 450. 420. 420. 420. 480. 420. 540. 480. 420. 480. 420. 450. 480. 480. 480. 420. 390. 480. 390. 510. 480. 510. 420. 480. 420. 480. 420. 540. 540. 480. 480. 510. 420. 420. 420. 420. 390. 450. 450. 450. 510. 420. 420. 450. } Wikis are great, Contribute :) « Next Oldest | Next Newest » Messages In This Thread Calculators that allow direct operations on data stored on persistent storage? 
https://math.stackexchange.com/questions/3330805/if-the-cardinality-of-f-1-is-at-most-fx2-then-f-is-differentiable-al/3332009
# If the cardinality of $f^{-1}$ is at most $f(x)^2$ then $f$ is differentiable almost everywhere.

I came across the following problem in a prelim question paper. The question as stated seems meaningless to me; I am adding the picture so as to avoid any error from my end.

My issue with the above question is that $$f$$ is given to be defined on $$(0,1)$$, so $$f(x)$$ makes sense only if $$x\in (0,1)$$. That alone doesn't tell us anything, but if we assume therefore that the range is contained in $$(0,1)$$, then every value would be taken at most $$0$$ times (since $$f(x)^2 < 1$$ and the cardinality is an integer), which is again nonsense. I am not sure if I am missing something or the question is really wrong. If it is wrong, what can be "the nearest correct" version of it? Meaning, if we take the function to be defined from $$\mathbb{R}$$ to $$(0,\infty)$$?

• I agree with you that the part that says, "for every $x \in \mathbb{R}$, the preimage of $x$ has at most $(f(x))^2$ elements" doesn't make sense, because that's only defined for $x \in (0,1)$. I don't know what the "nearest correct" version is. – Joe Aug 22 '19 at 9:51
• If the issue raised by Joe is not solved, the question is meaningless as it stands. A shame, since it sounds very interesting. – астон вілла олоф мэллбэрг Aug 22 '19 at 10:56
• It seems like a typo; the correct question is likely that for every $x\in \Bbb R$ the set $f^{-1}(x)$ has at most $x^2$ elements. The typo likely arises from a confusion, as conventionally $x$ is used to denote points in the domain, so the statement might as well be "$f^{-1}(x)$ has at most $f(y)^2$ many elements for $y\in f^{-1}(x)$". – s.harp Aug 22 '19 at 11:05
• @s.harp : I read this as approximating a different question: "for every $y \in \mathbb{R}$, the set $f^{-1}(y)$ has at most $(f^{-1}(y))^2$ elements", meaning that there cannot be many points in the image with small magnitude.
Suppose $x$ is such that $f(x) = 9.5$; then there are at most $9.5^2 = 90.25$ points in $(0,1)$ whose image under $f$ is $9.5$. – Eric Towers Aug 23 '19 at 15:15
• @EricTowers your first sentence doesn't make sense, but the example you give is exactly what I wrote. – s.harp Aug 23 '19 at 15:36

As others have said, the question doesn't make sense as stated, but what we can prove is that if $$f: (0,1) \rightarrow \mathbb{R}$$ is continuous and $$|f^{-1}(x)|\leq x^2$$, then $$f$$ is differentiable a.e. For that it will suffice to show that $$f$$ is differentiable a.e. on an interval of the form $$[1/n, 1-1/n]$$. Note that $$|f|$$ is bounded on such an interval, say by $$N\in\mathbb{N}$$, so by the assumption $$f$$ attains any value at most $$N^2$$ times on $$[1/n, 1-1/n]$$. This implies that $$f$$ is of bounded variation on said interval: if $$1/n = a_0 < a_1 < \dots < a_k = 1-1/n$$ is a partition, note that by the intermediate value theorem, $$f$$ will for each $$i$$ attain all values in the interval $$[f(a_{i-1}), f(a_i)]$$. Hence the total length of all of these intervals cannot exceed $$2N^3$$. Indeed, if it did, then since each $$[f(a_{i-1}), f(a_i)]$$ is contained in the interval $$[-N, N]$$ (of length $$2N$$), the pigeonhole principle implies that at least one $$x\in[-N,N]$$ must occur in $$N^2+1$$ of the $$[f(a_{i-1}), f(a_i)]$$ intervals: this is a contradiction. Hence $$\sum\limits_{i=1}^k|f(a_i)-f(a_{i-1})|\leq 2N^3$$, so since $$1/n = a_0 < a_1 < \dots < a_k = 1-1/n$$ was an arbitrary partition of $$[1/n,1-1/n]$$, $$f$$ is of bounded variation thereon. Hence it is a.e. differentiable on $$[1/n,1-1/n]$$.
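The quantity controlled in this argument, the partition sum $\sum_i |f(a_i)-f(a_{i-1})|$, can be illustrated numerically. The choice $f(x) = \sin(1/x)$ on $[1/10, 9/10]$ is mine, purely for illustration; it oscillates wildly near $0$, but on the compactly contained interval it has finitely many monotone pieces, so the sums stay bounded:

```python
import math

def total_variation(f, partition):
    """Sum |f(a_i) - f(a_{i-1})| over consecutive partition points."""
    return sum(abs(f(b) - f(a)) for a, b in zip(partition, partition[1:]))

# On [1/n, 1 - 1/n] the function sin(1/x) has finitely many monotone
# pieces, so the partition sums stay bounded however fine the partition.
f = lambda x: math.sin(1.0 / x)
n = 10
lo, hi = 1.0 / n, 1.0 - 1.0 / n

for k in (100, 10000, 1000000):
    pts = [lo + i * (hi - lo) / k for i in range(k + 1)]
    print(k, total_variation(f, pts))  # converges to about 5.65
```

The sums increase with the fineness of the partition but converge to the (finite) total variation, which is exactly the bounded-variation property the proof establishes.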
http://mathhelpforum.com/math/207799-please-could-i-have-some-help.html
# Math Help - please could i have some help?

1. ## please could i have some help?

Hello, I need a little bit of help with a maths question. The question is as follows:

A cuboid has:
A volume of 80 cm³
A length of 5 cm
A width of 2 cm

Work out the height of the cuboid. Please help, I don't have a clue.

2. ## Re: please could i have some help?

The formula for the volume of a "cuboid" (also called a "rectangular solid") is "length times width times height", that is, V = lwh. To solve for the height, h, divide both sides by length times width.
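Following the reply above, the arithmetic can be made concrete with a couple of lines of Python (just an illustration of h = V / (l * w)):

```python
# V = l * w * h  =>  h = V / (l * w)
volume = 80.0   # cm^3
length = 5.0    # cm
width = 2.0     # cm

height = volume / (length * width)
print(height)   # 8.0, so the cuboid is 8 cm tall
```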
https://www.semanticscholar.org/paper/C-P-odd-component-of-the-lightest-neutral-Higgs-in-Li-Wagner/00e73607371e6d53f9a0c8d3cdebb4e21000d5c8
# CP-odd component of the lightest neutral Higgs boson in the MSSM

@article{Li2015CP, title={CP-odd component of the lightest neutral Higgs boson in the MSSM}, author={Bing Dong Li and C. E. M. Wagner}, journal={Physical Review D}, year={2015}, volume={91}, pages={095019} }

• Published 8 February 2015 • Physics • Physical Review D

The Higgs sector of the Minimal Supersymmetric Extension of the Standard Model may be described with a two Higgs doublet model with properties that depend on the soft supersymmetry breaking parameters. For instance, flavor independent CP-violating phases associated with the gaugino masses, the squark trilinear mass parameters and the Higgsino mass parameter $\mu$ may lead to sizable CP-violation in the Higgs sector. For these CP-violating effects to affect the properties of the recently…
We present a general cancellation mechanism in the theoretical predictions of Complementarity of LHC and EDMs for exploring Higgs CP violation • Physics • 2015 A bstractWe analyze the constraints on a CP-violating, flavor conserving, two Higgs doublet model from the measurements of Higgs properties and from the search for heavy Higgs bosons at LHC, and show Quark level and hadronic contributions to the electric dipole moment of charged leptons in the standard model • Physics • 2021 We evaluate the electric dipole moment of charged leptons in the standard model, where the complex phase of the Cabibbo-Kobayashi-Maskawa matrix is the only source of $CP$ violation. We first prove Probing exotic phenomena at the interface of nuclear and particle physics with the electric dipole moments of diamagnetic atoms: A unique window to hadronic and semi-leptonic CP violation • Physics • 2017 Abstract.The current status of electric dipole moments of diamagnetic atoms which involves the synergy between atomic experiments and three different theoretical areas, i.e. particle, nuclear and Electric dipole moment of Hg199 atom from P , CP -odd electron-nucleon interaction • Physics Physical Review D • 2019 We calculate the effect of the P, CP-odd electron-nucleon interaction on the electric dipole moment of the $^{199}$Hg atom by evaluating the nuclear spin matrix elements in terms of the nuclear shell ## References SHOWING 1-10 OF 73 REFERENCES Cancellations Between Two-Loop Contributions to the Electron Electric Dipole Moment with a CP-Violating Higgs Sector. • Physics Physical review letters • 2015 A class of cancellation conditions for suppressing the total contributions of Barr-Zee diagrams to the electron electric dipole moment (eEDM) is presented, which strongly constrains the allowed magnitude of CP violation in Higgs couplings and hence the feasibility of electroweak baryogenesis (EWBG). 
Post-ACME2013 CP-violation in Higgs Physics and Electroweak Baryogenesis • Physics • 2014 We present a class of cancellation mechanisms to suppress the total contributions of Barr-Zee diagrams to the electron electric dipole moment (eEDM). This class of mechanisms are of particular The Higgs Boson Masses and Mixings of the Complex MSSM in the Feynman-Diagrammatic Approach • Physics • 2007 New results for the complete one-loop contributions to the masses and mixing effects in the Higgs sector are obtained for the MSSM with complex parameters using the Feynman-diagrammatic approach. The Electric Dipole Moments in the MSSM Reloaded • Physics • 2008 We present a detailed study of the Thallium, neutron, Mercury and deuteron electric dipole moments (EDMs) in the CP-violating Minimal Supersymmetric extension of the Standard Model (MSSM).
https://www.semanticscholar.org/paper/The-cosmic-equation-of-state-Melia/08fdb9685ab15ceae84cc061c2c41c072f126f7a
# The cosmic equation of state @article{Melia2014TheCE, title={The cosmic equation of state}, author={Fulvio Melia}, journal={Astrophysics and Space Science}, year={2014}, volume={356}, pages={393-398} } • F. Melia • Published 21 November 2014 • Physics • Astrophysics and Space Science The cosmic spacetime is often described in terms of the Friedmann-Robertson-Walker (FRW) metric, though the adoption of this elegant and convenient solution to Einstein’s equations does not tell us much about the equation of state, p=wρ, in terms of the total energy density ρ and pressure p of the cosmic fluid. ΛCDM and the Rh=ct Universe are both FRW cosmologies that partition ρ into (at least) three components, matter ρm, radiation ρr, and a poorly understood dark energy ρde, though the… 21 Citations ### Cosmological perturbations without inflation A particularly attractive feature of inflation is that quantum fluctuations in the inflaton field may have seeded inhomogeneities in the cosmic microwave background (CMB) and the formation of ### Uniformity of Cosmic Microwave Background as a Non-Inflationary Geometrical Effect • Physics • 2015 The conventional $\Lambda$CDM cosmological model supplemented by the inflation concept describes the Universe very well. However, there are still a few concerns: new Planck data impose constraints on ### The epoch of reionization in the Rh = ct universe • Physics • 2016 The measured properties of the epoch of reionization (EoR) show that reionization probably began around z ~ 12-15 and ended by z=6. In addition, a careful analysis of the fluctuations in the cosmic ### Are dark energy models with variable EoS parameter w compatible with the late inhomogeneous Universe? • Physics • 2015 We study the late-time evolution of the Universe where dark energy (DE) is presented by a barotropic fluid on top of cold dark matter (CDM) . 
We also take into account the radiation content of the ### Discrimination between ΛCDM, quintessence, and modified gravity models using wide area surveys In the past decade or so observations of supernovae, Large Scale Structures (LSS), and the Cosmic Microwave Background (CMB) have confirmed the presence of what is called dark energy - the cause of ### Puzzling initial conditions in the $$R_\mathrm{h}=ct$$Rh=ct model • Physics • 2016 In recent years, some studies have drawn attention to the lack of large-angle correlations in the observed cosmic microwave background (CMB) temperature anisotropies with respect to that predicted ### Cosmological test using the Hubble diagram of high-z quasars • F. Melia • Physics Monthly Notices of the Royal Astronomical Society • 2019 It has been known for over three decades that the monochromatic X-ray and UV luminosities in quasars are correlated, though non-linearly. This offers the possibility of using high-z quasars as ### Definitive test of the Rh = ct universe using redshift drift The redshift drift of objects moving in the Hubble flow has been proposed as a powerful model-independent probe of the underlying cosmology. A measurement of the first and second order redshift ### Tantalizing new physics from the cosmic purview • F. Melia • Physics Modern Physics Letters A • 2019 The emergence of a highly improbable coincidence in cosmological observations speaks of a remarkably simple cosmic expansion. Compelling evidence now suggests that the Universe’s gravitational ### A solution to the electroweak horizon problem in the $$R_\mathrm{h}=ct$$Rh=ct universe • F. 
Melia • Physics The European Physical Journal C • 2018 Particle physics suggests that the Universe may have undergone several phase transitions, including the well-known inflationary event associated with the separation of the strong and electroweak ## References SHOWING 1-10 OF 38 REFERENCES ### The Rh=ct universe • Physics • 2011 The backbone of standard cosmology is the Friedmann-Robertson-Walker solution to Einstein's equations of general relativity (GR). In recent years, observations have largely confirmed many of the ### The cosmic horizon The cosmological principle, promoting the view that the Universe is homogeneous and isotropic, is embodied within the mathematical structure of the Robertson‐Walker (RW) metric. The equations derived ### The R_h=ct Universe Without Inflation The horizon problem in the standard model of cosmology (LDCM) arises from the observed uniformity of the cosmic microwave background radiation, which has the same temperature everywhere (except for ### THE COSMOLOGICAL SPACETIME • Physics • 2009 We present here the transformations required to recast the Robertson–Walker metric and Friedmann–Robertson–Walker equations in terms of observer-dependent coordinates for several commonly assumed ### Angular correlation of the cosmic microwave background in the R h = ct Universe The emergence of several unexpected large-scale features in the cosmic microwave background (CMB) has pointed to possible new physics driving the origin of density fluctuations in the early Universe ### The gravitational horizon for a Universe with phantom energy The Universe has a gravitational horizon, coincident with the Hubble sphere, that plays an important role in how we interpret the cosmological data. 
Recently, however, its significance as a true ### Proper size of the visible Universe in FRW metrics with a constant spacetime curvature In this paper, we continue to examine the fundamental basis for the Friedmann–Robertson–Walker (FRW) metric and its application to cosmology, specifically addressing the question: What is the proper ### Cosmic chronometers in the Rh = ct Universe • Physics • 2013 ABSTRACT The use of luminous red galaxies as cosmic chronometers provides us with an in-dispensable method of measuring the universal expansion rate H(z) in a model-independent way. Unlike many ### FITTING THE UNION2.1 SUPERNOVA SAMPLE WITH THE Rh = ct UNIVERSE The analysis of Type Ia supernova data over the past decade has been a notable success story in cosmology. These standard candles offer us an unparalleled opportunity to study the cosmological ### Inflationary universe: A possible solution to the horizon and flatness problems The standard model of hot big-bang cosmology requires initial conditions which are problematic in two ways: (1) The early universe is assumed to be highly homogeneous, in spite of the fact that
https://byorgey.wordpress.com/2011/09/09/math-tau/
## Math.Tau

I have just uploaded a new package to Hackage which defines the constant $\tau$. Now if you wish to use $\tau$ in your program you can just cabal install tau and then import Math.Tau (tau), and be assured that you are using only the highest-quality definition of this fundamentally important constant.

This entry was posted in haskell.

### 4 Responses to Math.Tau

1. Max says: I’m happy to see that you had the foresight to specify that this package would only work with base < 10. As you presumably well know, the base 11 release is scheduled to reconfigure the universe so that the circumference/radius ratio is the more orderly integer 6.
• Brent says: Indeed. It will also export tau, so at that point we will instead need a pi package to export the constant pi = 3.
2. Harley says: Interesting, I already have such a package. Simply import the Pi package and let t = 2Pi. :-)
3. Chris Dornan says: From which we can speculate that TeX ~= Math.Tau/2. (And perhaps on their author’s passing this could become a true equation.)
http://www.computer.org/csdl/trans/tm/2013/11/ttm2013112167-abs.html
Issue No.11 - Nov. (2013 vol.12) pp: 2167-2177 Chengzhi Li, Broadcom Corporation, Matawan; Huaiyu Dai, North Carolina State University, Raleigh

ABSTRACT In this paper, we study distributed function computation in a noisy multihop wireless network. We adopt the adversarial noise model, for which independent binary symmetric channels are assumed for any point-to-point transmissions, with (not necessarily identical) crossover probabilities bounded above by some constant $\epsilon$. Each node takes an $m$-bit integer per instance, and the computation is activated after each node collects $N$ readings. The goal is to compute a global function with a certain fault tolerance in this distributed setting; we mainly deal with divisible functions, which essentially cover the main body of interest for wireless applications. We focus on protocol designs that are efficient in terms of communication complexity. We first devise a general protocol for evaluating any divisible function, addressing both one-shot ($N = O(1)$) and block computation, and both constant and large $m$ scenarios. We also analyze the bottleneck of this general protocol in different scenarios, which provides insights into designing more efficient protocols for specific functions. In particular, we endeavor to improve the design for two exemplary cases: the identity function, and size-restricted type-threshold functions, both focusing on the constant $m$ and $N$ scenario. We explicitly consider clustering, rather than hypothetical tessellation, in our protocol design.

INDEX TERMS Protocols, Complexity theory, Noise measurement, Spread spectrum communication, Noise, Vectors, Histograms, clustering, Distributed computing, noisy multihop network

CITATION Chengzhi Li, Huaiyu Dai, "Efficient In-Network Computing with Noisy Wireless Channels", IEEE Transactions on Mobile Computing, vol.12, no. 11, pp. 2167-2177, Nov. 2013, doi:10.1109/TMC.2012.185

REFERENCES [1] N.A.
Lynch, Distributed Algorithms. Morgan Kaufmann, 1997. [2] H. Attiya and J. Welch, Distributed Computing: Fundamentals, Simulations, and Advanced Topics, second ed. Wiley-Interscience, 2004. [3] A. Orlitsky and A. El-Gamal, "Communication Complexity," Complexity in Information Theory, Y.S. Abu-Mostafa, ed., pp. 16-61, Springer, 1988. [4] L. Lovasz, "Communication Complexity: A Survey," Paths, Flows, and VLSI Layout, B.H. Korte, ed., Springer-Verlag, 1990. [5] E. Kushilevitz and N. Nisan, Communication Complexity. Cambridge Univ., 1997. [6] A. Giridhar and P.R. Kumar, "Computing and Communcating Functions over Sensor Network," IEEE J. Selected Areas Comm., vol. 23, no. 4, pp. 755-764, Apr. 2005. [7] A.E. Gamal, "Open Problem," Proc. Workshop Specific Problems in Comm. and Computation, 1984. [8] R. Gallager, "Finding Parity in a Simple Broadcast Network," IEEE Trans. Information Theory, vol. 34, no. 2, pp. 176-80, Mar. 1988. [9] I. Newman, "Computing in Fault Tolerance Broadcast Networks," Proc. IEEE Ann. Conf. Computational Complexity, 2004. [10] N. Khude, A. Kumar, and A. Karnik, "Time and Energy Complexity of Distributed Computation of a Class of Functions in Wireless Sensor Networks," IEEE Trans. Mobile Computing, vol. 7, no. 5, pp. 617-632, May 2008. [11] L. Ying, R. Srikant, and G.E. Dullerud, "Distributed Symmetric Function Computation in Noisy Wireless Sensor Network," IEEE Trans. Information Theory, vol. 53, no. 12, pp. 4826-4833, Dec. 2007. [12] S. Rajagopalan and L. Schulman, "A Coding Theorem for Distributed Computation," Proc. 26th Ann. ACM Symp. Theory of Computing, 1994. [13] Y. Kanoria and D. Manjunath, "On Distributed Computation in Noisy Random Planar Networks," Proc. IEEE Int'l Symp. Information Theory, June 2007. [14] N. Goyal, G. Kindler, and M. Saks, "Lower Bounds for the Noisy Broadcast Problem," Proc. IEEE Symp. Foundations of Computer Science, 2005. [15] A. Yao, "On the Complexity of Communication under Noise," Proc. Fifth Israel Symp. 
Theory of Computing and Systems (ISTCS), 1997. [16] E. Kushilevitz and Y. Mansour, "Computation in Noisy Radio Networks," Proc. ACM-SIAM Symp. Discrete Algorithms, 1998. [17] U. Feige and J. Kilian, "Finding OR in a Noisy Broadcast Network," Information Processing Letters, vol. 73, nos. 1/2, pp. 69-75, 2000. [18] C. Li, H. Dai, and H. Li, "Finding the $K$ Largest Metrics in a Noisy Broadcast Network," Proc. Allerton Conf. Comm., Control and Computing, 2008. [19] P. Gupta and P.R. Kumar, "The Capacity of Wireless Networks," IEEE Trans. Information Theory, vol. 46, no. 2, pp. 388-404, Mar. 2000. [20] F. Xue and P.R. Kumar, Scaling Laws for Ad Hoc Wireless Networks: An Information Theoretic Approach, Now, 2006. [21] P. Gupta and P. Kumar, "Critical Power for Asymptotic Connectivity in Wireless Networks," Stochastic Analysis, Control, Optimization and Applications, W.H. Fleming, W. McEneaney, G. Yin, and Q. Zhang, eds., Birkhäuser, 1998. [22] F. Xue and P. Kumar, "The Number of Neighbors Needed for Connectivity of Wireless Networks," Wireless Network, vol. 10, no. 2, pp. 169-181, Mar. 2004. [23] D. Mosk-Aoyama and D. Shah, "Computing Separable Functions via Gossip," Proc. ACM Symp. Principles of Distributed Computing, Sept. 2007. [24] S. Subramanian, P. Gupta, and S. Shakkottai, "Scaling Bounds for Function Computation over Large Networks," Proc. IEEE Int'l Symp. Information Theory, June 2007. [25] W. Hoeffding, "Probability Inequalities for Sums of Bounded Random Variables," J. Am. Statistical Assoc., vol. 58, pp. 13-30, 1963. [26] J.H. van Lint, Introduction to Coding Theory, third ed. Springer-Verlag, 1999. [27] M.R. Garey and D.S. Johnson, A Guide to the Theory of NP-Completeness. W.H. Freeman and Company, 1979. [28] W. Li and H. Dai, "Cluster-Based Distributed Consensus," IEEE Trans. Wireless Comm., vol. 8, no. 1, pp. 28-31, Jan. 2009. [29] C. Li and H. Dai, "Towards Efficient Designs for In-Network Computing with Noisy Wireless Channels," Proc. 
IEEE INFOCOM, 2010. [30] A. Ozgur, O. Leveque, and D. Tse, "Hierarchical Cooperation Achieves Optimal Capacity Scaling in Ad Hoc Networks," IEEE Trans. Information Theory, vol. 53, no. 10, pp. 3549-3572, Oct. 2007. [31] J. Ghaderi, L. Xie, and X. Shen, "Hierarchical Cooperation in Ad Hoc Networks: Optimal Clustering and Achievable Throughput," IEEE Trans. Information Theory, vol. 55, no. 8, pp. 3425-3436, Aug. 2009. [32] L.-L. Xie and R.R. Kumar, "A Network Information Theory for Wireless Communication: Scaling Laws and Optimal Operation," IEEE Trans. Information Theory, vol. 50, no. 5, pp. 748-767, May 2004.
http://www.goodmath.org/blog/2015/04/
# A failed attempt to prove P == NP

In computer science, we have one really gigantic open question about complexity. In the lingo, we ask “Does P == NP?”. (I’ll explain what that means below.)

On March 9th, a guy named Michael LaPlante posted a paper to ArXiv that purports to prove, once and for all, that P == NP. If this were the case, if Mr. LaPlante (I’m assuming Mr.; if someone knows differently, i.e. that it should be Doctor, or Miss, please let me know!) had in fact proved that P==NP, it would be one of the most amazing events in computer science history. And it wouldn’t only be a theoretical triumph – it would have real, significant practical results! I can’t think of any mathematical proof that would be more exciting to me: I really, really wish that this would happen. But Mr. LaPlante’s proof is, sadly, wrong. Trivially wrong, in fact.

In order to understand what all of this means, why it matters, and where he went wrong, we need to take a step back, and briefly look at computational complexity, what P and NP mean, and what the implications of P == NP would be. (Some parts of the discussion that follows are re-edited versions of sections of a very old post from 2007.)

Before we can get to the meat of this, which is talking about P versus NP, we need to talk about computational complexity. P and NP are complexity classes of problems – that is, groups of problems that have similar bounds on their performance.

When we look at a computation, one of the things we want to know is: “How long will this take?”. A specific concrete answer to that depends on all sorts of factors – the speed of your computer, the particular programming language you use to run the program, etc. But independent of those, there’s a basic factor that describes something important about how long a computation will take – the algorithm itself fundamentally requires some minimum number of operations.
Computational complexity is an abstract method of describing how many operations a computation will take, expressed in terms of the size or magnitude of the input. For example: let's take a look at insertion sort. Here's some pseudocode for insertion sort.

```
def insertion_sort(lst):
    result = []
    for i in lst:
        for j in result:
            if i < j:
                insert i into result before j
        if i wasn't inserted, add it to the end of result
    return result
```

This is, perhaps, the simplest sorting algorithm to understand - most of us figured it out on our own in school, when we had an assignment to alphabetize a list of words. You take the elements of the list to be sorted one at a time; then you figure out where in the list they belong, and insert them. In the worst possible case, how long does this take?

1. Inserting the first element requires 0 comparisons: just stick it into the list.
2. Inserting the second element takes exactly one comparison: it needs to be compared to the one element in the result list, to determine whether it goes before or after it.
3. Inserting the third element could take either one or two comparisons. (If it's smaller than the first element of the result list, then it can be inserted in front without any more comparisons; otherwise, it needs to be compared against the second element of the result list.) So in the worst case, it takes 2 comparisons.
4. In general, for the nth element of the list, it will take at most n-1 comparisons.

So, in the worst case, it's going to take 0 + 1 + 2 + ... + n-1 comparisons to produce a sorted list of n elements. There's a nice shorthand for computing that series: $\frac{n(n-1)}{2}$, which simplifies to $\frac{n^2 - n}{2}$, which is $O(n^2)$.
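That comparison count can be checked empirically. Here is a runnable Python version of the insertion sort sketched above (the comparison counter is my own instrumentation, added for illustration); for this variant, the worst case is already-sorted input, where every new element is compared against everything already in `result` before being appended, for a total of 0 + 1 + ... + (n-1) = n(n-1)/2 comparisons:

```python
def insertion_sort(lst):
    """Insertion sort as described above, instrumented to count comparisons."""
    comparisons = 0
    result = []
    for i in lst:
        inserted = False
        for j, v in enumerate(result):
            comparisons += 1
            if i < v:
                result.insert(j, i)
                inserted = True
                break
        if not inserted:
            result.append(i)
    return result, comparisons

# Worst case for this variant: already-sorted input (each element is
# compared against everything in `result` before being appended).
for n in (10, 100, 1000):
    _, count = insertion_sort(list(range(n)))
    assert count == n * (n - 1) // 2
    print(f"n={n}: {count} comparisons")
```

Note that reverse-sorted input is actually the *best* case here: each new element wins its first comparison and goes straight to the front.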
So while we can't say "computing a list of 100 elements will take 2.3 seconds" (because that depends on a ton of factors - the specific implementation of the code, the programming language, the machine it's running on, etc.), we can say that the time it takes to run increases roughly proportionally to the square of the size of the input - which is what it means when we say that insertion sort is $O(n^2)$. That's the complexity of the insertion sort algorithm.

When we talk about complexity, we can talk about two different kinds of complexity: the complexity of an algorithm, and the complexity of a problem. The complexity of an algorithm is a measure of how many steps the algorithm takes to execute on an input of a particular size. It's specific to the algorithm, that is, the specific method used to solve the problem. The complexity of the problem is a bound that bounds the best case of the complexity of any possible algorithm that can solve that problem.

For example, when you look at sorting, you can say that there's a minimum number of steps that's needed to compute the correct sorted order of the list. In fact, you can prove that to sort a list of $n$ elements, you absolutely require $n \lg n$ bits of information: there's no possible way to be sure you have the list in sorted order with less information than that. If you're using an algorithm that puts things into sorted order by comparing values, that means that you absolutely must do $O(n \lg n)$ comparisons, because each comparison gives you one bit of information. That means that sorting is an $O(n \log n)$ problem. We don't need to know which algorithm you're thinking about - it doesn't matter. There is no possible comparison-based sorting algorithm that takes less than $O(n \log n)$ steps.
(It's worth noting that there are some weasel words in there: there are some theoretical algorithms that can sort in less than $O(n \lg n)$, but they do it by using algorithms that aren't based on binary comparisons that yield one bit of information.)

We like to describe problems by their complexity in that way when we can. But it's very difficult. We're very good at finding upper bounds: that is, we can in general come up with ways of saying "the execution time will be less than O(something)", but we are very bad at finding ways to prove that "the minimum amount of time needed to solve this problem is O(something)". That distinction, between the upper bound (maximum time needed to solve a problem), and lower bound (minimum time needed to solve a problem) is the basic root of the P == NP question.

When we're talking about the complexity of problems, we can categorize them into complexity classes. There are problems that are O(1), which means that they're constant time, independent of the size of the input. There are linear time problems, which can be solved in time proportional to the size of the input. More broadly, there are two basic categories that we care about: P and NP.

P is the collection of problems that can be solved in polynomial time. That means that in the big-O notation for the complexity, the expression inside the parens is a polynomial: the exponents are all fixed values. Speaking very roughly, the problems in P are the problems that we can at least hope to solve with a program running on a real computer.

NP is the collection of problems that can be solved in non-deterministic polynomial time. We'll just gloss over the "non-deterministic" part, and say that for a problem in NP, we don't know of a polynomial time algorithm for producing a solution, but given a solution, we can check if it's correct in polynomial time.
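The $n \lg n$ figure comes from counting: there are $n!$ possible orderings of a list of $n$ elements, and each comparison yields one bit, so any comparison sort needs at least $\log_2(n!)$ comparisons, which by Stirling's approximation grows like $n \log_2 n$. A quick numerical sketch of how $\log_2(n!)$ tracks $n \log_2 n$:

```python
import math

def log2_factorial(n):
    # ln(n!) = lgamma(n + 1); divide by ln(2) to get a base-2 logarithm.
    # Using lgamma avoids computing the astronomically large value of n!.
    return math.lgamma(n + 1) / math.log(2)

for n in (10, 100, 10000, 1000000):
    ratio = log2_factorial(n) / (n * math.log2(n))
    print(f"n={n}: log2(n!) / (n * log2 n) = {ratio:.3f}")
    # Stirling: log2(n!) = n*log2(n) - n*log2(e) + O(log n),
    # so the ratio creeps toward 1 as n grows.
```

This is only an illustration of the information-theoretic bound, not a proof; the proof is the standard decision-tree argument.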
For problems in NP, the best solutions we know of have worst-case bounds that are exponential - that is, the expression inside of the parens of the O(...) has an exponent containing the size of the problem. NP problems are things that we can't solve perfectly with a real computer. The real solutions take an amount of time that's exponential in the size of their inputs. For an algorithm that takes $2^n$ steps, adding just ten elements to the input multiplies the execution time by $2^{10} = 1024$; doubling the size of the input squares the execution time. For NP problems, we're currently stuck using heuristics - shortcuts that will quickly produce a good guess at the real solution, but which will sometimes be wrong.

NP problems are, sadly, very common in the real world. For one example, there's a classic problem called the travelling salesman. Suppose you've got a door-to-door vacuum cleaner salesman. His territory has 15 cities. You want to find the best route from his house to those 15 cities, and back to his house. Finding that solution isn't just important from a theoretical point of view: the time that the salesman spends driving has a real-world cost! We don't know how to quickly produce the ideal path.

The big problem with NP is that we don't know lower bounds for anything in it. That means that while we know of slow algorithms for finding the solution to problems in NP, we don't know if those algorithms are actually the best. It's possible that there's a fast solution - a solution in polynomial time which will give the correct answer. Many people who study computational complexity believe that if you can check a solution in polynomial time, then computing a solution should also be polynomial time with a higher-order polynomial. (That is, they believe that there should be some sort of bound like "the time to find a solution is no more than the cube of the time to check a solution".)
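The travelling salesman example makes the verify/solve gap concrete: checking the cost of one proposed tour is linear in the number of cities, while the only exact method sketched here tries every ordering, which is factorial. (This is a minimal brute-force sketch with made-up random city coordinates, kept to 8 cities so it finishes quickly.)

```python
import itertools
import math
import random

random.seed(1)
cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(8)]

def tour_cost(order):
    # Verifying: the cost of one given tour is just O(n) distance additions.
    return sum(math.dist(cities[order[k]], cities[order[(k + 1) % len(order)]])
               for k in range(len(order)))

def best_tour():
    # Solving exactly: fix city 0 as the start and try all (n-1)! orderings.
    n = len(cities)
    return min(((0,) + perm for perm in itertools.permutations(range(1, n))),
               key=tour_cost)

best = best_tour()
print(f"best tour: {best}, cost {tour_cost(best):.1f}")
```

At 8 cities that's 5,040 tours; at 15 cities it's already over 87 billion, which is the factorial blow-up the post is describing.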
But so far, no one has been able to actually prove a relationship like that.

When you look at NP problems, some of them have a special, amazing property called NP completeness. If you could come up with a polynomial time solution for any single NP-complete problem, then you'd also discover exactly how to come up with a polynomial time solution for every other problem in NP. In Mr. LaPlante's paper, he claims to have implemented a polynomial time solution to a problem called the maximum clique problem. Maximum clique is NP complete - so if you could find a P-time solution to it, you'd have proven that P == NP, and that there are polynomial time solutions to all NP problems.

The problem that Mr. LaPlante looked at is the maximum clique problem:

• Given:
1. a set $V$ of atomic objects called vertices;
2. a set $E$ of objects called edges, where each edge is an unordered pair $(x, y)$, where $x$ and $y$ are vertices.
• Find:
• The largest set of vertices $C = \{v_1, ..., v_n\}$ where for any $v_i$, there is an edge between $v_i$ and every other vertex in $C$.

Less formally: given a bunch of dots, where some of the dots are connected by lines, find the largest set of dots where every dot in the set is connected to every other dot in the set.

The author claims to have come up with a simple P-time solution to that. The catch? He's wrong. His solution isn't P-time. It's sloppy work.

His algorithm is pretty easy to understand. Each vertex has a finite set of edges connecting it to its neighbors. You have each node in the graph send its list of neighbors to its neighbors. With that information, each node knows what 3-cliques it's a part of. Every clique of size larger than 3 is made up of overlapping 3-cliques - so you can have the cliques merge themselves into ever larger cliques. If you look at this, it's still basically considering every possible clique. But his "analysis" of the complexity of his algorithm is so shallow and vague that it's easy to get things wrong.
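For contrast, here is what an honest exponential-time solution to maximum clique looks like (a minimal sketch of my own, not LaPlante's algorithm): try vertex subsets from largest to smallest and return the first one whose members are all pairwise connected.

```python
from itertools import combinations

def max_clique(vertices, edges):
    """Brute-force maximum clique: examines O(2^n) subsets in the worst case."""
    adj = {frozenset(e) for e in edges}
    vertices = list(vertices)
    for size in range(len(vertices), 0, -1):
        for subset in combinations(vertices, size):
            # A subset is a clique iff every pair of its vertices is an edge.
            if all(frozenset(pair) in adj for pair in combinations(subset, 2)):
                return set(subset)
    return set()

# A 4-clique {0, 1, 2, 3} plus a pendant vertex 4 hanging off vertex 3.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
print(max_clique(range(5), edges))  # → {0, 1, 2, 3}
```

Any claimed P-time algorithm has to beat this subset enumeration by an exponential margin, which is exactly the claim a careful complexity analysis would have to support.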
It's a pretty typical example of a sloppy analysis. Complexity analysis is hard, and it's very easy to get wrong. I don't want to be too hard on Mr. LaPlante, because it's an extremely easy mistake to make. Analyzing algorithmic complexity needs to be done in a careful, exacting, meticulous way - and while Mr. LaPlante didn't do that, many professional programmers could easily make a similar mistake!

The ultimate sloppiness, though, is that he never bothers to finish computing the complexity. He makes vague, hand-wavy gestures at the complexity of certain phases of his algorithm, but he never combines them into an estimate of the full upper bound of his algorithm.

I'm not going to go into great detail about this. Instead, I'll refer you to a really excellent paper by Patrick Prosser, which looks at a series of algorithms that compute exact solutions to the maximum clique problem, and at how they're analyzed. Compare that analysis to Mr. LaPlante's, and you'll see quite clearly how sloppy LaPlante was. I'll give you a hint about one thing LaPlante got wrong: he takes steps that require significant work, and treats them as if they were constant time.

But we don't even really need to look at the analysis. Mr. LaPlante provides an implementation of his supposedly P-time algorithm. He should be able to show us execution times for various randomly generated graphs, and show how that time grows as the size of the graph grows, right? If you're making claims about something like this, and you've got real code, you show your experimental verification as well as your theoretical analysis, right?

Nope. He doesn't. And I consider that to be a really, really serious problem. He's claiming to have reduced an NP-complete problem to a small-polynomial complexity: where are the numbers?
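One way to see why "basically considering every possible clique" is fatal: there are graphs in which the number of maximal cliques is itself exponential - the classic Moon-Moser construction has 3^(n/3) of them - so any algorithm that materializes all of them can't be polynomial. A small sketch (my own illustration, using a standard Bron-Kerbosch enumeration; not code from either paper):

```python
def maximal_cliques(adj):
    """Enumerate all maximal cliques via Bron-Kerbosch (no pivoting).
    adj maps each vertex to the set of its neighbors."""
    out = []
    def bk(R, P, X):
        if not P and not X:
            out.append(R)       # R is a maximal clique
            return
        for v in list(P):
            bk(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)
    bk(set(), set(adj), set())
    return out

def moon_moser(k):
    """Complete k-partite graph with parts of size 3 on n = 3k
    vertices: it has exactly 3**k maximal cliques (one vertex
    chosen from each part)."""
    n = 3 * k
    return {v: {u for u in range(n) if u // 3 != v // 3} for v in range(n)}
```

With k = 3 (9 vertices) there are already 27 maximal cliques; every extra triple of vertices multiplies the count by 3. An analysis that treats "track the cliques" as cheap bookkeeping has silently assumed away exponential work.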
I'll give you a good guess about the answer: the algorithm doesn't complete in a reasonable amount of time for moderately large graphs. You could argue that even if it's polynomial time, you're looking at exponents no smaller than 3 (exactly what he claims the bound to be is hard to determine, since he never bothers to finish the analysis!) - and a cubic algorithm on a large graph takes a very long time. But not bothering to show any runtime data? Nothing at all? That's ridiculous. In the Prosser paper above, the author manages to give concrete measurements even for the exponential-time algorithms. LaPlante didn't bother to do that, and I can only conclude that he couldn't gather actual numbers to support his idea.

# Big Bang Bogosity

One of my long-time mantras on this blog has been "the worst math is no math". Today, I'm going to show you yet another example of that: a recent post on Boing-Boing called "The Big Bang is Going Down", by a self-proclaimed genius named Rick Rosner.

> First postulated in 1931, the Big Bang has been the standard theory of the origin and structure of the universe for 50 years. In my opinion, (the opinion of a TV comedy writer, stripper and bar bouncer who does physics on the side) the Big Bang is about to collapse catastrophically, and that's a good thing.
>
> According to Big Bang theory, the universe exploded into existence from basically nothing 13.7-something billion years ago. But we're at the beginning of a wave of discoveries of stuff that's older than 13.7 billion years.

We're constantly learning more about our universe, how it works, and how it started. New information isn't necessarily a catastrophe for our existing theories; it's just more data. There's constantly new data coming in - and as yet, none of it comes close to causing the big bang theory to catastrophically collapse. The two specific examples cited in the article are:

1. A quasar that appears to be younger than we might expect - it existed just 900 million years after the current estimate of when the big bang occurred. That's very surprising, and very exciting. But even in existing models of the big bang, it's surprising - not impossible. (No link, because the link in the original article doesn't work.)
2. An ancient galaxy - one that existed only 700 million years after the big bang - contains dust. Cosmic dust is made of atoms much heavier than hydrogen - like carbon, silicon, and iron - which are (per current theories) the product of supernovas. Supernovas generally don't happen to stars younger than a couple of billion years, so finding dust in a galaxy less than a billion years after the universe began is quite surprising. But again: impossible under the big bang? No.

The problem with both of these arguments against the big bang is that they're vague. They're handwavy arguments built on crude statements about what "should" be possible or impossible according to the big bang theory. But neither comes close to the kind of precision that an actual scientific argument requires.

Scientists don't use math because they like to be obscure, or because they think all of the pretty symbols look cool. Math is a tool used by scientists because it's useful. Real theories in physics need to be precise. They need to make predictions, and those predictions need to match reality to the limits of our ability to measure them. Without that kind of precision, we can't test theories - we can't check how well they model reality. And precise modelling of reality is the whole point.

The big bang is an extremely successful theory. It makes a lot of predictions, which do a good job of matching observations. It's evolved in significant ways over time - but it remains by far the best theory we have, and by "best", I mean "most accurate and successfully predictive".
The catch to all of this is that when we talk about the big bang theory, we don't mean "the universe started out as a dot, and blew up like a huge bomb, and everything we see is the remnants of that giant explosion". That's an informal description, but it's not the theory. That informal description is so vague that a motivated person can interpret it in ways that are consistent, or inconsistent, with almost any given piece of evidence. The real big bang theory isn't a single English statement - it's many different mathematical statements which, taken together, produce a description of an expansionary universe that looks like the one we live in. For a really, really small sample, you can take a look at a nice old post by Ethan Siegel.

If you really want to argue that something is impossible according to the big bang theory, you need to show how it's impossible. Mr. Rosner's argument is that the atoms in the dust in that galaxy couldn't exist according to the big bang, because there wasn't time for supernovas to create them. To make that argument, he needs to show that it's true: he needs to look at the math that describes how stars form and how they behave, and then, using that math, show that the supernovas couldn't have happened in that timeframe. He doesn't do anything like that: he just asserts that it's true.

In contrast, if you read the papers by the people who discovered the dust-filled galaxy, you'll notice that they don't come anywhere close to saying that this is impossible, or inconsistent with the big bang. All they say is that it's surprising, and that we may need to revise our understanding of the behavior of matter in the early stages of the universe. They say that because there's nothing there that fundamentally conflicts with our current understanding of the big bang. But Mr. Rosner can get away with his argument, because he's being vague where the scientists are being precise.
A scientist isn't going to say "yes, we know that it's possible according to the big bang theory", because the scientist doesn't have the math to show that it's possible. At the moment, we don't have sufficiently precise math either way to come to a conclusion; we don't know. What we do know is that millions of other observations - in different contexts, at different locations, made by different methods by different people - are all consistent with the predictions of the big bang. Given that we don't have any evidence that this couldn't happen under the big bang, we continue to say that the big bang is the theory most consistent with our observations, that it makes better predictions than anything else, and so we assume (until we have evidence to the contrary) that this isn't inconsistent. We have no reason to discard the big bang theory on the basis of this!

Mr. Rosner, though, goes even further, proposing what he believes will be the replacement for the big bang:

> The theory which replaces the Big Bang will treat the universe as an information processor. The universe is made of information and uses that information to define itself. Quantum mechanics and relativity pertain to the interactions of information, and the theory which finally unifies them will be information-based.
>
> The Big Bang doesn't describe an information-processing universe. Information processors don't blow up after one calculation. You don't toss your smart phone after just one text. The real universe - a non-Big Bang universe - recycles itself in a series of little bangs, lighting up old, burned-out galaxies which function as memory as needed. In rolling cycles of universal computation, old, collapsed, neutron-rich galaxies are lit up again, being hosed down by neutrinos (which have probably been channeled along cosmic filaments), turning some of their neutrons to protons, which provides fuel for stellar fusion. Each calculation takes a few tens of billions of years as newly lit-up galaxies burn their proton fuel in stars, sharing information and forming new associations in the active center of the universe before burning out again. This is ultra-deep time, with what looks like a Big Bang universe being only a long moment in a vast string of such moments across trillions or quadrillions of giga-years.

This is not a novel idea. There are a ton of variations of the "universe as computation" idea that have been proposed over the years. Just off the top of my head, I can rattle off variations that I've read (in decreasing order of interest) by Minsky (I can't find the paper at the moment; I read it back when I was in grad school), by Fredkin, by Wolfram, and by Langan. All of these theories assert, in one form or another, that our universe is either a massive computer or a massive computation, and that everything we can observe is part of a computational process.

It's a fascinating idea, and there are aspects of it that are really compelling. For example, the Minsky model has an interesting explanation for the speed of light as an absolute limit, and for time dilation. Minsky's model says that the universe is a giant cellular automaton. Each minimum quantum of space is a cell in the automaton. When a particle is located in a particular cell, that cell is "running" the computation that describes that particle. For a particle to move, the data describing it needs to get moved from its current location to its new location at the next time quantum. That takes some amount of computation, and a cell can only perform a finite amount of computation per quantum. The faster the particle moves, the more of its time quanta are dedicated to motion, and the less it has for anything else. The speed of light, in this theory, is the speed at which the full quantum for computing a particle's behavior is dedicated to nothing but moving it to its next location. It's very pretty. Intuitively, it works.
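Minsky's actual model is more subtle than this, but the budget-splitting intuition is easy to play with. Here's a deliberately crude toy of my own (not Minsky's construction): each tick, a cell has one unit of computation; motion at speed v consumes v of it, and whatever is left over advances the particle's internal clock. (The toy gives a linear slowdown rather than the Lorentz factor - it's an intuition pump, not physics.)

```python
def simulate(v, ticks):
    """Toy 1-D cellular-automaton particle.

    Each tick, the particle's cell has 1 unit of computation.
    Moving at speed v (cells per tick, 0 <= v <= 1) spends v of
    that unit on motion; the remainder advances the particle's
    internal clock. At v = 1 (the 'speed of light'), the whole
    budget goes to motion and the internal clock freezes."""
    position = 0.0
    proper_time = 0.0
    for _ in range(ticks):
        position += v            # budget spent on motion
        proper_time += 1.0 - v   # leftover budget runs the clock
    return position, proper_time
```

Running it, a particle at half the maximum speed experiences half as many internal ticks as a stationary one, and a particle at the maximum speed experiences none - the cartoon versions of time dilation and of light's frozen clock.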
That makes it an interesting idea. But the problem is, no one has come up with an actual working model. We have real observations of the behavior of the physical universe that no one has been able to describe using the cellular automaton model.

That's the problem with all of the computational hypotheses so far. They look really good in the abstract, but none of them come close to actually working in practice. A lot of people nowadays like to mock string theory, because it's a theory that looks really good but has no testable predictions. String theory can describe the behavior of the universe that we see. The problem with it isn't that there are things we observe in the universe that it can't predict, but that it can predict just about anything. There are a ton of parameters in the theory that can be shifted, and depending on their values, almost anything we could observe can be fit by string theory. The problem is twofold: we don't have any way (yet) of figuring out what values those parameters need to have to fit our universe, and we don't have any way (yet) of performing an experiment that tests a prediction of string theory that differs from the predictions of other theories.

As much as we enjoy mocking string theory for its lack of predictive value, the computational hypotheses are far worse! So far, no one has been able to come up with one that comes close to explaining all of the things we've already observed, much less to making predictions better than our current theories.

But just like he did with his "criticism" of the big bang, Mr. Rosner makes predictions without bothering to make them precise. There's no math to his prediction, because there's no content to his prediction. It doesn't mean anything. It's empty prose, proclaiming victory for an ill-defined idea on the basis of hand-waving and hype.

Boing-Boing should be ashamed for giving this bozo a platform.
https://www.transtutors.com/questions/exercise-11-9-evaluating-new-investments-using-return-on-investment-roi-and-residual-1389667.htm
# EXERCISE 11–9 Evaluating New Investments Using Return on Investment (ROI) and Residual Income [LO1, LO2]

Selected sales and operating data for three divisions of three different companies are given below:

|                                 | Division A | Division B  | Division C |
|---------------------------------|------------|-------------|------------|
| Sales                           | $6,000,000 | $10,000,000 | $8,000,000 |
| Average operating assets        | $1,500,000 | $5,000,000  | $2,000,000 |
| Net operating income            | $300,000   | $900,000    | $180,000   |
| Minimum required rate of return | 15%        | 18%         | 12%        |

Required:

1. Compute the return on investment (ROI) for each division, using the formula stated in terms of margin and turnover.
2. Compute the residual income for each division.
3. Assume that each division is presented with an investment opportunity that would yield a rate of return of 17%.
   a. If performance is being measured by ROI, which division or divisions will probably accept the opportunity? Reject? Why?
   b. If performance is being measured by residual income, which division or divisions will probably accept the opportunity? Reject? Why?
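The required computations are simple enough to sketch directly. The functions below - the standard margin × turnover decomposition of ROI, and residual income as income minus a capital charge - are my own worked sketch, not part of the exercise text:

```python
def roi(sales, avg_operating_assets, net_operating_income):
    """ROI stated in terms of margin and turnover:
    ROI = (NOI / Sales) * (Sales / Average operating assets)."""
    margin = net_operating_income / sales
    turnover = sales / avg_operating_assets
    return margin * turnover

def residual_income(avg_operating_assets, net_operating_income, min_rate):
    """Residual income = NOI minus the minimum required return
    on average operating assets."""
    return net_operating_income - min_rate * avg_operating_assets

# Division A: ROI = 5.00% margin * 4.0 turnover = 20%;  RI =  $75,000
# Division B: ROI = 9.00% margin * 2.0 turnover = 18%;  RI =       $0
# Division C: ROI = 2.25% margin * 4.0 turnover =  9%;  RI = -$60,000
```

For part 3: under ROI, only Division C (9% < 17%) would welcome the 17% project, since it would dilute A's 20% and B's 18%. Under residual income, a project is attractive whenever its return exceeds the division's minimum rate - so A (17% > 15%) and C (17% > 12%) accept, while B (17% < 18%) rejects.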
https://github.com/RiftValleySoftware/chameleon
# RiftValleySoftware / chameleon

The "Second-Level" Model Interaction Layer of the BAOBAB Server

## README.md

\page CHAMELEON CHAMELEON

# INTRODUCTION

CHAMELEON is a "first-layer" extension of \ref BADGER. While BADGER provides a low-level interface and abstraction over the databases, CHAMELEON starts to put those abstractions to work. \ref BADGER is the "First Layer Connection" to the data storage subsystem. It uses PHP PDO to abstract away the databases, and provides SQL-injection protection through the use of PHP PDO prepared statements.

## COLLECTIONS

CHAMELEON introduces the CO_Collection class (and some similar ones, all based on the tCO_Collection trait), which provides for a hierarchy. The collection classes can aggregate other instances (including other collections). Because collections are implemented using PHP traits, they are more like a "mixin" class than a straight hierarchy. Keep this in mind if you will be deriving from them.

## OWNER

The database has the concept of object "owners." These are numerical IDs assigned to the "owner" column in the data database rows. The "owner" is another row (the one with the referenced ID). The relationship is extremely simple, and is really meant as a "triage" for large datasets.

## PLACES

CHAMELEON has a specialization of the CO_LL_Location class (CO_Place), which adds address elements (and the ability to have Google geocoding/reverse geocoding applied). It further extends that for the United States, with CO_US_Place.

## KEY/VALUE PAIR STORAGE

CHAMELEON introduces the KEY/VALUE (sometimes known as DICTIONARY) pattern for storing arbitrary data (the CO_KeyValue class). The data can be quite large, as it is stored in the payload column.
The key needs to be unique within the database, but the access_class of the row is also figured into the calculation, so you have a bit of flexibility.

# IMPLEMENTATION

You implement CHAMELEON by setting up a pair of databases, referencing them via the CO_Config static class, and then instantiating CO_Chameleon.

# EXTENDING CHAMELEON

If you extend a class, you should keep the base class name (after the "CO_"), because we can do a wildcard lookup to get hierarchies. CO_Place, CO_KeyValue and CO_Collection / CO_Place_Collection / CO_US_Place_Collection are all designed to be extended, so a UK place might be "CO_UK_Place".

# LICENSE

© Copyright 2018, The Great Rift Valley Software Company

LICENSE: MIT License

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

The Great Rift Valley Software Company: https://riftvalleysoftware.com
http://quant.stackexchange.com/questions?page=10&sort=unanswered
# All Questions 214 views ### Tian third moment-matching tree with smoothing - implementation I was wondering if someone has an implementation of the Tian third moment-matching tree (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1030143) with smoothing in code (e.g. c++, vba, c#, etc.)? ... 276 views ### What does T statistics of Information Coefficient indicate? Hi I am looking for a clear explanation of T statistics concept. Especially in quantitative equity portfolio management context, what does T statistics of monthly Information Coefficient for one ... 22 views ### Changing timezones with historic forex data (Interactive Brokers API IBPy) I would like to be able to change the timezone for my requests to the IB API, how can I do this? I am writing in Python, and thus use the IBPy wrapper found here. Supposedly, the third argument of ... 28 views ### Constructing Swap Curve from LIBOR Say I'm considering a long maturity fixed rate swap, for instance 20 years paid semi annually. Now I want to find the fixed rate for this hypothetical swap. I understand that this fixed rate is going ... 21 views ### Where to find historical time series data for number of new investor accounts I am examining the impact of investor sentiment on the probability of stock market crises. I am constructing a composite measure of investor sentiment according to the methodology used in this paper ... 32 views ### Discount curve from spot rates for bond pricing I have a bond with the following cash flow and maturity: ... 35 views ### How to calculate the theta of a bond? For calculating P&L from interest rate risk, we often use PV01 to estimate the day over day P&L by multiplying PV01 with a change in curve. Is there any approach to calculate theta P&L in ... 31 views ### Problems in computing VaR with GARCH-GPD-copula approach I use a time-varying Gaussian copula (with GARCH-filtered standardized residuals modeled semiparametrically with Gaussian kernel interior and GPD tails, i.e. 
generalized pareto distributed) to ... 28 views ### Outlier removal, issue with TSO function I'm trying to detect outliers within a financial time series which represents the ratio of cash distributions to equity holders as a percentage operating earnings for the period. Visual inspection ... 28 views ### Where can be found the tick size list for stocks traded in NASDAQ and NYSE? Answering this question is relevant to assess the quality of a time series in order to observe whether the data vendor applies some rounding to the data or is more decimal are present than the actual ... 45 views ### What is the unconditional variance for a GARCH model? I want to use a Matlab script to calculate Heston Nandi GARCH prices. I found an appropriate script online and it asks for the "unconditional variance" as an input. How do I calculate the appropriate ... 27 views ### US Treasury interest rate swaps I know that Bloomberg will give me the swap rates for Treasury 30's-5's, but I don't have a Bloomberg. Can anyone direct me to a source? 27 views ### Levered beta with changing equity/debt ratios I know how to calculate a bottom up levered beta for a privately held and not publicly traded company with Hamada (Proof of Hamada's Formula (Relationship between levered and unlevered beta)) and ... 55 views ### Stock market cash flow I want to understand better cash flow of stock market and it's participants, but could not find any reasonable information online, hope more experienced people here could help. Money IN flow: (1)... 37 views ### Using CME DV01 to predict Futures price at 0.00% Yield DV01 is published at CME Group for the cheapest-to-deliver bond here: http://www.cmegroup.com/trading/interest-rates/invoice-spread-calculator.html. If my goal is to get an approximation where the ... 18 views ### Relationship between in-sample and out-sample periods length I have two general questions regarding "in-sample fitting vs. out-of-sample backtesting" kind of analyses. 
Is there any relationship between the length of the data collected for in-sample fitting ($a$)... 31 views ### Can trinomial trees be used to model subdiffusion? I am modeling a sub-diffusive process where the particles follow geometric Brownian motion (GBM) with movement occurring after randomly distributed waiting times. I have set this up as a simulation ... 33 views ### Is there a public SQL database for all US stocks? I would like to search through all US stocks (via SQL or another technology), and filter them by certain parameters (similar to a stock screener, but through an API rather than a user interface). Is ... 25 views ### Forecasting conditional returns in DCC-GARCH-copula approach in R anyone who could help me interpreting and modifying this code? I have a dataset and want to reserve the last 100 returns for out-of-sample analysis. After specifying and fitting the garch-spd-copula, ... 20 views ### Does OpenFIGI have precanned files? From what I understand, Bloomberg Open Symbology is now transitioning to OpenFigi: http://bsym.bloomberg.com/sym/ "BSYM.bloomberg.com will be SHUT-OFF on June 1st. It is being replaced by ... 73 views ### Bonds with embedded options pricing via binomial model Notation: t - time; G(t) - zero-coupon yield curve; $r$, $r_d$, $r_u$ - interest rates. The task is to find market price of a bond for today, while knowing the price of a number of other bonds. ... 28 views ### Introducing 1bp shocks to yield curve (and interpolation consequences) Let us assume we have a LIBOR 3M curve and that I would like to introduce a small shock up/down of 1bp at a certain point along the curve. I am trying to find out what the best and most efficient way ... 34 views ### Use of real-world probabilities in options pricing: binary event with continuous effect Let's say I have to price options on instrument X with a multitude of strikes. 
For simplicity, assume that X only makes one move during the options' lifetime, and this move is affected by some binary "...
10 views

### How to get daily OHLC (fints) from minutes OHLC (fints) in MatLab?
I have a minutes OHLC time series stored in fints object, how can I get a new fints object which contains daily OHLC? What is the easiest way to do it?
31 views

### (Reproducible example) Conditional returns in GARCH-EVT-Copula context (with R)
I'm estimating a time-varying correlation matrix for the normal copula using the rmgarch package from R. I've found this code in the rmgarch.tests folder. I use the ...
16 views

### Liquidity horizons of risk factors categories
I'm reading the consultative document of the BCBS on the Fundamental Review of the Trading Book: http://www.bis.org/publ/bcbs265.pdf Table 2 on page 16 shows the liquidity horizons for 5 broad risk ...
21 views

### Bootstrapping bond spreads as in the standard CDS model
Suppose that we have a spread curve $\boldsymbol{s}:=(s_1, ..., s_n)$, where $s_i$ are CDS par spreads. Moreover, assume the standard ISDA model framework, i.e. piecewise constant forward / hazard ...
13 views

### Are the returns in this regression signed returns?
In this paper about combining multiple alphas are the returns signed returns? if not wouldn't they be mean zero? Also, it mentions "realized alpha returns" - does that just mean "realized" past alpha ...
47 views

### Exploding Libor Rates in Libor Market Model
I have implemented the Libor Market Model in Matlab. When I generate a number of paths, I notice that some of them explode. Does anybody have an idea what could cause this? I already tried solving ...
31 views

### The best process for foreign exchange rate
I have a simple research project and I need to explain a behavior of a foreign exchange rate. Could you propose a stochastic process without jumps so that it could be estimated with QMLE? Is GBM ...
41 views

### how to mix trading signals for the same product?
I have multiple trading signals developed using cointegration on the same stock using various correlated assets. Is there a mathematical way to combine them to achieve better entry/exit points and ...
34 views

### How to backtest strategy in portfolio of stocks using SIT R?
I am creating and testing strategies in R code and using systemic investor toolbox(SIT) package as the backtesting tool. I copied a SIT backtesting code from a website and made small changes to make ...
33 views

### Account for empirical relationship between signal and market data
I have two monthly time series : one is a 'signal', on which I will base my decision to buy or short-sell, and the second one is the time serie of a given asset's price. I have implemented this ...
54 views

### Backtesting Long/Short Market Neutral Z-Score Strategy with Custom Factors and Custom Stock Universe
So I've managed to backtest simple strategies, like MA, RSI and some fundamental ones (P/E ratios etc) but Im stuck at my last strategy. Here is some information: Tools: Excel and Python (also a ...
18 views

### Should the number of Markowitz Optimization steps be counted as backtest trials?
I'm backtesting a strategy that involves monthly investments in a few stocks out of a given set, that is, each month some of the stocks are shortlisted from an index and a long position is taken in ...
33 views

### Model Free VIX Calculation in Python
I have previously seen this implementation and had meant to replicate it, but can't find it any longer. Does anyone know of a python implementation of the CBOE Volatility Index? Yes, the white paper ...
43 views

### Transition Between Volatility Regimes
Emanuel Derman wrote a great paper in 1999 about volatility regimes and the adjustments the market makes during these periods (sticky strike, sticky implied tree, sticky delta, etc). Has any ...
46 views

22 views

### How to calculate monthly Return from a Momentum Strategy with overlapping Holdingperiods?
I replicate a Momentum Strategy from Rey and Schmid (2007) "Feasible momentum strategies" based on the idea from Jegadeesh and Titman (1993). I only buy the single stock with the highest past return ...
24 views

### student-t asset path
I am trying to simulate an asset path based on a t-distribution. I found a lot of ressources and the fact that it will be difficult to do a path. But now I changed my Geometric Brownian Motion ...
13 views

### How to calculate optimal monthly withdrawals from an investment with compound interest
I have 1.25M dollars. I want to put it in an investment with 60% annual return paid monthly and re-invest the interest to achieve compound interest. After 15 years, my principal would have grown to ...
26 views

22 views

### Interest rates - Swaptions implied volatility - Volatility anchoring with Black and with normal volatilities
In a LMM+ with displacement factor a volatility anchoring technique is used, i.e. a long term volatility assumptions is applied, derived from historic time series. Should I adjust this historic ...
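One of the questions above, converting minute OHLC bars into daily OHLC bars, has a compact answer outside of MatLab's fints as well; the aggregation logic (first open, max high, min low, last close) is the same in any tool. A sketch in pandas, with made-up minute data standing in for the real series:

```python
import pandas as pd

# One trading day of fake minute bars; in practice this is the minute OHLC series.
idx = pd.date_range("2016-06-20 09:30", periods=390, freq="min")
minute = pd.DataFrame({
    "open":  range(390),
    "high":  [v + 2 for v in range(390)],
    "low":   [v - 2 for v in range(390)],
    "close": [v + 1 for v in range(390)],
}, index=idx)

# Daily OHLC: first open, max high, min low, last close per calendar day.
daily = minute.resample("1D").agg(
    {"open": "first", "high": "max", "low": "min", "close": "last"}
).dropna()

print(daily)
```

In MatLab the equivalent route is to aggregate the fints data by day with the same four rules; the pandas version above is only an illustration of the logic.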
2016-06-24 20:20:30
https://zh.wikipedia.org/wiki/%E8%A7%A3%E6%9E%90%E5%87%A0%E4%BD%95
# Analytic geometry

In 1637, Descartes presented the basic method of analytic geometry in "Geometry" (La Géométrie), an appendix to his Discourse on the Method. This French work, written from a philosophical point of view, provided the foundation on which Newton and Leibniz each later built the calculus.

## Basic theory

### Distance and angle

$d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2},\!$

$\theta = \arctan(m)\!$

### Transformations

a) y = f(x) = |x|       b) y = f(x+2)       c) y = f(x)-3       d) y = 1/2 f(x)

$x^2+y^2-1=0$

• Changing $x$ to $x-h$ moves the graph $h$ units to the right.
• Changing $y$ to $y-k$ moves the graph $k$ units up.
• Changing $x$ to $x/b$ stretches the graph horizontally by a factor of $b$ (imagine $x$ being dilated).
• Changing $y$ to $y/a$ stretches the graph vertically.
• Changing $x$ to $x\cos A+ y\sin A$ and $y$ to $-x\sin A + y\cos A$ rotates the graph by an angle $A$.

### Intersections

The intersection of $P$ and $Q$ can be found by solving the equations simultaneously:

$x^2+y^2 = 1$

$(x-1)^2+y^2 = 1$

From the first equation, $x^2+y^2 = 1$, we get

$y^2=1-x^2$

Substituting this value of $y^2$ into the other equation:

$(x-1)^2+(1-x^2)=1$

Then solve for $x$:

$x^2 -2x +1 +1 -x^2 =1$

$-2x = -1$

$x=1/2$

$(1/2)^2+y^2 = 1$

$y^2 =3/4$

$y = \frac{\pm \sqrt{3}}{2}$

$\left(1/2,\frac{+ \sqrt{3}}{2}\right) \;\; \mathrm{and} \;\; \left(1/2,\frac{-\sqrt{3}}{2}\right)$

## Topics

$Ax^2 + Bxy + Cy^2 +Dx + Ey + F = 0$.

If the $Bxy$ term is present, a rotation is often applied. These problems frequently involve linear algebra.

## Examples

$F\left(\frac{a}{2},0\right)$, $G\left(\frac{a+b}{2},\frac{e}{2}\right)$, $H\left(\frac{b+c}{2},\frac{e+f}{2}\right)$, $I\left(\frac{c+d}{2},\frac{f+g}{2}\right)$, $X\left(\frac{a+b+c}{4},\frac{e+f}{4}\right)$, and $Y\left(\frac{a+b+c+d}{4},\frac{e+f+g}{4}\right).$

$AE=\sqrt{d^2+g^2}$

$XY=\sqrt{\frac{d^2}{16}+\frac{g^2}{16}}=\frac{\sqrt{d^2+g^2}}{4}.$

$AE\equiv 0\pmod{4}$ (see congruence), therefore $AE=4$.

## Notes

1. ^ Boyer, Carl B.. The Age of Plato and Aristotle. A History of Mathematics Second Edition. John Wiley & Sons, Inc. 1991: 94–95. ISBN 0-471-54397-7. "Menaechmus apparently derived these properties of the conic sections and others as well. Since this material has a strong resemblance to the use of coordinates, as illustrated above, it has sometimes been maintained that Menaechmus had analytic geometry. Such a judgment is warranted only in part, for certainly Menaechmus was unaware that any equation in two unknown quantities determines a curve.
In fact, the general concept of an equation in unknown quantities was alien to Greek thought. It was shortcomings in algebraic notations that, more than anything else, operated against the Greek achievement of a full-fledged coordinate geometry." 2. ^ Boyer, Carl B.. Apollonius of Perga. A History of Mathematics Second Edition. John Wiley & Sons, Inc. 1991: 142. ISBN 0-471-54397-7. "The Apollonian treatise On Determinate Section dealt with what might be called an analytic geometry of one dimension. It considered the following general problem, using the typical Greek algebraic analysis in geometric form: Given four points A, B, C, D on a straight line, determine a fifth point P on it such that the rectangle on AP and CP is in a given ratio to the rectangle on BP and DP. Here, too, the problem reduces easily to the solution of a quadratic; and, as in other cases, Apollonius treated the question exhaustively, including the limits of possibility and the number of solutions." 3. ^ Boyer, Carl B.. Apollonius of Perga. A History of Mathematics Second Edition. John Wiley & Sons, Inc. 1991: 156. ISBN 0-471-54397-7. "The method of Apollonius in the Conics in many respects are so similar to the modern approach that his work sometimes is judged to be an analytic geometry anticipating that of Descartes by 1800 years. The application of references lines in general, and of a diameter and a tangent at its extremity in particular, is, of course, not essentially different from the use of a coordinate frame, whether rectangular or, more generally, oblique. Distances measured along the diameter from the point of tangency are the abscissas, and segments parallel to the tangent and intercepted between the axis and the curve are the ordinates. The Apollonian relationship between these abscissas and the corresponding ordinates are nothing more nor less than rhetorical forms of the equations of the curves. 
However, Greek geometric algebra did not provide for negative magnitudes; moreover, the coordinate system was in every case superimposed a posteriori upon a given curve in order to study its properties. There appear to be no cases in ancient geometry in which a coordinate frame of reference was laid down a priori for purposes of graphical representation of an equation or relationship, whether symbolically or rhetorically expressed. Of Greek geometry we may say that equations are determined by curves, but not that curves are determined by equations. Coordinates, variables, and equations were subsidiary notions derived from a specific geometric situation; [...] That Apollonius, the greatest geometer of antiquity, failed to develop analytic geometry, was probably the result of a poverty of curves rather than of thought. General methods are not necessary when problems concern always one of a limited number of particular cases." 4. ^ 4.0 4.1 4.2 Boyer. The Arabic Hegemony. 1991: 241–242. "Omar Khayyam (ca. 1050–1123), the "tent-maker," wrote an Algebra that went beyond that of al-Khwarizmi to include equations of third degree. Like his Arab predecessors, Omar Khayyam provided for quadratic equations both arithmetic and geometric solutions; for general cubic equations, he believed (mistakenly, as the sixteenth century later showed), arithmetic solutions were impossible; hence he gave only geometric solutions. The scheme of using intersecting conics to solve cubics had been used earlier by Menaechmus, Archimedes, and Alhazan, but Omar Khayyam took the praiseworthy step of generalizing the method to cover all third-degree equations (having positive roots). .. For equations of higher degree than three, Omar Khayyam evidently did not envision similar geometric methods, for space does not contain more than three dimensions, ... One of the most fruitful contributions of Arabic eclecticism was the tendency to close the gap between numerical and geometric algebra. 
The decisive step in this direction came much later with Descartes, but Omar Khayyam was moving in this direction when he wrote, "Whoever thinks algebra is a trick in obtaining unknowns has thought it in vain. No attention should be paid to the fact that algebra and geometry are different in appearance. Algebras are geometric facts which are proved.""
5. ^ Glen M. Cooper (2003). "Omar Khayyam, the Mathematician", The Journal of the American Oriental Society 123.
6. ^ Stillwell, John. Analytic Geometry. Mathematics and its History Second Edition. Springer Science + Business Media Inc. 2004: 105. ISBN 0-387-95336-1. "the two founders of analytic geometry, Fermat and Descartes, were both strongly influenced by these developments."
7. ^ Cooke, Roger. The Calculus. The History of Mathematics: A Brief Course. Wiley-Interscience. 1997: 326. ISBN 0-471-18082-3. "The person who is popularly credited with being the discoverer of analytic geometry was the philosopher René Descartes (1596–1650), one of the most influential thinkers of the modern era."
8. ^ 8.0 8.1 Katz 1998, pg. 442
9. ^ Katz 1998, pg. 436

## References

### Books

• Katz, Victor J., A History of Mathematics: An Introduction (2nd Ed.), Reading: Addison Wesley Longman, 1998, ISBN 0-321-01618-1
• Boyer, Carl B., History of Analytic Geometry, Dover Publications, ISBN 978-0486438320
• Cajori, Florian, A History of Mathematics, AMS, ISBN 978-0821821022
• Struik, D. J., A Source Book in Mathematics, 1200-1800, Harvard University Press, ISBN 978-0674823556

### Articles

• Boyer, Carl B. Analytic Geometry: The Discovery of Fermat and Descartes, Mathematics Teacher 37, no. 3 (1944): 99-105
• Boyer, Carl B., Johann Hudde and space coordinates
• Bissell, C. C., Cartesian geometry: The Dutch contribution
• Pecl, J., Newton and analytic geometry
• Coolidge, J. L., The Beginnings of Analytic Geometry in Three Dimensions
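The two-circle intersection worked out in the Intersections section above can be checked numerically; the short script below (not part of the article) repeats the same substitution step in plain Python:

```python
import math

# Solve x^2 + y^2 = 1 and (x-1)^2 + y^2 = 1, as in the worked example.
# Subtracting the equations eliminates y^2:  -2x + 1 = 0  =>  x = 1/2.
x = 0.5
y = math.sqrt(1 - x**2)  # y = +/- sqrt(3)/2
points = [(x, y), (x, -y)]

# Both points must lie on both circles.
for px, py in points:
    assert abs(px**2 + py**2 - 1) < 1e-12
    assert abs((px - 1)**2 + py**2 - 1) < 1e-12

print(points)
```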
2015-03-04 17:32:57
https://questions.examside.com/past-years/jee/question/the-self-induced-emf-of-a-coil-is-25-volts-when-the-current-jee-main-physics-units-and-measurements-xs2pog0dac9agjfy
1 JEE Main 2019 (Online) 10th January Evening Slot +4 -1

The self-induced emf of a coil is 25 volts. When the current in it is changed at a uniform rate from 10 A to 25 A in 1 s, the change in the energy of the inductance is:

A 740 J
B 637.5 J
C 540 J
D 437.5 J

2 JEE Main 2019 (Online) 10th January Evening Slot +4 -1

The electric field of a plane polarized electromagnetic wave in free space at time t = 0 is given by the expression $$\overrightarrow E \left( {x,z} \right) = 10\widehat j\cos \left[ {\left( {6x + 8z} \right)} \right].$$ The magnetic field $$\overrightarrow B$$(x, z, t) is given by (c is the velocity of light):

A $${1 \over c}\left( {6\hat k + 8\widehat i} \right)\cos \left[ {\left( {6x + 8z - 10ct} \right)} \right]$$
B $${1 \over c}\left( {6\widehat k - 8\widehat i} \right)\cos \left[ {\left( {6x + 8z - 10ct} \right)} \right]$$
C $${1 \over c}\left( {6\hat k + 8\widehat i} \right)\cos \left[ {\left( {6x - 8z + 10ct} \right)} \right]$$
D $${1 \over c}\left( {6\hat k - 8\widehat i} \right)\cos \left[ {\left( {6x - 8z + 10ct} \right)} \right]$$

3 JEE Main 2019 (Online) 10th January Morning Slot +4 -1

If the magnetic field of a plane electromagnetic wave is given by (the speed of light = $$3 \times 10^8$$ m/s)

B = $$100 \times 10^{-6}\sin \left[ {2\pi \times 2 \times {{10}^{15}}\left( {t - {x \over c}} \right)} \right]$$

then the maximum electric field associated with it is:

A $$4.5 \times 10^4$$ N/C
B $$4 \times 10^4$$ N/C
C $$6 \times 10^4$$ N/C
D $$3 \times 10^4$$ N/C

4 JEE Main 2019 (Online) 9th January Evening Slot +4 -1

A power transmission line feeds input power at 2300 V to a step-down transformer with its primary windings having 4000 turns. The output power is delivered at 230 V by the transformer.
If the current in the primary of the transformer is 5 A and its efficiency is 90%, the output current would be:

A 50 A
B 45 A
C 35 A
D 25 A
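The numerical answers above can be verified with short calculations. The sketch below is not from the original exam page; it assumes the standard formulas emf = L(dI/dt), E = (1/2)LI^2, E_max = cB_max, and P_out = (efficiency)P_in:

```python
# Q1: self-inductance from the induced emf, then the change in stored energy.
emf, dI, dt = 25.0, 25.0 - 10.0, 1.0
L = emf / (dI / dt)                    # L = 5/3 H
dE = 0.5 * L * (25.0**2 - 10.0**2)     # dE = (1/2) L (I2^2 - I1^2), ~437.5 J, option D
print(dE)

# Q3: peak E field of an EM wave from its peak B field, E_max = c * B_max.
c, B_max = 3e8, 100e-6
E_max = c * B_max                      # ~3e4 N/C, option D
print(E_max)

# Q4: step-down transformer output current at 90% efficiency.
V_in, I_in, V_out, eta = 2300.0, 5.0, 230.0, 0.90
I_out = eta * V_in * I_in / V_out      # ~45 A, option B
print(I_out)
```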
2023-03-25 14:32:21
http://www.gamedev.net/index.php?app=forums&module=extras&section=postHistory&pid=4903645
### #Actualfastcall22 Posted 17 January 2012 - 09:24 AM

EDIT: Since then I tested it with part.SetPosition(x,y); and App.Draw(part); commented out, so they are not drawn, and still Particle::show takes up 61% of the resources (was 74 previously)... with that commented out, the SDL and SFML Particle::show() are identical, except in SDL they actually get drawn.

This would suggest that the API is the bottleneck (as indicated earlier in the thread). If I recall correctly, SFML uses OpenGL 1.1 immediate-mode calls, which would mean for every particle rendered, there's a call to glBindTexture, glPushMatrix/glPopMatrix, and glBegin/glEnd. For something like a particle system, the overhead in each of these calls, while not significant on their own, can snowball.

To reduce the overhead from texture switching, place all of your particle textures on one sheet. Since SFML doesn't seem to have any feature that will allow us to assign a part of an Image to a Sprite, you'll need to do the rendering yourself through raw OpenGL calls.
By doing so, you can optimize out some OpenGL calls and do batching, among other things:

    glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA ); // sf::BlendMode::Alpha
    Particle::particleSheet.Bind(); // sf::Image::Bind, essentially calls glBindTexture( GL_TEXTURE_2D, particleSheet.handle )

    glBegin( GL_QUADS );
    for ( Particle& p : v_particles ) {
        const Rect2f& texRect = getTextureRect( p.getTextureIdx() );
        Vector2f coord[2] = {
            p.getPosition() + Vector2f( -1, -1 ) * (p.getScale() / 2.f),
            p.getPosition() + Vector2f(  1,  1 ) * (p.getScale() / 2.f)
        };

        glColor4ub( p.color().r, p.color().g, p.color().b, p.color().a );
        glTexCoord2f( texRect.left,  texRect.bottom ); glVertex2f( coord[0].x, coord[0].y );
        glTexCoord2f( texRect.right, texRect.bottom ); glVertex2f( coord[1].x, coord[0].y );
        glTexCoord2f( texRect.right, texRect.top );    glVertex2f( coord[1].x, coord[1].y );
        glTexCoord2f( texRect.left,  texRect.top );    glVertex2f( coord[0].x, coord[1].y );
    }
    glEnd();

For further optimizations, you can use VBOs: use the CPU to update all the vertices of all the particles, then send the entire buffer to the GPU in one call.
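The "update all vertices on the CPU, then upload once" idea at the end of the post can be illustrated outside of C++. Below is a numpy sketch of just the buffer-fill step; the particle fields and the function name are assumptions, and the actual glBufferData/glDrawArrays upload is only indicated in comments:

```python
import numpy as np

def fill_quad_buffer(positions, scales):
    """Build one interleaved x,y vertex buffer with 4 quad corners per particle."""
    n = len(positions)
    # Corner offsets in the same order as the glVertex2f calls above.
    corners = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], dtype=np.float32)
    buf = np.empty((n, 4, 2), dtype=np.float32)
    for i, (pos, scale) in enumerate(zip(positions, scales)):
        buf[i] = np.asarray(pos, dtype=np.float32) + corners * (scale / 2.0)
    return buf.reshape(-1)  # flat float32 array, ready for one upload call

verts = fill_quad_buffer([(10.0, 20.0), (0.0, 0.0)], [2.0, 4.0])
print(verts.shape)  # 2 particles * 4 corners * 2 floats each
# In the real renderer, something like:
#   glBufferData(GL_ARRAY_BUFFER, verts.nbytes, verts, GL_STREAM_DRAW)
# followed by a single glDrawArrays call replaces the per-particle immediate mode.
```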
2013-12-10 06:46:09
https://www.jeehp.org/DOIx.php?number=139&viewtype=pubreader
# Cross-platform digital assessment forms for evaluating surgical skills ## Article information J Educ Eval Health Prof. 2015;12.13 Publication date (electronic) : 2015 April 17 doi : https://doi.org/10.3352/jeehp.2015.12.13 Department of Otorhinolaryngology–Head & Neck Surgery, Rigshospitalet, Copenhagen, Denmark * Corresponding email: stevenarild@gmail.com Received 2014 September 3; Accepted 2015 April 13. ## Abstract A variety of structured assessment tools for use in surgical training have been reported, but extant assessment tools often employ paper-based rating forms. Digital assessment forms for evaluating surgical skills could potentially offer advantages over paper-based forms, especially in complex assessment situations. In this paper, we report on the development of cross-platform digital assessment forms for use with multiple raters in order to facilitate the automatic processing of surgical skills assessments that include structured ratings. The FileMaker 13 platform was used to create a database containing the digital assessment forms, because this software has cross-platform functionality on both desktop computers and handheld devices. The database is hosted online, and the rating forms can therefore also be accessed through most modern web browsers. Cross-platform digital assessment forms were developed for the rating of surgical skills. The database platform used in this study was reasonably priced, intuitive for the user, and flexible. The forms have been provided online as free downloads that may serve as the basis for further development or as inspiration for future efforts. In conclusion, digital assessment forms can be used for the structured rating of surgical skills and have the potential to be especially useful in complex assessment situations with multiple raters, repeated assessments in various times and locations, and situations requiring substantial subsequent data processing or complex score calculations. 
Keywords: Numerous structured assessment tools have been introduced for the evaluation of surgical skills, applying a multitude of different scales and rating techniques. The data collection forms used in such tools provide a structured framework for the evaluator and are commonly paper-based. The traditional paper-based rating forms are tried and tested, allow fast assessment, and are intuitive for raters regardless of their digital readiness. However, paper-based rating forms can involve a time-consuming process of manual score calculation. This difficulty can be addressed by using image-scanning techniques such as optical mark recognition, although such techniques require dedicated equipment. Only a few reports have described digital rating forms [1-3], with the exception of reports dealing with computer-based testing as such. Digital assessment forms for evaluating surgical performance might be considered more inconvenient, insecure, and vulnerable in comparison with paper-based forms, and moreover, the evaluator must be familiar with the necessary electronic devices and platforms. Nevertheless, digital forms could have advantages over paper-based forms, especially in complex assessment situations where multiple performances and raters need to be managed, assessments occur at a range of times and/or locations, or when substantial subsequent data processing is needed. Digital assessment forms offer the possibility of connecting to an online database, which could support simultaneous assessment using multiple devices and evaluators as well as providing immediate scoring. A range of software platforms and technologies could potentially be used for digital assessment forms, and the advantages and disadvantages of currently available electronic form management tools have recently been described [1]. 
In contrast to other reports on the development of digital assessment forms that required the integration of multiple types of tools and surveys for complex workplace-based or practice-based assessments [1,2], in this case, it was only necessary to assess saved final-product performances from a virtual-reality otology simulator using one assessment form that allowed multiple raters on different platforms. Therefore, we developed a digital assessment form for this use. This led to the development of several other examples of digital assessment forms for evaluating skills in the context of surgical training, including both technical and non-technical skills. In the following sections, the development and testing of these forms will be described, and their application in the assessment of surgical skills will then be discussed. First, it was necessary to choose a database platform for the digital assessment forms. We chose FileMaker Pro 13 (FileMaker Inc., Santa Clara, CA, USA), because it supports different desktop platforms (Windows 7/8, OSX 10.7 or newer) as well as the iPad and iPhone via the FileMaker Go app. The database can either be stored on the device itself or be hosted on a local or external server, allowing for web-based access through most modern web browsers. The estimated costs (online prices, June 2014) are US$329 for one license for FileMaker Pro 13 (a free trial is available), as well as US$132 or more annually for external database hosting or US$348 for the software necessary to establish a local server. The FileMaker Go app for iPhone/iPad is currently free. We found that the FileMaker platform provided excellent flexibility in designing the database, displaying the fields, and designing the visual layout of the forms, and therefore it was easy to develop new forms. In our experience, developing a new basic rating form from scratch took from two to eight hours.
Good online documentation exists for this software, as well as excellent beginner's guides, and external consultants can be hired if more complex solutions are needed. These factors facilitate the development of digital assessment forms that meet local requirements. Next, we developed working examples of several rating forms—including a range of rating structures—for some of the assessment tools that have been reported in the fields of surgery and otorhinolaryngology. An overview of the newly developed forms is given in Table 1, and an example of one of the developed forms can be seen in Fig. 1. The forms have been made public for free download [4] and they can be used as is, modified as needed, or serve as inspiration for novel work. In each form, the time of assessment and the evaluator identification (ID) is automatically saved with each assessment. The evaluator needs to enter the participant ID and the case ID if relevant. The forms have checkboxes, drop-down menus, radio buttons, and/or free text fields for rating and feedback. Cumulative scores and sub-scores are automatically calculated and updated as the evaluator performs the assessment, and these scores are stored along with the entered data. The data can later be exported to a range of different formats, including Excel® (Microsoft, Redmond, WA, USA) and comma-separated files, for further processing.

Table 1. Examples of digital assessment forms for some structured assessment tools for the evaluation of surgical skill

Fig. 1. The digital Objective Structured Assessment of Ultrasound Skills (OSAUS) assessment form in FileMaker Go on iPad Mini.

Finally, the digital assessment forms were tested with FileMaker Pro 13 on Windows 7 and OSX 10.7, and FileMaker 13 Go on iPhone and iPad running iOS 7. The forms were tested using both device-stored and externally hosted databases.
Using WebDirect access to the external server, testing was also performed with a range of common web browsers (Firefox, Chrome, Safari, and Internet Explorer) on the abovementioned devices. Overall, the digital assessment forms proved stable and consistent across platforms and devices. However, if web-based access is likely to be the primary mode of use, then it would be preferable to optimize the forms accordingly, as the layout can change slightly in the web browser compared to the desktop software and mobile app. As well, some problems and crashes occurred using web-based access from handheld devices. Currently, the only handheld devices on which these forms are accessible are the iPhone and iPad, for which the FileMaker 13 Go app is available. Some general issues should be considered when adopting digital assessment forms. First, each rating form needs to be tailored to the specific assessment situation. Second, evaluators need to be introduced to the digital rating forms and the device(s) that will be used. Moreover, combining data from more than one device or rater will need to be done manually unless a server-hosted database with online access is used. Furthermore, regular backup and device management should also be considered. Finally, as technology develops and new devices and platforms emerge, the digital assessment forms will need to be updated, resulting in continued development and maintenance costs. The digital assessment forms described in this report have been developed and tested in a local, controlled, and no-stakes setting for evaluating recorded performances. The next step would be applying these digital assessment forms to the live evaluation of surgical performance, in a real-life setting and in high-stakes assessments. 
Doing so would require significant experience with the digital platform in the local institution, and it would be necessary for the local setup to have been thoroughly tested and proved stable; for example, if a server-based deployment is chosen, Internet connectivity must be assured. We therefore recommend implementing digital assessment forms for performance rating after the consideration of institutional needs, equipment, expertise, and resources, and after the advantages and disadvantages have been weighed in comparison with paper-based rating forms. Further research into the use of digital assessment forms for evaluating surgical skills compared to paper-based rating forms will help determine whether digital assessment forms prove to be timesaving, reliable, and feasible.

## SUPPLEMENTARY MATERIAL

Audio recording of the abstract.

## References

1. Mooney JS, Cappelli T, Byrne-Davis L, Lumsden CJ. How we developed eForms: an electronic form and data capture tool to support assessment in mobile medical education. Med Teach 2014;36:1032–1037. http://dx.doi.org/10.3109/0142159X.2014.907490.
2. Stutsky B. Electronic management of practice assessment data. Clin Teach 2014;11:381–386. http://dx.doi.org/10.1111/tct.12159.
3. Subhi Y, Todsen T, Konge L. An integrable, web-based solution for easy assessment of video-recorded performances. Adv Med Educ Pract 2014;5:103–105. http://dx.doi.org/10.2147/AMEP.S62277.
4. Cross-platform digital assessment forms for download [Internet]. 2014. [Cited 2014 Sep 3]. Available from http://otonet.dk/assessment.
5. Andersen SAW, Cayé-Thomasen P, Sørensen MS. Mastoidectomy performance assessment of virtual simulation training using final-product analysis. Laryngoscope 2015;125:431–435. http://dx.doi.org/10.1002/lary.24838.
6. Spanager L, Beier-Holgersen R, Dieckmann P, Konge L, Rosenberg J, Oestergaard D. Reliable assessment of general surgeons' non-technical skills based on video-recordings of patient simulated scenarios.
Am J Surg 2013;206:810–817. http://dx.doi.org/10.1016/j.amjsurg.2013.04.002.
7. Todsen T, Tolsgaard MG, Olsen BH, Henriksen BM, Hillingsø JG, Konge L, Jensen ML, Ringsted C. Reliable and valid assessment of point-of-care ultrasonography. Ann Surg 2015;261:309–315. http://dx.doi.org/10.1097/SLA.0000000000000552.
8. Stewart CM, Masood H, Pandian V, Laeeq K, Akst L, Francis HW, Bhatti NI. Development and pilot testing of an objective structured clinical examination (OSCE) on hoarseness. Laryngoscope 2010;120:2177–2182. http://dx.doi.org/10.1002/lary.21095.

## Article information

### Fig. 1.

The digital Objective Structured Assessment of Ultrasound Skills (OSAUS) assessment form in FileMaker Go on iPad Mini.

### Table 1.

Examples of digital assessment forms for some structured assessment tools for the evaluation of surgical skill

| Assessment tool | Assessment type | Structure/scales | Special features of the digital assessment form |
| --- | --- | --- | --- |
| Modified Welling Scale (WS) [5] | Final-product assessment of mastoidectomy performance | Dichotomous rating of 25 items (1 = adequate / 0 = inadequate) | Automatic calculation of total score |
| Non-Technical Skills for Surgeons in Denmark (NOTSSdk) [6] | Video-based assessment of non-technical performance of general surgeons | 5-point Likert-like rating scale (very poor to very good) for 4 main categories and 13 sub-elements | None |
| | | 7-point Likert-like rating scale (very poor to very good) for global rating score | Feedback notes for each of the 13 sub-elements and for global feedback |
| Objective Structured Assessment of Ultrasound Skills (OSAUS) [7] | Video-based assessment of point-of-care ultrasonography performance | 5 elements rated from 1-5 using Objective Structured Assessment of Technical Skills-like scales with descriptions of scores | Automatic calculation of total score |
| Standardized Patient History Taking and Physical Examination Checklist in Hoarseness [8] | Standardized patient assessment of history taking and physical examination | Checklist with 18 items on history taking and 12 items on physical examination | Automatic calculation of sub-scores (sum and percentage) and total score (sum and percentage) |
https://www.biostars.org/p/316514/
How to follow the flow of a cell signaling network?

Asked 3.4 years ago by Spacebio

I have a large igraph as follows:

IGRAPH f3064a8 DN-- 4482 17489 -- + attr: name (v/c), id (v/c), symbol (v/c), description (v/c), pathway (v/c), activationSignal (e/c), status (e/c)

After that, I want to subgraph a query with genes of interest in order to know which pathways are active. A single node in g might participate in multiple pathways, but V(g)$description displays the "function" of the gene (i.e. TF, Receptor, Protein...). Based on that information, which is the best approach?

1. Subgraph specifically the genes of interest and, based on that, obtain the V(g)$pathway (pathways) flow. This means: the pathways containing just the genes of interest.

2. Subgraph the neighborhood of the genes of interest (order = 1) and, based on that, obtain the V(g)$pathway[grepl('Receptor', V(g)$description)] (receptors for pathways) flow. This means: obtain the neighbor receptors which activate pathways.

Any help is much appreciated!

igraph R signaling

Comment: If you want to determine if a group of genes represents a specific complex or process, why not use GO term enrichment analysis? What do your edges represent in this graph?

Comment: I created a cell signaling network enclosing 80 pathways. The group of genes representing a complex process is already determined. What I want to know is which is better: to determine the pathways by the edge interactions, or by the nodes comprised by that group of genes?
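For what it's worth, option 2 (take the order-1 neighborhood of the genes of interest, then read off the pathways of any receptor-type neighbors) can be sketched without igraph. The graph, gene names, and attribute values below are invented toy data for illustration, not your actual network:

```python
# Toy directed signaling graph: adjacency list plus per-node annotations,
# mimicking the vertex attributes (description, pathway) in the question.
edges = {
    "EGF":   ["EGFR"],
    "EGFR":  ["RAS"],
    "RAS":   ["RAF"],
    "TNF":   ["TNFR1"],
    "TNFR1": ["TRADD"],
}
nodes = {
    "EGF":   {"description": "Ligand",   "pathway": "EGFR signaling"},
    "EGFR":  {"description": "Receptor", "pathway": "EGFR signaling"},
    "RAS":   {"description": "Protein",  "pathway": "EGFR signaling"},
    "RAF":   {"description": "Protein",  "pathway": "EGFR signaling"},
    "TNF":   {"description": "Ligand",   "pathway": "TNF signaling"},
    "TNFR1": {"description": "Receptor", "pathway": "TNF signaling"},
    "TRADD": {"description": "Protein",  "pathway": "TNF signaling"},
}

def neighborhood(genes, order=1):
    """Genes of interest plus their outgoing neighbors up to `order` hops."""
    seen = set(genes)
    frontier = set(genes)
    for _ in range(order):
        frontier = {nb for g in frontier for nb in edges.get(g, [])} - seen
        seen |= frontier
    return seen

goi = {"EGF"}
sub = neighborhood(goi, order=1)
# Pathways activated via receptors in the neighborhood (the option-2 query)
receptor_pathways = {nodes[n]["pathway"] for n in sub
                     if nodes[n]["description"] == "Receptor"}
```

In igraph terms, `neighborhood` plays the role of `ego()`/`make_ego_graph()` with order 1, and the final set comprehension corresponds to the `V(g)$pathway[grepl('Receptor', V(g)$description)]` filter in the question.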
http://www.chegg.com/homework-help/questions-and-answers/can-you-show-divergence-convergence-for-the-series-n2-n3-n12-n-from-1-to-infinity-using-th-q3456586
## calculus!!!!! can you show divergence/convergence for the series (n^2)/((n^3-n+1)^2), n from 1 to infinity using the divergence test? • Anonymous commented NEVER MIND!!! figured it out. you cannot use the divergence test, have to use limit comparison test.
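The commenter's conclusion can be made concrete. The divergence test is inconclusive here because the terms tend to zero; a sketch (not from the original thread) of the limit comparison argument:

```latex
% Compare a_n = n^2/(n^3 - n + 1)^2 with b_n = 1/n^4:
\[
  \lim_{n\to\infty} \frac{a_n}{b_n}
  = \lim_{n\to\infty} \frac{n^2 \cdot n^4}{(n^3 - n + 1)^2}
  = \lim_{n\to\infty} \left( \frac{n^3}{n^3 - n + 1} \right)^{2}
  = 1 .
\]
% Since the limit is finite and positive, \sum a_n converges together with
% \sum 1/n^4, a convergent p-series (p = 4 > 1).
```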
https://math.columbian.gwu.edu/logic-seminar-definability-computability-relativization-and-forcing
# Logic Seminar-Definability, computability, relativization and forcing Logic Seminar Time: Thursday, October 7, 4:30-5:30PM Place: zoom Speaker:  Philip White, GWU Title: Definability, computability, relativization and forcing Abstract: If a property has a characterizing feature definable using computable formulas in our computable base structure, then it follows that this property holds in every computable copy of the base structure. Conversely, if we have the property in every computable copy, it is not always the case that we can define the characterizing feature in our base structure. However, Ash and Nerode proved that under certain additional algorithmic conditions we have this equivalence. If we ease our focus and allow for arbitrary copies of our base structure instead of only computable copies and “relativize the property”, it turns out that a similar argument goes through, only now the effectiveness conditions can be dropped. We will look at this relativized version. The technique used for it will be forcing. Just to mention: Forcing is the same renowned technique developed by Paul Cohen to prove the independence of the continuum hypothesis and the axiom of choice.
https://hypothes.is/users/Volis
11 Matching Annotations 1. Nov 2018 2. wals.info wals.info 1. Long and short variants of the same vowel are always counted once but hindi अ is different from आ #### URL 3. wals.info wals.info 1. For example, speakers of English generally consider that words such as pip, tit, kick, bib, did, gig begin and end with the same consonant even though there are some easily recognizable differences between the sounds at the beginning and those at the end. To explain take pip, 1. Begins and ends with p 2. Begins with |pi| sound, ends with |p| sound #### URL 4. Jul 2018 5. wsimag.com wsimag.com 1. For instance, what would the sound of an image be, or what would sound look like, should the data be processed in another fashion? referring to databending #### URL 6. thevinylfactory.com thevinylfactory.com 1. From 1996 to 2002 four of Ikeda’s early records found a rightful home with Touch, a label who have shown themselves as being similarly committed to exploring sound and music down to their fundaments. Touch is still active. How do they sustain? 2. use of space and indeed some of his future ideas can be found within his work with Dumb Type and their unique approach to theatre and performance Finding peers like you is very important. How did he run into them? #### URL 7. May 2015 8. www.dpmms.cam.ac.uk www.dpmms.cam.ac.uk 1. x = -a +/- (a2 -b)1/2 This page looks very interesting but this kind of math formatting has forced many people to take their lives. $$x = -a \pm \sqrt{a^2 - b}$$ #### URL 9. oxfordstudent.com oxfordstudent.com 1. As one second year mathematician admitted, “This one time, I’d been doing maths for several hours in a café, came out, almost got hit by a car. It can be easy to forget to switch back into normal mode.” If this were the first line. I would have stopped reading. 2. However, University Lecturer in Experimental Psychology, Dr Jennifer Lau gives a sobering verdict on the possible connection between genius and madness that these works play upon. 
“There seems to be a causal link between genius and the subsequent mental illness (being assumed), and I don’t think there really is that established link.” Explaining Math with Psychology?! 3. an infamous number conundrum One of the central problems in these articles is the glaring lack of math. These personal choices and the environment they have set up for themselves tells us rather only the minuscule part of the story! 4. The world wondered whether this self-imposed isolation was idiosyncrasy, or something more serious. This is baffling! An adult can choose for himself. There is nothing remotely wrong in living with your family! 5. eccentricities of Grigori Perelman What eccentricities? He just lives with his parents and did not accept either the Fields' Medal or the Claymath's Millennium Prize.
http://tex.stackexchange.com/questions/56051/problem-with-options-in-declaresiunit
# Problem with options in \DeclareSIUnit

I also wanted to know "How to use siunitx to write 100 MBps?", so \sisetup{per-mode=symbol,per-symbol = p} solved my question and I decided to use it as an option in \DeclareSIUnit[per-mode=symbol,per-symbol=p]{\Bps}{\byte\per\second}. This way I can fix the format of \per for every unit and get km/s and MBps in the same text without writing the format in each \SI command. Maybe this mixture is not correct, but this is not my question.

But something is missing, because when I write \SI{10}{\Bps} I get 10 Bps, but with \SI{10}{\mega\Bps} the result is 10 MB/s. So, what's wrong? Here is an MWE and its results:

```latex
\documentclass[a4paper,12pt]{article}
\usepackage{siunitx}% needed for \sisetup, \DeclareSIUnit and \SI
\sisetup{per-mode=symbol}
\DeclareSIUnit[per-mode=symbol,per-symbol=p]{\Bps}{\byte\per\second}
\begin{document}
\SI[per-mode=symbol,per-symbol=p]{1}{\mega\byte\per\second}
\SI{2}{\Bps}
\SI{3}{\mega\Bps}
\SI[per-mode=symbol,per-symbol=p]{4}{\mega\Bps}
\end{document}
```

Answer: This is the design behaviour. As described in the manual, options set for a unit apply to that unit only, which means that they do not apply to combinations. From the point of view of siunitx, \mega\Bps and \Bps are distinct for the application of options. At one time there was a strong implementation reason for this: in v1 of siunitx, separate parsing approaches were used for different forms of output, and so changing half-way through was not possible. This limitation does not apply to the current code. The other reason for this restriction is conceptual. The approach requested in the question is not unreasonable, but other combinations could well be. For example, if you start combining different units with different options, then the 'correct' result is hard to be sure of. So the options are checked only for the 'top level' input to \SI, and only if there is exactly one unit macro and no other input.
At the same time, the 'shortcut' approach to units was only ever intended to be used at 'one level', for example

```latex
\DeclareSIUnit[per-mode=symbol,per-symbol=p]{\Bps}{\byte\per\second}
\DeclareSIUnit[per-mode=symbol,per-symbol=p]{\MBps}{\mega\byte\per\second}
```

It does work in more complex ways, but the options are only ever read at the 'top level'.
https://study.com/academy/answer/suppose-vectors-vec-u-and-vec-v-are-given-by-mathbb-r-3-where-vec-u-langle-1-1-1-rangle-vec-v-langle-4-5-3-rangle-a-compute-vec-u-vec-v-b-find-a-unit-vector-parallel-to-vec-u-vec-v.html
Suppose vectors $\vec u$ and $\vec v$ are given in $\mathbb{R}^3$, where $\vec u = \langle 1,1,1\rangle$, $\vec v = \langle 4,-5,-3 \rangle$.

Question:

a. Compute $\|\vec u - \vec v\|$

b. Find a unit vector parallel to $\vec u - \vec v$

Unit Direction: To determine a unit vector in the same direction as a given one, we divide all its components by the norm of the vector; in this way, the resulting vector has the same direction and norm equal to 1.

a. Given the vectors $\vec u = \langle 1,1,1\rangle$, $\vec v = \langle 4,-5,-3 \rangle$, first we need to compute the vector $\vec u - \vec v$, that is: $\vec u - \vec v = \langle -3, 6, 4 \rangle$. Now, we can calculate the norm of the vector: $\|\vec u - \vec v\| = \|\langle -3, 6, 4 \rangle\| = \sqrt{(-3)^2 + 6^2 + 4^2} = \sqrt{61}$.

b. Given the vector $\vec u - \vec v = \langle -3, 6, 4 \rangle$, using the norm, we can write the unit vector as: $\left\langle -\dfrac{3}{\sqrt{61}}, \dfrac{6}{\sqrt{61}}, \dfrac{4}{\sqrt{61}} \right\rangle$.
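The arithmetic is easy to sanity-check numerically. A quick sketch in plain Python (not part of the original solution):

```python
import math

u = (1, 1, 1)
v = (4, -5, -3)

# u - v, componentwise
d = tuple(a - b for a, b in zip(u, v))

# Euclidean norm ||u - v||
norm = math.sqrt(sum(c * c for c in d))

# Unit vector parallel to u - v: divide each component by the norm
unit = tuple(c / norm for c in d)
```

This reproduces $\vec u - \vec v = \langle -3, 6, 4 \rangle$, $\|\vec u - \vec v\| = \sqrt{61}$, and a unit vector of norm 1.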
http://www.mathworks.com/help/stats/raylpdf.html?requestedDomain=www.mathworks.com&nocookie=true
# raylpdf

Rayleigh probability density function

## Syntax

`Y = raylpdf(X,B)`

## Description

`Y = raylpdf(X,B)` computes the Rayleigh pdf at each of the values in `X` using the corresponding scale parameter, `B`. `X` and `B` can be vectors, matrices, or multidimensional arrays that all have the same size, which is also the size of `Y`. A scalar input for `X` or `B` is expanded to a constant array with the same dimensions as the other input.

The Rayleigh pdf is

$y = f(x|b) = \frac{x}{b^2} \, e^{-x^2/(2b^2)}$

## Examples

Compute the pdf of a Rayleigh distribution with parameter `B = 0.5`.

```matlab
x = [0:0.01:2];
p = raylpdf(x,0.5);
```

Plot the pdf.

```matlab
figure;
plot(x,p)
```
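The formula translates directly into code. The following is an illustrative re-implementation in Python; the function name and grid loop are mine, not part of the MATLAB toolbox:

```python
import math

def rayleigh_pdf(x, b):
    """Rayleigh pdf: y = (x / b^2) * exp(-x^2 / (2 b^2))."""
    return (x / b**2) * math.exp(-x**2 / (2 * b**2))

# Evaluate on a grid, mirroring the MATLAB example with B = 0.5
xs = [i * 0.01 for i in range(201)]          # 0, 0.01, ..., 2.0
ps = [rayleigh_pdf(x, 0.5) for x in xs]
```

The pdf peaks at x = b (here 0.5), which is a quick way to check the implementation against the formula.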
https://www.arunma.com/2012/08/30/filter-lines-in-log-file-with-error/
August 31, 2012 · tips

# Filter lines in log file with ERROR

A couple of days ago, a friend of mine was interested to know how to filter a log file (in my case a log4j log file) for ERRORs alone. An hour of tricks sharing followed and here is the gist of the conversation that you will be interested in.

Find me wherever I am

Listing all the lines in the log file which have occurrences of ERROR is as simple as executing the following command

```
grep "ERROR" <YOUR LOG FILE NAME>
```

When there is a beginning

And if you are interested in the lines which start with ERROR, then it just involves the usage of the regular expression for "begin"

```
grep "^ERROR" <YOUR LOG FILE NAME>
```

...there's an end

A similar trick applies if you are interested in searching lines which have occurrences of "Exception" at the end of the line (especially in the case of Java stack traces); then you would use

```
grep "Exception$" <YOUR LOG FILE NAME>
```

Don't be so sensitive

Good news is that you need not be case sensitive to grep with regard to the search string. You could just use

```
grep -i "exception$" <YOUR LOG FILE NAME>
```

(i as in insensitive)

Second thoughts

That said, imagine you are printing the username or a unique trace id of your request in your logs and you would want to filter your log based on that. And suppose this username/trace id is located as the 3rd word in every line. To print the lines which have the third word as, say, "12345" and your log file delimiter is a space (which is generally the case unless you are logging as xml or json), then you could use

```
cut -d " " -f3 <YOUR LOG FILE NAME> | grep "12345"
```

where

-d " " says that your delimiter is a space

-f3 says that you are interested in the 3rd delimiter-tokenized string

Moving target

What if you wanted to filter lines on a live log file? Just add some "tail" to it and you are good to go

```
tail -f <YOUR LOG FILE NAME> | grep "ERROR"
```

Too bad, this does not take care of rolling files.
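The third-field trick can be tried on a throwaway file. Note that `cut | grep` prints only the extracted field, not the whole line; an `awk` field comparison (my addition, not from the post) is shown as an alternative that keeps the full line:

```shell
# Toy log: date level trace-id message (single-space delimited)
printf '%s\n' \
  '2012-08-30 ERROR 12345 boom' \
  '2012-08-30 INFO 67890 ok' \
  '2012-08-30 ERROR 12345 retry' > sample.log

# The post's approach: extract field 3, then grep it (prints only "12345")
cut -d " " -f3 sample.log | grep "12345"

# Alternative that keeps the whole matching line
awk '$3 == "12345"' sample.log
```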
Less is more

Your production box threw some exception and you are expected to check it out. So, you know that the exception logs exist somewhere towards the end of the file. How do you search for the exception? Open the file

```
less <YOUR LOG FILE NAME>
```

(Don't get me wrong, I love the vi editor but vi tries to load the entire file in its buffer, and for large files like our typical log files, performance suffers)

So, yeah, less <YOUR FILENAME> will open the file and keep you at the first page. If you need to go to the last page, press Shift + G

Now, you are on the last line. Sift through the last pages by pressing b (as in back) and f (as in forward)

Elementary, Dr. Watson

So, if you would like to search for "Exception", just type /Exception

You would get a "Pattern not found" error because the default search direction is forward. To search backward, press Shift + N

Keep pressing Shift + N to repeat the search upwards

If you went past the search string upwards and would like to continue the search downwards, then just press n and keep pressing n to repeat the search downwards

Case-insensitive less search

If you would like to make your search keywords insensitive, just type -i (remember, -i toggles the case insensitivity)

Better to be a head of a dog

tail -f <YOUR LOG FILE> is awesome, but if you are analysing the log files inside the less buffer and would like to do a tail of your file, all you need to do is press Shift + F

Well, that's all it is for now. I am sure there are many more awesome tricks that you can do in a shell which I don't know or didn't mention here. Please share your favourite trick so that I can do some serious show off to my friends.

PS: There is a way to open the file at the last line with less instead of opening the file at the first page and pressing Shift+G. Can't remember it and can't find it too.
https://ftp.aimsciences.org/article/doi/10.3934/jimo.2016.12.1333
# American Institute of Mathematical Sciences October  2016, 12(4): 1333-1347. doi: 10.3934/jimo.2016.12.1333 ## An inventory model for items with imperfect quality and quantity discounts under adjusted screening rate and earned interest 1 Department of Marketing and Supply Chain Management, Overseas Chinese University, Taichung 40721, Taiwan 2 Department of Business Administration, National Chung Cheng University, Chia-Yi 621, Taiwan 3 Department of Industrial Engineering and Management, Overseas Chinese University, Taichung 40721, Taiwan Received  October 2014 Revised  October 2015 Published  January 2016 In this paper we develop a new inventory model for items with imperfect quality and quantity discounts under adjusted screening rate and earned interest. Three highlights are included in this new model: (1) interest is earned by depositing the sales revenue from the perfect and imperfect items into an interest-bearing account (2) the screening rate may not be given but is a decision variable (3) the supplier offers quantity discounts to trigger the retailer into ordering greater lot sizes. This scenario has not been discussed in previous EOQ models with imperfect quality. Our model could determine two decision variables, order quantity and screening rate, to maximize retailer profit. The expected total profit function is derived with two special cases explored to validate the proposed model. An algorithm is developed to help the manager determine the optimal order quantity and screening rate. A numerical example is given to illustrate the proposed model and algorithm. Sensitivity analyses are carried out to investigate the model parameters effects on the optimal solution. Managerial insights are also included. Citation: Tien-Yu Lin, Ming-Te Chen, Kuo-Lung Hou. An inventory model for items with imperfect quality and quantity discounts under adjusted screening rate and earned interest. Journal of Industrial & Management Optimization, 2016, 12 (4) : 1333-1347. 
doi: 10.3934/jimo.2016.12.1333
https://brilliant.org/problems/a-combinatorics-problem-by-sambhrant-sachan/
An algebra problem by Sambhrant Sachan

Algebra Level pending

$\dfrac{1}{\sqrt{4x+1}} \left[ \dfrac{\left(1+\sqrt{4x+1}\right)^n}{2^n} - \dfrac{\left(1-\sqrt{4x+1}\right)^n}{2^n} \right] = a_0 + a_1 x + a_2 x^2 + \cdots + a_5 x^5$

Find the value of $n$ that satisfies the equation above.
http://chalkdustmagazine.com/blog/wonders-mathematical-crochet/
# The wonders of mathematical crochet

Hyperbolic surfaces, Klein bottles and more

Several months ago, I learned to crochet, and I've been hooked ever since. And as a mathematician, one of the most fantastic things about it is that it allows you to make mathematical objects that are enormously difficult to make, or even visualise, in any other way.

For those of you unfamiliar with crochet, all you need is a ball of wool and a crochet hook. You then use this hook to pull loops of wool through other loops of wool, and from this you very quickly make a stretchy, slightly holey fabric. You can either use rows of stitches to make a rectangle, or you can begin in the middle of your work and make rounds of stitches that meet at either end.

How to make a single crochet stitch

If you're crocheting with rounds of stitches, the most obvious shape that you can make is a circle. But how do we construct a perfect circle? Each single crochet stitch can be thought of as a square, albeit one that can be easily squashed or stretched. (I'm using US terminology here.)

For the first round of stitches, we want to make a circle with radius $1$. Therefore, the circumference will be $2\pi$. In this case, we'll take $\pi \approx 3$ for simplicity, and so in the first round we'll have $6$ stitches. When we add the second round, the radius of the circle is now $2$, and so we need $4\pi$ stitches in this round, which, using our crude approximation of $\pi$, is $12$ stitches. For the next round, the circle has radius $3$ and so the round has $18$ stitches. So in each round we add $2\pi$, or $6$, more stitches—the number of stitches in each round increases linearly.

A crochet circle, with linearly increasing stitches in each round.

If you try this and make a mistake in the number of stitches in a round, then you might notice that the circle stops being flat and starts to curve.
This fact is actually what makes crochet such a powerful tool for creating mathematical objects—by changing the number of stitches we change the curvature of the surface. The circle is flat, and therefore has zero curvature. So what happens if we increase the number of stitches in a slower fashion? Each new round won't stretch around the previous round, and hence we will create a surface with positive curvature.

The classic example of a surface with positive curvature is a sphere—at every point on the sphere the surface curves away in all directions. To crochet a sphere, the number of stitches in each round increases like $\sin \theta$, where $\theta$ is the angle around the sphere. You can generate patterns for mathematically ideal crochet spheres here.

A perfect crochet sphere – you can find the pattern here

On the other hand, what if we increase the number of stitches in each round faster than linearly? For example, what if we double the number of stitches in each round so that the length of the rounds increases exponentially? Each round is much larger than the one before it, and so the surface has to curve up and down. By continuing this pattern, you create a hyperbolic pseudosphere, which is a surface of constant negative curvature. This means that if you look at any point on the surface, it will look like a Pringle—curving upwards in one direction and downwards in the other.

Prior to 1997, it was extremely difficult to build a robust model of a hyperbolic pseudosphere, but then mathematician and crocheter Daina Taimina realised that you could easily construct one using crochet. The idea was hugely popular and led to a book, as well as a huge project creating coral reefs using hyperbolic crochet.

Crocheted hyperbolic pseudosphere by Panda Eskimo

The following graph shows how the number of stitches in each round should increase when making a circle, sphere or hyperbolic surface.
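The growth rules for the flat circle and the hyperbolic surface can be sketched in a few lines of Python. This is an illustrative sketch, not a real pattern generator; the function names are mine, and the flat-circle rule uses the text's $\pi \approx 3$ shortcut (6 extra stitches per round).

```python
def circle_stitches(rounds):
    # Round n sits at radius n, so its circumference is 2*pi*n ~ 6n stitches:
    # linear growth gives a flat (zero-curvature) disc.
    return [6 * n for n in range(1, rounds + 1)]

def hyperbolic_stitches(rounds, start=6):
    # Doubling every round gives exponential growth, which forces the
    # fabric to ruffle: constant negative curvature.
    counts = [start]
    for _ in range(rounds - 1):
        counts.append(counts[-1] * 2)
    return counts

print(circle_stitches(5))      # [6, 12, 18, 24, 30]
print(hyperbolic_stitches(5))  # [6, 12, 24, 48, 96]
```

The exponential column is what makes the pseudosphere so hard to build in rigid materials: by round five you already need three times as many stitches as the flat circle.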
The existence of hyperbolic crochet is perhaps the best known example of mathematical crochet, but it only touches the surface of what crochet can represent. For example, an easy crochet project for beginners is a rectangular scarf. If you make a twist in the scarf and sew the ends together, you've made yourself a Möbius strip, which is a surface with only one side. (Try making one with some paper and drawing a line along the centre.) Apparently Alan Turing liked to knit Möbius strip scarves.

Two crochet Möbius strips with twists going in opposite directions. Zip them together to get a Klein bottle!

A particularly interesting fact about Möbius strips is that if you glue the edges of two strips together, then you create a Klein bottle. However, you can't demonstrate this physically with paper, because it's not stretchy or flexible enough. It is, though, a very easy project using crochet—I made one recently in an afternoon or two. An alternative way to think of a Klein bottle is to glue the ends of a tube together in opposite directions. I used this approach to crochet a Klein bottle hat, using the crochet lathe to generate a pattern. This clever tool gives you a pattern for any surface of revolution.

Two crocheted Klein bottles. The right-hand one is made by zipping together two Möbius strips

Another classic crochet project is granny squares. With lots of these squares, you can then construct polyominoes, and make a mathematical/tetris blanket like this one.

Polyominoes blanket by Tina Klein-Walsh

A particularly impressive mathematical crochet feat is this model of the Lorenz manifold, created by researchers at the University of Auckland. The complexity and curvature of this surface means it is very difficult to create physically, but once again the flexibility of crochet came to the rescue.

Crochet model of Lorenz manifold

And one final example of the possibilities of combining mathematics and crochet—hexaflexagons.
Woolly Thoughts, who create beautiful mathematical knitting, have written a pattern for a crocheted hexaflexagon cushion.

I hope I've convinced you to pick up a hook and get crocheting! While some of these patterns are very complex, a hyperbolic pseudosphere is very simple, and was actually the first thing I crocheted. (Many thanks to Matthew Scroggs for teaching me.) What other mathematical objects can you create with crochet?

Cover image by Cheryl on Flickr.

Anna is a PhD student at UCL working on mathematical models of bioreactors.
http://gmatclub.com/forum/admission-consultant-vs-campus-visits-where-to-spend-135065.html?fl=similar
# Admission Consultant vs. Campus Visits (Where to spend $$$?)

Intern | Joined: 28 Jun 2012 | Posts: 2
Posted: 28 Jun 2012, 21:06

Hi all, I'm contemplating whether to hire an admissions consultant (e.g. Veritas Prep, Stacy Blackman or MBA Mission). If you had to choose, would you rather invest money in one of those, or spend it on quite a few (say, 5-8) campus visits across the country? Here's my thinking: a consultant will probably cost around $5-7k, and I may well get my money's worth! But I also wonder if I could simply rely on the wisdom of others and use, say, half that amount toward visits where I can build up a credible roster of on-campus experiences to draw upon in my essays (vs. essays that, while professionally edited, draw upon tidbits sourced online, via emails with students, etc.).

Manager | Joined: 30 Mar 2010 | Posts: 84 | GMAT 1: 730 Q48 V42
Posted: 02 Jul 2012, 02:56

Campus visits without a doubt.
Nothing will help you understand a school as well as a campus visit will. Apart from the standard info provided during a campus visit, which in any case is useful, the real gems will come out of your interactions with current students. These finer points will really help you craft your applications better. Besides, the feel of the school that you get from a campus visit will help you develop a lot of clarity as to whether or not you would like to join that school if offered admission. Having this clarity before applying is much better than getting accepted to a school and then facing the dilemma of whether to attend or not. Working with an admissions consultant will not help you with any of these things. If you're confident of the content of your essays and are only apprehensive about the grammatical quality, you could easily hire an essay editing service that will cost you far less than a consultant would.

Manager | Joined: 18 Nov 2011 | Posts: 154 | Schools: Johnson '15 (M) | GMAT 1: 730 Q47 V44
Posted: 05 Jul 2012, 11:30

This is piling on, but I would definitely recommend campus visits. I'll add some value by recommending that you thoroughly research the school and come prepared with thoughtful questions for the tour. Talk to students, observe a class, pick up a copy of the school newspaper, and take plenty of notes. This is all great material for your essays. I went on several tours and very few people actually do any of this stuff. To the extent there were questions other than mine, people asked about stupid stuff like grade nondisclosure. You may also come away with a totally different impression of a school than you had before. I crossed a very highly ranked school off my list after visiting because I just really didn't like anything I saw there. Finally, my personal opinion would be to spend your money on a trip to Vegas before giving it to an admissions consultant.
Intern | Joined: 05 Jul 2012 | Posts: 1
Posted: 08 Jul 2012, 08:32

alphabear1 wrote:
> Hello, I did both school visits and the consultant. The school visits pointed out things I didn't like about schools more than things I did like. But that's pretty useless for the application, where you want to talk about what you do like; it did help me re-evaluate my order of preference, though. My no. 1 and no. 2 still stayed in place, however. The consultant was very useful. (I used Sandy.) He basically told me what it is about MY specific profile that my no. 1 school would like, which is what made all the difference in my apps. Frankly, I found that more useful than visiting the school and trying to extrapolate what they like based on what I saw. Also worth noting, you can visit the schools after an admit and then decide. Personally, I felt that with the web, there are so many resources available that school visits aren't always necessary.

I agree with you that school visits won't make or break your candidacy, but I think you're only speaking of visits from the perspective of what the adcom thinks. Personally, I didn't use school visits so I could impress an adcom or have something to write about in my essays. The visits were for me to see if I liked the school and wanted to actually be there. I know people who got into schools they never visited and hated it when they went to admit weekends. In some cases it was their only option. I'd hate to have to choose between going to a school I dislike and reapplying to other places next year. I don't know about you, but I didn't find the process of applying to schools to be very pleasant. In fact I thought it sucked and wouldn't wish for anyone to have to repeat it. The web simply does not compare to an in-person visit. I loved Stern via the web and the alums I spoke to. In person, not so much... actually not even a little, not even at all.
Knowing that up front stopped me from applying to a very good school that wasn't a good school for me. You mentioned gleaning school info from posts like the "Impressions of 16 US Bschools." Now, even though I didn't like Stern, I met other applicants who loved it upon visiting. If someone simply based their opinions off of my feelings, they could be missing out. There really isn't a substitute for school visits (when feasible). I do think that a consultant can add tremendous value for an applicant, especially if that person doesn't have a personal network to help guide them through the process. Actually, I think that school visits and consultants tackle two completely different yet equally important pieces of the admissions puzzle. Visits affect the internal part of the process while consultants help with the external/presentation piece. I think it's up to each individual to determine where they need the most help.
https://math.stackexchange.com/questions/2138649/given-p-%E2%87%92-q-use-the-fitch-system-to-prove-%C2%ACp-%E2%88%A8-q
# Given $p \Rightarrow q$, use the Fitch System to prove $\lnot p \lor q$.

Disclaimer: I'm a complete newbie to the site, and I haven't fully figured out how to format properly. I do not have enough reputation to use the meta sandbox thread, so in addition to asking my question, I may use some space below to test some stuff out... edit: I now have enough reputation to use the meta forums.

Here's what I've done so far, with regards to the question:

1. $p \Rightarrow q$ (premise)
2. | $\lnot q$ (assumption)
3. | | $p$ (assumption)
4. | | $\lnot q$ (reiteration: 2)
5. | $p \Rightarrow \lnot q$ (implication introduction: 3, 4)
6. | $\lnot p$ (negation introduction: 1, 5)

I thought of "Or-ing" $\lnot p$ with $q$ (Or introduction: 6), but that would not be a complete proof, because I would still be within a subproof. What steps am I missing? Or, perhaps, was I going in the wrong direction in the first place?

Also, please do not edit the format of my question. Instead, I would appreciate formatting suggestions, especially on the following:

• how do I format a tab?
• how do I format logic symbols like "and"/"or", without using copy+paste?
• how do I format Fitch-style logic problems? I don't want to use "|" as a substitute for the solid line going down.

• For "and": \land; for "or": \lor; for "not": \lnot. – Mauro ALLEGRANZA Feb 10 '17 at 21:07
• @MauroALLEGRANZA thank you. – Andrew Guo Feb 10 '17 at 21:11

A bit long winded, due to the logic software's inability to do a contradiction intro:

1. p => q (Premise)
2. ~(~p | q) (Assumption)
3. ~p (Assumption)
4. ~p | q (Or Introduction: 3)
5. ~p => ~p | q (Implication Introduction: 3, 4)
6. ~p (Assumption)
7. ~(~p | q) (Reiteration: 2)
8. ~p => ~(~p | q) (Implication Introduction: 6, 7)
9. ~~p (Negation Introduction: 5, 8)
10. p (Negation Elimination: 9)
11. q (Implication Elimination: 1, 10)
12. ~p | q (Or Introduction: 11)
13. ~(~p | q) => ~p | q (Implication Introduction: 2, 12)
14. ~(~p | q) (Assumption)
15. ~(~p | q) => ~(~p | q) (Implication Introduction: 14, 14)
16. ~~(~p | q) (Negation Introduction: 13, 15)
17. ~p | q (Negation Elimination: 16)

Bam, done.

I am not sure why you start by assuming $\neg q$ ... Instead, try a proof by contradiction: assume $\neg (\neg p \lor q)$, and derive a contradiction between that and your premise.

In general, if your goal is ever a disjunction $A \lor B$, there are basically 3 strategies:

1. If you're lucky, you already have (or can get to) $A$ or $B$ by itself ... so then you can just do $\lor \: Intro$.
2. If you have some other disjunction (e.g. $C \lor D$) to work with, then a fruitful approach may be to do a proof by cases on the $C \lor D$: do one subproof assuming $C$, and a second subproof assuming $D$. Chances are that one of them leads to $A$, and that the other one leads to $B$. And then in either case you can do $\lor \: Intro$ at the end of the subproof to get $A \lor B$, which you can then pull out using $\lor \: Elim$.
3. Finally, you can try to do a proof by contradiction, i.e. assume $\neg (A \lor B)$ and try to get a contradiction. The nice thing about this strategy is that after assuming $\neg (A \lor B)$, you can (with some work) derive $\neg A$ as well as $\neg B$, so you get some nice 'small stuff' that you can use to combine with other premises you have on your way to a contradiction.

• Would I have to make ~(~p V q) become p & ~q by any chance? FYI, this is the site where the exercise is from: logic.stanford.edu/intrologic/exercises/exercise_04_11.html – Andrew Guo Feb 10 '17 at 21:34
• @AndrewGuo You should be able to infer $p$ by itself from $\neg (\neg p \lor q)$, and the same is true for $\neg q$. And with $p$ and $p \to q$ you get $q$, so that will contradict the $\neg q$. – Bram28 Feb 10 '17 at 22:12
• The software I use for Fitch proofs cannot do a contradiction intro, so it cost me many more steps. I answered my own question with the proof, which is a bit long winded but works.
– Andrew Guo Feb 16 '17 at 23:35
• @AndrewGuo Hey, good work!! You're clearly getting better at these proofs! – Bram28 Feb 17 '17 at 1:58

# Annotation of Andrew Guo's proof

These proofs require us to work backwards from our conclusion (goal) to our premises. Our goal is to find ~p | q. We do not have access to ~p or q. We only know that p => q. Thus we have to proceed by reductio ad absurdum: we assume the opposite (the negation) of our goal, and from this assumption we try to derive a contradiction.

1. p => q (Premise)
2. ~(~p | q) (Assumption)

A contradiction will be something similar to:

~(~p | q) => p
~(~p | q) => ~p

Andrew Guo uses:

~(~p | q) => (~p | q)
~(~p | q) => ~(~p | q)

Most of Andrew's proof is concerned with deriving the lemma ~(~p | q) => (~p | q). Here is his proof of it, renumbered as a standalone subproof (line references adjusted to match; step 10 also uses the outer premise p => q):

1. ~(~p | q) (Premise)
2. ~p (Assumption)
3. ~p | q (Or Introduction: 2)
4. ~p => ~p | q (Implication Introduction: 2, 3)
5. ~p (Assumption)
6. ~(~p | q) (Reiteration: 1)
7. ~p => ~(~p | q) (Implication Introduction: 5, 6)
8. ~~p (Negation Introduction: 4, 7)
9. p (Negation Elimination: 8)
10. q (Implication Elimination: p => q, 9)
11. ~p | q (Or Introduction: 10)

These proofs have a recursive nature. Inside the subproof, Andrew uses another instance of proof by contradiction. He starts by assuming ~p. He proceeds to show that this results in a contradiction:

~p => ~p | q
~p => ~(~p | q)

If ~p results in a contradiction, then by the law of the excluded middle p must hold. Andrew uses p to get q via our (almost forgotten) premise p => q. From q he can get ~p | q. After this fractal-like deep dive we can finally break out of our subproof with a negation introduction.
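Independently of the Fitch derivation, the sequent itself can be sanity-checked semantically: $p \Rightarrow q$ and $\lnot p \lor q$ take the same truth value under every valuation, so the entailment is classically valid. A brute-force Python sketch of that check (the helper name `implies` is mine):

```python
from itertools import product

def implies(p, q):
    # Material implication: true unless p holds and q fails.
    return q if p else True

# Compare p => q with ~p | q on all four valuations of (p, q).
equivalent = all(
    implies(p, q) == ((not p) or q)
    for p, q in product([False, True], repeat=2)
)
print(equivalent)  # True
```

This is only a truth-table argument, of course; it confirms the goal is provable but says nothing about which Fitch rules get you there.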
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1075.35044
Zbl 1075.35044
Zhou, Yong
On regularity criteria in terms of pressure for the Navier-Stokes equations in $\Bbb{R}^3$. (English)
[J] Proc. Am. Math. Soc. 134, No. 1, 149-156 (2006). ISSN 0002-9939; ISSN 1088-6826/e

Summary: We establish a Serrin-type regularity criterion on the gradient of pressure for the weak solutions to the Navier-Stokes equations in $\Bbb{R}^3$. It is proved that if the gradient of pressure belongs to $L^{\alpha,\gamma}$ with $2/\alpha + 3/\gamma \leq 3$, $1 \leq \gamma \leq \infty$, then the weak solution is actually regular. Moreover, we give a much simpler proof of the regularity criterion on the pressure, which was shown recently by L. C. Berselli and G. P. Galdi [Proc. Am. Math. Soc. 130, No. 12, 3585-3595 (2002; Zbl 1075.35031)].

MSC 2000:
*35Q30 Stokes and Navier-Stokes equations
35B65 Smoothness of solutions of PDE
35B45 A priori estimates
76D05 Navier-Stokes equations (fluid dynamics)

Keywords: regularity criterion; gradient of pressure; weak solutions; Navier-Stokes equations
Citations: Zbl 1075.35031
Cited in: Zbl 1182.35179
https://proofwiki.org/wiki/Definition:Inverse_Cosine/Complex/Arccosine
# Definition:Inverse Cosine/Complex/Arccosine ## Definition The principal branch of the complex inverse cosine function is defined as: $\map \arccos z = \dfrac 1 i \, \map \Ln {z + \sqrt {z^2 - 1} }$ where: $\Ln$ denotes the principal branch of the complex natural logarithm $\sqrt {z^2 - 1}$ denotes the principal square root of $z^2 - 1$.
https://ftp.aimsciences.org/article/doi/10.3934/cpaa.2006.5.261
# American Institute of Mathematical Sciences

June 2006, 5(2): 261-276. doi: 10.3934/cpaa.2006.5.261

## The numerical solution of weakly singular Volterra functional integro-differential equations with variable delays

1 Dept. of Mathematics and Statistics, Memorial University of Newfoundland, St. John's, NL, Canada A1C 5S7, Canada

Received February 2005; Revised May 2005; Published March 2006

We analyze the attainable order of convergence of collocation solutions for linear and nonlinear Volterra functional integro-differential equations of neutral type containing weakly singular kernels and nonvanishing delays. The discretization of the initial-value problem is based on a reformulation as a sequence of ODEs with nonsmooth solutions. The paper concludes with a brief description of possible alternative numerical approaches for solving various classes of such functional integro-differential equations.

Citation: Hermann Brunner. The numerical solution of weakly singular Volterra functional integro-differential equations with variable delays. Communications on Pure and Applied Analysis, 2006, 5 (2) : 261-276. doi: 10.3934/cpaa.2006.5.261
2022-05-28 11:14:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5087983012199402, "perplexity": 5651.522361429938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016373.86/warc/CC-MAIN-20220528093113-20220528123113-00135.warc.gz"}
https://www.physicsforums.com/threads/angular-velocity-acceleration-problem.436556/
# Angular velocity/acceleration problem

Suppose discs A and B are spinning against each other without slipping or translating. What can be said about the motion of disc A in terms of B? (Select all that apply; if none apply, leave the boxes blank.) Disk A has a diameter of 3D while disk B has a diameter of D.

a. Angular velocity magnitude of A will be 3 times greater than B
b. Angular velocity magnitude of A will be equal to B
c. Angular velocity magnitude of B will be 3 times greater than A
d. Angular acceleration magnitude of A will be 3 times greater than B
e. Angular acceleration magnitude of A will be equal to B
f. Angular acceleration magnitude of B will be 3 times greater than A

Relevant equations: $L = I\omega$ (angular momentum = moment of inertia × angular velocity) and $\tau = I\alpha$ (torque = moment of inertia × angular acceleration).

Since the discs turn against each other without slipping, the tangential speeds at the point of contact must be equal, so $\omega_A r_A = \omega_B r_B$. With $r_A = 3 r_B$, disk A should have 1/3 the angular speed of disk B. Differentiating the same relation (the radii are constant) gives the same factor of 3 between the angular accelerations of A and B. Can someone verify it?
$$3\omega_A = \omega_B , \ \ \ 3\alpha_A = \alpha_B$$
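The no-slip condition is easy to check numerically. This is a minimal sketch; the variable names and sample values are mine, not from the problem statement:

```python
# No slip at the contact point: tangential speeds are equal there,
# so omega_A * r_A = omega_B * r_B at every instant.
D = 1.0
r_A, r_B = 3 * D / 2, D / 2     # diameters 3D and D

omega_A = 2.0                   # pick any angular speed for A
omega_B = omega_A * r_A / r_B   # forced by the no-slip condition
assert omega_B == 3 * omega_A   # answer (c)

# The radii are constant, so differentiating the constraint gives
# alpha_A * r_A = alpha_B * r_B, and the same factor of 3: answer (f).
alpha_A = 0.5
alpha_B = alpha_A * r_A / r_B
assert alpha_B == 3 * alpha_A
```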
2021-06-19 10:43:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7345421314239502, "perplexity": 458.4965367413975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487647232.60/warc/CC-MAIN-20210619081502-20210619111502-00329.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition/chapter-7-trigonometric-identities-and-equations-7-5-inverse-circular-functions-7-5-exercises-page-708/9
Precalculus (6th Edition)

a. $(-\infty, \infty)$
b. $(-\displaystyle \frac{\pi}{2}, \displaystyle \frac{\pi}{2})$
c. increasing
d. no

See Figure $20$ on p. $702$ (or the table on page $703$). In order to have an inverse, the domain of $\tan x$ is restricted to $(-\displaystyle \frac{\pi}{2}, \displaystyle \frac{\pi}{2})$. Then $y=\tan^{-1} x$ means $y$ is the number in $(-\displaystyle \frac{\pi}{2}, \displaystyle \frac{\pi}{2})$ for which $\tan y=x$.

(a) and (b) Domain: $(-\infty, \infty)$; Range: $(-\displaystyle \frac{\pi}{2}, \displaystyle \frac{\pi}{2})$; Quadrants (unit circle): I and IV.
(c) From Figure $20$, $\tan^{-1}x$ is increasing.
(d) See the domain: all real numbers are in it, so there is no $x$ for which $\tan^{-1}x$ is not defined.
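A quick numeric sanity check of (a)–(c), using Python's `math.atan`, which implements this restricted-branch inverse (sample points are mine):

```python
import math

xs = [-1e9, -1.0, 0.0, 1.0, 1e9]
ys = [math.atan(x) for x in xs]

# (a) defined for every real x; (b) values stay inside (-pi/2, pi/2)
assert all(-math.pi / 2 < y < math.pi / 2 for y in ys)

# (c) increasing
assert all(a < b for a, b in zip(ys, ys[1:]))

# On the restricted branch, tan undoes arctan
assert math.isclose(math.tan(math.atan(2.5)), 2.5)
```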
2019-06-17 03:12:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6822076439857483, "perplexity": 410.34358358235727}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998369.29/warc/CC-MAIN-20190617022938-20190617044938-00262.warc.gz"}
https://forum.azimuthproject.org/discussion/1837/introduction-robert-figura-aka-refurio-anachro
# Introduction: Robert Figura (aka Refurio Anachro)

edited March 2018 in Chat

Hi folks! A few years ago, one night, I decided to go with my first love, mathematics, following one of John Baez's countless invitations to form flocks to worship her. So I began blogging about maths on g+ under my newfound and mysterious alias 'Refurio Anachro'...

Some twenty years ago, I had been studying computer science, which implied courses on mathematics and, by choice, linguistics. I never finished and I never graduated. Instead, I got quickly assimilated by the industry. And that was a good thing, because those computer science courses back then were pretty much outdated, and massively overrun, and while attending might have gotten me a degree, learning new stuff (about computers) was clearly not in it for me. So I had to leave the place, and ended up turning my back on all of academia... (You shouldn't.)

Without knowing what it was, I had long been using arrows and diagrams to communicate about my work. That work was about analyzing large systems with scarce information at first, and about building extensible systems later on, minimizing the cost of making up your mind about earlier design decisions. I was proudly focusing on an aspect I liked to call 'reverse engineering'. That meant I learned how to transform code from one context to another, changing its signature, or the way it is written up, all while keeping its effects fixed. So I knew intuitively what monads were, thought about natural transformations on a daily basis, and worked with types long before I started programming in strictly typed languages. By now I have invested quite some time into understanding categories and types as they are seen by mathematicians, and I have learned to name and formally elaborate on many of the concepts I had only felt before.
Thanks to that I managed to get a hold on proof assistants, and am now employing concepts from higher type theory in my work.

Although I'm very interested in the many ways to teach categories (or any kind of maths) to people, I may not require another introduction to category theory, so maybe we'd better not regard me as a student of the current course. Another reason could be that I already have a day job and a family to take care of, which means that I might vanish in a puff of smoke at any time, busying myself with other things for a while... Until that happens I'll be trudging along, picking up what I do not know, and I'm certain there'll be things to add and places where I can help out. I am quite excited to see so many people here, and I am looking forward to having 'loads of fun'.

1. Refurio! (Or Robert!) Good to meet you again here. Much better context than G+.

2. Hi, I'm glad to see you, too, Bob! You've been one of the familiar names that ultimately triggered my impulse to ask for an account. And indeed, it's a nice place with cool folks and fun discussions! I also noticed the interesting diversion your intro post attracted.

3. Very interesting assortment of people and focuses (foci).

4. Indeed! And having intro posts about most people is also rather nice. I'm exhilarated!
2021-04-13 19:25:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6209512948989868, "perplexity": 1753.0205313075335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038074941.13/warc/CC-MAIN-20210413183055-20210413213055-00096.warc.gz"}
https://en.wiktionary.org/wiki/exponential_object
# exponential object

An exponential object generalizes its interpretation in the category ${\displaystyle \mathbf {Set} }$; namely, that of a function set or internal hom-set. The pair ${\displaystyle \left(Z^{Y},\ {\mbox{eval}}:Z^{Y}\times Y\rightarrow Z\right)}$ is the terminal object of the comma category ${\displaystyle (-\times Y)\downarrow Z}$. Therefore the exponential object is a kind of universal morphism.
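In $\mathbf{Set}$ this is easy to make concrete: the evaluation morphism is ordinary function application, and the universal property is currying. A small illustrative sketch (the function names `ev` and `curry` and the sample `g` are mine):

```python
# In Set, the exponential object Z^Y is the set of functions Y -> Z,
# and eval : Z^Y x Y -> Z is ordinary function application.
def ev(pair):
    f, y = pair
    return f(y)

# Universal property (currying): any g : X x Y -> Z factors through ev
# via its transpose curry(g) : X -> Z^Y, i.e. ev((curry(g)(x), y)) = g((x, y)).
def curry(g):
    return lambda x: (lambda y: g((x, y)))

g = lambda p: p[0] + 2 * p[1]              # a sample g with X = Y = Z = int
assert ev((curry(g)(3), 4)) == g((3, 4))   # the defining triangle commutes
```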
2017-05-23 05:07:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5580560564994812, "perplexity": 615.3166243171754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607369.90/warc/CC-MAIN-20170523045144-20170523065144-00083.warc.gz"}
http://www-h.eng.cam.ac.uk/help/tpl/textprocessing/latex2html/node30.html
## Other style files

An optional style file htmllist has been provided which produces fancier lists in the electronic version of the document, such as this one. This file defines a new LaTeX environment, htmllist, which causes a user-defined item mark to be placed at each new item of the list, and which causes the optional description to be displayed in bold letters. The mark is determined by the \htmlitemmark{} command. This command accepts either a mnemonic name for the item mark from a list of icons established at installation, or the URL of a mark not in the installation list. The \htmlitemmark command must be used inside the htmllist environment in order to be effective, but it may be used more than once to change the mark within the list. The item marks supplied with LaTeX2HTML are BlueBall, RedBall, OrangeBall, GreenBall, PinkBall, PurpleBall, WhiteBall, and YellowBall. The htmllist environment is identical to the description environment in the printed version. An example of usage is:

\begin{htmllist}
\htmlitemmark{WhiteBall}
\item[Item 1:] This will have a white ball
\item[Item 2:] This will also have a white ball
\htmlitemmark{RedBall}
\item[Item 3:] This will have a red ball
\end{htmllist}

This will produce:

Item 1: This will have a white ball
Item 2: This will also have a white ball
Item 3: This will have a red ball

There are also optional style files floatfig and wrapfig, which provide support for the floatingfigure and wrapfigure environments, respectively. These environments allow text to wrap around a figure in the printed version, but are treated exactly as an ordinary figure in the electronic version. They are described in Goossens, et al. [2].

Tim Love Thu Mar 14 11:15:46 GMT 1996
2014-04-18 23:16:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6459778547286987, "perplexity": 3321.4261951671024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
https://space.stackexchange.com/questions/33161/algorithmic-methods-or-techniques-to-find-conjunctions-in-large-ensembles-of-sta
# Algorithmic methods or techniques to find conjunctions in large ensembles of state vectors? Suppose I wanted to answer the question Will Starman/Roadster pass particularly close to any asteroids in the next few years? or try to predict satellite conjunctions around Earth (e.g. Celestrak's SOCRATES), and I had ephemerides, TLEs, or interpolatable tables of state vectors. I could propagate those in small time steps, calculate all $$N$$ positions and all $$N(N-1)/2$$ distances and search for any below a distance $$d_{conj}$$, but that might not be the most efficient way to do this. Question: What are the algorithmic methods or techniques to do this kind of search more efficiently? Assume the propagators return a six-vector (position and velocity). I need an explanation or authoritative reference, not just a name-drop. This question is distinct from Algorithmic methods or techniques to find conjunctions in high N Keplerian element ensembles? because it specifically asks about methods that operate on State vectors (either tabulated or propagated on demand) which may include n-body effects (e.g. the Sun moves, Jupiter does its thing, etc.) • I have decided to re-ask this question based on this advice. I expect it to fly this time, rather than "conjunct" (collide) with its sibling question. – uhoh Dec 29 '18 at 6:21 • If I understand correctly that you ask about the closest pair of points problem in the six-dimensional Euclidean space, then this question is general enough to fit Math.SE. Or do you specify to anything related to orbital mechanics? Feb 8 '19 at 10:17 • Why are you interested in conjunctions of velocities? I.e. why do you include the 3 dimensions w.r.t. velocity in the state vector. The original Starman/Roadster question concerns the positions only. Feb 8 '19 at 10:20 • @EverydayAstronaut a conjunction is when two object reach a similar point in 4D spacetime $(x_c, y_c, z_c, t_c)$. I haven't asked about the velocities at $t_c$. 
However, to predict if that will happen at some point in the future, you need their two state vectors and two epochs $(x_1, y_1, z_1, vx_1, vy_1, vz_1, t_1)$ and $(x_2, y_2, z_2, vx_2, vy_2, vz_2, t_2)$. That's the minimal problem. My question is about an ensemble of $n$ state vectors and epochs $(x_i, y_i, z_i, vx_i, vy_i, vz_i, t_i)$ and looking at $O(n^2)$ pairs and predicting conjunctions for each one. – uhoh Feb 8 '19 at 13:26

• I thought you had $n$ states $(x_i,y_i,z_i,vx_i,vy_i,vz_i)$ and want to find the closest pair of points among these $n$ points in 6 dimensions for each discrete point in time. Please help me with this 4-D setting. Which kind of metric do you use in that space for this purpose? Something like Minkowski? I mean, you need to weight spatial and temporal distance. Feb 8 '19 at 14:54

Let's say you want to do something "reasonable", like collecting second-by-second conjunctions for a hundred-year period, for all the objects you can get your hands on state vectors for (a hundred thousand or so?). You have an $$O(S\cdot N^2)$$ approach, so... about $$10^{20}$$ distance evaluations. Yes, I can see there's a problem. For the solar system, things move at limited speed. We can then take advantage of the fact that objects can only move so far each time step.

#### Here's a time partitioning algorithm:

1. Scan through your entire pile of state vector data to find the highest velocity. $$O(S\cdot N)$$. That should be an acceptable runtime, since you wouldn't even be able to store all those state vectors if it wasn't. You'll end up with an extreme case, like the perihelion velocity of 1566 Icarus, on the order of ~100 km/s. So for the worst-case relative velocity, objects moving directly towards each other, we can assume an upper limit of about ~200 km/s.

2. Now, for each object, one at a time: do "rough" time steps, checking the distance to all other objects. Say, 10 days. That's six orders of magnitude less work than the granularity you are searching for.
In those 10 days, distances can close by at most ~1 AU if relative velocities are at most 200 km/s. Now, for the 10 "less rough" time steps of 1 day in between, only consider those objects within 1 AU in the "rough" time step. That will in many cases be a shorter list. In between that again, insert 10 "even less rough" time steps of 2.4 hours. Here, we only have to consider those within 0.1 AU in the "less rough" time step. That should be a small minority of your state vector database. At the ~15 min time step granularity, we're down to running through the short 0.01 AU list. At 1.5 min, 0.001 AU. If you stop the partitioning here, there will only be a couple of objects (or none!) to check for at every time step.

For objects distributed in a volume, or even clustered along a single 2D plane, this is asymptotically $$O(N^2)$$. That is, you don't have to worry about limiting how fine-grained your time steps are. Even for very nasty linear clustering (which doesn't apply to solar system objects, by the way), this is still at worst $$O(\log(S)\cdot N^2)$$.

You should be able to sift through the pile in a couple of minutes on a laptop this way.
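The pruning step can be sketched in a few lines. This is a toy illustration of the time-partitioning idea, not production code: the names `conjunctions` and `pos`, the refinement factor of 10, and the toy trajectories are my choices, and a real screening pipeline would also bracket minima between samples rather than rely on a fixed finest step.

```python
import math
from itertools import combinations

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def conjunctions(pos, ids, t0, t1, dt, d_conj, v_max, dt_min):
    """Coarse-to-fine scan: at step size `step`, a pair at separation s
    can close by at most v_max * step, so any pair with
    s - v_max * step > d_conj is pruned for that window."""
    def scan(pairs, ta, tb, step):
        hits, t = [], ta
        while t < tb:
            survivors = []
            for i, j in pairs:
                s = dist(pos(i, t), pos(j, t))
                if step <= dt_min:
                    if s <= d_conj:
                        hits.append((round(t, 6), i, j))
                elif s - v_max * step <= d_conj:   # could close the gap
                    survivors.append((i, j))
            if survivors:
                hits += scan(survivors, t, min(t + step, tb), step / 10)
            t += step
        return hits
    return scan(list(combinations(ids, 2)), t0, t1, dt)

# Toy check: two objects on the x-axis meeting at t = 5
pos = lambda i, t: (t, 0.0, 0.0) if i == 0 else (10.0 - t, 0.0, 0.0)
hits = conjunctions(pos, [0, 1], 0.0, 10.0, 1.0, 0.5, 2.0, 0.1)
assert hits and all(abs(t - 5.0) < 0.5 for t, i, j in hits)
```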
2021-12-05 02:28:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6073445677757263, "perplexity": 806.3566068525103}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363134.25/warc/CC-MAIN-20211205005314-20211205035314-00550.warc.gz"}
https://proofwiki.org/wiki/Definition:Integer_Subtraction
# Definition:Subtraction/Integers

Jump to navigation Jump to search

## Definition

The subtraction operation in the domain of integers $\Z$ is written "$-$".

As the set of integers is the Inverse Completion of Natural Numbers, it follows that elements of $\Z$ are the isomorphic images of the elements of equivalence classes of $\N \times \N$ where two tuples are equivalent if the difference between the two elements of each tuple is the same. Thus subtraction can be formally defined on $\Z$ as the operation induced on those equivalence classes as specified in the definition of integers.

Writing $\left[(a, b)\right]_\boxminus$ for the equivalence class of $(a, b)$, it follows that:

$\forall a, b, c, d \in \N: \left[(a, b)\right]_\boxminus - \left[(c, d)\right]_\boxminus = \left[(a, b)\right]_\boxminus + \left(-\left[(c, d)\right]_\boxminus\right) = \left[(a, b)\right]_\boxminus + \left[(d, c)\right]_\boxminus$

Thus integer subtraction is defined between all pairs of integers, such that:

$\forall x, y \in \Z: x - y = x + \paren {-y}$

## Also known as

In the context of mathematical logic it is sometimes referred to as proper subtraction so as to distinguish it from the partial subtraction operation as defined on the natural numbers.
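The construction is easy to animate with pairs of naturals. A small sketch (the helper names `norm`, `neg`, `sub`, `to_int` are mine), where the class of $(a, b)$ represents the integer $a - b$:

```python
# An integer is a class [(a, b)] of pairs of naturals with (a, b) ~ (c, d)
# iff a + d = b + c; the class of (a, b) plays the role of a - b.
def norm(p):
    a, b = p
    m = min(a, b)
    return (a - m, b - m)      # canonical representative of the class

def add(p, q):
    return norm((p[0] + q[0], p[1] + q[1]))

def neg(p):                    # -[(c, d)] = [(d, c)], as in the formula above
    c, d = p
    return (d, c)

def sub(p, q):                 # x - y := x + (-y)
    return add(p, neg(q))

def to_int(p):
    return p[0] - p[1]

assert to_int(sub((3, 0), (5, 0))) == -2   # 3 - 5 = -2
```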
2021-07-23 15:41:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8940955996513367, "perplexity": 210.80960728538844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046149929.88/warc/CC-MAIN-20210723143921-20210723173921-00374.warc.gz"}
https://proofwiki.org/wiki/Definition:Saturated_Set_(Equivalence_Relation)/Definition_1
# Definition:Saturated Set (Equivalence Relation)/Definition 1 Let $\sim$ be an equivalence relation on a set $S$. Let $T \subset S$ be a subset. $T$ is saturated if and only if it equals its saturation: $T = \overline T$
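A finite illustration (the function name `saturation` and the example relation are mine): the saturation of $T$ is the union of the equivalence classes of its elements, and $T$ is saturated exactly when that union adds nothing.

```python
def saturation(T, S, related):
    """Union of the equivalence classes [x] for x in T."""
    return {y for y in S for x in T if related(x, y)}

# Example: S = {0, ..., 9} with x ~ y iff x = y (mod 3)
S = set(range(10))
related = lambda x, y: x % 3 == y % 3

T1 = {0, 3, 6, 9}   # a whole class, so T1 equals its saturation
T2 = {0, 1}         # misses 4 and 7, so it does not
assert saturation(T1, S, related) == T1      # saturated
assert saturation(T2, S, related) != T2      # not saturated
```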
2019-12-15 11:04:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6744208335876465, "perplexity": 184.3236514552619}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541307813.73/warc/CC-MAIN-20191215094447-20191215122447-00078.warc.gz"}
http://hal.in2p3.fr/in2p3-01242888
# Two-neutron “halo” from the low-energy limit of neutron–neutron interaction: Applications to drip-line nuclei $^{22}$C and $^{24}$O

Abstract: The formation of a two-neutron “halo”, a low-density, far-extended surface of weakly bound two neutrons, is described using the neutron–neutron (nn) interaction fixed at the low-energy nn scattering limit. The method is tested on the loosely bound two neutrons in $^{24}$O, where good agreement with experimental data is found. It is applied to the halo neutrons in $^{22}$C in two ways: with the $^{20}$C core taken as closed, or as correlated (due to excitations from the closed core). The nn interaction is shown to be strong enough to produce a two-neutron halo in both cases, locating $^{22}$C on the drip line, while $^{21}$C remains unbound. A unique relation between the two-neutron separation energy, $S_{2n}$, and the radius of the neutron halo is presented. New predictions for $S_{2n}$ and the halo radius are given for $^{22}$C. The appearance of Efimov states is also discussed.

Document type: Journal article. Physics Letters B, Elsevier, 2015, 753, pp. 199–203. doi:10.1016/j.physletb.2015.12.001
Contributor: Michel Lion. Submitted: Monday, December 14, 2015, 11:35:42. Last modified: Thursday, February 1, 2018, 01:26:58.

### Citation

T. Suzuki, T. Otsuka, C. Yuane, N. Alahari. Two-neutron “halo” from the low-energy limit of neutron–neutron interaction: Applications to drip-line nuclei $^{22}$C and $^{24}$O. Physics Letters B, Elsevier, 2015, 753, pp. 199–203. doi:10.1016/j.physletb.2015.12.001. 〈in2p3-01242888〉
2018-10-15 18:26:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6949552297592163, "perplexity": 9999.856339310089}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509336.11/warc/CC-MAIN-20181015163653-20181015185153-00463.warc.gz"}
http://mathhelpforum.com/algebra/116191-largest-possible-domain.html
# Math Help - Largest possible domain

1. ## Largest possible domain

Hey there, I'm just trying to work out how to find the largest possible domain of the following function algebraically.

$f(x)=\sqrt{\frac{x-1}{x+2}}$

The way I've been going about it is: $x-1$ must be greater than or equal to zero, so $x\geq1$; $x+2$ must be greater than zero, so $x>-2$; and $\sqrt{x+2}$ can't equal zero, meaning, for this part, $x$ can be any real number besides $-2$. From this I get the domain of $\left [1,\infty\right )$, which is not the complete domain stated in the answer. I must be missing something...

2. Hi Stroodle

Originally Posted by Stroodle: "The way I've been going about it is: $x-1$ must be greater than or equal to zero, so $x\geq1$; $x+2$ must be greater than zero, so $x>-2$."

Your mistake is here, because $(x-1)$ and $(x+2)$ can be less than zero at the same time. What you should do is solve: $\frac{x-1}{x+2}\geq 0$

3. Ahh, of course. Thanks heaps for that!

4. To solve $\frac{x-1}{x+2}\geq 0$, you need to consider three ranges: $x<-2$, $-2<x<1$ and $x>1$. Choose the ranges of $x$ s.t. $(x-1)$ and $(x+2)$ have the same sign, and those will be $x<-2$ and $x>1$. Also, the fraction is equal to zero when $x=1$, while the denominator vanishes at $x=-2$. Hence, the largest possible domain is $(-\infty,-2)\cup[1,\infty)$

5. I'm fine with that part, just stupidly overlooked that both the numerator and denominator could be negative
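A numeric spot-check of the final answer (the helper name `in_domain` is mine): a point $x$ lies in the domain iff $x \neq -2$ and $\frac{x-1}{x+2} \geq 0$.

```python
def in_domain(x):
    # x = -2 makes the denominator zero; otherwise the fraction
    # under the square root must be non-negative
    if x == -2:
        return False
    return (x - 1) / (x + 2) >= 0

# Expected domain: (-inf, -2) U [1, inf)
assert in_domain(-3) and in_domain(1) and in_domain(5)
assert not in_domain(-2) and not in_domain(0) and not in_domain(-1.5)
```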
2014-10-20 09:58:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9241238236427307, "perplexity": 393.7994965633866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507442420.22/warc/CC-MAIN-20141017005722-00250-ip-10-16-133-185.ec2.internal.warc.gz"}
http://talkstats.com/threads/binomial-distribution-and-log-likelihood-function.74574/
# Binomial distribution and log-likelihood function

#### hellostudent

##### New Member

Hi everyone! I'm struggling a lot with the log-likelihood function of the binomial distribution. I have these formulas in my notes:

If I understood correctly, in the likelihood function of the binomial distribution we compute the density function for each trial and multiply the outcomes ($f(x_1) f(x_2) \cdots f(x_n)$). I don't understand how it is possible that, while the likelihood formula contains the product sign, the log-likelihood function doesn't contain the summation sign, like in this picture.

I'm increasingly convinced my classmate made a mistake when he wrote the product sign in the first likelihood formula of the first image, because the product applies to situations with $n$ i.i.d. variables $x_i$, not to a number $k$ of successes. I also notice that the highlighted formula of the second image is missing a term: the natural logarithm of the binomial coefficient $\binom{n}{x_i}$. Why?

Then I have other doubts. First of all, I cannot explain how, computing the likelihood, we don't get $n^2$ in this highlighted part. If you multiply $f(x)$ $n$ times, I can understand that the exponent picks up $\sum x_i$, because the sum of the successes is definitely equal to $k$; but, using the power rules, if you multiply $(1-p)^{n-x_i}$ $n$ times, I think you will obtain $n \cdot n - \sum x_i$ in the exponent, and not simply $n - \sum x_i$. For instance, with $n=3$ and consequently three random variables $x_1, x_2, x_3$, we multiply $f(x_1) f(x_2) f(x_3)$ and, within this multiplication, $(1-p)^{3-x_1}(1-p)^{3-x_2}(1-p)^{3-x_3} = (1-p)^{9-\sum x_i}$. So the formula that I obtained has $n$ summed to itself $n$ times (so $n \cdot n \Leftrightarrow n^2$).

Finally, is this equivalence correct? (I mean writing the product of the binomial coefficients as a product over the $n$ i.i.d. variables $x_i$.)
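One common resolution of the $n^2$ puzzle: if the $n$ i.i.d. observations are Bernoulli$(p)$ trials, each factor is $p^{x_i}(1-p)^{1-x_i}$, so the second exponent contributes $1-x_i$ per factor and the product carries $n-\sum x_i$; the $n\cdot n$ only appears if each $x_i$ is itself a binomial count over $n$ trials, which is a different model. A quick numeric check of the Bernoulli reading (the sample values are mine):

```python
import math

p = 0.3
xs = [1, 0, 1, 1, 0]               # n = 5 Bernoulli(p) observations
n, k = len(xs), sum(xs)

# Product of per-trial densities f(x_i) = p^{x_i} (1-p)^{1-x_i}
lik_product = math.prod(p**x * (1 - p)**(1 - x) for x in xs)

# Closed form p^k (1-p)^{n-k}: exponent n - k, not n^2 - k, because
# each factor contributes 1 - x_i, not n - x_i
lik_closed = p**k * (1 - p)**(n - k)
assert math.isclose(lik_product, lik_closed)

# Taking logs turns the product into a sum, as expected
ll = sum(x * math.log(p) + (1 - x) * math.log(1 - p) for x in xs)
assert math.isclose(ll, math.log(lik_closed))
```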
http://clay6.com/qa/31028/polymer-formation-from-monomer-starts-by
# Polymer formation from monomer starts by

$\begin{array}{ll} (a)\;\text{Condensation reaction between monomers} \\ (b)\;\text{Coordination reaction between monomers} \\ (c)\;\text{Conversion of monomer to monomer ion} \\ (d)\;\text{Hydrolysis of monomers} \end{array}$

Condensation polymers are formed by the condensation of two or more bifunctional monomers with the elimination of simple molecules like water, ammonia, or alcohol. In these reactions, the product of each step is again a bifunctional species, and the sequence of condensations continues.

Ans: (a)
https://stats.stackexchange.com/questions/121438/what-is-a-log-odds-distribution
# What is a log-odds distribution? I am reading a textbook on machine learning (Data Mining by Witten et al., 2011) and came across this passage: ... Moreover, different distributions can be used. Although the normal distribution is usually a good choice for numeric attributes, it is not suitable for attributes that have a predetermined minimum but no upper bound; in this case a "log-normal" distribution is more appropriate. Numeric attributes that are bounded above and below can be modeled by a "log-odds" distribution. I have never heard of this distribution. I googled for "log-odds distribution" but could not find any relevant exact match. Can someone help me out? What is this distribution, and why does it help with numbers bounded above and below? P.S. I am a software engineer, not a statistician. why does it help with numbers bounded above and below? A distribution defined on $(0,1)$ is what makes it suitable as a model for data on $(0,1)$. I don't think the text implies anything more than "it's a model for data on $(0,1)$" (or more generally, on $(a,b)$). what is this distribution ... ? The term 'log-odds distribution' is, unfortunately, not completely standard (and not a very common term even then). I'll discuss some possibilities for what it might mean. Let's start by considering a way to construct distributions for values in the unit interval. A common way to model a continuous random variable $P$ in $(0,1)$ is the beta distribution, and a common way to model discrete proportions in $[0,1]$ is a scaled binomial ($P=X/n$, at least when $X$ is a count). An alternative to using a beta distribution would be to take some continuous inverse CDF ($F^{-1}$) and use it to transform the values in $(0,1)$ to the real line (or rarely, the real half-line) and then use any relevant distribution ($G$) to model the values on the transformed range. 
This opens up many possibilities, since any pair of continuous distributions on the real line ($F,G$) are available for the transformation and the model. So, for example, the log-odds transformation $Y=\log(\frac{P}{1-P})$ (also called the logit) would be one such inverse-cdf transformation (being the inverse CDF of a standard logistic), and then there are many distributions we might consider as models for $Y$. We might then use (for example) a logistic$(\mu,\tau)$ model for $Y$, a simple two-parameter family on the real line. Transforming back to $(0,1)$ via the inverse log-odds transformation (i.e. $P=\frac{\exp(Y)}{1+\exp(Y)}$) yields a two-parameter distribution for $P$, one that can be unimodal, or U shaped, or J shaped, symmetric or skew, in many ways somewhat like a beta distribution (personally, I'd call this logit-logistic, since its logit is logistic). [The answer included a figure showing example densities for several values of $\mu,\tau$.] Looking at the brief mention in the text by Witten et al., this might be what's intended by "log-odds distribution" - but they might as easily mean something else. Another possibility is that the logit-normal was intended. However, the term seems to have been used by van Erp & van Gelder (2008)$^{[1]}$, for example, to refer to a log-odds transformation on a beta distribution (so in effect taking $F$ as a logistic and $G$ as the distribution of the log of a beta-prime random variable, or equivalently the distribution of the difference of the logs of two chi-square random variables). However, they are using this to model count proportions, which are discrete. This of course leads to some problems (caused by trying to model a distribution with finite probability at 0 and 1 with one on $(0,1)$), which they then seem to spend a lot of effort on. (It would seem easier to just avoid the inappropriate model, but maybe that's just me.) Several other documents (I found at least three) refer to the sample distribution of log-odds (i.e. 
on the scale of $Y$ above) as "the log-odds distribution" (in some cases where $P$ is a discrete proportion* and in some cases where it's a continuous proportion) - so in that case it's not a probability model as such, but it's something to which you might apply some distributional model on the real line. * again, this has the problem that if $P$ is exactly 0 or 1, the value of $Y$ will be $-\infty$ or $\infty$ respectively ... which suggests we must bound the distribution away from 0 and 1 to use it for this purpose. The dissertation by Yan Guo (2009)$^{[2]}$ uses the term to refer to a log-logistic distribution, a right-skew distribution on the real half-line. So as you see, it's not a term with a single meaning. Without a clearer indication from Witten or one of the other authors of that book, we're left to guess what is intended. [1]: Noel van Erp & Pieter van Gelder, (2008), "How to Interpret the Beta Distribution in Case of a Breakdown," Proceedings of the 6th International Probabilistic Workshop, Darmstadt [2]: Yan Guo, (2009), The New Methods on NDE Systems Pod Capability Assessment and Robustness, Dissertation submitted to the Graduate School of Wayne State University, Detroit, Michigan • (+1) A search of the entire book indicates that no clarification is forthcoming. The context suggests that "log-odds distribution" refers to some particular model, just as the "lognormal" is proposed in the previous sentence as a universal distribution for all nonnegative values(!). – whuber Oct 27, 2014 at 17:44 • @whuber I agree with your characterization of what's in the book - I didn't intend that my comments relating to the use of the term in other contexts to refer to the sample distribution imply that that was the intent in the book, but only as an indication of it being a term with several meanings. On the passages in question, my advice to people learning this material (as on many things) would be to read more than one book. 
Oct 27, 2014 at 20:35

I'm a software engineer (not a statistician) and I recently read a book called An Introduction to Statistical Learning. With applications in R. I think what you're reading about is log-odds or logit. See page 132: http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Fourth%20Printing.pdf Brilliant book - I read it from cover to cover. Hope this helps

• Thank you for the pointer. Assuming the log-odds distribution is the same as "logistic distribution", I looked up the latter on Wikipedia. It appears that its PDF has no lower or upper bound. So I'm still wondering why the textbook that I quoted originally said that "Numeric attributes that are bounded above and below can be modeled" with this distribution. Oct 26, 2014 at 0:36
• I think it's maybe talking about the output of the function, where the bounds are 0.0 (impossible) to 1.0 (definite). (I could be completely wrong here) Oct 26, 2014 at 0:46
• It is possible that your model could produce arbitrarily large positive or negative results. These might not be interpretable in terms of a bounded range such as a probability, but could be interpretable as log-odds using the logit function and its inverse, the logistic function. Oct 27, 2014 at 21:59
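The "logit-logistic" reading in the accepted answer can be made concrete with a small sampling sketch (the parameter values $\mu, \tau$ below are illustrative, not from the answer): draw $Y$ from a logistic distribution on the real line via its inverse CDF, then map back through the inverse log-odds. Every draw lands strictly inside $(0,1)$, which is exactly why such a transform suits attributes bounded above and below.

```python
import math
import random

random.seed(0)

def logit(p: float) -> float:
    """Log-odds transform: maps (0, 1) to the real line."""
    return math.log(p / (1 - p))

def inv_logit(y: float) -> float:
    """Inverse log-odds (logistic function): maps the real line to (0, 1)."""
    return math.exp(y) / (1 + math.exp(y))

# Sample Y from a logistic(mu, tau) on the real line via inverse-CDF sampling,
# then transform back to the unit interval.
mu, tau = 0.5, 0.8  # illustrative parameters
samples = []
for _ in range(10_000):
    u = random.random()
    y = mu + tau * math.log(u / (1 - u))  # logistic inverse CDF
    samples.append(inv_logit(y))

# Every draw is strictly inside (0, 1).
assert all(0 < s < 1 for s in samples)
```

Swapping the logistic model for $Y$ with a normal yields the logit-normal alternative mentioned in the answer; only the inverse-CDF line changes.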
http://2012books.lardbucket.org/books/policy-and-theory-of-international-trade/s05-05-definitions-absolute-and-compa.html
This is “Definitions: Absolute and Comparative Advantage”, section 2.5 from the book Policy and Theory of International Trade (v. 1.0).

## 2.5 Definitions: Absolute and Comparative Advantage

### Learning Objectives

1. Learn how to define labor productivity and opportunity cost within the context of the Ricardian model.
2. Learn to identify and distinguish absolute advantage and comparative advantage.
3. Learn to identify comparative advantage via two methods: (1) by comparing opportunity costs and (2) by comparing relative productivities.

The basis for trade in the Ricardian model is differences in technology between countries. Below we define two different ways to describe technology differences. The first method, called absolute advantage, is the way most people understand technology differences. The second method, called comparative advantage, is a much more difficult concept. As a result, even those who learn about comparative advantage often will confuse it with absolute advantage. It is quite common to see misapplications of the principle of comparative advantage in newspaper and journal stories about trade. Many times authors write “comparative advantage” when in actuality they are describing absolute advantage. 
This misconception often leads to erroneous implications, such as a fear that technology advances in other countries will cause our country to lose its comparative advantage in everything. As will be shown, this is essentially impossible. To define absolute advantage, it is useful to define labor productivity first. To define comparative advantage, it is useful to first define opportunity cost. Next, each of these is defined formally using the notation of the Ricardian model.

## Labor Productivity

Labor productivity is defined as the quantity of output that can be produced with a unit of labor; it is the reciprocal of the unit labor requirement. Since $a_{LC}$ represents hours of labor needed to produce one pound of cheese, its reciprocal, $1/a_{LC}$, represents the labor productivity of cheese production in the United States. Similarly, $1/a_{LW}$ represents the labor productivity of wine production in the United States.

## Absolute Advantage

A country has an absolute advantage in the production of a good relative to another country if it can produce the good at lower cost or with higher productivity. Absolute advantage compares industry productivities across countries. In this model, we would say the United States has an absolute advantage in cheese production relative to France if

$a_{LC} < a_{LC}^*$ or if $\frac{1}{a_{LC}} > \frac{1}{a_{LC}^*}.$

The first expression means that the United States uses fewer labor resources (hours of work) to produce a pound of cheese than does France. In other words, the resource cost of production is lower in the United States. The second expression means that labor productivity in cheese in the United States is greater than in France. Thus the United States generates more pounds of cheese per hour of work. 
Obviously, if $a_{LC}^* < a_{LC}$, then France has the absolute advantage in cheese. Also, if $a_{LW} < a_{LW}^*$, then the United States has the absolute advantage in wine production relative to France.

## Opportunity Cost

Opportunity cost is defined generally as the value of the next best opportunity. In the context of national production, the nation has opportunities to produce wine and cheese. If the nation wishes to produce more cheese, then because labor resources are scarce and fully employed, it is necessary to move labor out of wine production in order to increase cheese production. The loss in wine production necessary to produce more cheese represents the opportunity cost to the economy. The slope of the PPF, $-(a_{LC}/a_{LW})$, corresponds to the opportunity cost of production in the economy.

Figure 2.2 Defining Opportunity Cost

To see this more clearly, consider points A and B in Figure 2.2 "Defining Opportunity Cost". Let the horizontal distance between A and B be one pound of cheese. Label the vertical distance X. The distance X then represents the quantity of wine that must be given up to produce one additional pound of cheese when moving from point A to B. In other words, X is the opportunity cost of producing cheese. Note also that the slope of the line between A and B is given by the formula

$\text{slope} = \frac{\text{rise}}{\text{run}} = \frac{-X}{1}.$

Thus the slope of the line between A and B is the opportunity cost, which from above is given by $-(a_{LC}/a_{LW})$. We can more clearly see why the slope of the PPF represents the opportunity cost by noting the units of this expression:

$-\frac{a_{LC}}{a_{LW}} \left[\frac{\text{hrs}/\text{lb}}{\text{hrs}/\text{gal}} = \frac{\text{gal}}{\text{lb}}\right].$

Thus the slope of the PPF expresses the number of gallons of wine that must be given up (hence the minus sign) to produce another pound of cheese. 
Hence it is the opportunity cost of cheese production (in terms of wine). The reciprocal of the slope, $-(a_{LW}/a_{LC})$, in turn represents the opportunity cost of wine production (in terms of cheese). Since in the Ricardian model the PPF is linear, the opportunity cost is the same at all possible production points along the PPF. For this reason, the Ricardian model is sometimes referred to as a constant (opportunity) cost model.

## Using Opportunity Costs

A country has a comparative advantage in the production of a good if it can produce that good at a lower opportunity cost relative to another country. Thus the United States has a comparative advantage in cheese production relative to France if

$\frac{a_{LC}}{a_{LW}} < \frac{a_{LC}^*}{a_{LW}^*}.$

This means that the United States must give up less wine to produce another pound of cheese than France must give up to produce another pound. It also means that the slope of the U.S. PPF is flatter than the slope of France’s PPF. Starting with the inequality above, cross multiplication implies the following:

$\frac{a_{LW}^*}{a_{LC}^*} < \frac{a_{LW}}{a_{LC}}.$

This means that France can produce wine at a lower opportunity cost than the United States. In other words, France has a comparative advantage in wine production. This also means that if the United States has a comparative advantage in one of the two goods, France must have the comparative advantage in the other good. It is not possible for one country to have the comparative advantage in both of the goods produced.

Suppose one country has an absolute advantage in the production of both goods. Even in this case, each country will have a comparative advantage in the production of one of the goods. For example, suppose $a_{LC} = 10$, $a_{LW} = 2$, $a_{LC}^* = 20$, and $a_{LW}^* = 5$. In this case, $a_{LC}$ (10) < $a_{LC}^*$ (20) and $a_{LW}$ (2) < $a_{LW}^*$ (5), so the United States has the absolute advantage in the production of both wine and cheese. However, it is also true that

$\frac{a_{LC}^*}{a_{LW}^*} \left(= \frac{20}{5}\right) < \frac{a_{LC}}{a_{LW}} \left(= \frac{10}{2}\right),$

so that France has the comparative advantage in cheese production relative to the United States. 
## Using Relative Productivities

Another way to describe comparative advantage is to look at the relative productivity advantages of a country. In the United States, the labor productivity in cheese is 1/10, while in France it is 1/20. This means that the U.S. productivity advantage in cheese is (1/10)/(1/20) = 2/1. Thus the United States is twice as productive as France in cheese production. In wine production, the U.S. advantage is (1/2)/(1/5) = (2.5)/1. This means the United States is two and one-half times as productive as France in wine production. The comparative advantage good in the United States, then, is that good in which the United States enjoys the greatest productivity advantage: wine.

Also consider France’s perspective. Since the United States is two times as productive as France in cheese production, France must be 1/2 times as productive as the United States in cheese. Similarly, France is 2/5 times as productive in wine as the United States. Since 1/2 > 2/5, France has a disadvantage in production of both goods. However, France’s disadvantage is smallest in cheese; therefore, France has a comparative advantage in cheese.

## No Comparative Advantage

The only case in which neither country has a comparative advantage is when the opportunity costs are equal in both countries. In other words, when

$\frac{a_{LC}}{a_{LW}} = \frac{a_{LC}^*}{a_{LW}^*},$

then neither country has a comparative advantage. It would seem, however, that this is an unlikely occurrence.

### Key Takeaways

• Labor productivity is defined as the quantity of output produced with one unit of labor; in the model, it is derived as the reciprocal of the unit labor requirement.
• Opportunity cost is defined as the quantity of a good that must be given up in order to produce one unit of another good; in the model, it is defined as the ratio of unit labor requirements between the first and the second good.
• The opportunity cost corresponds to the slope of the country’s production possibility frontier (PPF). 
• An absolute advantage arises when a country has a good with a lower unit labor requirement and a higher labor productivity than another country.
• A comparative advantage arises when a country can produce a good at a lower opportunity cost than another country.
• A comparative advantage is also defined as the good in which a country’s relative productivity advantage (disadvantage) is greatest (smallest).
• It is not possible for a country to lack a comparative advantage in everything unless the opportunity costs (relative productivities) are equal. In that case, neither country has a comparative advantage in anything.

### Exercises

1. Jeopardy Questions. As in the popular television game show, you are given an answer to a question and you must respond with the question. For example, if the answer is “a tax on imports,” then the correct question is “What is a tariff?”
   1. The labor productivity in cheese if four hours of labor are needed to produce one pound.
   2. The labor productivity in wine if three kilograms of cheese can be produced in one hour and ten liters of wine can be produced in one hour.
   3. The term used to describe the amount of labor needed to produce a ton of steel.
   4. The term used to describe the quantity of steel that can be produced with an hour of labor.
   5. The term used to describe the amount of peaches that must be given up to produce one more bushel of tomatoes.
   6. The term used to describe the slope of the PPF when the quantity of tomatoes is plotted on the horizontal axis and the quantity of peaches is on the vertical axis.
2. Consider a Ricardian model with two countries, the United States and Ecuador, producing two goods, bananas and machines. Suppose the unit labor requirements are $a_{LB}^{US} = 8$, $a_{LB}^{E} = 4$, $a_{LM}^{US} = 2$, and $a_{LM}^{E} = 4$. Assume the United States has 3,200 workers and Ecuador has 400 workers.
   1. Which country has the absolute advantage in bananas? Why?
   2. Which country has the comparative advantage in bananas? Why?
   3. How many bananas and machines would the United States produce if it applied half of its workforce to each good?
3. Consider a Ricardian model with two countries, England and Portugal, producing two goods, wine and corn. Suppose the unit labor requirements in wine production are $a_{LW}^{Eng} = 1/3$ hour per liter and $a_{LW}^{Port} = 1/2$ hour per liter, while the unit labor requirements in corn are $a_{LC}^{Eng} = 1/4$ hour per kilogram and $a_{LC}^{Port} = 1/2$ hour per kilogram.
   1. What is labor productivity in the wine industry in England and in Portugal?
   2. What is the opportunity cost of corn production in England and in Portugal?
   3. Which country has the absolute advantage in wine? In corn?
   4. Which country has the comparative advantage in wine? In corn?
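The chapter's two tests for advantage can be checked mechanically. The sketch below (helper names are my own) uses the unit labor requirements from the example in the text ($a_{LC} = 10$, $a_{LW} = 2$, $a_{LC}^* = 20$, $a_{LW}^* = 5$) and confirms that a country can hold the absolute advantage in both goods while still holding the comparative advantage in only one.

```python
# Unit labor requirements from the chapter's example:
# United States: cheese 10 hrs/lb, wine 2 hrs/gal
# France:        cheese 20 hrs/lb, wine 5 hrs/gal
a = {"US": {"cheese": 10, "wine": 2}, "France": {"cheese": 20, "wine": 5}}

def absolute_advantage(good):
    """Country with the lower unit labor requirement (higher productivity)."""
    return min(a, key=lambda c: a[c][good])

def opportunity_cost(country, good, other):
    """Units of `other` given up per extra unit of `good`: a_L,good / a_L,other."""
    return a[country][good] / a[country][other]

def comparative_advantage(good, other):
    """Country with the lower opportunity cost of producing `good`."""
    return min(a, key=lambda c: opportunity_cost(c, good, other))

assert absolute_advantage("cheese") == "US"   # 10 < 20
assert absolute_advantage("wine") == "US"     # 2 < 5
# Even so, France has the comparative advantage in cheese: 20/5 < 10/2.
assert comparative_advantage("cheese", "wine") == "France"
assert comparative_advantage("wine", "cheese") == "US"
```

The same helpers applied to the exercise data (e.g. the United States/Ecuador numbers) reproduce the answers the questions ask for.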
https://zbmath.org/?q=an:1194.14067
# zbMATH — the first resource for mathematics

The defect of strong approximation for commutative algebraic groups. (Le défaut d’approximation forte pour les groupes algébriques commutatifs.) (French. English summary) Zbl 1194.14067

Let $$k$$ be a number field. The first main result of the paper under review is the construction of an exact sequence describing the topological closure of the group $$H^0(k,M)$$ of $$k$$-rational points of a 1-motive $$M$$ inside its adelic points. The second main result shows that the obstruction to strong approximation for a semiabelian variety is essentially controlled by its algebraic Brauer group. Throughout the paper it is assumed that the Tate-Shafarevich groups of all abelian varieties involved are finite.

For a finite place $$v$$ of $$k$$, let $$\mathcal O_v$$ and $$k_v$$ be the local ring and the local field at $$v$$. For an infinite place, $$\mathcal O_v$$ and $$k_v$$ both denote the completion of $$k$$ at $$v$$, and cohomology over the fields of real or complex numbers is understood to be Galois cohomology modified à la Tate. Let $$M$$ be a 1-motive over $$k$$ with dual $$M^\vee$$. The first main theorem states that there is an exact sequence of topological groups $0 \to \overline{H^0(k,M)} \to {\prod_{v}}'H^0(k_v, M) \to H^1(k,M^\vee)^D \to \text{III}^1(k,M) \to 0$ where $$(-)^D$$ stands for Pontryagin dual, and where the product is a restricted product with respect to maps $$H^0(\mathcal O_v, M) \to H^1(k_v, M)$$ for all places $$v$$ of $$k$$. In the case $$M$$ is a semiabelian variety $$G$$ and ignoring the real places of $$k$$, this restricted product is the group of adelic points of $$G$$. Finally, the first group in this sequence is the topological closure of the image of $$H^0(k, M)$$ in the restricted product. The second main result, an application of the first, concerns the integral Manin obstruction. 
Let $$X$$ be a flat scheme over the ring of integers of $$k$$ whose generic fibre is a principal homogeneous space under a semiabelian variety $$G$$ over $$k$$. Let $$S$$ be a finite set of places of $$k$$ containing all infinite places, let $$A_S$$ be the ring of $$S$$-adeles, and let $$P = (P_v)_{v\in S}$$ be an $$A_S$$-point of $$X$$. Suppose that $$P$$ is orthogonal to the Brauer group $$\mathrm{Br}(X)$$ under the Brauer-Manin pairing. Then there exists an $$\mathcal O_S$$-integral point of $$X$$ which is $$v$$-adically close to $$P$$ if $$v$$ is non-archimedean, and which lies in the same connected component of $$X(k_v)$$ as $$P_v$$ if $$v$$ is archimedean. In particular, if $$P = (P_v)_{v}$$ is an adelic point of $$X$$ orthogonal to $$\mathrm{Br}(X)$$, then $$X(\mathcal O_k)$$ is nonempty.

##### MSC:
14L15 Group schemes
12G05 Galois cohomology
11J61 Approximation in non-Archimedean valuations
11R56 Adèle rings and groups

##### Keywords:
strong approximation; Brauer group; 1-motive
https://collegephysicsanswers.com/openstax-solutions/calculate-speed-sound-day-when-1500-hz-frequency-has-wavelength-0221-m
Question: Calculate the speed of sound on a day when a 1500 Hz frequency has a wavelength of 0.221 m.

Answer: $v = f\lambda = (1500 \textrm{ Hz})(0.221 \textrm{ m}) = 332 \textrm{ m/s}$
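The calculation is a single application of the wave relation $v = f\lambda$; as a trivial sketch:

```python
f = 1500            # frequency in Hz
wavelength = 0.221  # wavelength in m

# Wave relation: speed = frequency * wavelength
v = f * wavelength

# 331.5 m/s, which is 332 m/s to three significant figures
assert abs(v - 331.5) < 1e-9
```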
http://wiki.apidesign.org/index.php?title=LibraryReExportIsNPComplete&oldid=2020
# LibraryReExportIsNPComplete

This page describes a way to convert any 3SAT problem to a solution of finding the right configuration among conflicting libraries in a system that can re-export APIs, thus proving that the latter problem is wikipedia::NP-complete.

## 3SAT

The problem of satisfying a logic formula remains NP-complete even if all expressions are written in wikipedia::conjunctive normal form with 3 variables per clause (3-CNF), yielding the 3SAT problem. This means the expression has the form:

$(x_{11} \vee x_{12} \vee x_{13}) \wedge$
$(x_{21} \vee x_{22} \vee x_{23}) \wedge$
$(x_{31} \vee x_{32} \vee x_{33}) \wedge$
...
$(x_{n1} \vee x_{n2} \vee x_{n3})$

where each $x_{ab}$ is a variable $v_i$ or a negation of a variable $\neg v_i$. Each variable $v_i$ can appear multiple times in the expression.

## Module Dependencies Problem

Let A,B,C,... denote APIs. Let A1,A1.1,A1.7,A1.11 denote compatible versions of API A. Let A1,A2.0,A3.1 denote incompatible versions of API A. Let Ax.y > Bu.v denote the fact that version x.y of API A depends on version u.v of API B. Let $A_{x.y} \gg B_{u.v}$ denote the fact that version x.y of API A depends on version u.v of API B and re-exports B's elements.

Let Repository R = (M,D) be any set of modules with their various versions and their dependencies on other modules, with or without re-export. Let C be a Configuration in a repository R = (M,D) if $C \subseteq M$ and the following is satisfied:

$\forall A_{x.y} \in C, \forall A_{x.y} \gg B_{u.v} \in D \Rightarrow \exists w >= v \wedge B_{u.w} \in C$ - each re-exported dependency is satisfied with some compatible version

$\forall A_{x.y} \in C, \forall A_{x.y} > B_{u.v} \in D \Rightarrow \exists w >= v \wedge B_{u.w} \in C$ - each plain dependency is satisfied with some compatible version

If there are two chains of re-exported dependencies $A_{p.q} \gg ... \gg B_{x.y}$ and $A_{p.q} \gg ... 
\gg B_{u.v}$ then $x = u \wedge y = v$ - this guarantees that each class has just one, exact meaning for each importer #### Module Dependency Problem Let there be a repository R = (M,D) and a module $A \in M$. Does there exist a configuration C in the repository R, such that the module $A \in C$, e.g. the module can be enabled? ## Converstion of 3SAT to Module Dependencies Problem Let there be 3SAT formula with with variables v1,...,vm as defined above. Let's create a repository of modules R. For each variable vi let's create two modules $M^i_{1.0}$ and $M^i_{2.0}$, which are mutually incompatible and put them into repository R. For each formula $(x_{i1} \vee x_{i2} \vee x_{i3})$ let's create a module Fi that will have three compatible versions. Each of them will depend on one variable's module. In case the variable is used with negation, it will depend on version 2.0, otherwise on version 1.0. So for the formula $v_a \vee \neg v_b \vee \neg v_c$ we will get: $F^i_{1.1} \gg M^a_{1.0}$ $F^i_{1.2} \gg M^b_{2.0}$ $F^i_{1.3} \gg M^c_{2.0}$ All these modules and dependencies add into repository R Now we will create a module T1.0 that depends all formulas: $T_{1.0} \gg F^1_{1.0}$ $T_{1.0} \gg F^2_{1.0}$ ... $T_{1.0} \gg F^n_{1.0}$ and add this module as well as its dependencies into repository R. Claim: There $\exists C$ (a configuration) of repository R and $T_{1.0} \in C$ $\Longleftrightarrow$ there is a solution to the 3SAT formula. ## Proof "$\Leftarrow$": Let's have an evaluation of each variable to either true or false that evaluates the whole 3SAT formula to true. Then $C = \{ T_{1.0} \} \bigcup$ $\{ M^i_{1.0} : v_i \} \bigcup \{M^i_{2.0} : \neg v_i \} \bigcup$ $\{ F^i_{1.1} : x_{i1} \} \bigcup \{ F^i_{1.2} : \neg x_{i1} \wedge x_{i2} \} \bigcup \{ F^i_{1.3} : \neg x_{i1} \wedge \neg x_{i2} \wedge x_{i3} \}$ It is clear from the definition that each Mi and Fi can be in the C just in one version. 
Now it is important to ensure that each module is present in at least one version. This is easy for $M^i$: $v_i$ is either true or false, so one of $M^i_{1.0}$ or $M^i_{2.0}$ is included. Can there be an $F^i$ which is not included? Only if $\neg x_{i1} \wedge \neg x_{i2} \wedge \neg x_{i3}$, but then the whole 3-clause would evaluate to false, and as a result the whole 3SAT formula would evaluate to false. This means that the dependencies of $T_{1.0}$ on the $F^i$ modules are satisfied.

Are the dependencies of every $F^i_{1.q}$ also satisfied? Of the three versions, just one $F^i_{1.q}$ is included: the one whose $x_{iq}$ evaluates to true. Either $x_{iq}$ is without negation, and then $F^i_{1.q}$ depends on $M^j_{1.0}$, which is included since $v_j$ is true; or $x_{iq}$ contains a negation, and then $F^i_{1.q}$ depends on $M^j_{2.0}$, which is included since $v_j$ is false. qed.
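The clause-to-module mapping is easy to sanity-check mechanically. The sketch below uses a hypothetical encoding of my own (a literal as a `(variable_index, negated)` pair, module names as plain strings, not anything from the original page) to build the F-module dependencies of a clause exactly as in the reduction, and to brute-force the satisfiability question that the existence of a configuration for $T_{1.0}$ corresponds to.

```python
from itertools import product

def clause_modules(i, clause):
    # Version F^i_{1.q} re-exports M^j in version 1.0 (plain literal)
    # or 2.0 (negated literal), exactly as in the reduction above.
    return [(f"F{i}_1.{q + 1}", f"M{j}_{'2.0' if neg else '1.0'}")
            for q, (j, neg) in enumerate(clause)]

def satisfiable(clauses, nvars):
    # T_{1.0} can be enabled iff some assignment satisfies every clause.
    for assignment in product([False, True], repeat=nvars):
        if all(any(assignment[j] != neg for j, neg in clause)
               for clause in clauses):
            return True
    return False

# The clause (v_0 ∨ ¬v_1 ∨ ¬v_2) from the text:
print(clause_modules(1, [(0, False), (1, True), (2, True)]))
```

Brute force is, of course, exponential in the number of variables, which is the point of the NP-completeness argument.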
https://www.physicsforums.com/threads/constructing-a-cmos-and-circuit-with-mosfets.987359/
# Constructing a CMOS AND circuit with MOSFETs

## Homework Statement: Construct a basic CMOS circuit for the AND function with 2 inputs.

## Relevant Equations: -

Hi all :) So I constructed this based on a video tutorial, p- and n-MOS, and was wondering if it was correct before I combine them. Also, it looks very different from the proposed answer. Since the equation is z = ab, the pull-down diagram looks like this: and the pull-up is z = a' + b'. Combining my diagrams, I don't feel like I can achieve the outcome shown above. Is there a problem with my diagrams? Cheers

Baluncore: Do you notice the two transistors on the RHS of the proposed solution? What do they do?

Is it an inverter? I'm pretty new at that.

Baluncore: It is an inverting buffer. What implications does that have for the intermediate signal at the input to the buffer?

It will 'invert' the input, which means 1 becomes a 0 and 0 becomes a 1. As you have stated, my expected outcome has no inverting buffer at the right side. Do I need to include one? I'm using this as an example from the video tutorial and he doesn't mention this anywhere. OK, I realised his inputs are separated (so there are 2 a inputs instead of one).

Baluncore: You should have a buffer to provide some voltage gain in the gate. Simple two-transistor CMOS buffers are inverting buffers. If you need a non-inverting buffer you must use two inverting buffers. Parallel PMOS at the top will pull up if either input is low. Series NMOS at the bottom will pull down only when both inputs are high. I think you have designed a simple NAND gate without a buffer. A NAND gate can be an AND gate followed by an inverting buffer. An AND gate can be a NAND gate followed by an inverting buffer.
Joshy: Do you feel comfortable with how a PMOS or an NMOS works alone? I know from your other thread you're in an introductory course. Something that really helped me during my first run of IC design for digital circuits was to think of these as switches... is it on or off? Here's an NMOS example. The gate is the thing that controls whether or not the switch is open or closed. If the gate is high, then it closes the switch, connecting the source and the drain; if the gate is low, then the switch is open and the two wires stay separated (not touching). If you like the water analogy, then you can think of the gate as the valve that allows you to control whether or not water is flowing through the pipes. The PMOS is also controlled by the gate, but its behaviour is the opposite: if the gate is high, then it opens the switch; otherwise it closes it.

Let's try the switch idea on the inverter. On the left side, going from left to right, when the input (gate) is high, the PMOS on top is open and the NMOS on the bottom is closed. The output is connected to the node that is low, and so the output is low. On the right, when the input is low, the NMOS is open, but the PMOS closes and the high is connected to the output. I know I'm oversimplifying it, but I also know you're in an introductory module and you're still trying to get by with the Boolean logic. Hopefully this might help? Did your book show you the example of a NAND? Others are already pulling you towards it: thinking about a NAND (not-and) and inverting it with an inverter (not). Do the circuits on the left side look suspiciously familiar to something? How about the last two on the right?

Hi there. Yea, I do understand the concept of using 'switches' to determine whether the output is high or low given a circuit, so that's okay for me. I'm just curious why, when combining the two inputs of A together, I need to add an inverter before the output? Cheers

Joshy: The very top diagram in the original post is very hard to follow. Here's what I'm interpreting from the post.
It looks like you have the two inputs, but some of the terminals are floating or missing; the output is connected to ground, so it's always low. Since you're familiar with this whole switch idea, what I recommend is trying the truth table on the circuit without the inverter behind it. Just go through it one row at a time and let us know what you get :)

$$\begin{array}{|c|c|c|} \hline A & B & OUT\\ \hline 0 & 0 & \\ \hline 0 & 1 & \\ \hline 1 & 0 & \\ \hline 1 & 1 & \\ \hline \end{array}$$
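To make the switch picture above concrete, here is a rough Python sketch (my own illustration, not code from the thread) that models each transistor as an ideal switch and builds the AND gate as the NAND networks Baluncore described, followed by the inverting buffer:

```python
def nmos_closed(gate):
    # An ideal NMOS switch conducts (is closed) when its gate is high.
    return gate

def pmos_closed(gate):
    # An ideal PMOS switch conducts when its gate is low.
    return not gate

def nand(a, b):
    # Pull-up network: two PMOS in parallel to VDD.
    # Pull-down network: two NMOS in series to GND.
    pull_up = pmos_closed(a) or pmos_closed(b)
    pull_down = nmos_closed(a) and nmos_closed(b)
    assert pull_up != pull_down  # exactly one network conducts
    return pull_up               # output is high iff tied to VDD

def inverter(x):
    # The two-transistor inverting buffer: PMOS to VDD, NMOS to GND.
    return pmos_closed(x)

def and_gate(a, b):
    # AND = NAND followed by an inverting buffer.
    return inverter(nand(a, b))

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), int(and_gate(a, b)))
```

Filling in the truth table this way shows the two networks alone produce NAND, and only the extra inverter turns it into AND.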
https://www.gamedev.net/forums/topic/466499-is-triangle-in-the-frustum/
# is triangle in the frustum?

hi, i'm trying some tests with the frustum. i can do these things:

- is Point in frustum
- is Sphere in frustum

but when i try the triangle with the frustum i can't get the right result. here is my code:

```pascal
function IsTriangleInFrustum(Vektor1, Vektor2, Vektor3: TD3DVector): boolean;
begin
  Result :=
    ((PlaneTop.a*Vektor1.x + PlaneTop.b*Vektor1.y + PlaneTop.c*Vektor1.z + PlaneTop.d > 0) or
     (PlaneTop.a*Vektor2.x + PlaneTop.b*Vektor2.y + PlaneTop.c*Vektor2.z + PlaneTop.d > 0) or
     (PlaneTop.a*Vektor3.x + PlaneTop.b*Vektor3.y + PlaneTop.c*Vektor3.z + PlaneTop.d > 0))
    and
    ((PlaneRight.a*Vektor1.x + PlaneRight.b*Vektor1.y + PlaneRight.c*Vektor1.z + PlaneRight.d > 0) or
     (PlaneRight.a*Vektor2.x + PlaneRight.b*Vektor2.y + PlaneRight.c*Vektor2.z + PlaneRight.d > 0) or
     (PlaneRight.a*Vektor3.x + PlaneRight.b*Vektor3.y + PlaneRight.c*Vektor3.z + PlaneRight.d > 0))
    and
    ((PlaneBottom.a*Vektor1.x + PlaneBottom.b*Vektor1.y + PlaneBottom.c*Vektor1.z + PlaneBottom.d > 0) or
     (PlaneBottom.a*Vektor2.x + PlaneBottom.b*Vektor2.y + PlaneBottom.c*Vektor2.z + PlaneBottom.d > 0) or
     (PlaneBottom.a*Vektor3.x + PlaneBottom.b*Vektor3.y + PlaneBottom.c*Vektor3.z + PlaneBottom.d > 0))
    and
    ((PlaneLeft.a*Vektor1.x + PlaneLeft.b*Vektor1.y + PlaneLeft.c*Vektor1.z + PlaneLeft.d > 0) or
     (PlaneLeft.a*Vektor2.x + PlaneLeft.b*Vektor2.y + PlaneLeft.c*Vektor2.z + PlaneLeft.d > 0) or
     (PlaneLeft.a*Vektor3.x + PlaneLeft.b*Vektor3.y + PlaneLeft.c*Vektor3.z + PlaneLeft.d > 0))
    and
    ((PlaneNear.a*Vektor1.x + PlaneNear.b*Vektor1.y + PlaneNear.c*Vektor1.z + PlaneNear.d > 0) or
     (PlaneNear.a*Vektor2.x + PlaneNear.b*Vektor2.y + PlaneNear.c*Vektor2.z + PlaneNear.d > 0) or
     (PlaneNear.a*Vektor3.x + PlaneNear.b*Vektor3.y + PlaneNear.c*Vektor3.z + PlaneNear.d > 0))
    and
    ((PlaneFar.a*Vektor1.x + PlaneFar.b*Vektor1.y + PlaneFar.c*Vektor1.z + PlaneFar.d > 0) or
     (PlaneFar.a*Vektor2.x + PlaneFar.b*Vektor2.y + PlaneFar.c*Vektor2.z + PlaneFar.d > 0) or
     (PlaneFar.a*Vektor3.x + PlaneFar.b*Vektor3.y + PlaneFar.c*Vektor3.z + PlaneFar.d > 0));
end;
```

if you want to see how i make the frustum planes i can show you, but i'm sure they're correct, and the "triangle code" is not completely trash: it's working when the vectors are in the frustum. thanks everybody

As long as your point-in-frustum code works, try

```pascal
function IsTriangleInFrustum(Vektor1, Vektor2, Vektor3: TD3DVector): boolean;
begin
  Result := IsPointInFrustum(Vektor1) and
            IsPointInFrustum(Vektor2) and
            IsPointInFrustum(Vektor3);
end;
```

but your code is going to work only when one or more vertices are in the frustum, right? the first code is already working like yours; i want a more complex calculation (maybe an extra calculation for the triangle's edges). well, maybe i can handle this if i can get the "line-frustum" thing. is there anybody who knows this stuff?

What Dave has suggested can be modified to work however you want. A triangle is just 3 points. If you want to test if the whole triangle is in the frustum, then check if all 3 points are inside. If you only want to check if the triangle is partly in the frustum, check to see if any of the points are in the frustum. It shouldn't be much more complex than that. Although.. you could get into a situation where all points are outside the frustum but you're looking through the middle of the triangle. That could be a little more involved.

Quote (ibr_ozdemir): well maybe i can handle this if i can get the "line-frustum" thing, is there anybody who knows this stuff?

If you want a 'perfect' triangle-frustum test, you can use the separating axis test. If you want a test that is basically accurate but may occasionally return false positives, you can use the SAT but only test the axes corresponding to the frustum plane normals. 
(Basically, you check to see if the triangle is entirely on the back side of any of the six frustum planes.) Note however that per-triangle frustum culling is pretty rare these days; it's more common to cull on the object/node/model level.

thanks to all of you. well, i'm actually a little bit closer to the target now: my code almost returns a perfect result, except for this situation (following picture), but i think i can handle that too (with a triangle vs triangle intersection test).

Quote (jyk): Note however that per-triangle frustum culling is pretty rare these days; it's more common to cull on the object/node/model level.

well, i'm not going to use this code directly in my game engine, it's just for my "mesh editing" program (selecting triangles in the frustum). thanks

Quote (ibr_ozdemir): now my code almost returns perfect result, except this situation (following picture)

The SAT will catch that case (and other similar cases).

thanks jyk, i'll do that
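For reference, the conservative plane-normal test jyk describes can be sketched in a few lines of Python (my own sketch, not code from the thread; the `(a, b, c, d)` plane tuples and point triples are assumptions of the sketch):

```python
def in_front(plane, p):
    # A point is on the inside half-space of (a, b, c, d)
    # when a*x + b*y + c*z + d > 0, matching the Pascal code above.
    a, b, c, d = plane
    x, y, z = p
    return a * x + b * y + c * z + d > 0

def triangle_maybe_visible(planes, tri):
    # Reject only when all three vertices are behind one plane.
    # May return a false positive when the triangle straddles a corner;
    # the full separating axis test removes those.
    return all(any(in_front(pl, v) for v in tri) for pl in planes)

# Axis-aligned stand-in "frustum": the box [-1, 1]^3 as six inward planes.
box = [(1, 0, 0, 1), (-1, 0, 0, 1),
       (0, 1, 0, 1), (0, -1, 0, 1),
       (0, 0, 1, 1), (0, 0, -1, 1)]
```

This mirrors the original posted function: any-vertex-in-front per plane, AND over all six planes.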
https://socratic.org/questions/how-do-you-evaluate-frac-4-5-frac-7-10
# How do you evaluate $-\frac{4}{5} - (-\frac{7}{10})$?

Mar 10, 2018

$-\frac{1}{10}$

#### Explanation:

Find a common denominator, which is 10 in this case: $\frac{10}{5} = 2$, so multiply the numerator 4 by 2 to get the equivalent fraction $\frac{4 \cdot 2}{10} = \frac{8}{10}$.

$-\frac{8}{10} - (-\frac{7}{10})$

($-\frac{7}{10}$ becomes positive, because subtracting a negative number gives you a positive.)

$-\frac{8}{10} + \frac{7}{10} = -\frac{1}{10}$
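The same computation can be checked with exact rational arithmetic, for example with Python's standard `fractions` module:

```python
from fractions import Fraction

# -4/5 - (-7/10): subtracting a negative is the same as adding.
result = Fraction(-4, 5) - Fraction(-7, 10)
print(result)  # -1/10
```

`Fraction` finds the common denominator and reduces the result automatically, so it confirms the hand calculation above.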
http://openstudy.com/updates/514bae6ae4b07974555e19cf
peggiepenguin 3 years ago: does the integral of 1/x + sqrt x from 1 to infinity converge?

1. anonymous: i doubt it
2. anonymous: the degree of the denominator has to be larger than the degree of the numerator by more than 1
3. electrokid: is it $\int_1^\infty\left({1\over x}+\sqrt{x}\right)dx$ OR $\int_1^\infty{dx\over x+\sqrt{x}}$
4. peggiepenguin: the second one
5. electrokid: @satellite73 aha... @peggiepenguin next time please use the "equation editor on this website" or "TEX" to enter your equations.. so we know what you are talking about!!
6. electrokid: so, it DOES converge coz the denominator increases continuously and hence the function decreases continuously
7. anonymous: oh no!!
8. electrokid: oh yes! :D
9. anonymous: just because the function decreases does not mean the integral converges; that is a necessary but not a sufficient condition. it must decrease fast enough
10. anonymous: simple example $\int_1^{\infty}\frac{dx}{x}$ does not converge
11. anonymous: neither does the one above
12. electrokid: aah.. jumping to conclusions too soon. $u=\sqrt{x}\implies u^2=x\implies 2udu=dx\\ \int_1^\infty\frac{2u}{u^2+u}du\\ =2\int_1^\infty {du\over u+1}=2[\ln (u+1)]_1^\infty=\boxed{\infty}$
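Numerically, the partial integrals of electrokid's final form grow like a logarithm, which is consistent with divergence. The midpoint-rule helper below is my own illustration, not part of the thread:

```python
import math

def partial_integral(b, n=200000):
    # Midpoint rule for the integral of 1/(x + sqrt(x)) from 1 to b.
    h = (b - 1) / n
    total = 0.0
    for k in range(n):
        x = 1 + (k + 0.5) * h
        total += h / (x + math.sqrt(x))
    return total

# Closed form from the substitution u = sqrt(x):
# 2 * ln((sqrt(b) + 1) / 2), which grows without bound as b increases.
for b in (10, 100, 1000):
    print(b, partial_integral(b), 2 * math.log((math.sqrt(b) + 1) / 2))
```

The numeric and closed-form columns agree, and neither approaches a finite limit as b grows.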
https://math.stackexchange.com/questions/2454223/proof-verification-infinitely-many-primes-using-euclids-algorithm
# Proof Verification: Infinitely many Primes Using Euclid's Algorithm

I'm trying to prove the infinitude of primes using the division algorithm. Does the following proof work?

Assume that there are only finitely many primes in $\mathbb{Z}$. Letting $\mathcal{P}$ denote the set of primes, set \begin{align} p & = \min \mathcal{P} \\ P & = \max \mathcal{P}. \end{align} By the division algorithm, there exist unique $q$ and $r$ such that \begin{align} P = pq + r, \quad 0 \leq r < p. \end{align} If $r = 0$, we would have that $p \; | \; P$, a contradiction. So we have that $0 < r < p$. If $r$ is prime, we get a contradiction with the minimality of $p$. If $r$ is composite, then by the Fundamental Theorem of Arithmetic, $r$ has a prime decomposition, and every prime in it is at most $r < p$. But this, once again, contradicts the minimality of $p$ as a prime number.

I know the proof isn't as elementary as Euclid's original proof. The proof relies on both the division algorithm and the Fundamental Theorem of Arithmetic. But is it all right? Ultimately, I wish to prove that $F[x]$ has infinitely many primes, where $F$ is a finite field, using the same reasoning.

Hint: Consider this case: what happens if $r=1$? For example, $3=2+1$: here $2,3$ are primes and $1\neq 0$, yet $1$ is neither prime nor a product of primes.

• Can the proof be tweaked to accommodate this single degenerate case? If $r > 1$, the proof works, I guess. Also, I suppose this argument won't be a problem for the case of $F[x]$. Because if $f(x) = g(x)h(x)$, where, say, $g(x)$ is a polynomial of degree 0, then $g(x) \in F^{\times}$, contradicting the irreducibility of $f(x)$ in the UFD, $F[x]$. – Junaid Aftab Oct 2 '17 at 11:26
• Here, $f(x)$ plays the role of $P$ and $g(x)$ plays the role of $r$. – Junaid Aftab Oct 2 '17 at 11:27
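The degenerate case the hint points at is easy to exhibit with the division algorithm itself:

```python
# With the "finite prime set" {2, 3}, take p = 2 (minimum) and
# P = 3 (maximum). The division algorithm gives P = p*q + r.
P, p = 3, 2
q, r = divmod(P, p)
print(q, r)  # q = 1, r = 1

# r = 1 is nonzero, not prime, and not composite, so the case
# analysis in the proof misses it until r = 1 is treated separately.
```

So the proof's trichotomy on r (zero, prime, composite) is not exhaustive; r = 1 needs its own argument.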
https://repository.uantwerpen.be/link/irua/81088
Publication

Title: Search for resonant $t\overline{t}$ production in $p\overline{p}$ collisions at $\sqrt{s}$ = 1.96 TeV

Author: CDF Collaboration

Abstract: We report on a search for narrow-width particles decaying to a top and antitop quark pair. The data set used in the analysis corresponds to an integrated luminosity of 680 pb$^{-1}$ collected with the Collider Detector at Fermilab in run II. We present 95% confidence level upper limits on the cross section times branching ratio. Assuming a specific top-color-assisted technicolor production model, the leptophobic Z′ with width $\Gamma_{Z'} = 0.012 M_{Z'}$, we exclude the mass range $M_{Z'} < 725$ GeV/$c^2$ at the 95% confidence level.

Language: English

Source (journal): Physical review letters. - New York, N.Y.

Publication: New York, N.Y. : 2008

ISSN: 0031-9007

Volume/pages: 100:23 (2008), p. 231801,1-231801,7

ISI: 000256708100011
https://encyclopediaofmath.org/wiki/Formal_languages_and_automata
# Formal languages and automata

Both natural and programming languages can be viewed as sets of sentences, that is, finite strings of elements from some basic vocabulary. The notion of a language introduced below is very general. It certainly includes both natural and programming languages and also all kinds of nonsense languages one might think of. Traditionally, formal language theory is concerned with the syntactic specification of a language rather than with any semantic issues. A syntactic specification of a language with finitely many sentences can be given, at least in principle, by listing the sentences. This is not possible for languages with infinitely many sentences. The main task of formal language theory is the study of finitary specifications of infinite languages.

The basic theory of computation, as well as its various branches, such as cryptography, is inseparably connected with language theory. The input and output sets of a computational device can be viewed as languages, and — more profoundly — models of computation can be identified with classes of language specifications, in a sense to be made more precise. Thus, for instance, Turing machines can be identified with phrase-structure grammars and finite automata with regular grammars (cf. also Turing machine; Automaton, finite; Grammar, formal).

Formal language theory is — together with automata theory (cf. Automata, theory of), which is really inseparable from language theory — the oldest branch of theoretical computer science. In some sense, the role of language and automata theory in computer science is analogous to that of philosophy in general science: it constitutes the stem from which the individual branches of knowledge emerge.

An alphabet is a finite non-empty set. The elements of an alphabet $V$ are called letters. A word over an alphabet $V$ is a finite string of letters (cf. also Word). The word consisting of zero letters is called the empty word, written $\lambda$.
The set of all words (respectively, all non-empty words) over an alphabet $V$ is denoted by $V^{*}$ (respectively, $V^{+}$). (Thus, algebraically $V^{*}$ and $V^{+}$ are the free monoid and free semi-group generated by the finite set $V$.) For words $w_{1}$ and $w_{2}$, the juxtaposition $w_{1} w_{2}$ is called the catenation of $w_{1}$ and $w_{2}$. The empty word $\lambda$ is an identity with respect to catenation. Catenation being associative, the notation $w^{i}$, where $i$ is a non-negative integer, is used in the customary sense, and $w^{0}$ denotes the empty word. The length of a word $w$, in symbols $\mathop{\rm lg}\nolimits (w)$, also sometimes $| w |$, means the number of letters in $w$ when each letter is counted as many times as it occurs. A word $w$ is a subword of a word $u$ if and only if there are words $w_{1}$ and $w_{2}$ such that $u = w_{1} w w_{2}$ (cf. also Imbedded word).

Subsets of $V^{*}$ are referred to as (formal) languages over the alphabet $V$. Various unary and binary operations defined for languages will be considered in the sequel. Regarding languages as sets, one may immediately define the Boolean operations of union, intersection, complementation (the complementation of a language $L$ is denoted by $L^{c}$), difference, and symmetric difference in the usual manner. The catenation (or product) of two languages $L_{1}$ and $L_{2}$, in symbols $L_{1} L_{2}$, is defined by $$L_{1} L_{2} \ = \ \{ w_{1} w_{2} : w_{1} \in L_{1} \ \textrm{ and } \ w_{2} \in L_{2} \} .$$ The notation $L^{i}$ is extended to the catenation of languages. By definition, $L^{0} = \{ \lambda \}$. The catenation closure or Kleene star (respectively, $\lambda$-free catenation closure) of a language $L$, in symbols $L^{*}$ (respectively, $L^{+}$), is defined to be the union of all non-negative powers of $L$ (respectively, the union of all positive powers of $L$).

The operation of substitution. 
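For finite languages these operations can be computed directly; the following Python fragment (an illustration only — the Kleene star of a non-trivial language is infinite, so the sketch necessarily truncates it to finitely many powers) mirrors the definitions above:

```python
def catenation(L1, L2):
    # L1 L2 = { w1 w2 : w1 in L1 and w2 in L2 }
    return {w1 + w2 for w1 in L1 for w2 in L2}

def star(L, max_power):
    # Finite approximation of L^*: the union of L^0, L^1, ..., L^max_power.
    result = {""}   # L^0 = { lambda }, here modelled by the empty string
    power = {""}
    for _ in range(max_power):
        power = catenation(power, L)
        result |= power
    return result
```

For example, `catenation({"a", "b"}, {"c"})` yields the language {ac, bc}, and `star(L, k)` returns exactly the words of $L^{0} \cup \dots \cup L^{k}$.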
For each letter $a$ of an alphabet $V$, let $\sigma(a)$ denote a language (possibly over a different alphabet). Define, furthermore, $$\sigma(\lambda) \ = \ \{ \lambda \}, \ \ \sigma(w_{1} w_{2}) \ = \ \sigma(w_{1}) \sigma(w_{2}), \ \ \textrm{ for } \ w_{1}, w_{2} \in V^{*}.$$ For a language $L$ over $V$, one defines $$\sigma(L) \ = \ \{ u : u \in \sigma(w) \ \textrm{ for some } \ w \in L \} .$$ Such a mapping $\sigma$ is called a substitution. A substitution $\sigma$ such that each $\sigma(a)$ consists of a single word is called a homomorphism. (Algebraically, a homomorphism of languages is a homomorphism of monoids linearly extended to subsets of monoids.) In connection with homomorphisms (and also often elsewhere) one identifies a word $w$ with the singleton set $\{ w \}$, writing $\sigma(a) = w$ rather than $\sigma(a) = \{ w \}$.

The main objects of study in formal language theory are finitary specifications of infinite languages. Most of such specifications are obtained as special cases from the notion of a rewriting system. By definition, a rewriting system is an ordered pair $(V, P)$, where $V$ is an alphabet and $P$ a finite set of ordered pairs of words over $V$. The elements $(w, u)$ of $P$ are referred to as rewriting rules or productions, and are denoted by $w \rightarrow u$. Given a rewriting system, the yield relation $\Rightarrow$ in the set $V^{*}$ is defined as follows. For any two words $w$ and $u$, $w \Rightarrow u$ holds if and only if there are words $w'$, $w_{1}$, $w''$, $u_{1}$ such that $w = w' w_{1} w''$, $u = w' u_{1} w''$ and $w_{1} \rightarrow u_{1}$ is a production in the system. The reflexive transitive closure (respectively, transitive closure) of the relation $\Rightarrow$ is denoted by $\Rightarrow^{*}$ (respectively, $\Rightarrow^{+}$).
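The one-step yield relation can be sketched directly: given a word, collect every result of rewriting one occurrence of a left-hand side. The Python helper below is an illustration (words as strings, productions as string pairs), not part of the article:

```python
def one_step(w, productions):
    # All u with w => u: rewrite exactly one occurrence of some
    # left-hand side w1 inside w = w' w1 w'' to obtain u = w' u1 w''.
    successors = set()
    for left, right in productions:
        i = w.find(left)
        while i != -1:
            successors.add(w[:i] + right + w[i + len(left):])
            i = w.find(left, i + 1)
    return successors
```

For instance, with the single production a → c, the word aba yields the two words cba and abc, one for each occurrence of a. Iterating `one_step` computes $\Rightarrow^{+}$ and, together with the word itself, $\Rightarrow^{*}$.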
A phrase-structure grammar, shortly a grammar, is an ordered quadruple $G = (V _{N} ,\ V _{T} ,\ S,\ P)$, where $V _{N}$ and $V _{T}$ are disjoint alphabets (the alphabets of non-terminals and terminals), $S \in V _{N}$( the initial letter) and $P$ is a finite set of ordered pairs $(w , u )$ such that $u$ is a word over the alphabet $V = V _{N} \cup V _{T}$ and $w$ is a word over $V$ containing at least one letter of $V _{N}$. Again, the elements of $P$ are referred to as rewriting rules or productions, and are written as $w \rightarrow u$. A phrase-structure grammar $G$ as above defines a rewriting system $(V,\ P)$. Let $\Rightarrow$ and $\Rightarrow^{*}$ be the relations determined by this rewriting system. Then the language $L (G)$ generated by $G$ is defined by $$L (G) \ = \ \{ {w \in V _{T} ^ *} : {S \Rightarrow^{*} w} \} .$$ Two grammars $G$ and $G _{1}$ are termed equivalent if and only if $L (G) = L (G _{1} )$. For $i = 0,\ 1,\ 2,\ 3$, a grammar $G = (V _{N} ,\ V _{T} ,\ S,\ P)$ is of the type $i$ if and only if the restrictions $i$) on $P$, as given below, are satisfied: 0) No restrictions. 1) Each production in $P$ is of the form $w _{1} Aw _{2} \rightarrow w _{1} ww _{2}$, where $w _{1}$ and $w _{2}$ are arbitrary words, $A \in V _{N}$ and $w$ is a non-empty word (with the possible exception of the production $S \rightarrow \lambda$ whose occurrence in $P$ implies, however, that $S$ does not occur on the right-hand side of any production). 2) Each production in $P$ is of the form $A \rightarrow w$, where $A \in V _{N}$. 3) Each production is of one of the two forms $A \rightarrow Bw$ or $A \rightarrow w$, where $A,\ B \in V _{N}$ and $w \in V _ T^{*}$. A language is of type $i$ if and only if it is generated by a grammar of type $i$. Type-0 languages are also called recursively enumerable. Type-1 grammars and languages are also called context-sensitive. Type-2 grammars and languages are also called context-free. 
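To make the definition of $L(G)$ concrete, here is a small sketch (my own, not from the article) that enumerates the short words generated by a context-free grammar via breadth-first search over the yield relation; the grammar with productions $S \rightarrow aSb$, $S \rightarrow ab$ generates $\{ a^{n} b^{n} : n \geq 1 \}$:

```python
from collections import deque

def generate(productions, start, terminals, max_len):
    """Terminal words of length <= max_len derivable from `start` via the
    yield relation (one occurrence of a left-hand side rewritten per step)."""
    seen, out = {start}, set()
    queue = deque([start])
    while queue:
        w = queue.popleft()
        if all(c in terminals for c in w):
            out.add(w)
            continue
        if len(w) > max_len + 2:      # crude pruning, sufficient for this grammar
            continue
        for lhs, rhs in productions:
            i = w.find(lhs)
            while i != -1:            # rewrite each occurrence of lhs in w
                u = w[:i] + rhs + w[i + len(lhs):]
                if u not in seen:
                    seen.add(u)
                    queue.append(u)
                i = w.find(lhs, i + 1)
    return {w for w in out if len(w) <= max_len}

P = [("S", "aSb"), ("S", "ab")]
assert generate(P, "S", {"a", "b"}, 6) == {"ab", "aabb", "aaabbb"}
```

The same search works for any rewriting system, but without a pruning bound adapted to the grammar it need not terminate.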
Type-3 grammars and languages are also referred to as finite-state or regular. (See also Grammar, context-sensitive; Grammar, context-free.) Type- $i$, $i = 0,\ 1,\ 2,\ 3$, languages form a strict hierarchy: the family of type- $i$ languages is strictly included in the family of type- $(i - 1)$ languages, for $i = 1,\ 2,\ 3$. A specific context-free language over an alphabet with $2t$ letters, $V _{t} = \{ a _{1} \dots a _{t} ,\ \overline{a}\; _{1} \dots \overline{a}\; _{t} \}$, $t \geq 1$, is the Dyck language generated by the grammar $$( \{ S \} ,\ V _{t} ,\ S,\ \{ S \rightarrow SS,\ S \rightarrow \lambda ,\ S \rightarrow a _{1} S \overline{a}\; _{1} \dots S \rightarrow a _{t} S \overline{a}\; _{t} \} ).$$ The Dyck language consists of all words over $V _{t}$ that can be reduced to $\lambda$ using the relations $a _{i} \overline{a}\; _{i} = \lambda$, $i = 1 \dots t$. If the pairs $(a _{i} ,\ \overline{a}\; _{i} )$, $i = 1 \dots t$, are viewed as parentheses of different types, then the Dyck language consists of all sequences of correctly nested parentheses. The family of regular languages over an alphabet $V$ equals the family of languages obtained from "atomic languages" $\{ \lambda \}$ and $\{ a \}$, where $a \in V$, by a finite number of applications of "regular operations" : union, catenation and catenation closure. The formula expressing how a specific regular language is obtained from atomic languages by regular operations is termed a regular expression. Every context-free language which is $\lambda$- free (i.e., does not contain the empty word) is generated by a grammar in Chomsky normal form, as well as by a grammar in Greibach normal form. In the former, all productions are of the types $A \rightarrow BC$, $A \rightarrow a$, and in the latter — of the types $A \rightarrow aBC$, $A \rightarrow aB$, $A \rightarrow a$, where capital letters denote non-terminals and $a$ is a terminal letter. 
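Membership in the Dyck language defined above can be tested by performing the reductions $a _{i} \overline{a}\; _{i} = \lambda$ with a stack — ordinary bracket matching. The following is my own sketch, with the pairs written as ASCII brackets:

```python
def is_dyck(word, pairs):
    """pairs maps each opening letter a_i to its closing letter; a word is in
    the Dyck language iff it reduces to the empty word by cancelling adjacent
    matched pairs, which a single stack scan checks."""
    stack = []
    closing = set(pairs.values())
    for c in word:
        if c in pairs:                              # an opening letter
            stack.append(c)
        elif c in closing:                          # a closing letter
            if not stack or pairs[stack.pop()] != c:
                return False
        else:
            return False                            # letter outside the alphabet
    return not stack

pairs = {"(": ")", "[": "]"}                        # t = 2 pairs, as brackets

assert is_dyck("", pairs)                           # lambda is in the Dyck language
assert is_dyck("([])()", pairs)
assert not is_dyck("([)]", pairs)
assert not is_dyck("(", pairs)
```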
According to the theorem of Chomsky–Schützenberger, every context-free language $L$ can be expressed as $$L \ = \ h (D \cap R),$$ for some regular language $R$, Dyck language $D$ and homomorphism $h$. According to the Lemma of Bar-Hillel (also called the pumping lemma), every sufficiently long word $w$ in a context-free language $L$ can be written in the form $$w \ = \ u _{1} w _{1} u _{2} w _{2} u _{3} ,\ \ w _{1} w _{2} \ \neq \ \lambda ,$$ where for every $i \geq 0$ the word $u _{1} w _ 1^{i} u _{2} w _ 2^{i} u _{3}$ belongs to $L$. For regular languages, the corresponding result reads as follows: Every sufficiently long word $w$ in a regular language $L$ can be written in the form $w = u _{1} w _{1} u _{2}$, $w _{1} \neq \lambda$, where for every $i \geq 0$ the word $u _{1} w _ 1^{i} u _{2}$ belongs to $L$. Derivations according to a context-free grammar (i.e., finite sequences of words where every two consecutive words are in the relation $\Rightarrow$) can in a natural way be visualized as labelled trees, the so-called derivation trees (cf. also Derivation tree). A context-free grammar $G$ is termed ambiguous if and only if some word in $L (G)$ has two derivation trees. Otherwise, $G$ is termed unambiguous. A context-free language $L$ is unambiguous if and only if $L = L (G)$ for some unambiguous grammar $G$. Otherwise, $L$ is termed (inherently) ambiguous. One also speaks of degrees of ambiguity. A context-free grammar $G$ is ambiguous of degree $k$( a natural number or $\infty$) if and only if every word in $L (G)$ possesses at most $k$ derivation trees and some word in $L (G)$ possesses exactly $k$ derivation trees. A language $L$ is ambiguous of degree $k$ if and only if $L = L (G)$ for some $G$ ambiguous of degree $k$, and there is no $G _{1}$ ambiguous of degree less than $k$ such that $L = L (G _{1} )$. The families of type- $i$ languages, $i = 0,\ 1,\ 2,\ 3$, defined above using generative devices, can be obtained also by recognition devices. 
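The pumping properties above can be illustrated numerically (my own sketch; an illustration, not a proof): for the regular language $a^{*}$ any non-empty block of $a$'s pumps, while for $\{ a^{n} b^{n} \}$ pumping a block inside the $a$-segment immediately leaves the language — the standard first step in showing that this language is not regular.

```python
def in_L1(w):
    """Membership in the regular language a*."""
    return set(w) <= {"a"}

def in_L2(w):
    """Membership in {a^n b^n : n >= 0}."""
    n = w.count("a")
    return w == "a" * n + "b" * n

def pump(u1, w1, u2, i):
    """The pumped word u1 w1^i u2."""
    return u1 + w1 * i + u2

# In a*, the decomposition "aaaa" = "a" + "a" + "aa" pumps for every i.
assert all(in_L1(pump("a", "a", "aa", i)) for i in range(6))

# In {a^n b^n}, pumping "aaabbb" = "" + "a" + "aabbb" fails already at i = 2.
assert in_L2("aaabbb")
assert not in_L2(pump("", "a", "aabbb", 2))
```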
A recognition device defining a language $L$ receives arbitrary words as inputs and "accepts" exactly the words belonging to $L$. The recognition devices finite and pushdown automata, corresponding to type-3 and type-2 languages, will be defined below. The definitions of the recognition devices linear bounded automata and Turing machines, corresponding to type-1 and type-0 languages, are analogous. (Cf. Computable function; Complexity theory.) A rewriting system $(V,\ P)$ is called a finite deterministic automaton if and only if: a) $V$ is divided into two disjoint alphabets $V _{s}$ and $V _{I}$( the state and the input alphabet); b) an element $s _{0} \in V _{s}$ and a subset $S _{1} \subseteq V _{s}$ are specified (initial state and final state set); and c) the productions in $P$ are of the form $$s _{i} a _{k} \ \rightarrow \ s _{j} ,\ \ s _{i} ,\ s _{j} \in V _{s} ; \ \ a _{k} \in V _{I} ,$$ and for each pair $(s _{i} ,\ a _{k} )$ there is exactly one such production in $P$. A finite deterministic automaton over an input alphabet $V _{I}$ is usually defined by specifying an ordered quadruple $(s _{0} ,\ V _{s} ,\ f,\ S _{1} )$, where $f$ is a mapping of $V _{s} \times V _{I}$ into $V _{s}$, the other items being as above. (Clearly, the values of $f$ are obtained from the right-hand sides of the productions listed above.) The language accepted or recognized by a finite deterministic automaton $\mathop{\rm FDA}\nolimits$ is defined by $$L ( \mathop{\rm FDA}\nolimits ) \ = \ \{ {w \in V _{I} ^ *} : { s _{0} w \Rightarrow^{*} s _{1} \ \textrm{ for } \ \textrm{ some } \ s _{1} \in S _ 1} \} .$$ A finite non-deterministic automaton $\mathop{\rm FNA}\nolimits$ is defined as a finite deterministic automaton with the following two exceptions. In b) $s _{0}$ is replaced by a subset $S _{0} \subseteq V _{s}$. In c) the second half of the sentence (beginning with "and" ) is omitted. 
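The finite deterministic automaton $(s _{0} ,\ V _{s} ,\ f,\ S _{1} )$ just defined is straightforward to simulate. The sketch below (my own example, not from the article) accepts the words over $\{ a,\ b \}$ containing an even number of $a$'s:

```python
def accepts(word, s0, f, finals):
    """Run the automaton: state := f(state, letter) for each input letter;
    accept iff the state reached lies in `finals`."""
    state = s0
    for letter in word:
        state = f[(state, letter)]
    return state in finals

# Transition function f : V_s x V_I -> V_s, tabulated as a dictionary.
f = {
    ("even", "a"): "odd",  ("even", "b"): "even",
    ("odd",  "a"): "even", ("odd",  "b"): "odd",
}

assert accepts("abab", "even", f, {"even"})      # two a's: accepted
assert not accepts("ab", "even", f, {"even"})    # one a: rejected
assert accepts("", "even", f, {"even"})          # lambda has zero a's
```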
The language accepted by an $\mathop{\rm FNA}\nolimits$ is defined by $$L ( \mathop{\rm FNA}\nolimits ) \ = \ \{ {w \in V _{I} ^ *} : { s _{0} w \Rightarrow^{*} s _{1} \ \textrm{ for } \ \textrm{ some } \ s _{0} \in S _{0} \ \textrm{ and } \ s _{1} \in S _ 1} \} .$$ A language is of type 3 if and only if it is accepted by some finite deterministic automaton and if and only if it is accepted by some finite non-deterministic automaton. A rewriting system $(V,\ P)$ is called a pushdown automaton if and only if each of the following conditions 1)–3) is satisfied. 1) $V$ is divided into two disjoint alphabets $V _{s}$ and $V _{I} \cup V _{z}$. The sets $V _{s}$, $V _{I}$ and $V _{z}$ are called the state, input and pushdown alphabet, respectively. The sets $V _{I}$ and $V _{z}$ are non-empty but not necessarily disjoint. 2) Elements $s _{0} \in V _{s}$, $z _{0} \in V _{z}$ and a subset $S _{1} \subseteq V _{s}$ are specified, namely, the so-called initial state, start letter and final state set. 3) The productions in $P$ are of the two forms $$\tag{a1} zs _{i} \ \rightarrow \ ws _{j} ,\ \ z \in V _{z} ,\ \ w \in V _ z^{*} ,\ \ s _{i} ,\ s _{j} \in V _{s} ;$$ $$\tag{a2} zs _{i} a \ \rightarrow \ ws _{j} ,\ \ z \in V _{z} ,\ \ w \in V _ z^{*} ,\ \ a \in V _{I} ,\ \ s _{i} ,\ s _{j} \in V _{s} .$$ The language accepted by a pushdown automaton $\mathop{\rm PDA}\nolimits$ is defined by $$L ( \mathop{\rm PDA}\nolimits ) \ = \ \{ {w \in V _{I} ^ *} : { z _{0} s _{0} w \Rightarrow^{*} us _{1} \ \textrm{ for } \ \textrm{ some } \ u \in V _ z^{*} ,\ s _{1} \in S _ 1} \} .$$ A pushdown automaton is deterministic if and only if, for every pair $(s _{i} ,\ z)$, $P$ contains either exactly one production (a1) and no productions (a2), or no productions (a1) and exactly one production (a2) for every $a \in V _{I}$. The family of context-free languages equals the family of languages accepted by pushdown automata.
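A nondeterministic pushdown automaton in the above notation can be simulated by searching over configurations; a configuration $(u,\ s,\ v)$ stands for the word $usv$ (stack $u$ with its top next to the state $s$, remaining input $v$). The example below is my own construction, accepting $\{ a^{n} b^{n} : n \geq 1 \}$; a rule with empty input letter encodes an (a1)-type $\lambda$-move.

```python
from collections import deque

def pda_accepts(word, z0, s0, finals, rules):
    """rules: (z, s, a, w, t) encodes the production z s a -> w t, with a == ""
    for an (a1)-type rule z s -> w t.  Accept iff z0 s0 word =>* u s1 with
    s1 in finals (the remaining stack u is arbitrary)."""
    start = (z0, s0, word)
    seen, queue = {start}, deque([start])
    while queue:
        stack, state, rest = queue.popleft()
        if rest == "" and state in finals:
            return True
        for z, s, a, w, t in rules:
            if s != state or not stack.endswith(z):
                continue                           # rule does not apply
            if a == "":
                nxt = (stack[: len(stack) - len(z)] + w, t, rest)
            elif rest.startswith(a):
                nxt = (stack[: len(stack) - len(z)] + w, t, rest[len(a):])
            else:
                continue
            if nxt not in seen and len(nxt[0]) <= len(word) + 2:
                seen.add(nxt)
                queue.append(nxt)
    return False

rules = [
    ("Z", "p", "a", "ZA", "p"),   # first a: push one A above the start letter
    ("A", "p", "a", "AA", "p"),   # further a's: push one A each
    ("A", "p", "b", "",   "q"),   # first b: pop an A, switch to state q
    ("A", "q", "b", "",   "q"),   # further b's: pop one A each
    ("Z", "q", "",  "Z",  "f"),   # lambda-move to the final state, only when Z is on top
]

assert pda_accepts("ab", "Z", "p", {"f"}, rules)
assert pda_accepts("aaabbb", "Z", "p", {"f"}, rules)
assert not pda_accepts("aab", "Z", "p", {"f"}, rules)
assert not pda_accepts("abb", "Z", "p", {"f"}, rules)
```

The stack-length bound only prunes the search; for this automaton the stack never exceeds the input length plus one, so no accepting computation is lost.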
Languages accepted by deterministic pushdown automata are referred to as deterministic (context-free) languages. The role of determinism is different in connection with pushdown and finite automata: the family of deterministic languages is a proper subfamily of the family of context-free languages. The automata considered above have no other output facilities than being or not being in a final state, i.e., they are only capable of accepting or rejecting inputs. Occasionally devices (transducers) capable of having words as outputs, i.e., capable of translating words into words, are considered. A formal definition is given below only for the transducer corresponding to a finite automaton. A pushdown transducer is defined analogously. A rewriting system $(V,\ P)$ is called a sequential transducer if and only if each of the following conditions 1)–3) is satisfied. 1) $V$ is divided into two disjoint alphabets $V _{s}$ and $V _{1} \cup V _{0}$. The sets $V _{s}$, $V _{1}$, $V _{0}$ are called the state, input and output alphabet, respectively. (The two latter alphabets are non-empty but not necessarily disjoint.) 2) An element $s _{0} \in V _{s}$ and a subset $S _{1} \subseteq V _{s}$ are specified (initial state and final state set). 3) The productions in $P$ are of the form $$s _{i} w \ \rightarrow \ us _{j} ,\ \ s _{i} ,\ s _{j} \in V _{s} ,\ \ w \in V _ 1^{*} ,\ \ u \in V _ 0^{*} .$$ If, in addition, $w \neq \lambda$ in all productions, then the rewriting system is called a generalized sequential machine (gsm).
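A deterministic special case of a gsm can be sketched as follows (my own example): each production consumes exactly one input letter and emits an output word, so the machine translates its input letter by letter.

```python
def gsm_map(word, table, s0, finals):
    """table[(state, letter)] = (output_word, next_state); returns the output
    word, or None when no production applies or a non-final state is reached."""
    state, out = s0, []
    for letter in word:
        if (state, letter) not in table:
            return None
        piece, state = table[(state, letter)]
        out.append(piece)
    return "".join(out) if state in finals else None

# One state s; doubles every a, copies b, erases c (output may be lambda,
# although each production must consume a non-empty input word).
table = {("s", "a"): ("aa", "s"), ("s", "b"): ("b", "s"), ("s", "c"): ("", "s")}

assert gsm_map("abca", table, "s", {"s"}) == "aabaa"
assert gsm_map("", table, "s", {"s"}) == ""
```

In the general (nondeterministic) case a gsm maps a word to a set of words; the deterministic table above always yields at most one.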
For a sequential transducer $\mathop{\rm ST}\nolimits$, words $w _{1} \in V _ 1^{*}$ and $w _{2} \in V _ 0^{*}$, and languages $L _{1} \subseteq V _ 1^{*}$ and $L _{2} \subseteq V _ 0^{*}$, one defines $$\mathop{\rm ST}\nolimits (w _{1} ) \ = \ \{ {w} : { s _{0} w _{1} \Rightarrow^{*} ws _{1} \ \textrm{ for } \ \textrm{ some } \ s _{1} \in S _ 1} \} ,$$ $$\mathop{\rm ST}\nolimits (L _{1} ) \ = \ \{ u: \ u \in \mathop{\rm ST}\nolimits (w) \ \textrm{ for } \ \textrm{ some } \ w \in L _{1} \} ,$$ $$\mathop{\rm ST}\nolimits^{-1} (w _{2} ) \ = \ \{ u: \ w _{2} \in \mathop{\rm ST}\nolimits (u) \} ,$$ $$\mathop{\rm ST}\nolimits^{-1} (L _{2} ) \ = \ \{ u: \ u \in \mathop{\rm ST}\nolimits^{-1} (w) \ \textrm{ for } \ \textrm{ some } \ w \in L _{2} \} .$$ Mappings of languages thus defined are referred to as (rational) transductions and inverse (rational) transductions. If $\mathop{\rm ST}\nolimits$ is also a gsm, one speaks of gsm mappings and inverse gsm mappings. Homomorphisms, inverse homomorphisms and the mappings $\tau (L) = L \cap R$, where $R$ is a fixed regular language, are all rational transductions, the first and the last being also gsm mappings. The composite of two rational transductions (respectively, gsm mappings) is again a rational transduction (respectively, a gsm mapping). Every rational transduction $\tau$ can be expressed in the form $$\tau (L) \ = \ h _{1} (h _ 2^{-1} (L) \cap R),$$ where $h _{1}$ and $h _{2}$ are homomorphisms and $R$ is a regular language. These results show that a language family is closed under rational transductions if and only if it is closed under homomorphisms, inverse homomorphisms and intersections with regular languages. A finite probabilistic automaton (or stochastic automaton, cf.
Automaton, probabilistic) is an ordered quintuple $$\mathop{\rm PA}\nolimits \ = \ (V _{1} ,\ V _{s} ,\ s _{0} ,\ S _{1} ,\ H),$$ where $V _{1}$ and $V _{s} = \{ s _{0} \dots s _{ {n - 1}} \}$ are disjoint alphabets (inputs and states), $s _{0} \in V _{s}$ and $S _{1} \subseteq V _{s}$( initial state and final state set), and $H$ is a mapping of $V _{1}$ into the set of $n$- dimensional stochastic matrices. (A stochastic matrix is a square matrix with non-negative real entries and with row sums equal to 1.) The mapping $H$ is extended to a homomorphism of $V _ 1^{*}$ into the monoid of $n$- dimensional stochastic matrices. Consider $V _{s}$ to be an ordered set as indicated, let $\pi$ be the $n$- dimensional stochastic row vector whose first component equals 1, and let $\eta$ be the $n$- dimensional column vector consisting of 0's and 1's such that the $i$- th component of $\eta$ equals 1 if and only if the $i$- th element of $V _{s}$ belongs to $S _{1}$. The language accepted by a $\textrm{ PA }$ with cut-point $\alpha$, where $\alpha$ is a real number satisfying $0 \leq \alpha < 1$, is defined by $$L ( \mathop{\rm PA}\nolimits ,\ \alpha ) \ = \ \{ {w \in V _{1} ^ *} : { \pi H (w) \eta > \alpha} \} .$$ Languages obtained in this fashion are referred to as stochastic languages. Decision problems (cf. Decision problem) play an important role in language theory. The usual method of proving that a problem is undecidable is to reduce it to some problem whose undecidability is known. The most useful tool for problems in language theory is in this respect the Post correspondence problem. By definition, a Post correspondence problem is an ordered quadruple $\mathop{\rm PCP}\nolimits = \{ \Sigma ,\ n,\ \alpha ,\ \beta \}$, where $\Sigma$ is an alphabet, $n \geq 1$, and $\alpha = ( \alpha _{1} \dots \alpha _{n} )$, $\beta = ( \beta _{1} \dots \beta _{n} )$ are ordered $n$- tuples of elements of $\Sigma^{+}$. 
A solution to the $\mathop{\rm PCP}\nolimits$ is a non-empty finite sequence of indices $i _{1} \dots i _{k}$ such that $$\alpha _{ {i _ 1}} \dots \alpha _{ {i _ k}} \ = \ \beta _{ {i _ 1}} \dots \beta _{ {i _ k}} .$$ It is undecidable whether an arbitrary given $\mathop{\rm PCP}\nolimits$ (or an arbitrary given $\mathop{\rm PCP}\nolimits$ over the alphabet $\Sigma = \{ a _{1} ,\ a _{2} \}$) has a solution. Also Hilbert's tenth problem is undecidable: Given a polynomial $P (x _{1} \dots x _{k} )$ with integer coefficients, one has to decide whether or not there are non-negative integers $x _{i}$, $i = 1 \dots k$, satisfying the equation $$P (x _{1} \dots x _{k} ) \ = \ 0.$$ The membership problem is decidable for context-sensitive languages but undecidable for type-0 languages. (More specifically, given a context-sensitive grammar $G$ and a word $w$, it is decidable whether or not $w \in L (G)$.) It is decidable whether two given regular languages are equal and also whether one of them is contained in the other. Both of these problems (the equivalence problem and the inclusion problem) are undecidable for context-free languages. It is also decidable whether a given regular or a given context-free language is empty or infinite, whereas both of these problems are undecidable for context-sensitive languages. It is undecidable whether the intersection of two context-free languages is empty. The intersection of a context-free and a regular language is always context-free; and, hence, its emptiness is decidable. It is undecidable whether a given context-free language is regular. The following example is due to M. Soittola.
Consider the grammar $G$ determined by the productions $$S \ \rightarrow \ abc,\ \ S \ \rightarrow \ aAbc,$$ $$Ab \ \rightarrow \ bA,\ \ Ac \ \rightarrow \ Bbcc,$$ $$bB \ \rightarrow \ Bb,\ \ aB \ \rightarrow \ aaA,\ \ aB \ \rightarrow \ aa.$$ Then $$L (G) \ = \ \{ {a^{n} b^{n} c ^ n} : { n \geq 1} \} .$$ In fact, any derivation according to $G$ begins with an application of the first or second production, the first production directly yielding the terminal word $abc$. Consider any derivation $D$ from a word $a^{i} A b^{i} c^{i}$, where $i \geq 1$, leading to a word over the terminal alphabet. $D$ must begin with $i$ applications of the third production ($A$ travels to the right) and then continue with an application of the fourth production (one further occurrence of $b$ and $c$ is deposited). Now the word $a^{i} b^{i} Bbc ^ {i + 1}$ has been derived. For this the only possibility is to apply the fifth production $i$ times ($B$ travels to the left), after which one obtains the word $a^{i} Bb ^ {i + 1} c ^ {i + 1}$. This word directly yields one of the two words $$a ^ {i + 1} Ab ^ {i + 1} c ^ {i + 1} \ \ \textrm{ or } \ \ a ^ {i + 1} b ^ {i + 1} c ^ {i + 1}$$ by the last two productions (one further $a$ is deposited, and either a new cycle is entered or the derivation is terminated). This argument shows that $G$ generates all words belonging to the language and nothing else; every step in a derivation is uniquely determined, the only exception being that there is a choice between termination and entrance into a new cycle.

#### References

[a1] N. Chomsky, "Three models for the description of language", IRE Trans. Information Theory, IT-2 (1956) pp. 113–124
[a2] N. Chomsky, "Syntactic structures", Mouton (1957)
[a3] M. Davis, "Computability and unsolvability", McGraw-Hill (1958)
[a4] S. Eilenberg, "Automata, languages and machines", A, Acad. Press (1974)
[a5] D. Hilbert, "Mathematische Probleme. Vortrag, gehalten auf dem internationalen Mathematiker-Kongress zu Paris, 1900", Gesammelte Abhandlungen, III, Springer (1935)
[a6] S.C. Kleene, "Representation of events in nerve nets and finite automata", Automata studies, 34, Princeton Univ. Press (1956) pp. 3–42
[a7] A. Paz, "Introduction to probabilistic automata", Acad. Press (1971)
[a8] E.L. Post, "A variant of a recursively unsolvable problem", Bull. Amer. Math. Soc., 52 (1946) pp. 264–268
[a9] H. Rogers jr., "Theory of recursive functions and effective computability", McGraw-Hill (1967) pp. 164–165
[a10] G. Rozenberg, "Selective substitution grammars", Elektronische Informationsverarbeitung und Kybernetik (EIK), 13 (1977) pp. 455–463
[a11] A. Salomaa, "Theory of automata", Pergamon (1969)
[a12] A. Salomaa, "Formal languages", Acad. Press (1973)
[a13] A. Salomaa, "Jewels of formal language theory", Computer Science Press (1981)
[a14] A. Salomaa, M. Soittola, "Automata-theoretic aspects of formal power series", Springer (1978)
[a15] A. Thue, "Ueber unendliche Zeichenreihen", Skrifter utgit av Videnskapsselskapet i Kristiania, I (1906) pp. 1–22
[a16] A. Thue, "Probleme über Veränderungen von Zeichenreihen nach gegebenen Regeln", Skrifter utgit av Videnskapsselskapet i Kristiania, I.10 (1914)
[a17] A.M. Turing, "On computable numbers, with an application to the Entscheidungsproblem", Proc. London Math. Soc., 42 (1936) pp. 230–265

How to Cite This Entry: Formal languages and automata. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Formal_languages_and_automata&oldid=44374 This article was adapted from an original article by G. Rozenberg and A. Salomaa (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
1. ## sketch Sketch Re(z^4). ie:sketch real part of z^4 I tried to do the arithmetic and I think Re(z^4)=x^4 - 6x^2*y^2 + y^4 2. Originally Posted by scubasteve123 Re(z^4)=x^4 - 6x^2*y^2 + y^4 I agree with this. 3. Originally Posted by scubasteve123 Sketch Re(z^4). ie:sketch real part of z^4 I tried to do the arithmetic and I think Re(z^4)=x^4 - 6x^2*y^2 + y^4 $z^4 = (x + iy)^4$ $= \sum_{k = 0}^4 {4\choose{k}}x^{4 - k}(iy)^k$ $= x^4 + 4ix^3y + 6i^2x^2y^2 + 4i^3xy^3 + i^4y^4$ $= x^4 + 4ix^3y - 6x^2y^2 - 4ixy^3 + y^4$ $= x^4 - 6x^2y^2 + y^4 + i(4x^3y - 4xy^3)$. So $\mathbf{Re}(z^4) = x^4 - 6x^2y^2 + y^4$ You are correct with your calculation. Are you given any more information about $z$? Because you will need to evaluate $\mathbf{Re}(z^4) = \textrm{constant}$ in order to make the sketch... 4. Originally Posted by scubasteve123 Sketch Re(z^4). ie:sketch real part of z^4 I tried to do the arithmetic and I think Re(z^4)=x^4 - 6x^2*y^2 + y^4 Your expression for Re(z^4) is correct. But there is no equation and therefore nothing to sketch unless Re(z^4) is equal to something .... 5. Originally Posted by mr fantastic Your expression for Re(z^4) is correct. But there is no equation and therefore nothing to sketch unless Re(z^4) is equal to something .... Hi everyone thanks for checking I was so concerned with my arithmetic i forgot the rest my apologies. I need to sketch {z element complex numbers such that Re(z^4)>0 and |z-1|<2} I know how to sketch the |z-1|<2 part and i assume the part I want is where the 2 overlap i just not sure how to sketch x^4-6x^2*y^2 +y^4 6. Im new to sketching in complex. should i be trying to factor the expression I found for Re(z^4) ? or should i be able to sketch just using my expression? How do i know what this shape consists of? also since its only the real part is it just a line? 7. Solve $x^4 - 6x^2y^2 + y^4 > 0$ for $y$. I would advise completing the square...
$y^4 - 6x^2y^2 + x^4 > 0$ $y^4 - 6x^2y^2 + (-3x^2)^2 - (-3x^2)^2 + x^4 > 0$ $(y^2 - 3x^2)^2 - 8x^4 > 0$ $(y^2 - 3x^2)^2 > 8x^4$ $|y^2 - 3x^2| > 2\sqrt{2}x^2$ $y^2 - 3x^2 < -2\sqrt{2}x^2$ or $y^2 - 3x^2 > 2\sqrt{2}x^2$ Can you go from here? 8. would i now combine the x^2 terms then i get y^2>(2*2^1/2 +3)x^2 and y^2<(-2*2^1/2 +3)x^2 so now i see that the graph is greater than 2*2^1/2 +3 and less than -2*2^1/2 +3 ... im confused. 9. Case 1: $y^2 < (3 - 2\sqrt{2})x^2$ $|y| < |x|\sqrt{3 - 2\sqrt{2}}$ $-|x|\sqrt{3 - 2\sqrt{2}} < y < |x|\sqrt{3 - 2\sqrt{2}}$ Case 2: $y^2 - 3x^2 > 2\sqrt{2}x^2$ $y^2 > (3 + 2\sqrt{2})x^2$ $|y| > |x|\sqrt{3 + 2\sqrt{2}}$ $y < -|x|\sqrt{3 + 2\sqrt{2}}$ or $y > |x|\sqrt{3 + 2\sqrt{2}}$. Now graph these inequalities. 10. Originally Posted by Prove It Case 1: $y^2 < (3 - 2\sqrt{2})x^2$ $|y| < |x|\sqrt{3 - 2\sqrt{2}}$ $-|x|\sqrt{3 - 2\sqrt{2}} < y < |x|\sqrt{3 - 2\sqrt{2}}$ Case 2: $y^2 - 3x^2 > 2\sqrt{2}x^2$ $y^2 > (3 + 2\sqrt{2})x^2$ $|y| > |x|\sqrt{3 + 2\sqrt{2}}$ $y < -|x|\sqrt{3 + 2\sqrt{2}}$ or $y > |x|\sqrt{3 + 2\sqrt{2}}$. Now graph these inequalities. Im hoping that im finally having an "eureka" moment.. okay thank you for the help finding the inequalities... now to graph the inequalities do i just graph them like i would normally on the x,y plane since its only the Real part of the complex number?? 11. Originally Posted by scubasteve123 Im hoping that im finally having an "eureka" moment.. okay thank you for the help finding the inequalities... now to graph the inequalities do i just graph them like i would normally on the x,y plane since its only the Real part of the complex number?? Yes graph them as you normally would on the $x, y$ plane. Just name your axes as $\mathbf{Re}(z)$ and $\mathbf{Im}(z)$, since $x = \mathbf{Re}(z)$ and $y = \mathbf{Im}(z)$. 12. u've been so helpful. its really appreciated
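As a quick numeric sanity check of the expansion discussed in this thread (my own addition, not part of the original posts), one can compare Re(z^4) computed with complex arithmetic against x^4 - 6x^2y^2 + y^4 at a few sample points:

```python
def re_z4(x, y):
    """Real part of (x + iy)^4, via complex arithmetic."""
    return (complex(x, y) ** 4).real

def formula(x, y):
    """The expansion derived in the thread."""
    return x**4 - 6 * x**2 * y**2 + y**4

for x, y in [(1.0, 2.0), (-0.5, 3.0), (2.0, 2.0), (0.0, 1.5)]:
    assert abs(re_z4(x, y) - formula(x, y)) < 1e-9
```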
# Convergence of a sequence in terms of sub-basis of a topological space Let $$X$$ be a topological space and $$S ⊆ 2^X$$ a sub-basis for the topology of $$X$$. Show that a sequence $$x_1, x_2, . . .$$ in $$X$$ converges to a point $$x ∈ X$$ if and only if for every $$A ∈ S$$ containing $$x$$, there exists an $$n_0 ∈ \mathbb N$$ such that for all $$n ≥ n_0$$, we have $$x_n ∈ A$$. My attempt: Definition: Let $$X$$ be a topological space. A sequence $$x_1,x_2,...$$ in $$X$$ converges to $$x$$ in $$X$$ if and only if for every open neighborhood $$U$$ of $$x$$, there exists $$n_0 \in \mathbb N$$, such that for all $$n \geq n_0$$, we have $$x_n \in U$$. 1. $$=>$$: since $$S$$ is a sub-basis for the topology of $$X$$, then $$X = \bigcup_{A_i \in S} A_i$$. So if the sequence $$x_1,x_2,...$$ in $$X$$ converges to $$x$$ in $$X$$, then $$x_1,x_2,...$$ is in $$\bigcup_{A_i \in S} A_i$$, hence each $$A_i$$ is an open neighborhood of $$x$$, so $$x_n \in A$$. 2. $$<=$$: if $$x_n \in A$$ then $$x_n \in \bigcup_{A_i \in S} A_i$$, then $$x_1, x_2,...$$ is in $$\bigcup_{A_i \in S} A_i$$, so for each open $$A_i$$ of $$x_n$$, there exists $$n_0 \in \mathbb N$$ such that for all $$n \geq n_0$$, we have $$x_1,x_2,...$$ in $$X$$ converges to $$x$$ in $$X$$. Combining $$1$$ and $$2$$, I'd get the required result. Is my attempt correct? $$\implies$$ Let $$(x_n)_n$$ converge to $$x$$ and let $$x\in A\in\mathcal S$$. The topology is generated by $$\mathcal S$$ hence $$A$$ is an open neighborhood of $$x$$. So some $$n_0$$ exists with $$n\geq n_0\implies x_n\in A$$. $$\impliedby$$ Let it be that for every $$A\in\mathcal S$$ with $$x\in A$$ there is some integer $$m$$ with $$n\geq m\implies x_n\in A$$, and let $$U$$ be an open neighborhood of $$x$$. Then a finite sequence $$A_1,\dots,A_k$$ exists with $$A_i\in\mathcal S$$ and $$x\in\bigcap_{i=1}^kA_i\subseteq U$$ because $$\mathcal S$$ is a subbase of the topology. For each $$i\in\{1,\dots,k\}$$ there is some integer $$n_i$$ with $$n\geq n_i\implies x_n\in A_i$$.
Now let $$n_0=\max(\{n_1,\dots,n_k\})$$. Then $$n\geq n_0\implies x_n\in\bigcap_{i=1}^kA_i\subseteq U$$.
## legendre polynomials please please can someone help me on this as i dont have a clue where to start. thanks loads Edgar Use Rodrigues' formula to show that $P'_{n+1}(x) - P'_{n-1}(x) = (2n + 1)P_n(x)$. Hence show that $\int_1^x P_n(x)\,dx = \frac{1}{2n + 1}\left[P_{n-1}(x) - P_{n+1}(x)\right]$.
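Not part of the original post, but a numeric spot-check of the first identity (an illustration, not the requested Rodrigues-formula proof): P_n is computed by the three-term recurrence and P'_n from the standard relation (x^2 - 1) P'_n(x) = n (x P_n(x) - P_{n-1}(x)).

```python
def legendre(n, x):
    """P_n(x) via (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def legendre_deriv(n, x):
    """P'_n(x) from (x^2 - 1) P'_n = n (x P_n - P_{n-1}), valid for x != +-1."""
    if n == 0:
        return 0.0
    return n * (x * legendre(n, x) - legendre(n - 1, x)) / (x * x - 1)

# Check P'_{n+1}(x) - P'_{n-1}(x) = (2n+1) P_n(x) at a few sample points.
for n in range(1, 6):
    for x in (-0.7, 0.3, 0.9):
        lhs = legendre_deriv(n + 1, x) - legendre_deriv(n - 1, x)
        rhs = (2 * n + 1) * legendre(n, x)
        assert abs(lhs - rhs) < 1e-9
```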
# Tag Info The symbol $\bot$ usually represents the least element of a lattice. The least element often goes at the bottom of a Hasse diagram; that is probably the reason you're looking for. For example, in a Hasse diagram containing propositions with the logical consequence relation, one usually puts falsehood at the bottom of the diagram, mostly because from a ... It just means that the condition $0 \le c_1 g(n) \le f(n) \le c_2 g(n)$ only needs to hold for large enough integers. This allows you to ignore all integers that are smaller than $n_0$, so you can just pick $n_0$ as large as needed for the inequalities to hold. Notice that you do not need to pick $n_0$ as the smallest integer with that property, so you might ...
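A tiny numeric illustration of the point (my own, with made-up f and g): for f(n) = 3n + 10 and g(n) = n, the constants c1 = 1, c2 = 4 fail for small n but work for every n >= n0 = 10.

```python
def theta_holds(n, c1, c2):
    """Check 0 <= c1*g(n) <= f(n) <= c2*g(n) for f(n) = 3n + 10, g(n) = n."""
    f, g = 3 * n + 10, n
    return 0 <= c1 * g <= f <= c2 * g

assert not theta_holds(5, 1, 4)                           # 3*5 + 10 = 25 > 4*5
assert all(theta_holds(n, 1, 4) for n in range(10, 200))  # holds from n0 = 10 on
```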
http://www.talkstats.com/threads/feed-varlist-into-calculation.74362/
# Feed Varlist into Calculation

#### hlsmith
##### Not a robit

I have a dataset to which I want to add a new variable (column) based on a calculation of existing variables in the set. But I want to repeat the process for eight variables, and I am hoping to do it in one step. So I would feed a varlist into the line of code and generate the 8 new terms. See the following, where I run it twice; I would collapse this into one statement for the eight vars.

Code:
#FOR BEEF#
Food_3$Beef = (((((((Food_3$IM_Weekly_serving_2 * Food_3$IM_Attendance_2) + Food_3$Peds_Monthly_serving_2 * Food_3$Peds_Attendance_2)) * Food_3$beef) / Food_3$kg_oz_conversion_2) * Food_3$Serving_conversion_2) / Food_3$Passenger_Vehicle_2)

#FOR LAMB#
Food_3$Lamb = (((((((Food_3$IM_Weekly_serving_2 * Food_3$IM_Attendance_2) + Food_3$Peds_Monthly_serving_2 * Food_3$Peds_Attendance_2)) * Food_3$lamb) / Food_3$kg_oz_conversion_2) * Food_3$Serving_conversion_2) / Food_3$Passenger_Vehicle_2)

End result would be the addition of these two variables (Beef and Lamb) to the dataset Food_3.

Last edited:

#### Dason

Code:
myfun <- function(x) {
  (((((((Food_3$IM_Weekly_serving_2 * Food_3$IM_Attendance_2) + Food_3$Peds_Monthly_serving_2 * Food_3$Peds_Attendance_2)) * x) / Food_3$kg_oz_conversion_2) * Food_3$Serving_conversion_2) / Food_3$Passenger_Vehicle_2)
}
Food_3$Beef <- myfun(Food_3$beef)
Food_3$Lamb <- myfun(Food_3$lamb)

Improvements could be made, but that seems to do what you want? I did that on my phone, so apologies for the lack of decent formatting.

#### hlsmith
##### Not a robit

Thanks. I'll check it out in the morning. I was thinking there was an *apply approach or a way to feed in a list. In SAS I would feed a varlist. I usually get intimidated writing functions since I am inexperienced and never have enough time.

#tested and the code snippet works.

Last edited:
## Algebra 1: Common Core (15th Edition)

We follow the instructions and plug in 2 for $a$: $6(2)+3 = 12+3 = 15$.
Infoscience Conference paper

# Optimum error nonlinearities for long adaptive filters

In this paper, we consider the class of adaptive filters with error nonlinearities. In particular, we derive an expression for the optimum nonlinearity that minimizes the steady-state error and attains the limit mandated by the Cramér-Rao bound of the underlying estimation process.
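The abstract does not give the derivation, but the class of algorithms it refers to is easy to sketch: an LMS-style adaptive FIR filter whose weight update passes the error through a nonlinearity f(e). The sketch below uses the sign function as the nonlinearity (the well-known sign-error LMS, a common member of this class, not the optimum nonlinearity the paper derives), and the filter length, step size, and test system are made-up values for illustration.

```python
import numpy as np

def nonlinear_lms(x, d, taps, mu, f):
    """Adaptive FIR filter with update w <- w + mu * f(e) * u."""
    w = np.zeros(taps)
    sq_err = []
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # regressor [x[n], x[n-1], ...]
        e = d[n] - w @ u                  # a-priori output error
        w = w + mu * f(e) * u             # sign-error LMS when f = np.sign
        sq_err.append(e * e)
    return w, np.array(sq_err)

rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.2])       # "unknown" system, illustrative
x = rng.standard_normal(2000)
d = np.convolve(x, w_true)[:len(x)]       # noiseless desired signal

w_est, sq_err = nonlinear_lms(x, d, taps=3, mu=0.01, f=np.sign)
assert sq_err[-100:].mean() < sq_err[:100].mean()   # the filter adapts
```

Swapping `f` for the identity recovers plain LMS; the paper's contribution is characterizing which choice of `f` minimizes the steady-state error for a given noise distribution.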
# News Picks

Physics Today’s online staff summarize the most important and interesting news about science from the world's top media outlets.

August 21, 2014 3:14 PM

### Discrepancy found between historical temperature data and models

Ars Technica: A history of Earth's average temperature over the last 12 000 years was recently created by using a wide range of local direct records and proxies for temperature. Zhengyu Liu of the University of Wisconsin and his colleagues compared that historical data with three climate-model simulations spanning the last 21 000 years. They found that, where the historical data showed a peak and then a brief drop in temperatures before continued increases, the models showed a continual temperature increase, albeit at a slower rate. The models based their simulations on variations in sunlight due to Earth's orbital changes and the known growth in greenhouse gases. A closer examination of the models and data revealed that some of the proxies likely indicated summer temperatures, not yearly averages. Adjusting the models to favor summer temperatures produced a closer fit to the historical data, but significant differences remained. It is likely that adjustments need to be made to both the model and the interpretation of the historical data.
August 21, 2014 2:00 PM

### Microbial life found in subglacial Antarctic lake

BBC: Microorganisms have been discovered in samples collected from Lake Whillans, a body of liquid water buried some 800 m below the ice in West Antarctica. At those depths, there is little sunlight or organic material. Instead the tiny organisms appear to be fueled by inorganic compounds—such as ammonium, nitrate, and sulphides—found in the rock that makes up the lake bed. To ensure the samples were free of surface contamination, the Whillans Ice Stream Subglacial Access Research Drilling project took extensive precautions, including the use of ultraclean drilling methods. The discovery of life in such a forbidding Earth environment leads scientists to propose that similar life might also be found in the large volumes of liquid water lying beneath the icy crusts of some of the moons of Jupiter and Saturn and possibly other celestial bodies.

August 21, 2014 12:09 PM

### Methane-to-gasoline startup receives funding from Saudi Aramco

FuelFix: Siluria Technologies, based in San Francisco, California, is developing a process for converting methane into gasoline and ethylene. To date, it has received nearly $100 million in venture capital funding. Methane-to-gasoline conversion seemed promising in the 1980s but was abandoned in favor of the Fischer–Tropsch process, which converts a mixture of carbon monoxide and hydrogen into liquid hydrocarbons. Siluria claims to have identified a catalyst that makes its technique more efficient than Fischer–Tropsch. Of the funding the company has received, Siluria announced yesterday that $30 million came from Saudi Aramco, the Saudi Arabian state oil and gas company. Ed Dineen, Siluria's CEO, says that the company hopes to have the process commercialized and a plant running by 2017 or 2018. In the meantime, the company plans to install a demonstration unit in Houston, Texas, later this year.
August 21, 2014 11:00 AM

### New carbon dating technique revises Neanderthal history

Nature: Modern humans and Neanderthals probably coexisted for thousands of years, according to a new study published in Nature. Because that overlap occurred some 30 000–50 000 years ago, however, traditional radiocarbon dating techniques have proven unreliable for analyzing the organic remains. After 30 000 years, 98% of the carbon isotope has disappeared and younger carbon has started seeping in. Now Tom Higham of Oxford University and colleagues explain how they expelled the contaminating carbon and used accelerator mass spectrometry to measure the minuscule amounts of radiocarbon left in samples from 40 key archaeological sites across southern Europe. The technique has allowed them to verify that humans and Neanderthals lived contemporaneously for possibly as long as 5000 years. The long overlap would have provided plenty of time for cultural exchange and interbreeding, says Higham.

August 20, 2014 1:45 PM

### Researchers reveal vulnerability in traffic signal networks

MIT Technology Review: Wirelessly networked traffic signals include sensors for detecting approaching vehicles, controllers that manage the lights, radios to communicate between intersections, and malfunction management units to reset the system if an error occurs. With official permission, a group of researchers from the University of Michigan led by J. Alex Halderman hacked into a local traffic-light network and discovered three major vulnerabilities, which they report result from a “systemic lack of security consciousness” in the networks’ design. Anyone with a computer that can transmit on the same wireless frequency as the network’s radios could gain complete control of the network, for example. Similar vulnerabilities have been known about for years in other devices as well, such as voting machines.
Until hardware producers take security practices more seriously, the potential for damage from malicious or capricious access will continue to increase.

August 20, 2014 1:34 PM

### Lack of dust suggests Saturn's rings older than expected

Nature: NASA’s Cassini spacecraft has provided the first measurement of the rate at which dust is falling into Saturn’s rings. During seven years of observations, researchers detected only 140 particles whose trajectories indicate they came from elsewhere in the solar system. That rate is 40 times lower than expected. Because of the current level of dustiness, the rings may be much older than previously estimated. It is possible that the rings formed 4 billion years ago, soon after the planet itself formed. The previously estimated rate of dust collection was so high because it wasn’t known that the amount of dust outside the asteroid belt, where Saturn is located, is much lower than the amount inside, where Earth is located. Based on Cassini’s data, it appears that most of the detected dust particles came from the Kuiper belt, a collection of frozen bodies outside the orbit of Neptune that includes Pluto and at least two other dwarf planets.

August 20, 2014 1:30 PM

### Snakes cling to trees more tightly than necessary

BBC: Lacking claws or adhesive structures, snakes are forced to flex their muscles when climbing trees. Snakes move via what's called concertina locomotion, a strenuous process that requires some parts of the body to grip tightly to a surface while other parts are pulled or pushed in the direction of motion. A new study shows that most snakes, while making a vertical climb, actually hold on much more tightly than they need to. Researchers Greg Byrnes of Siena College in New York and Bruce Jayne of the University of Cincinnati monitored five different snake species as they climbed a special pipe equipped with pressure sensors.
The researchers observed that all the snakes exerted more than three times the force necessary to support their own weight. That observation led the researchers to propose that snakes value safety over efficiency.

August 20, 2014 1:00 PM

### Iceland prepares for another major volcanic eruption

Voice of America: Because of intense seismic activity around Bardarbunga volcano since mid-August, Iceland’s Meteorological Office is preparing for a possible eruption and all the problems that can ensue. When Iceland’s Eyjafjallajökull volcano erupted in 2010, the event shut down much of Europe’s airspace for almost a week due to ash and smoke, which can reduce visibility and damage aircraft. So Iceland is monitoring the current situation closely. After a weekend of intense earthquake activity around the volcano, the Met Office has raised the aviation color warning code to orange, the fourth level on a five-grade scale. On Iceland itself, area roads have been closed in anticipation of possible flooding because Bardarbunga is located under a glacial ice cap.

August 19, 2014 3:15 PM

### Artificial biochip models developing embryo

Ars Technica: To better study biological networks, such as gene expression, which is essential to the development of living things, researchers have constructed a biochip containing an array of artificial cells. The cells are composed of bundles of DNA assembled on the surface of circular silicon compartments. The DNA was fed nutrients through thin capillaries, which allowed the genes to metabolize. To monitor how that complex process evolves as the cell develops and reacts to environmental changes, the researchers tracked the presence of green fluorescent protein expressed from the DNA. With the biochip, they were able to successfully demonstrate gene expression at the embryonic scale.
August 19, 2014 3:00 PM

### Human-induced earthquakes cause less shaking than tectonic ones

Nature: Although the injection of wastewater into the ground by hydraulic fracturing and other drilling projects has been known to cause earthquakes, the quakes tend to induce less shaking than natural quakes of the same magnitude, according to a recent study published in the Bulletin of the Seismological Society of America. One reason may be that the fluids injected into the ground “lubricate geological faults and allow them to slip more smoothly,” according to the paper’s author, Susan Hough of the US Geological Survey. Hough studied both manmade and natural earthquakes and compared the reported magnitude with what people said they felt. However, the finding only holds true for areas more than 10 km from the quake’s epicenter. The research may prompt restrictions on the location of such drilling efforts to keep them away from populated areas.

August 19, 2014 11:15 AM

### Octopus inspires optoelectronic camouflage material

BBC: Inspired by the color-changing abilities of octopuses and other cephalopods, researchers have been developing a paper-thin flexible sheet composed of 1-mm-square cells containing a temperature-controlled dye. Just as octopus skin has a three-layer design, so, too, does the new material. A grid of photosensors is overlain with a layer of “actuators” that produce a current, which causes the temperature-sensitive pigment in the top layer to change from black to transparent. Although less efficient and having fewer color choices than the animals that inspired it, the new material nevertheless represents a first step toward a new class of material that could find use in the military and any number of other applications.
August 19, 2014 11:00 AM

### Private space companies are luring young researchers away from NASA

Houston Press: As the next generation of scientists and engineers interested in space exploration graduates from college, many are taking jobs with private space companies, such as SpaceX, rather than following the traditional route to NASA. In recent years, NASA has been plagued with problems, among them financial insecurity, lack of administrative direction, extensive bureaucracy, and cancellation of several major projects, such as the space shuttle program. As a result, young researchers are increasingly taking jobs with commercial launch companies, which they feel offer more autonomy in seeing projects through from start to finish and greater possibility for future space travel. NASA, however, maintains that space exploration has always depended on collaboration among government, academia, and the commercial sector. And with other nations planning Moon missions, such collaborations may become even more necessary to ensure the US’s place in space.

August 18, 2014 4:50 PM

### Solar thermal plants posing risk for birds

Christian Science Monitor: Federal wildlife investigators say that as many as 28 000 birds are being killed every year at the BrightSource Energy plant located in the Mojave Desert. The birds get burned by the intense heat from the 300 000 garage-door-sized mirrors that reflect the Sun’s rays onto three 40-story-high boiler towers. Although the US Fish and Wildlife Service has said that such power towers are the most lethal type of construction for wildlife, BrightSource is proposing to build another mirror field and a 75-story tower near the California–Arizona border. As the California Energy Commission considers the proposal, BrightSource is exploring the use of lights, sounds, or other means to divert the birds away from its plants.
August 18, 2014 3:10 PM

### Data science tackles massive digital output

New York Times: Because of the ever-increasing amounts of data being generated by the Web, smartphones, and other technologies, data scientists are having to wrangle with the vast output to pare it down and organize it into a usable format. “You spend a lot of your time being a data janitor, before you can get to the cool, sexy things that got you into the field in the first place,” said Matt Mohebbi, a data scientist and cofounder of Iodine, a new health startup. Several companies are writing computer software to automate the data-wrangling process. Among other challenges, the programs must be able to merge many different data formats. In much the same way that spreadsheets revolutionized data analysis in business and finance, machine-learning technology could help free data scientists from the more mundane sorting tasks so they can concentrate on the bigger picture.

August 18, 2014 2:35 PM

### Will we tackle climate change?

New Scientist: Despite decades of research indicating that climate change is occurring, it's been difficult for scientists to persuade policymakers, the public, and institutions to implement techniques to minimize or slow down the effects. Some researchers, including Nobel Prize–winning economist Daniel Kahneman, believe there is no path to success on climate change. Others see climate change as an example of “perfect market failure.” George Marshall, writing in New Scientist, points out that it is the large indifference in the “middle” majority that is the stumbling block, and that one factor is an unwillingness to face mortality. Ironically, he says, the systems that govern our own attitudes “are just as complex as those that govern energy and carbon, and just as subject to feedbacks that exaggerate small differences between people.”
August 15, 2014 3:00 PM

### Computer model simulates spread of Ebola virus

NPR: As the most severe Ebola epidemic ever unfolds in West Africa, some researchers are studying it through the use of computer simulations. The goal is to see how the outbreak might spread and which public health measures could prove most effective in containing it. The effort is complicated by a number of factors, including the uncertainty in the total number of dead and infected people, how many infected people stay at home rather than go to hospitals, and how burial practices can spread the infection. Although the situation is too complex for computer models to come up with definitive answers on how many people will ultimately die and exactly when the epidemic will end, they do underscore the need to act quickly before the Ebola outbreak becomes too large to contain.

August 15, 2014 2:35 PM

### Germany sets precedent for renewable energy use

Climate Progress: Among the developed countries of the world, Germany is proving to be very successful at making the switch to renewable energy sources for its electricity production. That effort was reflected in the first half of 2014, when the country managed to generate one-third of its electrical power from renewables. That is a remarkable accomplishment in view of the fact that most renewables are inherently intermittent: The Sun only shines during the day, and the wind doesn’t always blow. Moreover, Germany's electrical power grid has proven to be very reliable: Since 2008 the average number of minutes that electricity was lost per customer each year has been less than 16—far less than in any other country in Europe or the US. However, the switch has not come without a price: Germans pay more for their electricity. Regardless, the country remains committed to getting 80% of its power from renewables by 2050.
August 15, 2014 2:20 PM

### Swarm of robots collaborate to form shapes

BBC: Researchers have built a swarm of more than 1000 small robots that can shuffle around on three spindly legs to form two-dimensional shapes. Although the 3-cm-sized robots are fed the same computer program, they modify their movements based on what their neighbors are doing. Modeled on the swarm behavior of living organisms such as ants and birds, the so-called Kilobots could one day be used to develop self-assembling tools and structures. The tiny robots are not fast, however: Each shape can take 6 to 12 hours. “Actually watching the experiment run is like watching paint dry,” says Michael Rubenstein of Harvard University, coauthor of the group’s study published in Science.

August 15, 2014 2:25 AM

### Interstellar grains from Stardust spacecraft

Telegraph: NASA’s Stardust spacecraft was launched 15 years ago to collect dust samples from the coma of comet Wild 2 and from the outer reaches of space. Fitted with collectors made of a silica-based aerogel, Stardust returned to Earth in 2006 with at least a million particles in separate sets of detectors. One set was open when the craft passed through Wild 2 and then closed; the other was closed during the comet's passage and kept open in a region of space suspected to have interstellar particles. To help sort the vast amounts of data, NASA turned to crowdsourcing, in which citizen scientists used their home computers to scan the collectors for the tracks left as particles hit the aerogel and became embedded. Scientists say that seven of the particles in the second set may be interstellar dust from outside our solar system.

August 14, 2014 4:00 PM

### Flexible displays edge closer to market

MIT Technology Review: Being made of plastic, organic LEDs are intrinsically flexible.
Although OLEDs could be used for displays that can be bent or rolled, it's challenging to protect the material from the few molecules of oxygen or water vapor that suffice to degrade performance. Kateeva, a startup in Menlo Park, California, has devised a solution to the protection problem. By using inkjet technology, the company can coat OLED displays faster and more cheaply than can current processes. Meanwhile, another startup, Canatu of Helsinki, Finland, has resolved a problem that besets flexible touch-sensitive displays. The flat, rigid screens of tablet computers and smartphones rely on tin-doped indium oxide, which is too brittle for use on a flexible screen. For its displays, Canatu replaces the oxide with a thin, flexible film covered with a layer of carbon nanobuds—that is, carbon nanotubes topped with spheres of carbon atoms.
# zbMATH — the first resource for mathematics

Fixed points for set-valued mappings in locally convex linear topological spaces. (English) Zbl 1008.47054

The author discusses a continuation principle for multivalued maps in locally convex spaces.
##### MSC:

47H10 Fixed-point theorems for nonlinear operators on topological linear spaces
45G10 Nonsingular nonlinear integral equations
47H04 Set-valued operators
47H09 Mappings defined by “shrinking” properties
47H30 Particular nonlinear operators
# 4.1.1 Acoustics

The word acoustics has multiple definitions, all of them interrelated. In the most general sense, acoustics is the scientific study of sound, covering how sound is generated, transmitted, and received. Acoustics can also refer more specifically to the properties of a room that cause it to reflect, refract, and absorb sound. We can also use the term acoustics for the study of particular recordings or particular instances of sound and the analysis of their sonic characteristics. We'll touch on all these meanings in this chapter.

# 4.1.2 Psychoacoustics

Human hearing is a wondrous creation that in some ways we understand very well, and in other ways we don't understand at all. We can look at the anatomy of the human ear and analyze – down to the level of the tiny hairs in the basilar membrane – how vibrations are received and transmitted through the nervous system. But how this communication is translated by the brain into the subjective experience of sound and music remains a mystery. (See (Levitin, 2007).) We'll probably never know how vibrations of air pressure are transformed into our marvelous experience of music and speech. Still, a great deal has been learned from an analysis of the interplay among physics, human anatomy, and perception. This interplay is the realm of psychoacoustics, the scientific study of sound perception. Any number of sources can give you the details of the anatomy of the human ear and how it receives and processes sound waves; (Pohlman 2005), (Rossing, Moore, and Wheeler 2002), and (Everest and Pohlmann) are good sources, for example. In this chapter, we want to focus on the elements that shed light on best practices in recording, encoding, processing, compressing, and playing digital sound. Most important for our purposes is an examination of how humans subjectively perceive the frequencies, amplitude, and direction of sound.
A concept that appears repeatedly in this context is the non-linear nature of human sound perception. Understanding this concept leads to a mathematical representation of sound that is modeled after the way we humans experience it, a representation well-suited for digital analysis and processing of sound, as we'll see in what follows. First, we need to be clear about the language we use in describing sound.

# 4.1.3 Objective and Subjective Measures of Sound

In speaking of sound perception, it's important to distinguish between words which describe objective measurements and those that describe subjective experience. The terms intensity and pressure denote objective measurements that relate to our subjective experience of the loudness of sound. Intensity, as it relates to sound, is defined as the power carried by a sound wave per unit of area, expressed in watts per square meter ($W/m^{2}$). Power is defined as energy per unit time, measured in watts (W). Power can also be defined as the rate at which work is performed or energy converted. Watts are used to measure the output of power amplifiers and the power handling levels of loudspeakers. Pressure is defined as force divided by the area over which it is distributed, measured in newtons per square meter ($N/m^{2}$) or, more simply, pascals (Pa). In relation to sound, we speak specifically of air pressure amplitude and measure it in pascals. Air pressure amplitude caused by sound waves is measured as a displacement above or below equilibrium atmospheric pressure. During audio recording, a microphone measures this constantly changing air pressure amplitude and converts it to electrical units of volts (V), sending the voltages to the sound card for analog-to-digital conversion. We'll see below how and why all these units are converted to decibels. The objective measures of intensity and air pressure amplitude relate to our subjective experience of the loudness of sound.
Generally, the greater the intensity or pressure created by the sound waves, the louder this sounds to us. However, loudness can be measured only by subjective experience – that is, by an individual saying how loud the sound seems to him or her. The relationship between air pressure amplitude and loudness is not linear. That is, you can't assume that if the pressure is doubled, the sound seems twice as loud. In fact, it takes about ten times the pressure for a sound to seem twice as loud. Further, our sensitivity to amplitude differences varies with frequencies, as we'll discuss in more detail in Section 4.1.6.3. When we speak of the amplitude of a sound, we're speaking of the sound pressure displacement as compared to equilibrium atmospheric pressure. The range of the quietest to the loudest sounds in our comfortable hearing range is actually quite large. The loudest sounds are on the order of 20 Pa. The quietest are on the order of 20 μPa, which is $20 \times 10^{-6}$ Pa. (These values vary by the frequencies that are heard.) Thus, the loudest has about 1,000,000 times more air pressure amplitude than the quietest. Since intensity is proportional to the square of pressure, the loudest sound we listen to (at the verge of hearing damage) is $\left ( 10^{6} \right )^{2}=10^{12}=$ 1,000,000,000,000 times more intense than the quietest. (Some sources even claim a factor of 10,000,000,000,000 between loudest and quietest intensities. It depends on what you consider the threshold of pain and hearing damage.) This is a wide dynamic range for human hearing. Another subjective perception of sound is pitch. As you learned in Chapter 3, the pitch of a note is how "high" or "low" the note seems to you. The related objective measure is frequency. In general, the higher the frequency, the higher is the perceived pitch. But once again, the relationship between pitch and frequency is not linear, as you'll see below.
Also, our sensitivity to frequency differences varies across the spectrum, and our perception of pitch depends partly on how loud the sound is. A high pitch can seem to get higher when its loudness is increased, whereas a low pitch can seem to get lower. Context matters as well, in that the pitch of a frequency may seem to shift when it is combined with other frequencies in a complex tone. Let's look at these elements of sound perception more closely.

# 4.1.4 Units for Measuring Electricity and Sound

In order to define decibels, which are used to measure sound loudness, we need to define some units that are used to measure electricity as well as acoustical power, intensity, and pressure. Both analog and digital sound devices use electricity to represent and transmit sound. Electricity is the flow of electrons through wires and circuits. There are four interrelated components in electricity that are important to understand:

• potential energy (in electricity called voltage or electrical pressure, measured in volts, abbreviated V),
• intensity (in electricity called current, measured in amperes or amps, abbreviated A),
• resistance (measured in ohms, abbreviated Ω), and
• power (measured in watts, abbreviated W).

Electricity can be understood through an analogy with the flow of water (borrowed from (Thompson 2005)). Picture two tanks connected by a pipe. One tank has water in it; the other is empty. Potential energy is created by the presence of water in the first tank. The water flows through the pipe from the first tank to the second with some intensity. The pipe has a certain amount of resistance to the flow of water as a result of its physical properties, like its size. The potential energy provided by the full tank, reduced somewhat by the resistance of the pipe, results in the power of the water flowing through the pipe. By analogy, in an electrical circuit we have two voltages connected by a conductor.
Analogous to the full tank of water, we have a voltage – an excess of electrons – at one end of the circuit. Let's say that at the other end of the circuit we have 0 voltage, also called ground or ground potential. The voltage at the first end of the circuit causes pressure, or potential energy, as the excess electrons want to move toward ground. This flow of electricity is called the current. The physical connection between the two halves of the circuit provides resistance to the flow. The connection might be a copper wire, which offers little resistance and is thus called a good conductor. On the other hand, something could intentionally be inserted into the circuit to reduce the current – a resistor for example. The power in the circuit is determined by a combination of the voltage and the resistance.

The relationships among potential energy, intensity, resistance, and power are captured in Ohm's law, which states that intensity (or current) is equal to potential energy (or voltage) divided by resistance:

$$I=\frac{V}{R}$$

where I is intensity, V is potential energy, and R is resistance

Equation 4.1 Ohm's law

Power is defined as intensity multiplied by potential energy:

$$P=IV$$

where P is power, I is intensity, and V is potential energy

Equation 4.2 Equation for power

Combining the two equations above, we can represent power as follows:

$$P=\frac{V^{2}}{R}$$

where P is power, V is potential energy, and R is resistance

Equation 4.3 Equation for power in terms of voltage and resistance

Thus, if you know any two of these four values you can get the other two from the equations above. Volts, amps, ohms, and watts are convenient units for measuring potential energy, current, resistance, and power in that they have the following relationship: 1 V across 1 Ω of resistance will generate 1 A of current and result in 1 W of power. The above discussion speaks of power (W), intensity (I), and potential energy (V) in the context of electricity.
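As a quick numerical check on Equations 4.1 through 4.3, here is a minimal Python sketch. The function names and the 12 V / 4 Ω example values are ours, chosen only for illustration:

```python
def current(voltage, resistance):
    """Ohm's law (Equation 4.1): I = V / R, in amps."""
    return voltage / resistance

def power(voltage, resistance):
    """Power (Equation 4.3): P = I * V = V^2 / R, in watts."""
    return voltage ** 2 / resistance

# The unit relationship stated above:
# 1 V across 1 ohm of resistance gives 1 A of current and 1 W of power.
assert current(1.0, 1.0) == 1.0
assert power(1.0, 1.0) == 1.0

# A hypothetical 12 V source across a 4-ohm load:
print(current(12.0, 4.0))  # 3.0 (amps)
print(power(12.0, 4.0))    # 36.0 (watts)
```

Given any two of voltage, current, resistance, and power, the other two follow from these same relationships.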
These words can also be used to describe acoustical power and intensity as well as the air pressure amplitude changes detected by microphones and translated to voltages. Power, intensity, and pressure are valid ways to measure sound as a physical phenomenon. However, decibels are more appropriate for representing the loudness of one sound relative to another, as we'll see in the next section.

# 4.1.5 Decibels

## 4.1.5.1 Why Decibels for Sound?

No doubt you're familiar with the use of decibels related to sound, but let's look more closely at the definition of decibels and why they are a good way to represent sound levels as they're perceived by human ears. First consider Table 4.1. From column 3, you can see that the sound of a nearby jet engine has on the order of 10,000,000 times greater air pressure amplitude than the threshold of hearing. That's quite a wide range. Imagine a graph of sound loudness that has perceived loudness on the horizontal axis and air pressure amplitude on the vertical axis. We would need numbers ranging from 0 to 10,000,000 on the vertical axis (Figure 4.1). This axis would have to be compressed to fit on a sheet of paper or a computer screen, and we wouldn't see much space between, say, 100 and 200. Thus, our ability to show small changes at low amplitude would not be great. Although we perceive a vacuum cleaner to be approximately twice as loud as normal conversation, we would hardly be able to see any difference between their respective air pressure amplitudes if we have to include such a wide range of numbers, spacing them evenly on what is called a linear scale. A linear scale turns out to be a very poor representation of human hearing. We humans can more easily distinguish the difference between two low amplitude sounds that are close in amplitude than we can distinguish between two high amplitude sounds that are close in amplitude.
The linear scale for loudness doesn't provide sufficient resolution at low amplitudes to show changes that might actually be perceptible to the human ear.

Figure 4.1 Linear vs. logarithmic scale

Table 4.1 Loudness of common sounds measured in air pressure amplitude and in decibels

| Sound | Approximate air pressure amplitude in pascals | Ratio of sound's air pressure amplitude to that of the threshold of hearing | Approximate loudness in dBSPL |
|---|---|---|---|
| Threshold of hearing | $0.00002 = 2 \ast 10^{-5}$ | 1 | 0 |
| Breathing | $0.00006325 = 6.325 \ast 10^{-5}$ | 3.16 | 10 |
| Rustling leaves | $0.0002 = 2 \ast 10^{-4}$ | 10 | 20 |
| Refrigerator humming | $0.002 = 2 \ast 10^{-3}$ | $10^{2}$ | 40 |
| Normal conversation | $0.02 = 2 \ast 10^{-2}$ | $10^{3}$ | 60 |
| Vacuum cleaner | $0.06325 = 6.325 \ast 10^{-2}$ | $3.16 \ast 10^{3}$ | 70 |
| Dishwasher | $0.1125 = 1.125 \ast 10^{-1}$ | $5.63 \ast 10^{3}$ | 75 |
| City traffic | $0.2 = 2 \ast 10^{-1}$ | $10^{4}$ | 80 |
| Lawnmower | $0.3557 = 3.557 \ast 10^{-1}$ | $1.78 \ast 10^{4}$ | 85 |
| Subway | $0.6325 = 6.325 \ast 10^{-1}$ | $3.16 \ast 10^{4}$ | 90 |
| Symphony orchestra | 6.325 | $3.16 \ast 10^{5}$ | 110 |
| Fireworks | $20 = 2 \ast 10^{1}$ | $10^{6}$ | 120 |
| Rock concert | $20+ = 2 \ast 10^{1}+$ | $10^{6}+$ | 120+ |
| Shotgun firing | $63.25 = 6.325 \ast 10^{1}$ | $3.16 \ast 10^{6}$ | 130 |
| Jet engine close by | $200 = 2 \ast 10^{2}$ | $10^{7}$ | 140 |

Now let's see how these observations begin to help us make sense of the decibel. A decibel is based on a ratio – that is, one value relative to another, as in $\frac{X_{1}}{X_{0}}$. Hypothetically, $X_{0}$ and $X_{1}$ could measure anything, as long as they measure the same type of thing in the same units – e.g., power, intensity, air pressure amplitude, noise on a computer network, loudspeaker efficiency, signal-to-noise ratio, etc. Because decibels are based on a ratio, they imply a comparison. Decibels can be a measure of

• a change from level $X_{0}$ to level $X_{1}$,
• a range of values between $X_{0}$ and $X_{1}$, or
• a level $X_{1}$ compared to some agreed upon reference point $X_{0}$.
What we're most interested in with regard to sound is some way of indicating how loud it seems to human ears. What if we were to measure relative loudness using the threshold of hearing as our point of comparison – the $X_{0}$ in the ratio $\frac{X_{1}}{X_{0}}$, as in column 3 of Table 4.1? That seems to make sense. But we already noted that the ratio of the loudest to the softest thing in our table is 10,000,000/1. A ratio alone isn't enough to turn the range of human hearing into manageable numbers, nor does it account for the non-linearity of our perception. The discussion above is given to explain why it makes sense to use the logarithm of the ratio $\frac{X_{1}}{X_{0}}$ to express the loudness of sounds, as shown in Equation 4.4. Using the logarithm of the ratio, we don't have to use such widely-ranging numbers to represent sound amplitudes, and we "stretch out" the distance between the values corresponding to low amplitude sounds, providing better resolution in this area. The values in column 4 of Table 4.1, measuring sound loudness in decibels, come from the following equation for decibels-sound-pressure-level, abbreviated dBSPL.

$$dBSPL=\Delta Voltage\: dB=20\log_{10}\left ( \frac{V_{1}}{V_{0}} \right )$$

Equation 4.4 Definition of dBSPL, also called ΔVoltage dB

In this definition, $V_{0}$ is the air pressure amplitude at the threshold of hearing, and $V_{1}$ is the air pressure amplitude of the sound being measured. Notice that in Equation 4.4, we use ΔVoltage dB as synonymous with dBSPL. This is because microphones measure sound as air pressure amplitudes, turn the measurements into voltage levels, and convey the voltage values to an audio interface for digitization. Thus, voltages are just another way of capturing air pressure amplitude. Notice also that because the dimensions are the same in the numerator and denominator of $\frac{V_{1}}{V_{0}}$, the dimensions cancel in the ratio. This is always true for decibels. Because they are derived from a ratio, decibels are dimensionless units.
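Equation 4.4 translates directly into code. The sketch below (the function names are ours, not from any audio library) converts in both directions and can be checked against the rows of Table 4.1:

```python
import math

P0 = 2e-5  # Pa, air pressure amplitude at the threshold of hearing

def pa_to_dbspl(pa):
    """Equation 4.4: dBSPL = 20 * log10(V1 / V0)."""
    return 20 * math.log10(pa / P0)

def dbspl_to_pa(db):
    """Inverse of Equation 4.4: V1 = V0 * 10^(dB / 20)."""
    return P0 * 10 ** (db / 20)

print(f"{pa_to_dbspl(0.002):.1f}")  # 40.0 -- refrigerator humming
print(f"{pa_to_dbspl(0.02):.1f}")   # 60.0 -- normal conversation
print(f"{dbspl_to_pa(120):.1f}")    # 20.0 -- fireworks, in Pa

# The same 10 dB step is a very different pressure step
# at different loudness levels:
print(f"{dbspl_to_pa(70) - dbspl_to_pa(60):.4f}")  # 0.0432 Pa
print(f"{dbspl_to_pa(90) - dbspl_to_pa(80):.4f}")  # 0.4325 Pa
```

The last two lines illustrate why decibels, not pascals, are the natural way to talk about loudness changes.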
Decibels aren’t volts or watts or pascals or newtons; they’re just the logarithm of a ratio. Hypothetically, the decibel can be used to measure anything, but it’s most appropriate for physical phenomena that have a wide range of levels where the values grow exponentially relative to our perception of them. Power, intensity, and air pressure amplitude are three physical phenomena related to sound that can be measured with decibels. The important thing in any usage of the term decibels is that you know the reference point – the level that is in the denominator of the ratio. Different usages of the term decibel sometimes add different letters to the dB abbreviation to clarify the context, as in dBPWL (decibels-power-level), dBSIL (decibels-sound-intensity-level), and dBFS (decibels-full-scale), all of which are explained below. Comparing the columns in Table 4.1, we now can see the advantages of decibels over air pressure amplitudes. If we had to graph loudness using Pa as our units, the scale would be so large that the first ten sound levels (from silence all the way up to subways) would not be distinguishable from 0 on the graph. With decibels, loudness levels that are easily distinguishable by the ear can be seen as such on the decibel scale. Decibels are also more intuitively understandable than air pressure amplitudes as a way of talking about loudness changes. As you work with sound amplitudes measured in decibels, you’ll become familiar with some easy-to-remember relationships summarized in Table 4.2. In an acoustically-insulated lab environment with virtually no background noise, a 1 dB change yields the smallest perceptible difference in loudness. However, in average real-world listening conditions, most people can’t notice a loudness change less than 3 dB. A 10 dB change results in about a doubling of perceived loudness. It doesn’t matter if you’re going from 60 to 70 dBSPL or from 80 to 90 dBSPL. 
The increase still sounds approximately like a doubling of loudness. In contrast, going from 60 to 70 dBSPL is an increase of 43.24 mPa, while going from 80 to 90 dBSPL is an increase of 432.5 mPa. Here you can see that saying that you "turned up the volume" by a certain air pressure amplitude wouldn't give much information about how much louder it's going to sound. Talking about loudness changes in terms of decibels communicates more.

Table 4.2 How sound level changes in dB are perceived

| Change of sound amplitude | How it is perceived in human hearing |
|---|---|
| 1 dB | smallest perceptible difference in loudness, only perceptible in acoustically-insulated noiseless environments |
| 3 dB | smallest perceptible change in loudness for most people in real-world environments |
| +10 dB | an approximate doubling of loudness |
| -10 dB | an approximate halving of loudness |

You may have noticed that when we talk about a "decibel change," we refer to it as simply decibels or dB, whereas if we are referring to a sound loudness level relative to the threshold of hearing, we refer to it as dBSPL. This is correct usage. The difference between 90 and 80 dBSPL is 10 dB. The difference between any two decibel levels that have the same reference point is always measured in dimensionless dB. We'll return to this in a moment when we try some practice problems in Section 2.

## 4.1.5.2 Various Usages of Decibels

Now let's look at the origin of the definition of the decibel and how the word can be used in a variety of contexts. The bel, named for Alexander Graham Bell, was originally defined as a unit for measuring power. For clarity, we'll call this the power difference bel, also denoted ΔPower bels:

$$\Delta Power\: bels=\log_{10}\left ( \frac{P_{1}}{P_{0}} \right )$$

Equation 4.5 ΔPower bels, the power difference bel

The decibel is 1/10 of a bel. The decibel turns out to be a more useful unit than the bel because it provides better resolution. A bel doesn't break measurements into small enough units for most purposes.
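The resolution argument is easy to see numerically. A short Python sketch (function names are ours) comparing Equation 4.5's bel with the decibel:

```python
import math

def bels(p1, p0):
    """Equation 4.5: the power difference bel."""
    return math.log10(p1 / p0)

def decibels(p1, p0):
    """A decibel is 1/10 of a bel, so multiply the log by 10."""
    return 10 * math.log10(p1 / p0)

# A doubling of power is a tiny fraction of a bel but about 3 dB:
print(f"{bels(2.0, 1.0):.3f}")      # 0.301
print(f"{decibels(2.0, 1.0):.3f}")  # 3.010
```

The full range of human hearing spans only about 12 bels but 120 dB, which is why the finer-grained decibel is the unit used in practice.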
We can derive the power difference decibel (ΔPower dB) from the power difference bel simply by multiplying the log by 10. Another name for ΔPower dB is dBPWL (decibels-power-level).

$$\Delta Power\: dB=10\log_{10}\left ( \frac{P_{1}}{P_{0}} \right )$$

Equation 4.6 Power difference decibel, abbreviated dBPWL

When this definition is applied to give a sense of the acoustic power of a sound, then $P_{0}$ is the power of sound at the threshold of hearing, which is $10^{-12}W=1\: pW$ (picowatt). Sound can also be measured in terms of intensity. Since intensity is defined as power per unit area, the units in the numerator and denominator of the decibel ratio are $\frac{W}{m^{2}}$, and the threshold of hearing intensity is $I_{0}=10^{-12}\frac{W}{m^{2}}$. This gives us the following definition of ΔIntensity dB, also commonly referred to as dBSIL (decibels-sound-intensity-level).

$$\Delta Intensity\: dB=10\log_{10}\left ( \frac{I_{1}}{I_{0}} \right )$$

Equation 4.7 Intensity difference decibel, abbreviated dBSIL

Neither power nor intensity is a convenient way of measuring the loudness of sound. We give the definitions above primarily because they help to show how the definition of dBSPL was derived historically. The easiest way to measure sound loudness is by means of air pressure amplitude. When sound is transmitted, air pressure changes are detected by a microphone and converted to voltages. If we consider the relationship between voltage and power, we can see how the definition of ΔVoltage dB was derived from the definition of ΔPower dB. By Equation 4.3, we know that power varies with the square of voltage. From this we get:

$$10\log_{10}\left ( \frac{P_{1}}{P_{0}} \right )=10\log_{10}\left ( \frac{V_{1}^{2}}{V_{0}^{2}} \right )=20\log_{10}\left ( \frac{V_{1}}{V_{0}} \right )$$

The relationship between power and voltage explains why there is a factor of 20 in Equation 4.4.

Aside: $\log_{b}\left ( y^{x} \right )=x\log_{b}y$

We can show how Equation 4.4 is applied to convert from air pressure amplitude to dBSPL and vice versa. Let's say we begin with the air pressure amplitude of a humming refrigerator, which is about 0.002 Pa. Then

$$20\log_{10}\left ( \frac{0.002\: Pa}{0.00002\: Pa} \right )=20\log_{10}\left ( 100 \right )=40\: dBSPL$$
Working in the opposite direction, you can convert the decibel level of normal conversation (60 dBSPL) to air pressure amplitude:

\begin{align*}& 60=20\log_{10}\left ( \frac{x}{0.00002\: Pa} \right )=20\log_{10}\left ( 50000x/Pa \right ) \\&\frac{60}{20}=\log_{10}\left ( 50000x/Pa \right ) \\&3=\log_{10}\left ( 50000x/Pa \right ) \\ &10^{3}= 50000x/Pa\\&x=\frac{1000}{50000}Pa \\ &x=0.02\: Pa \end{align*}

Aside: If $x=\log_{b}y$ then $b^{x}=y$

Thus, 60 dBSPL corresponds to air pressure amplitude of 0.02 Pa. Rarely would you be called upon to do these conversions yourself. You'll almost always work with sound levels expressed directly in decibels. But now you know the mathematics on which the dBSPL definition is based. So when would you use these different applications of decibels? Most commonly you use dBSPL to indicate how loud things seem relative to the threshold of hearing. In fact, you use this type of decibel so commonly that the SPL is often dropped off and simply dB is used where the context is clear. You learn that human speech is about 60 dB, rock music is about 110 dB, and the loudest thing you can listen to without hearing damage is about 120 dB – all of these measurements implicitly being dBSPL. The definition of intensity decibels, dBSIL, is mostly of interest to help us understand how the definition of dBSPL can be derived from dBPWL. We'll also use the definition of intensity decibels in an explanation of the inverse square law, a rule of thumb that helps us predict how sound loudness decreases as sound travels through space in a free field (Section 4.2.1.6). There's another commonly-used type of decibel that you'll encounter in digital audio software environments – the decibel-full-scale (dBFS). You may not understand this type of decibel completely until you've read Chapter 5 because it's based on how audio signals are digitized at a certain bit depth (the number of bits used for each audio sample).
We'll give the definition here for completeness and revisit it in Chapter 5. The definition of dBFS uses the largest-magnitude sample size for a given bit depth as its reference point. For a bit depth of n, this largest magnitude would be $2^{n-1}$.

$$dBFS=20\log_{10}\left ( \frac{\left | x \right |}{2^{n-1}} \right )$$

where n is a given bit depth and x is an integer sample value between $-2^{n-1}$ and $2^{n-1}-1$.

Equation 4.8 Decibels-full-scale, abbreviated dBFS

Figure 4.2 shows an audio processing environment where a sound wave is measured in dBFS. Notice that since $\left | x \right |$ is never more than $2^{n-1}$, $\log_{10}\left ( \frac{\left | x \right |}{2^{n-1}} \right )$ is never a positive number. When you first use dBFS it may seem strange because all sound levels are at most 0. With dBFS, 0 represents maximum amplitude for the system, and values move toward -∞ as you move toward the horizontal axis, i.e., toward quieter sounds.

Figure 4.2 Sound amplitude measured in dBFS

The discussion above has considered decibels primarily as they measure sound loudness. Decibels can also be used to measure relative electrical power or voltage. For example, dBV measures voltage using 1 V as a reference level, dBu measures voltage using 0.775 V as a reference level, and dBm measures power using 0.001 W as a reference level. These applications come into play when you're considering loudspeaker or amplifier power, or wireless transmission signals. In Section 2, we'll give you some practical applications and problems where these different types of decibels come into play. The reference levels for different types of decibels are listed in Table 4.3. Notice that decibels are used in reference to the power of loudspeakers or the input voltage to audio devices. We'll look at these applications more closely in Section 2. Of course, there are many other common usages of decibels outside of the realm of sound.
Table 4.3 Usages of the term decibels with different reference points

| | What is being measured | Abbreviations in common usage | Common reference point | Equation for conversion to decibels |
|---|---|---|---|---|
| Acoustical | sound power | dBPWL or ΔPower dB | $P_{0}=10^{-12}W=1\: pW$ (picowatt) | $10\log_{10}\left ( \frac{P_{1}}{P_{0}} \right )$ |
| Acoustical | sound intensity | dBSIL or ΔIntensity dB | threshold of hearing, $I_{0}=10^{-12}\frac{W}{m^{2}}$ | $10\log_{10}\left ( \frac{I_{1}}{I_{0}} \right )$ |
| Acoustical | sound air pressure amplitude | dBSPL or ΔVoltage dB | threshold of hearing, $P_{0}=0.00002\frac{N}{m^{2}}=2\ast 10^{-5}Pa$ | $20\log_{10}\left ( \frac{V_{1}}{V_{0}} \right )$ |
| Acoustical | sound amplitude | dBFS | $2^{n-1}$, where n is a given bit depth and x is a sample value, $-2^{n-1} \leq x \leq 2^{n-1}-1$ | $20\log_{10}\left ( \frac{\left\vert x \right\vert}{2^{n-1}} \right )$ |
| Electrical | radio frequency transmission power | dBm | $P_{0}=1\: mW=10^{-3}W$ | $10\log_{10}\left ( \frac{P_{1}}{P_{0}} \right )$ |
| Electrical | loudspeaker acoustical power | dBW | $P_{0}=1\: W$ | $10\log_{10}\left ( \frac{P_{1}}{P_{0}} \right )$ |
| Electrical | input voltage from microphone; loudspeaker voltage; consumer level audio voltage | dBV | $V_{0}=1\: V$ | $20\log_{10}\left ( \frac{V_{1}}{V_{0}} \right )$ |
| Electrical | professional level audio voltage | dBu | $V_{0}=0.775\: V$ | $20\log_{10}\left ( \frac{V_{1}}{V_{0}} \right )$ |

## 4.1.5.3 Peak Amplitude vs. RMS Amplitude

Microphones and sound level meters measure the amplitude of sound waves over time. There are situations in which you may want to know the largest amplitude over a time period. This "largest" can be measured in one of two ways: as peak amplitude or as RMS amplitude. Let's assume that the microphone or sound level meter is measuring sound amplitude. The sound pressure level of greatest magnitude over a given time period is called the peak amplitude. For a single-frequency sound representable by a sine wave, this would be the level at the peak of the sine wave.
The sound represented by Figure 4.4 would obviously be perceived as louder than the same-frequency sound represented by Figure 4.3. However, how would the loudness of a sine-wave-shaped sound compare to the loudness of a square-wave-shaped sound with the same peak amplitude (Figure 4.3 vs. Figure 4.5)? The square wave would actually sound louder. This is because the square wave is at its peak level more of the time as compared to the sine wave. To account for this difference in perceived loudness, RMS amplitude (root-mean-square amplitude) can be used as an alternative to peak amplitude, providing a better match for the way we perceive the loudness of the sound.

Figure 4.3 Sine wave representing sound

Figure 4.4 Sine wave representing a higher amplitude sound

Figure 4.5 Square wave representing sound

Rather than being an instantaneous peak level, RMS amplitude is similar to a standard deviation, a kind of average of the deviation from 0 over time. RMS amplitude is defined as follows:

$$V_{RMS}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}S_{i}^{2}}$$

where n is the number of samples taken and $S_{i}$ is the $i^{th}$ sample.

Equation 4.9 Equation for RMS amplitude, $V_{RMS}$

Aside: In some sources, the term RMS power is used interchangeably with RMS amplitude or RMS voltage. This isn't very good usage. To be consistent with the definition of power, RMS power ought to mean "RMS voltage multiplied by RMS current." Nevertheless, you sometimes see the term RMS power used as a synonym of RMS amplitude as defined in Equation 4.9.

Notice that squaring each sample makes all the values in the summation positive. If this were not the case, the summation would be 0 (assuming an equal number of positive and negative crests) since the sine wave is perfectly symmetrical. The definition in Equation 4.9 could be applied using whatever units are appropriate for the context. If the samples are being measured as voltages, then RMS amplitude is also called RMS voltage.
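Equation 4.9 and the sine-versus-square loudness comparison can be verified directly. A short Python sketch (no audio library assumed; the waveforms are generated synthetically over one cycle):

```python
import math

def rms(samples):
    """Equation 4.9: square root of the mean of the squared samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

n = 1000  # samples covering one full cycle
sine = [math.sin(2 * math.pi * i / n) for i in range(n)]
square = [1.0 if s >= 0 else -1.0 for s in sine]

# Same peak amplitude (1.0), different RMS amplitude:
print(round(rms(sine), 3))  # 0.707, i.e. peak divided by sqrt(2)
print(rms(square))          # 1.0 -- the square wave sounds louder
```

The square wave sits at its peak level all the time, so its RMS amplitude equals its peak amplitude, matching the perceptual comparison made above.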
The samples could also be quantized as values in the range determined by the bit depth, or the samples could also be measured in dimensionless decibels, as shown for Adobe Audition in Figure 4.6. For a pure sine wave, there is a simple relationship between RMS amplitude and peak amplitude:

$$V_{RMS}=\frac{V_{peak}}{\sqrt{2}}\approx 0.707\: V_{peak}\quad \textrm{and}\quad V_{peak}=\sqrt{2}\: V_{RMS}$$

for pure sine waves.

Equation 4.10 Relationship between $V_{RMS}$ and $V_{peak}$ for pure sine waves

Of course most of the sounds we hear are not simple waveforms like those shown; natural and musical sounds contain many frequency components that vary over time. In any case, the RMS amplitude is a better model for our perception of the loudness of complex sounds than is peak amplitude. Sound processing programs often give amplitude statistics as either peak or RMS amplitude or both. Notice that RMS amplitude has to be defined over a particular window of samples, labeled as Window Width in Figure 4.6. This is because the sound wave changes over time. In the figure, the window width is 1000 ms.

Figure 4.6 Amplitude statistics window from Adobe Audition

You need to be careful with some usages of the term "peak amplitude." For example, VU meters, which measure signal levels in audio equipment, use the word "peak" in their displays, where RMS amplitude would be more accurate. Knowing this is important when you're setting levels for a live performance, as the actual peak amplitude is higher than RMS. Transients like sudden percussive noises should be kept well below what is marked as "peak" on a VU meter. If you allow the level to go too high, the signal will be clipped.

# 4.1.6 Sound Perception

## 4.1.6.1 Frequency Perception

In Chapter 3, we discussed the non-linear nature of pitch perception when we looked at octaves as defined in traditional Western music. The A above middle C (call it A4) on a piano keyboard sounds very much like the note that is 12 semitones above it, A5, except that A5 has a higher pitch. A5 is one octave higher than A4.
A6 sounds like A5 and A4, but it's an octave higher than A5. The progression between octaves is not linear with respect to frequency. A2's frequency is twice the frequency of A1. A3's frequency is twice the frequency of A2, and so forth. A simple way to think of this is that as the frequencies increase by multiplication, the perception of the pitch change increases by addition. In any case, the relationship is non-linear, as you can clearly see if you plot frequencies against octaves, as shown in Figure 4.7. Figure 4.7 Non-linear nature of pitch perception The fact that this is a non-linear relationship implies that the higher up you go in frequencies, the bigger the difference in frequency between neighboring octaves. The difference between A2 and A1 is 110 – 55 = 55 Hz while the difference between A7 and A6 is 3520 – 1760 = 1760 Hz. Because of the non-linearity of our perception, frequency response graphs often show the frequency axis on a logarithmic scale, or you're given a choice between a linear and a logarithmic scale, as shown in Figure 4.8. Notice that you can select or deselect "linear" in the upper left hand corner. In the figure on the right, the distance between 10 and 100 Hz on the horizontal axis is the same as the distance between 100 and 1000, which is the same as 1000 and 10000. This is more in keeping with how our perception of the pitch changes as the frequencies get higher. You should always pay attention to the scale of the frequency axis in graphs such as this. Figure 4.8 Frequency response graphs with linear and nonlinear scales for frequency The range of frequencies within human hearing is, at best, 20 Hz to 20,000 Hz. The range varies with individuals and diminishes with age, especially for high frequencies. Our hearing is less sensitive to low frequencies than to high; that is, low frequencies have to be more intense for us to hear them than high frequencies. 
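The multiplicative relationships described above are easy to tabulate. A short Python sketch (equal-tempered tuning assumed, with one semitone as a frequency ratio of $2^{1/12}$ as in Chapter 3, and A1 = 55 Hz):

```python
# Octaves double the frequency; an equal-tempered semitone multiplies
# it by 2**(1/12), so twelve semitones make exactly one octave.
A1 = 55.0
freqs = {f"A{k}": A1 * 2 ** (k - 1) for k in range(1, 8)}
semitone = 2 ** (1 / 12)

print(freqs["A2"] - freqs["A1"])  # 55.0 Hz between A1 and A2
print(freqs["A7"] - freqs["A6"])  # 1760.0 Hz between A6 and A7

# The same musical interval spans ever more Hertz as frequency rises:
for f in (110.0, 440.0, 1760.0):
    print(f"one semitone above {f:g} Hz is {f * semitone:.1f} Hz")
```

Plotting octave number against these frequencies reproduces the curve of Figure 4.7, which is why frequency axes are so often drawn logarithmically.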
Frequency resolution (also called frequency discrimination) is our ability to distinguish between two close frequencies. Frequency resolution varies by frequency, loudness, the duration of the sound, the suddenness of the frequency change, and the acuity and training of the listener's ears. The smallest frequency change that can be noticed as a pitch change is referred to as a just-noticeable-difference (jnd). Within the 1000 Hz to 4000 Hz range, it's possible for a person to hear a jnd of as little as 1/12 of a semitone. (A 1/12-semitone step up from 1000 Hz is only about 5 Hz, and the same step up from 4000 Hz is only about 19 Hz.) At low frequencies, tones that are separated by just a few Hertz can be distinguished as separate pitches, while at high frequencies, two tones must be separated by hundreds of Hertz before a difference is noticed. You can test your own frequency range and discrimination with a sound processing program like Audacity or Audition, generating and listening to pure tones, as shown in Figure 4.9. Be aware, however, that the monitors or headphones you use have an impact on your ability to hear the frequencies.

Figure 4.9 Creating a single-frequency tone in Adobe Audition

## 4.1.6.2 Critical Bands

One part of the ear's anatomy that is helpful to consider more closely is the area in the inner ear called the basilar membrane. It is here that sound vibrations are detected, separated by frequencies, and transformed from mechanical energy into electrical impulses sent to the brain. The basilar membrane is lined with rows of hair cells with thousands of tiny hairs emanating from them. The hairs move when stimulated by vibrations, sending signals to their base cells and the attached nerve fibers, which pass electrical impulses to the brain.
In his pioneering work on frequency perception, Harvey Fletcher discovered that different parts of the basilar membrane resonate more strongly to different frequencies. Thus, the membrane can be divided into frequency bands, commonly called critical bands. Each critical band of hair cells is sensitive to vibrations within a certain band of frequencies, and each band can be characterized by the range of frequencies it covers. Continued research on critical bands has shown that they play an important role in many aspects of human hearing, affecting our perception of loudness, frequency, timbre, and dissonance vs. consonance. Experiments with critical bands have also led to an understanding of frequency masking, a phenomenon that can be put to good use in audio compression.

Critical bands are the source of our ability to distinguish one frequency from another. When a complex sound arrives at the basilar membrane, each critical band acts as a kind of bandpass filter, responding only to vibrations within its frequency spectrum. In this way, the sound is divided into frequency components. If two frequencies are received within the same band, the louder frequency can overpower the quieter one. This is the phenomenon of masking, first observed in Fletcher's original experiments.

Aside: A bandpass filter allows only the frequencies in a defined band to pass through, filtering out all other frequencies. Bandpass filters are studied in Chapter 7.

Critical bands within the ear are not fixed areas but instead are created during the experience of sound. Any audible sound can create a critical band centered on it. However, experimental analyses of critical bands have arrived at approximations that are useful guidelines in designing audio processing tools. Table 4.4 is one such model, based on the independent experiments of Fletcher, Zwicker, and Barkhausen, as cited in (Tobias, 1970).
Here, the basilar membrane is divided into 25 overlapping bands, each with a center frequency and with variable bandwidths across the audible spectrum. The width of each band is given in Hertz, semitones, and octaves. (The widths in semitones and octaves were derived from the widths in Hertz, as explained in Section 4.3.1.) The center frequencies are graphed against the critical bands in Hertz in Figure 4.10. You can see from the table and figure that, measured in Hertz, the critical bands are wider for higher frequencies than for lower. This implies that there is better frequency resolution at lower frequencies, because a narrower band results in less masking of frequencies in a local area.

The table shows that critical bands are generally in the range of two to four semitones wide, mostly less than four. This observation is significant as it relates to our experience of consonance vs. dissonance. Recall from Chapter 3 that a major third consists of four semitones. For example, the third from C to E is separated by four semitones (stepping from C to C#, C# to D, D to D#, and D# to E). Thus, the notes that are played simultaneously in a third generally occupy separate critical bands. This helps to explain why thirds are generally considered consonant – each of the notes has its own critical band. Seconds, whose two notes fall within the same critical band, are considered dissonant. At very low and very high frequencies, thirds begin to lose their consonance to most listeners. This is consistent with the fact that the critical bands at the low frequencies (100-200 and 200-300 Hz) and high frequencies (over 12,000 Hz) span more than a third, so that at these frequencies, a third lies within a single critical band.
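The conversion from a band's frequency range in Hertz to a width in semitones or octaves (deferred to Section 4.3 in the text) can be sketched directly: an octave is a doubling of frequency, and a semitone is 1/12 of an octave. The band edges below are taken from Table 4.4:

```python
import math

def band_width(f_low, f_high):
    """Width of a frequency band in octaves and semitones.

    An octave is a doubling of frequency, so the width in octaves
    is log2(f_high / f_low); a semitone is 1/12 of an octave.
    """
    octaves = math.log2(f_high / f_low)
    semitones = 12 * octaves
    return octaves, semitones

# Critical band 3 from Table 4.4 spans 200-300 Hz
octaves, semitones = band_width(200, 300)
print(f"{semitones:.1f} semitones, {octaves:.3f} octaves")  # → 7.0 semitones, 0.585 octaves
```

This matches the "7 semitones, 0.59 octaves" entry for band 3 in Table 4.4, and the 100–200 Hz band comes out to exactly 12 semitones (one octave), as expected.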
Table 4.4 An estimate of critical bands using the Bark scale

| Critical Band | Center Frequency in Hertz | Range of Frequencies in Hertz | Bandwidth in Hertz | Bandwidth in Semitones Relative to Start* | Bandwidth in Octaves Relative to Start* |
|---|---|---|---|---|---|
| 1 | 50 | 1–100 | 100 | - | - |
| 2 | 150 | 100–200 | 100 | 12 | 1 |
| 3 | 250 | 200–300 | 100 | 7 | 0.59 |
| 4 | 350 | 300–400 | 100 | 5 | 0.42 |
| 5 | 450 | 400–510 | 110 | 4 | 0.31 |
| 6 | 570 | 510–630 | 120 | 4 | 0.3 |
| 7 | 700 | 630–770 | 140 | 3 | 0.29 |
| 8 | 840 | 770–920 | 150 | 3 | 0.26 |
| 9 | 1000 | 920–1080 | 160 | 3 | 0.23 |
| 10 | 1170 | 1080–1270 | 190 | 3 | 0.23 |
| 11 | 1370 | 1270–1480 | 210 | 3 | 0.22 |
| 12 | 1600 | 1480–1720 | 240 | 3 | 0.22 |
| 13 | 1850 | 1720–2000 | 280 | 3 | 0.22 |
| 14 | 2150 | 2000–2320 | 320 | 3 | 0.21 |
| 15 | 2500 | 2320–2700 | 380 | 3 | 0.22 |
| 16 | 2900 | 2700–3150 | 450 | 3 | 0.22 |
| 17 | 3400 | 3150–3700 | 550 | 3 | 0.23 |
| 18 | 4000 | 3700–4400 | 700 | 3 | 0.25 |
| 19 | 4800 | 4400–5300 | 900 | 3 | 0.27 |
| 20 | 5800 | 5300–6400 | 1100 | 3 | 0.27 |
| 21 | 7000 | 6400–7700 | 1300 | 3 | 0.27 |
| 22 | 8500 | 7700–9500 | 1800 | 4 | 0.3 |
| 23 | 10500 | 9500–12000 | 2500 | 4 | 0.34 |
| 24 | 13500 | 12000–15500 | 3500 | 4 | 0.37 |
| 25 | 18775 | 15500–22050 | 6550 | 6 | 0.5 |

*See Section 4.3.2 for an explanation of how the last two columns of this table were derived.

Figure 4.10 Critical bands graphed from Table 4.4

## 4.1.6.3 Amplitude Perception

In the early 1930s at Bell Laboratories, groundbreaking experiments by Fletcher and Munson clarified the extent to which our perception of loudness varies with frequency (Fletcher and Munson, 1933). Their results, refined by later researchers (Robinson and Dadson, 1956) and adopted as International Standard ISO 226, are illustrated in a graph of equal-loudness contours shown in Figure 4.11. In general, the graph shows how much you have to “turn up” or “turn down” a single frequency tone to make it sound equally loud to a 1000 Hz tone. Each curve on the graph represents an n-phon contour. One phon is defined as a 1000 Hz sound wave at a loudness of 1 dBSPL. An n-phon contour is created as follows:

• Frequency is on the horizontal axis and loudness in decibels is on the vertical axis.
• n curves are drawn.
• Each curve, from 1 to n, represents the intensity levels necessary in order to make each frequency, across the audible spectrum, sound equal in loudness to a 1000 Hz wave at n dBSPL.

Let’s consider, for example, the 10-phon contour. This contour was created by playing a 1000 Hz pure tone at a loudness level of 10 dBSPL, and then asking groups of listeners to say when they thought pure tones at other frequencies matched the loudness of the 1000 Hz tone. Notice that low-frequency tones had to be increased by as much as 60 to 75 dB to sound equally loud. Some of the higher-frequency tones – in the vicinity of 3000 Hz – actually had to be turned down in volume to sound equally loud to the 10 dBSPL 1000 Hz tone. Also notice that the louder the 1000 Hz tone is, the less lower-frequency tones have to be turned up to sound equal in loudness. For example, the 90-phon contour goes up only about 30 dB to make the lowest frequencies sound equal in loudness to 1000 Hz at 90 dBSPL, whereas the 10-phon contour has to be turned up about 75 dB.

Figure 4.11 Equal loudness contours (Figure derived from a program by Jeff Tacket, posted at the MATLAB Central File Exchange)

With the information captured in the equal loudness contours, devices that measure the loudness of sounds – for example, SPL meters (sound pressure level meters) – can be designed so that they compensate for the fact that low frequency sounds seem less loud than high frequency sounds at the same amplitude. This compensation is called “weighting.” Figure 4.12 graphs three weighting functions – A, B, and C. The A, B, and C-weighting functions are approximately inversions of the 40-phon, 70-phon, and 100-phon loudness contours, respectively. This implies that applying A-weighting in an SPL meter causes the meter to measure loudness in a way that matches our differences in loudness perception at 40 phons.
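The A-weighting curve has a standard closed-form definition. The constants below come from the IEC 61672 sound-level-meter standard, not from this chapter, so treat this as a sketch of the standard curve rather than something derived here:

```python
import math

def a_weighting_db(f):
    """Approximate A-weighting gain in dB at frequency f (Hz), per IEC 61672."""
    f2 = f * f
    ra = (12194**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.00  # +2.00 dB normalizes the curve to 0 dB at 1 kHz

# Low frequencies are strongly attenuated; the response is ~0 dB at 1000 Hz
# and slightly positive around 3000 Hz, mirroring the equal-loudness contours
for f in [100, 1000, 3000]:
    print(f"{f:5d} Hz: {a_weighting_db(f):6.1f} dB")
```

At 100 Hz the function returns roughly −19 dB, which is why an A-weighted SPL meter largely ignores low-frequency rumble when reporting a loudness measurement.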
To understand how this works, think of the graphs of the weighting functions as frequency filters – also called frequency response graphs. When a weighting function is applied by an SPL meter, the meter uses a filter to reduce the influence of frequencies to which our ears are less sensitive, and conversely to increase the weight of frequencies to which our ears are more sensitive. The fact that the A-weighting graph is lower on the left side than on the right means that an A-weighted SPL meter reduces the influence of low-frequency sounds as it takes its overall loudness measurement. On the other hand, it boosts the amplitude of frequencies around 3000 Hz, as seen by the bump above 0 dB around 3000 Hz. It doesn’t matter that the SPL meter meddles with frequency components as it measures loudness. After all, it isn’t measuring frequencies. It’s measuring how loud the sounds seem to our ears. The use of weighted SPL meters is discussed further in Section 4.2.2.2.

Figure 4.12 Graphs of A, B, and C-weighting functions (Figure derived from a program by Jeff Tacket, posted at the MATLAB Central File Exchange)

# 4.1.7 The Interaction of Sound with its Environment

Sometimes it's convenient to simplify our understanding of sound by considering how it behaves when there is nothing in the environment to impede it. An environment with no physical influences to absorb, reflect, diffract, refract, reverberate, resonate, or diffuse sound is called a free field. A free field is an idealization of real world conditions that facilitates our analysis of how sound behaves. Sound in a free field can be pictured as radiating out from a point source, diminishing in intensity as it gets farther from the source. A free field is partially illustrated in Figure 4.13. In this figure, sound is radiating out from a loudspeaker, with the colors indicating highest to lowest intensity sound in the order red, orange, yellow, green, and blue. The area in front of the loudspeaker might be considered a free field.
However, because the loudspeaker partially blocks the sound from going behind itself, the sound is lower in amplitude there. You can see that there is some sound behind the loudspeaker, resulting from reflection and diffraction.

Figure 4.13 Sound radiation from a loudspeaker, viewed from top

## 4.1.7.1 Absorption, Reflection, Refraction, and Diffraction

In the real world, there are any number of things that can get in the way of sound, changing its direction, amplitude, and frequency components. In enclosed spaces, absorption plays an important role. Sound absorption is the conversion of sound’s energy into heat, diminishing the intensity of the sound. The diminishing of sound intensity is called attenuation. A general mathematical formulation for the way sound attenuates as it moves through the air is captured in the inverse square law, which states that sound intensity decreases in inverse proportion to the square of the distance from the source. (See Section 4.2.1.6.) The attenuation of sound in the air is due to the air molecules themselves absorbing and converting some of the energy to heat. The amount of attenuation depends in part on the air temperature and relative humidity. Thick, porous materials can absorb and attenuate the sound even further, and they're often used in architectural treatments to modify and control the acoustics of a room. Even hard, solid surfaces absorb some of the sound energy, although most of it is reflected back. The material of walls and ceilings, the number and material of seats, the number of persons in an audience, and all solid objects have to be taken into consideration acoustically in sound setups for live performance spaces.

Sound that is not absorbed by an object is instead reflected from, diffracted around, or refracted into the object. Hard surfaces reflect sound more than soft ones, which are more absorbent. The law of reflection states that the angle of incidence of a wave is equal to the angle of reflection.
This means that a wave propagating in a straight line from its source reflects in the way pictured in Figure 4.14. In reality, however, sound radiates out spherically from its source. Thus, a wavefront of sound approaches objects and surfaces from various angles. Imagine a cross-section of the moving wavefront approaching a straight wall, as seen from above. Its reflection would be as pictured in Figure 4.15, like a mirror reflection.

Figure 4.14 Angle of incidence equals angle of reflection

Figure 4.15 Sound radiating from source and reflecting off flat wall, as seen from above

In a special case, if the wavefront were to approach a concave curved solid surface, it would be reflected back to converge at one point in the room, the location of that point depending on the angle of the curve. This is how whispering rooms are constructed, such that two people whispering in the room can hear each other perfectly if they're positioned at the sound’s focal points, even though the focal points may be at the far opposite ends of the room. A person positioned elsewhere in the room cannot hear their whispers at all. A common shape for whispering rooms is an ellipse, as seen in Figure 4.16. The shape and curve of these walls cause any and all sound emanating from one focal point to reflect directly to the other.

Figure 4.16 Sound reflects directly between focal points in a whispering room

Aside: Diffraction also has a lot to do with microphone and loudspeaker directivity. Consider how microphones often have different polar patterns at different frequencies. Even with a directional mic, you’ll often see lower frequencies behave more omnidirectionally, and sometimes an omnidirectional mic may be more directional at high frequencies. That’s largely because of the size of the wavelength compared to the size of the microphone diaphragm.
It’s hard for high frequencies to diffract around a larger object, so for a mic to have a truly omnidirectional pattern, the diaphragm has to be very small.

Diffraction is the bending of a sound wave as it moves past an obstacle or through a narrow opening. The phenomenon of diffraction allows us to hear sounds from sources that are not in direct line-of-sight, such as a person standing around a corner or on the other side of a partially obstructing object. The amount of diffraction depends on the relationship between the size of the obstacle and the size of the sound’s wavelength. Low frequency sounds (i.e., long-wavelength sounds) are diffracted more than high frequency sounds (i.e., short wavelengths) around the same obstacle. In other words, low frequency sounds are better able to travel around obstacles. In fact, if the wavelength of a sound is significantly larger than an obstacle that the sound encounters, the sound wave continues as if the obstacle isn’t even there. For example, your stereo speaker drivers are probably protected behind a plastic or metal grill, yet the sound passes through it intact and without noticeable coloration. The obstacle presented by the wire mesh of the grill (perhaps a millimeter or two in diameter) is even smaller than the smallest wavelength we can hear (about 2 centimeters for 20 kHz, 10 to 20 times larger than the wire), so the sound diffracts easily around it.

Refraction is the bending of a sound wave as it moves through different media. Typically we think of refraction with light waves, as when we look at something through glass or underwater. In acoustics, the refraction of sound waves tends to be more gradual, as the properties of the air change subtly over longer distances. This causes a bending in sound waves over a long distance, primarily due to temperature, humidity, and in some cases wind gradients over distance and altitude.
This bending can result in noticeable differences in sound levels, either as a boost or an attenuation, also referred to as a shadow zone.

## 4.1.7.2 Reverberation, Echo, Diffusion, and Resonance

Reverberation is the result of sound waves reflecting off of many objects or surfaces in the environment. Imagine an indoor room in which you make a sudden burst of sound. Some of that sound is transmitted through or absorbed by the walls or objects, and the rest is reflected back, bouncing off the walls, ceilings, and other surfaces in the room. The sound wave that travels straight from the sound source to your ears is called the direct signal. The first few instances of reflected sound are called primary or early reflections. Early reflections arrive at your ears about 60 ms or sooner after the direct sound, and they play a large part in imparting a sense of space and room size to the human ear. Early reflections may be followed by a handful of secondary and higher-order reflections. At this point, the sound waves have had plenty of opportunity to bounce off of multiple surfaces, multiple times. As a result, the reflections that are arriving now are more numerous, closer together in time, and quieter. Much of the initial energy of the reflections has been absorbed by surfaces or expended in the distance traveled through the air. This dense collection of reflections is reverberation, illustrated in Figure 4.17. Assuming that the sound source is only momentary, the generated sound eventually decays as the waves lose energy, the reverberation becoming less and less loud until the sound is no longer discernible. Typically, reverberation time is defined as the time it takes for the sound to decay in level by 60 dB from the level of the direct signal.
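Reverberation time can even be estimated before a room is built. A classic approximation is Sabine's equation, $RT_{60} \approx 0.161\,V/A$, with $V$ the room volume in cubic meters and $A$ the total absorption in metric sabins. This formula is a standard acoustics result that is not derived in this chapter, and the room dimensions and absorption coefficients below are made-up illustrative values:

```python
def rt60_sabine(volume_m3, surfaces):
    """Estimate RT60 in seconds with Sabine's equation.

    surfaces: list of (area_m2, absorption_coefficient) pairs.
    Total absorption A = sum(area * coefficient), in metric sabins.
    """
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 10 m x 8 m x 3 m room: plaster walls and ceiling, carpeted floor
# (absorption coefficients here are rough illustrative guesses, not measurements)
room = rt60_sabine(
    volume_m3=10 * 8 * 3,
    surfaces=[
        (2 * (10 * 3) + 2 * (8 * 3), 0.03),  # four walls, plaster
        (10 * 8, 0.03),                      # ceiling, plaster
        (10 * 8, 0.30),                      # floor, carpet
    ],
)
print(f"Estimated RT60: {room:.1f} s")  # → Estimated RT60: 1.3 s
```

Swapping the carpet for a hard floor (a much smaller coefficient) lengthens the estimate considerably, which matches the intuition that absorptive materials shorten a room's reverberation.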
Figure 4.17 Sound reflections and reverberation

Single, strong reflections that reach the ear a significant amount of time – about 100 ms – after the direct signal can be perceived as an echo – essentially a separate recurrence of the original sound. Even reflections as little as 50 ms apart can cause an audible echo, depending on the type of sound and the room acoustics. While echo is often employed artistically in music recordings, echoes tend to be detrimental and distracting in a live setting and are usually avoided or remediated in performance and listening spaces.

Diffusion is another property that interacts with reflections and reverberation. Diffusion is the ability to distribute sound energy more evenly in a listening space. While a flat, even surface reflects sound strongly in a predictable direction, uneven surfaces or convex curved surfaces diffuse sound more randomly and evenly. Like absorption, diffusion is often used to treat a space acoustically to help break up harsh reflections that interfere with the natural sound. Unlike absorption, however, which attempts to eliminate the unwanted sound waves by reducing the sound energy, diffusion attempts to redirect the sound waves in a more natural manner. A room with lots of absorption has less overall reverberation, while diffusion maintains the sound’s intensity and helps turn harsh reflections into more pleasant reverberation. Usually a combination of absorption and diffusion is employed to achieve the optimal result. There are many types of diffusing surfaces and panels that are manufactured based on mathematical algorithms to provide the most random, diffuse reflections possible.

Putting these concepts together, we can say that the amount of time it takes for a particular sound to decay depends on the size and shape of the room, its diffusive properties, and the absorptive properties of the walls, ceilings, and objects in the room.
In short, all the aforementioned properties determine how sound reverberates in a space, giving the listener a "sense of place." Reverberation in an auditorium can enhance the listener's experience, particularly in the case of a music hall, where it gives the individual sounds a richer quality and helps them blend together. Excessive reverberation, however, can reduce intelligibility and make it difficult to understand speech. In Chapter 7, you'll see how artificial reverberation is applied in audio processing.

A final important acoustical property to be considered is resonance. In Chapter 2, we defined resonance as an object’s tendency to vibrate or oscillate at a certain frequency that is basic to its nature. Like a musical instrument, a room has a set of resonant frequencies, called its room modes. Room modes result in locations in a room where certain frequencies are boosted or attenuated, making it difficult to give all listeners the same audio experience. We'll talk more about how to deal with room modes in Section 4.2.2.5.

# 4.2.1 Working with Decibels

## 4.2.1.1 Real-World Considerations

We now turn to practical considerations related to the concepts introduced in Section 4.1. We first return to the concept of decibels. An important part of working with decibel values is learning to recognize and estimate decibel differences. If a sound isn’t loud enough, how much louder does it need to be? Until you can answer that question with a dB value, you will have a hard time figuring out what to do. It's also important to understand the kind of dB differences that are audible. The average listener cannot distinguish a difference in sound pressure level that is less than 3 dB. With training, you can learn to recognize differences in sound pressure level of 1 dB, but differences of less than 1 dB are indistinguishable even to well-trained listeners. Understanding the limitations of human hearing is very important when working with sound.
For example, when investigating changes you can make to your sound equipment to get higher sound pressure levels, you should be aware that unless the change amounts to 3 dB or more, most of your listeners will probably not notice. This concept also applies when processing audio signals. When manipulating the frequency response of an audio signal using an equalizer, unless you’re making a difference of 3 dB with one of your filters, the change will be imperceptible to most listeners.

Having a reference to use when creating audio material or sound systems is also helpful. For example, there are usually loudness requirements imposed by the television network for television content. If these requirements are not met, there will be level inconsistencies between the various programs on the television station that can be very annoying to the audience. These requirements could be as simple as limiting peak levels to −10 dBFS or as strict as meeting a specified dBFS average across the duration of the show. You might also be putting together equipment that delivers sound to a live audience in an acoustic space. In that situation you need to know how loud in dBSPL the system needs to perform at the distance of the audience. There is a minimum dBSPL level you need to achieve in order to get the signal above the noise floor of the room, but there is also a maximum dBSPL level you need to stay under in order to avoid damaging people’s hearing or violating laws or policies of the venue. Once you know these requirements, you can begin to evaluate the performance of the equipment to verify that it can meet them.

## 4.2.1.2 Rules of Thumb

Table 4.2 gives you some rules of thumb for how changes in dB are perceived as changes in loudness. Turn a sound up by 10 dB and it sounds about twice as loud. Turn it up by 3 dB, and you’ll hardly notice any difference. Similarly, Table 4.5 gives you some rules of thumb regarding power and voltage changes.
These rules give you a quick sense of how boosts in power and voltage affect sound levels.

Table 4.5 Rules of thumb for changes in power, voltage, or distance in dB

| change in power, voltage, or distance | approximate change in dB |
|---|---|
| power $\ast$ 2 | 3 dB increase |
| power ÷ 2 | 3 dB decrease |
| power $\ast$ 10 | 10 dB increase |
| power ÷ 10 | 10 dB decrease |
| voltage $\ast$ 2 | 6 dB increase |
| voltage ÷ 2 | 6 dB decrease |
| voltage $\ast$ 10 | 20 dB increase |
| voltage ÷ 10 | 20 dB decrease |
| distance away from source $\ast$ 2 | 6 dB decrease |

In the following sections, we’ll give examples of how these rules of thumb come into practice. A mathematical justification of these rules is given in Section 4.3.

## 4.2.1.3 Determining Power and Voltage Differences and Desired Changes in Power Levels

Decibels are also commonly used to compare the power levels of loudspeakers and amplifiers. For power, Equation 4.6 applies: $\Delta Power \: dB = 10\log_{10}\left ( \frac{P_{1}}{P_{0}} \right )$. Based on this equation, how much more powerful is an 800 W amplifier than a 200 W amplifier, in decibels? The answer is $10\log_{10}\left ( \frac{800}{200} \right ) \approx 6\:dB$. For voltages, Equation 4.4 is used ($\Delta Voltage\:dB=20\log_{10}\left ( \frac{V_{1}}{V_{0}} \right )$). If you increase a voltage level from 100 V to 1000 V, the increase is $20\log_{10}\left ( \frac{1000}{100} \right ) = 20\:dB$.

Aside: Multiplying power times 2 corresponds to multiplying voltage times $\sqrt{2}$ because power is proportional to voltage squared: $P\propto V^{2}$. Thus $10\log_{10}\left ( \frac{2P}{P} \right ) = 10\log_{10}\left ( \frac{\left ( \sqrt{2}V \right )^{2}}{V^{2}} \right ) = 20\log_{10}\left ( \frac{\sqrt{2}V}{V} \right ) \approx 3\:dB$

It’s worth pointing out here that because the definition of decibels-sound-pressure-level was derived from the power decibel definition, a 3 dB increase in the power of an amplifier produces a corresponding 3 dB increase in the sound pressure level it produces. We know that a 3 dB increase in sound pressure level is barely detectable, so the implication is that doubling the power of an amplifier doesn’t increase the loudness of the sounds it produces very much.
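The power and voltage decibel formulas named above (Equations 4.6 and 4.4) translate directly into code; a minimal sketch:

```python
import math

def power_db(p1, p0):
    """Decibel difference between two power levels (the 10*log10 form)."""
    return 10 * math.log10(p1 / p0)

def voltage_db(v1, v0):
    """Decibel difference between two voltage levels (the 20*log10 form)."""
    return 20 * math.log10(v1 / v0)

print(power_db(800, 200))    # an 800 W amp is ~6 dB more powerful than a 200 W amp
print(voltage_db(1000, 100)) # 100 V -> 1000 V is a 20 dB increase
print(power_db(2, 1))        # doubling power: ~3 dB, matching the rule of thumb
print(voltage_db(2, 1))      # doubling voltage: ~6 dB, matching the rule of thumb
```

Note how the same factor of 2 yields 3 dB for power but 6 dB for voltage, which is exactly the relationship the aside explains via $P \propto V^{2}$.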
You have to multiply the power of the amplifier by ten in order to get sounds that are approximately twice as loud. The fact that doubling the power gives about a 3 dB increase in sound pressure level has implications with regard to how many speakers you ought to use for a given situation. If you double the speakers (assuming identical speakers), you double the power, but you get only a 3 dB increase in sound level. If you quadruple the speakers, you get a 6 dB increase in sound because each time you double, you go up by 3 dB. If you double the speakers again (eight speakers now), you hypothetically get a 9 dB increase, not taking into account other acoustical factors that may affect the sound level.

Often, your real world problem begins with a dB increase you’d like to achieve in your live sound setup. What if you want to increase the level by ΔdB? You can figure out how to do this with the power ratio formula, derived in Equation 4.11.

$\Delta dB = 10\log_{10}\left ( \frac{P_{1}}{P_{0}} \right )$

Thus

$10^{\Delta dB/10} = \frac{P_{1}}{P_{0}}$

where $P_{0}$ is the starting power, $P_{1}$ is the new power level, and ΔdB is the desired change in decibels

Equation 4.11 Derivation of power ratio formula

It may help to recast the equation to clarify that for the problem we’ve described, the desired decibel change and the beginning power level are known, and we wish to compute the new power level needed to get this decibel change.

$P_{1} = P_{0}\ast 10^{\Delta dB/10}$

where $P_{0}$ is the starting power, $P_{1}$ is the new power level, and ΔdB is the desired change in decibels

Equation 4.12 Power ratio formula

Applying this formula, what if you start with a 300 W amplifier and want to get one that is 15 dB louder? Then $P_{1} = 300\:W\ast 10^{15/10} \approx 9487\:W$. You can see that it takes quite an increase in wattage to increase the power by 15 dB. Instead of trying to get more watts, a better strategy would be to choose different loudspeakers that have a higher sensitivity. The sensitivity of a loudspeaker is defined as the sound pressure level that is produced by the loudspeaker with 1 watt of power when measured 1 meter away.
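The power ratio formula named in Equation 4.12 can be put into code to answer the "how many watts for ΔdB?" question directly; a small sketch:

```python
def new_power(p0_watts, delta_db):
    """Power needed to raise the level by delta_db, using P1 = P0 * 10^(dB/10)."""
    return p0_watts * 10 ** (delta_db / 10)

# 15 dB louder than a 300 W amplifier takes nearly 9,500 W
print(round(new_power(300, 15)))  # → 9487

# ...but a barely noticeable +3 dB only requires roughly doubling to ~600 W
print(round(new_power(300, 3)))
```

The steep cost of decibel gains is plain here: every extra 10 dB multiplies the required wattage by ten, which is why chasing higher loudspeaker sensitivity is usually a better strategy than buying a bigger amplifier.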
Also, because the voltage gain in a power amplifier is fixed, before you go buy a bunch of new loudspeakers, you may also want to make sure that you're feeding the highest possible voltage signal into the power amplifier. It's quite possible that the 15 dB increase you're looking for is hiding somewhere in the signal chain of your sound system due to inefficient gain structure between devices. If you can get 15 dB more voltage into the amplifier by optimizing your gain structure, the power amplifier quite happily amplifies that higher voltage signal, assuming you haven’t exceeded the maximum input voltage for the power amplifier. Chapter 8 includes a Max demo on gain structure that may help you with this concept.

## 4.2.1.4 Converting from One Type of Decibels to Another

A similar problem arises when you have two pieces of sound equipment whose nominal output levels are measured in decibels of different types. For example, you may want to connect two devices where the nominal voltage output of one is given in dBV and the nominal voltage output of the other is given in dBu. You first want to know if the two voltage levels are the same. If they are not, you want to know how much you have to boost the one of lower voltage to match the higher one. The way to do this is to convert both dBV and dBu back to voltage. You can then compare the two voltage levels in dB. From this you know how much the lower voltage hardware needs to be boosted. Consider an example where one device has an output level of −10 dBV and the other operates at 4 dBu.

Convert −10 dBV to voltage:

$-10 = 20\log_{10}\left ( \frac{V}{1\:V} \right )$

$V = 10^{-10/20}\:V \approx 0.316\:V$

Thus, −10 dBV converts to 0.316 V. By a similar computation, we get the voltage corresponding to 4 dBu, this time using 0.775 V as the reference value in the denominator.

Convert 4 dBu to voltage:

$4 = 20\log_{10}\left ( \frac{V}{0.775\:V} \right )$

$V = 0.775\:V\ast 10^{4/20} \approx 1.228\:V$

Thus, 4 dBu converts to 1.228 V. Now that we have the two voltages, we can compute the decibel difference between them.
Compute the voltage difference between 0.316 V and 1.228 V:

$\Delta Voltage\:dB = 20\log_{10}\left ( \frac{1.228}{0.316} \right ) \approx 11.8\:dB$

From this you see that the lower-voltage device needs to be boosted by about 12 dB in order to match the other device.

## 4.2.1.5 Combining Sound Levels from Multiple Sources

In the last few sections, we’ve been discussing mostly power and voltage decibels. These decibel computations are relevant to our work because power levels and voltages produce sounds. But we can’t hear volts and watts. Ultimately, what we want to know is how loud things sound. Let’s return now to decibels as they measure audible sound levels. Think about what happens when you add one sound to another in the air or on a wire and want to know how loud the combined sound is in decibels. In this situation, you can’t just add the two decibel levels. For example, if you add an 85 dBSPL lawnmower on top of a 110 dBSPL symphony orchestra, how loud is the sound? It isn’t 85 dBSPL + 110 dBSPL = 195 dBSPL. Instead, we derive the sum of decibels $d_{1} = 85\:dBSPL$ and $d_{2} = 110\:dBSPL$ as follows:

Convert $d_{1}$ to air pressure: $p_{1} = 0.00002\:Pa\ast 10^{85/20} \approx 0.356\:Pa$

Convert $d_{2}$ to air pressure: $p_{2} = 0.00002\:Pa\ast 10^{110/20} \approx 6.325\:Pa$

Sum the air pressure amplitudes and convert back to dBSPL: $20\log_{10}\left ( \frac{0.356 + 6.325}{0.00002} \right ) \approx 110.5\:dBSPL$

The combined sounds in this case are not perceptibly louder than the louder of the two original sounds being combined!

## 4.2.1.6 Inverse Square Law

The last row of Table 4.5 is known as the inverse square law, which states that the intensity of sound from a point source is proportional to the inverse of the square of the distance r from the source. Perhaps of more practical use is the related rule of thumb that for every doubling of distance from a sound source, you get a decrease in sound level of 6 dB. We can informally prove the inverse square law by the following argument. For simplification, imagine a sound as coming from a point source. This sound radiates spherically (equally in all directions) from the source. Sound intensity is defined as sound power passing through a unit area.
The fact that intensity is measured per unit area is what is significant here. You can picture the sound spreading out as it moves away from the source. The farther the sound gets away from the source, the more it has “spread out,” and thus its intensity lessens per unit area as the sphere representing the radiating sound gets larger. This is illustrated in Figure 4.18.

Figure 4.18 Sphere representing sound radiating from a point source; radii representing two different distances from this sound source

Figure 4.19 Applying the inverse square law

This phenomenon of sound attenuation as sound moves from a source is captured in the inverse square law, illustrated in Figure 4.18:

$$I_{1} = I_{0} + 20\log_{10}\left(\frac{r_{0}}{r_{1}}\right)$$

where $r_{0}$ is the initial distance from the sound, $r_{1}$ is the new distance from the sound, $I_{0}$ is the intensity of the sound at the microphone in decibels, and $I_{1}$ is the intensity of the sound at the listener in decibels

Equation 4.13 Inverse square law

What this means in practical terms is the following. Say you have a sound source, a singer, who is a distance $r_{0}=$ 7' 11" from the microphone, as shown in Figure 4.19. The microphone detects her voice at a level of $I_{0}=$ 50 dBSPL. The listener is a distance $r_{1}=$ 49' 5" from the singer. Then the sound reaching the listener from the singer has an intensity of

$$I_{1} = 50 + 20\log_{10}\left(\frac{7.92}{49.42}\right) \approx 50 - 15.9 \approx 34\ \text{dBSPL}$$

Notice that when $r_{1} > r_{0}$, the logarithm gives a negative number, which makes sense because the sound is less intense as you move away from the source. The inverse square law is a handy rule of thumb. Each time we double the distance from our source, we decrease the sound level by 6 dB. The first doubling of distance is a perceptible but not dramatic decrease in sound level. Another doubling of distance (which would be four times the original distance from the source) yields a 12 dB decrease, which makes the source sound less than half as loud as it did from the initial distance. These numbers are only approximations for ideal free-field conditions.
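The computation above is easy to script. Here is a small Python sketch of Equation 4.13 using the chapter's example values (the function name is ours):

```python
import math

def inverse_square(i0_db, r0, r1):
    """Sound level at distance r1 given level i0_db measured at r0
    (Equation 4.13, free-field point source)."""
    return i0_db + 20 * math.log10(r0 / r1)

# The chapter's example: 50 dBSPL at 7'11" from the singer,
# listener at 49'5" (distances converted to feet)
r0 = 7 + 11 / 12
r1 = 49 + 5 / 12
level_at_listener = inverse_square(50, r0, r1)  # ≈ 34 dBSPL

# Rule of thumb: each doubling of distance loses about 6 dB
drop_per_doubling = inverse_square(0, 1, 2)     # ≈ -6.02 dB
```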
Many other factors intervene in real-world acoustics. But the inverse square law gives a general idea of sound attenuation that is useful in many situations. # 4.2.2 Acoustic Considerations for Live Performances ## 4.2.2.1 Potential Acoustic Gain (PAG) The acoustic gain of an amplification system is the difference between the loudness as perceived by the listener when the sound system is turned on as compared to when the sound system is turned off. One goal of the sound engineer is to achieve a high potential acoustic gain, or PAG – the gain in decibels that can be added to the original sound without causing feedback. This potential acoustic gain is the entire reason the sound system is installed and the sound engineer is hired. If you can’t make the sound louder and more intelligible, you fail as a sound engineer. The word “potential” is used here because the PAG represents the maximum gain possible without causing feedback. Feedback can occur when the loudspeaker sends an audio signal back through the air to the microphone at the same level or louder than the source. In this situation, the two similar sounds arrive at the microphone at the same level but at a different phase. The first frequency from the loudspeaker to combine with the source at a 360 degree phase relationship is reinforced by 6 dB. The 6 dB reinforcement at that frequency happens over and over in an infinite loop. This sounds like a single sine wave that gets louder and louder. Without intervention on the part of the sound engineer, this sound continues to get louder until the loudspeaker is overloaded. To stop a feedback loop, you need to interrupt the electro-acoustical path that the sound is traveling by either muting the microphone on the mixing console or turning off the amplifier that is driving the loudspeaker. 
If feedback happens too many times, you'll likely not be hired again. When setting up for a live performance, an important function of the sound engineer operating the amplification/mixing system is to set the initial sound levels. The equation for PAG is given below.

$$PAG = 20\log_{10}\left(\frac{D_{1}\ast D_{0}}{D_{s}\ast D_{2}}\right)$$

where $D_{s}$ is the distance from the sound source to the microphone, $D_{0}$ is the distance from the sound source to the listener, $D_{1}$ is the distance from the microphone to the loudspeaker, and $D_{2}$ is the distance from the loudspeaker to the listener

Equation 4.14 Potential acoustic gain (PAG)

PAG is the limit. The amount of gain added to the signal by the sound engineer in the sound booth must be less than this. Otherwise, there will be feedback. In typical practice, you should stay 6 dB below this limit in order to avoid the initial sounds of the onset of feedback. This is sometimes described as sounding “ringy” because the sound system is in a situation where it is trying to cause feedback but hasn’t quite found a frequency at exactly a 360° phase offset. This 6 dB safety factor should be applied to the result of the PAG equation. The amount of acoustic gain needed for any situation varies, but as a rule of thumb, if your PAG is less than 12 dB, you need to make some adjustments to the physical locations of the various elements of the sound system in order to increase the acoustic gain. In the planning stages of your sound system design, you’ll be making guesses about how much gain you need. Generally you want the highest possible PAG, but in your efforts to increase the PAG you will eventually get to a point where the compromises required to increase the gain are unacceptable. These compromises could include financial cost and visual aesthetics. Once the sound system has been purchased and installed, you'll be able to test the system to see how close your PAG predictions are to reality.
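Those predictions amount to evaluating Equation 4.14 for candidate layouts. A small Python sketch (the distances below are hypothetical, not from the text):

```python
import math

def potential_acoustic_gain(ds, d0, d1, d2):
    """Equation 4.14: PAG = 20*log10((D1 * D0) / (Ds * D2)).
    ds: source to microphone, d0: source to listener,
    d1: microphone to loudspeaker, d2: loudspeaker to listener."""
    return 20 * math.log10((d1 * d0) / (ds * d2))

# A hypothetical layout, all distances in feet
pag = potential_acoustic_gain(ds=2, d0=50, d1=20, d2=40)  # ≈ 21.9 dB
usable_gain = pag - 6  # apply the 6 dB feedback safety factor
```

Note how the formula rewards moving the source closer to the microphone (smaller $D_{s}$) and the loudspeaker farther from the microphone (larger $D_{1}$).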
If you find that the system causes feedback before you're able to turn the volume up to the desired level, you don't have enough PAG in your system. You need to make adjustments to your sound system in order to increase your gain before feedback. Figure 4.20 Potential acoustic gain, $PAG=20\log_{10}\left ( \frac{D_{1}\ast D_{0}}{D_{s}\ast D_{2}} \right )$ Increasing the PAG can be achieved by a number of means, including: • Moving the source closer to the microphone • Moving the loudspeaker farther from the microphone • Moving the loudspeaker closer to the listener. It’s also possible to use directional microphones and loudspeakers or to apply filters or equalization, although these methods do not yield the same level of success as physically moving the various sound system components. These issues are illustrated in the interactive Flash tutorial associated with this section. Note that PAG is the “potential” gain. Not all aspects of the sound need to be amplified by this much. The gain just gives you “room to play.” Faders in the mixer can still bring down specific microphones or frequency bands in the signal. But the potential acoustic gain lets you know how much louder than the natural sound you will be able to achieve. The Flash tutorial associated with this section helps you to visualize how acoustic gain works and what its consequences are. ## 4.2.2.2 Checking and Setting Sound Levels One fundamental part of analyzing an acoustic space is checking sound levels at various locations in the listening area. In the ideal situation, you want everything to sound similar at various listening locations. A realistic goal is to have each listening location be within 6 dB of the other locations. If you find locations that are outside that 6 dB range, you may need to reposition some loudspeakers, add loudspeakers, or apply acoustic treatment to the room. 
With the knowledge of decibels and acoustics that you gained in Section 1, you should have a better understanding now of how this works. There are two types of sound pressure level (SPL) meters for measuring sound levels in the air. The most common is a dedicated handheld SPL meter like the one shown in Figure 4.21. These meters have a built-in microphone and operate on battery power. They have been specifically calibrated to convert the voltage level coming from the microphone into a value in dBSPL. There are some options to configure that can make your measurements more meaningful. One option is the response time of the meter. A fast response allows you to see level changes that are short, such as peaks in the sound wave. A slow response shows you more of an average SPL. Another option is the weighting of the meter. The concept of SPL weighting comes from the equal loudness contours explained in Section 4.1.6.3. Since the frequency response of the human hearing system changes with the SPL, a number of weighting contours are offered, each modeling the human frequency response with a slightly different emphasis. A-weighting has a rather steep roll-off at low frequencies. This means that the low frequencies are attenuated more than they are in B or C weighting. B-weighting has less roll-off at low frequencies. C-weighting is almost a flat frequency response except for a little attenuation at low frequencies. The rules of thumb are that if you’re measuring levels of 90 dBSPL or lower, A-weighting gives you the most accurate representation of what you’re hearing. For levels between 90 dBSPL and 110 dBSPL, B-weighting gives you the most accurate indication of what you hear. Levels in excess of 110 dBSPL should use C-weighting. If your SPL meter doesn’t have an option for B-weighting, you should use C-weighting for all measurements higher than 90 dBSPL.
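These rules of thumb reduce to a simple decision procedure, sketched below in Python (purely illustrative; the function name is ours):

```python
def recommended_weighting(level_dbspl, has_b_weighting=True):
    """Rule-of-thumb weighting choice from the text: A up to 90 dBSPL,
    B from 90 to 110 dBSPL, C above 110 dBSPL. If the meter has no
    B-weighting option, use C for everything above 90 dBSPL."""
    if level_dbspl <= 90:
        return "A"
    if level_dbspl <= 110:
        return "B" if has_b_weighting else "C"
    return "C"

recommended_weighting(85)                          # "A"
recommended_weighting(100)                         # "B"
recommended_weighting(100, has_b_weighting=False)  # "C"
```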
Figure 4.21 Handheld SPL meter The other type of SPL meter is one that is part of a larger acoustic analysis system. As described in Chapter 2, these systems can consist of a computer, audio interface, analysis microphone, and specialized audio analysis software. When using this analysis software to make SPL measurements, you need to calibrate the software. The issue here is that because the software has no knowledge or control over the microphone sensitivity and the preamplifier on the audio interface, it has no way of knowing which analog voltage levels and corresponding digital sample values represent actual SPL levels. To solve this problem, an SPL calibrator is used. An SPL calibrator is a device that generates a 1 kHz sine wave at a known SPL level (typically 94 dBSPL) at the transducer. The analysis microphone is inserted into the round opening on the calibrator creating a tight seal. At this point, the tip of the microphone is up against the transducer in the calibrator, and the microphone is receiving a known SPL level. Now you can tell the analysis software to interpret the current signal level as a specific SPL level. As long as you don’t change microphones and you don’t change the level of the preamplifier, the calibrator can then be removed from the microphone, and the software is able to interpret other varying sound levels relative to the known calibration level. Figure 4.22 shows an SPL calibrator and the calibration window in the Smaart analysis software. Figure 4.22 Analysis software needs to be calibrated for SPL ## 4.2.2.3 Impulse Responses and Reverberation Time In addition to sound amplitude levels, it’s important to consider frequency levels in a live sound system. Frequency measurements are taken to set up the loudspeakers and levels such that the audience experiences the sound and balance of frequencies in the way intended by the sound designer. 
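Returning to SPL calibration for a moment: in software, the whole procedure boils down to storing one offset between the measured digital level (dBFS) and the known calibrator level. A hedged sketch of the idea (the function names and the −30 dBFS reading are hypothetical, not Smaart's actual API):

```python
import math

def rms_dbfs(samples):
    """RMS level of a block of samples relative to digital full scale (1.0)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def calibration_offset(calibrator_dbfs, calibrator_dbspl=94.0):
    """Offset that maps dBFS readings to dBSPL, measured while the
    microphone is sealed against the calibrator's transducer."""
    return calibrator_dbspl - calibrator_dbfs

# Suppose the 94 dBSPL calibrator tone measures -30 dBFS in software:
offset = calibration_offset(-30.0)  # 124 dB
# Any later reading can now be reported in dBSPL:
spl = -42.0 + offset                # a -42 dBFS signal is 82 dBSPL
```

The offset stays valid only as long as the microphone and preamplifier gain are unchanged, which is exactly why the text warns against changing either after calibrating.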
One way to do frequency analysis is to have an audio device generate a sudden burst or “impulse” of sound and then use appropriate software to graph the audio signal in the form of a frequency response. The frequency response graph, with frequency on the x-axis and the magnitude of the frequency component on the y-axis, shows the amount of each frequency in the audio signal in one window of time. An impulse response graph is generated in the same way that a frequency response graph is generated, using the same hardware and software. The impulse response graph (or simply impulse response) has time on the x-axis and amplitude of the audio signal on the y-axis. It is this graph that helps us to analyze the reverberations in an acoustic space. An impulse response measured in a small chamber music hall is shown in Figure 4.23. Essentially what you are seeing is the occurrences of the stimulus signal arriving at the measurement microphone over a period of time. The first big spike at around 48 milliseconds is the arrival of the direct sound from the loudspeaker. In other words, it took 48 milliseconds for the sound to arrive back at the microphone after the analysis software sent out the stimulus audio signal. The delay results primarily from the time it takes for sound to travel through the air from the loudspeaker to the measurement microphone, with a small amount of additional latency resulting from the various digital and analog conversions along the way. The next tallest spike at 93 milliseconds represents a reflection of the stimulus signal from some surface in the room. There are a few small reflections that arrive before that, but they’re not large enough to be of much concern. The reflection at 93 milliseconds arrives 45 milliseconds after the direct sound and is approximately 9 dB quieter than the direct sound. This is an audible reflection that is outside the precedence zone and may be perceived by the listener as an audible echo. 
(The precedence effect is explained in Section 4.2.2.6.) If this reflection proves to be problematic, you can try to absorb it. You can also diffuse it and convert it into the reverberant energy shown in the rest of the graph.

Figure 4.23 Impulse response of small chamber music hall

Before you can take any corrective action, you need to identify the surface in the room causing the reflection. The detective work can be tricky, but it helps to consider that you’re looking for a surface that is visible to both the loudspeaker and the microphone, positioned so that the reflected path is about 50 feet longer than the direct path between the loudspeaker and the microphone (45 milliseconds at roughly 1130 ft/s). In this case, the loudspeaker is up on the stage and the microphone out in the audience seats. More than likely, the reflection is coming from the upstage wall behind the loudspeaker. If you measure approximately 25 feet between the loudspeaker and that wall, you’ve probably found the culprit. To see if this is indeed the problem, you can put some absorptive material on that wall and take another measurement. If you’ve guessed correctly, you should see that spike disappear or get significantly smaller. If you wanted to give a speech or perform percussion instruments in this space, this reflection would probably cause intelligibility problems. However, in this particular scenario, where the room is primarily used for chamber music, this reflection is not of much concern. In fact, it might even be desirable, as it makes the room sound larger.

Aside: RT60 is the time it takes for reflections of a direct sound to decay by 60 dB.

As you can see in the graph, the overall sound energy decays very slowly over time. Some of that sound energy can be defined as reverberant sound. In a chamber music hall like this, a longer reverberation time might be desirable. In a lecture hall, a shorter reverberation time is better.
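Reverberation time can be estimated directly from impulse-response data. The sketch below uses the standard Schroeder backward-integration method with a "T20" fit (measure the −5 dB to −25 dB portion of the decay and extrapolate to 60 dB); this is our illustration of the technique, not the measurement software's actual algorithm, and it is tested on a synthetic exponential decay rather than a real room:

```python
import math

def rt60_from_ir(ir, sample_rate):
    """Estimate RT60 from an impulse response: build the Schroeder
    backward-integrated decay curve, time the -5 dB to -25 dB span,
    and multiply by 3 to extrapolate to a full 60 dB of decay."""
    energy = [s * s for s in ir]
    total = sum(energy)
    decay_db = []
    remaining = total
    for e in energy:
        if remaining <= 0:
            break
        decay_db.append(10 * math.log10(remaining / total))
        remaining -= e

    def first_time_below(level_db):
        for i, d in enumerate(decay_db):
            if d <= level_db:
                return i / sample_rate
        return None

    t5 = first_time_below(-5.0)
    t25 = first_time_below(-25.0)
    return 3 * (t25 - t5)

# Synthetic check: an exponential decay engineered to have RT60 = 1.2 s
sr = 1000
rt = 1.2
ir = [10 ** (-3 * (n / sr) / rt) for n in range(2 * sr)]
estimate = rt60_from_ir(ir, sr)  # ≈ 1.2 seconds
```

Real measurements fit a line to the decay curve per frequency band rather than picking two points, which is how the analysis software produces the band-by-band results described next.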
You can use this impulse response data to determine the RT60 reverberation time of the room as shown in Figure 4.24. RT60 is the time it takes for reflections of a sound to decay by 60 dB. In the figure, RT60 is determined for eight separate frequency bands. As you can see, the reverberation time varies for different frequency bands. This is due to the varying absorption rates of high versus low frequencies. Because high frequencies are more easily absorbed, the reverberation time of high frequencies tends to be lower. On average, the reverberation time of this room is around 1.3 seconds.

Figure 4.24 RT60 reverberation time of small chamber music hall

The music hall in this example is equipped with curtains on the wall that can be lowered to absorb more sound and reduce the reverberation time. Figure 4.25 shows the impulse response measurement taken with the curtains in place. At first glance, this data doesn’t look very different from Figure 4.23, when the curtains were absent. There is a slight difference, however, in the rate of decay for the reverberant energy. The resulting reverberation time is shown in Figure 4.26. Adding the curtains reduces the average reverberation time by around 0.2 seconds.

Figure 4.25 Impulse response of small chamber music hall with curtains on some of the walls

Figure 4.26 RT60 reverberation time of small chamber music hall with curtains on some of the walls

## 4.2.2.4 Frequency Levels and Comb Filtering

When working with sound in acoustic space, you discover that there is a lot of potential for sound waves to interact with each other. If the waves are allowed to interact destructively – causing frequency cancelations – the result can be detrimental to the sound quality perceived by the audience. Destructive sound wave interactions can happen when two loudspeakers generate identical sounds that are directed to the same acoustic space.
They can also occur when a sound wave combines in the air with its own reflection from a surface in the room. Let’s say there are two loudspeakers aimed at you, both generating the same sound. Loudspeaker A is 10 feet away from you, and Loudspeaker B is 11 feet away. Because sound travels at a speed of approximately one foot per millisecond, the sound from Loudspeaker B arrives at your ears one millisecond after the sound from Loudspeaker A, as shown in Figure 4.27. That one millisecond of difference doesn’t seem like much. How much damage can it really inflict on your sound? Let’s again assume that both sounds arrive at the same amplitude. Since the position of your ears relative to the two loudspeakers is directly related to the timing difference, let’s also assume that your head is stationary, as if you are sitting relatively still in your seat at a theater. In this case, a one millisecond difference causes the two sounds to interact destructively. In Chapter 2 you read about what happens when two identical sounds combine out-of-phase. In real life, phase differences can occur as a result of an offset in time. That extra one millisecond that it takes for the sound from Loudspeaker B to arrive at your ears results in a phase difference relative to the sound from Loudspeaker A. The audible result of this depends on the type of sound being generated by the loudspeakers.

Figure 4.27 Sound from two loudspeakers arriving at a listener one millisecond apart

Let’s assume, for the sake of simplicity, that both loudspeakers are generating a 500 Hz sine wave, and the speed of sound is 1000 ft/s. (As stated in Section 1.1.1, the speed of sound in air varies depending upon temperature and air pressure, so you don’t always get a perfect 1130 ft/s.) Recall that wavelength equals velocity multiplied by period ($\lambda = cT$). Then with this speed of sound, a 500 Hz sine wave has a wavelength λ of two feet.
At a speed of 1000 ft/s, sound travels one foot each millisecond, which implies that with a one millisecond delay, a sound wave is delayed by one foot. For 500 Hz, this is half the frequency's wavelength. If you remember from Chapter 2, half a wavelength is the same thing as a 180° phase offset. In sum, a one millisecond delay between Loudspeaker A and Loudspeaker B results in a 180° phase difference between the two 500 Hz sine waves. In a free-field environment with your head stationary, this results in a cancellation of the 500 Hz frequency when the two sine waves arrive at your ear. This phase relationship is illustrated in Figure 4.28.

Figure 4.28 Phase relationship between two 500 Hz sine waves one millisecond apart

Figure 4.29 Phase relationship between two 1000 Hz sine waves one millisecond apart

If we switch the frequency to 1000 Hz, we’re now dealing with a wavelength of one foot. An analysis similar to the one above shows that the one millisecond delay results in a 360° phase difference between the two sounds. For sine waves, two sounds combining at a 360° phase difference behave the same as at a 0° phase difference. For all intents and purposes, these two sounds are coherent, which means that when they combine at your ear, they reinforce each other, which is perceived as an increase in amplitude. In other words, the totally in-phase frequencies get louder. This phase relationship is illustrated in Figure 4.29. Simple sine waves serve as convenient examples for how sound works, but they are rarely encountered in practice. Almost all sounds you hear are complex sounds made up of multiple frequencies. Continuing our example of the one millisecond offset between two loudspeakers, consider the implications of sending two identical sine wave sweeps through two loudspeakers. A sine wave sweep contains all frequencies in the audible spectrum.
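Before moving to complex sounds, note that the two sine-wave cases just worked out reduce to a single line of arithmetic: a delay of t milliseconds shifts a frequency f by f·(t/1000) periods, and each period is 360°. A quick sketch:

```python
def phase_offset_degrees(freq_hz, delay_ms):
    """Phase offset produced when one copy of a sine wave is delayed.
    One full period of freq_hz corresponds to 360 degrees."""
    return freq_hz * (delay_ms / 1000.0) * 360.0

phase_offset_degrees(500, 1.0)   # 180.0 degrees: cancellation
phase_offset_degrees(1000, 1.0)  # 360.0 degrees: reinforcement
```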
When those two identical complex sounds arrive at your ear one millisecond apart, each of the matching pairs of frequency components combines at a different phase relationship. Some frequencies combine with a phase relationship that is an odd multiple of 180°, causing cancellations. Some frequencies combine with a phase relationship that is a multiple of 360°, causing reinforcements. All the other frequencies combine at phase relationships somewhere in between, resulting in amplitude changes somewhere between complete cancellation and perfect reinforcement. This phenomenon is called comb filtering, which can be defined as a regularly repeating pattern of frequencies being attenuated or boosted as you move through the frequency spectrum. (See Figure 4.32.) To understand comb filtering, let’s look at how we detect and analyze it in an acoustic space. First, consider what the frequency response of the sine wave sweep would look like if we measured it coming from one loudspeaker that is 10 feet away from the listener. This is the black line in Figure 4.30. As you can see, the line in the audible spectrum (20 to 20,000 Hz) is relatively flat, indicating that all frequencies are present, at an amplitude level just over 100 dBSPL. The gray line shows the frequency response for an identical sine sweep, but measured at a distance of 11 feet from the one loudspeaker. This frequency response is a little bumpier than the first. Neither frequency response is perfect because environmental conditions affect the sound as it passes through the air. Keep in mind that these two frequency responses, represented by the black and gray lines on the graph, were measured at different times, each from a single loudspeaker, and at distances from the loudspeaker that varied by one foot – the equivalent of offsetting them by one millisecond. Since the two sounds happened at different moments in time, there is of course no comb filtering.
Figure 4.30 Frequency response of two sound sources one millisecond apart

The situation is different when the sound waves are played at the same time through two loudspeakers that are not equidistant from the listener, such that the frequency components arrive at the listener in different phases. Figure 4.31 is a graph of frequency vs. phase for this situation. You can understand the graph in this way: For each frequency on the x-axis, consider a pair of frequency components of the sound being analyzed, the first belonging to the sound coming from the closer speaker and the second belonging to the sound coming from the farther speaker. The graph shows the degree to which these pairs of frequency components are out-of-phase, which ranges between −180° and 180°.

Figure 4.31 Phase relationship per frequency for two sound sources one millisecond apart

Figure 4.32 shows the resulting frequency response when these two sounds are combined. Notice that the frequencies that have a 0° relationship are now louder, at approximately 110 dB. On the other hand, frequencies that are out-of-phase are now substantially quieter, some by as much as 50 dB depending on the extent of the phase offset. You can see in the graph why the effect is called comb filtering. The scalloped effect in the graph is how comb filtering appears in frequency response graphs – a regularly repeated pattern of frequencies being attenuated or boosted as you move through the frequency spectrum.

Figure 4.32 Comb filtering frequency response of two sound sources one millisecond apart

We can try a similar experiment to try to hear the phenomenon of comb filtering using just noise as our sound source. Recall that noise consists of random combinations of sound frequencies, usually sound that is not wanted as part of a signal. Two types of noise that a sound processing or analysis system can generate artificially are white noise and pink noise (and there are others).
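Before turning to noise signals, note that the scalloped response of Figure 4.32 can be reproduced from the delay alone. For a signal summed with an equal-amplitude copy of itself delayed by τ, the gain at frequency f is $20\log_{10}\left|1 + e^{-j2\pi f\tau}\right|$. A Python sketch of this idealized comb-filter response:

```python
import math

def comb_gain_db(freq_hz, delay_ms):
    """Gain in dB when a signal sums with an equal-amplitude copy of
    itself delayed by delay_ms (idealized free-field comb filter)."""
    phase = 2 * math.pi * freq_hz * (delay_ms / 1000.0)
    magnitude = abs(2 * math.cos(phase / 2))  # |1 + e^{-j*phase}|
    if magnitude < 1e-12:
        return float("-inf")  # complete cancellation
    return 20 * math.log10(magnitude)

# With a one millisecond delay:
reinforced = comb_gain_db(1000, 1.0)  # ≈ +6 dB (back in phase)
notched = comb_gain_db(500, 1.0)      # deep notch (180° out of phase)
```

Sweeping `freq_hz` from 20 to 20,000 Hz with a fixed 1 ms delay traces out exactly the repeating peaks and notches seen in the figure.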
In white noise, there’s an approximately equal amount of each of the frequency components across the range of frequencies within the signal. In pink noise, there’s an approximately equal amount of the frequencies in each octave of frequencies. (Octaves, as defined in Chapter 3, are spaced such that the beginning frequency of one octave is ½ the beginning frequency of the next octave. Although each octave is twice as wide as the previous one – in the distance between its upper and lower frequencies – octaves sound like they are about the same width to human hearing.) The learning supplements to this chapter include a demo of comb filtering using white and pink noise. Comb filtering in the air is very audible, but it is also very inconsistent. In a comb-filtered environment of sound, if you move your head just slightly to the right or left, you find that the timing difference between the two sounds arriving at your ear changes. With a change in timing comes a change in phase differences per frequency, resulting in comb filtering of some frequencies but not others. Add to this the fact that the source sound is constantly changing, and, all things considered, comb filtering in the air becomes something that is very difficult to control. One way to tackle comb filtering in the air is to increase the delay between the two sound sources. This may seem counter-intuitive since the difference in time is what caused this problem in the first place. However, a larger delay results in comb filtering that starts at lower frequencies, and as you move up the frequency scale, the cancellations and reinforcements get close enough together that they happen within critical bands. The sum of cancellations and reinforcements within a critical band essentially results in the same overall amplitude as would have been there had there been no comb filtering. 
Since all frequencies within a critical band are perceived as the same frequency, your brain glosses over the anomalies, and you end up not noticing the destructive interference. (This is an oversimplification of the complex perceptual influence of critical bands, but it gives you a basic understanding for our purposes.) In most cases, once you get a timing difference that is larger than five milliseconds on a complex sound that is constantly changing, the comb filtering in the air is not heard anymore. We explain this point mathematically in Section 3. The other strategy to fix comb filtering is to simply prevent identical sound waves from interacting. In a perfect world, loudspeakers would have shutter cuts that would let you put the sound into a confined portion of the room. This way the coverage pattern for each loudspeaker would never overlap with another. In the real world, loudspeaker coverage is very difficult to control. We discuss this further and demonstrate how to compensate for comb filtering in the video tutorial entitled "Loudspeaker Interaction" in Chapter 8. Comb filtering in the air is not always the result of two loudspeakers. The same thing can happen when a sound reflects from a wall in the room and arrives in the same place as the direct sound. Because the reflection takes a longer trip to arrive at that spot in the room, it is slightly behind the direct sound. If the reflection is strong enough, the amplitudes between the direct and reflected sound are close enough to cause comb filtering. In really large rooms, the timing difference between the direct and reflected sound is large enough that the comb filtering is not very problematic. Our hearing system is quite good at compensating for any anomalies that result in this kind of sound interaction. In smaller rooms, such as recording studios and control rooms, it’s quite possible for reflections to cause audible comb filtering. 
In those situations, you need to either absorb the reflection or diffuse the reflection at the wall. The worst kind of comb filtering isn’t the kind that occurs in the air but the kind that occurs on a wire. Let’s reverse our scenario and instead of having two sound sources, let’s switch to a single sound source such as a singer and use two microphones to pick up that singer. Microphone A is one foot away from the singer, and Microphone B is two feet away. In this case, Microphone B catches the sound from the singer one millisecond after Microphone A. When you mix the sounds from those two microphones (which happens all the time), you now have a one millisecond comb filter imposed on an electronic signal that then gets delivered in that condition to all the loudspeakers in the room and from there to all the listeners in the room equally. Now your problem can be heard no matter where you sit, and no matter how much you move your head around. Just one millisecond delay causes a very audible problem that no one can mask or hide from. The best way to avoid this kind of problem is never to allow two microphones to pick up the same signal at the same time. A good sound engineer at a mixing console ensures that only one microphone is on at a time, thereby avoiding this kind of destructive interaction. If you must have more than one microphone, you need to keep those microphones far away from each other. If this is not possible, you can achieve modest success fixing the problem by adding some extra delay to one of the microphones. This changes the phase effect of the two microphones combining, but doesn’t mimic the difference in level that would come if they were physically farther apart. ## 4.2.2.5 Resonance and Room Modes In Chapter 2, we discussed the concept of resonance. Now we consider how resonance comes into play in real, hands-on applications. Resonance plays a role in sound perception in a room. 
One practical example of this is the standing wave phenomenon, which in an acoustic space produces the phenomenon of room modes. Room modes are collections of resonances that result from sound waves reflecting from the surfaces of an acoustical space, producing places where sounds are amplified or attenuated. Places where the reflections of a particular frequency reinforce each other, amplifying that frequency, are the frequency’s antinodes. Places where the frequency’s reflections cancel each other are the frequency’s nodes. Consider this simplified example – a 10-foot-wide room with parallel walls that are good sound reflectors. Let’s assume again that the speed of sound is 1000 ft/s. Imagine a sound wave emanating from the center of the room. The sound waves reflecting off the walls either constructively or destructively interfere with each other at any given location in the room, depending on the relative phase of the sound waves at that point in time and space. If the sound wave has a wavelength that is exactly twice the width of the room, then the sound waves reflecting off opposite walls cancel each other in the center of the room but reinforce each other at the walls. Thus, the center of the room is a node for this sound wavelength and the walls are antinodes. We can again apply the wavelength equation, $\lambda = c/f$, to find the frequency f that corresponds to a wavelength λ that is exactly twice the width of the room, 2 * 10 = 20 feet:

$$f = \frac{c}{\lambda} = \frac{1000}{20} = 50\ \text{Hz}$$

At the antinodes, the signals are reinforced by their reflections, so that the 50 Hz sound is unnaturally loud at the walls. At the node in the center, the signals reflecting off the walls cancel out the signal from the loudspeaker. Similar cancellations and reinforcements occur with harmonic frequencies at 100 Hz, 150 Hz, 200 Hz, and so forth, whose wavelengths fit evenly between the two parallel walls.
If listeners are scattered around the room, standing closer to either the nodes or antinodes, some hear the harmonic frequencies very well and others do not. Figure 4.33 illustrates the node and antinode positions for room modes when the frequency of the sound wave is 50 Hz, 100 Hz, 150 Hz, and 200 Hz. Table 4.6 shows the relationships among frequency, wavelength, number of nodes and antinodes, and number of harmonics. Cancellation and reinforcement of frequencies in the room mode phenomenon is also an example of comb filtering.

Figure 4.33 Room mode

Table 4.6 Room mode, nodes, antinodes, and harmonics

| Frequency | Antinodes | Nodes | Wavelength | Harmonics |
|---|---|---|---|---|
| $f_{0}=\frac{c}{2L}$ | 2 | 1 | $\lambda =2L$ | 1st harmonic |
| $f_{1}=\frac{c}{L}$ | 3 | 2 | $\lambda =L$ | 2nd harmonic |
| $f_{2}=\frac{3c}{2L}$ | 4 | 3 | $\lambda =\frac{2L}{3}$ | 3rd harmonic |
| $f_{k-1}=\frac{kc}{2L}$ | k + 1 | k | $\lambda =\frac{2L}{k}$ | kth harmonic |

This example is more complicated in practice because a room has more than one pair of parallel surfaces. Room modes can exist that involve all four walls of a room plus the floor and ceiling. This problem can be minimized by eliminating parallel walls whenever possible in the building design. Often the simplest solution is to hang material on the walls at selected locations to absorb or diffuse the sound.

The standing wave phenomenon can be illustrated with a concrete example that also relates to instrument vibrations and resonances. Figure 4.34 shows an example of a standing wave pattern on a vibrating plate. In this case, the flat plate is resonating at 95 Hz, a frequency that fits evenly with the size of the plate. As the plate bounces up and down, the sand on the plate keeps moving until it finds a place that isn’t bouncing. In this case, the sand collects in the nodes of the standing wave. (These are called Chladni patterns, after the German scientist who originated the experiments in the late 1700s.)
If a similar resonance occurred in a room, the sound would get noticeably quieter in the areas corresponding to the pattern of sand because those would be the places in the room where air molecules simply aren’t moving (neither compression nor rarefaction). For a more complete demonstration of this example, see the video demo called Plate Resonance linked in this section.

Figure 4.34 Resonant frequency on a flat plate

## 4.2.2.6 The Precedence Effect

When two or more similar sound waves interact in the air, not only does the perceived frequency response change, but your perception of the location of the sound source can change as well. This phenomenon is called the precedence effect. The precedence effect occurs when two similar sound sources arrive at a listener at different times from different directions, causing the listener to perceive both sounds as if they were coming from the direction of the sound that arrived first. The precedence effect is sometimes intentionally created within a sound space. For example, it might be used to reinforce the live sound of a singer on stage without making it sound as if some of the singer’s voice is coming from a loudspeaker.

However, several conditions must be in place for the precedence effect to occur. The first is that the difference in arrival time at the listener between the two sound sources needs to be more than one millisecond. Also, depending on the type of sound, the difference in time needs to be less than 20 to 30 milliseconds, or the listener perceives an audible echo. Short transient sounds start to echo around 20 milliseconds, but longer sustained sounds don't start to echo until around 30 milliseconds. The final condition is that the two sounds cannot differ in level by more than 10 dB. If the second arrival is more than 10 dB louder than the first, even if the timing is right, the listener begins to perceive the two sounds as coming from the direction of the louder sound.
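These conditions can be summarized in a small sketch. This is our own toy classification (the function name and category labels are illustrative, not standard terminology), using the chapter's rules of thumb: more than 1 ms but less than 20 to 30 ms of delay, and no more than 10 dB of extra level.

```python
# Rough classification of how a second arrival is perceived, based on the
# precedence-effect conditions above. Thresholds are rules of thumb only.

def classify_second_arrival(delay_ms, level_diff_db, transient=True):
    """delay_ms: arrival-time difference; level_diff_db: second arrival minus first, in dB."""
    echo_threshold_ms = 20 if transient else 30
    if level_diff_db > 10:
        return "localized toward the louder second arrival"
    if delay_ms <= 1:
        return "too little delay for the precedence effect"
    if delay_ms > echo_threshold_ms:
        return "audible echo"
    return "precedence effect: localized toward the first arrival"

print(classify_second_arrival(15, 0))                   # precedence effect
print(classify_second_arrival(25, 0))                   # echoes for a transient sound
print(classify_second_arrival(25, 0, transient=False))  # but not for a sustained one
```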
When you intentionally apply the precedence effect, you have to keep in mind that comb filtering still applies in this scenario. For this reason, it’s usually best to keep the arrival differences to more than five milliseconds because our hearing system is able to more easily compensate for the comb filtering at longer time differences. The advantage to the precedence effect is that although you perceive the direction of both sounds as arriving from the direction of the first arrival, you also perceive an increase in loudness as a result of the sum of the two sound waves. This effect has been around for a long time and is a big part of what gives a room “good acoustics.” There exist rooms where sound seems to propagate well over long distances, but this isn’t because the inverse square law is magically being broken. The real magic is the result of reflected sound. If sound is reflecting from the room surfaces and arriving at the listener within the precedence time window, the listener perceives an increase in sound level without noticing the direction of the reflected sound. One goal of an acoustician is to maximize the good reflections and minimize the reflections that would arrive at the listener outside of the precedence time window, causing an audible echo. The fascinating part of the precedence effect is that multiple arrivals can be daisy chained, and the effect still works. There could be three or more distinct arrivals at the listener, and as long as each arrival is within the precedence time window of the previous arrival, all the arrivals sound like they’re coming from the direction of the first arrival. From the perspective of acoustics, this is equivalent to having several early reflections arrive at the listener. For example, a listener might hear a reflection 20 milliseconds after the direct sound arrives. This reflection would image back to the first arrival of the direct sound, but the listener would perceive an increase in sound level. 
A second reflection could arrive 40 milliseconds after the direct sound. Alone, this 40 millisecond reflection would cause an audible echo, but when it’s paired with the first 20 millisecond reflection, no echo is perceived by the listener because the second reflection arrives within the precedence time window of the first reflection. Because the first reflection arrives within the precedence time window of the direct sound, the sound of both reflections images back to the direct sound. The result is that the listener perceives an overall increase in level along with a summation of the frequency response of the three sounds.

The precedence effect can be replicated in sound reinforcement systems. It is common practice now in live performance venues to put a microphone on a performer and relay that sound out to the audience through a loudspeaker system in an effort to increase the overall sound pressure level and intelligibility perceived by the audience. Without some careful attention to detail, this process can lead to a very unnatural sound. Sometimes this is fine, but in some cases the goal might be to improve the level and intelligibility while still allowing the audience to perceive all the sound as coming from the actual performer. Using the concept of the precedence effect, a loudspeaker system could be designed so that the sound from multiple loudspeakers arrives at the listener from various distances and directions. As long as the sound from each loudspeaker arrives at the listener within 5 to 30 milliseconds and within 10 dB of the previous sound, with the natural sound of the performer arriving first, all the sound from the loudspeaker system images in the listener’s mind back to the location of the actual performer. When the precedence effect is handled well, it simply sounds to the listener like the performer is naturally loud and clear, and that the room has good acoustics.
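The daisy-chaining idea can be expressed as a simple check. This is a toy illustration with names of our own choosing, not a design tool: given a list of arrival times, it verifies that each arrival falls within the precedence window of the one before it.

```python
# True if every arrival lands more than 1 ms and at most window_ms after the
# previous arrival, so the whole chain images back to the first arrival.

def chains_to_first_arrival(arrival_times_ms, window_ms=30):
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    return all(1 < g <= window_ms for g in gaps)

# Direct sound at 0 ms with reflections at 20 ms and 40 ms: each gap is 20 ms,
# so no echo is heard, even though a lone 40 ms reflection would echo.
print(chains_to_first_arrival([0, 20, 40]))  # True
print(chains_to_first_arrival([0, 40]))      # False
```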
As you can imagine from the issues discussed above, designing and setting up a sound system for a live performance is a complicated process. A good amount of digital signal processing is required to manipulate the delay, level, and frequency response of each loudspeaker in the system so that everything lines up properly at all the listening points in the room. The details of this process are beyond the scope of this book. For more information, see (Davis and Patronis, 2006) and (McCarthy, 2009).

## 4.2.2.7 Effects of Temperature

In addition to the physical obstructions with which sound interacts, the air through which sound travels can have an effect on the listener’s experience. As discussed in Chapter 2, the speed of sound increases with higher air temperatures. It seems fairly simple to say that if you can measure the temperature in the air you’re working in, you should be able to figure out the speed of sound in that space. In actual practice, however, air temperature is rarely uniform throughout an acoustic space. When sound is played outdoors, in particular, the wave front encounters varying temperatures as it propagates through the air.

Consider the scenario where the sun has been shining down on the ground all day. The sun warms up the ground. When the sun sets at the end of the day (which is usually when you start an outdoor performance), the air cools down. The ground is still warm, however, and affects the temperature of the air near the ground. The result is a temperature gradient that gets warmer the closer you get to the ground. When a sound wave front tries to propagate through this temperature gradient, the portion of the wave front that is closer to the ground travels faster than the portion that is higher up in the air. This causes the wave front to curve upward toward the cooler air. Usually, the listeners are sitting on the ground, and therefore the sound is traveling away from them. The result is a quieter sound for those listeners.
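The temperature dependence can be quantified with a commonly used approximation for the speed of sound in air, roughly $c \approx 331.4 + 0.6T$ m/s for temperature T in degrees Celsius. This formula comes from standard references, not from this chapter, so treat the sketch below as illustrative.

```python
# Approximate speed of sound in air as a function of temperature,
# using the standard approximation c = 331.4 + 0.6 * T (T in deg C, c in m/s).

def speed_of_sound_mps(temp_c):
    return 331.4 + 0.6 * temp_c

near_ground = speed_of_sound_mps(30)  # warm air near the ground
higher_up = speed_of_sound_mps(20)    # cooler air above

# The lower part of the wave front travels faster, so the front tilts
# upward toward the cooler air.
print(round(near_ground, 1), round(higher_up, 1))  # 349.4 343.4
```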
So if you spent the afternoon setting your sound system volume to a comfortable listening level, when the performance begins at sundown, you’ll have to increase the volume to maintain those levels because the sound is being refracted up toward the cooler air. Figure 4.35 shows a diagram representing this refraction. Recall that sound is a longitudinal wave in which the air pressure amplitude increases and decreases, vibrating the air molecules back and forth in the same direction in which the energy is propagating. The vertical lines represent the wave fronts of the air pressure propagation. Because the sound travels faster in warmer air, the propagation of the air pressure is faster as you get closer to the ground. This means that the wave fronts closer to the ground are ahead of those farther from the ground, causing the sound wave to refract upward.

Figure 4.35 Sound refracted toward cooler air

A similar thing can happen indoors in a movie theater or other live performance hall. Usually, sound levels are set when the space is empty prior to an audience arriving. When an audience arrives and fills all the seats, things suddenly get a lot quieter, as any sound engineer will tell you. Most attribute this to sound absorption in the sense that a human body absorbs sound much better than an empty chair. Absorption does play a role, but it doesn’t entirely explain the loss of perceived sound level. Even if human bodies are absorbing some of the sound, the sound arriving at the ears directly from the loudspeaker, with no intervening obstructions, arrives without having been dampened by absorption. It’s the reflected sound that gets quieter. Also, most theater seats are designed with padding and perforation on the underside of the seat so that they absorb sound at a similar rate to a human body.
This way, when you’re setting sound levels in an empty theater, you should be able to hear sound being absorbed the way it will be absorbed when people are sitting in those seats, allowing you to set the sound properly. Thus, absorption can’t be the only reason for the sudden drop in sound level when listeners fill the audience area.

Temperature is also a factor here. Not only is the human body a good absorber of acoustic energy, but it is also very warm. Fill a previously empty audience area with several hundred warm bodies, turn on the air conditioning that vents out from the ceiling, and you’re creating a temperature gradient that is even more dramatic than the one created outdoors at sundown. As the sound wave front travels toward the listeners, the air nearest the listeners allows the sound to travel faster, while the air up near the air conditioning vents slows the propagation of that portion of the wave front. Just as in the outdoor example, the wave front is refracted upward toward the cooler air, and there may be a loss in sound level perceived by the listeners.

There isn’t much that can be done about these temperature effects. Eventually the temperature will even out as the air conditioning does its job. The important thing to remember is to listen for a while before you try to fix the sound levels. The change in sound level as a result of temperature will likely fix itself over time.

## 4.2.2.8 Modifying and Adapting to the Acoustical Space

An additional factor to consider when you're working with indoor sound is the architecture of the room, which greatly affects the way sound propagates. When a sound wave encounters a surface (walls, floors, etc.), several things can happen.
The sound can reflect off the surface and begin traveling in another direction, it can be absorbed by the surface, it can be transmitted through the surface into a room on the opposite side, or it can be diffracted around the surface if the surface is small relative to the wavelength of the sound. Typically some combination of all four of these things happens each time a sound wave encounters a surface.

Reflection and absorption are the two most important issues in room acoustics. A room that is too acoustically reflective is not very good at propagating sound intelligibly. This is usually described as the room being too “live.” A room that is too acoustically absorptive is not very good at propagating sound with sufficient amplitude. This is usually described as the room being too “dead.” The ideal situation is a good balance between reflection and absorption that allows the sound to propagate through the space loudly and clearly.

The kinds of reflections that can help you are called early reflections, which arrive at the listener within 30 milliseconds of the direct sound. The direct sound arrives at the listener directly from the source. An early reflection can help with the perceived loudness of the sound because the two sounds combine at the listener’s ear in a way that reinforces, creating a precedence effect. Because the reflection sounds like the direct sound and arrives shortly after it, the listener assumes both sounds come from the source and perceives the result to be louder as a result of the combined amplitudes. If you have early reflections, it's important not to do anything to the room that would eliminate them, such as covering the reflecting surfaces with absorptive material. You can create more early reflections by adding reflective surfaces to the room that are angled in such a way that the sound hitting each surface is reflected to the listener.
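Whether a given reflection counts as early depends only on its extra path length relative to the direct sound. A small sketch (function names are ours) using the chapter's simplified speed of sound of 1000 ft/s and the 30 millisecond early/late boundary:

```python
# Delay of a reflection relative to the direct sound, computed from the
# difference in path lengths, assuming a simplified speed of sound of 1000 ft/s.

def reflection_delay_ms(direct_path_ft, reflected_path_ft, c_ftps=1000):
    return (reflected_path_ft - direct_path_ft) * 1000 / c_ftps

def is_early_reflection(direct_path_ft, reflected_path_ft):
    return reflection_delay_ms(direct_path_ft, reflected_path_ft) <= 30

print(reflection_delay_ms(50, 70))  # 20.0 ms: an early, helpful reflection
print(is_early_reflection(50, 90))  # False: a 40 ms late reflection, heard as echo
```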
If you have reflections that arrive at the listener more than 30 milliseconds after the direct sound, you'll want to fix that because these reflections sound like echoes and destroy the intelligibility of the sound. You have two options when dealing with late reflections. The first is simply to absorb them by attaching to the reflective surface something absorptive like a thick curtain or acoustic absorption tile (Figure 4.36). The other option is to diffuse the reflection.

Figure 4.36 Acoustic absorption tile

When reflections get close enough together, they cause reverberation. Reverberant sound can be a very nice addition to the sound as long as the reverberant sound is quieter than the direct sound. The relationship between the direct and reverberant sound is called the direct-to-reverberant ratio. If that ratio is too low, you'll have intelligibility problems.

Diffusing a late reflection using diffusion tiles (Figure 4.37) generates several random reflections instead of a single one. If done correctly, diffusion converts the late reflection into reverberation. If the reverberant sound in the room is already at a sufficient level and duration, then absorbing the late reflection is probably the best route. For more information on identifying reflections in the room, see Section 4.2.2.3.

Figure 4.37 Acoustic diffusion tile

If you've exhausted all the reasonable steps you can take to improve the acoustics of the room, the only thing that remains is to increase the level of the direct sound in a way that doesn't increase the reflected sound. This is where sound reinforcement systems come in. If you can use a microphone to pick up the direct sound very close to the source, you can then play that sound out of a loudspeaker that is closer to the listener in a way that sounds louder to the listener.
If you can do this without directing too much of the sound from the loudspeaker at the room surfaces, you can increase the direct-to-reverberant ratio, thereby increasing the intelligibility of the sound.

# 4.2.3 Acoustical Considerations for the Recording Studio

In a recording studio, all the same acoustic behaviors exist that are described in Section 4.2.2.8. The goals and concerns are somewhat different, however. At the most basic level, your main acoustic concern in a recording studio is to accurately record a specific sound without capturing other sounds at the same time. These other sounds can include noise from the outside; sound bleed from other instruments; noise inside the room from air handlers, lights, or other noise-generating devices; and reflections of the sound you're recording that are coming back to the microphone from the room surfaces.

The term isolation is used often in the context of recording studios. Isolation refers to acoustically isolating the recording studio from the outside world. It also refers to acoustically isolating one sound from another within the room. When isolating the studio from the sounds outside, the basic strategy is to build really thick walls. The thicker and more solid the wall, the less likely it is that a sound wave can travel through it. Any seams in the wall or openings such as doors and windows have to be completely sealed off. Even a small crack under a door can result in a significant amount of sound coming in from the outside. In most cases, the number of doors and windows in a recording studio is limited because of isolation concerns. Imagine that you have a great musician playing in the studio, and he plays a perfect sequence that he has so far been unable to achieve. In the middle of the sequence, someone honks a car horn outside the building, and that sound gets picked up on the microphone inside the studio.
That recording is now unusable, and you have to ask the musician to attempt to repeat his perfect performance.

One strategy for allowing appropriate windows and doors into the building without compromising the acoustic isolation of the studio is to build a room inside of a room. This can be as small as a freestanding booth inside of a room, or you can build an entire recording studio as a room within a larger room within a building. The booth or studio needs to be isolated as much as possible from any vibrations of the larger room. This is sometimes called floating the room: the booth or studio is built so that no surface of the booth or the studio physically touches any of the surfaces of the larger room that come in contact with the outside world. For a small recording booth, floating can be as simple as putting the booth on large wheel casters. Floating an entire studio involves a complicated system of floor supports that can absorb vibration. Figure 4.38 shows an example of a floating isolation booth that can be used for recording within a larger room.

Figure 4.38 A small floating isolation booth. Photo courtesy of WhisperRoom Inc.

The other isolation concern when recording is isolating the microphones from one another and from the room acoustics. For example, if you're recording two musicians, each playing a guitar, you want to record in a way that allows you to mix the balance between the two instruments later. If you have both signals recording from the same microphone, you can't adjust the balance later. Using two microphones can help, but then you have to figure out how to get one microphone to pick up only the first guitar and another microphone to pick up only the second. This perfect isolation is really possible only if you record each sound separately, which is a common practice. However, if both sounds must be recorded simultaneously, you'll need to seek as much isolation as possible.
This can be achieved by getting the microphones closer to the thing you want each to pick up the loudest. You can also put acoustic baffles between the microphones. These baffles are simple moveable partitions that acoustically absorb sound. You can also put each musician in an isolation booth and allow them to hear each other through closed-back headphones. If you need to isolate the microphone from the reflections in the room without resorting to an isolation booth, you can achieve modest success by enclosing the microphone with a small acoustic baffle on the microphone stand like the one shown in Figure 4.39. This helps isolate the microphone from sounds coming from behind or from the sides but provides no isolation from sounds arriving at the front of the microphone. This kind of baffle has no impact on the ambient noise level picked up by the microphone. It only serves to isolate the microphone from certain reflections coming from the studio walls.

Figure 4.39 Acoustic baffle for a microphone stand

Room ventilation is a notorious contributor to room noise in a recording studio. Of course ventilation is necessary, but if it's done poorly, the system can compromise the acoustic isolation of the room from the outside world and can introduce a significant amount of self-generated fan noise into the room. The commercially available portable isolation booths typically have ventilation systems available that do not compromise the isolation and noise level of the booth. If you're putting in a ventilation system for a large studio, be prepared to spend a lot of money and hire an expert to design a system that meets your requirements. In the worst-case scenario, you may need to shut off the ventilation system while recording if it is creating too much noise in the room.

There are differing opinions on acoustical treatment for the studio.
In the room where the actual performing happens, some like a completely acoustically dead room, while others want a little bit of natural reverberation. Most studios have some combination of acoustic absorption treatment and diffusion treatment on the room surfaces. The best approach is to have flexible acoustic treatment on the walls. This can take the form of reversible panels that have absorption material on one side and diffusion panels on the other. This way you can customize the acoustics of the room as needed for each recording.

In the control room where the mixing happens, you don’t necessarily want a completely dead room. You do want a quiet room, and you want to remove any destructive early reflections that arrive at the mixing position. Other than that, you generally want to try to mimic the environment in which the listener will ultimately experience the sound. For film, you would want to mimic the acoustics of a screening room. For music, you may want to mimic the acoustics of a living room or similar listening space. This way you're mixing the sound in an acoustic environment that allows you to hear the problems that will be audible to the consumer. As a rule of thumb, you should design the acoustics of the room for the best-case listening scenario for the consumer. Then test your mix in less desirable listening environments once you have something that sounds good in the studio.

# 4.3.1 Deriving Power and Voltage Changes in Decibels

Let's turn now to explore more of the mathematics of concepts related to acoustics. In Section 2, Table 4.2 lists some general guidelines regarding sound perception, and Table 4.5 gives some rules of thumb regarding power or voltage changes converted to decibels. We can’t mathematically prove the relationships in Table 4.2 because they’re based on subjective human perception, but we can prove the relationships in Table 4.5.
First let’s prove that if we double the power in watts, we get a 3 dB increase. As you work through this example, you see that you don’t always use decibels relative to the reference points in Table 4.3. (That is, the standard reference point is not always the value in the denominator.) Sometimes you compare one wattage level to another, or one voltage level to another, or one sound pressure level to another, wanting to know the difference between the two in decibels. In those cases, the answer represents a difference between two wattage, voltage, or sound pressure levels, and it is measured in dB.

In general, to compare two power levels, we use the following:

The difference in decibels between power $P_{1}$ and power $P_{0}$ = $10\log_{10}\left ( \frac{P_{1}}{P_{0}} \right )$

Equation 4.15

If $P_{1} = 2P_{0}$ then we have

$10\log_{10}\left ( \frac{2P_{0}}{P_{0}} \right )=10\log_{10}2\approx 3\:dB$

You can illustrate this rule of thumb with two specific wattage levels – for example 1000 W and 500 W. First, convert watts to dBm. Table 4.3 gives the reference point for the definitions of dBm, dBW, dBV, and dBu. The table shows that dBm uses 0.001 W as the reference point, which means that this value goes in the denominator inside the log:

$10\log_{10}\left ( \frac{1000}{0.001} \right )=10\log_{10}\left ( 10^{6} \right )=60\:dBm$

Thus, 1000 W is 60 dBm. What is 500 W in dBm? The standard reference point for dBm is 0.001 W. This yields

$10\log_{10}\left ( \frac{500}{0.001} \right )\approx 57\:dBm$

We see that 500 W is about 57 dBm, confirming that doubling the wattage results in a 3 dB increase, just as we predicted. We get the same result if we compute the increase in decibels based on dBW. dBW uses a reference point of 1 W in the denominator. 1000 W is exactly 30 dBW. 500 W is about 27 dBW. Again, doubling the wattage results in a 3 dB increase, as predicted.

Continuing with Table 4.5, we can show that if we multiply power by 10, we get a 10 dB increase in power, since $10\log_{10}10=10$. If we divide the power by 10, we get a 10 dB decrease in power. For voltage, we use the formula $20\log_{10}\left ( \frac{V_{1}}{V_{0}} \right )$, as shown in Table 4.3.
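Before moving on to voltage, note that the wattage computations above are easy to check numerically. This is a small sketch (function names are ours) verifying the 3 dB rule and the dBm conversions:

```python
import math

# Difference in decibels between two power levels, and conversion of
# watts to dBm (reference power 0.001 W).

def db_power_ratio(p1, p0):
    return 10 * math.log10(p1 / p0)

def watts_to_dbm(p):
    return 10 * math.log10(p / 0.001)

print(round(db_power_ratio(2, 1), 2))  # 3.01 -> doubling power adds about 3 dB
print(round(watts_to_dbm(1000), 2))    # 60.0 dBm
print(round(watts_to_dbm(500), 2))     # 56.99 dBm, about 3 dB less
```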
From this we can show that if we double the voltage, we get a 6 dB increase:

$20\log_{10}\left ( \frac{2V_{0}}{V_{0}} \right )=20\log_{10}2\approx 6\:dB$

If we multiply the voltage by 10, we get a 20 dB increase:

$20\log_{10}\left ( \frac{10V_{0}}{V_{0}} \right )=20\log_{10}10=20\:dB$

Don’t be fooled into thinking that if we multiply the voltage by 5, we’ll get a 10 dB increase. Instead, multiplying voltage by 5 yields about a 14 dB increase, since $20\log_{10}5\approx 14$. The rest of the rows in the table related to voltage can be proven similarly.

# 4.3.2 Working with Critical Bands

Recall from Section 1 that critical bands are areas in the human ear that are sensitive to certain bandwidths of frequencies. The presence of critical bands in our ears is responsible for the masking of frequencies that are close to other louder ones received by the same critical band. In most sources, tables that estimate the widths of critical bands in human hearing give the bandwidths only in Hertz. In Table 4.4, we added two additional columns. Column 5 of Table 4.4 derives the number of semitones n in a critical band based on the beginning and ending frequencies in the band. Column 6 is the approximate size of the critical band in octaves. Let’s look at how we derived these two columns.

First, consider column 5, which gives the critical bandwidth in semitones. Chapter 3 explains that there are 12 semitones in an octave. The note at the high end of an octave has twice the frequency of the note at the low end. Thus, for a frequency $f_{2}$ that is n semitones higher than $f_{1}$,

$f_{2}=f_{1}\cdot 2^{\frac{n}{12}}$

To derive column 5 for each row, let b be the beginning frequency of the band, and let e be the end frequency of the band in that row. We want to find n such that

$e=b\cdot 2^{\frac{n}{12}}$

This equation can be solved for n:

$n=12\log_{2}\left ( \frac{e}{b} \right )$

Table 4.7 is included to give an idea of the twelfth root of 2 and powers of it.
| Power of $\sqrt[12]{2}$ | Approximate value |
|---|---|
| $\left ( \sqrt[12]{2} \right )^{1}=2^{\frac{1}{12}}$ | 1.0595 |
| $\left ( \sqrt[12]{2} \right )^{2}=2^{\frac{2}{12}}$ | 1.1225 |
| $\left ( \sqrt[12]{2} \right )^{3}=2^{\frac{3}{12}}$ | 1.1892 |
| $\left ( \sqrt[12]{2} \right )^{4}=2^{\frac{4}{12}}$ | 1.2599 |
| $\left ( \sqrt[12]{2} \right )^{5}=2^{\frac{5}{12}}$ | 1.3348 |
| $\left ( \sqrt[12]{2} \right )^{6}=2^{\frac{6}{12}}$ | 1.4142 |
| $\left ( \sqrt[12]{2} \right )^{7}=2^{\frac{7}{12}}$ | 1.4983 |
| $\left ( \sqrt[12]{2} \right )^{8}=2^{\frac{8}{12}}$ | 1.5874 |
| $\left ( \sqrt[12]{2} \right )^{9}=2^{\frac{9}{12}}$ | 1.6818 |
| $\left ( \sqrt[12]{2} \right )^{10}=2^{\frac{10}{12}}$ | 1.7818 |
| $\left ( \sqrt[12]{2} \right )^{11}=2^{\frac{11}{12}}$ | 1.8877 |
| $\left ( \sqrt[12]{2} \right )^{12}=2^{\frac{12}{12}}$ | 2 |

Table 4.7 Powers of $\sqrt[12]{2}$

Column 5 is an estimate for n rounded to the nearest integer, which is the approximate number of semitone steps from the beginning to the end of the band. Column 6 is derived based on the n computed for column 5. If n is the number of semitones in a critical band and there are 12 semitones in an octave, then $\frac{n}{12}$ is the size of the critical band in octaves. Column 6 is $\frac{n}{12}$.

# 4.3.3 A MATLAB Program for Equal Loudness Contours

You may be interested in seeing how Figure 4.11 was created with a MATLAB program. The MATLAB program below is included with permission from its creator, Jeff Tackett. The program relies on data available in ISO 226. The data is given in a comment in the program. ISO is the International Organization for Standardization (www.iso.org).

```matlab
figure;
[spl,freq_base] = iso226(10);
semilogx(freq_base,spl)
hold on;
for phon = 0:10:90
    [spl,freq] = iso226(phon);    % equal loudness data
    plot(1000,phon,'.r');
    text(1000,phon+3,num2str(phon));
    plot(freq_base,spl);          % equal loudness curve
end
axis([0 13000 0 140]);
grid on                           % draw grid
xlabel('Frequency (Hz)');
ylabel('Sound Pressure in Decibels');
hold off;

function [spl, freq] = iso226(phon)
% Generates an Equal Loudness Contour as described in ISO 226
% Usage:  [SPL FREQ] = ISO226(PHON);
%
%         PHON is the phon value in dB SPL that you want the equal
%              loudness curve to represent. (1 phon = 1 dB @ 1 kHz)
%         SPL  is the Sound Pressure Level amplitude returned for
%              each of the 29 frequencies evaluated by ISO226.
%         FREQ is the returned vector of frequencies that ISO226
%              evaluates to generate the contour.
%
% Desc:   This function will return the equal loudness contour for
%         your desired phon level. The frequencies evaluated in this
%         function only span from 20Hz - 12.5kHz, and only 29 selective
%         frequencies are covered. This is the limitation of the ISO
%         standard.
%
%         In addition the valid phon range should be 0 - 90 dB SPL.
%         Values outside this range do not have experimental values
%         and their contours should be treated as inaccurate.
%
%         If more samples are required you should be able to easily
%         interpolate these values using spline().
%
% Author: Jeff Tackett 03/01/05

%                 /---------------------------------------\
%%%%%%%%%%%%%%%%%%          TABLES FROM ISO226          %%%%%%%%%%%%%%%%%%
%                 \---------------------------------------/
f = [20 25 31.5 40 50 63 80 100 125 160 200 250 315 400 500 630 800 ...
     1000 1250 1600 2000 2500 3150 4000 5000 6300 8000 10000 12500];

af = [0.532 0.506 0.480 0.455 0.432 0.409 0.387 0.367 0.349 0.330 0.315 ...
      0.301 0.288 0.276 0.267 0.259 0.253 0.250 0.246 0.244 0.243 0.243 ...
      0.243 0.242 0.242 0.245 0.254 0.271 0.301];

Lu = [-31.6 -27.2 -23.0 -19.1 -15.9 -13.0 -10.3 -8.1 -6.2 -4.5 -3.1 ...
      -2.0 -1.1 -0.4 0.0 0.3 0.5 0.0 -2.7 -4.1 -1.0 1.7 ...
      2.5 1.2 -2.1 -7.1 -11.2 -10.7 -3.1];

Tf = [78.5 68.7 59.5 51.1 44.0 37.5 31.5 26.5 22.1 17.9 14.4 ...
      11.4 8.6 6.2 4.4 3.0 2.2 2.4 3.5 1.7 -1.3 -4.2 ...
      -6.0 -5.4 -1.5 6.0 12.6 13.9 12.3];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Error trapping
if ((phon < 0) || (phon > 90))
    disp('Phon value out of bounds!')
    spl = 0;
    freq = 0;
else
    % Setup user-defined values for equation
    Ln = phon;

    % Deriving sound pressure level from loudness level (iso226 sect 4.1)
    Af = 4.47E-3 * (10.^(0.025*Ln) - 1.15) + (0.4*10.^(((Tf+Lu)/10)-9)).^af;
    Lp = ((10./af).*log10(Af)) - Lu + 94;

    % Return user data
    spl = Lp;
    freq = f;
end
```

Program 4.1 MATLAB program for graphing equal loudness contours

# 4.3.4 The Mathematics of the Inverse Square Law and PAG Equations

The inverse square law says, in essence, that for two points at distances $r_{0}$ and $r_{1}$ from a point sound source, where $r_{1}>r_{0}$, the sound intensity level changes by $20\log_{10}\left ( \frac{r_{0}}{r_{1}}\right )$ dB (a negative value, and thus a decrease, since $r_{1}>r_{0}$). To derive the inverse square law mathematically, we can use the formula for the surface area of a sphere, $4\pi r^{2}$, where r is the radius of the sphere. Notice that in Figure 4.18, the radius of the sphere is also the distance from the sound source to the surface of that sphere. Recall that intensity is defined as power per unit area – that is, power divided by the area over which it is spread. As the sound gets farther from the source, it spreads out over a larger area. At any distance r from the source, $I=\frac{P}{4\pi r^{2}}$, where I is intensity and P is the power at the source. Notice that if you increase the radius of the sphere by a factor of n, the intensity gets smaller by a factor of $n^{2}$. Thus, I is proportional to the inverse of $r^{2}$, which can be stated mathematically as $I \propto \frac{1}{r^{2}}$. We can state this more completely as

$\frac{I_{1}}{I_{0}}=\frac{r_{0}^{2}}{r_{1}^{2}}$

where $I_{0}$ is the intensity of the sound at the first location, $I_{1}$ is the intensity of the sound at the second location, $r_{0}$ is the initial distance from the sound, and $r_{1}$ is the new distance from the sound.
Equation 4.16 Ratio of sound intensity comparing one location to another

We usually represent intensities in decibels, so let's convert to decibels by applying the definition of dBSIL. Thus

$$I_{1\,dBSIL}-I_{0\,dBSIL}=10\log_{10}\left ( \frac{I_{1}}{I_{0}}\right )=10\log_{10}\left ( \frac{r_{0}^{2}}{r_{1}^{2}}\right )=20\log_{10}\left ( \frac{r_{0}}{r_{1}}\right )$$

where $I_{0\,dBSIL}$ is the intensity of the sound at the first location in decibels, $I_{1\,dBSIL}$ is the intensity of the sound at the second location in decibels, $r_{0}$ is the initial distance from the sound, and $r_{1}$ is the new distance from the sound.

Equation 4.17

Recall that when you subtract dBSIL from dBSIL, you get dB. Based on the inverse square law, it is easy to prove that if you double the distance from the sound, you get about a 6 dB decrease (as listed in Table 4.5).

In Section 4.2.2.1, we looked at how the PAG is determined so that a sound engineer can know the limits of the gain that can be applied to the sound without getting feedback. You can understand why feedback happens and how it can be prevented by applying the inverse square law.

First, we can derive an equation for the sound that comes from the singer arriving at the microphone at intensity $I_{M}$ vs. arriving at the listener at intensity $I_{L}$, without sound reinforcement. All sound levels are in decibels. Let $D_{s}$ be the distance from the singer to the microphone and $D_{0}$ the distance from the singer to the listener. By the inverse square law, the relationship between $I_{L}$ and $I_{M}$ is this:

$$I_{L}=I_{M}+20\log_{10}\left ( \frac{D_{s}}{D_{0}}\right )$$

Equation 4.18

Figure 4.40 Computing the PAG

We can also apply the inverse square law to the sound coming from the loudspeaker, arriving at the microphone at intensity $I_{M'}$ vs. arriving at the listener at intensity $I_{L'}$, with reinforcement. Let $D_{1}$ be the distance from the loudspeaker to the microphone and $D_{2}$ the distance from the loudspeaker to the listener. Feedback occurs when $I_{M}=I_{M'}$. Thus we have

$$I_{L'}=I_{M'}+20\log_{10}\left ( \frac{D_{1}}{D_{2}}\right )$$

Equation 4.19

Subtracting Equation 4.18 from Equation 4.19 and using $I_{M}=I_{M'}$, we get

$$I_{L'}-I_{L}=20\log_{10}\left ( \frac{D_{1}}{D_{2}}\right )-20\log_{10}\left ( \frac{D_{s}}{D_{0}}\right )=20\log_{10}\left ( \frac{D_{0}D_{1}}{D_{s}D_{2}}\right )$$

$I_{L'}-I_{L}$ represents the PAG, the maximum amount by which the original sound can be boosted without feedback. This is Equation 4.14 originally discussed in Section 4.2.2.1.

# 4.3.5 The Mathematics of Delays, Comb Filtering, and Room Modes

In Section 4.2.2.4, we showed what happens when two copies of the same sound arrive at a listener at different times.
For each of the frequencies in the sound, the copy of the frequency coming from loudspeaker B is in a different phase relative to the copy coming from loudspeaker A (Figure 4.27). In the case of frequencies that are offset by exactly one half of a cycle, the two copies of the sound are completely out of phase, and those frequencies are lost for the listener in that location. This is an example of comb filtering caused by delay.

To generalize this mathematically, let's assume that loudspeaker B is d feet farther away from a listener than loudspeaker A. The speed of sound is c. Then the delay t, in seconds, is

$$t=\frac{d}{c}$$

Equation 4.20 Delay t for offset d between two loudspeakers

Assume for simplicity that the speed of sound is 1000 ft/s. Thus, for an offset of 20 ft, you get a delay of 0.020 s.

What if you want to know the frequencies of the sound waves that will be combed out by a delay of t? The fundamental frequency to be combed, $f_{0}$, is the one that is delayed by half of its period, since this delay will offset the phase of the wave by 180°. We know that the period is the inverse of the frequency, which gives us

$$f_{0}=\frac{1}{2t}$$

Additionally, all odd integer multiples of $f_{0}$ will also be combed out, since they too will be 180° offset from the other copy of the sound. (Even multiples of $f_{0}$ are delayed by a whole number of cycles, so they arrive back in phase and are reinforced rather than combed.) Thus, we arrive at this formula for the frequencies combed out by delay t. Given a delay of t seconds between two identical copies of a sound, the frequencies $f_{i}$ that will be combed out are

$$f_{i}=\frac{2i-1}{2t},\quad i=1,2,3,\ldots$$

Equation 4.21 Comb filtering

For a 20 foot separation in distance, which creates a delay of 0.02 s, the combed frequencies are 25 Hz, 75 Hz, 125 Hz, and so forth. In Section 2, we made the point that comb filtering in the air can be handled by increasing the delay between the two sound sources. A 40 foot distance between two identical sound sources results in a 0.04 s delay, which then combs out 12.5 Hz, 37.5 Hz, 62.5 Hz, and so forth.
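The delay and comb calculations above are easy to check numerically by evaluating the two-copy sum $1+e^{-j2\pi ft}$ at each frequency. The sketch below is in Python rather than the chapter's MATLAB, and the function name `two_copy_gain` is introduced here purely for illustration; it uses the simplified 1000 ft/s speed of sound from the text.

```python
import cmath
import math

def two_copy_gain(f, t):
    """Gain applied to frequency f (Hz) when a second, identical copy
    of the sound arrives t seconds late: |1 + e^(-j 2 pi f t)|."""
    return abs(1 + cmath.exp(-2j * math.pi * f * t))

t = 20 / 1000  # Equation 4.20: 20 ft offset at 1000 ft/s gives t = 0.02 s
for f in (25, 50, 75, 100):
    print(f, round(two_copy_gain(f, t), 3))
# Odd multiples of f0 = 1/(2t) = 25 Hz (here 25 Hz and 75 Hz) cancel
# completely (gain 0), while even multiples (50 Hz and 100 Hz) arrive
# back in phase and are doubled (gain 2).
```

Doubling the offset to 40 ft (t = 0.04 s) halves $f_{0}$ to 12.5 Hz, moving the first notch down and packing the notches closer together, exactly as described above.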
The larger the delay, the lower the frequency at which combing begins, and the closer the combed frequencies are to one another. You can see this in Figure 4.41. In the first graph, a delay of 0.5682 ms combs out odd integer multiples of 880 Hz. In the second graph, a delay of 2.2727 ms combs out odd integer multiples of 220 Hz.

If the delay is long enough, frequencies that are combed out are within the same critical band as frequencies that are amplified. Recall that all frequencies in a critical band are perceived as the same frequency. If one frequency is combed out and another is amplified within the same critical band, the resulting perceived amplitude of the frequency in that band is about the same as would be heard without comb filtering. Thus, a long enough delay mitigates the effect of comb filtering. The exercise associated with this section has you verify this point.

Figure 4.41 Comparison of delays, 0.5682 ms (top) and 2.2727 ms (bottom)

Room modes operate by the same principle as comb filtering. Picture a sound being sent from the center of a room. If the speed of sound in the room is 1000 ft/s and the room has parallel walls that are 10 feet apart, how long will it take the sound to travel from the center of the room, bounce off one of the walls, and come back to the center? Since the sound travels 5 + 5 = 10 feet, we get a delay of $t=\frac{10\,ft}{1000\,\frac{ft}{s}}=0.01\,s$. This implies that a sound wave of frequency $f_{0}=\frac{1}{2\times 0.01}=50$ Hz will be combed out in the center of the room. The center of the room is a node with regard to a frequency of 50 Hz.

For the second harmonic, 100 Hz, the nodes are 2.5 feet from the wall. The distance the sound travels from a point 2.5 feet from the wall to the wall and back to that same point is 2.5 + 2.5 = 5 feet, yielding a delay of $t=\frac{5\,ft}{1000\,\frac{ft}{s}}=0.005\,s$. This is half the period of the 100 Hz wave, meaning a frequency of 100 Hz will be combed out at those points.
However, in the center of the room, we still have a delay of $t=\frac{10\,ft}{1000\,\frac{ft}{s}}=0.01\,s$, which is the full period of the 100 Hz wave, meaning the 100 Hz wave gets amplified at the center of the room. The other harmonic frequencies can be explained similarly.

# 4.4 References

In addition to references cited in previous chapters:

Davis, Don, and Eugene Patronis. Sound System Engineering. 3rd ed. Burlington, MA: Focal Press/Elsevier, 2006.

Everest, F. Alton, and Ken C. Pohlmann. Master Handbook of Acoustics. 5th ed. New York: McGraw-Hill, 2009.

Fletcher, H., and W. A. Munson. 1933. "Loudness, Its Definition, Measurement, and Calculation." Journal of the Acoustical Society of America 5: 82-108.

Levitin, Daniel J. This Is Your Brain on Music: The Science of a Human Obsession. New York: Plume/Penguin, 2007.

McCarthy, Bob. Sound Systems: Design and Optimization. 2nd ed. Burlington, MA: Focal Press, 2009.

Pohlmann, Ken C. Principles of Digital Audio. 5th ed. New York: McGraw-Hill, 2005.

Robinson, D. W., and R. S. Dadson. 1956. "A Re-Determination of the Equal-Loudness Relations for Pure Tones." British Journal of Applied Physics 7: 166-181.

Thompson, Daniel M. Understanding Audio. Boston, MA: Berklee Press, 2005.

Tobias, J. V., ed. Foundations of Modern Auditory Theory. Vol. 1. New York: Academic Press, 1970.
https://tex.stackexchange.com/questions/87134/what-is-the-advantage-of-using-f-prime-instead-of-f/87140
# What is the advantage of using $f^\prime$ instead of $f'$?

Although the two outputs look quite similar, what is the advantage of using $f^\prime$ instead of $f'$? By the way, here is my code:

```latex
\documentclass{article}
\begin{document}
$f^\prime (x)=y$
$f' (x)=y$
\end{document}
```

• There's no advantage in using $f^\prime$; it's just more awkward to type than $f'$ and the result is exactly the same. Dec 15, 2012 at 17:14
• Just don't do f^'. Jul 7, 2020 at 0:51

TL;DR: ' is a shorthand for ^{\prime}.

' is defined in latex.ltx as an active math character:

```latex
\def\active@math@prime{^\bgroup\prim@s}
{\catcode`\'=\active \global\let'\active@math@prime}
\def\prim@s{%
  \prime\futurelet\@let@token\pr@m@s}
\def\pr@m@s{%
  \ifx'\@let@token
    \expandafter\pr@@@s
  \else
    \ifx^\@let@token
      \expandafter\expandafter\expandafter\pr@@@t
    \else
      \egroup
    \fi
  \fi}
\def\pr@@@s#1{\prim@s}
\def\pr@@@t#1#2{#2\egroup}
```

The active ' looks for following ' characters and puts them together as a superscript, so a''' becomes a^{\prime\prime\prime}. Thus using ' makes the input easier to write.

Sometimes you may want to pass LaTeX code as an argument to another program. In that case the code is typically wrapped in quotes. Using a quote to mean a prime will confuse the second program. For example, to typeset TeX in graphical output of MATLAB one may use something like

```matlab
str = '$$F^\prime$$'
text(0,0,str,'Interpreter','latex')
```

to print $F'$ at location $(0,0)$. Using $F'$ in the code however becomes problematic.

• Use double quotes or escapes for the string? Jan 26, 2015 at 10:21
• @MaxNoe Double quotes don't work in MATLAB. Escaping a quote inside a string is done by doubling it. So you'd have to write things like 'The symbol f'' is used to represent the first derivative of a function', which looks wrong at a first glance. All things considered, f^\prime doesn't look so bad. Feb 9, 2015 at 22:18

TL;DR: You can use \prime with additional superscripts.

There is certainly one advantage to using \prime under a particular situation.
Suppose you have a map \pi which necessitates the use of another map \pi', which at first seems to be appropriately named. That is, until you have to pull back something with respect to \pi'. Now you can either write (\pi')^* or \pi^{\prime,*}. The latter looks, arguably, a little better. But both look horrible.

• \pi'^{,*} is perfectly equivalent to \pi^{\prime,*} May 16, 2017 at 15:11
• @egreg \pi'^{,*} fails with the error "double superscript"; \pi^{\prime,*} doesn't (using scrartcl and amsmath). So I prefer the latter … ;-) – CL. Feb 6, 2019 at 11:32
• @CL. Works for me with a minimal document. Some more details would be necessary. Feb 6, 2019 at 12:08
• @egreg Figured it out. It's the breqn package that breaks it. MWE. – CL. Feb 6, 2019 at 13:10
• @CL. Besides equations, breqn breaks so many things… Feb 6, 2019 at 13:27

I give a practical difference for Emacs users. If you write $x^{\prime\prime}$ it is all right, but if you write $x''$ you completely mess up the AUCTeX syntax coloring.

• Which version of AUCTeX? I have no issues with $x''$. Jan 10, 2019 at 12:21
• "12.1.1". I am using the default LaTeX configuration of the latest release version of Spacemacs, so I guess many other people share my LaTeX setting. Jan 10, 2019 at 12:25
• Interesting, I still use 11.90. Just had a PhD student come by yesterday who also had problems with AUCTeX 12.1.1, which went away when he downgraded (he installed the Emacs + AUCTeX for Windows there is on CTAN, instead). Seems like I'd better stay off 12.1 for a while. Perhaps you should mention this on the AUCTeX mailing list so that they are aware of the issue. Jan 10, 2019 at 13:44
• I reported the issue Jan 10, 2019 at 14:11

In fact, there's a slight difference when you have two superscripts. For instance:

```latex
{\bar{e}_k}^{\prime\dag}
{\bar{e}_k}^{'\dag}
```

The first one fits better.

• Sure, but that's related to ' being the same as ^{\prime}, so if you have foo^{'\dag} your prime symbol is superscripted twice.
{\bar{e}_k}'^{\dag} will give identical output as far as my eye can tell, but I agree your input is more elegant. Also, fiddling with my input led me to a double superscript error, which is frankly what I expected to begin with, so using foo^{\prime\dag} is definitely a nice idea here, I think. Jul 19, 2016 at 7:32
• You should type \bar{e}_k'^{\dag} (the braces around \bar{e}_k do nothing). Jul 19, 2016 at 13:54

Although ' is a shorthand for ^\prime as noted in answers above, this can get in your way when dealing with multiple superscripts in combination with subscripts:

```latex
\pi_{s+r}^{\ast\prime} = \pi^{\ast\prime}_{s+r} \ne \pi^{\ast}'_{s+r} \ne \pi_{s+r}^{\ast}'
```
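To see the main points from the answers side by side, here is a minimal test document (hypothetical, assembled for illustration) that should compile with plain pdflatex:

```latex
\documentclass{article}
\begin{document}
% ' is an active math character expanding to ^{\prime},
% so these two produce identical output:
$f'(x) = f^\prime(x)$

% Multiple primes collapse into one superscript group:
$a''' = a^{\prime\prime\prime}$

% A superscript following ' is absorbed into the same group,
% so this does NOT raise a "double superscript" error:
$\pi'^{\ast} = \pi^{\prime\ast}$
\end{document}
```

Commenting lines in and out of a document like this is a quick way to test which prime/superscript combinations your package set (e.g. breqn, as noted above) actually accepts.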